1. From “Writing Code” to “Solving Problems”: If AI-generated code becomes widespread, the role of programmers may shift from simply “writing code” to “solving complex problems.” Programmers will need to focus on understanding and addressing problems from a high-level perspective, applying technical solutions to real-world challenges.
2. AI and Creativity: While AI can generate code, it still lacks creativity and deep understanding. Programmers will play a crucial role in designing and guiding the architecture of complex systems, driving innovation, and developing unique products that meet market and user needs.
3. Quality and Security Gatekeepers: AI-generated code might contain bugs or fail to follow best practices. Programmers will be responsible for reviewing, optimizing, and ensuring the quality, security, and maintainability of code to prevent technical debt from piling up.
4. Trainers and Supervisors of AI: Although AI can generate code, it relies on training data and model optimization. Programmers will take on the role of overseeing AI, improving models, and ensuring AI-generated code meets quality standards and aligns with user requirements.
5. Connecting with User Needs: Programmers are not just tech experts, but also the bridge between technology and user needs. Even with AI automating code generation, programmers will remain essential in understanding and translating customer or business needs into effective technological solutions.
In a world where AI becomes more intelligent, how will the role of programmers evolve? How will they continue to hold irreplaceable value in the tech industry?
But assuming we find a way to make gold grow on trees, the value of gold will also drop considerably.
Now if we look at the current situation, AI does not replace developers. It allows non-developers to generate code that looks like it works, which may be enough in some situations and not in others. It most definitely isn't capable of building a big software project that must work. There is a big difference between a small script that sorts my photos and a big program that professionals use to accomplish real work. If Excel starts making calculation errors, suddenly it's not a viable tool for many fields.
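To make the contrast concrete, here is a minimal sketch of the kind of throwaway "sort my photos" script an LLM can already generate reliably (the function name and the choice of sorting by modification year are illustrative assumptions, not anything from the thread). The point is that a bug here costs one person a few minutes, whereas a bug in professional software costs real money:

```python
# Hypothetical throwaway script: move photos into per-year folders.
# Low stakes: if it mishandles a file, one person notices and reruns it.
import shutil
from datetime import datetime
from pathlib import Path


def sort_photos_by_year(src: Path, dst: Path) -> dict[str, list[str]]:
    """Move image files from src into dst/<year>/ subfolders.

    Returns a mapping of year -> list of moved filenames.
    """
    moved: dict[str, list[str]] = {}
    for photo in sorted(src.iterdir()):
        # Only handle common image extensions; skip everything else.
        if photo.suffix.lower() not in {".jpg", ".jpeg", ".png"}:
            continue
        year = str(datetime.fromtimestamp(photo.stat().st_mtime).year)
        target_dir = dst / year
        target_dir.mkdir(parents=True, exist_ok=True)
        shutil.move(str(photo), str(target_dir / photo.name))
        moved.setdefault(year, []).append(photo.name)
    return moved
```

A spreadsheet engine or billing system offers no equivalent "just rerun it" escape hatch, which is why "looks like it works" is not the bar there.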
AI is just a tool, like syntax highlighting or linting. Those did not replace developers.
Reason being: memetic monoculture.
Everyone has blind spots, and the same goes for LLMs: no matter how much you use a particular model (or ask a particular human), when you hit a blind spot they can't reflect on it and actually resolve it. Even if you ask them to, they'll produce a new "solution" with the same flaw, whatever that flaw happens to be.
Code review adds an extra pair of eyes to catch such things (or at least it can; I've had coworkers who were write-only-read-never when it came to any criticism of their code or architectural choices).
In chess, this is "being a centaur": a human + AI hybrid working better than either alone. People have been arguing that this stopped being true around 2013, but even that was 16 years after Deep Blue finally beat Kasparov, and a delay of that length is economically useful for those of us who want to keep getting paid.
(Was the Ask HN itself AI generated? The trouble is, humans copying each other is also a way to get a memetic monoculture, and we also copy LLMs…)