3. evaluating whether the end result works as intended.
I have some news for the author about what 80% of a programmer's job consists of already today.
There is also issue #4, which "idea guy" types frequently gloss over: if things do not work as intended, figure out why and work out a way to fix the root cause. It's not impossible that an AI can get good at that, but I haven't seen it so far, and it definitely doesn't fit into the standard "write down what I want, press button, get results" AI workflow.
Generally, I feel this ignores the inherent value that there is in having an actual understanding of a system.
The current implementations of LLMs focus on providing good answers, but the real effort of modern-day programming is asking good questions: of the system, of the data, and of the people who supposedly understand the requirements.
LLMs might become good at this, but nobody seems to be doing much of it commercially just yet.
You make a good point, and those kinds of senior-engineer skills may be the least affected. My post does not argue against that. It argues that writing code manually may quickly become obsolete.
What I am trying to say is that people who see the output of their work as "code" will be replaced, just as human computers were. I believe even debugging will be increasingly aided by AI. I do not believe that AI will eliminate the need for system understanding, just to be clear.
Then again, you might argue that writing lines of code and manually debugging issues is exactly what builds your understanding of the system. I agree with that too; I suppose the challenge will be maintaining deep system knowledge as more tasks become automated.