At least the Rust compiler gives pretty good advice on what is going wrong. And for complete beginners, agentic AI can soften the pain a lot if used correctly. By "used correctly" I mean the following workflow:
1) Design in correspondence with AI. Let it criticise your ideas, suggest tools/libraries/techniques, and explain concepts and syntax to you. Stay aware that these models are sycophantic yes-machines.
2) Implement yourself.
3) Debug in collaboration with AI. If you ask a question like "I'm getting [error], what are the most likely reasons for this happening?", you can save a lot of time finding the issue (see the sketch after this list). Just make sure to also research independently why it is happening and how to solve it.
4) Let AI criticise your final result and let it offer suggestions on what to improve. Judge these critically yourself.
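To make step 3 concrete, here is a minimal sketch of the kind of snippet and error a beginner might paste into such a question. The names and strings are invented for illustration; the error itself (E0382, "borrow of moved value") is one of the most common first encounters with ownership:

```rust
fn make_greeting(name: String) -> String {
    format!("Hello, {name}!")
}

fn main() {
    let name = String::from("Ferris");
    let greeting = make_greeting(name); // `name` is moved into the function here
    println!("{name}");                 // error[E0382]: borrow of moved value: `name`
    println!("{greeting}");
}
```

Pasting the full compiler output together with the snippet usually gets a correct diagnosis right away (pass `&name` and take `&str`, or clone if you really need ownership), but read the E0382 explanation yourself afterwards so the lesson sticks.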
There is some worth in spending hours trying to fix a bug you don't understand: it builds resilience, helps you get familiar with a lot of language topics, and you probably won't make the same mistake again. But the above approach is a pretty good compromise, letting AI help where it excels while keeping enough control that you still actually learn something yourself.
I believe that Rust is the language benefiting the most from agentic AI, because the compiler is such a strong gatekeeper and the documentation of almost all aspects of the language is comprehensive and clear. The biggest pain points of Rust are also reduced by AI: the front-loaded learning curve is softened, refactoring is something gen AI is actually decent at, and long compile times can be spent productively planning out the next steps.
> I believe that Rust is the language benefiting the most from agentic AI
Except in my experience, ChatGPT and Claude both struggle to write Rust code that compiles correctly. ChatGPT is pretty good at complex tasks in TypeScript like "Write a simple snake game using (web framework x). It should have features X and Y". It can be surprisingly good at complex problems like that.
If you try the same in Rust, it often fails. I've also had plenty of situations where I've hit some complex borrowing error in Rust code, and ChatGPT just can't figure it out. It goes in circles: "Oh, I see the problem. Sure, this should fix it..." except the "fixed" code fails in exactly the same way.
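For what it's worth, here is a hedged example of the kind of borrowing error that tends to trigger this loop (the struct and method names are invented for illustration): a borrow returned from one branch is treated as live for the rest of the function, so a later mutation is rejected even though no overlapping use actually happens at runtime.

```rust
use std::collections::HashMap;

struct Cache {
    map: HashMap<String, String>,
}

impl Cache {
    // The borrow returned inside the `if let` is considered live until the
    // end of the function, so the later `insert` is rejected with
    // error[E0502]: cannot borrow `self.map` as mutable because it is also
    // borrowed as immutable.
    fn get_or_insert(&mut self, key: &str) -> &String {
        if let Some(v) = self.map.get(key) {
            return v;
        }
        self.map.insert(key.to_string(), String::new());
        self.map.get(key).unwrap()
    }
}
```

The "fixes" a model offers here often just shuffle the same borrows around and fail identically; the rewrites that actually compile check `contains_key` first or use the `entry` API instead.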
I'm not sure why. Maybe there's just not enough Rust code in the training set for ChatGPT to figure it out. But Rust is definitely a weakness of the current generation of models.