People who enjoy mentoring juniors are generally satisfied with the ROI of iterating through LLM code generation.
People who find juniors sort-of-frustrating-but-part-of-the-job-sometimes have a higher denominator on that ROI calc, and ask themselves why they would keep banging their head against the LLM wall.
The first group is probably wiser and more efficient at multiplying their energies, in the long term.
I find myself in the second group. I run tests every couple months, but I'm still waiting for the models to have a higher R or a lower I. Any day now.
It's the complete opposite for me. I enjoy the process of mentoring juniors and am usually sought out for a lot of little issues like fixing git workflows or questions about how a process works. Working with an LLM is absolutely not what I want to do, because I'd much rather mentees actually learn and ask me fewer and fewer questions. My experience with AI so far is that it never learns at all, and it has never felt to me like a human. It pretends to be contrite and apologises for mistakes, but it makes those mistakes anyway. It's the worst kind of junior: one who repeats the same mistake multiple times and doesn't bother committing it to memory.
You're right, I'm probably lumping the first group over-broadly, since I understand them less well.
It would make sense for there to be subgroups within the first group. It sounds like you prioritize results (mentee growth, possibly toward long-term contribution), and it's also likely that some people just enjoy the process of mentoring.
I'm a cynical person, and IME the former are some of the most annoying and usually the worst engineers I've met.
Most people who "mentor" other people (that is, make it a point of pride and a distinctive part of their identity) are usually the last people you want to take advice from.
Actual mentors are the latter group, who juniors seek out or look up to.
In other words, the former group is akin to those people on YouTube who try to sell shitty courses.