> Ex-Microsoft software developer Igor Ostrovsky believes that soon, there won’t be a developer who doesn’t use AI in their workflows.
I'm really curious about this. I honestly don't see a case where I would use a coding assistant, and everyone I've spoken to who does use one isn't a strong coder (they would characterize themselves this way, and I certainly agree with their assessment).
I'd love to hear from strong coders — people whom others normally go to when they have a problem with some code (whether debugging or designing something new) who now regularly use AI coding assistants. What do you find useful? In what way does it improve your workflow? Are there tasks where you explicitly avoid them? How often do you find bugs in the generated code (and a corollary, how frequently has a bug slipped in that was only caught later)?
I was skeptical at first - tried the copilot beta, got annoyed by it, and quickly turned it off. Later I tried it again and haven't looked back.
For the most part it's not that I'm ceding the thinking to the machine; more often it suggests what I was going to type anyway, which if nothing else saves typing. It's especially helpful if I'm doing something repetitive.
Beyond that, it can save some cognitive load by autocompleting a block of code that wouldn't necessarily have been very difficult, but that I would've had to stop and think about. E.g. an API I'm not used to, or a nested loop or something.
The other big advantage that comes to mind is when I'm doing something I'm not familiar with, e.g. I recently started using Rust, and copilot has been a major help when I vaguely know what I _want_ to do but don't quite know how to get there on my own. I'm experienced enough to evaluate whether the suggested code does what I want. If there's anything in the output I don't understand, I can look it up.
> Are there tasks where you explicitly avoid them?
Not necessarily that I can think of, but after having copilot on for a little while it's gotten easier to tune it out when I'm not interested in its suggestions or when they're not helpful.
> How often do you find bugs in the generated code (and a corollary, how frequently has a bug slipped in that was only caught later)?
90% of the time I'm only accepting a single line of code at a time, so it's less a question of "is there a bug here" and more "does this do what I want or not?" Like, if I'm writing an email and gmail suggests an end to my sentence, I'm not worried about whether there's a bug in the sentence. If it's not what I want to say, I either don't take the suggestion, or I take the suggestion and modify it.
If I do accept larger chunks of suggested code, I review it thoroughly to the point where it's no longer "the AI's" code -- for all intents and purposes it's mine now. Like I said before, most of the time it's basically the code I was going to write anyway, I just got there faster.
That’s what I thought too in the beginning, but that’s because the demos were always about writing a comment telling Copilot something like “// write a function to sort this array”. In reality it’s just a better autocomplete for me: I write the regular code like “func Delete(“ and it completes the parameters and the boring CRUD code for that function.
At the current time it’s not that magical for me, more a small speed-up from smarter autocomplete.
In future iterations, when it knows your whole code base, everything you see on the screen, and your microservices and how they are connected, and can manipulate multiple files at the same time, that’s when it would become more interesting to me.
I’m a strong coder. AI assistants can effectively be considered a really smart autocomplete. It simply saves time to insert the characters I was going to type anyway. If it suggests something other than what I want, I simply don’t press tab.
Mostly copilot is a nice autocomplete. Sometimes it writes what I would write and then I don’t have to type it out.
Sometimes it helps when I’m writing code in a domain I don’t know. It can pull in a library function I wasn’t aware of for example.
It isn’t always right and sometimes hallucinates, but usually static analysis notifies me when the library function doesn’t exist or the signature is wrong, and then I have to go back and do the work I was going to have to do anyways.
The key I think is that most software isn’t writing unique code. We might write little nuggets of unique code and then glue it together with a ton of boilerplate. And LLMs are great at boilerplate.
I think I’m a strong programmer, but my learning style seems to be inquisitive and it’s a perfect match for GPT cos I can ask a million questions. So for me, GPT has been like working with someone much more knowledgeable about my platform (I haven’t done a lot of modern front end stuff before), but seemingly not as strong on architecture or domain.
This combination has meant that I’ve done in about 3 days what I thought would take me 2 weeks. And I’ve enjoyed the shit out of it too.
> What do you find useful? In what way does it improve your workflow?
It's a better auto complete. When I'm writing markup it has surprisingly good suggestions for labels/placeholder/whatever.
> Are there tasks where you explicitly avoid them?
I don't use it to fix bugs/errors. Occasionally I try it and see what it comes up with; it has never once successfully fixed anything in my entire history of using it.
> How often do you find bugs in the generated code (and a corollary, how frequently has a bug slipped in that was only caught later)?
Since it's just typing what I'd expect to type myself, I probably have the same bug rate as before. I haven't seen it insert off-by-one errors (yet). That's probably the most likely one I can imagine missing.