Hacker News | beej71's comments

>At higher levels you are responsible for taking your $n years of experience and turning more ambiguous, more impactful, larger-scoped projects into working implementations that are done on time, on budget, and meet requirements.

Is this not a job for LLMs, though?


LLMs are good at turning well-defined requirements into code.

But even now it struggles on a project to make the connection: "it is creating Lambda code to do $x, so it needs to change the corresponding IAM role in CloudFormation to grant the permissions it needs."


The LLMs are fantastic at writing Terraform when you tell them what to do, which is a huge timesaver, but good heavens are they terrible at actually knowing what pieces need to be wired up for anything but the simplest cases. Job security for now, I guess?

I was able to one-shot CDK, Terraform, and CloudFormation on my last three projects respectively (different clients, different IaC). But I was really detailed about everything I needed, and I fed ChatGPT the diagram.

I guess I could be more detailed in the prompt/md files: every time it changes Lambda code, check the permissions in the corresponding IaC and check whether a new VPC endpoint is needed.
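To make the Lambda-to-IAM pairing concrete, here is a minimal CloudFormation sketch (all logical IDs, the bucket name, and the `s3:GetObject` action are hypothetical, chosen only to illustrate the wiring the comments describe):

```yaml
# Hypothetical sketch: if the generated Lambda handler starts calling
# s3:GetObject, the execution role below must gain a matching statement.
# Keeping these two resources in sync is the step the LLM tends to miss.
Resources:
  MyFunctionRole:                       # hypothetical logical ID
    Type: AWS::IAM::Role
    Properties:
      AssumeRolePolicyDocument:
        Version: "2012-10-17"
        Statement:
          - Effect: Allow
            Principal: { Service: lambda.amazonaws.com }
            Action: sts:AssumeRole
      Policies:
        - PolicyName: FunctionPermissions
          PolicyDocument:
            Version: "2012-10-17"
            Statement:
              - Effect: Allow
                Action: s3:GetObject    # must track what the code actually does
                Resource: arn:aws:s3:::my-bucket/*   # hypothetical bucket
  MyFunction:
    Type: AWS::Lambda::Function
    Properties:
      Runtime: python3.12
      Handler: index.handler
      Role: !GetAtt MyFunctionRole.Arn
      Code: { ZipFile: "..." }
```

The point of the sketch: the permission change lives in a different resource than the code change, so a prompt that only mentions the handler gives the model no reason to touch the role.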


I also predict an explosion of work for qualified devs. And I predict there will be an undersupply of them.

I think you can invert GP's point #2 for that one.

Here's a start: end immunity and masks for ICE. This will naturally improve things.

But that's not all. You can also naturalize people who are being productive citizens.


ELI5 how it makes my life better. Right now the blowback is making my life worse.



Do you have the statistics for the crime rates or did you again completely make up crap without knowing anything? Show the numbers or get out.

It's been a long time, and our memory only goes so far back. I'm not even that old, but the time between WWII and my birth is waaaaay less than my current age. Jimmy Doolittle was still hosting Christmas specials on TV when I was a kid. Nobody knows who TF that even is now. I doubt half of America has even heard of the Third Reich. Sure, they know that "Nazi" is some kind of insult, but the rest is history forgotten. The last educational film on the matter was Indiana Jones III.

Those of us who remember history will continue to fight, and our numbers aren't small. Maybe one day we can begin to repair the enormous national and global damage that has occurred.


Yes. Depending on the country and situation, these things can take a long time. We have ours in the works already and it's looking like 1-2 years to go through.

Ideally they get snatched up by a competitor with more vision. You know who processes more volume than one guy with AI? Six guys with AI. I think it'll take the market a minute to relearn this.

The only thing that makes me happy about that is the fact that our entire economy just might not implode. But I think we'd be better off without LLMs at all. Oh well. Pandora's box is open and here we are.


I define it both ways, depending on context.

If I won the lottery, I'd never work again and I'd work every day until I died.


I'm an experienced dev, out of the industry now. I'm trying to level up in Rust, and here's what I do.

I bust my ass getting software written by hand using The Book and the API reference. Then I paste it into an LLM and ask it to review it. I steal the bits I like. The struggle is where we learn, after all.

I also bounce ideas off LLMs. I tell it of a few approaches I was considering and ask it to compare and contrast.

And I ask it to teach me about concepts. I tell it what my conception is, and ask it to help me better understand it. I had a big back and forth about Rust's autoderef this morning. Very informative.
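For readers curious what that autoderef conversation covers, here is a minimal, self-contained sketch (my own example, not from the exchange): the method-call operator inserts as many dereferences as needed to find a matching method.

```rust
fn main() {
    let s = String::from("hello");
    let r: &&String = &&s;

    // `len` is defined on `str`; this call works because method lookup
    // auto-derefs &&String -> &String -> String -> str.
    assert_eq!(r.len(), 5);

    // The same mechanism applies through smart pointers: Box<Vec<i32>>
    // derefs to Vec<i32>, so Vec's methods are callable directly.
    let b = Box::new(vec![1, 2, 3]);
    assert_eq!(b.len(), 3);

    println!("autoderef ok");
}
```

This is exactly the kind of small, confirmable demo an LLM is good at producing when you ask it to explain a concept rather than to write your code.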

I very, very rarely ask it to code things outright, preferring to have it send me to the API docs. Then I ask it more questions if I'm confused.

When learning, I use LLMs a lot. I just try to do it to maximize my knowledge gain instead of maximizing output.

I'm of the belief that LLMs are multipliers of skill. If your base skill is zero, well, the product isn't great. But if you possess skill level 100, then you can really cook.

Put more bluntly, a person with excellent base coding skills and great LLM skills will always significantly outperform someone with low base coding skills and great LLM skills.

If I were writing code for a living, I'd have it generate code for me like crazy. But I'd direct it architecturally and I'd use my skills to verify correctness. But when learning something, I think it's better to use it differently.

IMHO. :)


Imagine a world where npm and all the other library repositories are 99% AI slop. And the posts about which library to choose are 99% AI-generated.

It'll be so hard to find anything in the chaff that you might get your old job back as a dev. :)


We'll have AIs doing our chaff-sifting for us, too, right?

padme.jpg

