spicyusername's comments | Hacker News

    I can confirm that they are completely useless for real programming
And I can confirm, with similar years of experience, that they are not useless.

Absolutely incredible tools that have saved hours and hours helping me understand large codebases, brainstorm features, and point out gaps in my implementation or understanding.

I think the main disconnect in the discourse is that there are those pretending they can reliably just write all the software, when anyone using them regularly can clearly see they cannot.

But that doesn't mean they aren't extremely valuable tools in an engineer's arsenal.


Same. I started coding before hitting puberty, and I'm well into my 30s.

If you know the problem space well, you can let LLMs (I use Claude and ChatGPT) flesh it out.


> I use Claude and ChatGPT

Both for code? For me, it's Claude only for code. ChatGPT is for general questions.


Yes, I use them in tandem. Generally Claude for coding and ChatGPT when I run out of tokens in Claude.

I also use ChatGPT to summarise my project. I ask it to generate Markdown and PDFs explaining the core functionality.


I feel like I have to be strategic with my use of Claude Code: things like frequently clearing out sessions to minimize context, writing the plan out to a file so that I can review it more effectively myself and even edit it, breaking problems down into consumable chunks, attacking those chunks in separate sessions, etc. It's a lot of prep work I have to do to make the tool thrive. That doesn't mean it's useless, though.

It probably started before, but the covid era really feels like it was a turning point after which everyone I see, including, it seems, Rich Hickey, is drowning in news headlines and social media takes.

Are things as bad as they seem? Or are we just talking about everything to death, making everything feel so immediate? Hard to say.

Every time I read any kind of history book about any era, I'm always struck by how absolutely horrible any particular detail was.

Nearly every facet of life has always had the qualities it has today. Things are changing, old systems are giving way to new systems, people are being displaced, politicians are acting corruptly, etc.

I can't help but feel like AI is just another thing we're using as an excuse to feel despair, almost like we're forgetting how to feel anything else.


These are the perfect size projects vibe coding is currently good for.

At some point you hit a project size that is too large or has too many interdependencies; you have to be very careful about how you manage the context, and you should expect the LLM to start generating too much code or subtle bugs.

Once you hit that size, in my opinion, it's usually best to drop back to brainstorming mode, only use the LLM to help you with the design, and either write the code yourself, or write the skeleton of the code yourself and have the LLM fill it in.
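
The skeleton approach might look like this sketch (the function names here are hypothetical; the point is that the signatures and docstrings are yours, and the LLM is only asked to fill in the bodies):

```python
# A hypothetical skeleton handed to the LLM: the human fixes the module
# boundaries, signatures, and docstrings; only the bodies are delegated.
def parse_config(path: str) -> dict:
    """Read the config file at `path` and return its settings as a dict."""
    raise NotImplementedError  # LLM fills this in

def validate(config: dict) -> list[str]:
    """Return a list of human-readable validation errors (empty if valid)."""
    raise NotImplementedError  # LLM fills this in
```

The constraint matters: with fixed signatures, the model can't invent new abstractions, only flesh out the ones you chose.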

With too much code, LLMs just don't yet seem able to only add a few more lines of code, make use of existing code, or be clever and replace a few lines of code with a few more lines of code. They will nearly always add a bunch of new abstractions.


I agree with you as far as project size for vibe-coding goes - as in often not even looking at the generated code.

But I have no issues with using Claude Code to write code in larger projects, including adapting to existing patterns; it's just not vibe coding. I architect the modules, and I know more or less exactly what I want the end result to be. I review all code in detail to make sure it's precisely what I want. You just have to write good instructions and manage the context well (give it sample code to reference, have agent.md files for guidance, etc.).


> I know more or less exactly what I want the end result to be

This is key.

And this is also why AI doesn't work that well for me. I don't know yet how I want it to work. Part of the work I do is discovering this, so it can be defined.


I've found this to be the case as well. My typical workflow is:

1. Have the ai come up with an implementation plan based on my requirements

2. Iterate on the implementation plan / tweak as needed, and write it to a markdown file

3. Have it implement the above plan based on the markdown file.

On projects where we split up the task into well-defined, smaller tickets, this works pretty well. For larger stuff that is less well defined, I do feel like it's less efficient, but to be fair, I am also less efficient when building this stuff myself. For humans and robots alike, smaller, well-defined tickets are better for both development and code review.
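
The three steps above can be sketched as a script; `ask_llm` here is a canned stand-in for whatever model client you actually use, not a real API:

```python
from pathlib import Path

def ask_llm(prompt: str) -> str:
    """Stand-in for a real model call; returns canned text in this sketch."""
    return f"[model output for: {prompt.splitlines()[0]}]"

def plan_then_implement(requirements: str, plan_file: str = "PLAN.md") -> str:
    # 1. Have the model draft an implementation plan from the requirements.
    plan = ask_llm(f"Write an implementation plan for: {requirements}")
    # 2. Persist the plan to markdown -- the human reviews and edits it here.
    Path(plan_file).write_text(plan)
    # 3. Have the model implement against the reviewed plan file.
    return ask_llm(f"Implement this plan:\n{Path(plan_file).read_text()}")
```

The value of step 2 is that the plan file is a durable, editable artifact: you can diff it, trim it, and re-run step 3 against it in a fresh session.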


Yeah, this exactly. And if the AI wanders in confusion during #3, it means the plan isn’t well-defined enough.

There actually is a term for this style of LLM-assisted coding/engineering. Unfortunately it has been pushed aside by the fake influencer & PR term "vibe coding", which conflates coding with unknowledgeable people just jerking the slot machine.

Sounds like so much work just not to write it yourself.

Getting it right definitely takes some time and finesse, but when it works you spend 30 minutes to get 4-24+ hours of code.

And usually that code contains at least one or two insights you would not normally have considered, but that make perfect sense given the situation.


Or you can apply software architecture methods that are designed to help humans with exactly the same type of problems.

Once your codebase exceeds a certain size, it becomes counter-productive to have code that is dependent on the implementation of other modules (tight coupling). In Claude Code terms this means your current architecture is forcing the model to read too many lines of code into its context which is degrading performance.

The solution is the same as it is for humans:

  "Program to an interface, not an implementation." --Design Patterns: Elements of Reusable Object-Oriented Software (1994)
You have to carefully draw boundaries around the distinct parts of your application and create simple interfaces for them that only expose the parts that other modules in your application need to use. Separate each interface definition into its own file and instruct Claude (or your human coworker) to only use the interface unless they're actually working on the internals of that module.
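
As a rough sketch of that in Python (all names here are made up), the interface lives in its own small file and consumers import only it:

```python
from abc import ABC, abstractmethod

# storage_interface.py -- the only file other modules (or the model) must read.
class Storage(ABC):
    @abstractmethod
    def save(self, key: str, value: str) -> None: ...

    @abstractmethod
    def load(self, key: str) -> str: ...

# storage_memory.py -- internals, read only when working on this module.
class InMemoryStorage(Storage):
    def __init__(self) -> None:
        self._data: dict[str, str] = {}

    def save(self, key: str, value: str) -> None:
        self._data[key] = value

    def load(self, key: str) -> str:
        return self._data[key]

# app.py -- programs to the interface, never the implementation.
def remember(store: Storage, key: str, value: str) -> str:
    store.save(key, value)
    return store.load(key)
```

Only the `Storage` definition needs to sit in context while working on `app.py`, and the implementation can be swapped (say, for an SQLite-backed one) without touching its consumers.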

Suddenly, you've freed up large chunks of context and Claude is now able to continue making progress.

Of course, the project could continue to grow and the relatively small interface declarations could become too many to fit in context. At that point it would be worthwhile taking a look at the application to see if larger chunks of it could be separated from the rest. Managing the number and breadth of changes that Claude is tasked with making would also help since it's unlikely that every job requires touching dozens of different parts of the application so project management skills can get you even further.


Is vibe architecting a thing too, or is architecting to make your vibe coder work better something that the human needs to know?

Haha, actually yes. You can prompt them to be their own architect but I do find it works better when you help. You could probably get pretty far by prompting them to review their own code and suggest refactoring plans. That's essentially what Plan Mode is for in Claude Code.

Or should we just call them microservices instead?

Engineering code now isn't binary; it's a spectrum from vibe-coding, through copilot-style (design and coding assistance), to your help-with-design-only approach, to no AI.

The capabilities now are strong enough to mix and match almost fully in the co-pilot range on substantial projects and repos.


  These are the perfect size projects vibe coding is currently good for.
So far... it's going to keep getting better to the point where all software is written this way.

Sure, but that's basically the same as saying that we'll have human-equivalent AI one day (let's not call it AGI, since that means something different to everyone that uses it), and then everything that humans can do could then be done by AI (whether or not it will be, is another question).

So, yes, ONE DAY, AI will be doing all sorts of things (from POTUS and CEO on down), once it is capable of on-the-job learning and picking up new skills, and everything else that isn't just language model + agent + RAG. In the meantime, the core competence of an LLM is blinkers-on (context-on) executing - coding - according to tasks (part of some plan) assigned to it by a human who, just like a lead assigning tasks to human team members, is aware of what it can and cannot do, and is capable of overseeing the project.


It seems like it's approaching a horizontal asymptote to me, or is at the very least concave down. You might be describing a state 50 years from now.


Improved benchmarks are undeniably an improvement, but the bottleneck isn't the models anymore; it's the context engineering necessary to harness them. The more time and effort we put into our benchmarking systems, the better we're able to differentiate between models. But when you take an allegedly smart one and try to do something real with it, it behaves like a dumb one again, because you haven't put as much work into the harness for the actual task as you did into the benchmark suite.

The knowledge necessary to do real work with these things is still mostly locked up in the humans that have traditionally done that work.


The systems around the LLM will get built out. But do you think it will take 50 years to build out like you said before?

I’m thinking 5 years at most.

The key is that the LLMs get smart enough.


The more I think about it, the less likely I think it is that "all code written via LLM" will happen at all.

I use LLMs to generate systems that interpret code that I use to express my wishes, but I don't think it would be desirable to express those wishes in natural language all of the time.


Sonnet 3.7 (the first model truly capable of any sort of reasonable agentic coding at all) was released 10 months ago, and Opus 4.5 exists today.

To add to this: the tooling or `harness` around the models has vastly improved as well. You can get far better results with older or smaller models today than you could 10 months ago.

The harnesses are where most progress is made at the moment. There are some definite differences in the major models as to what kind of code they prefer, but I feel the harnesses make the biggest difference.

Copilot + Sonnet is a complete idiot at times, while Claude Code + Sonnet is pretty good.


Successfully building an IKEA shelf doesn’t make you a carpenter.

no, but I have furniture. it's important to keep sight of the end goal, unless the carpentry is purely a hobby.

What's the job title and education requirements for designing the supply chain and engineering of the ikea furniture?

I don't know, I don't work at IKEA. Sorry.

Air traffic control software is not going to be vibe-coded anytime soon and neither is the firmware controlling the plane.

Sure it will. But they will be tested far more stringently by both human experts and the smartest LLM models.

I will be perfectly honest. Given what I am seeing, I fully expect someone to actually try just that.

Considering how much work at Boeing is given to consultants and other third-party contractors (e.g. the famous MCAS), some piece of work, after moving through the bowels of multiple subcontractors, will end up in the hands of an under-qualified developer who will ask his favourite slop machine to generate code whose purpose he doesn't exactly understand.

I've got a bridge to sell you

Reminds me of the Ken Miles 7000 rpm quote. At what size do you think this happens? Whatever is the most relevant metric of size in this context.

I think this limitation goes away as long as your code is modular. If the AI has to read the entire code base each time, sure; but if everything is designed well, then it only needs to deal with a limited set of code each time, and it excels at that.

> make use of existing code, or be clever and replace a few lines of code with a few more lines of code

You can be explicit about these things.


Yes. It is called programming.

Using agents is programming. Programming is done with the mind, the tapping of keys isn’t inherent to the process.

Unfortunately IDEs are not yet directly connected to our minds, so there's still that silly little step of encoding your ideas in a way that can be compiled into binary. Playing the broken telephone game with an LLM is not always the most efficient way of explaining things to a computer.

Of course not. It’s a tool.

History and pop culture (and life) are like that.

Richard Feynman is a person well worth remembering, but I'm sure many of his contemporaries that get talked about less were as well.

So it goes.


I can't think of a job that is less automatable.

The entire job is almost entirely human to human tasks: the salesmanship of selling a vision, networking within and without the company, leading the first and second line executives, collaborating with the board, etc.

What are people thinking CEOs do all day? The "work" work is done by their subordinates. Their job is basically nothing but social finesse.


> The entire job is almost entirely human to human tasks: sales, networking, leading, etc.

So, writing emails?

"Hey, ChatGPT. Write a business strategy for our widget company. Then, draft emails to each department with instructions for implementing that strategy."

There, I just saved you $20 million.


People seem to have a poor model of what management and many knowledge workers actually do. Much of it isn't completing tasks, but identifying and creating them.

> Much of it isn't completing tasks, but identifying and creating them.

They failed miserably in the Automotive industry in Europe. The only thing that they identified was: "Shit, the profits are falling, do something"


"ChatGPT, please identify the tasks that a CEO of this company must do."

I get your point but if you think that list of critical functions (or the unlisted "good ol boys" style tasks) boils down to some emails then I think you don't have an appreciation for the work or finesse or charisma required.

> I think you don't have an appreciation for the work or finesse or charisma required.

I think that you don't appreciate that charismatic emails are one of the few things that modern AI can do better than humans.

I wouldn't trust ChatGPT to do my math homework, but I would trust it to write a great op-ed piece.


For some reason the AI prompt "make me 20 million" hasn't been working for me. What am I doing wrong?

Have you got that plan reviewed by your analysts and handed over to implement by your employees? You may be missing those steps...

Automation depends on first getting paid to do something.

We could solve that by replacing all CEOs to remove the issue of finesse and charisma. LLMs can then discuss the actual proposals. (not entirely kidding)

It would actually be nicely self-reinforcing and resistant to a change back, because now it's in the board's interest to use an LLM, which cannot be smooth-talked into bad deals. Charisma becomes the negative signal and excludes more and more people.


Why are there "good ol boys" tasks in the first place? Instead, automate the C-suite with AI, get rid of these backroom dealings and exclusive private networks, and participate in a purer free market based on data. Isn't this what all the tech libertarians who are pushing AI are aiming for anyways? Complete automation of their workforces, free markets, etc etc? Makes more sense to cut the fat from the top first, as it's orders of magnitude larger than the fat on the bottom.

A more fair, less corrupt system/market sounds great! I also think once we solve that tiny problem that the "should ai do ceo jobs" problem is way easier!

What should we do while we wait for the good ol boys networks to dismantle themselves?

On a more serious note, the meritocracy, freedom, data, etc that big tech libertarians talk about seems to mostly be marketing. When we look at actions instead it's just more bog standard price fixing, insider deals, regulatory capture, bribes and other corruption with a little "create a fake government agency and gut the agencies investigating my companies" thrown in to keep things exciting.


> There, I just saved you $20 million.

If it were this easy, you could have done it by now. Have you?


> If it were this easy, you could have done it by now. Have you?

In order to save $20 million with this technique, the first step is to hire a CEO who gets paid $20 million. The second step is to replace the CEO with a bot.

I confess that I have not yet completed the first step.


Have you replaced the executive function in any one of your enterprises with ChatGPT?

I have completely replaced management of every company that I own with ChatGPT.

0 x 0 = 0 I guess?

How have they scaled?

This is literally a caricature of what the average HN engineer thinks a businessperson or CEO does all day; you couldn't write better satire if you tried.

Do you think CEOs have an accurate idea of what engineers do?

Neither side can truly know, that is the nature of a diffuse organization.

That won't stop them from replacing us.

Even if the AI gets infinitely good, the task of guiding it to create software for the use of other humans is called...software engineering. Therefore, SWEs will never go away, because humans do not know what they want, and they never will until they do.

It's mind-boggling. I get riffing on the hyped superiority of CEOs. I've heard inane things said by them. But, being a human being with some experience observing other humans and power structures, I can assuredly say that the tight-knit group of wealthy power-brokers who operate on gut and bullshitting each other (and everyone) will not cede their power to AI, but use it as a tool.

Or maybe the person you're describing is right, and CEOs are just like a psy-rock band with a Macbook trying out some tunes hoping they make it big on Spotify.


I am sympathetic to your point, but reducing a complex social exchange like that down to 'writing emails' is wildly underestimating the problem. In any negotiation, it's essential to have an internal model of the other party. If you can't predict reactions you don't know which actions to take. I am not at all convinced any modern AI would be up to that task. Once one exists that is I think we stop being in charge of our little corner of the galaxy.

Artists, musicians, scientists, lawyers and programmers have all argued that the irreducible complexity of their jobs makes automation by AI impossible and all have been proven wrong to some degree. I see no reason why CEOs should be the exception.

Although I think it's more likely that we're going to enter an era of fully autonomous corporations, and the position of "CEO" will simply no longer exist except as a machine-to-machine protocol.


The one big reason why CEOs exist is trust. Trust from the shareholders that someone at the company is trying to achieve gains for them. Trust from vendors/customers that someone at the company is trying to make a good product. Trust from the employees that someone is trying to bring in the money to the company (even if it doesn't come to them eventually).

And that trust can only be placed in a person, someone innately human, because an AI will make decisions which are holistically good rather than specifically directed towards the above goals. And if some of those goals are in conflict, the CEO will make decisions which benefit the more powerful group, because of an innately uncontrollable reward function, which is not true of AI by design.


> The one big reason why CEOs exist is trust.

This sounds a lot like the specious argument that only humans can create "art", despite copious evidence to the contrary.

You know what builds trust? A history of positive results. If AIs perform well in a certain task, then people will trust them to complete it.

> Trust from vendors/customers that someone at the company is trying to make a good product.

I can assure you that I, as a consumer, have absolutely no trust in any CEO that they are trying to make a good product. Their job is to make money, and making a good product is merely a potential side-effect.


I feel like the people who can't comprehend the difficulties of an AI CEO are people who have never been in business sales or high level strategy and negotiating.

You can't think of a single difference in the nature of the job of artist/musician vs. lawyer vs. business executive?


> I feel like the people who can't comprehend the difficulties of an AI <thing doer> are people who have never <tried to do that thing really well>.

That applies to every call to replace jobs with current-gen AI.

But I can't think of a difference between CEOs and other professions that works out in favor of keeping the CEOs over the rest.


>You can't think of a single difference in the nature of the job of artist/musician vs. lawyer vs. business executive?

I can think of plenty, but none that matter.

As the AI stans say, there is nothing special about being human. What is a "CEO?" Just a closed system of inputs and outputs, stimulus and response, encased in wetware. A physical system that like all physical systems can be automated and will be automated in time.


My assertion is that it's a small club of incredibly powerful people operating in a system of very human rules - not well defined structures like programming, or to a lesser extent, law.

The market they serve is themselves and powerful shareholders. They don't serve finicky consumers that have dozens of low-friction alternatives, like they do in AI slop Youtube videos, or logo generation for their new business.

A human at some point is at the top of the pyramid. Will CEOs be finding the best way to use AI to serve their agenda? They'd be foolish not to. But if you "replace the CEO", then the person below that is effectively the CEO.


You sound like a CEO desperately trying not to get fired.

Everyone is indispensable until they aren't.


This whole thread is delightful. Well done.

Alas, this doesn't answer the question I posed.

CEOs are a different class of worker, with a different set of customers, a smaller pool of workers. They operate with a different set of rules than music creation or coding, and they sit at the top of the economy. They will use AI as a tool. Someone will sit at the top of a company. What would you call them?


They've been proven wrong? I'm not sure I've seen an LLM that does anything beyond the most basic rote boilerplate for any of these. I don't think any of these professions have been replaced at all?

Hooker is probably harder to automate.

They both need social finesse, and CEOs don't need a body.


> Hooker is probably harder to automate.

I'm pretty sure I've seen news articles about attempts to do exactly that.


You can’t automate joining an ivy league fraternity.

Seeing as there are people that believe that they are dating a chat bot and others that believe that chat bots contain divinity, there are probably some people that would respond positively to slop emails about Business Insight Synergy Powered By Data-ScAIence and buy some SaaS product for their Meritocrat-Nepo sneaker collab drops company

Alright PopOS team... time to get cosmic out the door.

I've officially missed a whole cycle!

jkjk, thanks for the hard work, I'll wait as long as it takes.


Cosmic desktop shipped in PopOS 24.04 a few weeks ago btw

Nice, I missed that!

> Alright PopOS team... time to get cosmic out the door.

The alpha was in early September:

https://www.theregister.com/2024/09/12/pop_os_2404_cosmic_de...

The beta was the end of September:

https://www.theregister.com/2025/09/30/pop_os_2404_beta_rele...

The release date was announced as mid-December in November at the Ubuntu Summit:

https://www.theregister.com/2025/11/03/cosmic_1_before_xmas/

The full final release shipped before Yule:

https://www.theregister.com/2025/12/22/popos_2404_cosmic_epo...

If you care, were you not paying attention since the summer?


    It might sound like I’m just offering clichés – less is more, stop and smell the roses, take your time – and I guess I am. But clichés suffer the same issue: they are often profound insights, consumed and passed on too rapidly for their real meaning to register anymore. You really should stop and smell roses, as you know if you’re in the habit of doing that.
Great quote

I started using C# recently for a hobby project writing a game engine on top of MonoGame, and I have been very surprised at how nice a language C# is to use.

It has a clean syntax, decent package management, and basically every language feature I regularly reach for except algebraic data types, which are probably coming eventually.

I think the association of .NET to Microsoft tarnished my expectations.


Modern C# and .NET are great. It still suffers from the bad reputation of the Windows-only .NET Framework. It's still quite a heavy platform with a lot of features, but the .NET team has invested a lot of time to make it more approachable recently.

With top-level programs and file-based apps[1] it can be used as a scripting language now; just add a shebang (#!/usr/local/share/dotnet/dotnet run) to the first line and make it executable. It will still compile to a temporary file (slow on first run), but it doesn't require a build step anymore for smaller scripts.

[1]: https://learn.microsoft.com/en-us/dotnet/csharp/fundamentals...


Also, if you compile ahead of time (AOT) you can cut down on the features and get basically as small a subset of the libraries as you want. IMHO C# and dotnet are really starting to become very impressive.

There's also bflat [0]. Not an official Microsoft product, more of a passion project of a specific employee.

"C# as you know it but with Go-inspired tooling that produces small, selfcontained, and native executables out of the box." Really impressive. Self contained and small build system.

[0] https://github.com/bflattened/bflat


AOT requires a lot of fiddling around and might break the application unexpectedly, with very weird errors. It is mostly targeted at Blazor (WASM) and serverless functions.

The default runtime and JIT are fine for most use cases.


  > AOT requires a lot of fiddling around, and might break the application unexpectedly, with very weird errors.
It hasn't been my experience. Native AOT does come with some limitations [1][2], but nothing awful. Mostly it's that you can't generate code at runtime and you have to tame the code trimmer. Just don't ignore code analysis warnings and you should be good.
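
For reference, opting in is a single project property; a minimal csproj sketch (the target framework here is illustrative):

```xml
<Project Sdk="Microsoft.NET.Sdk">
  <PropertyGroup>
    <OutputType>Exe</OutputType>
    <TargetFramework>net8.0</TargetFramework>
    <!-- Enables Native AOT; trimming/AOT warnings then surface at `dotnet publish`. -->
    <PublishAot>true</PublishAot>
  </PropertyGroup>
</Project>
```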

  > It is mostly targeted to Blazor (WASM) and for serverless functions.
Making your CLIs start fast is also a huge use case.

[1]: https://learn.microsoft.com/en-us/dotnet/core/deploying/nati...

[2]: https://learn.microsoft.com/en-us/dotnet/core/deploying/trim...


C# AOT may sometimes require fiddling around, but in my experience a lot less fiddling around than what my alternative used to be, which was to use C++.

I love C# and .NET and use it every day.

Sometimes I wonder if the .NET department is totally separated from the rest of Microsoft. Microsoft is so bad on all fronts I stopped using everything that has to do with it. Windows, Xbox, the Microsoft account experience, the Microsoft store, for me it has been one big trip of frustration.


Microsoft is huge, it's many companies inside one company.

.NET seems to be somewhere close to Azure, but now far away from Windows or the business applications (Office/Teams, Dynamics, Power Platform). Things like GitHub, LinkedIn or Xbox seem to be de facto separate companies.

Edit: .NET used to be tied closely to Windows, which gave it the horrible reputation. The dark age of .NET ;)



Incredible.

My absolute favorite thing about modernity is how enabled we are to riff on a riff of a riff.

In 1346, if a blacksmith came up with something cool, it's quite possible that it died with them.


One thing I've learned from checking up on assumptions I've had about history is that it's easy to underestimate people in past times. They were probably better at communicating this stuff than you think.

If there's very little text before the internet, what would scaling up look like?
