Hacker News | bopbopbop7's comments

Just a couple more trillion and 6 more months!

> I believe that LLM's are making traditional programming obsolete. In fact there isn't any doubt in my mind.

Is this what AI psychosis looks like? How can anyone who is a half-decent programmer actually believe that English + non-deterministic code generator will replace "traditional" programming?


That's also my take, vibe coding as a non-deterministic 4GL. https://en.wikipedia.org/wiki/Fourth-generation_programming_...

4GLs are productive yes, but also limited, and still require someone to come up with specs that are both informed by (business) realities and engineering considerations.

But this is also an arena where bosses expect magic to happen when people are involved; just pronounce a new strategy, and your business magically transforms - without any of that pesky 'figuring out what to do' or 'aligning stakeholders' or 'wondering what drugs the c-suite is doing'. Let LLMs write the specs!


I think the author forgot that code has to compile and be useful.

And how much is technical debt worth?


What coding agent are you using where the code doesn't even compile!?


The one that cursor used to build their famous browser.


[flagged]


AI misinformation? Please do provide some examples.

Your whole history is AI psychosis btw, seek help.


> AI misinformation? Please do provide some examples.

Like saying they can't generate compiling code in this very thread?


Go look at the 40k failing CI/CD runs on the famous cursor browser.


Ah, it couldn't generate a 100% complete working browser from scratch in a week. I guess the technology is cooked.


Ah, so no longer misinformation?


Depends on whether I can bundle the technical debt, get a AAA rating on it, and then sell it


Has there been any good and useful software created with LLMs or any increase in software quality that we can actually look at?

So far it's just AI doom posting, hype bloggers who haven't shipped anything, anecdotes without evidence, an increase in CVEs, an increase in outages, and degraded software quality.


Software quality has been degrading for decades without LLMs though.

I only have anecdotal evidence from some engineers I know that they don't write software by hand any more. Provided the software they are working on was useful before, we can say that LLMs are writing useful software now.


Because who cares about correct and compilable code, any code will do!


Exactly!


Maybe we should look at outputs, like the quality of the software being produced, instead of discourse on forums where AI companies are spending billions on marketing?

Where is all this new software and increased software quality from all this progression?


It doesn't necessarily enhance or detract from software quality. You could use it for quality assurance or code health initiatives, but you'd have to prioritize that. Obviously it is hard to find a lot of humans who will (be allowed to) choose that over adding some new feature to satisfy the sales guys. And since you measure quality basically on vibes (how many times has this app crashed lately?) it probably takes a while to diffuse from commit to your consciousness. But I have seen it used for the purpose of quality, so I am cautiously optimistic.


Quality shmuality. Get good bro, my app already uses best patterns and you can do all the things and has enterprise SSO and runs on vercel and needs 39 services and costs a few million to run to show you AI generated excel sheets because you can't be bothered to think for a hot minute. We can't have you thinking you might get wrong ideas about ownership. I'm afraid open source was a mistake in the end because it enabled enterprises to iterate faster than they ever could on their own.


A non-deterministic, slow, pay to use, compiler for a language that is not precise enough for software. What an amazing abstraction!


You are just salty and old. If you're young and hip, you don't need to know what you're doing, just do the thing.


Something tells me a non-deterministic code generator won't be the solution to this problem.


Humans are also non-deterministic code generators, though. It may well be that an LLM is more deterministic, or at least more consistent, at producing reliable code than a human.


You're missing the point. Consider this:

Mathematicians use LLMs. Obviously, they don't trust an LLM to do the math itself. But LLMs can help with formalizing a theorem and then finding a formal proof. That's usually very tedious work, but LLMs are _already_ quite good at it. In the end you get a proof that is checked by normal proof-checking software (not an LLM!), and which you can inspect, break into parts, etc.

You really need to look at the details rather than dismiss the whole thing ("it made a math error, so it's bad at math" is the wrong conclusion).
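To make that workflow concrete, here is a minimal sketch in Lean 4 (the theorem name `add_comm_example` is made up for illustration; `Nat.add_comm` is a standard library lemma). The point is that regardless of whether the proof term is written by hand or suggested by an LLM, the Lean kernel, not the LLM, is what certifies it:

```lean
-- Commutativity of addition on Nat, proved via the standard
-- library lemma. If an LLM had suggested this proof instead,
-- the kernel would check it in exactly the same way: a wrong
-- proof simply fails to compile.
theorem add_comm_example (a b : Nat) : a + b = b + a :=
  Nat.add_comm a b
```

An incorrect proof suggestion is caught at compile time, which is why trusting the checker rather than the generator is sound.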


I'll believe it when I start seeing examples of good and useful software being created with LLMs, or some increase in software quality. So far it's just AI doom posting, hype bloggers who haven't shipped anything, anecdotes without evidence, an increase in CVEs, an increase in outages, and degraded software quality.


It would be helpful if you could define “useful” in this context.

I’ve built a number of team-specific tools with LLM agents over the past year that save each of us tens of hours a month.

They don’t scale beyond me and my six coworkers, and were never designed to, but they solve challenges we’d previously worked through manually and allow us to focus on more important tasks.

The code may be non-optimal and won’t become the base of a new startup. I’m fine with that.

It’s also worth noting that your evidence list (increased CVEs, outages, degraded quality) is exclusively about what happens when LLMs are dropped into existing development workflows. That’s a real concern, but it’s a different conversation from whether LLMs create useful software.

My tools weren’t degraded versions of something an engineer would have built better. They’re net-new capability that was never going to get engineering resources in the first place. The counterfactual in my case isn’t “worse software”, it’s “no software.”


It really shouldn't be this hard to just provide one piece of evidence. Are anecdotes of toy internal greenfield projects that could probably be built with a drag-and-drop no-code editor really the best this LLM revolution has to offer?


What is your bar for “useful”? Let’s start there and we’ll see what evidence can be offered.

User count? Domain? Scope of development?

You have something in mind, obviously.


If you're asking me to define a very clear bar, it's obvious nothing cleared it.

Anything that proves that LLMs increase software quality. Any software built with an LLM that is actually in production, survives maintenance, doesn't have 100 CVEs, and that people actually use.


At my work, ~90% of code is now LLM generated. It's not "new" software in the sense you're describing, but it's new features, bug fixes, and so on for the software we all work on. (We are also working on something that we can hopefully open source later this year that is close to 100% LLM generated, and I can say, as someone who has been reviewing most of the code, that it's quite high quality.)


Well, on the surface it may seem like there’s nothing being created of value, but I can assure you every company from seed stage to unicorns are heavily using claude code, cursor, and the like to produce software. At this point, most software you touch has been modified and enhanced with the use of LLMs. The difference in pace of shipping with and without AI assistance is staggering.


> every company from seed stage to unicorns are heavily using claude code, cursor, and the like to produce software

> The difference in pace of shipping with and without AI assistance is staggering.

Let's back up these statements with some evidence, something quantitative, not just what pre-IPO AI marketing blog posts are telling you.


Why quantitative? I have friends at most major tech companies and I work at a startup now. You shouldn’t write by hand what can be prompted. Doesn’t mean the hard parts shouldn’t be done with the same care as when everything was handwritten, but a lot of minutiae are irrelevant now.


Because anything not quantitative is either “trust me bro” or AI marketing. Some of us are engineers, so we want to see actual numbers and not vibes.

And there are studies on this subject, like the METR study that found development speed actually decreases with LLMs even while developers believe it increases.


It's not clear what evidence you expect to see. Every major tech company is using AI for a significant percentage of its code. Ask anyone who works there and you will have all the evidence you need.


> Every major tech company is using AI for a significant % of their code

It shows: increased outages, increased vulnerabilities, Windows failing to boot, and the Windows taskbar is still React Native and barely works. And I have spoken to engineers at FAANG companies; they are forced to use LLMs, and managers are literally tracking metrics. So where is all the amazing new software, software quality, or increased productivity from them?


You are measuring differently... they measure how much stuff they ship. They don't measure whether it's going to break, whether it's maintainable, or how much it will cost to keep using the LLM in the future. Remember, it's an extraction grift: buy now, pay later. Preferably after you've made the model an intrinsic part of the process. Oh snap, now we definitely need to bail out LLMs because no one knows how this stuff works. Please help. Useful idiots all around. A classic case of people not using their brains the way they evolved to.


Like the new features in Windows 11? They’ve just anointed a “software quality czar” and I suspect that is not a coincidence.


Just a couple more trillion dollars, we are so close!

