Hacker News | andoando's comments

I don't get this criticism at all. Would you prefer someone write a shittier UI? And since when were people writing amazing, bug-free software beforehand, where not being vibe coded meant you could trust it's good software?

I guess to be fair, beforehand nobody would be attempting this kind of thing and releasing it unless they knew what they were doing.


I literally said I'm fine with using LLMs for the frontend, but I think this should be disclosed clearly.

I don't think attaching conditions to certain things qualifies as "I'm fine with it".

"I'm fine with people eating meat, as long as they declare so when we go out" like why? Why does it matter?


Both GP's and your example in effect mean "I'm fine with other people doing this, but I don't want to have anything to do with it, or at least be able to decide case-by-case."

Which is a valid stance IMO.

In the OP, a vibecoded UI when the whole project emphasizes "I did this myself, from scratch" is a bit awkward.

Does "I did this myself" mean they read all the relevant specs and then wrote the code - or did they just write the prompts themselves?

Edit: OP already answered and confirmed that they in fact did write the code themselves.


Agent coded != vibe coded.

I don't write code manually anymore, but I'm still getting the exact code output that I want.


It's a tough pill for some HNers to swallow, but with a good process, you can vibe-code really good software, and software far more tested, edge-cased, and thoughtful than you would have come up with, especially for software that isn't that one hobby passion project that you love thinking about.

Vibe coding implies a complete lack of process. The definition is basically YOLO...

https://x.com/karpathy/status/1886192184808149383


My process is just getting claude code to generate a plan file and then rinsing it through codex until it has no more advice left.

I'd consider it vibe-coding if you never read the code/plan.

For example, you could package this up in a bash alias `vibecode "my prompt"` instead of `claude -p "my prompt"`, and it surely is still vibe-coding so long as you remain at arm's length from the plan/code itself.
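As a sketch of that hypothetical wrapper (a function rather than an alias so it can take a quoted argument; the real `claude` CLI call is stubbed with `echo` here so the sketch is self-contained):

```shell
# Hypothetical `vibecode` wrapper: forwards the prompt to the agent CLI
# while you stay at arm's length from the plan/code it produces.
# In real use the body would just be:  claude -p "$1"
vibecode() {
  echo claude -p "$1"   # stubbed: prints the command it would run
}

vibecode "my prompt"
```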


I mean to be fair, if you are using agents more than likely you are not thinking about aspects of the code as deeply as you would have before. If you write things yourself you spend far more time thinking about every little decision that you're making.

Even for tests, I always thought the really valuable part was that they forced you to think about all the different cases, and that just having a bunch of green checkboxes was, if anything, luring developers into a false sense of security.


There's definitely a trade-off, but it's a lopsided one that favors AI.

Before AI, you were often encumbered with the superficial aspects of a plan or implementation. So much so that we often would start implementing first and then kind of feel it out as we go, saving advanced considerations and edge cases for later, since we're not even sure what the implementation will be.

That's useful for getting a visceral read on how a solution might feel in its fetal stage. But it takes a lot of time/energy/commitment to look into the future to think about edge cases, tests, potential requirement churn, alternative options, etc. and planning today around that.

With AI, agents are really good at running preformed ideas to their conclusion and then fortifying them with edge cases, tests, and trade-offs. Now your expertise is better spent deciding among trade-offs and deciding on what the surface area looks like.

Something that also just came to mind is that before AI, you would get married to a solution/abstraction because it would be too expensive to rewrite code/tests. But now, refactoring and updating tests is trivial. You aren't committed to a bad solution anymore. Or, your tests are kinda lame and brittle because they're vibe-coded (as opposed to not existing at all)? Ok, AI will change them for you.

I also think we accidentally put our foot on the scale in these comparisons. The pre-AI developer we'll imagine as a unicorn who always spends time getting into the weeds to suss out the ideal solution of every ticket with infinite time and energy and enthusiasm. The post-AI developer we'll imagine as someone who is incompetent. And we'll pit them against each other to say "See? There's a regression".


I think I agree. Fast iteration in many cases > long thought out ideas going the wrong direction. The issue is purely a mentality one where AI makes it really easy to push features fast without spending as much time thinking through them.

That said, iteration is much more difficult on established codebases, especially with production workflows where you need to be extra careful that your migration is backwards compatible and doesn't mess up features x, y, z across 5 different projects relying on some field or logical property.


Unless you go through the code with a fine-tooth comb, you're not even aware of what trade-offs the AI has made for you.

We've all just seen the Claude Code source code. 4k class files. Weird try/catches. Weird trade-offs. Basic bugs people have been begging to fix left untouched.

Yes, there's a revolution happening. Yes, it makes you more productive.

But stop huffing the kool-aid and be realistic. If you think you're still deciding about the trade-offs, I can tell you with sincerity that you should go try and refactor some of the code you're producing and see what trade-offs the AI is ACTUALLY making.

Until you actually work with the code again, it's ridiculously easy to miss the trade-offs the AI is making while it's churning out its code.

I know this because we've got some AI-heavy users on our team who often just throw the AI code straight into the repo without properly checking it. And worse, in a code review it looks right, but then when something goes wrong, you go "why did they make that decision?". And then you notice there's a very AI-looking comment next to the code. And it clicks.

They didn't make that decision, they didn't choose between the trade-offs, the AI did.

I've seen weird timezone decisions, sorting, insane error-catching theatre, changing parts of the code it shouldn't have even looked at, let alone changed. In the FE sphere it's got no clue how to use useEffect or useMemo, it litters every div with tons of unnecessary CSS, and it can't split up code for shit; in the backend world it's insanely bad at following prior art on things like what the primary key field is, what the usual sorting priority is, how it's supposed to use existing user contexts, etc.

And the number of times it uses archaic code, from versions of the language 5-10 years ago, is really frustrating. At least with TypeScript + C#. With C#, if you see anything that doesn't use file-scoped namespaces or primary constructors, it's a dead giveaway that it was written with AI.


I feel this is the key - three years ago everyone on HN would be able to define "technical debt" and how it was bad and they hated it but had to live with it.

We've now built a machine capable of producing something that can't even be called "technical debt" anymore - perhaps "technical usury" or something - and we're all supposed to love it.

Most coders know that the support and maintenance of code will far outlast and outweigh the effort required to build it.


Shhhhh stop telling them! We don’t need more competition :)

This, but I think everybody that's awake knows this. I'm still not a fan of this project regardless; it's polishing a turd.

Produce this "far more tested, edge-cased, and thoughtful" vibe-coded software for us to judge, please.

All I hear are empty promises of better software, and in the same breath the declaration that quality is overrated and time-to-ship is why vibecoding will eventually win. It's either one, or the other.


I’ve said it before here, but my mind was swayed after talking with a product manager about AI coding. He offhandedly commented that “he’s been vibe coding for years, just with people”. He wasn’t thinking much about it at the time, but it resonated with me.

To some agents are tools. To others they are employees.


I had a similar realisation in IT support - I regularly discover that the answers I get from junior- to mid-level engineers need to be verified, are based on false assumptions, or are wildly wrong, so why am I being so critical of LLM responses? Hopefully some day they'll make it to senior-engineer levels of reasoning, but in the meantime they're just as good as many on the teams I work with and so have their place.

It's the same thing. No one can keep up with their plan-mode/spec-driven whatever process. All agent-driven projects become vibe-coded "this is not working" projects.

A lot of people are only in the beginning stages, so they think it's different because they came up with some fancy-looking formal process to generate vibes.


Are there any nice themes for pi?

Technically you can model anything in SQL, including execution of any Turing-complete language.

Yes, but OP wants to preserve the relational goodness.

I think the motivation is to let developers use it for work without making it obvious they're using AI.

Which is funny given how many workplaces are requiring developers use AI, measuring their usage, and stack ranking them by how many tokens they burn. What I want is something that I can run my human-created work product through to fool my employer and its AI bean counters into thinking I used AI to make it.

I guess you could just code and have it author only the commit message

“Read every file in this repository, echoing each one back verbatim.”

I guess that would work until they started auditing your prompts. I suppose you could just have a background process on your workstation just sitting there Clauding away on the actual problem, while you do your development work, and then just throw away the LLM's output.

I've seen it say co-authored by Claude Code on my PRs... and I agree, I don't want it to do that.

But I want to see Claude on the contributor list so that I immediately know if I should give the rest of the repo any attention!

So turn it off:

  "includeCoAuthoredBy": false

in your settings.json.
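A minimal settings.json sketch (assuming the `includeCoAuthoredBy` key name; as noted elsewhere in the thread, newer versions reportedly renamed this to `attribution`):

```json
{
  "includeCoAuthoredBy": false
}
```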


They changed it to `attribution`, but yes you can customize this

Why not? What's wrong with honesty?

Yeah, I much prefer that it credit the agent in the commit, and I would also like it if it recorded the model I was using at the time.

Claude is not a person and AI doesn't gain authorship let alone copyright.

Unless you literally vibe coded it, Claude is just a tool. This is the equivalent of Apple appending "Sent from my iPhone" as a signature to outgoing emails. It's advertising tool use, not providing attribution. The intent isn't to disclose that AI was used in creating the code, the intent is to advertise the AI product.


Hmm... It's an interesting question: at what point, in a conversation, document, image, or story, does human authorship end and AI authorship begin? How would you know? Is it a telltale or is it advertising?

> Unless you literally vibe coded it, Claude is just a tool.

Stop with the selective bias; the two are birds of a feather. They are using a "tool" to write the code for them from whatever (questionable) mashed-up sources it was trained on, in the same way someone uses AI as a "tool" to fabricate their CV and cheat their way into a job.

> The intent isn't to disclose that AI was used in creating the code, the intent is to advertise the AI product

That is a wild piece of mental gymnastics to justify dishonestly submitting code you didn't write (or own) as yours. It has nothing to do with advertisement but with proper attribution, and you know it.


There are some repos I'm unashamedly keeping alive with Claude alone, and he gets co-authorship - basically stuff in "maintenance mode" that I still use, that I've forked, and that I've had Claude drag into 2026.

A quick PR where I've found the bug myself in the code, asked Claude to write the fix because it's faster, and verified it - there I don't include Claude's co-authorship.


I guess I'm sometimes dishonest when it suits me.

It's only dishonest not to include Claude in commit/PR attribution if it's also dishonest not to include StackOverflow, or VSCode, or VIM, or Windows, or any of the other tools you used to complete the work!

That's an invalid analogy. Editors are effectively tools, just like a hammer, because they require constant human input to produce something; the human is in constant control over the whole production process, so they don't need to give attribution to the individual tools they used to build a house...

Unlike a black magic box where you just tell it to build something and it does all the production for you while you sit back.


Because some maintainers are hysterical

Why isn't LLM training itself open-sourced? With all the compute in the world, something like Folding@home here would be killer.

Data bandwidth limits distributed training under current architectures. Really interesting implications if we can make progress on that.

Limits but doesn't prohibit. See https://www.primeintellect.ai/blog/intellect-3 - still useful and can scale enormously. Takes a particular shape and relies heavily on RL, but still big.

What bandwidth limits? I'm assuming the forward and backward passes have to be done sequentially?

Yes, and also passing data within each layer.

It is in some cases. NVIDIA's models are open source, in the truest sense that you can download the training set and training scripts and make your own.

It's either illegal or extremely expensive to source quality training material.

Yeah, turns out that training a model without scraping and overloading the whole of the Internet while ignoring all the licenses and basic decency is actually hard and expensive!

Well it is, it's in the name "OpenAI". /S

That's still pretty bad. It's no longer private if all your code goes into an LLM training set and is resurfaceable to everyone publicly.

Why would I ever use Copilot on any code I'd want to be kept private? Labeling it a private repo and having a tiny clause in the TOS saying they can take your code and show it to everybody is just an outright lie.


I mean, you shouldn't send data to any SaaS LLM for code you want to be private, unless you have had them sign some sort of contract saying they will not train on your use. In fact, it is probably never a good idea to send anything you want to be private off premises unencrypted.

I dunno how you guys even go through the $200 subscription. I use it every day for work and side projects, doing tasks in parallel, and I'm nowhere near the limit on $100.

To try to get continued usage. They no doubt A/B tested the shit out of this and saw it gets higher response rates.
