Hacker News | andyferris's comments

I move the record to another _index_, generally.

It depends whether you reliably control all the DB client code, of course.


This. Make sure the 'active' flag (or deleted_at timestamp) is part of most indexes and you'll probably see very small impacts on reads.

It then turns into a slowly growing problem if you never clean up the soft-deleted records, but being able to gain auditability almost immediately is usually well worth kicking the can down the road.
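The pattern above can be sketched with SQLite's partial indexes: only live rows are indexed, so reads that filter on `deleted_at IS NULL` stay fast as soft-deleted rows accumulate. The table and column names here are illustrative, not from any particular schema.

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("""
    CREATE TABLE records (
        id INTEGER PRIMARY KEY,
        name TEXT NOT NULL,
        deleted_at TEXT  -- NULL means the record is live
    )
""")
# Partial index: only rows where deleted_at IS NULL are indexed,
# so the index stays small no matter how many soft-deletes pile up.
conn.execute("""
    CREATE INDEX idx_records_live ON records (name)
    WHERE deleted_at IS NULL
""")
conn.executemany("INSERT INTO records (name) VALUES (?)", [("a",), ("b",)])
# Soft delete: set the timestamp instead of removing the row,
# keeping it available for auditing.
conn.execute("UPDATE records SET deleted_at = datetime('now') WHERE name = 'a'")
live = conn.execute(
    "SELECT name FROM records WHERE deleted_at IS NULL ORDER BY name"
).fetchall()
print(live)  # [('b',)]
```

PostgreSQL supports the same `CREATE INDEX ... WHERE` syntax, so the sketch carries over directly.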


Vibe coding is actually "good" for small, bespoke things, the same way that Excel is "good" for small tasks but bad for larger things. Too easy to make mistakes, too hard to maintain.

I could equally ask: where are all the Excel workbooks that are actually _good_? No one needs to share their Excel workbooks. They don't need 10k GitHub stars. They just achieve some small goal of the Excel user. These LLM agents just need to do what the user needs doing at any moment.

(Sometimes that can be a small part of a larger job in software, or a series of small parts. But again, this is going to "show up" as part of people's workflows in maintaining enterprise software, which is what most programmers are employed to do - in other words, you won't directly see it at all. And no, digital cameras didn't change the field 18 months after the first somewhat-usable one was released; it took quite a while for the technology to become good enough, and cheap enough, to democratize filmmaking.)


> maintaining enterprise software which is what most programmers are employed to do …

I hear little from those involved with enterprise or line-of-business applications discussing their findings. Forums like this are dominated by SaaS, tool makers, computer and data scientists, and infrastructure concerns.

Anyone using AI with large, complex business systems?


Totally agree. I see a lot of experimentation, initial exploration for an idea, etc. but the middle and end portions are never noteworthy except when it goes haywire and someone makes a blogpost about it.

Ideating is important but it is also very far from what is being promised. It’s also not that useful to the average person most of the time. If this is truly a revolutionary, must-have, daily-use technology, then by now we should have some idea of where it lives. But we don’t! The best and most consistent application so far is coding agents for coders. That’s great, but again, not the promise and very limited in scope.


I don’t care about GitHub stars. There are tons of excel workbooks and such that are useful, publicly available, and utilized.

Wonderful analogy.

I still use spreadsheets regularly. Relieved of the pressure of making something "good", I can get basic things done quickly.

Is it sustainable or maintainable? Nope. Doesn't need to be. Not qualities I'm remotely concerned about when I'm writing a spreadsheet.

Vibe coding is similar in that you can solve a specific problem without much concern for generalization or future reuse.


Custom domains (BYO or buy through Apple) and email hosting is in the announcement, too.

I believe it said it will be free, starting April 14.

I think what they've announced is the best fit for small businesses, not large enterprises. They can still treat it as a B2C-style service - many tiny customers with similar needs. Mom and Pop can now get a domain name through Apple, with email accounts - for a lot of people that might be the only way they'd know how to do something like that.

The business needs here aren't so different to family management features, say.

Throwing in Entra ID / Google Workspace authentication and multiple Apple IDs per device is probably the most "interesting" part as to where that ends up in the distant future.


When people say the "bubble will pop" it's meant in analogy to the dotcom era - businesses and investors lost money, but the internet (and its opportunities) didn't vanish.

Even open-weight local models are becoming good enough for teaching yourself quite a range of stuff, especially the beginner aspects. LLMs are not going to simply disappear because of a financial realignment. The worst thing might be not being able to access a super-duper frontier model for free?


Thank you. I needed that.

I guess it would be too obvious a lie to say Codex is "the present"?

I highly doubt the A20 Pro will be slower than the A19 Pro - particularly for AI workloads.

We're talking nearly five orders of magnitude difference between 0.6 tokens/sec and 35k tokens/sec.
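As a quick sanity check on the size of that gap, the figures here are just the two throughput numbers from the comment above:

```python
import math

local_tps = 0.6       # tokens/sec, slow local inference
dc_tps = 35_000.0     # tokens/sec, datacenter-scale serving

ratio = dc_tps / local_tps
print(round(ratio))                  # ~58333x throughput gap
print(round(math.log10(ratio), 2))  # ~4.77 orders of magnitude
```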

While there are problems that can be solved at 0.6 tokens/sec (particularly offline, at the edge, in-the-field applications), these are currently vastly outnumbered by other applications.

There's just no competing. Local sucks.


> There's just no competing. Local sucks.

absolutely, however this doesn’t mean we should abandon local. i can’t remember who, but someone in the ai nuts and bolts arena said “smaller local models is where the exciting stuff is happening right now. it’s the area where real fast progression is happening.” and it seems to be true. new big models aren’t making anywhere near the leaps smaller models are.

it’s so important we keep moving forward on running locally for the same reason it was important for us to use open standards when building the internet. if we hadn’t we’d all be connected through aol with 10 hours/month allowed internet usage and termed in through a sun workstation renting cpu cycles from some mainframe company at like “you’ve got 10,000 cpu cycles left on your monthly plan, please deposit $500 for 5,000 more.”

while all of this is before my time, i’ve heard and read so many horror stories about how people could only connect through dumb terminals to “you wouldn’t believe it, computers then were the size of buildings” 1000 miles away and had to sign up for workload timeslots. make no mistake, this is the future these companies want, they want us to rent everything and own nothing.


Local is enough for most users as long as they're willing to accept a non-realtime response - which is a real limitation (especially for personal agentic use) but not a very significant one. The hardware is not that expensive; a single user's needs aren't going to saturate a state-of-the-art AI datacenter rack or anything like that. Not even for heavy agentic workloads.

You rent your broadband internet. It's not a foreign concept that we can't own all the infra.

I don't know why we can't just get over the local compute thing and instead build open infra and models in the cloud. That's literally the only way we'll be able to keep pace with hyperscalers.

Local is not going to benefit 99% of use cases. It's a silly toy.

If we build open infra for cloud-based provisioning and inference, we could build a future we still have some ownership in. We'd be able to fine tune large models for lots of purposes. We wouldn't be locked in to major vendors.


SK Hynix: "Hold my LPDDR5X"

I am not a carpenter; I use a saw at least once a month.
