Hacker News | AyyEye's comments

You should probably disclose that you're a CTO at an AI startup, I had to click your bio to see that.

> The amount of compute in the world is doubling over 2 years because of the ongoing investment in AI (!!)

All going into the hands of a small group of people that will soon need to pay the piper.

That said, VC backed tech companies almost universally pull the rug once the money stops coming in. And historically those didn't have the trillions of dollars in future obligations that the current compute hardware oligopoly has. I can't see any universe where they don't start charging more, especially now that they've begun to make computers unaffordable for normal people.

And even past the bottom dollar cost, AI provides so many fun, new, unique ways for them to rug pull users. Maybe they start forcing users to smaller/quantized models. Maybe they start giving even the paying users ads. Maybe they start inserting propaganda/ads directly into the training data to make it more subtle. Maybe they just switch out models randomly or based on instantaneous hardware demand, giving users something even more unstable than LLMs already are. Maybe they'll charge based on semantic context (I see you're asking for help with your 2015 Ford Focus. Please subscribe to our 'Mechanic+' plan for $5/month or $25 for 24 hours). Maybe they charge more for API access. Maybe they'll charge to not train on your interactions.

I'll pass, thanks.


I'm no longer CTO at an AI startup. Updated, but I don't actually see how that is relevant.

> All going into the hands of a small group of people that will soon need to pay the piper.

It's not very small! On the inference side there are many competitive providers as well as the option of hiring GPU servers yourself.

> And historically those didn't have the trillions of dollars in future obligations that the current compute hardware oligopoly has. I can't see any universe where they don't start charging more, especially now that they've begun to make computers unaffordable for normal people.

I can't say how strongly I disagree with this - it's just not how competition works, or how the current market is structured.

Take gpt-oss-120B as an example. It's not frontier-level quality, but it's not far off, and it sets a firm floor: open-source models will never get less capable than that.

There is a competitive market in hosting providers, and you can see the pricing here: https://artificialanalysis.ai/models/gpt-oss-120b/providers?...

In what world would all the providers (who want revenue!) raise prices above the premium Cerebras charges for its very-high-speed inference?

Google is already serving profitably at the low end, at around half Cerebras's price (but then you have to deal with Google billing!).

The fact that Azure and Amazon price exactly the same as 8(!) other providers, and the same as what https://www.voltagepark.com/blog/how-to-deploy-gpt-oss-on-a-... quotes for running your own server, shows how the economics work out on NVIDIA hardware. There's no subsidy going on there.

This is on hardware that is already deployed. That isn't suddenly going to get more expensive unless demand increases... in which case the new hardware coming online over the next 24 months is a good investment, not a bad one!
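The "no subsidy" argument boils down to simple unit economics: if the per-token price a provider charges is at or above what renting the same hardware yourself would cost per token, nobody is selling below cost. A toy calculation (all figures are hypothetical placeholders, not numbers from the thread or from any provider's price sheet):

```python
# Toy cost comparison: renting a GPU server vs. paying per-token API rates.
# All numbers below are hypothetical and for illustration only.

def self_host_cost_per_m_tokens(gpu_hourly_usd: float,
                                tokens_per_second: float) -> float:
    """USD per million tokens when renting hardware at a fixed hourly rate."""
    tokens_per_hour = tokens_per_second * 3600
    return gpu_hourly_usd / (tokens_per_hour / 1_000_000)

# Hypothetical: a $4/hr server sustaining 1,000 output tokens/sec.
hosted = self_host_cost_per_m_tokens(gpu_hourly_usd=4.0, tokens_per_second=1000)
print(f"self-hosting: ${hosted:.2f} per 1M tokens")  # prints $1.11 per 1M tokens
# If competitive API prices sit in the same ballpark, the hosting
# is being sold at cost-plus, not subsidized.
```

The point is only the shape of the arithmetic: once a model's weights are open and many parties can rent identical hardware, API prices converge toward this rental-cost floor rather than drifting upward.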


Tell that to all of the humans who were capable of driving, but blocked by a fake autonomous car that froze in the middle of the road.

Their Forgejo instance has an interpreter written in Go: https://forge.nouveau.community/nova/ni


If Samsung cared about their reputation they would have stopped releasing garbage electronics a decade ago and anyone suggesting putting ads on a fridge (and a high end one at that) would have been fired the same day they suggested it.


Back in the days of manually setting IRQs, enough of them were used by the system that no, you couldn't use 8 gamepads. Assuming you could even connect them.

(I think this game is probably past those times but not by much)


I specifically checked whether DirectInput from DirectX 5 already supports USB HID devices, and it does! Granted, even then you were unlikely to encounter 8 USB devices, let alone HID devices in particular.


Human news isn't a good comparison because this is second order -- LLMs are downstream of human news. It's a game of stochastic telephone. All the human error is carried through, with additional hallucinations on top.


But the issue is that the vast majority of "human news" is second order (at best), essentially paraphrasing releases by news agencies like Reuters or Associated Press, or scientific articles, and typically doing a horrible job at it.

Regarding scientific reporting, there's as usual a relevant xkcd ("New Study") [0], and in this case even better, there's a fabulous one from PhD Comics ("Science News Cycle") [1].

[0] https://xkcd.com/1295/

[1] https://phdcomics.com/comics/archive.php?comicid=1174


Then the point still stands: this makes things even worse, given that it's adding its own hallucinations on top instead of simply relaying the content or, ideally, identifying issues in the reporting.


You understand that an LLM can only poorly regurgitate whatever it's fed, right? An LLM will _always_ be less useful than a primary/secondary source, because it can't fucking think.


Regardless of how you define "think", you still need to get a baseline of whether human reporters do that effectively.


Not even two weeks after Stellantis mandates vibe coding engineering workflows. Has to be a new record.

https://www.stellantis.com/en/news/press-releases/2025/octob...


Wow! But seriously this would have to be code written before two weeks ago to be pushed to production OTA to a fleet of vehicles, right?


All bets are off for any org willing to push fleetwide updates on a Friday afternoon.


To cars that are currently being driven...


Apparently for some people the update makes it worse.

https://www.jlwranglerforums.com/forum/threads/2024-4xe-loss...

Not even two weeks after going all-in on enterprise vibe coding including for "engineering workflows".

> [Stellantis'] determination to apply AI across every part of the enterprise

https://www.stellantis.com/en/news/press-releases/2025/octob...


You can usually delete the modem on your car.


Sounds sort of like MarkovJunior, minus the learning bits.

https://github.com/mxgmn/MarkovJunior

