Hacker News

The real code red here is less that Google just one-upped OpenAI but that they demonstrated there’s no moat to be had here.

Absent a major breakthrough, all the major providers are just going to keep leapfrogging each other in the most expensive race to the bottom of all time.

Good for tech, but a horrible business and financial picture for these companies.



> for these companies

They’re absolutely going to get bailed out and socialize the losses somehow. They might just get a huge government contract instead of an explicit bailout, but they’ll weasel out of this one way or another and these huge circular deals are to ensure that.


>They’re absolutely going to get bailed out and socialize the losses somehow.

I've had that uneasy feeling for a while now. Just look at Jensen and Nvidia -- they're trying to get their hooks into every major critical sector they can (Nokia last month, Synopsys just recently). When the chickens come home to roost, my guess is that they'll pull out the "we're too big to fail, so bailout pls" card.

Crazy times. If only we had regulators with more spine.


Nvidia is the ultimate beneficiary of the money being invested (it all goes to expensive GPUs). If Nvidia loses these good customers, it will have less revenue. So it prefers to slowly buy its customers with this money...

I get that, but what I'm saying is that it's anticompetitive as heck. In a fair system, the profits from NVDA's revenue growth would be distributed to shareholders as dividends or reinvested in the company itself, not spent buying its own customers -- that's my (and countless others') biggest gripe with the whole AI bubble bs.

Antitrust regulators must be asleep at the wheel.


This would trigger something that people in power would rather not trigger.


The shenanigans that set off the GFC were much more nakedly corrupt and didn’t have even a fig leaf of potential usefulness to anybody to justify them. The revolution failed to materialize then. If the AI bust isn’t worse for the median person than 2008, I don’t think people in power have anything to fear.

Why do we think it won't be worse? If you exclude the circular trading of AI companies from metrics, we're already in a pretty big recession, and that will only get worse if the AI companies collapse.

I'm skeptical that anyone knows how it'll play out. I think there's strong evidence a correction is coming, but some talking head has been predicting it's going to be this week, every week, for a while now. Nobody has convinced me it's likely to look like the Great Depression yet.

I also think the circular dealing fears in particular are overstated. Debt financing that looks like this is common in semicon, and I doubt there are any serious investors that haven’t already priced it in. If the bust is fatal for AI investment, it’ll just be bankrupt companies owing money to other bankrupt companies.


Anything that can't continue forever must eventually stop, but the market can stay irrational longer than you can remain solvent. I don't think there's a problem saying that there will be a crash but it's impossible to know when.

The longer a bubble grows, though, the worse it gets when it pops. According to Fed stats, we might still be postponing most of the crash that was going to happen in 2008.


If the AI boosters are right about what the technology will be capable of in 5 years, at least some of the big players' investments will be literally the most profitable investments that ever happened. This is a potential mispricing, not a system inherently incapable of paying off long term like the subprime mortgages were. I don't think another step-change like we saw with GPT-2/3 is likely in the mid-term future, but I don't think anyone has shown it's virtually impossible.

That's betting, basically, that there will be an oligarchy (rule by multiple dictators) and that the oligarchs will be whoever owns OpenAI shares.

If AI turns the world into a dictatorship, what gives anyone the idea they'll just agree to share that dictatorship with their shareholders? They could just ignore company law - they're dictators!


The only thing power is concerned about is China dominating America in AI, because of the military and economic edge it would give them. Future wars will be AI fighting against AI.

Even Chinese leadership is somewhat skeptical about AI maximalism [0] with worries about "AI Washing" by enthusiastic cadre trying to climb rungs [1], and evoking Solow's Paradox [2].

There is still significant value in AI/ML applications from a NatSec perspective, but no one is actually seriously thinking about AGI in the near future. In a lot of cases, AI from a NatSec perspective is about labor augmentation (how do I reduce toil in analysis?), pattern recognition (how do I better differentiate a bird from an FPV drone?), or Tiny/Edge ML (how do I distill models so I can embed them in commodity hardware and scale out production?).

It's the same reason why during the Chips War zeitgeist, while the media was harping about sub-7nm, much of the funding was actually targeted towards legacy nodes (14/28nm), chip packaging (largely offshored to China in the 2010s because it was viewed as low margins/low value work), and compound semiconductors (heavily utilized in avionics).

[0] - https://www.zaobao.com.sg/news/china/story20250829-7432514

[1] - https://finance.sina.com.cn/roll/2025-09-30/doc-infsfmit7787...

[2] - https://m.huxiu.com/article/4780003.html


Pointing to Solow’s Paradox is kind of weird to me. Productivity growth accelerated in the 90s and 2000s, so it’s easy to tell a story where the computer age simply didn’t accelerate things until it had sufficiently penetrated the economy. If AI follows the same pattern, betting big on it still makes sense: China would probably be the predominant superpower if the computing developments of the 70s and 80s were centered there instead of the US.

The point is that just like in the US, Chinese decision-makers are increasingly voicing concerns about unrealistic assumptions, valuations, and expectations around the capabilities of AI/ML.

You can be optimistic about the value of agentic workflows or domain specific applications of LLMs but at the same time recognize that something like AGI is horseshit techno-millenarianism. I myself have made a pretty successful career so far following this train of logic.

The point about Solow's Paradox is that the gains of certain high productivity technologies do not provide society-wide economic benefit, and in a country like China where the median household income is in the $300-400/mo range and the vast majority of citizens are not tech adjacent, it can lead to potential discontent.

The Chinese government is increasingly sensitive to these kinds of capital misallocations after the Evergrande crisis and the ongoing domestic EV price war between SoEs, because vast amounts of government capital are being burnt with little to show for it from an outcomes perspective (e.g. a private company like BYD has completely trounced every other domestic EV competitor in China -- the majority of them state owned -- after billions were burnt investing in SoEs that never had a comparative advantage against BYD or an experienced automotive SoE like SAIC).


> The point about Solow's Paradox is that the gains of certain high productivity technologies do not provide society-wide economic benefit

Some people certainly argue that about the computer age, and it’s not totally unsupported. But I don’t think the evidence for that interpretation (as opposed to a delayed effect) is strong enough that I’d want to automatically generalize it to a new information technology advance.

To be clear, I don’t think China’s reticence is necessarily wrongheaded. But “we will usher in an age of undisputed dominance in a decade or two instead of right now from this investment” is a weird argument, especially from a government as ostensibly long-term focused as China.


> To be clear, I don’t think China’s reticence is necessarily wrongheaded. But “we will usher in an age of undisputed dominance in a decade or two instead of right now from this investment” is a weird argument, especially from a government as ostensibly long-term focused as China

The most important priority for any government is political stability. In China's case, the local and regional government fiscal crisis is the primary concern, because every yuan spent on subsidizing an industry is also a yuan taken away from social spending - which has been entirely the responsibility of local governments since the Deng reforms. This is why, despite being a large economy, China has only just caught up to Iran's and Thailand's developmental indicators in the past 2-3 years.

The meme of a "long-term focused China" is just that - a meme. Setting grand targets and incentivizing the entire party cadre to meet those targets is leading to increasingly inefficient deployments of limited capital, and it led to two massive bubbles bursting in the past 5 years (real estate and EVs). The Chinese government doesn't want a third one, and is increasingly trying to push for capital to be deployed to social services instead of promotion-targeted initiatives.

Also, read Chinese pronouncements in the actual Putonghua - the English translations make bog-standard pronouncements sound magnanimous, because most people who haven't heard or read a large number of Chinese government pronouncements don't understand how they tend to be structured and written, or the tone used.


Oh, I don't doubt any of this - it's just that you don't usually see the CCP publicly making arguments that they need to sacrifice the long term for the short term. They have a brand for how they publicly justify their decisions, and it's definitely about the inevitable long arc of Chinese history or whatever.

> it’s just that you don’t usually see the CCP publicly making arguments that they need to sacrifice the long term for the short term

They do.

These kinds of statements and discussions happen all the time - in Chinese. The "long-termism" trope is largely an English-language one, because outsiders either severely denigrate or severely fawn over Chinese policymaking. Additionally, because most outsiders don't speak or understand Chinese, the spectre of China is often used as a rhetorical device to help drive decision-making, and "long-termism" is an easy device for that. A similar thing was done with Japan in the 1980s and Germany in the 2000s.

And what actually is the long-term value of investing tens of billions in (e.g.) AGI versus a similar amount in subsidized healthcare expansion in China? Application-based and domain-specific use cases of AI/ML have shown the most success from an outcomes perspective, for both national security and economic purposes.

AI/ML has a lot of value, but a large amount of the promise is unrealistic at the valuations seen in both the US and China. The issue is that in China, an AI bubble bursting risks leaving local and regional governments holding the bag, like during the real estate crisis, because the vast majority of the capital deployed in subsidies came from regional and local government budgets, and it takes a large amount of capital away from social service expansion.

For a lot of Chinese leadership, the biggest worry is Japanification, which set in due to the three-way punch of the 1985 Endaka recession, the 1990 asset bubble bust, and the 1997 Asian Financial Crisis. Much of China's financial leadership and many of its regulators started their careers managing the blowback of those crises in China, or were scholars of them. As such, Chinese regulators increasingly try to pop bubbles sooner rather than later, especially after their experiences dealing with the 2015-16 market crash and the Evergrande crisis. Irrational exuberance around AI is increasingly being viewed through that lens as well.


Nah, people in power are openly and blatantly corrupt, and it matters little. People in power don't care and don't have to care.

I haven’t read in detail, but based on sheer speculation, this may be relevant:

https://www.whitehouse.gov/presidential-actions/2025/08/demo...

Many retirement accounts/managers may already be channeling investment such that 401k accounts are broadly set up to absorb any losses… Could also just be this large piece of tin foil on my head.


Absolutely. And in the process they will figure out how to bankrupt any utilities and local governments they can, by offloading as much of their power-generation cost overhead as possible and shopping for tax rebates.


It will be the biggest bailout in history and financed entirely by money printing at a time when the stability of the dollar is already being questioned, right? Not good.

Absolutely. I don't understand why investors are excited about getting into a negative-margin commodity. It makes zero sense.

I was an OpenAI fan from GPT 3 to 4, but then Claude pulled ahead. Now Gemini is great as well, especially at analyzing long documents or entire codebases. I use a combination of all three (OpenAI, Anthropic & Google) with absolutely zero loyalty.

I think the AGI true believers see it as a winner-takes-all market as soon as someone hits the magical AGI threshold, but I'm not convinced. It sounds like the nuclear lobby's claims that they would make electricity "too cheap to meter."


It's the same reason for investing in every net-loss, high-valuation tech startup of the past decade. Investors hope they'll magically turn into Google, Apple, Netflix, or some other wealthy tech company. But they forget that Google owns the ad market, Apple owns the high-end/lifestyle computer market, and Netflix owns TV/movie habit analytics.

Investors in AI just don't realize AI is a commodity. The AI companies' lies aren't helping (we will not reach AGI in our lifetimes). The bubble will burst if investors figure this out before they successfully pivot (and they're trying damn hard to pivot).


Helping to prevent a possible Skynet scenario probably makes those checks easier to write.

There's a lot more than money at stake.


> I don't understand why investors are excited about getting into a negative-margin commodity. It makes zero sense.

Long term, yes. But Wall Street does not think long term. Short or medium term, you just need to cash out to the next sucker in line before the bubble pops, and there are fortunes to be made!


Maybe there's no tangible moat still, but did Gemini 3's exceptional performance actually funnel users away from ChatGPT? The typical Hacker News reader might be aware of its good performance on benchmarks, but did this convert a significant number of ChatGPT users to Gemini? It's not obvious to me either way.

Definitely. Because they inject it into Google Search, people who have never used ChatGPT, or who only used it as a "smarter" Google search, will simply try the search function directly. It's terrible for genuinely detailed tasks, e.g. debugging errors, but summarizing basic searches that would have taken 2-3 clicks through the results is now handled right on the search page. I feel bad for the website hosts who actually want visitors instead of visibility.

Anecdotally, yes. Since launch I've observed probably 50% of the folks who were "ChatGPT this, ChatGPT that" all the time suddenly talking about Gemini non-stop. The more that gets rolled into Google's platform, the less point there is to using separate tooling from OpenAI. There's a reason Sam is calling this "code red."

Interesting. And these people weren't mostly techies? My impression has been that the further someone is from tech, the more likely they are to think that ChatGPT is synonymous with LLMs.

Mostly non-techies, which surprised me -- even borderline tech-illiterate folks talking about Gemini. I can see why OpenAI is freaking out. They've massively overextended themselves financially, and if the base starts to slip even just a bit, they're in big trouble.

> My impression has been that the further someone is from tech, the more likely they are to think that ChatGPT is synonymous with LLMs.

This is still sorta true, but swap "LLM" for "chatbot." I mentor high school kids, and a lot of them use ChatGPT. A lot of them use AI summaries from Google Search. None of them use gemini.google.com.


I'm seeing it outside of techies. My dad told me "AI Google said that..."

I've had sales clerks at stores say that when I asked basic questions about their products, including questions about subscription pricing.

They integrated it into Google Search immediately, so I think a lot of people will bother less with ChatGPT when a Google search is just as effective.

I think the theory is if you get to that point, it's already over.

Especially if we're approaching a plateau, in a couple years there could be a dozen equally capable systems. It'll be interesting to see what the differentiators turn out to be.


...and there would be a dozen equally capable open-weight models that could be run locally at almost no cost... poor AI investors in that case...

So why did Google stock increase massively since about when Gemini 2.5 Pro was released, their first competitive model?


Because Google already has many healthy revenue streams that will benefit from LLMs and all it has to do in the AI space is remain competitive.

That's not evidence of anything in and of itself. RIM's stock price was at its highest in 2009, two years after the iPhone came out.

I was curious about this - if my Google results are accurate, the stock actually peaked in June 2007, the same month the iPhone was released.

It seems that BlackBerry's market share of new phone sales peaked at 20% in 2009. So I'm not sure if it's a coincidence, but it looks like the market actually did a pretty good job of pricing in the iPhone/Android risk well before it was strongly reflected in sales.


You are correct. I remembered the anecdote as something peaking and thought it was the stock price; it was actually market share.

It drives me a bit crazy when people say OpenAI has no moat.

Yes, companies like Google can catch up and overtake them, but a moat merely means making that hard and expensive.

99.999...% of companies can't dream of competing with OpenAI.


As tech history shows, you don't need everyone copying you to be in big trouble, just one or two well-positioned players. Typically that has been a big established player adding your "special sauce product" as a feature of an existing, well-established product. That's exactly what's playing out now, and why OpenAI is starting to panic: they know how that movie typically ends.

Yep, I thought they might have some secret sauce in terms of training techniques, but that doesn't seem to be the case.


> Good for tech, but a horrible business and financial picture for these companies.

That's not a bubble at all, is it?


Did Google actually train a new model? The cutoff dates for Gemini 3 and 2.5 are the same.

I think this simply suggests the same (or a very similar) training corpus.

Surely they would throw in current events, news articles, the latest snapshot of Wikipedia, etc...

I can't imagine it making sense to purposefully neglect to keep a model as up-to-date as possible!



