The dot-com crash was absolutely expected - today's "c'mon, get it over with! Crash!" tone we see regarding the AI bubble is hilariously reminiscent of the late-90s dot-com bubble. It was the era that spawned the famous Economist leader "Crash Dammit!"

This is a great question, and one that drives right to the key issue! (oh god, that sounds like an LLM response, sorry)

As with the implosion of the Japanese economy, people will simply not invest, instead parking their money in low-yield bank accounts. It was, and in some cases continues to be, an issue for that country.


People, or organizations, but mainly people, can simply refuse to invest, parking their money in low-interest bank accounts or, old style, stuffing it into mattresses.

This was the multi-decade problem Japan ran into after its hot economy imploded and unleashed the "lost decade" (which became decades). It was not a marginal issue, and for years the Japanese government tried everything it could think of to get people to invest in things - to little effect.


The Economist's front-page illustration of the 90s dot-com stock market, with the heading "CRASH DAMMIT!"

Everyone knew there was a bubble. People began to get impatient for what was obviously going to happen, as you say.


A lot of us don't have time for all the long reads, podcasts, or in-depth videos posted in this discussion. Could you provide a summary that expresses your general point? Those interested can then use your link to learn more.

LLM summary, for discussion only:

The article’s core argument is that the U.S. dollar isn’t going to lose global dominance in some dramatic, headline-friendly collapse; instead, like every reserve currency before it, it will slowly erode at the margins as users quietly reduce reliance on it. Historical transitions (sterling to dollar) didn’t happen because of declarations or crises, but because the world gradually found alternatives that were good enough for specific needs. What’s changing now isn’t that the dollar has “failed,” but that the global financial system has evolved past some of the assumptions that made dollar dominance frictionless. The freezing of Russia’s reserves in 2022 shattered the idea that reserve assets are politically neutral, prompting central banks to hedge geopolitical risk via gold, bilateral trade arrangements, and non-dollar settlement systems. The result isn’t de-dollarization as revolution, but de-dollarization as creep: a long, largely invisible process that only looks obvious once it’s mostly done.


>Only to a very small degree and systems like Germany THANK GOD do not have any AI exposure at all.

Well put, and it makes for sobering reading to see the impact of what happened the last time Germany was deeply reliant on money from the US.


This is a profoundly important - central, even - issue that I am very surprised not to see widely understood or acknowledged.

China is in a life-or-death race against time. A good number of their decisions are explained when viewed through the lens of the demographic implosion-bomb they are facing.


The same can be argued for Russia. Many - myself included - believe it's the #1 reason Putin decided to invade Ukraine, as its youth are seen by the Kremlin as "Russian enough."

Could you summarize what you mean rather than just posting a link?

>and you have to create constant pressure to ensure that stored wealth is best re-invested in the economy at large

I have no idea if this is true (I've asked economists-in-training; they say they'll get back to me), but I've read that the huge increases in tax rates on high incomes during the war were less about generating revenue (though more revenue was certainly needed - there was also a growing focus on expanding the number of people who paid taxes, which had previously been quite small) and more about ensuring profits were not realized and were instead kept invested in the economy and the war machine.

A kind of practical "hodl" to keep the wartime economy stocked with reinvestment - or really, to discourage removing money from industrial investment - to the benefit of the war effort.

I'd welcome links to learn more about this line of reasoning.


>"The models are trained on fake internet conversations where group appeasement is an apparent goal. So now we have machines that just tell us what we clearly already want to hear."

I get what you mean in principle, but the problem I'm struggling with is that this just sounds like the web in general. The kid hits up a subreddit or some obscure forum and similarly gets group appeasement, or what they want to hear, from people who self-selected into the forum by being all-in on the topic and Wanting To Believe, so to speak.

What's the actual difference, in that sense, between that forum or subreddit and an LLM, do you feel?

<edit> And let me add that I don't mean this argumentatively. I am trying to square the idea of ChatGPT, in this case, being fundamentally different in the end from going to a forum full of fans of the topic who are also completely biased and likely full of very poor knowledge.


> What's the actual difference, in that sense, between that forum or subreddit and an LLM, do you feel?

In a forum, it is the actual people who post who are responsible for sharing the recommendation.

In a chatbot, it is the owner (e.g. OpenAI).

But in neither case are they responsible for a random person who takes the recommendation to heart, who could have applied judgement and critical thinking. They had autonomy and chose not to use their brain.


Nah, OpenAI can’t have it both ways. If they’re going to assert that their model is intelligent and capable of replacing human work and authority, they can’t also claim that it (and they) don’t have to take the same responsibility a human would for giving dangerous advice and incitement.


Imagine a subreddit full of people giving bad drug advice. They're at least partially full of people who are intelligent and capable of performing human work - but they're mostly not professional drug advisors. I think at best you could hold OpenAI to the same standard as that subreddit. That's not a super high bar.

It'd be different if one were signing up for an OpenAI Drug Advice Product, which advertised itself as an authority on drug advice. I think in this case the expectation is set differently up front, with a "ChatGPT can make mistakes" footer on every chat.


> I think in this case the expectation is set differently up front, with a "ChatGPT can make mistakes" footer on every chat.

If I keep telling you I suck at math while getting smarter every few months, eventually you're just going to introduce me as the friend who is underconfident but super smart at math. For many people, LLMs are smarter than any friend they know, especially at the K-12 level.

You can make the warning more shrill, but it'll only worsen this dynamic and be interpreted as routine corporate language. If you don't want people to listen to your math, medical, or legal advice, then you've got to stop giving decent advice. You have to cut the incentive off at the roots.

This effect may force companies to simply ban chatbots from certain conversations.


The "at math" is the important part here - I've met more than a few people who are super smart about math but significantly less smart about drugs.

I don't think that it's a good policy to forcibly muzzle their drug opinions just because of their good arithmetic skills. Absent professional licensing standards, the burden is on the listener to decide where a resource is strong and where it is weak.


Alternately, Google claimed Gmail was in public beta for years. People did not treat it like a public beta that could die without warning, despite being explicitly told to by a company that, in recent years, has developed a reputation for doing that exact thing.

