The biggest problem is that the infrastructure left behind by the Dotcom boom, the high-speed fiber that laid the path for the current world, doesn't translate to computer chips. Are you still using Intel chips from 1998? The chips are a huge cost, they're being backed by debt, and they depreciate in value exponentially. It's not the same, because so much of the current debt-fueled spending is on an asset with a very short shelf life. I think AI will be huge; I don't doubt the endgame once it matures. But the bubble now, spending huge amounts on these data centers using debt without a path to profitability (and inordinate spending on these chips), is dangerous. You can think AI will be huge and still see how dangerous the current manifestation of the bubble is. A lot of people will get hurt very, very badly. This is going to maim the economy in a generational way.
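A minimal sketch of that depreciation gap, in Python; the 40%/year loss rate, the 30-year fiber life, and the $40k price are illustrative assumptions, not sourced figures:

    # Compare exponential decay of a GPU's resale value against
    # straight-line depreciation of fiber over a 30-year useful life.
    def gpu_value(initial, annual_loss=0.40, years=5):
        """Exponential: value shrinks by a fixed fraction each year."""
        return initial * (1 - annual_loss) ** years

    def fiber_value(initial, useful_life=30, years=5):
        """Straight-line: lose initial/useful_life every year."""
        return max(initial - initial / useful_life * years, 0)

    print(gpu_value(40_000))    # ~3110: what's left of a $40k accelerator after 5 years
    print(fiber_value(40_000))  # ~33333: the same spend on fiber after 5 years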
And a lot of the gains from the Dotcom boom are being paid back in negative value for the average person at this point. We have automated systems that waste our time when we need support, product features that should have a one-time cost being turned into subscriptions, a complete usurping of the ability to distribute software or build compatible replacements, etc.
The Dotcom boom was probably good for everyone in some way, but it was much, much better for the extremely wealthy people that have gained control of everything.
If you've ever been to a third world country, you'd see how this is completely untrue. The Dotcom boom has revolutionized the way of life for people in countries like India.
Even for the average person in America, there's the ability to do so many activities online that would have taken hours otherwise (e.g. shopping, research, DMV/government activities). The fact that we see negative consequences of this, like social-network polarization or brainrot, doesn't negate the positives that have been brought about.
I think you’re putting too much weight on cost (time, money), and not enough weight on “quality of life”, in your analysis.
For sure, we can shop faster, and (attempt to) research and do admin faster. But…
Shopping: used to be fun. You’d go with friends or family, discuss the goods together, gossip, bump into people you knew, stop for a sandwich, maybe mix shopping and a cinema or dinner trip. All the while, you’d be aware of other people’s personal space, see their family dynamics. Queuing for event tickets brought you shoulder to shoulder with the crowd before the event began… Today, we do all this at home; strangers (and communities) are separated from us by glass, cables and satellites, rather than by air and shouting distance. I argue that this time saving is reducing our ability to socialise.
Research: this is definitely accelerated, and probably mostly for the better. But… some kinds of research were mingled with the “shopping” socialisation described above.
Admin: the happy path is now faster, and functioning bureaucracy is smoother in the digital realm. But it’s the edge cases that are now more painful. Elderly people struggle with digital tech and prefer face to face. Everyone is exposed to subtler, more challenging threats (identity theft, fraud); we all have to learn complex, layered mitigation strategies. Also, digital systems are very fragile: they leak private data, they present wider attack surfaces, they need more training and are harder to intuit without that training, and they’re ripe for capture by monopolists (Google, Palantir).
The time and cost savings of all these are not felt by the users, or even the admins of these systems. The savings are felt only by the owners of the systems.
Technology has saved billions of person-hours of individual cost, in travel, in physical work. Yet we’re working longer, using a narrower range of motion, are less fit, are less able to tolerate others’ differences, and the wealth gap is widening.
> I think you’re putting too much weight on cost (time, money), and not enough weight on “quality of life”, in your analysis.
"Quality of life" is a hugely privileged topic to be zooming in on. For the vast majority of people both inside and outside the US, Time and Money are by far the most important factors in their lives.
Setting aside time, is money not downstream from quality of life? Meaning, in a better world one might not need to care as much about money? I believe that time and quality of life are congruent - good quality of life means control over one’s own time.
Two decades ago, in the Bay Area we used to have a lot of bookstores: specialized, chains, children's, grade school, college slugbooks, etc. Places like Fry's had a coffee shop and a bookstore inside. The population grew; the number of bookstores went down to near zero.
It seems the crux is that we needed X people to produce goods, and we had Y demand.
Now we need X*0.75 people to meet Y demand.
However, those savings are partially piped to consumers, and partially piped to owners.
There is only so much marginal propensity to spend that rich people have, so that additional wealth is not resulting in an increase in demand, at least not one commensurate enough to absorb the 25% who are unemployed or underemployed.
Ideally that money would be getting ploughed back into making new firms, or creating new work, but the work being created requires people with PhDs and a few specific skills, which means that entire fields of people are not in the work force.
However, all that money has to go somewhere, and so asset classes are rising in value, because there is nowhere else for it to go.
It's actually "they end up", and the 33% gains you're talking about (needing 0.75x the people is a ~33% productivity gain) aren't realized en masse until all the coal miners have black lung. It's really quite the "dealy", as Homer Simpson would say. See Charles Dickens or William Blake for more. #grease
> partially piped to consumers, and partially piped to owners.
Or, the returns on capital exceed the rate of economic growth (r > g), if you like Piketty's Capital in the Twenty First Century.
One of the central points is about how productivity and growth gains increasingly accrue to capital rather than labor, leading to capital accumulation and asset inflation.
Yep, that’s the source of the point. The effort is in finding a way to make it easy to convey. Communication of an idea is almost as critical as its verification now.
You're in luck! A couple years ago he released an "abridged version" of sorts: A Brief History of Equality. Much more accessible than the 700 pages of Capital in the Twenty-First Century.
With telecom, we benefited from skipping generations. I got into a telecom management program because, in 2001-ish, I was passed on a village street by a farmer bicycling while talking on his cellphone. Mind you, my family could not afford cellphone call rates at the time.
In fact, the technology was introduced out here assuming corporate / elite users. The market reality became such that telcos were forced, kicking and screaming, to open up networks to everybody. The Telecom Regulatory Authority of India (back then) mandated rural <> urban parity of sorts. This eventually forced telcos to share infrastructure costs (share towers, etc.). The total call and data volumes are eye-watering but low-yield (low ARPU). I could go on and on, but it's just batshit crazy.
Now UPI has layered on top of that, once again benefiting from the Reserve Bank of India's mandate for zero-fee transactions, and participating via a formal data interchange protocol and format.
Speaking from India, having lived here all my life, and occasionally travelled abroad (USAmerica, S.E. Asia).
We, as a society and democracy, are also feeling the harsh, harsh hand of "Code is Law", and increasingly centralised control of communication utilities (which the telecoms are). The left hand of darkness comes with a lot of darkness, sadly.
Which brings me to the moniker of "third world".
This place is insane, my friend: first, second, third, and fourth worlds all smashing into each others' faces all the time. In so many ways, we are more first world here than many western countries. I first visited USAmerica in 2015, and I could almost smell an empire in decline. Walking past Twitter headquarters in downtown SF of all places, avoiding needles and syringes strewn on the sidewalk, and avoiding the completely smashed guy just barely standing there, right there in the middle of it all.
That kind of extreme poverty juxtaposed with extreme wealth, and all of the social ills that come along with it, has always been a fixture of the American experience. I don’t think it’s a good barometer of whether the USA is in decline when there have long been pockets of urban decay, massive inequality, drug use, etc. Jump back to any point in American history and you’ll find something similar if not much, much worse. Even in SF of all places, back in the wild west era gold rush or in the 1970s… America has always held that contradiction.
Yeah, I sort of recounted a stark memory. That juxtaposition was a bit too much.
However, it wasn't just that, and the feeling has only solidified over three further visits. It isn't rational, very much a nose thing, coming from an ordinary software programmer (definitely not an economist, sociologist, or think tank).
AI itself is a manifestation of that too, a huge time waster for a lot of people. Getting randomly generated information that's wrong but sounds right is very frustrating. Start asking AI questions you already know the answer to and the issues become very obvious.
I know HN, and most younger people or people with certain political leanings, always push "rich people bad" narratives, but I feel a lot of tech has made our lives easier and better. It's also made them more complicated and worse in some ways. That effect has applied to everyone.
In poor countries, they may not have access to clean running water but it's almost guaranteed they have cell phones. We saw that in a documentary recently. What's good about that? They use cell phones not only to stay in touch but to carry out small business and personal sales. Something that wouldn't have been possible before the Internet age.
> The Dotcom boom was probably good for everyone in some way, but it was much, much better for the extremely wealthy people that have gained control of everything.
You are describing platform capture. Be it Google Search, YouTube, TikTok, Meta, X, App Store, Play Store, Amazon, Uber - they have all made themselves intermediaries between public and services, extracting a huge fee. I see it like rent going up in a region until it reaches maximum bearable level, making it almost not worth it to live and work there. They extract value both directions, up and down, like ISPs without net-neutrality.
But AI has a different dynamic: it is not easy to centrally control ranking, filtering, and UI with AI agents. You can download an LLM; you can't download a Google or Meta. Now it is AI agents that have the "ear" of the user base.
It's not like it was good before - we had a generation of people writing slop to grab attention on the web and social networks, from the lowest porn site to CNN. We all got prompted by the Algorithm. Now that Algorithm is replaced by many AI agents that serve users more directly than before.
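To make the "you can download an LLM" point concrete, here's a minimal sketch using Hugging Face's transformers library; the model name is just one example of an open-weights model, swap in whatever you like:

    # Pull an open-weights model to your own disk and run it locally.
    # No central platform sits between you and the inference step.
    from transformers import AutoModelForCausalLM, AutoTokenizer

    model_name = "Qwen/Qwen2.5-0.5B-Instruct"  # example open model
    tokenizer = AutoTokenizer.from_pretrained(model_name)  # downloads once, then cached
    model = AutoModelForCausalLM.from_pretrained(model_name)

    inputs = tokenizer("Why can't you download Google?", return_tensors="pt")
    outputs = model.generate(**inputs, max_new_tokens=50)
    print(tokenizer.decode(outputs[0], skip_special_tokens=True))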
>You can download an LLM; you can't download a Google or Meta.
You can download a model. That doesn't necessarily mean you can download the best model, or all the ancillary systems attached to it by whatever service. Just like you can download a web index, but you probably cannot download Google's index, and you certainly can't download their system of crawlers for keeping it up to date.
That's true for the GPUs themselves, but the data centers with their electricity infrastructure and cooling and suchlike won't become obsolete nearly as quickly.
This is a good point, and it would be interesting to see the relative value of this building-and-housing "plumbing" overhead vs. the chips themselves.
I guess another example of the same thing is power generation capacity, although this comes online so much more slowly I'm not sure the dynamics would work in the same way.
The data centers built in 1998 don't have nearly enough power or cooling capacity to run today's infrastructure. I'd be surprised if very many of them are even still in use. Cheaper to build new than upgrade.
How come? I'd expect that efficiency gains would lower power and thus cooling demands - are we packing more servers into the same space now or losing those gains elsewhere?
Power limitations are a big deal. I haven't shopped for datacenter cages since web 2.0, but even back then it was a significant issue. Lots of places couldn't give you more than a few kW per rack. State-of-the-art servers can be 2 kW each, so you start pushing 60 kW per rack. Re-rigging a decades-old data center for that isn't trivial. Remember you need not just the raw power but cooling, backup generator capacity, enough battery to cover the transition, etc.
It's hugely expensive, which is why the big cloud infrastructure companies have spent so much on optimizing every detail they can.
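Back-of-the-envelope, the rack math looks like this (a minimal sketch; the 30-servers-per-rack count and the few-kW legacy budget are illustrative assumptions):

    # Rough rack power budget: a 42U rack mostly full of 1U servers.
    servers_per_rack = 30      # assumed: leave room for switches, PDUs, cabling
    watts_per_server = 2_000   # "state-of-the-art servers can be 2 kW each"

    rack_kw = servers_per_rack * watts_per_server / 1_000
    print(f"{rack_kw:.0f} kW per rack")  # 60 kW

    legacy_budget_kw = 5       # assumed: "a few kW per rack" in older facilities
    print(f"{rack_kw / legacy_budget_kw:.0f}x the old per-rack budget")  # 12x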
Yes - blades of servers replacing what was 2 or 3 rack-mount servers. Both the air exchange and the power requirements are radically different if you want to fill that rack as it was filled before.
It's just an educated guess, but I expect that power density has gone up quite a bit as a form of optimization. Efficiency gains permit both lower power (mobile) and higher compute (server) parts. How tightly you pack those server parts in is an entirely different matter. How many H100s can you fit on average per 1U of space?
How much more centralized data center capacity do we actually need outside AI? And how much would we need if we spent slightly more time doing things efficiently?
This is true. A data center's shelf life is probably 2-3 times as long as a GPU's. But it's still probably half or a quarter of the depreciation timeline of a carrier fiber line.
Even if the building itself is condemnable, what it took to build it out is still valuable.
To give a different example, right now, some of the most prized sites for renewable energy are former coal plant sites, because they already have big fat transmission lines ready to go. Yesterday's industrial parks are now today's gentrifying urban districts, and so on.
Eh, not really. Maybe for retro cloud-gaming services. But games haven't stopped getting more demanding every year. Not only are the AI GPUs focused on building clusters with great compute performance per watt and per dollar, rather than singular GPUs with great raster performance; even the GPUs that are powerful enough for current games won't be powerful enough for games in 5 years.
Not to mention that we're still nowhere near solving the broadband coverage problem, especially in less developed countries like the US and most of the third world. If anything, it seems like we're moving towards satellite internet and cellular for areas outside the urban centers, and those are terrible for latency-sensitive applications like game streaming.
> But games haven't stopped getting more demanding every year.
This is not particularly true.
Even top of the line AAA games make sure they can be played on the current generation consoles which have been around for the last N years. Right now N=5.
Sure you’ll get much better graphics with a high end PC, but those looking for cloud gaming would likely be satisfied with PS5 level graphics which can be pretty good.
If you look at year over year chip improvements in 2025 vs 1998, it's clear that modern hardware just has a longer shelf life than it used to. The difficulties in getting more performance for the same power expenditure are just very different than back in the day.
There's still depreciation, but it's not the same. Also look at other forms of hardware, like RAM, and the bonus electrical capacity being built.
In 1998, 16 MiB of RAM was ~$200; in 2025, 16 GiB of RAM is about $50. A Pentium II at 450 MHz was $600 in 1998. Today, an AMD Ryzen 7 9800X3D can be had for about $500, and that Ryzen is maybe 100 times as powerful as the Pentium II. What's available at each price point has changed, but it's ridiculous how much computing I can get for $150 at Best Buy, and it's also ridiculous how little I can do with that much computing power. Wirth’s law still holds: software is getting slower more rapidly than hardware is getting faster.
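Running those numbers (a minimal sketch; the ~100x CPU figure is the rough estimate above, not a benchmark):

    # Cost per unit of capability, 1998 vs 2025, using the figures above.
    ram_1998 = 200 / 16            # dollars per MiB in 1998
    ram_2025 = 50 / (16 * 1024)    # dollars per MiB in 2025
    print(f"RAM: {ram_1998 / ram_2025:,.0f}x cheaper per MiB")  # 4,096x

    cpu_speedup = 100              # rough Pentium II -> Ryzen estimate
    cpu_cost_ratio = 600 / 500     # and the chip costs a bit less, too
    print(f"CPU: ~{cpu_speedup * cpu_cost_ratio:.0f}x more compute per dollar")  # ~120x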
Honestly I think the most surprising thing about this latest investment boom has been how little debt there is. VC spending and big tech's deep pockets keep banks from being too tangled in all of this, so the fallout will be much more gentle imo.
FLOP/s/$ is still increasing exponentially, even if the specific components don't match Moore's original phrasing.
Markets for electronics have momentum, and estimating that momentum is how chip producers plan investment in manufacturing capacity, and how chip consumers plan for depreciation.
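A minimal sketch of that planning math, assuming price-performance doubles every ~2.5 years (an illustrative figure, not a measured one):

    def flops_per_dollar(t_years, base=1.0, doubling_time=2.5):
        """Exponential price-performance: base * 2^(t / doubling_time)."""
        return base * 2 ** (t_years / doubling_time)

    # A chip bought today has half the market's compute-per-dollar after one
    # doubling time, and a quarter after two: one crude depreciation schedule.
    print(flops_per_dollar(5))  # 4.0, i.e. the market is 4x better in 5 years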
They kind of aren't. If you actually look at "how many dollars am I spending per month on electricity", there's a good chance it's not worth upgrading even if your computer is 10 years old.
Of course this does make some moderate assumptions that it was a solid build in the first place, not a flimsy laptop, not artificially made obsolete/slow, etc. Even then, "install an SSD" and "install more RAM" is most of everything.
Of course, if you are a developer you should avoid doing these things so you won't get encouraged to write crappy programs.
Companies want GW-scale data centers, which are a new thing that will last decades, even if the GPUs are consumables with high failure rates. Also, depending on how far it takes us, it could upgrade the electric grid and make electricity cheaper.
And there will also be software infrastructure, which could be durable. There will be improvements to software tooling and the ecosystem. We will have enormous pre-trained foundation models, and these model weight artifacts can be copied for free, distilled, or fine-tuned for a fraction of the original cost.
About 40% of AI infrastructure spending is the physical datacenter itself and the associated energy production. 60% is the chips.
That 40% has a very long shelf life.
Unfortunately, the energy component is almost entirely fossil fuels, so the global warming impact is pretty significant.
At this point, geoengineering is the only thing that can buy us a bit of time to figure out... idk, something, and we can only hope the oceans don't acidify too much in the meantime.
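For what that 60/40 split implies about shelf life, a minimal sketch; the useful-life figures are my illustrative assumptions, not sourced:

    # Blended depreciation of an AI buildout, using the 60/40 split above.
    chip_share, shell_share = 0.60, 0.40
    chip_life_years = 5      # assumed useful life of the accelerators
    shell_life_years = 25    # assumed useful life of building/power/cooling

    # Straight-line depreciation as a fraction of total spend per year:
    annual_dep = chip_share / chip_life_years + shell_share / shell_life_years
    print(f"{annual_dep:.1%} of the buildout written off per year")  # 13.6%
    print(f"blended useful life ~{1 / annual_dep:.1f} years")        # ~7.4 years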
Interesting. Do you have any sources for this 60/40 split?
And while I agree that the infrastructure has a long shelf life, it seems to me like an AI bubble burst would greatly depreciate the value of this infrastructure as the demand for it plummets, no?
They're only replacing GPUs because investors will give "free" money to do so. Once the bubble pops people will realize that GPUs actually last a while.
While yes, I sure look forward to the flood of cheap graphics cards we'll see 5-10 years from now. I don't need the newest card, and I don't mind five-year-old top-of-the-line at discount prices.
I think you partially answer yourself there, though. Is the value in the depreciating chips, or in the huge datacenters, with cooling and energy supply at that scale, etc.?
The wealth the Dotcom boom left behind wasn't in dial-up modems or internet over the telephone; it was in the huge amounts of high-speed fiber-optic network that were laid down. I think a lot of that infrastructure is still in use today; fiber-optic cables can last 30 years or more.
In the late 90s to 2001? Many people were still using modems at that time. Cable or DSL wasn't even an option for a considerable percentage of the population.
Global penetration was low: only 361 million people had internet access worldwide in 2000, a small fraction of the global population. The US alone accounted for 31.1% of all global users that year, with a domestic penetration rate of 43.1%.
Not still using - flat-out modemless. Lots of guys got their hands on a mouse for the first time only after Windows XP launched. Which was after the collapse.
Personally I think people should stop trying to reason from the past.
As tempting as it is, it leads to false outcomes because you are not thinking about how this particular situation is going to impact society and the economy.
It's much harder to reason this way, but isn't that the point? Personally, I don't want to hear or read analogies based on the past - I want to see and read stuff that comes from original thinking.
Doesn't that line of reasoning leave you in danger of being largely ignorant? There's a wonderful quote from Twain: "History doesn't repeat itself, but it often rhymes." There are two critical things I'd highlight in that quote. First, the contrast between repetition and rhyming draws attention to the fact that things are never exactly the same; there's just a gist of similarities. Second, it often rhymes but doesn't always: this sure looks like a bubble, but it might not be, and it might be something entirely new. _That all said_, it's important to learn from history, because there are clear echoes of it in current events; we, people in general, don't change that fundamentally.
IME the number of times where people have said "this time it's different" and been wrong is a lot higher than the number of times they've said "this time is the same as the last" and been wrong. In fact, it is the increasing prevalence of the idea that "this time it's different" that makes me batten down the hatches and invest somewhere with more stability.
This won’t even come close to maiming the economy, that’s one of the more extreme takes I’ve heard.
AI is already making us wildly more productive. I vibe-coded 5 deep ML libraries over the last month or so. This would have taken me maybe years before, when I was manually coding as an MLE.
We have clearly hit the stage of exponential improvement, and to not invest basically everything we have in it would be crazy. Anyone who doesn’t see that is missing the bigger picture.