est31's comments

Centralized planning is needed in any civilization. You need some mechanism to decide where to put resources, whether it's to organize the school's annual excursion or to construct the national highway system.

But yeah, in the end companies follow trends: if some companies do it, then the other companies have to do it too, even if this makes things less efficient or is even harmful. We can put that down to the human factor, but I think even if we replaced all CEOs with AIs, those AIs would all see the same information and make similar decisions based on it.

There are Pascal's wager arguments to be had: for each individual company, the punishment for not playing the AI game and missing out on something big is bigger than the punishment for wasting resources by allocating them towards AI efforts, plus annoying customers with AI features they don't want or need.

> Right now we are living through a new gilded age with a few barons running things, because we have made the rewards too extreme and too narrowly distributed.

The USA has rid itself of its barons multiple times. There are mechanisms in place, but I am not sure that people are really going to exercise those means any time soon. If this AI stuff is successful in the real world as well, then increasing amounts of power will shift away from the people to those controlling the AI, with all the consequences that has.


This was a really great read. Loved it!


All information technology is killing privacy, a consequence of the trend that it's getting easier and easier to collect and copy data.

Of course it doesn't help that people tell their most secret thoughts to an LLM, but before ChatGPT people did that to Google.

The recent AI advancements do make it easier, though, to process and distill the large amounts of data that are already being collected through existing means, which has negative consequences for privacy.

But the distillation power of LLMs can also be used for privacy-preserving purposes, namely local inference. You don't need to go to recipe websites any more, or to Wikipedia, or Stack Overflow; you can ask your local model. Sadly though, the non-local ones are still distinguishably better than the locally running ones, and this is probably going to stay that way.
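
To make that concrete, a minimal sketch of "ask your local model": it assumes an Ollama server running on its default port with some model already pulled (the model name below is just an example, and ask_local is a made-up helper):

    import json
    import urllib.request

    # Query a locally running model, so the prompt never leaves the machine.
    def ask_local(prompt, model="llama3"):
        req = urllib.request.Request(
            "http://localhost:11434/api/generate",  # Ollama's default endpoint
            data=json.dumps({"model": model, "prompt": prompt,
                             "stream": False}).encode(),
            headers={"Content-Type": "application/json"},
        )
        with urllib.request.urlopen(req) as resp:
            return json.loads(resp.read())["response"]

    print(ask_local("How long do I boil an egg for a soft yolk?"))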


Another instance of GEMA fighting an American company. Anyone who was on the German internet in the first half of the last decade remembers the "not available in your country" error messages on YouTube, because Google didn't make a deal with GEMA.

I don't think we will end up with such a scenario here: lyrics are pervasive and probably also quoted in a lot of other publications. Furthermore, it's not just about lyrics; one can make a similar argument about any published literary work. GEMA is for music, but for literary publications there is VG Wort, which in fact already has an AI license.

I rather think that OpenAI will license the works from GEMA instead. Ultimately this will be beneficial for the likes of OpenAI, because it can serve as a means to keep out the small players: I'm sure that GEMA won't talk to the smaller startups in the field about licensing.

Is this good for the average musician/author? These organizations will probably distribute most of the money to the most popular ones, even though AI models benefit from quantity of content rather than popularity.

https://www.vgwort.de/veroeffentlichungen/aenderung-der-wahr...


LLMs (or LLM-assisted coding), if successful, will more likely make the number of compilers go down, as LLMs are better with mainstream languages compared to niche ones. Same effect as with frameworks. Fewer languages, fewer compilers needed.


I mostly disagree.

First, LLMs should be happy to use made-up languages described in a couple thousand tokens without issues (you just have to have a good LLM-friendly description and some examples). That, and having a compiler it can iterate with / get feedback from.
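
Roughly this kind of loop, as a sketch (ask_llm stands in for any chat-completion call, and "mylangc" is a placeholder compiler for the made-up language):

    import subprocess

    # Toy iterate-with-the-compiler loop: hand the model the language spec,
    # let it write a program, then feed compiler errors back until it builds.
    def generate(ask_llm, spec, task, max_rounds=5):
        prompt = "Language spec:\n" + spec + "\n\nWrite a program that " + task
        source = ask_llm(prompt)
        for _ in range(max_rounds):
            with open("prog.mylang", "w") as f:
                f.write(source)
            result = subprocess.run(["mylangc", "prog.mylang"],
                                    capture_output=True, text=True)
            if result.returncode == 0:
                return source  # it compiles, we're done
            source = ask_llm(prompt + "\n\nYour last attempt:\n" + source +
                             "\n\nCompiler errors:\n" + result.stderr +
                             "\n\nPlease fix them.")
        raise RuntimeError("no compiling program within the round limit")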

Second, LLMs heavily reduce the ecosystem advantage. Before LLMs, the presence of libraries for common use cases to save myself time was one of the main deciding factors for language choice.

Now? The LLM will be happy to implement any utility / API client library I want, given the API I want. It may even be more thoroughly tested than the average open-source library.


Have you tried having an LLM write significant amounts of, say, F#? It's a real language, with lots of documentation, definitely in the pre-training corpus, but I've never had much luck with even mid-sized problems in languages like it -- ones where today's models absolutely wipe the floor in JavaScript or Python.


Even best-in-class LLMs like GPT-5 or Sonnet 4.5 do noticeably worse in languages like C#, which are pretty mainstream but not on the level of TypeScript and Python -- to the degree that I don't think they are reliably able to output production-level code without a crazy level of oversight.

And this is for generic backend stuff, like a CRUD server with a REST API; the same thing with an Express/Node backend works without trouble.


I’m doing Zig and it’s fine, though not significant amounts yet. I just had it synthesize the latest release changelog (0.15) into a short summary.

To be clear, I mean specifically using Claude Code, with preloaded sample context and giving it the ability to call the compiler and iterate on it.

I’m sure one-shot results (like asking Claude via the web UI and verifying after one iteration) will go much worse. But if it has the compiler available and writes tests, it shouldn’t be an issue. It’s possible this causes 2-3 more back-and-forths with the compiler, but that’s an extra couple of minutes, tops.

In general, even when working with Go (what I usually do), I will start each Claude Code session with tens of thousands of tokens of context from the code base, so it follows the (somewhat peculiar) existing code style / patterns, and understands what’s where.


Humans can barely untangle F# code...


See, I'm coming from the understanding that language development is a dead end in the real world. Can you name a single language made after Zig or Rust? And even those languages haven't taken over much of the professional world. So when I say companies will maintain compilers, I mean DSLs (like Starlark or RSpec), application-specific languages (like CUDA), variations on existing languages (maybe C++ with some in-house rules baked in), and customer-facing config languages for advanced systems and SaaS applications.


Yes, several, e.g., Gleam, Mojo, Hare, Carbon, C3, Koka, Jai, Kotlin, Reason ... and r/ProgrammingLanguages is chock full of people working on new languages that might or might not ever become more widely known ... it takes years and a lot of resources and commitment. Zig and Rust are well known because they've been through the gauntlet and are well marketed ... there are other languages in productive use that haven't fared well that way, e.g., D and Nim (the best of the bunch and highly underappreciated), Odin, V, ...

> even those languages haven't taken over much of the professional world.

Non sequitur goalpost moving ... this has nothing to do with whether language development is a dead-end "in the real world", which is a circular argument when we're talking about language development. The claim is simply false.


This seems like a case of moving the goalposts because Zig and Rust still seem newfangled to me. I thought nothing would come after C++11.


Bad take. People said the same about C/C++, and now Rust and Zig are considered potential rivals. The ramp-up is slow and there's never going to be a moment of viral adoption the way we're used to with SaaS, but change takes place.


AIs replacing jobs is not the only way those companies can see a return on investment; it's not necessarily zero-sum. If the additional productivity given by AI unlocks additional possibilities of endeavor, jobs might stay, just change.

Say, I don't know, we add additional regulatory requirements for apps; then even though developers with an AI are more powerful (let's just assume this for a moment), they might still need to solve more tasks than before.

Kind of like how oil prices influence whether it makes sense to extract from some specific reservoir: if better technology makes it cheaper to extract oil, those reservoirs will be tapped at lower oil prices too, leading to more oil being extracted in total.
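
As a toy illustration with made-up numbers: a field gets tapped once its (technology-reduced) extraction cost falls below the market price, so cheaper extraction raises total output even at a flat price:

    # Made-up extraction costs per barrel, and a flat market price.
    reservoirs = [30, 55, 70, 90]
    price = 65

    def tapped(cost_reduction):
        return [c for c in reservoirs if c - cost_reduction <= price]

    print(tapped(0))   # [30, 55]         -> only the cheap fields pay off
    print(tapped(25))  # [30, 55, 70, 90] -> better tech unlocks the rest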

When it comes to the valuations of these AI companies, they are certainly very high compared to their earnings. That doesn't necessarily mean, though, that the replacement of jobs is priced in.

But yeah, once AI is capable enough to do all tasks humans do in employment, there will be no need to employ any humans at all for any task whatsoever. At that point, many bets are off as to how it will hit the economy. Modelling that is quite difficult.


> once AI is capable enough to do all tasks humans do in employment, there will be no need to employ any humans at all for any task whatsoever

AI has no skin in the game: you can't shame it, fire it, or jail it. In all critical tasks, where we take risks with life, health, money, investments or resources spent, we need that accountability.

Humans, besides being consequence sinks, are also task originators, and they participate in task iteration by providing feedback and constraints. Those come from a context of personal information that cannot be owned by AI providers.

So, even though AI might do the work, humans spark it, maintain and guide it, and in the end receive the good or bad outcomes and pay the cost. There are as many unique contexts as there are people; contextual embeddedness cannot be owned by others.


>But yeah, once AI is capable enough to do all tasks humans do in employment,

Also at this point the current ideas of competition go wonky.

In theory, most companies in the same industry should homogenize at a maximum, which leads to rapid consolidation. Lots of individual people think they'll be able to compete because they 'also have robots', but this seems unlikely to me except in the case of some boutique products. The companies with the most data and the cheapest energy costs will win out.


Which EU countries have those been?

The EU has recently reduced fees for one of the biggest instant payment systems in the world (SCT Inst reaches the Eurozone's 350M residents). Compare the quality of that to a wire or an ACH transfer.

The EU is also ahead on security. PSD2's requirements go further than US requirements, and the EU is also ahead in the magnetic swipe card phaseout.

Wise and Revolut, two companies which brought a lot of innovation to international money transfers, were founded in the EU as well (though since 2020 they are not EU companies any more).

Of course, all of this doesn't mean that the average EU bank doesn't suck. But I've heard worse about the US.


>magnetic swipe card phaseout.

Swipe? I don't recall a time when I needed to swipe in the US in the last few years. Pretty much tap, tap, tap, tap. Actually, you cannot swipe a card in the US that has a chip, and probably 99% of cards have chips.

>Compare the quality of that to a wire or an ACH transfer.

Zelle? Just a QR code or a phone number? And it's free?

>Wise and Revolut

No clue. What's so special that I don't have with Chase?

>The EU is also ahead on security

Um, isn't that useless, as most scams are via social engineering?

>But I've heard worse about the US.

I heard the same about EU, actually MUCH worse :)


> Swipe? I don't recall a time when I needed to swipe in the US in the last few years.

I do, earlier this year visiting the USA. The readers on pumps at two different gas stations.

But the EU started phasing out reading magnetic strips twenty years ago, well before the USA had even started issuing EMV chip cards.

> Zelle?

Zelle is only for person-to-person transfers; Europe has had good person-to-business, business-to-person and business-to-business transfers for decades.

> ...

The point wasn't that the USA didn't have these things, but that Europe had them earlier (sometimes much earlier), so the banking system led to this innovation.


Well, you found one, and I can tell you about a time in the EU, within the last 5 years, when a place took a hard imprint of my card! I didn't even know card imprinters still exist.

Zelle is not just person-to-person; it's just a transfer. You can pay businesses, pay people, and even transfer to yourself. Zero fees.

Europe is a big continent and I can easily find a place that is way more backwards ;)

Also, 2 of the letters in EMV stand for 2 American companies :).


Monzo was also founded in the EU, specifically in the UK, back when it was still in the EU.

> The EU has recently reduced fees for one of the biggest instant payment systems in the world (SCT Inst reaches the Eurozone's 350M residents).

But that was done by regulation, wasn't it? Would have been nicer to see that come as a result of competition.

> Of course, all of this doesn't mean that the average EU bank doesn't suck. But I've heard worse about the US.

I don't know about the average. But I can tell you that quality varies a lot. I was generally OK with German banks (having grown up there), but UK banks before Monzo (and Revolut, Wise, etc.) used to be the scum of the earth. Just like their supermarkets used to feel openly hostile to me as a customer, before Aldi and Lidl showed up and shook up the market.

Yes, Tesco and friends regularly got told off by the regulator before, but nothing changed until competition forced their hands and gave customers something they preferred.


> UK banks [...] used to be the scum of the earth

They still had chip&pin before US banks, and dropped unsafe cheques before US banks.

The US banking system, afaik, did one thing better: credit cards. But since the '00s, European ones have been just as good and often better.


> They still had chip&pin before US banks, and dropped unsafe cheques before US banks.

Oh, I never banked in the US, so I can't comment on them from a consumer point of view.

> The US banking system, afaik, did one thing better: credit cards. But since the '00s, European ones have been just as good and often better.

I used credit cards perhaps a handful of times in my life. It's almost exclusively been debit cards for me.


> But that was done by regulation, wasn't it? Would have been nicer to see that come as a result of competition.

Bill Gurley has been crying for years now about how US banks have been blocking / not participating in equivalent services in the US (FedNow), and for good business reasons for them.

A well-functioning market does need regulations. Not everything can be magically fixed by "competition".

https://www.linkedin.com/posts/kivatinos_bill-gurley-on-paym...


> A well-functioning market does need regulations. Not everything can be magically fixed by "competition".

Ideally, you can set up your regulations so that competition has more bite.

Much of the time, you can remove special-purpose regulations for a specific sector and get by with just the generics: enforcing contracts, punishing fraud, etc.


Deuterium might be the oil of the future, as one can do fusion with it easily (in comparison).
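
For reference, the standard deuterium-burning reactions and their released energies (textbook values; the D-T branch is the easiest to ignite):

    \begin{align*}
    \mathrm{D} + \mathrm{T} &\rightarrow {}^{4}\mathrm{He}\ (3.5\,\mathrm{MeV}) + \mathrm{n}\ (14.1\,\mathrm{MeV})\\
    \mathrm{D} + \mathrm{D} &\rightarrow \mathrm{T}\ (1.01\,\mathrm{MeV}) + \mathrm{p}\ (3.02\,\mathrm{MeV})\\
    \mathrm{D} + \mathrm{D} &\rightarrow {}^{3}\mathrm{He}\ (0.82\,\mathrm{MeV}) + \mathrm{n}\ (2.45\,\mathrm{MeV})
    \end{align*}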


> (when the max their productivity is worth relative to AI falls below a living wage), humans will continue working side by side with AGI as even relatively unproductive workers

This assumes that humans will be unwilling to work if their wage is below a living wage. It depends on the government's social programs, but if there are none, or only very bad ones, people will probably be more desperate and thus more willing to work even the cheapest jobs.

So in this world of overabundant human labor, the cost of human labor might be much closer to zero than to a living wage. It all depends on how desperate to find work government policy makes people.


> Excellent Linux support with in-tree drivers. For 15 years!

Linux support has been excellent on AMD for less than 15 years though. It got really good around 10 years ago, not before.


ATI/AMD open source Linux support has been blowing hot and cold for over 25 years now.

They were one of the first to actually support open source drivers, with the r128 and original Radeon (r100) drivers. Then they went radio silent for the next few years, though the community used that as a baseline to support the next few generations (r100 to r500).

Then they reemerged, actually providing documentation for their Radeon HD series (r600 and r700) and some development resources, though limited ones - and often at odds with the community-run equivalents at the time (lots of parallel development with things like the "radeonhd" driver, and disagreements over how much they should rely on their "atombios" card firmware).

That "moderate" level of involvement continued for years, releasing documentation and some initial code for the GCN cards, but it felt like beyond the initial code drops most of the continuing work was more community-run.

Then only relatively recently (the last ~10 years) have they started putting actual engineering effort into things again, with AMDGPU and the majority of Mesa changes now being paid for by AMD (or Valve, which is really "AMD by proxy", as you can guarantee every $ they spend on an engineer is a $ less they pay to AMD).

So hopefully that's a trend you can actually rely on now, but I've been watching too long to think that can't change on a dime.


It is possible that at some point, maybe 15 years ago, AMD provided sufficient documentation to write drivers, but even 10 years ago a lot of documentation was missing (without that fact even being mentioned), which made trying to contribute rather frustrating. Not too bad, though, because as you said, they had a (smallish) number of employees working on the open drivers by then.

