But the users would have to maintain their own forks then. Unless you stream back patches into your forks, which implies there's some upstream being maintained. Software doesn't interoperate and maintain itself for free - somebody's gotta put in the time for that.
I think as long as AI isn't literal AGI, social pressures will keep projects alive, in some state. There's definitely something scary about stealing entire products as a means of new market domination - e.g. steal Linux, make a corporate Linux, and force everybody to contribute to corporate Linux only (many Linux contributors are paid by corporations, after all), making that the new central pointer. The worst-case scenario might be Microsoft, in collusion (which I admit is far-fetched, but definitely possible), completely adopting Linux for servers and headless compute, and enforcing hardware restrictions strict enough that only Windows works.
> But the users would have to maintain their own forks then.
I suppose the idea would be, they don't have to maintain it: if it ever starts to rot from whatever environmental changes, then they can just get the LLM to patch it, or at worst, generate it again from scratch.
(And personally, I prefer writing code so that it isn't coupled so tightly to the environment or other people's fast-moving libraries to begin with, since I don't want to poke at all of my projects every other year just to keep them functional.)
Can the LLM a priori test on all possible software and hardware environments, test all possible deployment edge cases, get feedback from millions of eyes on the project (explicitly, or implicitly via bug reports and usage), and identify which general-purpose features to build next from the massive amounts of community data about where the project should go?
Even in a world with pure LLM coding, it's more likely that LLMs would maintain a central open-source project for other LLMs to contribute to.
You're forgetting that code isn't just a technical problem (and even if it were, that would be a wild claim that runs against every hardness result known to humans, given the limits of a priori reasoning...).
> Can the LLM a priori test on all possible software and hardware environments, test all possible deployment edge cases, get feedback from millions of eyes on the project (explicitly, or implicitly via bug reports and usage), and identify which general-purpose features to build next from the massive amounts of community data about where the project should go?
Even if that's the ideal (and a very expensive one in terms of time and resources), I really don't think it accurately describes the maintainers of the very long tail of small open-source projects, especially those simple enough for the relevant features to be copied into a few files' worth of code.
Like, sure, projects like Linux, LLVM, Git, or the popular databases may fit that description, but people aren't trying to vendor those via LLMs (or so I hope). And in any case, if the project presently fulfills a user's specific use case, then it "going somewhere next" may well be viewed as a persistent risk.
Yeah, the funny thing is that Linux being open source is absolutely in line with capitalism. Just look at the list of maintainers - they're almost all paid employees of gigacorps.
It is just an optimization that makes sense -- writing an OS that is compatible with all sorts of hardware is hard, let alone one that is performant, checked for vulnerabilities, etc.
Why would each gigacorp waste a bunch of money on developing its own, when it could just spend a tiny bit to improve the specific areas it deeply cares about, and benefit from all the other changes financed by other companies?
And the GPL makes it all work - no single gigacorp can take the whole thing and legally run with it for its own gain, like it could if the kernel were, say, MIT or BSD licensed.
So you have direct competitors all contributing to a common project in harmony.
Well, the GPL is good, but I think this setup would still be a local optimum for gigacorps even if it were MIT or similar. They already use plenty of MIT-licensed libraries, e.g. HarfBuzz.
It simply wouldn't make sense for them to let other companies' improvements go out the window, unless they could directly monetize the exclusivity. So it doesn't apply to every project, but these foundational low-level ones especially would be safe even without any protective license.
Agents can read a compiled binary directly and detect behavior from it. I've been doing this to inspect my own builds for the presence of a feature.
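As a rough illustration of the kind of check I mean (the helper name and marker are hypothetical; this is the crudest form of binary inspection, not what any particular agent actually runs):

```python
def binary_mentions_feature(path: str, marker: bytes) -> bool:
    """Return True if `marker` appears in the raw bytes of a compiled artifact.

    A feature often leaves traces in the binary - a symbol name, a log
    string, a CLI flag. Scanning for such a marker is the simplest way
    to "read the binary"; stripped or obfuscated builds can defeat it,
    in which case you'd fall back to a disassembler.
    """
    with open(path, "rb") as f:
        return marker in f.read()
```

In practice you'd combine a scan like this with `objdump`/`nm` output or actual disassembly, but even a raw byte scan catches many feature strings left in release builds.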
> it's unbelievable watching the market's seemingly unlimited ability to coopt, repackage and in turn sell literally anything, even a religion and philosophical system which would be completely opposite to a consumer society.
In some sense, this is one manifestation of what Nietzsche considered a good state: a scrappy, anti-metaphysical system that doesn't need to rest on grand notions of reason or morality (not that there is no reason or morality - I mean in the traditional Western metaphysics sense; I find that people often conflate the two, and I certainly did at one point), and that simply outcompetes, adapts, and comes out on top.
On the other hand, I think Nietzsche would have hated the outcome and would have worked to further refine his philosophy. I wonder what his thoughts would be in the 21st century.
Also, through your comment I realize I don't actually understand the subtle differences within Eastern philosophy. Confucianism would have been up Nietzsche's alley (no metaphysics), but Buddhism is a weird mix: "metaphysical" in the sense of spirits and gods, yet not in the Western Platonic tradition, and in fact in many ways opposed to the dualities and boundaries that Western metaphysics creates.
Classic Kafka trap! The mere sign of resistance is taken as evidence of a deeper psychological incompatibility that must be worked through until you agree with the state.
On the one hand, every time I read an article like this I feel vindicated against the astroturfing bots claiming that nothing ever happens and that this isn't where we're headed.
What? Doesn't this boil down to "people like people who reliably get results"? I.e., we live in a complicated, nondeterministic world but try to make it as deterministic as possible - except for some reason you focus on the nondeterministic part for managers and the "deterministic" part for engineers.
I'm not even sure determinism is a good axis for analyzing this problem. It also smells strongly of concept creep - do you count "moving up the abstraction stack" as "nondeterminism" too?
Semantic decentralization (not just AWS owning thousands of data centers and having its own distributed interoperability problems), standards, and regulations.
These are super interesting problems. However, it seems like selection pressures, or just pure greed, attract people to the "easiest" solution: pure domination. You don't need to care about any of these (well, you still do eventually, but not in the minds of said people) if you have utter control over every part of the stack.
Even further, not everything is a math proof, where everything has been standardized and is open (though understanding the proof is a whole other topic). Heck, take it one step lower - coding - and even though the source code is theoretically 100% transparent, your claims are still often not reproducible because of the environment. Go one step lower to any kind of science where replication is expensive and/or hard, then another step to personal experiences... and yeah, things can seem tough, can't they?
And even for mathematical proofs, a proof tells you nothing about things like extensibility, taste, where future directions should go, what it means philosophically, etc. - which we definitely do care about.
It's funny because the people throwing fallacy accusations around everywhere don't realize that they're using fallacies semi-selectively themselves, while claiming universality and not actually practicing it (not that you have to, of course - I very much disagree with that premise - but if you're the one saying it...).
Anyway. /rant. It's crazy how many people don't discuss these basic but subtle ideas. To be fair, I struggled with these exact things when I was 15, and it doesn't seem like you get taught this kind of nuance until maybe the tail end of a rigorous bachelor's degree - personally, I only learned it on my own through extensive trial and error and suffering.
Observing, measuring, but also repeatability and ground truth.
Math (and theoretical adjacents like TCS) claims not to make any fundamental claims about the actual world (unlike 17th-century philosopher-mathematicians such as Leibniz), and physics studies the basest of, well, physical phenomena.
I don't even know how you would begin to rigorously study sociology unless you could simulate real humans in a vat, or inject everybody with Neuralink (but that already selects for a type of society, and probably not a good one...).
To be clear, I don't think all sociological observations are bad. However, I tend to heavily disregard "mathematical sociological studies" in favor of just... hearing perspectives. New ones and unconventional ones especially, as in a domain where a lot of theories "seem legit", I want to just hear very specific new ways of thinking that I didn't think about before. I find that to be a pretty good heuristic for finding value, if the verification process itself is broken.
Damn. I've put quite a lot of effort into open-source tools w.r.t. debugging and bugfixing, but yeah, putting that effort into a corporate product that doesn't even respect you must be draining.