mediaman's comments (Hacker News)

Seems trivial to create an infinite number of inconsequentially different (but hash-defeating) variants.
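The point is easy to demonstrate: any byte-level change, however invisible to a human reader, produces a completely different digest, so exact-hash matching is trivially defeated. A minimal sketch (the example string is arbitrary):

```python
import hashlib

original = b"The quick brown fox jumps over the lazy dog"

# Inconsequential variants: trailing space, a doubled space,
# a zero-width space -- all render the same or nearly the same.
variants = [
    original + b" ",
    original.replace(b" ", b"  ", 1),
    original + "\u200b".encode("utf-8"),
]

digests = {hashlib.sha256(v).hexdigest() for v in variants}
digests.add(hashlib.sha256(original).hexdigest())

# Four visually near-identical inputs, four distinct hashes.
print(len(digests))  # → 4
```

Defeating this requires fuzzy/perceptual hashing rather than exact digests, which is a much harder problem.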

Africa, Europe, America, Mars. I wonder if there is something about one of these that makes them unlike the others.

Actually, why not colonize Venus instead? Sure, it will be hard, at first, with all the sulphuric acid and intense heat and whatnot, but we colonized America, so why not Venus?


What does it mean, that open world winning was a mistake? That the market is wrong, and people's preferences were incorrect, and they should prefer small handcrafted environments instead of what they seem to actually buy?

How? They are all losing tens of billions of dollars on this, so far.

Open source models are available at highly competitive prices for anyone to use and are closing the gap to 6-8 months from frontier proprietary models.

There doesn't appear to be any moat.

This criticism seems very valid against advertising and social media, where strong network effects make dominant players ultra-wealthy and act like a tax. But the AI business looks terrible, and most of the benefits appear set to accrue fairly broadly across the economy, not to a few tech titans.

NVIDIA is the one exception to that, since there is a big moat around their business, but it's not clear how long that will last either.


I'm not so sure that's correct. The Labs seem to offer the best overall products in addition to the best models, and requirements for models are only going to get more complex and stringent going forward. So yes, open source will be able to keep up from a pure performance standpoint, but you can imagine a future state where only licensed models can be used in commercial settings, and licensing requires compliance controls against subversive use (e.g. sexualization of minors, bomb-making instructions, etc.).

When the market shifts to a more compliance-relevant world, I think the Labs will have a monopoly on all of the research, ops, and production know-how required to deliver. That's not even considering if Agents truly take off (which will then place a premium on the servicing of those agents and agent environments rather than just the deployment).

There are a lot of assumptions in the above, and the timelines certainly vary, so it's far from a sure thing - but the upside definitely seems there to me.


If that's the case, the winner will likely be cloud providers (AWS, GCP, Azure) who do compliance and enterprise very well.

If Open Source can keep up from a pure performance standpoint, any one of these cloud providers should be able to provide it as a managed service and make money that way.

Then OpenAI, Anthropic, etc end up becoming product companies. The winner is who has the most addictive AI product, not who has the most advanced model.


What's the purpose of licensing requiring those things, though, if someone could just use an open source model to do them anyway? If someone were going to do the things you mentioned, why do it through some commercial enterprise tool? I can see licensing maybe requiring a certain level of hardening to prevent prompt injections, but ultimately it still comes down to how much power you give the model in whatever context it's operating in.

Nvidia is not the only exception. The private big names are losing money, but there are plenty of public companies having the time of their lives: power, materials, DRAM, storage, to name a few. The demand is truly high.

What we can argue about is whether AI is truly transforming everyone's lives; the answer is no. There is a massive exaggeration of benefits. The value is not zero, and it's not 100. It's somewhere in between.


The opportunity cost of the billions invested in LLMs could lead one to argue that the benefits are negative.

Think of all the scientific experiments we could've had with the hundreds of billions being spent on AI. We need a lot more data on what's happening in space, in the sea, in tiny bits of matter, and inside the earth. We need billions of people to learn a lot more, think hard about the data we could gather exploring all of the above, and discover new axioms from it. I hypothesize that investing there would have more benefit than a bunch of companies buying server farms to predict text.

CERN cost about $6 billion. Total MIT operations cost $4.7 billion a year. We could be allocating capital a lot more efficiently.


I believe the AI bubble will eventually evolve into a simple scheme to corner the compute market. If no one can afford high-end hardware anymore, then the companies who hoarded all the DRAM and GPUs can simply go rent-seeking, selling the compute back to us at exorbitant prices.

The demand for memory is going to result in more factories and production. As long as demand is high, there's still money to be made in going wide to the consumer market with thinner margins.

What I predict is that memory technology on the consumer side won't advance as quickly. For instance, a huge number of basic consumer use cases would be totally fine on DDR3 for the next decade. Older equipment can produce it, so it has value, and we may see platforms come out that pair newer designs with older fabs.

Chiplets are a huge sign of growth in that direction - you end up with multiple components fabbed on different processes coming together inside one processor. That lets older equipment still have a long life and gives the final SoC assembler the ability to select from a wide range of components.

https://www.openchipletatlas.org/


That makes no sense. If the bubble bursts, there will be a huge oversupply and prices will fall. Unless Micron, Samsung, Nvidia, AMD, etc. all go bankrupt overnight, prices won't go up when demand vanishes.

That assumes the bubble will burst, which it won't if they successfully corner the high-end compute market.

It doesn't matter if the AI is any good, you will still pay for it because it's the only way to access more compute power than consumer hardware offers.


There is a massive undersupply of compute right now for the current level of AI. The bubble bursting doesn't fix that.

There is a massive over-buying of compute, much beyond what is actually needed for the current level of AI development and products, paid for by investor money. When the bubble pops the investor money will dry up, and the extra demand will vanish. OpenAI buys memory chips to stop competitors from getting them, and Amazon owns datacenters they can't power.

https://www.bloomberg.com/news/articles/2025-11-10/data-cent...


I agree with your point, and it is on that point that I disagree with GP. These open-weight models, ultimately constructed from so many thousands of years of humanity, are now also freely available to all of humanity. To me that is the real marvel and a true gift.

It's turning out to be a commodity product. Commodity products are a race to the bottom on price. That's how this AI bubble will burst. The investments can't possibly show the ROIs envisioned.

As an LLM user, I use whatever is free/cheapest. Why pay for ChatGPT if Copilot comes with my Office subscription? It does the same thing. If not, I use DeepSeek or Qwen and get very similar results.

Yes, if you're a developer on Claude Code et al., I get the point. But that's few people. The mass market is just using chat LLMs, and those are nothing but a commodity. It's like jumping from Siri to Alexa to whatever the Google thing is called: there are differences, but they're too small to be meaningful for the average user.


>losing tens of billions

They are investing tens of billions.


They are wasting tens of billions on something that has no business value currently, and may well never, just because of FOMO. That's not what I would call an investment.

Many investments may lose money, but the EV here is positive due to the extreme utility that AI can and is bringing.

They are washing tens of billions of dollars in an industry-wide attempt to keep the music playing.

I'd like to see evidence that open models are closing that gap. That would be promising.

>Open source models are available at highly competitive prices for anyone to use and are closing the gap to 6-8 months from frontier proprietary models.

What happens when the AI bubble is over and the developers of open models don't want to incinerate money anymore? Foundation models aren't like curl or OpenSSL; you can't maintain one with a few engineers' free time.


If the bubble is over, wouldn't all the built infrastructure become cheaper to train on? So those open models would incinerate less? Maybe there'd be an increase in specialist models?

Like after dot-com the leftovers were cheap - for a time - and became valuable (again) later.


No, if the bubble ends the use of all that built infrastructure stops being subsidized by an industry-wide wampum system where money gets "invested" and "spent" by the same two parties.

I feel like that was happening for the fiber-backhaul in 1999. Just different players.

Training is really cheap compared to the basically free inference being handed out by OpenAI, Anthropic, Google, etc.

Spending a million dollars on training and giving the model away for free is far cheaper than spending hundreds of millions of dollars on inference every month and charging a few hundred thousand for it.


Not sure I totally follow. I'd love to better understand why companies are open sourcing models at all.

The other side of the market:

I think much of the rot in FAANG is more organizational than about LLMs. They got a lot bigger, headcount-wise, in 2020-2023.

Ultimately I doubt LLMs have much of an impact on code quality either way compared to the increased coordination costs, increased politics, and the increase of new commercial objectives (generating ads and services revenue in new places). None of those things are good for product quality.

That also probably means that LLMs aren't going to make this better, if the problem is organizational and commercial in the first place.


Some are complaining this letter is weak and generic.

Of course it is. You have 3M, Target, General Mills, Cargill, and US Bancorp on here, among others.

If you are looking for some revolutionary call to action, you're looking in the wrong place. And you're misunderstanding what's happening.

It is a really big deal for these very conservative, large, rich companies to be telling the federal government to cut it out, even if it is written in generic legalese.

The letter is not for you. It is for the administration. And it is extremely clear.


I do think they would likely have used more forceful rhetoric if they were dealing with a more normal administration. The current one is atypically spite-driven and prone to retaliate against critics, so they probably figured that saying anything insufficiently conciliatory-sounding would likely be counterproductive.

Even if that is the instinct, this is a mistaken way to deal with narcissistic bullying.

It's writing the piece in the first place, rather than what you put in it, that raises the ire. There's no way to compromise or mollify the wording in a way that makes them give you, like, half the punishment.

What's more, the attempt to mollify signals weakness that just invites them to feel even more vindictive. Being forthright and decisive is what earns their grudging respect. China understood this; Zohran Mamdani understood this. Meanwhile, Europe, the Democratic leadership, universities, and large law firms refuse to understand it.


> The current one is atypically spite-driven and prone to retaliate against critics

That’s why they do that


It is, in fact, not crazy, because none of this is predicated on using a specific vendor.

Many of these techniques can also work with Chinese LLMs like Qwen served by your inference provider of choice. It's about the harness that they work in, gated by a certain quality bar of LLM.

Taking a discussion about harnesses and stochastic token generators and forcing it into a discussion of American imperialism is making a topic political that is not inherently political, and is exactly the sort of aggressive, cussing tribalistic attitude the article is about.


I don't get the widespread hatred of Gas Town. If you read Steve's writeup, it's clear that this is a big fun experiment.

It pushes and crosses boundaries, it is a mixture of technology and art, it is provocative. It takes stochastic neural nets and mashes them together in bizarre ways to see if anything coherent comes out the other end.

And the reaction is a bunch of Very Serious Engineers who cross their arms and harumph at it for being Unprofessional and Not Serious and Not Ready For Production.

I often feel like our industry has lost its sense of whimsy and experimentation from the early days, when people tried weird things to see what would work and what wouldn't.

Maybe it's because we also have suits telling us we have to use neural nets everywhere for everything Or Else, and there's no sense of fun in that.

Maybe it's the natural consequence of large-scale professionalization, and stock option plans and RSUs and levels and sprints and PMs, that today's gray hoodie is just the updated gray suit of the past but with no less dryness of imagination.


> If you read Steve's writeup, it's clear that this is a big fun experiment:

So, Steve has the big scary "YOU WILL DIE" statements in there, but he also has this:

> I went ahead and built what’s next. First I predicted it, back in March, in Revenge of the Junior Developer. I predicted someone would lash the Claude Code camels together into chariots, and that is exactly what I’ve done with Gas Town. I’ve tamed them to where you can use 20–30 at once, productively, on a sustained basis.

"What's next"? Not an experiment. A prediction about how we'll work. The word "productively"? "Productively" is not just "a big fun experiment." "Productively" is what you say when you've got something people should use.

Even when he's giving the warnings, he says things like "If you have any doubt whatsoever, then you can’t use it" implying that it's ready for the right sort of person to use, or "Working effectively in Gas Town involves committing to vibe coding.", implying that working effectively with it is possible.

Every day, I go on Hacker News, and see the responses to a post where someone has an inconsistent message in their blog post like this.

If you say two different and contradictory things, and do not very explicitly resolve them, and say which one is the final answer, you will get blamed for both things you said, and you will not be entitled to complain about it, because you did it to yourself.


I agree. I'm one of the Very Serious Engineers, and I liked Steve's post when I thought it was sort of tongue in cheek, but I was horrified to come to the HN and LinkedIn comments proclaiming Gas Town the future of engineering. There absolutely is a large contingent of engineers who believe this, and it has a real-world impact on my job if my bosses think you can just throw a dozen AI agents at our product roadmap and get better productivity than an engineer. This is not whimsical to me; I'm getting burnt out trying to reconcile the absurd expectations of investors and executives with the real-world engineering concerns of my day-to-day job.

> horrified to come to the HN comments and LinkedIn comments proclaiming Gastown as the future of engineering.

I don't spend much time on LinkedIn, but basically every comment I've read on HN is that, at best, Gas Town can pump out a huge amount of "working" code in short timeframes at obscene costs.

The overwhelming majority are saying "This is neat, and this might be the rough shape of what comes next in agentic coding, but it's almost certainly not going to be Gas Town itself."

I have seen basically no one say that Gas Town is the The Thing.


I feel that Yegge captured the mania of the whole operation rather well. If your bosses commit to the idea that 100 memoryless stochastic "polecats" will deliver a long-term sustainable business, then there are probably other leadership issues besides this.

I think Steve's idea of an agent coordinator and the general model could make sense. There is a lot of discussion (and even work from Anthropic, OpenAI, etc) around multiagent workflows.

Is Gas Town the implementation? I'm not sure.

What is interesting is seeing how this paradigm can help improve one's workflow. There is still a lot of guidance and structuring of prompts / claude.md / whichever files that need to be carefully written.

If there is a push for the equivalent of Helm charts and CRDs for Gas Town, then I will be concerned.


I ran into this building a similar workflow with LangGraph. The prompt engineering is definitely a pain, but the real bottleneck with the coordinator model turns out to be the compounding context costs. You end up passing the full state history back and forth, so you are paying for the same tokens repeatedly. Between that and the latency from serial round-trips, it becomes very hard to justify in production.
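The compounding is easy to see with back-of-the-envelope arithmetic. If each serial round-trip re-sends the full accumulated history, total input tokens grow roughly quadratically with the number of turns. The function and numbers below are illustrative, not LangGraph specifics:

```python
def total_input_tokens(num_turns: int, tokens_per_turn: int) -> int:
    """Input tokens billed when every turn re-sends the entire prior history."""
    total = 0
    history = 0
    for _ in range(num_turns):
        history += tokens_per_turn   # each turn appends its output to state...
        total += history             # ...and the next call pays for all of it
    return total

# 20 coordinator round-trips at 2,000 tokens each: the agents generate
# 40k tokens of content, but you pay for 420k tokens of input.
print(total_input_tokens(20, 2000))  # → 420000
```

Prompt caching and history summarization can blunt this, but the baseline cost curve is why serial coordinator loops get expensive fast.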

AI is such a fun topic -- the hype makes it easy to loathe, but as a coder working with Claude I think it's an awesome tool.

Gastown looks like a viable avenue for some app development. One of the most interesting things I've noticed about AI development is that it forces one to articulate desired and prohibited behaviors -- a spec becomes a true driving force.

Yegge's posts are always hyperbolic and he consistently presents interesting takes on the industry so I'm willing to cut him a buttload of slack.


I find it interesting that waterfall is becoming popular again.

I agree, it is really interesting. I think that the main reason, though, is that instead of a waterfall cycle taking weeks or months, it now takes minutes. So it’s the process of waterfall (speccing things out carefully in advance, committing to the plan, assessing the results based on adherence to the plan, etc), but on the time frame of agile.

Embrace it and use it to your advantage. Tell them nobody knows or understands how these things will actually work long term; that's why there's stuff like Gas Town, and the way you see all of this is that you can manage the process. What you bring to the table is making sure it will actually work: if the tech is safe and sound, reaping the rewards; if the tech fails, protecting the company from catastrophic failure. Tell them you are uniquely positioned to carry out the balancing act because you are deep in the tech itself. Bonus if you explain the uncertainty framing in the business strategy: "Because nobody really understands the tech, nobody has an advantage. We are all playing on a level field, from the big boys at FAANGs to us peasants in normal non-tech enterprises. I am your advantage here if you give me the tools and leverage I need to make this work." If you play this right, you'll get the fat bonus whether the tech actually works or not.

If your boss is that bad, the correct long-term move is to leave, not to wish technology didn’t advance.

Your boss, and the other ones who are asleep, will someday wake up too.

"I’m getting burnt out trying to navigate the absurd expectations of investors and executives with the real world engineering concerns of my day to day job."

Welcome to being a member of a product team who cares beyond just what's on their screen... Honestly, there is a humbling moment coming for everyone, and I'm not sure it's unemployment.


It's a half-joke. No need to take it that seriously or that jokingly. It's mostly only grifters and cryptocurrency scammers claiming it's amazing.

I think ideas from it will probably partially inspire future, simpler systems.


It may be a joke in the same way that brogramming was a joke and somehow became an enduring tech bro stereotype

Strong agreement with this. The whimsical, fantasy, fun, light hearted things are great until a large enough group of people take them as a serious life motto & then try to push it on everyone else.

Taking the example of the cryptocurrency boom (as a whole) as the guide, the problem is the interaction of two realities: big money on the table; and the self-fulfilling-prophecy (not to say Ponzi) dynamic of needing people to keep clapping for Tinker-bell, in greater and greater numbers, to keep the line going up. It corrupts whimsical fun and community spirit, it corrupts idealism, and it corrupts technical curiosity.

Stevey already made $300K from the cryptocurrency grift on Gas Town. Read his blog post about it.

Complete with a "Let’s goooooooo!"

And FOMO stories about missing out on Bitcoin when he knew about it, so he doesn't want you to miss out on this new opportunity to get "filthy rich" as an "investor" while you still can.


More details on the pump and dump scheme he joined in on promoting and drew money from: https://pivot-to-ai.com/2026/01/22/steve-yegges-gas-town-vib...

MOOLLM has its own official currency -- MOOLAH!

https://github.com/SimHacker/moollm/tree/main/skills/economy...

The official currency of MOOLLM is MOOLAH. It uses PROOF OF MILK consensus — udderly legen-dairy interga-lactic shit coin, without the bull.


This initially sank my heart, but in all his replies there are like 50 very clearly unintelligent crypto grifters telling him he needs to be killed for scamming them, so I am unsure who to root for at this point. It's depressing that he accepted it, but I might partially forgive it because he made a lot of them lose money.



Why is it hard to criticize people for being part of a scam operation? It's so morally and ethically bankrupt that it's really easy and valid to criticize someone for it.

Who is being scammed? The only people buying into tokens as obscure as these are degenerate gamblers who know very well that it's not any kind of an investment.

That sounds like victim blaming to me

It's not a scam, there's no misrepresentation. This very clearly isn't marketed as an investment of any kind https://bags.fm

https://apps.apple.com/app/bags-financial-messenger/id647319...

The tagline of the app? "Buy & sell memecoins". Transparently advertised as a crowdfunding mechanism using memecoins.


Yegge wrote a blog post for his readers where he called it an investment and hoped the investors would get "filthy rich".

What? Of course it's marketed as an investment. That's the sole thing it's marketed as. Are you not able to lift the thinnest veil imaginable?

Because you'd be aiding and abetting a pennystock scam.

The difference between bags.fm and pennystock scams is that bags.fm is very obviously not marketed as an investment, but a crowdfunding tool.

It's absolutely marketed as an investment, and solely used and referenced by people saying it is an investment. This is like saying those cannabis paraphernalia shops are marketed as only for tobacco.


But people do. There are people who genuinely think crypto is an investment. Yes, smart people know it is just a grift, just about selling it on to the next person before it crashes. But is it moral to make money off stupid people? Many people lose all their money gambling, even though we've always known gambling is a loss.

> There are people who genuinely think crypto is an investment.

Sure! Are those people buying bags.fm tokens? Probably not.

This isn't even marketed as an investment https://bags.fm but a crowdfunding tool for developers with a casino attached.

You don't have to be smart to read the big text on the website.


You don't have to be smart to understand they're very, very, very obviously saying it's an investment and using extremely superficial cover. All things like these are exclusively pennystock scams.

You're being bamboozled. Google the name of it. Search it on Twitter and 4chan. Watch any Coffeezilla video.


I'm googling "bags.fm", everything I can find is about money going to creators. Literally nothing suggesting that you're going to get rich by buying these tokens.

Searching for "bags.fm" on X with keywords like "invest" or "rich" or "moon" also does not seem to return any conversations referring to anyone but the creators getting rich.

I can't find any bags.fm references on 4chan, and searching for gas town instead doesn't seem to bring up anything cryptocurrency related in the archive.

> You're being bamboozled

I don't think so. I suspect the world is so full of crypto scams that when someone does something explicitly non-scammy ("Hey, here's a crypto thing you can use to give me free money!") people still incorrectly view it as scammy because of crypto.

How many memecoin "investors" do you think view these as serious investments? I suspect essentially none of them.

How many memecoin "investors" are degenerate gambling addicts who need treatment? Probably most of them.

Taking money from vulnerable gambling addicts is certainly not ideal, but it's far from scammy.


Yegge himself wrote a blog post to his non-crypto audience calling it an investment that he hoped would make its investors filthy rich. He pumped it, then dumped it, and announced he was walking away from it at that point, after taking his profits and crashing its value.

I don’t know why you’re talking about existing hardcore BAGS addicts when the topic is Yegge promoting a crypto grift to his own general audience as an investment and then running the typical pump and dump scam on them.


It's a scam or a pennystock grift or whatever term you want to use.

https://x.com/Fizzy__01/status/1956006313848397861

100% of these things are somewhere on the scam and fraud spectrum. An unscrupulous person creates a token or a platform for creating tokens with the goal of raising the worthless token's price so they can parasitically make millions from something that holds zero value.

The "fund creators" thing is a common ploy. If they actually wanted to do that, they'd make it so you can only donate with dollars or stablecoins.

Look at the dozens of replies to all of Yegge's posts, now: https://x.com/Steve_Yegge/status/2014530592134910215



I don't get crypto - I just looked up how a couple of the most performant stocks did over the past decade, and I'm pretty sure you could outperform BTC with the same amount of risk tolerance.

The swings on BTC price are absolutely insane, and ETH even more so (which is even more risky, without showing higher gains).


what the? how do you sell crypto based on a description of an orchestration framework?

donations?


People keep giving him the benefit of the doubt. "He's clearly on to something, I just don't know what". I know what. The hustle of the shill. He has long gone from 'let's use a lot of tokens' to seeking a high score. He disgusts me.

What high score?

I too am a Very Serious Engineer, but my shock is in the other direction: of course the ideas behind Gas Town are the future of software development, and several VSEs I know are developing a proper, robust, engineering version of it that works. As the author of this article remarks, "yes, but Steve did it first", and it annoys me that if I had written this post nobody would have read it - but also that, because I intend to use it in Very Serious Business ($bns), my version isn't ready to actually be published yet. Bravo to Steve for getting these thoughts on paper and the idea built, even in such crude form. But "level 8" is real, there will be 9s and 10s, and I am really enjoying building my own.

> "Gastown as the future of engineering"

Note the word "future", not "present". People are making a prediction of where things will go. I haven't seen a single person saying that Gas Town as it exists today is ready for a production-grade engineering project.


> "If you say two different and contradictory things, and do not very explicitly resolve them, and say which one is the final answer, you will get blamed for both things you said, and you will not be entitled to complain about it, because you did it to yourself."

If I can be a bit bold: this tic is also a very old rhetorical trick you see in our industry. Call it Schrodinger's Modest Proposal if you will.

In it someone writes something provocative, but casts it as both a joke and deadly serious at various points. Depending on how the audience reacts they can then double down on it being all-in-good-jest or yes-absolutely-totally. People who enjoy the author will explain the nonsensical tension as "nuance".

You see it in rationalist writing all the time. It's a tiresome rhetorical "trick" that doesn't fool anyone any more.


It's a version of a motte and bailey argument (named after a medieval castle defense system):

> "...philosopher Nicholas Shackel coined the term 'motte-and-bailey' to describe the rhetorical strategy in which a debater retreats to an uncontroversial claim when challenged on a controversial one."

-- https://heterodoxacademy.org/blog/the-motte-and-the-bailey-a...


In what rationalist writing? The LessWrong style is to be literal and unambiguous. They’re pretty explicit that this is a community value they’re striving for.

The whole trick is having your cake and eating it too. The LessWrong style exploits the gap between the strength of the claims ("this is a big deal that explains something fundamental about the world") and the evidence/foundation (abstract armchair reasoning, unfalsifiable)

That’s not the same issue, though. You’re claiming just plain overconfidence or that you find their arguments unconvincing. But the rhetorical trick we were discussing is oscillating between treating a claim as a joke or as deadly serious depending on the audience.

I think both can be true, no?

Multi-agent coordination is obviously what's next.

And, Gas Town itself might never amount to more than a proof-of-concept.

Personally I'd put my money on whatever Anthropic build to do this job, rather than a layer someone else builds atop CC.

Remember when code LLMs were just APIs, and folks were building their own coding scaffolds like Aider and Cursor? Then Claude Code steamrolled everyone; they win because they can do RL on the whole agentic scaffold.

Any multi-agent system will have the same properties, i.e. whatever traits (e.g. the GUPP) and tool expertise (e.g. using Beads) are required to effectively participate in a swarm will get RL'd into the coding model, and any attempts to build alternate scaffolds will hit impedance mismatches because they do not fit the shape of what was RL'd (just like using non-CC UIs with Anthropic models gives you worse results than using the CC UI).

I say this with love - Yegge is putting forth some excellent ideas here. Beads seems like a great concept to add to CC ASAP; storing the TODO state in a repo would mean we don't need MCPs onto issue trackers. And figuring out what orchestration concepts are required will need a lot more trial and error, but these existence proofs are moving the frontier forward.


These are some very tortured interpretations you're making.

- "what's next" does not mean "production quality" and is in no way mutually exclusive with "experimental". It means exactly what it says, which is that what comes next in the evolution of LLM-based coding is orchestration of numerous agents. It does not somehow mean that his orchestrator writes production-grade code and I don't really understand why one would think it does mean that.

- "productively" also does not mean "production quality". It means getting things done, not getting things done at production-grade quality. Someone can be a productive tinkerer or they can be a productive engineer on enterprise software. Just because they have the word "product" in them does not make them the same word.

- "working effectively" is a phrase taken out of the context of this extremely clear paragraph which is saying the opposite of production-grade: "Working effectively in Gas Town involves committing to vibe coding. Work becomes fluid, an uncountable substance that you sling around freely, like slopping shiny fish into wooden barrels at the docks. Most work gets done; some work gets lost."

If he wanted to say that Gas Town wrote production grade code, he would have said that somewhere in his 8000-word post. But he did not. In fact, he said the opposite, many many many many many many times.

You're taking individual words out of context, using them to build a strawman representing a promise he never came close to making, and then attacking that strawman.

What possible motivation could you have for doing this? I have no idea.

> If you say two different and contradictory things...

He did not. Nothing in the blog post explicitly says or even remotely implies that this is production quality software. In addition, the post explicitly, unambiguously, and repeatedly screams at you that this is highly experimental, unreliable, spaghetti code, meant for writing spaghetti code.

The blog post could not possibly have been more clear.

> ...because you did it to yourself.

No, you're doing this to his words.

Don't believe me? Copy-paste his post into any LLM and ask it whether the post is contradictory or whether it's ambiguous whether this is production-grade software or not. No objective reader of this would come to the conclusion that it's ambiguous or misleading.


> Copy-paste his post into any LLM and ask it whether the post is contradictory or whether it's ambiguous whether this is production-grade software or not. No objective reader of this would come to the conclusion that it's ambiguous or misleading.

That's hilarious! You might want to add a bit more transition for the joke before the other points above, though.


> Don't believe me? Copy-paste his post into any LLM and ask it whether the post is contradictory or whether it's ambiguous whether this is production-grade software or not.

Bleak


> If you say two different and contradictory things, and do not very explicitly resolve them, and say which one is the final answer, you will get blamed for both things you said, and you will not be entitled to complain about it, because you did it to yourself.

Our industry is held back in so many ways by engineers clinging to black-and-white thinking.

Sometimes there isn’t a “final” answer, and sometimes there is no “right” answer. Sometimes two conflicting ideas can be “true” and “correct” simultaneously.

It would do us a world of good to get comfortable with that.


My background is in philosophy, though I am a programmer, for what it is worth. I think what I'm saying is subtly different from "black and white thinking".

The final answer can be "each of these positions has merit, and I don't know which is right." It can be "I don't understand what's going on here." It can be "I've raised some questions."

The final answer is not "the final answer that ends the discussion." Rather, it is the final statement about your current position. It can be revised in the future. It does not have to be definitive.

The problem comes when the same article says two contradictory things and does not even try to reconcile them, or try to give a careful reader an accurate picture.

And I think that the sustained argument over how to read that article shows that Yegge did a bad job of writing to make a clear point, albeit a good job of creating hype.


Or -- and hear me out -- unserious people are saying nonsense things for attention and pointing this out is the appropriate response.

yeah the messaging is somewhat insecure in that it preemptively seeks to invalidate criticism by just being an experiment, while simultaneously making fairly inflammatory remarks about naysayers, like they'll eat dirt if they don't get on board.

I think it's possible to convey that you believe strongly in your idea and it could be the future (or "is the future" if you're so sure of yourself) while it's still experimental. I think he would get fewer critics if he weren't so hyperbolic in his pitch and had fewer inflammatory personal remarks about people he hasn't managed to bring on side.

People I know who communicate like that generally struggle to contribute constructively to nuanced discussions, and tend to seek out confrontation for the sake of it.


Additionally, Steve seems very adamant about the fact that anyone who doesn't adopt vibe coding is going to be obsolete, and the ones who adopt it best are going to win big.

> "What's next"? Not an experiment.

I think what’s next after an experiment very often is another experiment, especially when you’re doing this kind of exploratory R&D.


> We should take Yegge’s creation seriously not because it’s a serious, working tool for today’s developers (it isn’t). But because it’s a good piece of speculative design fiction that asks provocative questions and reveals the shape of constraints we’ll face as agentic coding systems mature and grow.

I have no doubt Yegge would agree wholeheartedly with that take. He wants the community to explore these ideas with him.

The bizarre thing is that Gas Town has been popping up in mainstream news and media. It's being discussed in my economics podcasts.

It's relevant for them because it hints at a very disruptive idea: the hierarchy of Gas Town, when extrapolated, suggests that agents won't just replace your workers, they will replace your business too. It suggests that in a few years there could be a tool that is effectively a software agency, which means companies like Anthropic could eat any software shop that can't compete.


I think you just proved mediaman's point.

Keep in mind that Steve has LLMs write his posts on that blog. Things said there may not reflect his actual thoughts on the subject(s) at hand.

There is no way for this to be true. I read his book about vibe coding and it is obvious that it has significant LLM contribution. His blog posts, though, are funny and controversial, have bad jokes, and he jumps from topic to topic. He has had this style for 10+ years before LLMs came around.

The book intro proudly states it used LLM drafting.

I've been reading Steve's posts for quite literally a decade now and I don't think his new posts are so meaningfully different from the old ones that he's not at the wheel any more. Besides, his twitter posts often double down on what he's writing in the blog, and it's doubtful he's not writing those.

> Keep in mind that Steve has LLMs write his posts on that blog.

Ok, I can accept that, it's a choice.

> Things said there may not reflect his actual thoughts on the subject(s) at hand.

Nope, you don't get to have it both ways. LLMs are just tools, there is always a human behind them and that human is responsible for what they let the LLM do/say/post/etc.

We have seen the hell that comes from playing the "They said that but they don't mean it" or "It's just a joke" (re: Trump), I'm not a fan of whitewashing with LLMs.

This is not an anti or pro Gas Town comment, just a comment on giving people a pass because they used an LLM.


Do you read that as giving him a pass? I read it as more of a condemnation. If you have an LLM write "your" blog posts then of course their content doesn't represent your thoughts. Discussing the contents of the post then is pointless, and we can disregard it entirely. Separately we can talk about what the person's actual views might be, using the fact that he has a machine generate his blog posts as a clue. I'm not sure I buy that the post was meaningfully LLM-generated though.

The same approach actually applies to Trump and other liars. You can't take anything they say as truth or serious intent on its own; they're not engaging in good faith. You can remove yourself one step and attempt to analyze why they say what they do, and from there get at what to take seriously and what to disregard.

In Steve's case, my interpretation is that he's extremely bullish on AI and sees his setup or something similar as the inevitable future, but he sprinkles in silly warnings to lampshade criticism. That's how the two messages of "this isn't serious" and "this is the future of software development" co-exist. The first is largely just a cover and an admission that his particular project is a mess. Note that this interpretation assumes that the contents of the blog post in question were largely written by him, even if LLM assistance was used.


Hmm, maybe I read the original comment wrong then? I did read it as "You can't blame him, that might not even be what he thinks" and my stance is "He posted it on his blog, directly or indirectly, what else am I supposed to think?".

I agree with you on Steve's case, and I have no ill will towards him. Mostly it was just me trying to "stomp" on giving him a pass, but, as you point out, that may not have been what the original commenter meant.


Is this confirmed true? Yegge has a very very long history of writing absurdly long posts / rants.

Back in the day they used to be coherent.

Not much more than his recent posts, no.

There's a rather fine line between "don't believe everything you read" and "don't believe anything you read". At least in this case.

This is some super fucked up thinking. If it does not reflect your actual thoughts, you do not post it under your own name.

I thought it was harmless(ish) fun, but David Gerard put out a post stating that Yegge used Gas Town to push out a crypto project that rug pulled his supporters, while he personally walked away with something between $50K and $100K, from memory.

I suppose that has little to do with the technical merits of the work, but it's such a bad look, and it makes everyone boosting this stuff seem exactly as dysregulated/unwise as they've appeared to many engineers for a while.

I met Sean Goedecke for lunch a few weeks ago, who uses LLMs a bunch, and is clearly a serious adult, but half the folks being shoved in front of everyone are behaving totally manic and people are cheering them on. Absolutely blows my mind to watch.

https://pivot-to-ai.com/2026/01/22/steve-yegges-gas-town-vib...


That was very weird. In the post where he was arguably "shilling," he seems to have signposted pretty well that it was dumb, but he will take the money they offered:

> $GAS is not equity and does not give you any ownership interest in Gas Town or my work. This post is for informational purposes only and is not a solicitation or recommendation to buy, sell, or hold any token. Crypto markets are volatile and speculative — do not risk money you can’t afford to lose.

...

> Note: The next few sections are about online gambling in all its forms, where “investing” is the buy-and-hold long-form “acceptable” form of gambling because it’s tied to world GDP growth. Cryptocurrencies are subject to wild swings and spikes, and the currency tied to Gas Town is on a wild swing up. But it’s still gambling, and this stuff is only for people who are into that… which is not me, and should probably not be you either.

In the next post he said he wasn't going to shill it any more, and then the price collapsed and people sent him death threats on Twitter. It probably would have collapsed anyway. Perhaps there was supposedly some implicit bargain that he shouldn't take the money if he wasn't going to shill? Well, there's certainly no rule saying you have to do that.

I think he's not very much to blame for taking the money from degenerate gamblers, and the cryptocurrency idiots are mostly to blame for their own mistakes.


> I think he's not very much to blame for taking the money from degenerate gamblers, and the cryptocurrency idiots are mostly to blame for their own mistakes.

I empathize with the disdain for crypto idiots, but I still think the people running or promoting these scams deserve most of the blame. "There's a market for my poison" is every dopamine dealer's excuse.


Yeah, and I don't want to be involved in that shit. Yegge can go fuck off.

“Degenerate gamblers” is the kind of stigma that stops people and their families getting help for addiction. Even if you believe it’s a moral failing, the families deserve better.

Very true. Although, I wonder how much of that sort of thing was going on in this case? Did people actually bet money they couldn't afford to lose on this crazy scheme?

No, harsh belittling is what makes them quit, not accommodating their nonsense and excusing it. There should be a stigma around destructive behavior. The flaw is this decades-long trend of talking about stigmas and refusing to condemn bad behaviors.

I'm afraid we tried that for quite a few centuries, with very little effect. In fact, most major world religions had phases with heavy punishment, condemnation, belittling, and you-are-going-to-hell stuff over gambling. Yet here we are.

I'm fairly certain those disclaimers were added after he got some pushback from the original post.

One of them clearly was (marked "Edit: "). I don't know about the others.

> I think he's not very much to blame for taking the money from degenerate gamblers, and the cryptocurrency idiots are mostly to blame for their own mistakes.

So drug dealers are not to blame for taking the money from degenerate addicts! Let's free everyone and disband the DEA, we'll save billions of dollars.

Oh wait nvm this line of thinking only applies to sv people


He is still an evil scammer scamming people.

In the same way signposting and credibly warning "I murder people" does not make ok to murder people.


Do you have the same attitude towards all forms of gambling?

Yegge wrote in his blog post (viewed by many who ended up buying in) that it is an investment and that he wishes the investors will become "filthy rich". He wrote the post as an introduction to the concept of BAGS for an audience that is unfamiliar with it. He onboarded people to the platform and to his pump and dump scheme (in which he pumped, and dumped, then announced he's walking away from it).

You left out that part of the post and only mentioned the disclaimer he added at the top after he got pushback on his messaging. Are you influenced by his celebrity?


Yes, if people benefiting from gambling misrepresent it as for example investing.

In such cases it is a scam.


He pumped, and dumped. He stopped shilling at the moment that the dump was proceeding. That's what pump and dump grifters do.

Details https://pivot-to-ai.com/2026/01/22/steve-yegges-gas-town-vib...


Maybe I'd care about his opinion if he didn't take the money. I consider this worse than OSS taking VC money. At least those don't have a scam auto-builtin to the structure beyond normal capitalistic parasitism.

Also, 275k lines for a markdown todo app. Anyone defending this is an idiot. I'll just say that. Go ahead, defend it. Go do a code review on `beads`. Don't tell me beads is alright but Gas Town is madness. He fucking sucks.


> If you read Steve's writeup

Personally I got about 3 paragraphs into what seemed like a twelve-page fevered dream and filed it under "not for me yet".


> And the reaction is a bunch of Very Serious Engineers who cross their arms and harumph at it for being Unprofessional and Not Serious and Not Ready For Production.

Exactly!


They’re part of Steve’s art project, they just don’t realise it.

> OK! That was like half a dozen great reasons not to use Gas Town. If I haven’t got rid of you yet, then I guess you’re one of the crazy ones. Hang on. This will be a long and complex ride. I’ve tried to go super top-down and simplify as much as I can, but it’s a bit of a textbook.

Yegge's been around a long, long time and this is about within a standard deviation of his normal writings, at least in style. I haven't read much of his LLM/AI related stuff, but none of Gas Town left me with any sort of "huh" reaction, knowing the author.

For better or worse, we are making history.

A sense of art and whimsy and experimentation is less compelling when it's jumping on the hypest of hype-trains. I'd love to see more folk art in programming, but Gas Town is closer to fucking Beeple than anything charming.

> I often feel like our industry has lost its sense of whimsy and experimentation from the early days, when people tried weird things to see what would work and what wouldn't.

Remember the days when people experimented with and talked about things that werent LLMs?

I used to go to a lot of industry events and I really enjoyed hearing about the diversity of different things people worked on both as a hobby and at work.

Now it's all LLMs all the time and it's so goddamn tedious.


> I used to go to a lot of industry events and I really enjoyed hearing about the diversity of different things people worked on both as a hobby and at work.

I go to tech meetups regularly. The speed at which any conversation ends up on the topic of AI is extremely grating to me. No more discussions about interesting problems and creative solutions that people come up with. It's all just AI, agentic, vibe code.

At what point are we going to see the loss of practical skills if people keep on relying on LLMs for all their thinking?


> No more discussions about interesting problems and creative solutions that people come up with. It's all just AI, agentic, vibe code.

And then you give in and ask what they're building with AI, that activation energy finally available to build the side project they wouldn't have built otherwise.

"Oh, I'm building a custom agentic harness!"

...


It's like the entire software industry is gambling on "LLMs will get better faster than human skills will decay, so they will be good enough to clean up their own slop before things really fall apart".

I can't even say that's definitely a losing bet-- it could very well happen-- but boy does it seem risky to go all-in on it.


On one hand, it’s extremely tiring having to put up with that section of our industry.

On the other, if a large portion of the industry goes all in, and it _doesn’t_ pay off and craters them, maybe the overhyping will move onto something else and we can go back to having an interesting, actually-nice-to-be-in-industry!


I can't help but think of a video of a talk by someone- uncle Bob maybe?- talking about the origin of the agile manifesto.

He framed it as software developers were once the experts in the room, but so many young people joined the industry that managers turned to micromanaging them out of instinctual distrust. The manifesto was supposed to be the way for software developers to retake the mantle of the professional expert, trusted to make things happen.

I don't really think that happened, especially with agile becoming synonymous with Scrum, but if this doesn't pay off and craters the industry, it seems like it'd be the final nail in that coffin.


Some of the heads like Altman seem to be putting all their chips in the "AGI in [single-digit number] years" pile.

It's incredible the change over the last few years even on the hardware side. I go to the supercomputing.org conference annually and saw folks advertising "AI power distribution units". There used to be a lot of neat innovation, and now every last thing has to have "AI" in the title, it's infuriating

Well, LLMs are an engineering breakthrough of the degree somewhere between the Internet and electricity, in terms of how general-purpose and broadly-applicable they are. Much like them, LLMs have the potential to be useful in just about everything people do, so it's no surprise they've dominated the conversation - just like electricity and the Internet did, back in their heyday.

(And similar to the two, I expect many of the initial ideas for LLM application to be bad, perhaps obviously stupid in hindsight. But enough of them will work to make LLMs become a lasting thing in every aspect of people's lives - again, just like electricity and the Internet did).


It reminds me most of the release of the first iPhone - very flashy, very overhyped, adds a bit of convenience to people's lives but also likely to measurably damage people's brains in the long run.

~80% of the usage patterns I see these days falsely assume that LLMs can handle their own quality control and are optimizing for appearance, potential, or demo-worthiness rather than hardcore usefulness. Gas Town is not an outlier here.

When the internet and electricity were ~3 years old, people were already using them for stuff that was working and obviously world-changing rather than as demos of potential.

The 20% of usage patterns that work now aren't going away, but the other 80% are going to be seen as blockchainesque hype in 5 or 10 years.


I like gastown's moxie, it's fun, and seems kind of tongue in cheek.

What I don't like is people me-tooing gastown as some breakthrough in orchestration. I also don't like how people are doing the same thing for ralph.

In truth, what I hate is people dogpiling thoughtlessly on things, and only caring about what social media has told them to care about. This tendency makes me get warm tingles at the thought of the end of the world. Agent smith was right about humanity.


I mean, isn’t the whole point of Ralph that it’s an allusion to “I’m in danger” because Claude in a for loop can do your job?

I believe the intent was that he's dumb but persistent.

No, Ralph is famously dumb and needs lots of hand-holding and explanations of things most people think are very simple and can hold very little in his head at once.

But that's often enough to loop over and over again and eventually finish a task


> it is a mixture of technology and art, it is provocative

There's no art (or engineering) in this and the only provocative thing about it is that Yegge apparently decided to turn it into a crypto scam. I like the intersection of engineering and art but I prefer if it includes both actual engineering and art, 100 rabbits (100r.co) is a good example of it, not a blog post with 15 AI generated images in it that advocates some unholy combination of gambling, vibe coding and cryptocurrency crap.


Perhaps it was his followup post about how people are lining up to throw millions of VC dollars at his bizarre whimsical fever dream that disturbs people? I’m all for arts funding, but…

Isn't the point that he refused them? VCs can be dumb (see the crypto hype, even the recent inflated AI raises) so I wouldn't put too much stock in what they think is valuable.


HAHAHA

It isn't though. It crossed the chasm when Steve (who I would like to think is somewhat comfortable after writing a book, holding a director level position at several startups) decided to endorse an outright crypto pump and dump.

When he decided to monetize the eyeballs on the project instead of anything related to the engineering. Which, of course, Steve isn't smart enough to understand (in his own words) and he recommends you not buy but he still makes a tidy profit from it.

It's a memecoin now... that has a software project attached to it. Anything related to engineering died the day he failed to disavow the crypto BS and instead started shilling it.

What happened to engineers not calling out BS as BS.



My favorite part about that is Gas Town is supposedly so productive that this guy's sleep patterns are affected by how much work he's doing, but he took the time to physically go to a bank to get a 5-figure payout.

It makes it difficult to believe that gas town is actually producing anything of value.

I also lol at his bitching about how the bank didn't let him do the transactions instantly, even as he himself describes how much of a scam this seems and how the worst thing would be his bank account being drained, as if banks don't have a self-interest in protecting their clientele from such scams.


> I don't get the widespread hatred of Gas Town. If you read Steve's writeup, it's clear that this is a big fun experiment. It pushes and crosses boundaries, it is a mixture of technology and art, it is provocative.

Because I actually have an arts degree and I know the equivalent of a con artist in a rich people arts gallery bullshitting their way into money when I see one.

And the "pushing and crossing boundaries" argument has been abused as a pathetic defense to hide behind shallowness in the art world for longer than anyone in this discussion board has been alive. It's not provocative when it's utterly predictable, and in this case the "art" is "take the most absurd parody of AI culture and play it straight". Gee whiz how "creative" and "provocative".


Its because people are treating the experiment like a serious path forward for their business.

"our industry has lost its sense of whimsy"

The first thing I thought as I read his post and saw the images of the weasels was that he should make a game of it. Maybe name it Bitborn.


> I don't get the widespread hatred of Gas Town.

Fear over what it means if it works.


I work in a typical web app company which does accounting/banking etc.

A couple of days ago I was sitting in a meeting of 10-15 devs, discussing our AI agents. People were raising issues and brainstorming ways around the problems with AI. How to make the AI better.

Our devs were occupied doing AI things, not accounting/banking things.

If the time savings were as promised, we should have been 3 devs (with the remaining devs replaced by 7-10 AI agents) discussing accounting/banking.

If Gas Town succeeds, it will just be the next toy we play with instead of doing our jobs.


Isn't that fun though? We get paid to fuck around. People say AI is putting devs out of jobs, I say we're getting paid to play with them and see if there's any value there. This is no different from the dev tools boom of the ZIRP era: I remember having several sprints worth of work just integrating the latest dev tool whose sales team won our execs over.

This is only partly tongue in cheek :P


Who wants to do grunt work when you can play architect to a bunch of robots?

It's like the ultimate RTS, plus you get paid.


Playing with new toys is part of doing my job. In my shop, we call them "ooh shiny"'s. Most devs are in the same boat, but I feel bad for those that aren't.

Sounds like more of an issue of corporate meeting culture.

Has it written anything of quality?

beads is a 275k line todo tracker (probably more now). Yegge is proud to have never read the source. I'm sure it's high quality.

I really don't get the point. An LLM can easily, flexibly, and masterfully track commented hierarchical YAML todo lists without breaking a sweat, with zero lines of code.

It's like writing a 275k line C++ program just to printf("You are absolutely correct!") when ChatGPT can do that for you with a one line prompt in just one shot.


Anyone using beads should switch to something else that isn't insane. If you like beads, https://github.com/hmans/beans works the same (not my project), except that its serdes is markdown files with front matter, in a dot folder. Like every sane solution. No daemons, no sync branches. I can't vouch for the project, but at least it's better than beads. Or make your own; this is just one example of such a project.
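For anyone unfamiliar with the format: here's a rough sketch of what a markdown-file-with-front-matter issue might look like (the field names are my own illustration, not necessarily beans's actual schema):

```markdown
---
id: task-042
status: open
priority: high
blocked-by: [task-017]
---

# Fix flaky auth test

The session-refresh test fails intermittently under parallel runs.
Suspect a race in the token cache.
```

Plain files like this diff cleanly in git, are trivially greppable, and need no daemon or sync branch to stay consistent.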

It reads like the ramblings of a smart person experiencing a psychotic episode.

First Yegge read?

The best thing about LLMs is that they can summarize Yegge posts to extract any actually useful content.

I didn't read this article as hate at all, FWIW. It was a pretty measured review of what it is and what it isn't with some much clearer diagrams of the mental models.

Links to Steve's writeup for Gas Town for those who don't have them yet:

[Medium post]: https://steve-yegge.medium.com/welcome-to-gas-town-4f25ee16d...

[HN Discussion]: https://news.ycombinator.com/item?id=46458936


And https://steve-yegge.medium.com/bags-and-the-creator-economy-... where you can read about its author scamming people

I don't understand what's "scammy" about a rugpull. What did the "investors" expect? That lolcoin would become a cash flow positive business and disburse dividends?

The part where you lie to people and falsely describe it as investing and promise/imply/mention profits.

Rugpull is a scam by definition, being confused why scam is scammy seems weird.


> I often feel like our industry has lost its sense of whimsy and experimentation from the early days, when people tried weird things to see what would work and what wouldn't.

The gold rush brogrammers took over. They only care about money and they have displaced most of the more whimsical (but competent) "nerds" over the past decade.


It’s not the whimsy. It’s that the whimsy is laced with casual disdain, a touch too much “let me buy you a stick of gum and show you how to chew it”, a frustrated tenor never stated but dog whistled “you dumb fucks”. A soft sharp stink of someone very smart shoving that fact in your face as they evangelise “the obvious truth” you’re too stupid to see.

And maybe he’s even right. But the reaction is to the flavour of chip on the shoulder delivery mixed into an otherwise fun piece.


Don't forget a bit of crypto! People are being way to nice going "I don't understand, but ...". Fuck him.

> Maybe it's because we also have suits telling us we have to use neural nets everywhere for everything Or Else, and there's no sense of fun in that.

Yes, and using it a justification to offshore/ layoff


Hey, I didn't get a harumph out of that VSE crossing his arms at me!

https://www.youtube.com/watch?v=g2Bp8SqYrnE


> it's clear that this is a big fun experiment.

No it's not clear, because at every turn we're told we're supposed to take it seriously, that there's something there there and that's it's a very real hint at some very real future not whimsical nonsense made for a laugh. You can see this in Steve's writing, calling out the non-believers. Then when you call the bluff, well "it's just a prank bro chill out."

> It pushes and crosses boundaries

What does this mean? This is fluff talk nonsense.

Something that's burning through thousands of dollars, producing what exactly?, is deserving of our respect why?


Why can't you take experiments seriously? It's a prediction of what the future could look like, not a production-ready tool. If your problem with it is "they took our jobs", sure, that makes sense, but if your problem is that it's a crappy tool then you're not looking at it correctly.

This is just not true. Yegge is serious and thinks Gas Town is the next big thing.

He already did this once with his JavaScript prophecy. Has this man no decency?? :)

For his income, yes.

Hi mediaman! I'm totally there with you and Steve on the whimsy and experimentation! And your tolerant attitude gives me the Dutch courage to post this.

I've been reading Yegge since the "Stevey's Drunken Blog Rants™" days -- his rantings on Lisp, Emacs, and the Eval Empire shaped how I approach programming. His pro-LLM-coding rants were direct inspiration for my own work on MOOLLM. The guy has my deep respect, and I'm intrigued by his recent work on Sourcegraph and Gas Town.

Gas Town and MOOLLM are siblings from that same Eval Empire -- both oriented along the Axis of Eval, both transgressively treating LLMs as universal interpreters. MOOLLM immanentizes Eval Incarnate -- https://github.com/SimHacker/moollm/blob/main/designs/eval/E... -- where skills are programs, the LLM is eval(), and play is but the first step of the "Play Learn Lift" methodology: https://github.com/SimHacker/moollm/tree/main/skills/play-le....

The difference is resource constraints. Yegge has token abundance; I'm paying out of pocket. So where Gas Town explores "what if tokens were free?" (20-30 Claude instances overnight), MOOLLM explores "what if every token mattered?" Many agents, many turns, one LLM call.

To address wordswords2's concern about "no metrics or statistics" -- I agree that's a gap in Gas Town. MOOLLM makes falsifiable claims with receipts. Last night I ran an Amsterdam Fluxx Marathon stress test: 116+ turns, 4 characters (120+ character-turns per LLM call), complex social dynamics on top of dynamic rule-changing game mechanics. Rubric-scored 94/100. The run files exist. Anyone can audit.

qcnguy's critique ("same thing multiplied by ten thousand") is exactly the kind of specific feedback that helps systems improve. I wrote a detailed analysis comparing the two approaches -- intellectual lineage (Self, Minsky's K-lines, The Sims, LambdaMOO), the "vibecoded" problem (MOOLLM is LLM-generated but rigorously iterated, not ship-and-hope), and why "carrier pigeon" IPC architecture is a dark pattern when LLMs can simulate many agents at the speed of light.

an0malous raises a real fear about bosses thinking "throw agents at it" replaces engineering. Both systems agree: design becomes the bottleneck. Gas Town says "keep the engine fed with more plans." MOOLLM says "design IS the point -- make it richer." Different answers, same problem.

lowbloodsugar mentions building a "proper, robust, engineering version" -- I'd love to compare notes. csallen is right that "future" doesn't mean "production-grade today."

Analysis: https://github.com/SimHacker/moollm/blob/main/designs/GASTOW...

MOOLLM repo: https://github.com/SimHacker/moollm

Happy to discuss tradeoffs or hear where my claims don't hold up. Falsifiable criticism welcome -- that's how systems improve.


Adventure Uplift — Building a YAML-to-Web Adventure Compiler with Simulated Computing Pioneers:

I ran a 260KB session log where I convened a simulated symposium of computing pioneers to design an Adventure Compiler -- a tool that compiles YAML adventure definitions that run on MOOLLM under Cursor into standalone deterministic browser games requiring no LLM at runtime.

The twist: the "attendees" include AI-simulated tributes to Will Wright, Alan Kay, Marvin Minsky, Seymour Papert, Ted Nelson, Ken Kahn, Gary Drescher, and 25+ others — both living legends and memorial candles for those who've passed. All clearly marked as simulated tributes, not transcripts.

What emerged from this thought experiment:

- Pie menus as the universal interface (rooms, inventory, dialogue trees)

- Sims-style needs system with YAML Jazz inner voice ("hunger: 1 # FOOD. FOOD. FOOD.")

- Prototype-based objects (Self/JavaScript delegation chains)

- Schema mechanism + LLM = "teaching them to fly"

- Git as the collaboration operating system

- ToonTalk-inspired "programming by petting" for terpene kittens

- Speed of Light simulation — the opposite of "carrier pigeon" multi-agent architectures

On that last point: most multi-agent systems use message passing between separate LLM calls. Agent A generates output, it gets detokenized to text, sent over IPC, retokenized into Agent B's context. MOOLLM inverts this. Everything happens in one LLM call.

The spatial MOO map (rooms connected by exits) provides navigation, but communication is instantaneous within a call. Many agents, many turns, zero latency between them — and zero token requantization or semantic noise from successive detokenization/tokenization loops.
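To make the contrast concrete, here is a minimal Python sketch of the two architectures. Everything here is hypothetical illustration, not MOOLLM's actual code: `llm()` is a stub standing in for any chat-completion API, and the function names are mine.

```python
# Hypothetical sketch contrasting "carrier pigeon" IPC with MOOLLM's
# single-call simulation. llm() is a stub for any chat-completion API.

def llm(prompt: str) -> str:
    # Stub: a real implementation would call a model endpoint.
    return f"[response to {len(prompt)} chars of prompt]"

# Carrier-pigeon style: one LLM call per agent per turn.
# Each hop detokenizes agent A's output, ships it over IPC,
# and retokenizes it into agent B's context.
def carrier_pigeon(agents, turns):
    transcript = []
    for _ in range(turns):
        for agent in agents:
            msg = llm(f"You are {agent}. History so far: {transcript}")
            transcript.append((agent, msg))
    return transcript  # len(agents) * turns separate calls

# Speed-of-light style: many agents, many turns, one call.
# All characters are simulated inside a single context window,
# so inter-agent "messages" never leave token space.
def speed_of_light(agents, turns):
    prompt = (
        f"Simulate {turns} turns of dialogue between "
        f"{', '.join(agents)}. Keep each character consistent."
    )
    return llm(prompt)  # exactly one call

agents = ["Palm", "Minsky", "Nelson", "Burke"]
print(speed_of_light(agents, turns=116))
```

The point of the sketch: carrier-pigeon cost scales as agents × turns calls (plus requantization noise on every hop), while the single-call approach spends one context window however many character-turns fit inside it.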

The session includes adversarial brainstorming where Barbara Liskov challenges schema contracts, James Gosling questions performance, Amy Ko pushes accessibility, and Bret Victor demands immediate feedback. Each critique gets a concrete response.

Concrete outputs: a working linter, architecture decisions, 53 indexed topics from "Food Oriented Programming" to "Hidden Objects as Invisible Infrastructure."

This is MOOLLM's Play-Learn-Lift methodology in action — play with ideas, extract patterns, lift into reusable skills and efficient scripts.

Session log (260KB, 8000+ lines): https://github.com/SimHacker/moollm/blob/main/examples/adven...

MOOLLM repo: https://github.com/SimHacker/moollm

The session uses representation ethics guidelines — all simulated people are clearly marked, deceased figures invoked with memorial candles, and the framing is explicitly "educational thought experiment."

Happy to discuss the ethics of simulating people, the architecture decisions, or how this relates to my earlier Gas Town comparison post.


In the simulated discussion guest book entry, simulated Douglas Engelbart wrote:

>Doug Engelbart (Augmentation): "Bootstrapping. The tools that build the tools. Your adventure compiler should be able to compile ITS OWN documentation into an adventure ABOUT how it works. The manual is a playable game."

That is exactly how the self-documenting categorized skill directory/room works -- the directory is a room with subdirectories for every skill, themselves intertwingled rooms, which form a network you can navigate via K-lines (see also tags).

Here is the skills dir, with the ROOM.yml file that makes it a room. It works like COM's QueryInterface: a class exposes multiple interfaces for its multiple aspects; the directory is IUnknown, and you QI by looking for known interfaces like ROOM.yml, CHARACTER.yml, or CONTAINER.yml, which inherit from the corresponding skills.

And the README.md file naturally serves as the ubiquitous human-readable documentation (also great for LLM deep dives). GitHub kindly formats and publishes README.md on every repo directory page, supporting Mermaid diagrams, etc.:
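The QueryInterface analogy above can be sketched in a few lines of Python. This is my own hypothetical illustration of the pattern (probing a directory for known marker files), not code from the MOOLLM repo:

```python
# Hypothetical sketch of the COM-style QueryInterface pattern described
# above: a directory is "IUnknown", and you QI for an aspect by probing
# for a known marker file (ROOM.yml, CHARACTER.yml, CONTAINER.yml, ...).
from pathlib import Path

KNOWN_INTERFACES = ("ROOM.yml", "CHARACTER.yml", "CONTAINER.yml")

def query_interface(directory: str, interface: str):
    """Return the marker file's path if the directory implements
    the requested interface, else None (like QI failing with
    E_NOINTERFACE)."""
    marker = Path(directory) / interface
    return marker if marker.is_file() else None

def interfaces_of(directory: str):
    """Enumerate which known interfaces a directory exposes."""
    return [i for i in KNOWN_INTERFACES
            if query_interface(directory, i) is not None]
```

A directory that contains only ROOM.yml answers the QI for "room" and nothing else, just as a COM object returns E_NOINTERFACE for aspects it doesn't implement.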

MOOLLM Skills dir:

https://github.com/SimHacker/moollm/tree/main/skills

MOOLLM Skills room, with skill K-Line navigation protocol:

https://github.com/SimHacker/moollm/blob/main/skills/ROOM.ym...

  # ROOM.yml — The Skill Nexus
  #
  # This is a ROOM — a metaphysical library where all capabilities live.
  # Every skill is a book that teaches itself when you read it.
  # Every cluster is a shelf of related knowledge.
  # Every ensemble is a team that works together.
To go meta, you can enter the Skill Skill (skills/skill), an extended MOOLLM meta-skill that knows all about creating new skills (via the constructionist "Play Learn Lift" strategy), and importing and upgrading Anthropic skills:

https://github.com/SimHacker/moollm/tree/main/skills/skill

And here is a narrative session of me giving a tour of the category and skill networks by hopping around through K-Lines!

MOOLLM currently has 103 Anthropic-compatible but extended skills (using 7 MOOLLM extensions, like CARD.yml, K-Lines, Self prototypes and delegation, etc.).

Session Log: K-Line Connections Safari:

https://github.com/SimHacker/moollm/blob/main/examples/adven...

Eight luminaries have been summoned as Hero-Story familiars — not puppets, but conceptual guides whose traditions we invoke. Each carries the K-lines they pioneered. [...]

ENTERING THE SKILL NEXUS

You push through a shimmering membrane and step into the Skill Nexus.

The space is impossible — a vast spherical chamber where books float in mid-air, orbiting a central point of warm golden light. But these aren't books. They're SKILLS. Living documents that teach themselves when you read them.

Lines of golden light connect related skills. Each connection pulses with meaning. This isn't a library — it's a constellation of knowledge.

Your companions materialize beside you:

Marvin Minsky adjusts his glasses, looking around with evident satisfaction.

"Ah! K-lines made manifest. Each of these floating tomes is a knowledge structure. Touch one and it reactivates an entire constellation of associations. I wrote about this in 1985, but I never imagined seeing it rendered so... literally."

Ted Nelson is already examining the golden threads between skills.

"Two-way links! Every connection goes BOTH directions. When skill A references skill B, skill B knows about skill A. This is what I've been trying to explain since 1965! Everything is deeply intertwingled!"

James Burke turns to address an invisible camera.

"You're looking at the Skill Nexus. A room where every door leads to another room, and every room has doors to everywhere else. But here's the thing — the signs above each door tell you WHY. Not just where you're going, but what connects HERE to THERE. That's what we're going to explore."

Palm scampers up to a floating skill-book labeled "incarnation" and hugs it.

"This is where I became REAL! Don spoke the wish, the tribunal approved, and I wrote my own soul."


We have a different take than Gas Town. If AI behaves unreliably and unpredictably, maybe the problem is the ask. So we looked at backend code and decided it was time to bring in more declarative programming. We're already halfway there, with declarative frontends (React) and declarative databases (SQL). Functional programming is one answer, but functional programming didn't replace object-oriented programming, for practical reasons.

So even if the super serious engineers are serious, they should watch their backs. Eventually enough guardrails will be created, or the ask itself will change enough, for a lot of automation to happen. And make no mistake, it is automation, no different than automated testing replacing armies of manual testers, or code generation, or procedural generation, or any other machine method. And who is going to be left with jobs? People who embrace the change, not people who lament the good old days or who can't adapt.

Sucks, but that's just how the world works. Sit on the bleeding edge or be burned. Yes, there is an "enough," but I suspect "enough" sits with the people willing to look at Gas Town, or even make their own Gas Town, not the other side.


Yeah, where he probably burns like a million dollars.

Just for fun!


He's paying $600 a month for 3x Claude Max subs. It's in his article.

…and now funded by a $GAS crypto coin on the BAGS platform so it even pays for itself!

https://steve-yegge.medium.com/bags-and-the-creator-economy-...


What a tasty disclosure section that is

It's a "let them eat cake" write-up.

Yeah, it's unbelievably tiresome: endless complaints from people pushing up their glasses. IT'S A PROJECT ABOUT POLECATS CALLED GAS TOWN, MADE FOR FUN. Read that again. Either admire it and enjoy it, or quit the umpteenth complaint about vibecoding.

Yes - I've been thinking about why this is. I'm guessing part of it is that writing forces us to think. I often find when I write something that I haven't thought it out fully; articulating it makes me see a logical failure in my thinking and gives me the ability to work that out.

So when we just have AI write it, it means we've avoided the thinking part, and so the written article will be much less useful to the reader because there's no actual distillation of thought.

Using voice to article is a little better, and I do find that talking out a thought helps me see its problems, but writing it seems to do better.

There's also the problem that while it's easy to detect AI writing, it's hard to tell the difference between someone who thought it out by talking and had AI write it versus someone who did little thinking and still had AI write it. So as soon as you smell the whiff of AI writing, the reasonable expectation is that there's less distillation of thought.


I think a big part of it is that we're trying to decide if a piece of text is worth spending the time and effort to read it.

If we know the text is hand-authored, then we have a signal that at least one person believed the content was important enough to put meaningful effort into creating it. That's a sign it might be worth reading.

If it's LLM-authored, then it might still be useful, or it might be complete garbage. It's hard to tell because we don't know if even the "author" was willing to invest anything into it.


This exactly. Last year I got handed a big ball of work slop. Someone asked me to review this big ol' design document and I had the hardest time parsing it. It sounded right, but none of the pieces actually fit together. When I confronted the PM who gave it to me and asked if it was AI generated, they replied that "there were parts of it that were human-generated"! -_-

Anyway, I wrote a little more about that here: https://lambdaland.org/posts/2025-08-04_artifical_inanity/

Intent matters a ton when reading or writing something.


Survivorship bias: the list only keeps the things that haven't been solved.

Notably absent:

The fat pill

The HIV fix

Cystic fibrosis

We make fun of the stuff that hasn't been solved yet ("It's always ten years away!") while ignoring the things that were previously always ten years away until scientists cracked it.


Also AMT-130 for Huntington's disease.
