LLMs have always been great at generating code that doesn't really mean anything - no architectural decisions, the same for "any" program. But only rarely does one see the question of why we need to generate "meaningless" code in the first place.
This gets to one of my core fears around the last few years of software development. A lot of companies right now are saddling their codebases with pages and pages of code that does what they need it to do but of which they have no comprehension.
For a long time my motto around software development has been "optimize for maintainability" and I'm quite concerned that in a few years this habit is going to hit us like a truck in the same way the off-shoring craze did - a bunch of companies will start slowly dying off as their feature velocity slows to a crawl and a lot of products that were useful will be lost. It's not my problem, I know, but it's quite concerning.
Even Claude Opus 4.6 is pretty willing to start tearing apart my tests or special-casing test values if it doesn't find a solution quickly (and in C++/Rust land a good proportion of its "patience" seems to be used up just getting things to compile).
I’ve found that GPT-5.2 is shockingly good at producing code that compiles, despite also being shockingly good at not even trying to compile it and instead asking me whether I want it to compile the code.
> Or, you could use spaces between em dashes, as incorrect as it is.
That's the normal way of using them in British English. Though they tend to be the (slightly shorter) en-dashes, too.
I feel that style is pretty common on the "old" internet - possibly related to how easily they could be replaced by a hyphen back when ASCII was a likely limitation.
We loved GitHub as a product back when it didn't need to make a return or profit beyond "getting more users".
I feel this is just the natural trajectory for any VC-funded "service" that isn't actually profitable at the time you adopt it. Of course it's going to change for the worse to become profitable.
Moving to client-side rendering via React means less server load spent generating boilerplate HTML over and over again.
If you have a captive audience, you can get away with making the product shittier because it's so difficult for anyone to move away from it - both from an engineering standpoint and from network effects.
It seems most of the complaints are about reliability and infrastructure - which is very often a direct result of a lack of investment and development resources.
And then many of the UI changes people have been complaining about are related to things like Copilot being forcibly integrated - which is very much in the "Microsoft expects to profit by encouraging its use" camp.
It's pretty rare that companies make a bad UI because they want a bad UI; it's normally a second-order effect of other priorities - such as promoting other services, encouraging more ad impressions, or similar.
Man I can't wait for tcc to be reposted for the 4th time this week with the license scrubbed and the comment of "The Latest AI just zero-shotted an entire C compiler in 5 minutes!"
There actually was an article like this from Anthropic the other day but instead of 5 minutes I think it was weeks and $20,000 worth of tokens. Don't have the link handy though.
Except it was written in a completely different language (Rust), which likely would have necessitated a completely different architecture, and nobody has established any relationship either algorithmically or on any other level between that compiler and TCC. Additionally, Anthropic's compiler supports x86_64 (partially), ARM, and RISC-V, whereas TCC supports x86, x86_64, and ARM. Additionally, TCC is only known to be able to boot a modified version of the Linux 2.4 kernel[1], not an unmodified version of Linux 6.9.
Additionally, it is extremely unlikely for a model to be able to regurgitate this many tokens of something, especially translated into another language, especially without being prompted with the starting set of tokens in order to specifically direct it to do that regurgitation.
So, whatever you want to say about the general idea that all model output is plagiarism of patterns it's already seen, it seems pretty clear to me that this does not fit the hyperbolic description put forward in the parent comments.
Man I hope so - the context limit is hit really quickly in many of my use cases - and a compaction event inevitably means another round of corrections and fixes to the current task.
Though I'm wary of that being a magic-bullet fix - it can already be pretty "selective" in what documentation it actually seems to take into account as the existing 200k context fills.
How is generating a continuation prompt materially different from compaction? Do you manually scrutinize the context handoff prompt? I've done that before, but otherwise I don't see how it's very different from compaction.
I wonder if it's just: compact earlier, so there's less to compact, and more remaining context that can be used to create a more effective continuation
In my example the Figma MCP takes ~300k tokens per medium-sized section of the page, and it would be cool to let it read one and implement Figma designs directly. Currently I have to split it up, which is annoying.
I mean, the systems I work on have enough weird custom APIs and internal interfaces that just getting them working seems to take a good chunk of the context. I've spent a long time trying to minimize every input document where I can, keeping references compact and terse, and I still keep hitting similar issues.
At this point I just think the "success" of many AI coding agents is extremely sector dependent.
Going forward I'd love to experiment with seeing if that's actually the problem, or just an easy explanation for failure. I'd like to play with more controls over context management than "slightly better models" - like being able to select/minimize/compact the sections of context I feel are relevant to the immediate task, choose what "depth" of detail is needed, and remove from consideration the sections that aren't likely to be relevant. Perhaps each chunk could be cached to save processing power. Who knows.
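To make that concrete, here's a purely hypothetical sketch of the kind of control I mean - nothing like this exists in any agent I know of, and ContextChunk / relevance / depth are names I've just made up:

    from dataclasses import dataclass

    @dataclass
    class ContextChunk:
        # A slice of the working context (a doc, an API reference, a diff, ...)
        name: str
        text: str
        relevance: float = 1.0        # how useful I think it is for the immediate task
        depth: str = "full"           # "full", "summary", or "drop"
        cache_key: str | None = None  # so an unchanged chunk could be cached/reused

    def build_prompt(chunks: list[ContextChunk], budget_tokens: int) -> str:
        # Keep the most relevant chunks at the requested depth until the budget runs out,
        # instead of letting an opaque auto-compaction decide what survives.
        parts, used = [], 0
        for ch in sorted(chunks, key=lambda ch: ch.relevance, reverse=True):
            if ch.depth == "drop":
                continue
            body = ch.text if ch.depth == "full" else ch.text[:500]  # crude "summary" stand-in
            cost = len(body) // 4  # rough token estimate
            if used + cost > budget_tokens:
                break
            parts.append(f"## {ch.name}\n{body}")
            used += cost
        return "\n\n".join(parts)

Something along those lines, where I decide what gets summarised and what gets dropped, rather than the model.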
But I kinda see your point - assuming from your name you're not just a single-purpose troll - I'm still not sold on the cost-effectiveness of the current generation, and can't see a clear and obvious change to that for the next generation - especially as they're still loss leaders. Only if you play silly games like "ignoring the training costs" - i.e. the majority of the costs - do you get even close to the current subscription costs being sufficient.
My personal experience is that AI generally doesn't actually do what it's being sold as doing right now, at least in the contexts I'm involved with - especially as sold by somewhat breathless comments on the internet. Why are they even trying to persuade me in the first place? If they don't want to sell me anything, just shut up and keep the advantage for yourselves rather than replying with the 500th "You're Holding It Wrong" comment with no actionable suggestions. But I still want to know, and I'm willing to put the time, effort, and $$$ in to ensure I'm not deluding myself by ignoring real benefits.
The heat you can radiate away scales with the area available to dissipate from. Lots of small satellites have a much higher area-to-power ratio than fewer larger satellites. Cooling 10k separate objects is orders of magnitude easier than cooling 10 objects at 1000x the power each, even if the total power output is the same.
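A rough way to see the scaling, treating each satellite as a sphere of equal total volume (obviously a big simplification):

    import math

    def total_area_when_split(total_volume_m3: float, n_pieces: int) -> float:
        # Split one body into n equal spheres: each has volume V/n,
        # radius r = (3V/(4*pi*n))**(1/3), so total area grows as n**(1/3).
        r = (3 * total_volume_m3 / (4 * math.pi * n_pieces)) ** (1 / 3)
        return n_pieces * 4 * math.pi * r**2

    V = 1000.0  # arbitrary total volume, m^3
    one_big = total_area_when_split(V, 1)
    many_small = total_area_when_split(V, 1000)
    print(many_small / one_big)  # -> 10.0: splitting into 1000 pieces gives ~10x the radiating area

Under that assumption the gain grows with the cube root of the piece count, so lots of small units really do radiate better per watt.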
Distributing useful work over so many small objects is a very hard problem, and not even shown to be possible at useful scales for many of the things AI datacenters are doing today. And that's with direct cables - using wireless communication means even less bandwidth between nodes, more noise as the number of nodes grows, and significantly higher power use and complexity for the communication in the first place.
Building data centres in the middle of the Sahara desert is still much better than space in pretty much every metric, be it price, performance, maintenance, efficiency, ease of cooling, pollution/"trash" disposal, etc. Even things like communication network connectivity would be easier: for the amount of money this constellation mesh would cost, you could lay new fibre-optic cables to build an entire new global network reaching anywhere on earth, with new trunk connections to every major hub.
There are advantages to being in space - normally around increased visibility for wireless signals, allowing great distances to be covered at (relatively) low bandwidth. But that comes at an extreme cost. Paying that cost for a use case that simply doesn't get much advantage from those benefits is nonsense.
Whatever sat datacenter they build, it would run better/easier/faster/cheaper sitting on the ground in Antarctica, or floating on the ocean, than it would in space, without the launch costs. Space is useful for those activities that can only be done from space. For general computing? Not until all the empty parts of the globe are full.
This is a pump-and-dump bid for investor money. They will line up to give it to him.
Yup - my example of the Sahara wasn't really a specific suggestion, so much as an example of "the most inconvenient, inhospitable part of the earth's surface is still much better than space for these use cases". This isn't Star Trek; the world doesn't match sci-fi.
It's like his "Mars Colony" junk - and people lap it up, keeping him in the news (in a not explicitly negative light - unlike some recent stories....)
Space is so expensive that you can power it pretty much any way you want and it will be cheaper. Nuclear reactor, LNG, batteries (truck them in and out if you have to). Hell, space based solar and beam it down. Why would there ever be an advantage to putting the compute in space?
Or burn them in a furnace. Pretty much any way you can think of to accomplish something on earth is vastly cheaper, easier, and faster than doing it in space.
Why would they bother to build a space data center as such massive monolithic structures at all? Direct cables between semi-independent units the size of a Starlink v2 satellite would do. That satellite size is large enough to encompass a typical 42U server rack even without much physical reconfiguration. It doesn't need to be a "warehouse-sized building, but in space", and neither does it have to be countless objects kilometers apart from each other beaming data wirelessly. A few dozen wired together as a cluster is much more than sufficient to avoid incurring any more bandwidth penalties on server-to-server communication with correlated workloads than we already have on earth for most needs.
Of course this doesn't solve the myriad problems, but it does put dissipation squarely in the category of "we've solved similar problems". I agree there's still no good reason to actually do this unless there's a use for all that compute out there in orbit, but that too is happening, with immense growth and demand expected from pharmaceutical research and various manufacturing capabilities that require low/no gravity.
Not just a 42U rack, but a 42U rack that needs one hundred thousand watts of power, and it also needs to be able to remove one hundred thousand watts of heat out of the rack, and then it needs to dump that one hundred thousand watts of heat into space.
And it needs to communicate the data to and from a ground-based location. It’s all of the problems with satellite internet, but in your production environment!
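For a rough sense of scale on that 100 kW figure, a back-of-the-envelope Stefan-Boltzmann estimate - assuming a ~50 C (323 K) radiator that sees only deep space and ignoring absorbed sunlight, which makes this optimistic:

    SIGMA = 5.670e-8  # Stefan-Boltzmann constant, W/(m^2*K^4)

    def radiator_area_m2(power_w: float, temp_k: float, emissivity: float = 0.9,
                         sides: int = 2) -> float:
        # Area of an ideal flat radiator panel rejecting `power_w` to deep space,
        # radiating from `sides` faces at temperature `temp_k`.
        return power_w / (sides * emissivity * SIGMA * temp_k**4)

    print(radiator_area_m2(100_000, 323))  # ~90 m^2 of double-sided panel per 100 kW rack

So every such rack drags along on the order of 90 m^2 of radiator, before accounting for sunlight or radiator degradation.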
Attach heat pipes with that stuff to the chips, as is common now, or go the direct route via substrate-embedded microfluidics, as is being explored at the moment.
Radiate the shit out of it by spraying it into the vacuum, dispersing into the finest mist with highest possible surface, funnel the frozen mist back in after some distance, by electrostatic and/or electromagnetic means. Repeat. Flow as you go.
This is sort of where I think he is going with it. Run the compute part super cold (-60C) in a dielectric fluid, maybe even at a low pressure. It boils off, gets collected, and is then condensed into something way hotter - like boiling-water hot. This is sent through a high-temperature radiator for heat dispersion (because Stefan-Boltzmann has a damned 4), and then pumped back into the common storage area. Cycle indefinitely.

Beyond the simple space-whatever nonsense, there is a nugget of a good idea in there. Cold things are going to have less internal resistance, so they will produce less waste heat. If you can keep them at a constant temperature via submerged cooling, they are also going to suffer less thermal stress from heat fluctuations. So the vacuum of space becomes the perfect insulator. You can't have humans getting into them anyway, because then you have to reheat and re-cool, causing stress on the system. Just have to accept your slow component losses.

Microsoft and IBM have been working on the same basic concept for a while (a decade plus); Elon is just throwing 'Space!!' into the equation because of who he is. I think it's 50% hype and 50% this is where the industry is going regardless. I always assumed they would just find an abandoned mine or something. But the always-cold, thermally-stable, no-humans-allowed data center is coming. We are hitting the point where the upfront cost of doing it is overshadowed by the tail cost savings.
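The "damned 4" part is easy to put numbers on: radiated flux goes as T^4, so condensing the heat into something much hotter before radiating shrinks the radiator dramatically. Idealised blackbody numbers, just to show the ratio:

    SIGMA = 5.670e-8  # W/(m^2*K^4)

    def flux_w_per_m2(temp_c: float) -> float:
        # Ideal blackbody radiated flux at a given radiator temperature.
        return SIGMA * (temp_c + 273.15) ** 4

    cold = flux_w_per_m2(-60)   # radiate at the chip-bath temperature: ~117 W/m^2
    hot  = flux_w_per_m2(100)   # radiate at "boiling water hot":      ~1099 W/m^2
    print(hot / cold)           # ~9.4x less radiator area for the same heat load

The catch, as the replies below get at, is that the heat pump doing that temperature lift consumes power you also have to reject.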
> Radiate the shit out of it by spraying it into the vacuum, dispersing into the finest mist with highest possible surface, funnel the frozen mist back in after some distance, by electrostatic and/or electromagnetic means. Repeat. Flow as you go.
Even if that worked, you don’t gain much. It’s not the local surface area that matters — it’s the global surface. A device confined within a 20m radius sphere can radiate no more heat than a plain black sphere of the same radius.
There are only two ways to cheat this. First, you can run hotter. But a heat pump needs power, and you need to get that power from somewhere, and you need to dissipate that power too. But you can at least run your chips as hot as they will tolerate. Second is things like lasers or radio transmitters, but those are producing non-thermal output, which is actually worse at cooling.
At the end of the day, you have only two variables to play with: the effective radiating surface area and the temperature of the blackbody radiation you emit.
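Putting numbers on that 20m sphere bound (ideal blackbody, deep-space background, sunlight ignored):

    import math

    SIGMA = 5.670e-8  # W/(m^2*K^4)

    def max_radiated_power_w(radius_m: float, temp_k: float) -> float:
        # Upper bound on heat a device enclosed in a sphere of this radius can reject:
        # no internal geometry can beat a black sphere of the same radius.
        return SIGMA * 4 * math.pi * radius_m**2 * temp_k**4

    print(max_radiated_power_w(20, 300))  # ~2.3 MW at a 300 K (27 C) effective surface
    print(max_radiated_power_w(20, 400))  # ~7.3 MW if you can run the surface at 400 K

So a 20m envelope caps you at a few megawatts unless you run the surface very hot.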
*hits crack pipe used by Elon, but only after washing it thoroughly* What if we used the waste heat to power a perpetual motion device that generated electricity?
> using wireless communication means even less bandwidth between nodes, more noise as the number of nodes grows, and significantly higher power use
Space changes this. Laser-based optical links offer bandwidths of 100-1000 Gbps with much lower power consumption than radio-based links. They are more feasible in orbit due to the lack of interference and fog.
> Building data centres in the middle of the sahara desert is still much better in pretty much every metric
This is not true for the power generation aspect (which is the main motivation for orbital TPUs). Desert solar is a hard problem due to the need for a water supply to keep the panels clear of dust. Also the cooling problem is greatly exacerbated.
You don’t need to do anything to keep panels with a significant angle clear of dust in deserts. The Sahara is near the equator but you can stow panels at night and let the wind do its thing.
The lack of launch costs more than offsets the need for extra panels and batteries.
“The reason I concentrate my research on these urban environments is because the composition of soiling is completely different,” said Toth, a Ph.D. candidate in environmental engineering at the University of Colorado who has worked at NREL since 2017. “We have more fine particles that are these stickier particles that could contribute to much different surface chemistry on the module and different soiling. In the desert, you don’t have as much of the surface chemistry come into play.”
You’re not summarizing the article fairly. She is saying the soiling mechanisms are environmentally dependent, not that there is no soiling in the desert. Again, it cites an efficiency hit of 50% in the Middle East. The article later notes that they’ve experimented with autonomous robots for daily panel cleaning, but it’s not a generally solved problem, and it’s not true that “the wind takes care of it.”
And you still haven’t provided a source for your claim.
I’m saying the same thing she is: that soiling isn’t as severe in the desert, not that it doesn’t exist.
The article itself said the maximum was 50% and that it was significantly less of a problem in the desert. Even 50% still beats space by miles - that only increases the per-kWh cost by ~2c; the need for batteries is still far more expensive.
So sure I could bring up other sources but I don’t want to get into a debate about the relative validity of sources etc because it just isn’t needed when the comparison point is solar on satellites.
You are again misquoting the article. She did not say soiling was "significantly less of a problem" in the desert. She in fact said it "requires you to clean them off every day or every other day or so" to prevent cement formation.
You claimed it was already a solved problem thanks to wind, which is false. You are unable to provide any source at all, not even a controversial one.
And that's just generation. Desert solar, energy storage and data center cooling at scale all remain massive engineering challenges that have not yet been generally solved. This is crucial to understand properly when comparing it to the engineering challenges of orbital computing.
Now you make me want to come up with a controversial source. The Martian rovers continued to operate at useful power levels for decades without cleaning.
Thank you for providing a source. That’s an early-stage research paper, not the proven solution you originally implied. There are tons of early-stage research papers on all these problems, on earth and in space. Often we encounter a bunch of complications in applying them at scale, such as dew-related cementation[1], which is a key reason why they haven’t been deployed at sufficient scale.
That you point to the Mars rover, a mission with an extremely tight power budget, as proof that soiling doesn’t pose an impediment to mega-scale desert solar farms only underscores the flaw in your reasoning.
“I don’t want to get into a debate about the relative validity of sources etc”
> Not the proven solution
Yet you quote a paper saying it can work. “This impact can have a positive or negative effect depending on the climatic conditions and the surface properties.”
I have no interest in debating with you because I don’t believe you are capable of an honest debate here. The physics doesn’t change, and the physics is what matters.
> doesn’t pose an impediment
Nope. I said it beats “space”, not that soiling doesn’t exist. That’s what you have to demonstrate here, and you have provided zero evidence whatsoever supporting that viewpoint. Hell, they could replace the entire array every 5 years and it would still beat space. Even if what you said was completely true, you would still lose the argument.
The argument here is simply over your false claim that "You don’t need to do anything to keep panels with a significant angle clear of dust in deserts." Your only source does not, in fact, establish that, and cementation is in fact a challenge with desert solar -- something that happens much faster than every five years.
Repeating unsupported claims and declaring yourself the winner does not, it turns out, actually help you win an argument.
Indeed, that seems unnecessarily complex for what is actually needed. I don't understand why the great-grandparent comment seems to suggest it's an "unsolved" problem - as if grid-scale solar buildouts don't already have examples of things like motorized brushes on rails for exactly this.
And it's always a numbers game - sure they're not /perfect/, but a few % efficiency loss is fine when it's competing against strapping every kilo of weight to tons of liquid hydrogen and oxygen and firing it into space. How much "extra" headroom to buffer those losses would that equivalent cost pay for?
And solar panels in space degrade over time too - between 1-5% per year depending on coatings/protections.
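Compounded over a typical satellite design life, that range matters a lot - simple compounding on the per-year figures quoted above:

    for rate in (0.01, 0.03, 0.05):
        remaining = (1 - rate) ** 10  # fraction of original output after 10 years
        print(f"{rate:.0%}/yr degradation -> {remaining:.0%} of original output after 10 years")
    # 1%/yr -> ~90%, 3%/yr -> ~74%, 5%/yr -> ~60%

So at the bad end of that range you've lost 40% of your generation before the hardware is even a decade old.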
The same panel produces much more electricity in space than at the bottom of the atmosphere, because the atmosphere reflects and absorbs a significant fraction of the light and the panel isn't limited to daylight hours (rough numbers below). Additionally, the panel needs less glass or no glass in space, which makes it lighter and cheaper.
Launch costs have shrunk significantly thanks to SpaceX, and they are projected to shrink further with the Super Heavy Booster and Starship.
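Rough numbers on the panel-output point: ~1361 W/m^2 above the atmosphere versus ~1000 W/m^2 peak at the surface are standard figures, but the ground capacity factor and orbital sunlit fraction below are my own ballpark assumptions, not anything from the thread:

    SOLAR_CONSTANT = 1361.0   # W/m^2 above the atmosphere
    SURFACE_PEAK   = 1000.0   # W/m^2 standard peak irradiance at ground level
    HOURS_PER_YEAR = 8766

    # Assumed duty cycles (ballpark): a good desert site vs. a mostly-sunlit orbit.
    ground_capacity_factor = 0.20
    orbit_sunlit_fraction  = 0.90

    # Incident solar energy per m^2 of panel; panel efficiency cancels in the ratio.
    ground_kwh_per_m2 = SURFACE_PEAK * ground_capacity_factor * HOURS_PER_YEAR / 1000
    orbit_kwh_per_m2  = SOLAR_CONSTANT * orbit_sunlit_fraction * HOURS_PER_YEAR / 1000

    print(ground_kwh_per_m2, orbit_kwh_per_m2, orbit_kwh_per_m2 / ground_kwh_per_m2)
    # roughly 1750 vs 10700 kWh/m^2/yr, i.e. ~6x - and most of that comes from the
    # duty cycle, not from atmospheric losses

Whether that ~6x per panel is worth the launch, cooling, and maintenance costs is the real argument.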
Space doesn't really change it, though, because the effective bandwidth between nodes is reduced by the overall size of the network and how much data the nodes need to relay between each other.
And profiting from it, though less directly than "$ for illegal images". Even if it weren't behind a paywall (which it mostly is), driving more traffic for more ads for more income is still profiting from illegal imagery.