Hacker News

That email (from Dec 2017) ends with “Such ecosystems come with incredible costs. For instance, rust cannot even compile itself on i386 at present time because it exhausts the address space.”. I presume Theo is actually complaining about using more than 3 GB (?) of memory, but still it really shows the different cost-versus-benefit decisions that we all make.
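For context, the ~3 GB figure is consistent with ordinary 32-bit address-space arithmetic. A back-of-the-envelope sketch (assuming the common 3 GiB user / 1 GiB kernel split; the exact split varies by OS and configuration):

```python
# Back-of-the-envelope i386 virtual address-space arithmetic.
# Assumes a 3G/1G user/kernel split, which is common but OS-dependent.
total_vas = 2**32            # 4 GiB of 32-bit virtual addresses
kernel_reserved = 2**30      # ~1 GiB typically reserved for the kernel
user_max = total_vas - kernel_reserved
print(user_max // 2**30, "GiB usable per process")  # -> 3 GiB
```

So a build step that needs more than about 3 GiB of memory in one process simply cannot run natively on i386, no matter how much RAM or swap the machine has.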


Is it actually reasonable to expect compiler toolchains to work on old or underpowered hardware? Or is the real problem that cross-compilation is still a special case, rather than being the only way compilers work? If it wasn't still normal to depend on your target environment being the same as your build host, would anyone really want to do serious development work directly on their RPi/Gameboy/watch/whatever, just because that is the target runtime environment?


I think latching onto this example misses the crux of his rationale:

> In OpenBSD there is a strict requirement that base builds base. So we cannot replace any base utility, unless the toolchain to build it is in the base.

> Such ecosystems come with incredible costs.

Basically, the cost of adding rust to the OpenBSD base system currently far outweighs the proposed benefits (reimplementing fileutils), especially since people will probably not want to pour in the effort needed to rewrite those utilities with strict POSIX compliance (which is another requirement).

He's not saying rust isn't useful but that it wouldn't be a net benefit to have a hard dependency on it in the base system. In the BSD world folks take a very strict and conservative view of what can go in base (and for good reason IMHO).

This is different from the GP comments about just making it possible to use rust programs/libraries in OpenBSD, so we're definitely on a tangent here.


I am wildly guessing that Theo’s beef is more that rust uses a lot of memory (paraphrase: 640kb should be enough for anybody). OpenBSD does integration builds on a variety of different systems, and maybe Theo noticed the OpenBSD/386 build failing due to lack of necessary memory?


That seems a reasonable guess, but it brings me straight back to wondering why such a diversity of build environments is necessary or useful. And my wild guess is that it's mostly because cross compilation is still a second class citizen in most languages today. Though I guess it could be a kind of cultural expectation that you should be able to compile the whole OS on the hardware you're running it on.
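For what it's worth, on the Rust side cross-compilation is mostly a matter of naming a target triple. A minimal sketch of a Cargo config (assuming the tier-3 `i686-unknown-openbsd` target; the linker name here is hypothetical and host-specific):

```toml
# .cargo/config.toml — hypothetical cross-build setup, not OpenBSD project policy
[build]
target = "i686-unknown-openbsd"     # tier-3 rustc target

[target.i686-unknown-openbsd]
linker = "i686-openbsd-cc"          # hypothetical cross-linker on the build host
```

The catch is that tier-3 targets generally ship no prebuilt standard library, so you end up building std from source on a capable host anyway, which is precisely the kind of external build dependency the OpenBSD base-builds-base policy rules out.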


My understanding is that the policy of the project is that the base system must not be cross compiled. My understanding of why is that they want to be able to fully bootstrap from base itself.


The rust compiler runs on i386; the complaint is that it can't compile the rust compiler there, because the build uses more memory than a process can address on i386.


That’s what the person was getting at. Does it matter if it can be cross-compiled elsewhere?


It matters to Theo, due to policies created for specific objectives. It does not matter to many other people.


I guess at the end of the day this is all that really matters. If there are specific goals that Theo has around this, then there's no point in me second-guessing whether those are "reasonable"; it's a matter of values and preferences and whatever.

Whereas if this conversation was about something with broader stewardship, like _Linux_, I'd be saying this is silly, you shouldn't be compromising other things just so you can build your RPi kernel on an RPi.


Yes. My recollection is that around that time, llvm and/or rustc itself would sometimes go over that. I don't know off the top of my head what memory usage looks like right now.



