Modern Linux in an ancient PC (yeokhengmeng.com)
198 points by yeokm1 on Jan 6, 2018 | 126 comments


When building stuff on a really slow platform, a trick I have used is to set up distcc on the slow computer and at least one really fast computer with a compiler targeting the slow computer's arch. Set the slow computer's distcc job count to 0 so it compiles nothing locally, but it still does the configure/linking/etc. This avoids almost all the cross-compilation issues you might run into while getting most of the speed benefits.
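A minimal sketch of that setup (hostname and subnet are made up; on Gentoo, crossdev can build the i486-targeting compiler for the fast box):

    # Fast machine: install a gcc built for the slow box's arch, then serve it
    distccd --daemon --allow 192.168.1.0/24

    # Slow 486: list only the remote host, so zero compile jobs run locally;
    # preprocessing, configure and linking still happen on this machine
    echo "fasthost/8" > ~/.distcc/hosts

    # Build through distcc as usual
    make -j8 CC="distcc gcc" CXX="distcc g++"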


Isn't linking the step which needs the most memory?


Yes, but you can't easily avoid doing linking in the local machine, because the output of linking depends on what system libraries there currently are. Also, it doesn't need _that_ much memory. You're still doing all the CPU-intensive parts on the powerful machine, thus saving time and likely keeping the non-powerful machine from overheating.


This is exactly how I work on 9front. All my compiles and computationally heavy stuff gets done on my loud cpu/fs server, while I can use my raspberry pi (or quite old thinkpad when I'm not at my desk) as the terminal and not have to deal with noise and space requirements of another computer or dual boot/virtualize on my main rig.


I'll look into distcc next time. Thanks!


Can you do a quick guide? This sounds extremely usable!



This is neat, I think I have a laptop with a 486 somewhere around here so I might try this at some point. The only trouble is that said laptop only has infrared for I/O (not even floppy) :-P. Also I have one with a Pentium, although that is a bit faster, obviously.

However, I wonder if it would be faster if you went with a Linux From Scratch-like approach and used some lightweight init system and only installed minimal stuff - perhaps replacing some of the more heavyweight GNU tools with alternatives from suckless [1].

[1] https://core.suckless.org/


I'd recommend trying to cross-compile the binaries on another machine to save your sanity.

I built an LFS back around 2002 when I was in University and used it for over a semester and a half. It made me appreciate package management. :-P

I use Gentoo myself these days, which is pretty much LFS + package management.


Yes, I forgot to mention that the build should be done from a more powerful system :-).


I am in the same boat. I have a perfectly working ThinkPad 560E (Pentium) but no floppy or other adapter to really connect to it. I use it to play Doom, but adding some other old games would be nice. Hm, come to think of it, my old tablet has infrared, maybe I could send some files that way ...


The IR port can be used for networking too, see https://linux.die.net/man/4/irnet

But it first needs a working system installed. I never tried it (actually I did the opposite), but it should be possible to install a minimal system on a similarly specced virtual machine, then write the created disk image to a physical disk, stick it into the laptop and boot it.
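Roughly like this with QEMU (sizes and device names are hypothetical; triple-check /dev/sdX before writing):

    # Install into a raw image using a VM with similarly tiny specs
    qemu-img create -f raw disk.img 500M
    qemu-system-i386 -m 8 -hda disk.img -cdrom install.iso -boot d

    # Then clone the finished image onto the real drive, e.g. through a
    # CF/IDE adapter plugged into a modern machine
    dd if=disk.img of=/dev/sdX bs=1M status=progress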


It is the "write the created disk image to a physical disk" part that is the hard bit though :-P. It has some tiny HDD. Although now that I think about it, I wonder if it can be used with another laptop that I have. That one only has a floppy drive, but it is still better than nothing.


Yes, probably. Linux isn't Windows and is deeply modular these days, to the point that you can install a full system plus firmware onto an AMD machine, then move the disk to an Intel one, or the other way around, and expect everything to work: drivers aren't hardcoded anywhere (there are exceptions, but they're rare) and the system loads what is needed for the hardware it finds at each boot. But on a system with very constrained memory and storage, one could be forced to recompile the kernel with just the bare necessary static drivers to make the hardware work, which could cause problems when moving the disk to different hardware.
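On the kernel side, that minimal static build would go something like this (a sketch; which drivers to switch on depends entirely on the target machine):

    make tinyconfig    # start from the smallest possible kernel config
    make menuconfig    # enable just the needed drivers, built-in (=y), no modules
    make -j4 bzImage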

Now that I think of it, there are floppy "emulators" on eBay which offer a compatible floppy drive interface to the system but use flash memory as storage. They're made in slightly different versions to be compatible with different systems, including home computers and musical instruments. They might be a solution for a fast install through floppies without actually using floppies. There are also ATA to CompactFlash converters, which are even cheaper because the two interfaces are nearly 100% compatible. I used the latter years ago to put a firewall (pfSense IIRC) CompactFlash card into a small PC which had only ATA ports. It booted from it and worked without problems.


> The only trouble is that said laptop only has infrared for I/O (not even floppy)

Doesn't it have a serial or parallel port?


Ah yes, it has both, I forgot about them - although I lack a way to use them :-P. And TBH I was thinking in terms of something bootable :-).


What model of laptop?


HP Omnibook 600C. It has a 75MHz 486DX4 and 8MB of RAM.


According to the Internet it has one Type III (or two Type II) PCMCIA slots in addition to the parallel and serial ports?

Maybe something like: https://www.amazon.com/PCMCIA-Compact-Flash-Type-Adapter/dp/... could work?

As far as I can figure out, it should be possible to copy files directly over a serial cable (to/from COM1 on DOS, /dev/ttyS0 on Linux).
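On the Linux end, the bare-bones version of that could look like this (a sketch; assumes a null-modem cable and matching settings on the DOS side, and note there's no error correction this way):

    # Raw serial port: 115200 baud, 8 data bits, no parity
    stty -F /dev/ttyS0 115200 cs8 -parenb raw

    # Receive a file the DOS side is sending
    cat /dev/ttyS0 > received.zip

    # ZMODEM (rz/sz from the lrzsz package) is far more robust if you can
    # get a copy onto both ends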

But for sanity, being able to fix a non-booting disk is recommended, so I'd probably try with a second-hand pcmcia hd, usb adapter or something like the above.

I also learned MS-DOS comes with INTERLNK.EXE and INTERSVR.EXE - but I wasn't able to figure out if there's a sane way to talk to that from Linux.

http://www.pcxt-micro.com/dos-interlink.html


I wonder how well a BSD, such as OpenBSD, runs, since they have kept supporting more ancient architectures. I remember having a quite low-end 486 laptop around 2000. Linux was pretty much unusable, but OpenBSD ran great on the hardware.


I remember cross-compiling a very stripped-down Linux for a 25 MHz 486 laptop with 20 MB of RAM around 2000. I had tried a standard Debian first but it was dog-slow. The trick was stripping everything down so it fit into 20 MB of RAM without swapping; then it ran fine. I tweaked the compilation options of everything on there to squeeze every last bit of RAM out of it. I wrote most of a C++ web app on that machine, with an older Opera version to test with. That was a super fun project.
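In today's terms that kind of squeezing would look something like this (a sketch using common autoconf/gcc options, not my exact settings from back then):

    # Optimise for size instead of speed
    export CFLAGS="-Os -fomit-frame-pointer"

    # Build without optional extras, and strip binaries on install
    ./configure --disable-nls --without-x
    make && make install-strip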

Maybe the slow startup times from the article are swap-related. Shutdown taking five minutes is a dead giveaway.


I don't mind giving this a shot once I have the time for it! :)


Please do!


I had OpenBSD more recently on my Toshiba Libretto (Pentium 166) and it was great. You can even boot the installer from floppies, for machines without a CD drive or that can’t boot from it.


I ran FreeBSD 4.x on an original Pentium for several years from 2000-2005ish. Slow compilation of the world and ports, but it ran fine.


I was confused about the 11-minute bootup--my old 486/66 running Slackware took maybe 3-4 minutes to boot in 1994--but then I watched the video: it's everything that we've added to Linux systems since 1994 that makes the startup slow. It's mostly post-kernel services and tasks that are slowing things down on this old PC. I wonder if he could speed things up further by removing some of the modern conveniences and going back to a basic system that didn't run much beyond inetd, getty, and crond.


With Gentoo it should be possible to reduce the startup services, I guess. I just ran with the default configuration suggested by the Handbook.
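For anyone trying the same, trimming OpenRC services looks roughly like this (a sketch; which services are safe to drop depends on the install):

    # List what each runlevel starts
    rc-update show

    # Remove services a bare-bones 486 doesn't need
    rc-update del netmount default
    rc-update del cronie default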


I note that the INIT line appears 30 seconds after decompression.

Ultimately you don't really need anything initialized to have a working system.


Great work! I love this. Unfortunately, "science projects" like this are the first thing you have to give up when you're married with children.


I've not found my two kids and wife to be such a burden on me. What I can't do is watch television all night, or go out drinking all night. But productive stuff, that's pretty easy to fit in.


Hahaha, you guessed that right for me!


You can see the video demo: https://www.youtube.com/watch?v=4qSziR6sD8Q

Detailed install instructions: https://github.com/yeokm1/gentoo-on-486


Not the OP, but here's their hackathon presentation as well: https://www.youtube.com/watch?v=w-RN0EkxWxA

(Disclosure: I'm one of the organisers.)


I have to mention that this blog post is extended work based on knowledge gained from that hackathon.


Just yesterday I installed Lubuntu 17.10 on an old Athlon XP computer. Firefox kept crashing, so I did a bit of research; it turned out the CPU doesn't support SSE2, which most browsers require these days (not that I blame them). The only modern-ish looking browser that has worked so far is NetSurf[0], but lots of sites have issues with it.

[0] http://www.netsurf-browser.org/


I have been down a similar path. I have an old HP laptop that used to run Windows XP. It's about a 2005 model and has 1 GB of RAM. LXDE is the only desktop that will give reasonable performance. Despite claims of being light, I found Xfce-based desktops slowed the computer to a crawl.

The second thing I found was that all modern browsers literally consumed all resources on the computer, making the laptop unusable. I tried NetSurf, which was fine on the sites it worked on. In the end I found it best to use text-based browsers. I use links2.


I compiled the browser a few minutes ago; rather interesting browser. It crashes on SSL/TLS pages for me (I'll report it / try to debug it more later); gdb is showing something about curl_multi_perform() / fetch_curl_poll.


OMG! I stopped into Super Silly when I was in SG a few weeks ago (I was there with Kai) and wondered what you were doing with that ancient kit. What a brilliant idea.


Thanks! You have to thank my teammate Hui Jing for that too


I wonder how much of the slowness is due to disk swapping since the RAM capacity is so low.


For the era, 64 megs was considered incredible.

For comparison, my first Linux box (a 386SX, 20 MHz) had only 4 megs of RAM.


My first 486DX PC also only had 8MB of RAM until my parents threw it out when I was a kid. I was personally shocked as well when I saw that this 486 PC had 64MB.


The 486 computer I had back in 1992 could support up to 16 MB of memory IIRC. It was clocked at 25 MHz, but I did get the math coprocessor and upgraded it to a 75 MHz DX4. I knew of some models that could support up to 32 MB of RAM, but I wasn't aware of any that would support 64 MB (which was quite expensive at the time).


Yes. Some friends of mine once configured a 486 with 64 megs as a sort of practical joke, or cool thing to do. They temporarily cannibalised the other PCs at the school for memory SIMMs.


I remember a friend paying around $800 (USD) for a 32 meg 72-pin SIMM back in 1994. I think a year or so later, RAM prices really spiked up...


I remember that. I think it was because of demand plus an earthquake in Asia or something like that.


I concur. I can't determine that from the bootup sequence though. Then again, 64MB of RAM is considered plentiful for a 486 PC.


Having started in the 386 era, and having upgraded my AMD 386 from its original 2MB of RAM to the full 8MB, my first reaction on seeing that "64MB SDRAM SIMM-72" spec in this article was "wow, that's a lot of RAM!"


72-pin SIMMs were plain async DRAM and later EDO. Never SDRAM.


Ok my mistake, edited the post to just "RAM".


Swap is activated pretty late, at 5:59, and you can clearly see there isn't much disk activity going on.

A barebones Debian installation should be using < 50 MB at boot, so a custom Gentoo could be < 40 MB.


I remember running Slackware 3.6 on a 486 DX 100 (100 was the CPU MHz) with 8 MB of RAM about 20 years ago. The Linux kernel was 2.0.36 or something like that.

Everything ran great back then!


NetBSD could run much better.


So could MS-DOS ;)


But MS-DOS just brings fun, boring fun, with FreeDOS and ScummVM adventures :D NetBSD gives you Cataclysm DDA over SSH.


I rebooted my server but it seems to have crashed again. Let me adjust my DigitalOcean droplet.


Time to switch to Nginx with a FastCGI cache. ;) (If the timeout is set to a couple of seconds, you'll serve thousands of simultaneous requests easily.)
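The relevant nginx bits would look something like this (a sketch; the paths, socket and zone name are made up):

    # http block: define the cache
    fastcgi_cache_path /var/cache/nginx levels=1:2 keys_zone=wpcache:10m inactive=60m;

    # PHP location block: serve cached copies for a few seconds
    location ~ \.php$ {
        fastcgi_pass unix:/run/php-fpm.sock;
        include fastcgi_params;
        fastcgi_cache wpcache;
        fastcgi_cache_key "$scheme$request_method$host$request_uri";
        fastcgi_cache_valid 200 5s;
    }

Even a 5-second cache means a spike hits PHP at most once per URL every 5 seconds; everything else is served straight from the cache by nginx.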


Do you often find yourself having to adjust capacity under traffic spikes? I'd be happy to help smooth out the traffic using off-server caching (see https://www.cachoid.com ). The freemium plan would work well. Feel free to email me joe@


The demo is complete. Get rickrolled at 17mins :)


Wow, it's a lot slower now than it was with 1992 Linux. I remember a key benchmark was how long it took to compile the kernel (I was astonished at how fast the Pentium Pro was when it came out).


Another example of the shocking bloat of modern software.

I had a 486 with 16 megs of RAM and was able to browse the web (Netscape 1.0!), compile kernels, and have something like 8 people logged in remotely checking email.


I'll trade you an 80's era mainframe that had all the bank branch offices of a small country online with only a couple of megabytes of RAM.


Out of curiosity I recently installed Debian Stretch on my Pentium Pro 200 (dual CPU). The text based installer warned that my 128MB of RAM wouldn't be enough for it to finish, but luckily it still succeeded. The system boots within a minute or so and is fairly usable. Running X on s3fb with i3 as the WM, the selection of usable GUI programs is rather limited then admittedly. Still, I guess we really came a long way from 486 to PPro. :-)


I had a stack of ~25 floppies that I used to download linux with X and some programs to run on a 486.

Software has gotten much bigger...



I'd be interested to know what TomsRtBt is like on that machine.


Finally an Intel PC safe from Meltdown and Spectre. About time.


I was actually working on this project before those attacks were announced. When I heard that pre-1995 CPUs were unaffected, I couldn't help but LOL and included a reference in my blog post as well.


The 486 has both branch prediction and a cache, but I'm not sure if it's vulnerable.

Reference: https://gist.github.com/ErikAugust/724d4a969fb2c6ae1bbd7b2a9...

Potential patch: https://gist.github.com/ErikAugust/724d4a969fb2c6ae1bbd7b2a9...


I just Googled to double check. The 486 does NOT have branch prediction.

Source 1: https://books.google.com.sg/books?id=MLJClvCYh34C&pg=PA122&l...

Source 2: https://books.google.com.sg/books?id=QzsEAAAAMBAJ&pg=PA59&lp...


Wow. So the 486 cores used for the Intel ME are completely Spectre-resistant.

That is hilarious.


I always wonder... why 'waste' so much time cross-compiling #Linux when #FreeBSD [1] supports such a 486 computer out of the box, along with binary packages, even in the latest 11.1-RELEASE version?

[1] https://www.freebsd.org/releases/11.1R/announce.html


It is not like PC Unixes ran blazingly fast or were terribly useful back in the days of the 486. Solaris and SunOS/386 were probably the least stable, followed by NeXT (not x86) and NetBSD. They may have seemed fine to play around with, but would crash horribly under any heavy use like CAD or Mathematica.


Wow. This is a LOT of effort to play around with old hardware and OSes, but it’s interesting to see that it’s possible. “Science Project,” indeed!


Haha. I spent several weeks of sleepless nights on this :)


This is kinda funny, because you can still install OpenBSD/NetBSD on a 486 machine today using their floppy installers.


This is neat and I have some older systems (like the Transmeta based Gateway Connected Touch Pad) that I'd like to get back up and running and there are some good ideas in there. However, calling a 486sx "ancient" is a bit of a stretch... (my oldest working computer is from 1981 and that's far from ancient).


Well for me, the first computer I used was a 486 when I was 5. So it's "ancient" enough for me :)


Oh wow, nice.

There appear to be no decent photos of this thing on the internet, btw.


By "this thing", you mean the GCT? Googling for "Gateway Connected Touchpad" and looking at images brings up ~ 8 images. My "other thing" is a Nascom 2.

(The GCT would be a lot more appealing if the LCD display was better, but it's DSTN and pretty horrible compared to modern displays).


I wonder how fast the 486 (its logical design) could run if Intel produced it today, i.e. with modern fab technology.


They do, and it's what runs the Intel ME.


Also the Quark X1000 SoCs: they have an almost unmodified 486 core (there are a few Pentium/586-level instructions added) running at 400MHz. Compare the diagram on page 19 of this:

https://www.intel.com/content/dam/support/us/en/documents/pr...

...with this:

https://en.wikipedia.org/wiki/File:80486DX2_arch.svg

A lot of the text in the PDF above was copy-pasted from the 486 doc too.


Interesting, now I wonder why they don't let ME share a core (or parts of it) with the main CPU. That could impact performance a little, which could be detectable, but I mean, it's a "Management Engine", not a "Spying Engine", right?


They have that idea too in "System Management Mode", which is separate from the Management Engine.


Similarly, the Atoms were mostly a Pentium on a sub-20nm process (with added goodies here and there).


Yeah the kernel does a great job in maintaining backwards compatibility, the rest of the stack not so much...


Kernel, yes. Another yes for the GCC compiler.

Probably yes and no for the rest. The fact that Git, Python 3, SSH and nginx worked fairly OK implies they probably did some testing too.


> The fact that Git, Python 3, SSH and nginx worked fairly OK implies they probably did some testing too.

Or it's a simple side effect of them being portable software. Since the same code has to work on very different ISAs like 32-bit x86 and 32-bit ARM, any architecture-specific code has to be cleanly separated, with a portable fallback. As long as the compiler can still target the "486" architecture, they'll work.

It's a different story with anything which depends on lower-level platform details, like the kernel or glibc's pthread.
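Concretely, "still targeting the 486" is just compiler flags (a sketch):

    # 32-bit code restricted to the 486 ISA: no CMOV, no MMX/SSE, so the
    # compiler emits nothing the chip doesn't implement
    gcc -m32 -march=i486 -mtune=i486 -O2 -o hello hello.c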


> The kernel does a great job in maintaining backwards compatibility, the rest of the stack not so much...

No disagreement. But so far I have not encountered any issues with parts of Linux other than the kernel, so the rest of the stack seems to be doing quite well.


It's probably more expensive to procure 486s than modern low-end chips nowadays.


I don't disagree. In fact, even the Raspberry Pi is more powerful than this PC.


This is why I will always use GNU/Linux on my computers, the freedom to use your own hardware for as long as you want outweighs anything Microsoft or Apple has to offer.


But one day, the 486 will be deprecated. Just like the 386 was...


Yeah, I was kinda sad when the arch that got Linux going was removed.


Why did I get downvoted?

Butthurt Apple fanboys can't stand my comment?

HN is such a joke with these people.


This is awesome!

I, too, have a soft spot for old hardware, so this makes me smile!


I completely forgot about good old LILO boot.


That's one way to avoid Spectre.



Cool project, but at some point we have to move past the "Make XXX great again" joke.


It should be able to run an X server as well; Xvesa with ATerm and Fluxbox should work fine.


Sounds like a plan!


Possibly look into the resurrected TinyX project from Tiny Core Linux. It may (?) run better than stock Xorg.


I tried Tiny Core Linux on this PC. It wouldn't boot from the install disc; it kept going into a reboot loop.


I ran Slackware on a 486 (DX 133 I think) with 8 MB of RAM in 1999.


me too!


Same here. I still have my Slackware 3.5 CDs. A 4-disc set!

(Edit: mine was just a 486/DX66 though. After that I got a 333 MHz PII.)


CDs? You must be rich.

I was downloading that onto a stack of floppy disks.


Why does this thing take 12 minutes to boot? I think this software must be built incorrectly. I have an AMD Geode system, which admittedly is more of a Pentium-class processor running at 233 MHz, but it boots in about 5 seconds.

When the Pentium first came out, a 486 DX2 was just about equivalent for most purposes. Many people ran Linux on the 486 in those days, building the software on the host, and nobody would have tolerated 11-minute boot times. And again, I'm running modern Linux on a Geode and it's not anywhere near that slow.


I had Gentoo running on an old PPC (old grey Mac) back around 2010 and it booted in just a minute or two.

If you watch the video, it gets past the kernel boot stage in under a minute. OpenRC seems to be stalling for a bit on calculating/caching dependencies. That really shouldn't take that long. I wonder how many services he has enabled or if there's some regression introduced there.

Then it takes forever to mount certain things like shared memory, cgroups, SELinux, etc.

Granted I didn't have cgroups, SELinux or most of this stuff back when I ran Gentoo on that PPC. I kinda wish my dad hadn't thrown it out. I wonder what it would be like to put modern Gentoo on it. I wonder if I'd get similar slowdown.

If you tweaked this init system, or ran a more embedded distribution, the boot time would only be about a minute.


I agree that I could have optimised the system more. Since this is the first time I'm installing Gentoo, I just followed the default instructions given in the Gentoo Handbook.

https://wiki.gentoo.org/wiki/Handbook:X86


My guess is CF slowness.


The CF card is a Sandisk Extreme with up to 120MB/s read speed. My guess is the PATA interface on that motherboard is also a bottleneck.

I did a disk speed test with "dd if=/dev/zero of=/tmp/output bs=8k count=2k"

I only got 720KB/s.


TIL! Interesting benchmark.

I'd recommend trying bs=256k, which is what Linux is optimized for FWIW. 8k may induce overhead, but I can't say for sure how much of a difference 256k would make.
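For the retest, something like this (conv=fdatasync needs GNU dd; it flushes to disk at the end so the page cache doesn't inflate the number):

    # Same 16 MB total as the original test, but in 256k blocks
    dd if=/dev/zero of=/tmp/output bs=256k count=64 conv=fdatasync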

I just briefly poked eBay to find out the price of ISA disk controllers, and found a reasonable number of options within the <$30 bracket. https://www.ebay.com/b/ISA-Internal-Disk-Controllers-RAID-Ca...

The nice thing is that Linux will (almost certainly?) have no problem no matter what card you buy, so the question is what the fastest chipset is.

I also just found https://wiki.68kmla.org/SCSI_hard_disk_replacement_options#I... which may prove relevant.


Ok I can give 256K a shot next time.

Actually won't the ISA controllers be even slower? I'm not sure what bus the PATA controllers on this PC are connected to but isn't it connected "natively" so to speak? It might even compete for bandwidth with the sound and network card.


Very, very good point. I'm not sure.

Don't quote me, but I think the onboard PATA controller is connected via ISA as well.


Very likely the onboard PATA controller is connected via ISA. I had to enable the ISA PATA support during the kernel configuration for Linux to recognise it.

In this case, there probably won't be any benefit of having additional ISA controller cards.


This is very very possible.

What I wonder is whether the onboard chipset is slower than the ISA bus - and whether using an external card would eke out a tiny bit more performance.

Apparently the ISA bus can stably run at up to 8MHz.

It's not on-point but I found this thread that discussed SCSI controllers that was kind of interesting: http://www.vogons.org/viewtopic.php?t=36001

This project is really awesome btw.


I think we all have different definitions of what constitutes "modern Linux". For me, running a modern kernel and software is sufficient.

Speed-wise, I believe further optimisation is possible given time. However, the point of my post is to show that it's possible at all.


The current kernel boots in 30 seconds (decompression finished to sysvinit start).

I'm moderately confident that extremely aggressive optimization could whittle this boot time down further.


The site died, WordPress overloaded...



It works for me.


It didn't work when I posted that comment.


Now try to run Crysis through Wine ;)



