When building stuff on a really slow platform, a trick I have used is to set up distcc on the slow computer plus at least one really fast computer with a compiler targeting the slow computer's arch. Set the slow computer's distcc host list so it compiles nothing locally, but still does the configure/linking/etc. itself. This avoids almost all the cross-compilation issues you might run into, while getting most of the speed benefits.
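Roughly, the setup looks like this (hostname, network range, and the cross-toolchain name are just examples, not anything from the article):

    # On the fast box: install a cross-toolchain targeting the 486
    # (e.g. via crossdev on Gentoo) and start the distcc daemon.
    distccd --daemon --allow 192.168.1.0/24

    # On the 486: list only the remote host, so nothing compiles locally.
    # configure, preprocessing, and linking still run here.
    export DISTCC_HOSTS="fastbox"
    ./configure CC="distcc i486-pc-linux-gnu-gcc"
    make -j4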
Yes, but you can't easily avoid doing the linking on the local machine, because the output of linking depends on which system libraries are currently installed. Also, it doesn't need _that_ much memory. You're still doing all the CPU-intensive parts on the powerful machine, thus saving time and likely keeping the non-powerful machine from overheating.
This is exactly how I work on 9front. All my compiles and computationally heavy stuff gets done on my loud cpu/fs server, while I use my Raspberry Pi (or a quite old ThinkPad when I'm not at my desk) as the terminal, and don't have to deal with the noise and space requirements of another computer, or dual-boot/virtualize on my main rig.
This is neat. I think I have a laptop with a 486 somewhere around here, so I might try this at some point. The only trouble is that said laptop only has infrared for I/O (not even a floppy) :-P. I also have one with a Pentium, though that is obviously a bit faster.
However, I wonder if it would be faster to go with a Linux From Scratch-like approach: use some lightweight init system and only install minimal stuff, perhaps replacing some of the more heavyweight GNU tools with alternatives from suckless [1].
I am in the same boat. I have a perfectly working ThinkPad 560E (Pentium) but no floppy drive or other adapter to really connect to it. I use it to play Doom, but adding some other old games would be nice. Hm, come to think of it, my old tablet has infrared; maybe I could send some files that way ...
But it first needs a working system installed. I never tried it (actually I did the opposite), but it should be possible to install a minimal system on a similarly specced virtual machine, then write the resulting disk image to a physical disk, stick it into the laptop, and boot it.
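Something like this, I'd imagine (untested sketch; the image size and device name are placeholders, and dd will happily overwrite the wrong disk if you let it):

    qemu-img create -f raw disk.img 500M    # no larger than the real HDD
    qemu-system-i386 -cpu 486 -m 64 -hda disk.img -cdrom installer.iso -boot d
    # after installing a minimal system inside the VM, write it out:
    dd if=disk.img of=/dev/sdX bs=256k status=progress && sync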
It is the "write the resulting disk image to a physical disk" part that is the hard bit though :-P. It has some tiny HDD. Although now that I think about it, I wonder if the disk can be used with another laptop that I have. That one only has a floppy drive, but it is still better than nothing.
Yes, probably. Linux isn't Windows and is deeply modular these days, to the point that you can install a full system plus firmware onto an AMD machine, then move the disk to an Intel one (or the other way around) and expect everything to work, because drivers aren't hardcoded anywhere (there are exceptions, but they're rare) and the system loads what's needed for the hardware it finds at each boot. But on a very memory- and storage-constrained system, one could be forced to recompile the kernel with just the bare necessary drivers built in statically, which could cause problems when the disk is moved to different hardware.
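For example, a .config fragment along these lines (options picked purely for illustration) bakes the disk driver into the kernel but leaves the NIC as a module:

    CONFIG_ATA=y           # built in: must be present to mount the root disk
    CONFIG_PATA_LEGACY=y   # legacy/ISA PATA support, also built in
    CONFIG_E100=m          # module: only loaded if that NIC is actually found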
Now that I think of it, there are floppy "emulators" on eBay which offer a compatible floppy drive interface to the system but use flash memory as storage. They're made in slightly different versions to be compatible with different systems, including home computers and musical instruments. Might be a solution for a fast install through floppies without actually using floppies.
There are also ATA-to-CompactFlash converters, which are even cheaper because the two interfaces are nearly 100% compatible. I used one of those years ago to put a firewall (pfSense, IIRC) CompactFlash card into a small PC which had only ATA ports. It booted from it and worked without problems.
As far as I can figure out, it should be possible to copy files directly over a serial cable (to/from COM1 on DOS, /dev/ttyS0 on Linux) -- see the sketch below.
But for sanity's sake, being able to fix a non-booting disk is recommended, so I'd probably try a second-hand PCMCIA hard drive, a USB adapter, or something like the above.
I also learned that MS-DOS comes with INTERLNK.EXE and INTERSVR.EXE - but I wasn't able to figure out if there's a sane way to talk to those from Linux.
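For the serial route, the classic approach is ZMODEM over a null-modem cable -- something like this, assuming lrzsz on the Linux end and a DOS terminal program with ZMODEM (DSZ, Telix, etc.) on the other. Port names, the DSZ invocation, and the speed are guesses; an old 8250 UART may not handle much more than 9600-38400 baud reliably:

    stty -F /dev/ttyS0 38400 raw -echo
    sz somefile.zip < /dev/ttyS0 > /dev/ttyS0
    # DOS side, e.g. with DSZ:  dsz port 1 speed 38400 rz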
I wonder how well a BSD such as OpenBSD runs, since they have kept supporting more ancient architectures. I remember having a quite low-end 486 laptop around 2000. Linux was pretty much unusable on it, but OpenBSD ran great on the hardware.
I remember cross-compiling a very stripped-down Linux for a 486 25 MHz laptop with 20 MB RAM around 2000. I had tried a standard Debian first, but it was dog-slow. The trick was stripping everything down so it fit into 20 MB RAM without swapping; then it ran fine. I tweaked the compilation options of everything on there to squeeze every last bit of RAM out of it. I wrote most of a C++ web app on that machine, with an older Opera version to test with. That was a super fun project.
Maybe the slow startup times from the article are swap-related. A shutdown taking five minutes is a dead giveaway.
I had OpenBSD more recently on my Toshiba Libretto (Pentium 166) and it was great. You can even boot the installer from floppies, for machines without a CD drive or that can’t boot from it.
I was confused about the 11-minute bootup--my old 486/66 running Slackware took maybe 3-4 minutes to boot in 1994--but then I watched the video: it's everything that we've added to Linux systems since 1994 that makes the startup slow. It's mostly post-kernel services and tasks that are slowing things down on this old PC. I wonder if he could speed things up further by removing some of the modern conveniences and going back to a basic system that didn't run much beyond inetd, getty, and crond.
I've not found my two kids and wife to be such a burden on me. What I can't do is watch television all night, or go out drinking all night. But productive stuff, that's pretty easy to fit in.
Just yesterday I installed Lubuntu 17.10 on an old Athlon XP computer. Firefox kept crashing, so I did a bit of research; it turned out to be because the CPU doesn't support SSE2, and most browsers these days require SSE2 (not that I blame them). The only modern-ish looking browser that's worked so far is NetSurf[0], but lots of sites have issues with it.
I have been down a similar path. I have an old HP laptop that used to run Windows XP. It's about a 2005 model and has 1 GB of RAM. LXDE is the only desktop that gives reasonable performance; despite claims of being light, I found Xfce-based desktops slowed the computer to a crawl.
The second thing I found was that all modern browsers consumed literally all the resources on the computer, making the laptop unusable. I tried NetSurf, which was fine on the sites on which it worked. In the end I found it best to use text-based browsers; I use links2.
I compiled the browser a few minutes ago -- rather interesting browser. It crashes on SSL/TLS pages for me (I'll report it / try to debug it more later); gdb is showing something about curl_multi_perform() / fetch_curl_poll.
OMG! I stopped into Super Silly when I was in SG a few weeks ago (I was there with Kai) and wondered what you were doing with that ancient kit. What a brilliant idea.
My first 486DX PC also only had 8 MB of RAM until my parents threw it out when I was a kid. I too was shocked when I saw that this 486 PC had 64 MB.
The 486 computer I had back in 1992 could support up to 16 MB of memory, IIRC. It was clocked at 25 MHz, but I did get the math coprocessor and upgraded it to a DX4 at 75 MHz. I knew of some models that could support up to 32 MB of RAM, but I wasn't aware of any that would support 64 MB (which was quite expensive at the time).
Yes. Some friends of mine once configured a 486 with 64 megs as a sort of practical joke, or cool thing to do. They temporarily cannibalised the other PCs at the school for memory SIMMs.
Having started in the 386 era, and having upgraded my AMD 386 from its original 2MB of RAM to the full 8MB, my first reaction on seeing that "64MB SDRAM SIMM-72" spec in this article was "wow, that's a lot of RAM!"
I remember running Slackware 3.6 on an 8 MB RAM 486 DX 100 (100 being the CPU MHz) about 20 years ago. The Linux kernel was 2.0.36 or something like that.
Time to switch to Nginx with a FastCGI cache. ;) (If the timeout is set to a couple of seconds, you'll serve thousands of simultaneous requests easily.)
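Something like this (zone name, paths, and socket are made up for the example); even a fastcgi_cache_valid of a couple of seconds means the backend only runs once per URL during a spike:

    # in the http {} block:
    fastcgi_cache_path /var/cache/nginx levels=1:2 keys_zone=blog:10m;

    # in the matching location {} block:
    fastcgi_pass unix:/run/php/php-fpm.sock;
    include fastcgi_params;
    fastcgi_cache blog;
    fastcgi_cache_key "$scheme$request_method$host$request_uri";
    fastcgi_cache_valid 200 2s;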
Do you often find yourself having to adjust capacity under traffic spikes? I'd be happy to help smooth out the traffic using off-server caching (see https://www.cachoid.com). The freemium plan would work well. Feel free to email me joe@
Wow, it's a lot slower now than it was with 1992 Linux. I remember a key benchmark being how long it took to compile the kernel (and I was astonished at how fast the Pentium Pro was when it came out).
Another example of the shocking bloat of modern software.
I had a 486 with 16 megs of RAM and was able to browse the web (Netscape 1.0!), compile kernels, and have something like 8 people logged in remotely checking email.
Out of curiosity I recently installed Debian Stretch on my Pentium Pro 200 (dual CPU). The text-based installer warned that my 128 MB of RAM wouldn't be enough for it to finish, but luckily it succeeded anyway. The system boots within a minute or so and is fairly usable. Running X on s3fb with i3 as the WM, the selection of usable GUI programs is admittedly rather limited. Still, I guess we really came a long way from the 486 to the PPro. :-)
I was actually working on this project before those attacks were announced. When I heard that pre-1995 CPUs were unaffected, I couldn't help but LOL and included a reference in my blog post as well.
I always wonder ... why "waste" so much time cross-compiling #Linux when #FreeBSD [1] supports such a 486 computer out of the box, along with binary packages, even in the latest 11.1-RELEASE version?
It is not like PC Unixes ran blazingly fast or were terribly useful back in the days of the 486. Solaris and SunOS/386 were probably the least stable, followed by NeXT (not x86) and NetBSD. They may have seemed fine to play around with, but they would crash horribly under any heavy use like CAD or Mathematica.
This is neat, and I have some older systems (like the Transmeta-based Gateway Connected Touch Pad) that I'd like to get back up and running, so there are some good ideas in here. However, calling a 486SX "ancient" is a bit of a stretch... (my oldest working computer is from 1981, and that's far from ancient).
By "this thing", you mean the GCT? Googling for "Gateway Connected Touchpad" and looking at images brings up ~ 8 images. My "other thing" is a Nascom 2.
(The GCT would be a lot more appealing if the LCD were better, but it's DSTN and pretty horrible compared to modern displays).
Also the Quark X1000 SoCs --- they have an almost unmodified (there are a few Pentium/586-level instructions added) 486 core running at 400MHz. Compare the diagram on page 19 of this:
Interesting, now I wonder why they don't let ME share a core (or parts of it) with the main CPU. That could impact performance a little, which could be detectable, but I mean, it's a "Management Engine", not a "Spying Engine", right?
> The fact that Git, Python 3, SSH and nginx worked fairly ok implied they probably did some testing too.
Or it's a simple side effect of them being portable software. Since the same code has to work on very different ISAs like 32-bit x86 and 32-bit ARM, any architecture-specific code has to be cleanly separated, with a portable fallback. As long as the compiler can still target the "486" architecture, they'll work.
It's a different story with anything which depends on lower-level platform details, like the kernel or glibc's pthread.
> The kernel does a great job in maintaining backwards compatibility, the rest of the stack not so much...
No disagreement. But so far I haven't encountered any issues with parts of Linux other than the kernel, so the rest of the stack seems to be doing quite well.
This is why I will always use GNU/Linux on my computers, the freedom to use your own hardware for as long as you want outweighs anything Microsoft or Apple has to offer.
Why does this thing take 12 minutes to boot? I think this software must be built incorrectly. I have an AMD Geode system, which admittedly is more of a Pentium-class processor, running at 233 MHz, but it boots in about 5 seconds.
When the Pentium first came out, a 486 DX2 was just about equivalent for most purposes. Many people ran Linux on the 486 in those days, building the software on the host, and nobody would have tolerated 11-minute boot times. And again, I'm running modern Linux on a Geode and it's nowhere near that slow.
I had Gentoo running on an old PPC (old grey Mac) back around 2010 and it booted in just a minute or two.
If you watch the video, it gets past the kernel boot stage in under a minute. OpenRC seems to be stalling for a bit on calculating/caching dependencies. That really shouldn't take that long. I wonder how many services he has enabled or if there's some regression introduced there.
Then it takes forever to mount certain things like shared memory, cgroups, SELinux, etc.
Granted I didn't have cgroups, SELinux or most of this stuff back when I ran Gentoo on that PPC. I kinda wish my dad hadn't thrown it out. I wonder what it would be like to put modern Gentoo on it. I wonder if I'd get similar slowdown.
If you tweaked this init system, or ran a more embedded distribution, the boot time would only be about a minute.
I agree that I could have optimised the system more. Since this is the first time I'm installing Gentoo, I just followed the default instructions given in the Gentoo Handbook.
I'd recommend trying bs=256k, which is what Linux is optimized for FWIW. 8k may induce overhead, but I can't say for sure how much of a difference 256k would make.
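It's easy enough to compare directly, e.g. (file name made up; oflag=direct bypasses the page cache so you measure the disk, not RAM):

    dd if=/dev/zero of=ddtest bs=8k   count=8192 oflag=direct  # 64 MB in 8 KB blocks
    dd if=/dev/zero of=ddtest bs=256k count=256  oflag=direct  # same 64 MB in 256 KB blocks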
Actually, won't the ISA controllers be even slower? I'm not sure what bus the PATA controllers on this PC are connected to, but aren't they connected "natively", so to speak? They might even compete for bandwidth with the sound and network cards.
Very likely the onboard PATA controller is connected via ISA. I had to enable the ISA PATA support during the kernel configuration for Linux to recognise it.
In that case, there probably won't be any benefit to having additional ISA controller cards.
What I wonder is whether the onboard chipset is slower than the ISA bus - and whether using an external card would eke out a tiny bit more performance.
Apparently the ISA bus can run stably at up to 8 MHz.