There is a deep trend happening in software development.
As the number of dependencies for building an application grows, it becomes exponentially harder to shake the tree. This used to be the role of Linux distributions: they acted as a push-back force, asking projects to support multiple versions of C libraries. This was acceptable because there is no C package manager.
Now that each language has its own package manager, the role of distributions has faded. They are even viewed as a nuisance by some developers. It's easier to support one fixed set of dependencies and not have to worry about backward compatibility. This is something all distributions have been struggling with for a while now, especially with NodeJS.
This trend is happening on all platforms, but is more pronounced in Linux because of the diversity of the system libraries ecosystem. On Windows and macOS, there are some SDKs the application can lean on. On Linux, the only stable "SDK" is the Linux kernel API.
> On Windows and macOS, there are some SDKs the application can lean on. On Linux, the only stable "SDK" is the Linux kernel API.
This is a very good observation I think. Basically, what Linux distributions have failed to do is to create a higher level SDK for their platforms - instead, they rely on a soup of libraries and on the package maintainer model to know which of these is which.
Basically, they are failing to provide a true OS: there is only a Kernel + various user space apps, from systemd to KDevelop, with no explicit line in the sand between what is the system SDK and what is a simple app bundled with the OS.
> This is a very good observation I think. Basically, what Linux distributions have failed to do is to create a higher level SDK for their platforms - instead, they rely on a soup of libraries and on the package maintainer model to know which of these is which.
Not really no. It's both simpler and more complicated.
At the very low level, everyone expects to get a POSIX system. At the user level, most users use either Gnome or KDE (well, mostly Gnome to be honest, but let's pretend), which provide what could be called a high-level SDK for applications.
That leaves the layer in between, which used to be somewhat distribution-specific but is seeing more and more consolidation with the advent of systemd and flatpak.
Even Gnome and KDE are not really "high level SDKs". They mostly have GUI support of various kinds, but not a full SDK for interacting with the OS (e.g. interacting with networking, with other apps, with system services etc.).
I don't think this is really true. There is a fairly stable core set of C libraries, present on virtually any Linux distro with a desktop environment, that does most of what any application needs in terms of system services. The basic problem is more what the parent was discussing: these are C libraries, and rather than using FFI from their app language of choice to call these pre-installed libraries, developers would rather rewrite all the functionality in the same language as their applications. And the Rust crate, Go module, or Node package you find when Googling whether something exists in your language of choice is likely to be much less stable than something like libcurl.
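As a sketch of that FFI route: Python's `ctypes` can call a pre-installed C library directly, no rewrite needed. libc is used here as a stand-in for libraries like libcurl, since it is guaranteed to be present on any Linux system; the same pattern applies to any shared library.

```python
import ctypes
import ctypes.util

# Locate and load the system C library (stand-in for libcurl etc.).
# CDLL(None) falls back to the symbols of the running process on Linux.
libc = ctypes.CDLL(ctypes.util.find_library("c") or None)

# Declare the C signature so ctypes marshals arguments correctly:
# size_t strlen(const char *s)
libc.strlen.argtypes = [ctypes.c_char_p]
libc.strlen.restype = ctypes.c_size_t

length = libc.strlen(b"hello, world")
print(length)  # 12
```

The stability argument in the comment above is exactly this: `strlen` (or `curl_easy_perform`) has had the same ABI for decades, which is more than most language-native reimplementations can say.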
Mac and Windows solve this by creating their own languages for app development and then porting the system services to Swift or C# (or whatever) in addition to the original C implementation. There is no corporation behind most Linux distros willing to do that other than Canonical and Red Hat, but they're perfectly happy to just stick with C, as is Gnome (KDE being perfectly happy to stick with C++). Most app developers are not, however.
For what it's worth, Red Hat did kind of solve this problem for the Linux container ecosystem with libcontainer, rewriting all of the core functionality in Go, in recognition of the fact that this is the direction the ecosystem was moving in thanks to Docker and Kubernetes using Go; now app developers for the CNCF ecosystem can just use that single set of core dependencies, which stays pretty stable.
There is no common/stable set of C libs. libc on one distro is different from libc on another (even just within glibc). It's why you generally can't take an app compiled on one distro and run it on a different one. At times you also can't compile on a newer version of a distro and run the result on an older version.
When a piece of software lists an rpm targeting Fedora or a deb targeting Debian, these are not just repackagings of the same binaries (unless they are providing statically linked binaries).
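A quick way to see which glibc a given system actually ships, relevant to the version mismatches described above. This sketch assumes a glibc-based distro where the library is named `libc.so.6` (it would fail on musl-based distros like Alpine, which rather proves the point):

```python
import ctypes

# gnu_get_libc_version() is a glibc-specific call returning e.g. "2.35".
# Two distros (or two releases of one distro) will often report different
# versions here, and a binary built against the newer one may refuse to
# start on the older one.
libc = ctypes.CDLL("libc.so.6")
libc.gnu_get_libc_version.restype = ctypes.c_char_p
print(libc.gnu_get_libc_version().decode())
```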
Red Hat did not create libcontainer, and it was never rewritten in Go (it was always Go).
libcontainer was started by Michael Crosby at Docker as a replacement for exec'ing out to the (at the time unstable) LXC, and later donated to the OCI (not the CNCF) as a piece of tooling called "runc", which is still in use today. As part of the OCI, Red Hat and others have certainly contributed to its development.
libcontainer lives in runc, and using it directly is actively discouraged because Go is really quite bad at this use case (primarily due to having no control over real threads); runc's exec API is the stable API here. Side note: much of runc is written in C, imported via cgo, and initialized before the Go runtime has spun up.
That is not to say libcontainer is bad, just that it is considered an internal API for runc.
Red Hat did end up creating an alternative to runc called "crun", which is essentially runc in C, with the goal of being smaller and faster.
> There is no common/stable set of C libs. libc on one distro is different from libc on another. It's why you generally can't run an app compiled on one distro on a different one
Pretty much everything uses glibc, and distros meant for end users (as opposed to specialized uses like embedded) tend to ship glibc. If someone uses a non-glibc distro, it's by their own choice, so they know what they're getting into.
And glibc has been backwards compatible since practically forever. You can compile a binary on a late-90s Linux and it'll work on a modern Linux, as long as all the libraries it depends on have a compatible and stable ABI. I've actually done this[0] with a binary that uses a simple toolkit I hack on now and then: it is the exact same binary I compiled inside a VM running Red Hat from 1997 (the colors are due to lack of proper colormap support in my toolkit and the X in the VM using plain VGA), running on my then-2018 Debian. That's over two decades of backwards compatibility (and I actually tested it again recently with Caldera OpenLinux from 1999 and my current openSUSE installation). Note that the binary isn't statically linked but dynamically links to glibc and Xlib (another library with strong backwards compatibility).
Yes, you can compile with a really old glibc and use it on a newer one.
But glibc does introduce incompatible changes that make it not work the other way around, and there are issues with stable ABIs across distros.
AFAIK the issue is the symbol versioning, which is explicit by design. I can understand why it exists for glibc-specific APIs, but I don't see why it is also used for standard C and POSIX APIs that shouldn't change.
It is an issue if you want to compile new programs on a new distro that can run on older distros, but IMO that is much less of a problem than not being able to run older programs in newer distros. And there are workarounds for that anyway, though none of them are trivial.
Forwards compatibility is basically non-existent in the software world. It's not like you can compile a Win 10 program and expect it to run on Win 7.
Actually you can, as long as it doesn't use Win10 APIs (or use them via dynamic loading) it will work on Win7.
The issue with glibc is that when you link against it, your binary requests the latest versions of the exported symbols provided by the glibc on your build system. You can work around this in a variety of ways (e.g. use __asm__ to specify the symbol version you want, and use the header files of an older glibc to ensure you aren't using incompatible calls), but you need to go out of your way to do it, whereas on Windows you can simply not use the newer API (well, and also make sure your language's runtime library doesn't use it either, but in practice this is less of a concern).
Actually you're right, I forgot that we used to ship software to various versions of Windows with just a single build based on more or less the latest VC++ runtime (that had re-distributable versions for all intended targets).
Most of these libraries are either quite low-level or GUI-oriented. They usually don't include decent mid-level system-management utilities, like registering your app as a service or checking if the network is up.
In general, when there is a useful and commonly used C SDK, FFI wrappers for it quickly appear in every language and become popular.
> Basically, what Linux distributions have failed to do is to create a higher level SDK for their platforms - instead, they rely on a soup of libraries and on the package maintainer model to know which of these is which.
At the same time, the tools which solve this really shine. You inevitably run into these issues with random third party dependencies on other platforms, too, but it's further from the norm, so you end up with awful bespoke toolchains that contain instructions like "download Egub35.3.zip and extract it to C:\Users\brian\libs."
Developers on GNOME deal with this regularly, partly because of our lack of a high-level SDK. So one of the tools we have to solve it is first-class support for flatpak runtimes in GNOME Builder: plop a flatpak manifest in your project directory (including whatever base runtime you choose and all your weird extra libraries) and Builder will use that as an environment to build and run the thing for development. This is why pretty much everything under https://gitlab.gnome.org/GNOME has a flatpak manifest attached. It's a young IDE, but that feature alone makes it incredible to work with.
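For readers who haven't seen one, a flatpak manifest is a small declarative file at the root of the project. A minimal sketch follows; the app id, command name, and runtime version here are made up for illustration, but the field names follow the flatpak-builder manifest format:

```json
{
    "app-id": "org.example.MyApp",
    "runtime": "org.gnome.Platform",
    "runtime-version": "45",
    "sdk": "org.gnome.Sdk",
    "command": "myapp",
    "modules": [
        {
            "name": "myapp",
            "buildsystem": "meson",
            "sources": [ { "type": "dir", "path": "." } ]
        }
    ]
}
```

The `runtime` and `sdk` lines are what pin the whole dependency environment: Builder (or `flatpak-builder`) builds and runs the app against exactly that runtime, regardless of what the host distro ships.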
> You inevitably run into these issues with random third party dependencies on other platforms, too, but it's further from the norm, so you end up with awful bespoke toolchains that contain instructions like "download Egub35.3.zip and extract it to C:\Users\brian\libs."
On Windows, there is a very clear solution: any 3rd-party dependency you need outside the Windows SDK, you bundle into the MSI installer you distribute. When installing, the MSI can check whether this particular 3rd party is already installed and skip it.
I don't think I've seen anything like the manual instructions you describe in more than 10 years. Even OSS projects typically ship with simple installers today.
I believe, but have not tried it personally so may well be wrong, that similar mechanisms are common on macOS with app packages.
Oh, for sure. Reconciling a bunch of different things' ideas of what libturkey they think they need into a single image must be a nightmare. In theory, that's what BuildStream should be helping with since gnome-build-meta[1] is only going to have one of those for different components to depend on. (If there were two libturkeys, it would be very obviously wrong). But I guess the trouble then is a lot of extra apps aren't in gnome-build-meta?
When I was messing with BuildStream a while ago I found myself wishing projects put reference BuildStream elements in their own git repos, but I suppose that would get messed up in the same way.
I'd like to point out elementary OS as a counter-example:
They have consistently provided a high-level SDK for their OS. With elementary OS 6, they moved to using flatpak (not Flathub) as the distribution mechanism for their high-level SDK.
You have it backward. A Linux distribution is a distribution, not a vendor. If you write a higher-level SDK, then write an application using that SDK, and users ask for that application, then your application and your SDK will be included in the distribution. UNIX has CDE[0], but nobody uses it, so no distribution includes CDE by default.
> A Linux distribution is a distribution, not a vendor.
I don't understand what you mean here. Do you think Debian is not trying to provide a full OS, and is just curating a set of popular packages?
I think this is patently false: most distributions make clear decisions to standardize on and maintain particular OS components, such as a particular libc (glibc in most distros, musl in Alpine), a particular init system (systemd vs. System V), a particular network management daemon, etc.
However, instead of taking the additional time to create and commit to a backwards-compatible Debian SDK, Alpine SDK, etc., they package all of these OS components the same way they package popular software.
Yep, Debian is not even trying to develop a full OS. They are distributing GNU/Linux with a few popular desktops (Gnome, KDE, MATE, Xfce, etc.) and popular applications.
The GNU project tries to develop a full OS for the Linux kernel. The GNOME project tries to develop a full desktop for GNU/Linux. The Document Foundation tries to develop a full office suite. And so on.
I watched a great rant by Linus Torvalds himself on how distributing code for Linux is a "pain in the arse", but is comparatively easier on Windows and macOS: https://www.youtube.com/watch?v=Pzl1B7nB9Kc
Something I took away from that is that it is Linus himself, personally, that is responsible (in a way) for Docker existing.
Docker depends on the stable kernel ABI in order to allow container images to be portable and "just work" stably and reliably. This is a "guarantee" that Linus gets shouty about if the kernel devs break it.
Docker fills a need on Linux because the user-mode libraries across all distributions are a total mess.
Microsoft is trying to be the "hip kid" by copying Docker into Windows, where everything is backwards compared to the Linux situation:
The NT kernel ABI is treated as unstable, and changes as fast as every few months. No one ever codes directly against the kernel on Windows (for some values of "no one" and "ever".)
The user-mode libraries like Win32 and .NET are very stable, papering over the inconsistency of the kernel library. You can run applications compiled in the year 2000 today, unmodified, and more often than not they'll "just work".
There just isn't a "burning need" for Docker on Windows. People that try to reproduce the cool new Linux workflow however are in for a world of hurt, because they'll rapidly discover that the images they built just weeks ago might not run any more because the latest Windows Update bumped the kernel version.
> Now that each language has their own package manager, the role of distributions have faded.
Yet most language-specific package managers are deeply flawed compared to distro package managers. But I agree with your sentiment that, beyond LTS for enterprise, the deb/rpm packaging paradigm is becoming obsolete. I believe nix/guix form an interesting new paradigm for packaging, where there is still a trusted third party (the nix/guix community and repo) but packaging doesn't get in the way of app developers.
I'm especially interested in how these declarative package managers could "export" to more widely-used packaging schemes. guix can already export to .deb, but there's no reason it couldn't produce an AppImage or a flatpak.
> On Windows and macOS, there are some SDKs the application can lean on. On Linux, the only stable "SDK" is the Linux kernel API.
Now, with Flatpak, each runtime is an SDK of its own. However, unlike on Windows and macOS, a specific runtime is bound not to a specific OS release but to the app's requirements.
The first time I noticed this trend was on macOS, where applications bundle most of their libraries and ship binaries compiled for multiple architectures.
Then we had Electron apps that ship with their own full copy of a browser and dependent libraries.
When NPM was designed, its authors observed that resolving colliding versions of the same dependency was sometimes difficult. Their answer was to remove that restriction and allow multiple versions of the same library in a project. Nowadays, it's not uncommon to have 1000+ dependencies in a simple hello-world npm project.
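Concretely, npm sidesteps version conflicts by nesting: each dependency can carry its own private copy of a library. The package names below are made up for illustration, but the layout is what npm produces when two dependents pin incompatible versions:

```
node_modules/
  left-pad/             <- v1.3.0, required by the app itself
  some-widget/
    node_modules/
      left-pad/         <- v1.1.0, required only by some-widget
```

Both copies are loaded and live side by side at runtime, so nothing ever forces `some-widget`'s author to move to a newer `left-pad`, which is exactly the loss of push-back the next paragraph describes.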
Our industry is moving away from the feedback force that pushed developers to agree on interfaces and release stable APIs.
> Our industry is moving away from the feedback force that pushed developers to agree on interfaces and release stable APIs.
And parts of the industry are beginning to move back and favor stability, as a result of
- numerous NPM packages being taken over by coin miners and other malware - at the scale of some of them, even a ten-minute takeover window means millions of installs, and supply-chain audits are almost impossible
- framework churn being an actual liability for large enterprises - many were burned when AngularJS fell out of favor and left them stuck with a tech stack widely considered "out of date". Most new projects these days seem to be ReactJS, where Facebook's heavy involvement promises at least some long-term support
- developer / tooling churn - same as above: the constant training needed to keep up with breaking changes may be tolerable in the startup, money-burning phase, but once your company reaches a certain size it becomes untenable