> On Windows and macOS, there are some SDKs the application can lean on. On Linux, the only stable "SDK" is the Linux kernel API.
This is a very good observation I think. Basically, what Linux distributions have failed to do is to create a higher level SDK for their platforms - instead, they rely on a soup of libraries and on the package maintainer model to know which of these is which.
Basically, they are failing to provide a true OS: there is only a kernel plus various user-space apps, from systemd to KDevelop, with no explicit line in the sand between what is the system SDK and what is simply an app bundled with the OS.
> This is a very good observation I think. Basically, what Linux distributions have failed to do is to create a higher level SDK for their platforms - instead, they rely on a soup of libraries and on the package maintainer model to know which of these is which.
Not really no. It's both simpler and more complicated.
At the very low level, everyone expects to get a POSIX system. At the user level, most users use either Gnome or KDE (well, mostly Gnome to be honest, but let's pretend), which provides what could be called a high-level SDK for applications.
That leaves the layer in between, which used to be somewhat distribution-specific but is seeing more and more consolidation with the advent of systemd and flatpak.
Even Gnome and KDE are not really "high level SDKs". They mostly have GUI support of various kinds, but not a full SDK for interacting with the OS (e.g. interacting with networking, with other apps, with system services etc.).
I don't think this is really true. There is a fairly stable core set of C libraries, present on virtually any Linux distro with a desktop environment, that covers most of what an application needs in terms of system services. The basic problem is more what the parent was discussing: these are C libraries, and rather than using their app language's FFI to call these pre-installed libraries, developers would rather rewrite all the functionality in the same language as their application. And the Rust crate, Go mod, or Node package you find when Googling whether something exists in your language of choice is likely to be much less stable than something like libcurl.
Mac and Windows solve this by creating their own languages for app development and then exposing the system services in Swift, C#, or whatever, in addition to the original C implementations. No corporation behind most Linux distros is willing to do that other than Canonical and Red Hat, and they're perfectly happy to just stick with C, as is Gnome (KDE being perfectly happy to stick with C++). Most app developers are not, however.
For what it's worth, Redhat did kind of solve this problem for the Linux container ecosystem with libcontainer, rewriting all of the core functionality in Go, in recognition of the fact that this was the direction the ecosystem was moving in thanks to Docker and Kubernetes using Go. Now app developers in the CNCF ecosystem can just use that single set of core dependencies, which stays pretty stable.
There is no common/stable set of C libs. libc on one distro is different from libc on another (even just among glibc builds). It's why you generally can't take an app compiled on one distro and run it on a different one. You also at times cannot compile on a newer version of a distro and run the result on an older version.
When a piece of software lists an rpm targeting Fedora or a deb targeting Debian, these are not just repackagings of the same binaries (unless they are providing statically linked binaries).
RedHat did not create libcontainer, and it was never rewritten in Go (it was always Go).
libcontainer was started by Michael Crosby at Docker as a replacement to execing out to (at the time unstable) LXC, later donated to OCI (not CNCF) as a piece of tooling called "runc", which is still in use today. As part of OCI RedHat and others have certainly contributed to its development.
libcontainer lives in runc, and using it directly is actively discouraged because Go is really quite bad at this use case (primarily due to having no control over real threads); runc's exec API is the stable API here. Side note: much of runc is written in C, imported via cgo, and initialized before the Go runtime has spun up.
That is not to say libcontainer is bad, just that it is considered an internal API for runc.
RedHat did end up creating an alternative to runc called "crun", which is essentially runc in C with the goal of being smaller and faster.
> There is no common/stable set of c libs. libc on one distro is different than libc on another (even just within glibc). It's why you generally can't run an app compiled on one distro on different one
Pretty much everything uses glibc, and distros meant for end users (as opposed to specialized uses like embedded) ship glibc as well. If someone uses a non-glibc distro, it's by their own choice, so they know what they're getting into.
And glibc has been backwards compatible since practically forever. You can compile a binary on a late-90s Linux and it'll work on a modern Linux, as long as all the libraries it depends on have a compatible and stable ABI. I've actually done this[0] with a binary that uses a simple toolkit I hack on now and then: it is the exact same binary I compiled inside a VM running Red Hat from 1997 (the colors are due to the lack of proper colormap support in my toolkit and the X in the VM using plain VGA), running on my then-2018 Debian. That is over two decades of backwards compatibility (and I actually tested it again recently with Caldera OpenLinux from 1999 and my current openSUSE installation). Note that the binary isn't statically linked but dynamically links against glibc and Xlib (another library with strong backwards compatibility).
Yes, you can compile against a really old glibc and run the binary on a newer one.
But glibc does introduce incompatible changes that make the reverse not work, and there are issues with stable ABIs across distros.
AFAIK the issue is symbol versioning, which is explicit by design. I can understand why it exists for glibc-specific APIs, but I don't see why it is also applied to standard C and POSIX APIs that shouldn't change.
It is an issue if you want to compile new programs on a new distro that can run on older distros, but IMO that is much less of a problem than not being able to run older programs on newer distros. And there are workarounds for that anyway, though none of them are trivial.
Forwards compatibility is basically non-existent in the software world. It's not like you can compile a Win 10 program and expect to run it on Win 7.
Actually you can: as long as it doesn't use Win10-only APIs (or only uses them via dynamic loading), it will work on Win7.
The issue with glibc is that when you link against it, it adds requests for the latest versions of the exported symbols that your system's glibc has. You can work around this in a variety of ways (e.g. use __asm__ to specify the symbol version you want, and use the header files of an older glibc to ensure you aren't making incompatible calls), but you need to go out of your way to do it. On Windows you can simply not use the newer API (well, and make sure your language's runtime library doesn't use it either, but in practice that is less of a concern).
Actually you're right; I forgot that we used to ship software to various versions of Windows with just a single build based on more or less the latest VC++ runtime (which had redistributable versions for all intended targets).
Most of these libraries are either quite low-level or more GUI-oriented. They usually don't include decent mid-level system-management utilities, like registering your app as a service or checking if the network is up.
In general, when there is some useful and commonly used C SDK, FFI wrappers quickly appear in every language and become popular.
> Basically, what Linux distributions have failed to do is to create a higher level SDK for their platforms - instead, they rely on a soup of libraries and on the package maintainer model to know which of these is which.
At the same time, the tools which solve this really shine. You inevitably run into these issues with random third party dependencies on other platforms, too, but it's further from the norm, so you end up with awful bespoke toolchains that contain instructions like "download Egub35.3.zip and extract it to C:\Users\brian\libs."
Developers on GNOME deal with this regularly, partially because of our lack of a high level SDK. So one of the tools we have to solve this is first class support for flatpak runtimes in GNOME Builder: plop a flatpak manifest in your project directory (including whatever base runtime you choose and all your weird extra libraries) and Builder will use that as an environment to build and run the thing for development. This is why pretty much everything under https://gitlab.gnome.org/GNOME has a flatpak runtime attached. It's a young IDE, but that feature alone makes it incredible to work with.
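For what it's worth, such a manifest is just a small JSON (or YAML) file. A rough sketch for a hypothetical app (app name, module name, and runtime version are made up for illustration):

```json
{
  "app-id": "org.example.Hello",
  "runtime": "org.gnome.Platform",
  "runtime-version": "45",
  "sdk": "org.gnome.Sdk",
  "command": "hello",
  "modules": [
    {
      "name": "hello",
      "buildsystem": "meson",
      "sources": [ { "type": "dir", "path": "." } ]
    }
  ]
}
```

Any "weird extra libraries" become additional entries in "modules", built from a git URL or tarball before your own module, which is exactly how the toolchain problem above gets pinned down in one reproducible file.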
> You inevitably run into these issues with random third party dependencies on other platforms, too, but it's further from the norm, so you end up with awful bespoke toolchains that contain instructions like "download Egub35.3.zip and extract it to C:\Users\brian\libs."
On Windows, there is a very clear solution: any third-party dependency you need outside the Windows SDK gets bundled into the MSI installer you distribute. When installing, the MSI can check whether that particular dependency is already present and skip it.
I don't think I've seen anything like the manual instructions you describe in more than 10 years. Even OSS projects typically ship with simple installers today.
I believe (but have not tried it personally, so may well be wrong) that similar mechanisms are common on macOS with app packages.
Oh, for sure. Reconciling a bunch of different components' ideas of which libturkey they need into a single image must be a nightmare. In theory, that's what BuildStream should be helping with, since gnome-build-meta[1] is only going to have one of those for different components to depend on. (If there were two libturkeys, it would be very obviously wrong.) But I guess the trouble then is that a lot of extra apps aren't in gnome-build-meta?
When I was messing with BuildStream a while ago I found myself wishing projects put reference BuildStream elements in their own git repos, but I suppose that would get messed up in the same way.
I'd like to point out elementary OS as a counter-example:
They have consistently provided a high-level SDK for their OS. With elementary OS 6, they moved that SDK to using flatpak (not flathub) as the distribution mechanism.
You have it backwards. A Linux distribution is a distribution, not a vendor. If you write a higher-level SDK, then write an application using that SDK, and users request that application, then your application and your SDK will be included in the distribution. UNIX has CDE[0], but nobody uses it, so no distribution includes CDE by default.
> Linux distribution is a distribution, not a vendor.
I don't understand what you mean here. Do you think Debian is not trying to provide a full OS, and is just curating a set of popular packages?
I think this is patently false: most distributions make clear decisions to standardize on and maintain particular OS components, such as a particular libc (glibc in most distros, musl in Alpine), a particular init system (systemd vs. System V), a particular network-management daemon, etc.
However, instead of taking the additional time to create and commit to a backwards-compatible Debian SDK, Alpine SDK, etc., they package all of these OS components the same way they package popular software.
Yep, Debian is not even trying to develop a full OS. They are distributing GNU/Linux with a few popular desktops (Gnome, KDE, MATE, XFCE, etc.) and popular applications.
The GNU project tries to develop a full OS for the Linux kernel. The GNOME project tries to develop a full desktop for GNU/Linux. The Document Foundation tries to develop a full office suite. And so on.