> The way to solve the problem of dependency hell is to version every function, and only call functions based on their versions, and always ship every old version of every function. Then the application itself, or a dynamic linker, must find and execute the correct version of the function it needs to call.
If this is your vision, why would you dynamically link? If you statically link your code to the library functions it calls, the runtime environment can't be changed out from under you (well, not without a lot of work), and you'd presumably only build with the versions of the library functions you like, so you'd be set there too. If you want to update a dependency, pull it in and rebuild.
I don't think this is a popular vision, because people want to believe that they can update to the latest OpenSSL and fix bugs without breaking things, and sometimes they can.
You still have a difficult problem when you share data with code you didn't fully control the linking of. If your application code needs to set up an OpenSSL socket, and then pass that socket to a service library, and the service library uses OpenSSL A.B.C-k while you use A.B.C-l, maybe that works, maybe it doesn't; if it doesn't, that's a heck of a problem to debug. Of course, it's even worse if you're not even on the same minor version, let alone across major versions.
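One cheap mitigation, at least for your own code, is to fail fast at startup if the OpenSSL you compiled against isn't the one the loader actually handed you. A minimal sketch using two real OpenSSL 1.1.0+ APIs (the OPENSSL_VERSION_NUMBER macro and OpenSSL_version_num()); note it can't see what version a third-party service library was built against, which is exactly the hard part:

    #include <stdio.h>
    #include <stdlib.h>
    #include <openssl/opensslv.h>   /* OPENSSL_VERSION_NUMBER: compile-time */
    #include <openssl/crypto.h>     /* OpenSSL_version_num(): runtime       */

    int main(void) {
        unsigned long built  = OPENSSL_VERSION_NUMBER;
        unsigned long loaded = OpenSSL_version_num();
        if (built != loaded) {
            /* Bail out loudly now instead of debugging corruption later. */
            fprintf(stderr, "OpenSSL mismatch: built 0x%lx, loaded 0x%lx\n",
                    built, loaded);
            exit(1);
        }
        return 0;
    }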
While I'm picking on OpenSSL because it's caused me (and others) a lot of grief, this kind of thing comes up with lots of libraries.
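For what it's worth, the parent's scheme is more or less what ELF symbol versioning already does: glibc ships old versions of functions forever, and the dynamic linker binds each caller to the version it was built against. A rough sketch; the function names and version tags here are made up, and you'd also need a linker version script defining the LIB_1.0 and LIB_2.0 tags:

    /* Build roughly as:
       gcc -shared -fPIC -Wl,--version-script=lib.map parse.c -o libparse.so */
    #include <stdio.h>

    /* Both implementations ship in the shared library forever. */
    void parse_v1(const char *s) { printf("v1: %s\n", s); }
    void parse_v2(const char *s) { printf("v2, stricter: %s\n", s); }

    /* Old binaries keep resolving parse@LIB_1.0 to parse_v1; anything
       built against the new library binds to the default, parse@@LIB_2.0. */
    __asm__(".symver parse_v1, parse@LIB_1.0");
    __asm__(".symver parse_v2, parse@@LIB_2.0");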
> people want to believe that they can update to the latest OpenSSL and fix bugs without breaking things
Yeah. It's a bug in the culture, really, and culture is much harder to change than software.
> problem when you share data with code you didn't fully control the linking of
Yeah, the data model needs to be versioned too. You can't pass data between different versions of an application without risking a bug. The options I'm aware of are A) provide that loose-abstraction API and hope for the best, or B) provide versioned drivers that transform the data between versions as needed.
A is what we do today. B would be sort of like how you upgrade between patch releases, where to go from 6.3.1 to 9.0.0 you step through 6.3.1 -> 6.4.0 -> 7.0.0 -> 8.0.0 -> 9.0.0. For every version that modifies the data model, you'd write a new driver that deals with just the changes. When OpenSSL 6.3.1 writes data to a file, it would store v6.3.1 as metadata. When OpenSSL 9.0.0 reads it, it would first pass it through all the drivers up to 9.0.0; when it writes, it would pass the data back through the drivers in reverse so it's stored as v6.3.1 again. To upgrade the data model version permanently, the program could snapshot the old data so you could restore it if something breaks. (Much of this is similar to how database migrations work, although with migrations, going backward usually isn't feasible.)
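In code, B could be as simple as an ordered table of upgrade functions. Everything in this sketch is hypothetical, just to show the shape of the chain:

    #include <stdio.h>

    /* Option B sketch: each driver lifts a record from one data-model
       version to the next; reading old data means running the chain
       forward. All names and fields are made up. */
    typedef struct {
        int version;   /* data-model version this record is encoded as */
        int payload;   /* stand-in for the real fields */
    } record;

    static void v1_to_v2(record *r) { r->payload *= 10; r->version = 2; }
    static void v2_to_v3(record *r) { r->payload += 7;  r->version = 3; }

    /* Ordered: drivers[v - 1] upgrades a version-v record to v + 1. */
    static void (*drivers[])(record *) = { v1_to_v2, v2_to_v3 };

    static void upgrade_to(record *r, int target) {
        while (r->version < target)
            drivers[r->version - 1](r);   /* one hop at a time */
    }

    int main(void) {
        record r = { .version = 1, .payload = 4 };  /* "written by 6.3.1" */
        upgrade_to(&r, 3);                          /* reader speaks v3   */
        printf("version %d, payload %d\n", r.version, r.payload);
        return 0;
    }

Downgrade drivers would be the same table walked in the other direction, which is where it gets hairy: not every change is invertible.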
Who's going to write those migration drivers, though? Not OpenSSL, because they don't consider it valid to link multiple versions of their library into the same executable. And it would be difficult for anybody else to write them, because the underlying incompatible data structures were supposed to be opaque to library users. Note that I'm talking about objects that only live in program memory; they're never persisted to disk.
This is the underlying problem: it's the software developers' philosophy and practice that are the limitation, not a technical thing. Doesn't matter if it's program memory or disk or an API or ABI, it's all about what version of X works with what version of Y. If we're explicit about it, we can automatically use the right version of X with the right version of Y. But we can't if the developers decide not to adopt this paradigm. Which is where we are today. :(
Wasteful? Honestly, the Docker images I use take up more RAM than they take up disk space. If I had to give up containers, I'd have to use VMs instead, and those are significantly more wasteful.
Also, nothing stops you from putting your statically linked Go app in a container, which you can then scale horizontally with e.g. Kubernetes or Nomad.
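Something like this is all it takes (a sketch; "myapp" is a placeholder name, and it assumes the binary really is static, e.g. built with CGO_ENABLED=0):

    # Minimal image for a statically linked Go binary: no base OS,
    # nothing but the one executable.
    FROM scratch
    COPY myapp /myapp
    ENTRYPOINT ["/myapp"]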