At the end of the day a docker image (or qemu, firecracker, etc.) is just a tarballed root filesystem. The funny thing to me is how complex the ecosystem has become just to turn a blueprint for a Linux system into a tarball of files and folders. What am I missing?
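(To be concrete about the "just a tarball" point: `docker save` exports an image as a tar archive containing a manifest.json plus one tarball per layer. A rough sketch of poking at one in Python; "myimage.tar" is just a hypothetical filename:)

    import json
    import tarfile

    # A `docker save`d image is itself a tar archive: a manifest.json
    # plus one tarball of files per layer.
    with tarfile.open("myimage.tar") as image:
        manifest = json.load(image.extractfile("manifest.json"))
        for entry in manifest:
            print(entry["RepoTags"])
            for layer in entry["Layers"]:
                print("  layer tarball:", layer)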
> But aren't they just an implementation detail, albeit a useful one?
Part of what we do in Cloud Native Buildpacks is to use the layer abstraction more aggressively to make image updating more efficient. That requires taking care with the ordering and contents of each layer, so that they can be replaced individually.
Put another way: we don't see the image as the unit of work and distribution. We're focused on layers as the central concept.
So docker untars a filesystem which contains a file "foo" and calls this layer "1". Then it encounters a "RUN some_installer_thing" which removes file "foo" and calls this layer "2".
If you just untar the layers on top of each other, foo will still be there. This is a problem, no?
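(I assume the fix is some kind of deletion marker in the later layer — the OCI image format calls these "whiteout" files: deleting foo in layer 2 puts an empty file named ".wh.foo" in layer 2's tarball, and whatever applies the layers removes foo when it sees that marker instead of extracting it. Roughly, a flattener would have to do something like this sketch; the layer filenames are made up:)

    import os
    import shutil
    import tarfile

    WH = ".wh."  # OCI whiteout prefix

    def apply_layer(layer_tar, rootfs):
        """Extract one layer onto rootfs, honoring whiteout (deletion) markers."""
        with tarfile.open(layer_tar) as tar:
            for member in tar.getmembers():
                base = os.path.basename(member.name)
                if base.startswith(WH):
                    # ".wh.foo" means foo was deleted in this layer:
                    # remove it from the rootfs instead of extracting anything.
                    target = os.path.join(rootfs, os.path.dirname(member.name), base[len(WH):])
                    if os.path.isdir(target) and not os.path.islink(target):
                        shutil.rmtree(target)
                    elif os.path.lexists(target):
                        os.remove(target)
                else:
                    tar.extract(member, path=rootfs)

    def flatten(layer_tars, rootfs):
        # Later layers win: files are overwritten, whiteouts delete.
        for layer in layer_tars:
            apply_layer(layer, rootfs)

    # flatten(["layer1.tar", "layer2.tar"], "rootfs/")  # hypothetical layer files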