
At the end of the day, a Docker image (or a qemu, Firecracker, etc. image) is just a tarballed root filesystem. The funny thing to me is how seemingly overcomplicated the ecosystem has become just to turn a blueprint for a Linux system into a tarball of files and folders. What am I missing?
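To be concrete about "just a tarball": dump any image with `docker save -o image.tar myimage` and poke at it — as far as I can tell it's nothing but layer tarballs plus a little JSON metadata. Rough Python sketch, assuming the usual manifest.json layout that docker save emits:

    import json
    import tarfile

    # Inspect an archive produced with `docker save -o image.tar myimage`.
    with tarfile.open("image.tar") as image:
        manifest = json.load(image.extractfile("manifest.json"))
        for entry in manifest:
            print("config file:  ", entry["Config"])
            for layer in entry["Layers"]:
                # Each of these is itself just a tarball of filesystem changes.
                print("layer tarball:", layer)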


The benefit is versioning the OS configuration alongside the application, which makes deployments more reliable.

I think the main problem is trying to apply the same patterns to stateful and stateless services.


Layers?


But aren't they just an implementation detail, albeit a useful one? You'd find copy-on-write deltas in modern snapshotting filesystems too.


> But aren't they just an implementation detail, albeit a useful one?

Part of what we do in Cloud Native Buildpacks is to use the layer abstraction more aggressively to make image updating more efficient. That requires taking care with the ordering and contents of each layer, so that they can be replaced individually.

Putting it another way: we don't see the image as the unit of work and distribution. We're focused on layers as the central concept.
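A toy model of why that matters (not our actual implementation, just the idea): if an image is an ordered list of content-addressed layer blobs, swapping out the OS layers doesn't touch the blobs of the app layers above them, so a rebase only has to push the new base layers and a small new manifest.

    # Toy model only; real OCI manifests also carry config digests, sizes, media types.
    old_image = {
        "base_layers": ["sha256:os-v1"],                 # hypothetical digests
        "app_layers":  ["sha256:deps", "sha256:app"],
    }

    def rebase(image, new_base_layers):
        # Layer blobs are content-addressed independently of what sits below them,
        # so the app layers are reused as-is; only the base and the manifest change.
        return {"base_layers": new_base_layers, "app_layers": image["app_layers"]}

    new_image = rebase(old_image, ["sha256:os-v2"])
    assert new_image["app_layers"] == old_image["app_layers"]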


Parent was comparing against a tarball, not a snapshotting filesystem.


Each layer is a tarball; they're extracted one on top of another to get you to the end result.
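Roughly like this, ignoring whiteouts, permissions and such (hypothetical filenames):

    import tarfile

    # Base layer first, topmost layer last.
    layers = ["layer1.tar", "layer2.tar", "layer3.tar"]

    for path in layers:
        with tarfile.open(path) as layer:
            # Files in later layers overwrite same-path files from earlier ones.
            layer.extractall("rootfs/")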


So docker untars a filesystem which contains a file "foo" and calls this layer "1". Then it encounters a "RUN some_installer_thing" which removes file "foo" and calls this layer "2".

If you just untar the layers on top of each other, foo will still be there. This is a problem, no?
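(From what I can tell this is exactly why the image format has whiteout markers: deleting "foo" makes layer 2 carry an empty ".wh.foo" entry, and whatever unpacks the layers is supposed to special-case it rather than untar everything literally. Something roughly like this — sketch handles plain files only:)

    import os
    import tarfile

    WHITEOUT = ".wh."

    def apply_layer(layer_path, rootfs):
        with tarfile.open(layer_path) as layer:
            for member in layer.getmembers():
                base = os.path.basename(member.name)
                if base.startswith(WHITEOUT):
                    # ".wh.foo" means "foo was deleted in this layer": remove it
                    # from the result instead of extracting the marker itself.
                    target = os.path.join(rootfs, os.path.dirname(member.name),
                                          base[len(WHITEOUT):])
                    if os.path.isfile(target):
                        os.remove(target)
                else:
                    layer.extract(member, rootfs)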



