This is a perfect example of how we're trying to design Docker: by looking for the right balance between evolution and revolution.
Evolution means it has to fit into your current way of working and thinking. Revolution means it has to make your life 10x better in some way. It's a very fine line to walk.
I think a lot of bleeding edge tools sacrifice evolution because it involves too many compromises - there's a kind of "if they don't get it, their application is not worthy of my tool" mentality, and as a result the majority of developers are left on the side of the road. I see several tools named in this thread which suffer from this problem, and as a result will never get a chance to solve the problem at a large scale.
In this example of build repeatability, "evolution" means we can't magically make every application build in a truly repeatable way overnight. However, we can frame the problem in such a way that lack of repeatability becomes more visible, and there's an easy and gradual path to making your own build repeatable.
Sure, you can litter your Dockerfile with "run apt-get install" lines, and that does partially improve build repeatability: first with a guaranteed starting point, and second with build caching, which by default avoids re-running the same command twice. Your build probably wasn't repeatable to begin with, and in the meantime you benefit from all the other cool aspects of Docker (repeatable runtime, etc.), so it's already a net positive.
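To make that concrete, here's a minimal sketch of that starting point (the base image, package names and final build command are just placeholders, not a recommendation):

    # a guaranteed, versioned starting point
    FROM ubuntu:14.04
    # each instruction is a cached layer; unchanged lines are not re-run on rebuild
    RUN apt-get update && apt-get install -y build-essential libssl-dev
    # only from here down does the cache get invalidated when your source changes
    COPY . /app
    RUN make -C /app

It's not fully repeatable (apt-get still pulls whatever the mirror serves that day), but the starting point is pinned and the steps are recorded.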
Later you can start removing side effects: for example by building your dependencies from source, straight from upstream. In that case your dependencies are built in a controlled environment, from a controlled source revision, and you can keep doing this all the way down. The end result is a full dependency graph at commit granularity, comparable to Nix for example - except it's not a requirement to start using Docker :)
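As a rough sketch of that next step (the project name, URL and commit hash below are made up for illustration), you replace a distro package with a dependency pinned to an exact upstream revision, built inside the same controlled environment:

    FROM ubuntu:14.04
    RUN apt-get update && apt-get install -y git build-essential
    # pin the dependency to an exact upstream commit instead of whatever apt ships today
    RUN git clone https://github.com/example/libfoo.git /src/libfoo && \
        cd /src/libfoo && \
        git checkout 0123456789abcdef0123456789abcdef01234567 && \
        make && make install

Repeat for each dependency you care about, and the Dockerfile becomes a record of exactly which revisions your build was assembled from.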
I agree, this is the right way to go about it. Someone with a nicely repeatable build can go ahead and get that with Docker too; someone without one still gets a nicely distributable image. Docker seems to have taken off quickly because there's a benefit very soon after you start using it, and very little gets in the way of having something running.
There's an issue in that people see the claims of one part and think they apply to the whole (I don't think the poster thinks that, but people reading it might get that impression), but this is a problem of education, not a technical one.