
This sounds like an oversimplification, though:

> Bundling dependencies for production environments has always been and always will be a terrible idea.

We're considering Docker currently -- not for the distribution model at all, since we'd only ever use our own internally built & maintained images -- but as a clean way to break apart dependencies, and make it possible to run a diverse multiple-server-type environment (production) in miniature (development, demo, UAT).

I quite like the idea that something which may occupy multiple VMs or dedicated servers in production can run as a lightweight app in a dev environment, with exactly the same dependencies in place -- that's quite useful.

If this kind of use case is also a terrible idea, I'm interested to hear more -- we're just now tinkering with the idea, and haven't yet moved from theory to practice.

My own concerns revolve around how easy it will be to keep updated on RHEL patches, for example -- apparently we should be able to keep both host and app dependencies updated without much trouble, but it adds more complexity to the maintenance cycle (it seems).



> My own concerns revolve around how easy it will be to keep updated on RHEL patches, for example -- apparently we should be able to keep both host and app dependencies updated without much trouble, but it adds more complexity to the maintenance cycle (it seems).

That's roughly the "problem" with Docker – it's deceptively easy to roll out everything as its own containerized app. Updating it all later? Not so much.

That turns Docker from a magical silver bullet into a slightly fancier way to handle reproducible deployments. Using it this way is fine, but it's not what Docker is marketed as by many.


Actually it's pretty easy; I just did it yesterday for my PostgreSQL container.

Debian/Ubuntu example: sudo docker exec -it my_pgsql_container_name /bin/sh -c "apt-get update; apt-get -qqy upgrade; apt-get clean"


And what happens when you launch a new container from the same image? You have to run apt-get/yum again, or rebuild the image.
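To bake patches into the image itself rather than into one running container, a rebuild can be as small as a one-line Dockerfile fed on stdin (the image names here are placeholders, not from the thread):

```shell
# Rebuild so that *new* containers start patched; "my_pgsql_image" stands in
# for whatever internally maintained base image is in use
docker build -t my_pgsql_image:patched - <<'EOF'
FROM my_pgsql_image:latest
RUN apt-get update && apt-get -qqy upgrade && apt-get clean
EOF
```

Exec-ing apt-get into a running container, as above, only patches that one container; the rebuild is what keeps future launches consistent.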


That's why you keep everything with state in a separate volume container. Attach volume to built image and that's it.
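A minimal sketch of the data-volume-container pattern being described here (container names and paths are illustrative):

```shell
# State lives in a container that exists only to hold a volume
docker create -v /var/lib/postgresql/data --name pg_data busybox /bin/true

# The app container attaches that volume and stays stateless itself
docker run -d --name pg --volumes-from pg_data postgres

# Upgrading to a patched image just means swapping the app container;
# the data container and its volume are untouched
docker rm -f pg
docker run -d --name pg --volumes-from pg_data postgres:latest
```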


You can, if you want, mount your root as readonly so you're not tempted to modify it. Then it behaves like a Live CD.
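For instance, assuming a reasonably recent Docker, the read-only root can be requested at run time (`--read-only` and `--tmpfs` are real Docker flags; the volume name is an example):

```shell
# Live-CD style: root filesystem is read-only, so only the explicit
# volume and tmpfs mounts are writable
docker run -d --read-only --tmpfs /tmp \
  -v pg_data:/var/lib/postgresql/data postgres
```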


Mount data, logs, configuration, eventual extensions in the data container?

For pg, there might be some migration needed when jumping from a major version to the next. Which requires both versions installed, on Debian at least.


>Mount data, logs, configuration, eventual extensions in the data container?

Many programs keep their state in files that are stable across versions. If you have a cluster running the same image with different states, it's more efficient to move volume containers across the network. They're easier to back up and upgrade, too.

pg is going to give you those problems whether you are using Docker or not.
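As a sketch, a major-version jump typically goes through pg_upgrade, which needs both sets of binaries on hand -- the Debian-style paths and version numbers below are examples only:

```shell
# Run as the postgres user, with both clusters stopped;
# -b/-B are old/new bin dirs, -d/-D are old/new data dirs
pg_upgrade \
  -b /usr/lib/postgresql/9.4/bin  -B /usr/lib/postgresql/9.6/bin \
  -d /var/lib/postgresql/9.4/main -D /var/lib/postgresql/9.6/main
```

In a Docker setup this usually means a dedicated one-off upgrade image with both versions installed.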


And that defeats the whole selling point of Docker, which is no forward config: containers do not change once shipped.


Worse, doing this breaks your guarantee that all environments deployed from this image will be consistent. You'll have to deploy some config management software (Puppet/Chef/Salt/Ansible) to stay on top of these changes.


Check out Project Atomic (http://www.projectatomic.io/), or its downstream project RHEL Atomic Host. The whole update process for the host is much simpler. Read more about it here: http://rhelblog.redhat.com/2015/04/01/red-hat-enterprise-lin...

Note: I am not affiliated with Red Hat, but we are considering Docker too, and we are evaluating how Atomic would fit into our infrastructure.


Basically, you're thinking of building a custom PaaS.

I'd just use an existing one. PaaSes require an enormous amount of work to make them featuresome and robust. That's all work you're spending that isn't user-facing value.

I've worked on Cloud Foundry and so obviously I think it's the bee's knees. You might prefer OpenShift.

If you're happy in the public cloud, you can host on Heroku, Pivotal Web Services (my employer's Cloud Foundry instance) or on Bluemix (IBM's Cloud Foundry instance).


First, I'd like to point out that you cannot have a miniature version of production, and you cannot reduce maintenance complexity -- that violates the fundamental laws of nature. No matter how small, you still have the same number of moving parts, so it's effectively the same when it comes to actually operating and maintaining it.

But lucky for you, Docker provides some ways to run commands on an existing image, like the RHEL patching/updating tools. It should be possible to update an image's files using RHEL's patches, as long as the whole RHEL install is there in the images.

As far as breaking apart these sets of files into disparate dependencies: again, it's totally possible, but it does not simplify nor reduce your maintenance complexity.

Now, some really stupid people would recommend you compile applications from source and deploy them on top of RHEL, basically building all your deps from scratch. You don't want to do that, because a large company has already done it for you and put the result into a nice little package called an "rpm". You take these RPMs, find a simple way to unpack them on the filesystem, make a Docker image out of them, label/version them, and keep them in your Docker image hub. Now you have your RHEL patches as individual Docker images and can deploy them willy-nilly.
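The unpack-an-RPM step can be done without installing the package, for example with rpm2cpio (the package path and staging directory below are placeholders):

```shell
# Extract an RPM's file tree into a staging directory; an image layer can
# then be built from it (e.g. via COPY/ADD in a Dockerfile)
mkdir -p /tmp/rpmroot
cd /tmp/rpmroot
rpm2cpio /path/to/some-package.rpm | cpio -idmv
```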

(This is, of course, exactly the same as maintenance on systems without Docker, and your dev & production environments would be the same with or without Docker, but Docker does make a handy wrapper for deploying and running individual instances with different dependencies)



