What's the preferred workflow when continuously integrating and deploying in containers? At what step do you run your automated tests? Do you run them in the same image that will go into staging and then production? If using the same image, do you ship to staging and production with test dependencies included, or how do you strip away test dependencies first?
There are many ways to do this, but we (Distelli) recommend the following:
1. Run automated tests during container build (maybe in the AfterBuildSuccess step)
2. Have a single image that goes to both staging and production. Pass in environment variables or configs to operate the image differently in staging or prod
3. Don't include test dependencies in the image, so the image stays smaller. If you're running tests, don't add the tests and their dependencies in the Dockerfile; instead, have your CI system run them.
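Point 2 above (one image, behavior switched by environment) can be sketched as a small entrypoint fragment. All names here — APP_ENV, DB_HOST, the hostnames — are illustrative, not anything the thread specifies:

```shell
# The same image reads APP_ENV at startup and picks its settings accordingly.
APP_ENV="${APP_ENV:-production}"   # default to production when unset
case "$APP_ENV" in
  staging)    DB_HOST="db.staging.internal"; DEBUG=1 ;;
  production) DB_HOST="db.prod.internal";    DEBUG=0 ;;
  *)          echo "unknown APP_ENV: $APP_ENV" >&2; exit 1 ;;
esac
echo "env=$APP_ENV db=$DB_HOST debug=$DEBUG"
```

Staging and prod then differ only in what the orchestrator injects (e.g. `docker run -e APP_ENV=staging …`), not in the image itself.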
Our strategy is to have two Dockerfiles for each repo: the main Dockerfile and a Dockerfile-test, which builds FROM the main one and adds the test dependencies. During CI we first build from the main Dockerfile and then from the second one. Since the test image builds from the main one, there is no significant overhead and it's usually a very fast build. We run the tests on the test image and, if they pass, we push the main image to the registry. This means that, in practice, we do not deploy the exact image we test, but it's pretty close. It just requires some discipline to ensure the test Dockerfile adds only test dependencies and nothing more, so it stays as similar as possible to the main image.

We use Circle for continuous integration, but Distelli looks really cool. Something Circle and Travis don't give you is a pipeline feature: a system that is aware of your cluster technology enables some nice pipelines and better control.
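The two-Dockerfile flow described above might look like this as a CI script. The image names and the `run_tests.sh` entrypoint are hypothetical, and the fragment assumes a Docker daemon and an authenticated registry, so treat it as a sketch of the pipeline rather than a runnable build:

```shell
set -e
docker build -f Dockerfile      -t myapp:build .
# Dockerfile-test starts "FROM myapp:build", so its layers reuse the
# main image and this second build is fast
docker build -f Dockerfile-test -t myapp:test .
docker run --rm myapp:test ./run_tests.sh   # run the suite on the test image
docker push myapp:build                     # on success, push the *main* image
```

Note that only the main image is ever pushed; the test image exists solely inside CI.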
Very true. Our builds are very slow, and they could be much faster if Docker caching worked properly. It seems these issues will be addressed in the next major release of the platform. Let's hope.
The general idea is to test and develop on the same image that will go to prod. Environment-specific configuration should generally be injected at container runtime.
We use CircleCI. The build container is provisioned with docker, gcloud, and kubectl. The repo webhook fires on commit, the image is built, and a test entrypoint is executed on the image to run the tests. If the tests pass, the image is pushed to the project repository, and then kubectl is used to update the Kubernetes deployments with the new image ref.
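That sequence of steps could be sketched as follows. The project id, deployment name, and test entrypoint are made-up placeholders; `CIRCLE_SHA1` is an environment variable CircleCI sets to the commit being built. This assumes docker, gcloud, and kubectl are already provisioned and authenticated:

```shell
set -e
IMAGE="gcr.io/my-project/myapp:${CIRCLE_SHA1}"   # tag image with the commit SHA
docker build -t "$IMAGE" .
docker run --rm "$IMAGE" ./test-entrypoint.sh    # run tests on the built image
gcloud docker -- push "$IMAGE"                   # push to the project registry
kubectl set image deployment/myapp myapp="$IMAGE"  # roll the deployment forward
```

Tagging by commit SHA keeps every deployed image traceable back to the exact revision that produced it.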
Yes, Circle provides the build container and docker, and our scripts install gcloud and kubectl and authorize a service account that can push to the associated project image repository.
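A provisioning step along those lines might look like this. The key path and project id are made up, and the fragment assumes the service-account key has been exposed to the build (e.g. via a CI secret), so it is a sketch rather than a copy-paste recipe:

```shell
set -e
curl -sSL https://sdk.cloud.google.com | bash            # install the gcloud SDK
gcloud components install kubectl --quiet                # install kubectl via gcloud
gcloud auth activate-service-account \
  --key-file "$HOME/gcloud-key.json"                     # authorize the service account
gcloud config set project my-project                     # target the project registry
```

After this, `docker push` to the project's `gcr.io` repository and `kubectl` calls against the cluster can run unattended in CI.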
Gotcha. We're doing the same thing here. The only problem is that Circle doesn't cache the image layers, so we do a full build every time, which takes about 10 minutes.