Defence Against the Docker Arts (heroku.com)
236 points by codefinger on April 3, 2019 | 97 comments


Here’s a little known fact: “docker build” can trivially be extended to build images with buildpacks or CNBs. Now that the buildkit refactoring is complete, Dockerfiles are just the default frontend. There’s already a buildpack frontend in the community repo, and it works great. Writing your own frontend is really straightforward.

Honestly after years of stagnation, the most exciting work on container building is now coming out of Docker. Buildkit is amazing, a real hidden gem.

See https://github.com/moby/buildkit
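To make that concrete: with BuildKit enabled, the first line of the build file names the frontend image that interprets the rest, so swapping frontends is a one-line change. A sketch using the standard Dockerfile frontend (a buildpack frontend image from the community repo could be substituted on that line):

```dockerfile
# syntax=docker/dockerfile:experimental
# The directive above tells BuildKit which frontend image should parse
# and execute this file. Point it at a buildpack frontend instead and
# "docker build" will run buildpacks rather than Dockerfile commands.
FROM alpine
RUN echo "built by the default Dockerfile frontend"
```

Built with `DOCKER_BUILDKIT=1 docker build .`; the daemon pulls the named frontend image automatically.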


While I really appreciate the work tonistiigi did to create the Cloud Foundry buildpack frontend for buildkit, it uses a compatibility layer[1] (which I wrote myself and no longer maintain) that only works with deprecated Cloud Foundry buildpacks that depend on Ubuntu Trusty. It doesn't work with the new, modular Cloud Native Buildpacks, and the buildpacks that ship with it are outdated (and vulnerable to various CVEs). It will stop working with new buildpack versions entirely when Cloud Foundry drops support for Trusty.

Implementing CNBs as a buildkit frontend would break key security and performance features. For instance, CNBs can build images in unprivileged containers without any extra capabilities, which buildkit cannot do. CNBs can also patch images by manipulating their manifests directly on a remote Docker registry. This means that image rebuilds in a fresh VM or container can reuse layers from a previous build without downloading them (just metadata about them), and base images can be patched for many images simultaneously with near-zero data transfer (as long as a copy of the new base image is available on the registry). As far as I know, buildkit can't do any of that yet.

That said, we do plan on using buildkit (once it ships with Docker by default) to optimize the CNB pack CLI when you build images without publishing them to a Docker registry. It's a huge improvement over the current Docker daemon implementation for sure!

[1] https://github.com/buildpack/packs


Your answer makes sense, but it actually makes me less excited about CNB.

It sounds like CNB will break compatibility with the massive Dockerfile ecosystem, in exchange for... sometimes not downloading a layer? That is not appealing to me at all, because Dockerfiles are too embedded in my workflow; losing support for them is simply not an option.

As for unprivileged builds, I don’t see any reason buildkit can’t support it since it’s based on containerd.

I think it’s a mistake not to jump on the buildkit/docker-build bandwagon. You would get a 10x larger ecosystem overnight, basically for free. Instead it seems like you’re betting on CNB as a way to “kill” Dockerfiles. But users don’t actually want to kill anything, they want their stuff to continue working. Without a good interop story, you’re pretty much guaranteeing that CNB will not get traction outside of the Pivotal ecosystem. Seems like a shame to me.


Meanwhile some of us were already looking for alternatives to Dockerfiles due to roughly the same reasons described in the blog post and are excited about having this option. No connection to Pivotal or Heroku myself, and indeed the blog post came from Heroku, not Pivotal.


Image rebasing and layer reuse really matter at scale. Patching base images for many images simultaneously in a registry is a huge win if you are an organization running thousands of containers and there is a CVE in one of the OS packages in your base image. Optimizing data transfer similarly matters when you add up the gains across many containers.

And devs like fast builds too :)

I also don't really see how this project is "incompatible" with anything in the docker ecosystem. I hope Dockerfiles have a long and healthy life. CNBs are another method to build OCI compatible images that provides a lot of benefits for some types of users and use cases.

I expect that the pack CLI will eventually build on top of, and take advantage of, Buildkit.


The incompatibility I mentioned is with Dockerfiles.

You’re right that more flexible patching and optimizing transfers are valuable. But those problems are independent of build frontends: you could solve them once for both buildpacks and Dockerfiles. In fact buildkit is well on its way to doing exactly that.

Basically I would prefer if buildpacks and Dockerfiles could all be built with the same tooling. CNB seems like a wasted opportunity to do that, because it bundles two things that should be orthogonal: a new build format, and a new build implementation. Docker is going in the opposite direction by unbundling the format (Dockerfile) from the implementation (buildkit).


> But those problems are independent of build frontends: you could solve them once for both buildpacks and Dockerfiles.

You can build an image with both technologies, but that's not the key to the argument. The key here is every Dockerfile is unique and potentially quite different from any other Dockerfile. Small differences in layer order and layer contents multiply to very large inefficiencies at scale.

The way you tackle this problem is to make the ordering and contents of layers predictable for any given software that is being built. You can achieve this with Dockerfiles using golden images, strict control of access to Docker Hub, complicated `FROM` hierarchies, the whole shebang.

But at that point you are reinventing buildpacks, at your own expense.

Note that this doesn't change with or without buildkit.

> a new build format, and a new build implementation. Docker is going in the opposite direction by unbundling the format (Dockerfile) from the implementation (buildkit).

Is your understanding that we invented a new image format or that we rewrote most of Docker? Or that the way we've written it prevents, for all times and all purposes, adopting buildkit as part of the system in future?

Because those are misapprehensions. We have extensively reused code and APIs from Docker, especially the registry API.


> Small differences in layer order and layer contents multiply to very large inefficiencies at scale.

Can you provide an example of how layer order can cause an issue?


Consider:

    FROM nodejs
    COPY /mycode /app
    RUN npm install /app
Now suppose I change my app code. In a Dockerfile situation, the change to the `COPY` invalidates the `RUN npm install /app` layer, even if I didn't change anything that NPM would care about.

An NPM buildpack can signal that there's nothing to change, allowing the overall lifecycle to skip re-building that layer.
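The usual Dockerfile mitigation, for comparison, is to copy the dependency manifest before the rest of the source, so the install layer only rebuilds when dependencies actually change; it works, but it has to be hand-maintained in every Dockerfile:

```dockerfile
FROM node:10
WORKDIR /app
# Copy only the dependency manifests first: the npm install layer is
# invalidated only when these files change, not on every code edit.
COPY package.json package-lock.json ./
RUN npm install
# Application code changes invalidate layers only from here down.
COPY . .
```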

There's also the problem of efficient composition. Suppose I have this:

    RUN wget https://example.com/popular-shell-script.sh && \
        go get https://git.example.com/something-else@abc123 && \
        ./popular-shell-script.sh && \
        rm ./popular-shell-script.sh
And this:

    RUN go get https://git.example.com/something-else@abc123
Both of the resulting images will contain the same `something-else` binary and in an ideal world of file-level manifests I could save on rebuilds and bandwidth consumption (NixOS has this, approximately).

But I don't get to do that, because the layers have different overall contents and different digests. Buildpacks don't get you all the way to a file-centric approach, but because they follow a repeatable, controlled pattern of selecting the contents and order of layers, they greatly improve layer reuse between many images.
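A rough sketch of why that reuse fails at the blob level (file names and bytes invented for illustration): registries deduplicate by layer digest, and the digest covers the entire layer tarball, so one shared file inside two differently-composed layers buys nothing.

```python
import hashlib
import io
import tarfile

def layer_digest(files):
    """Build an in-memory tar 'layer' and return its content digest."""
    buf = io.BytesIO()
    with tarfile.open(fileobj=buf, mode="w") as tar:
        for name, data in sorted(files.items()):
            info = tarfile.TarInfo(name)
            info.size = len(data)
            info.mtime = 0  # pin timestamps so only contents matter
            tar.addfile(info, io.BytesIO(data))
    return "sha256:" + hashlib.sha256(buf.getvalue()).hexdigest()

binary = b"\x7fELF fake something-else binary"

# Image A: the binary plus leftovers from popular-shell-script.sh
a = layer_digest({"go/bin/something-else": binary,
                  "etc/popular.conf": b"installed by the script"})
# Image B: just the binary
b = layer_digest({"go/bin/something-else": binary})

print(a == b)  # same binary, different tar bytes, different blobs
```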


I'm not Stephen, but we've worked together on a few projects, including this one.

If you like Dockerfiles, you like Dockerfiles. Lots of people do. I did, until I'd used them for a while.

I'm not sure what you mean by "break compatibility". CNBs produce OCI images. They'll run on containerd just fine.

As for ecosystem: you'll note that the domain is heroku.com.


My favorite Docker BuildKit feature is SSH agent forwarding. Add "--mount=type=ssh" after RUN commands in Dockerfiles and the command will use your host machine's SSH agent. I've been able to greatly simplify a lot of Dockerfiles and CI build processes using it.

There's a good introduction here: https://medium.com/@tonistiigi/build-secrets-and-ssh-forward...
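A minimal sketch of the feature (the private repo URL is made up); it currently requires the experimental frontend and BuildKit enabled:

```dockerfile
# syntax=docker/dockerfile:experimental
FROM alpine
RUN apk add --no-cache git openssh-client
RUN mkdir -p ~/.ssh && ssh-keyscan github.com >> ~/.ssh/known_hosts
# The host's SSH agent socket is mounted for this one command only;
# no key material is ever committed to an image layer.
RUN --mount=type=ssh git clone git@github.com:example/private-repo.git /src
```

Built with `DOCKER_BUILDKIT=1 docker build --ssh default .`.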


This feature currently does not work with the OS X SSH agent:

https://github.com/docker/for-mac/issues/410


SSH access at build time via buildkit works on OS X:

https://medium.com/@tonistiigi/build-secrets-and-ssh-forward...

The link you've supplied concerns SSH access when running Docker images, not when building them.

... I did notice that for lots of downloads it is somewhat slower, or perhaps more prone to lag, than authentication from inside the running image.


Oh, that's nice -- that's been a pain point for a long time, and it seemed like there was a philosophical argument against making the Dockerfile's behaviour context-dependent like this.


You do have to pass an extra "--ssh default" argument to "docker build", so it's not totally automagical or anything.


At the end of the day a docker image (or qemu, firecracker, etc...) is just a tarballed root filesystem. The funny thing to me is how seemingly overly complex the ecosystem has become for turning a blueprint for a Linux system into a tarball of files and folders. What am I missing?


The benefit is versioning the OS configuration alongside the application for more reliability.

I think the main problem is trying to apply the same patterns to stateful and stateless services.


Layers?


But aren't they just an implementation detail, albeit a useful one? You'd find copy-on-write deltas in modern snapshotting filesystems too.


> But aren't they just an implementation detail, albeit a useful one?

Part of what we do in Cloud Native Buildpacks is to use the layer abstraction more aggressively to make image updating more efficient. That requires taking care with the ordering and contents of each layer, so that they can be replaced individually.

Putting it another way: we don't see the image as the unit of work and distribution. We're focused on layers as the central concept.


Parent was comparing against a tarball, not a snapshotting filesystem.


Layers are each a tarball, extracted one by one on top of one another to get you to the end result.


So docker untars a filesystem which contains a file "foo" and calls this layer "1". Then it encounters a "RUN some_installer_thing" which removes file "foo" and calls this layer "2".

If you just untar the layers on top of each other, foo will still be there. This is a problem, no?


Link to the buildpack frontend for docker: https://github.com/tonistiigi/buildkit-pack


What the heck, how is it that I haven't seen these yet? Maybe one of the downfalls of becoming comfortable enough with the syntax that I no longer have to look up documentation on the Docker website is that I'm more and more out of the loop on this stuff.

Thanks for posting that!


When we started, buildkit was still experimental. Personally, I'd like for `pack` to be able to jettison all the code that deals with the Docker daemon; it's a bit of a PITA, and buildkit looks to have significant improvements in at least that area.

I'd note that the frontend there is for "v2b" buildpacks -- the previous generation of Cloud Foundry buildpacks. The CNB lifecycle has changed a fair amount from the v2a (Heroku) and v2b designs.


Thank you for sharing this!


I've been very close to creating something like this internally. It is easy to write a Dockerfile that produces a compact and optimal container. But it's the same lines of code over and over again. Anything we have that's a static site looks like:

    FROM node:10 AS build
    WORKDIR /foo
    COPY . .
    RUN npm i
    RUN npx webpack

    FROM nginx:whatever
    COPY nginx.conf /etc/nginx/conf.d/
    COPY --from=build /foo/dist /srv
    ...
It's fine when you have one. Annoying when you have a couple. This isn't code that needs to be checked into the repository and updated. It needs to just work.

The other thing I'd like to see is the ability to output multiple containers from one Dockerfile. There is so much wasted work where I have webpack stuff and a go application that run in separate containers but are built together. There is one Dockerfile like the above to build the static part. There is another to build the go binary and copy it to an almost-pristine alpine container (including dumb-init, cacerts, tzdata, and grpc_health_probe). I don't understand why I have to have two Dockerfiles to do that.


You don't have to have separate Dockerfiles. If you use multi-stage builds, you can name the terminal stages and then build each one with `docker build --target stagename`, which will reuse the build cache as you'd expect. I've done this to export multiple app containers from a monorepo.
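A sketch of that layout (stage, path, and image names invented): one Dockerfile, two terminal stages, each buildable separately while sharing cached intermediate stages:

```dockerfile
FROM golang:1.12 AS gobuild
WORKDIR /src
COPY . .
RUN go build -o /out/server ./cmd/server

FROM node:10 AS jsbuild
WORKDIR /src
COPY . .
RUN npm install && npx webpack

# Terminal stage 1: the Go service
FROM alpine AS api
COPY --from=gobuild /out/server /server
ENTRYPOINT ["/server"]

# Terminal stage 2: the static site
FROM nginx:alpine AS web
COPY --from=jsbuild /src/dist /usr/share/nginx/html
```

Then `docker build --target api -t myapp-api .` and `docker build --target web -t myapp-web .` produce the two images; with BuildKit enabled, each invocation builds only the stages its target actually needs.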


Yo, their example has multiple stages.


I think they're talking about intermediates, which wouldn't have to be rebuilt every time.


I'm a novice docker user, but I found Dockerfiles to be probably the most direct, graspable, important part of docker.

It's one readable text file used to recreate an entire environment. It's sort of a picture worth a thousand command lines.

That said, I wish there were a way to get rid of all the && stuff, which is used to avoid writing a layer of the filesystem.

Why not have something like:

    RUN foo
    RUN bar
    RUN bletch
    LAYER


The && is just layer squashing and not really needed. You can just use RUN on every line. Docker even supports squashing images now, so the && doesn't matter if you squash.


&& doesn't layer squash, it's a trick to avoid the layer commit that RUN implies.

And squashing comes with a major drawback: you lose layer caching and any hope of a vaguely-efficient rebuilding process.
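To make the distinction concrete (package names arbitrary): each RUN commits a layer, so files deleted by a later RUN still ship inside the earlier layers, while a &&-chained command commits only the end state:

```dockerfile
# Three layers: the apt package lists live on in the middle layer
# even though the last command deletes them from the filesystem view.
RUN apt-get update
RUN apt-get install -y curl
RUN rm -rf /var/lib/apt/lists/*

# One layer: the lists are never committed, so the cleanup
# actually reduces the image size.
RUN apt-get update && \
    apt-get install -y curl && \
    rm -rf /var/lib/apt/lists/*
```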


When I first encountered Cloud Foundry, here's the arduous journey I had to first undertake to learn how to use buildpacks:

    cf push
Whereas with Dockerfiles I had to learn several commands and a lot of exciting gotchas.


I just like the philosophy of docker, although I am recognizing here the implementation has some warts.

Also, if you're just a user of docker you might not care about Dockerfiles (like most people building software don't care about the Makefiles and prefer not to look at them).


I like some of the philosophy of docker. But there are improvements left on the table around build and shipment efficiency. Plus it introduces real problems at scale, mostly around the sanity and safety of the overall cluster.


You are setting up an entire operating system to install a single microservice, and just now noticed that you have redundancies?

You could have the same issue by simply trying to rpmbuild your app. No really, you are just doing packaging. If you want more comfort, look into how Red Hat or Debian maintain their packages. They have similar problems, and most likely they have mature solutions.


If you have a single service and will never want more - sure. Once you have multiple, you actually get benefits if you depend on a lot of native libraries and external binaries. If you're in that situation, not having to upgrade every service at the same time when moving to a new base os release is really convenient.


NixOS offers the same benefits (more, actually), without containers.


Why learn how to do it right when you can just ship a filesystem in a box instead? sbuild manpage is too long!


I've always felt that buildpacks in Heroku / Cloud Foundry are the way to go, as they offer a higher level of abstraction than Dockerfiles. The resulting containers are often production-ready with good default settings. In Docker you are reinventing the wheel more often than not.


With Dockerfiles you can achieve high-level abstractions by using proven images and composing multi-stage build assets. And then you can customize them at a lower level of abstraction with inline bash or independent scripts.

E.g. look at phusion/baseimage or phusion/passenger - they are production-ready and come with good defaults. But you also have an easy way to augment them with the latest ffmpeg compiled from source to support some exotic format for a video conversion worker.


I mean, I'm essentially a beginner (with lots of system experience), and I'm noticing a lack of intermediates in these comments.


I don't think I'd argue that you can't do what buildpacks do with Dockerfiles.

Basically, why do the work yourself? Especially if someone else will solve the weird problems and keep everything up to date with no effort on your part.

I know from personal experience that buildpacks maintainers have seen stuff you people wouldn't believe. Attack ships on fire off the shoulder of -O2. gcc beams glittering in the dark near the Nokogiri gate.


My only criticism of Cloud Foundry/Heroku is that the buildpacks and app staging process are often bloated, because they have to be all things to everyone for the common case. There are a decent set of cases where doing custom Docker builds is more advantageous than using buildpacks; however, with the Buildpacks (https://buildpacks.io) project, this may change.


I wish this had gone into some more technical detail about what "CNB" does that is actually better. Most of the article was just rehashing some problems with Dockerfiles, but the conclusion is just "CNB fixes it!" The one specific improvement they mention is being able to "rebase" an image without rebuilding the whole thing, which certainly sounds interesting, but is not explained. How does it work? What else is CNB other than a wrapper around `docker build`?


The presentation to the CNCF TOC covers some of the technical details: https://www.youtube.com/watch?v=uDLa5cc-B0E&feature=youtu.be

Some key points:

- CNBs can manipulate images directly on Docker registries without re-downloading layers from previous builds. The CNB tooling does this by remotely re-writing image manifests and re-uploading only layers that need to change (regardless of their order).

- CNB doesn't require a Docker daemon or `docker build` if it runs on a container platform like k8s or k8s+knative. The local-workstation CLI (pack) just uses Docker because it needs local Linux containers on macOS/Windows.


> How does it work?

The OCI image format expresses layer order as an array of digests. Essentially, "read the blobs with these SHAs in this order, please".

Cloud Native Buildpacks have predictable layouts and layering. A buildpack can know that layer `sha256:abcdef123` contains (say) a node_modules directory. It can decide to update only that layer, without invalidating any other layer.

And the operation can be very fast, because you can do it directly against the registry. GET a small JSON file, make an edit, POST it back.

This is a big deal because under the classic Dockerfile model, changes in a lower layer invalidate the higher layers. This means your image can be invalidated by OS-layer changes, dependency changes and so on. It's the right policy for Docker to have -- a conservative policy -- but buildpacks have the advantage of additional context that lets them rely on other guarantees, most notably ABI guarantees.
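A rough sketch of the manifest side of that operation (the digest values and field subset are invented for illustration): rebasing amounts to a small JSON edit against the registry, not a rebuild.

```python
import json

# Minimal OCI-style manifest: layer order is just an ordered array
# of content digests (digest values here are placeholders).
manifest = {
    "schemaVersion": 2,
    "layers": [
        {"digest": "sha256:os-base-0001"},
        {"digest": "sha256:node-modules-0001"},
        {"digest": "sha256:app-code-0001"},
    ],
}

def replace_layer(manifest, old, new):
    """Swap one layer digest in place; no other layer is touched."""
    for layer in manifest["layers"]:
        if layer["digest"] == old:
            layer["digest"] = new
    return manifest

# A buildpack that knows which layer holds node_modules can update
# just that entry, then re-upload only the new blob plus this JSON.
replace_layer(manifest, "sha256:node-modules-0001",
              "sha256:node-modules-0002")
print(json.dumps([l["digest"] for l in manifest["layers"]]))
```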


There's some more detail here: https://buildpacks.io/docs/


I like the way they highlight that combining Ruby and Node.js makes for a complicated Dockerfile, while their example afterwards only includes Ruby and not Node.js. So do they propose a buildpack called ruby-nodejs? Because in many cases you don't need Node.js in your Ruby app. OMG, now buildpacks are a leaky abstraction!


The Ruby buildpack shown in the example installs Node.js for you. A developer doesn’t even need to be aware that it’s required to precompile assets.


If only we lived in a universe where there was an easy to find and ready to use dockerpack ruby-node. Oh wait.


The CNB design allows buildpacks to opt in or out as a group, so the same Node.js buildpack can work in multiple use cases: with Ruby, Java, whatever's necessary.


I love the fact that Heroku have open-sourced their buildpacks. Dokku makes great use of these and provides a platform very similar to Heroku's that you can host yourself (DigitalOcean even provide a base image that will pre-configure Dokku for you). Great for personal websites and the like.

If you want to scale in a pinch then it's a case of making some tiny tweaks and pushing to Heroku instead.


I run some things at home using dokku and I switched them all to use Dockerfiles since the buildpacks are crazy slow.

Not 100% sure if that is the fault of the dokku implementation or buildpacks in general though. It's all IO related, so I might not even have noticed it had I had faster disks.


Cloud Native Buildpacks are substantially faster in a number of scenarios. Some of the preliminary Java buildpack changes on the Cloud Foundry side have updates dropping from minutes to milliseconds.


They do seem quite slow. I don't think it's so much I/O related as I'm using a DigitalOcean VPS which has an SSD.

I'm not too fussed as I only deploy once in a blue moon if I need to quickly update my website or something. I can see it getting frustrating if you've got a larger product/app doing CI and deploys every merge to master.


Deis Workflow (now Hephy) used them too! We still use them, and they're a great resource for projects in the same vein.

Although there are some issues with buildpacks in production, they are a great set of training wheels. I'm excited about what's happening with the next generation of buildpacks: CNB is a standard with the support of more than just Heroku. Pivotal, "the other buildpacks company," is also involved, and I'm not sure how many other companies!

It's clear sometimes when you work with containers at scale that what you're building is a lot of layer cakes, and you need to treat the layers separately to remain effective at scale. Some of the ideas, like swapping layers out, are pretty far out and cool.


It's nice that they've built a tool that understands specific application contexts and can do the right things to build an efficient image. But imo that does not make the dockerfile a leaky abstraction.


I just saw a submission on the "demise" of Cloud Foundry (a Heroku-like PaaS) that's relevant to this discussion: https://medium.com/@krishnan/lessons-from-the-demise-of-clou...

Opinionated Platforms Are Risky: The CloudFoundry platform was more opinionated than some competing platforms in the market. In fact, the biggest debate between CloudFoundry and its direct competitors was about whether customers need opinionated platforms or not. CloudFoundry only supported 12 factor applications whereas platforms built on top of Kubernetes could support both stateful and stateless (12 factor) applications.

If you're building a stateless 12-factor app and there's a buildpack that does what you want, buildpacks are clearly better than lower-level Dockerfiles. But there's no buildpack for something like a database and there probably never will be, so the flexibility of directly building containers needs to exist.


Now that Cloud Native Buildpacks build OCI images instead of platform-specific artifacts (slugs or droplets), developers can choose the best build solution for their problem (buildpacks for 12-factor apps, Dockerfiles for a data store) and deploy them to the same container orchestrator.


> But there's no buildpack for something like a database and there probably never will be

That's why Cloud Foundry pioneered the Open Service Broker API.

For what it's worth, that article's recounting of history has substantial variations from my own recollection of history.


Why can't we have a buildpack for a database, when Docker would be fine?


It took me a while to get the pun. It doesn't show up in the article anywhere from what I could find, and I don't actually see how you're defending against anything here, so I wonder: did you just have this pun sitting around and were itching to use it somewhere, anywhere?


Personally, I still don't get it; would you care to explain? I'm really confused about what the title is trying to convey... :/

edit: Also, clicking through to the article, the actual title seems (now?) to be: "Turn Your Code into Docker Images with Cloud Native Buildpacks", so I'm even more confused now... o_O


Defence Against the Dark Arts is a professorial position at Hogwarts School in the Harry Potter universe - why the pun was used is what I'm confused about, since without a matching context it loses capacity as a pun.


it's a reference to she who cannot stop tweeting


GitLab would benefit from this in the context of their Auto DevOps feature set.


Yes, this is something we're definitely investigating! We have an issue open at https://gitlab.com/gitlab-org/gitlab-ce/issues/55840, please join us in the conversation there if you have thoughts on how we can do this in a way that works well for your use cases.


For sure! I was wondering about the same question and asked the team in https://www.dropbox.com/s/lcdlpz2l46e1uu1/Screenshot%202019-...


Please come visit us! We'd love to help. https://buildpacks.slack.com


They already use buildpacks, switching to this new and improved version is presumably straightforward.


So instead of a Dockerfile we now need a builder.toml file. In addition, you need detect and build scripts: https://buildpacks.io/docs/create-buildpack/building-blocks-... Is this really a simplification?


> Is this really a simplification?

Yes, because for end-user developers, you don't need any files. The files you mentioned are used by buildpack authors.


It's not a case of buildpacks vs dockerfiles. They both solve different problems with different solutions.

Buildpacks fit nicely into the Heroku way of doing things. But at any medium to large sized engineering organization, there's no way they could satisfy the requirements of even a simple majority of services the way using a Dockerfile can.


Could you give an example of such requirements?

Disclosure: I've worked on Cloud Foundry Buildpacks twice and worked on this latest effort until recently.


This seems written from an outdated perspective on Dockerfiles -- multi-stage builds landed in Docker 17.05, which is almost two years old, and address most of the concerns in the article...


Multi-stage builds address some of the problems, but they still have drawbacks. For example, at the moment they need a Docker daemon, which is a non-starter for lots of environments. They also don't help you with fast updates across a fleet of many applications unless you standardise all your Dockerfiles. At which point you are, essentially, recreating buildpacks.


Vote me down, but oh god that title...


I really wanted to hate the article because of the title, but it actually taught me some neat tricks...


They gave one Docker tradeoff/alternative. I didn't come to the article to learn about Heroku; I came for Docker tips and tricks.

Assuming most people aren't going to change their whole platform because they didn't bother to RTFM or Google.

Yes, Docker doesn't handle this well, but there is a lot more nuance to that example.


I liked the title, and even though I don't do Ruby, I saved it because of the tips and tricks.

Flick and swoosh.


it looks like the article title has changed from "Defence Against the Docker Arts" to "Turn Your Code into Docker Images with Cloud Native Buildpacks" at some point.


Hey now, I enjoyed it.


Hey I didn't say I disliked it ;)

-4 votes btw.


I mean, I'm pretty much through with talking to people about docker.

There are so many morons out there trumpeting the buzzword, and they are just total imbeciles. My last pointy-haired boss thought he could manipulate the conversation to discover what I know about containerizing EC2 instances, by guiding the conversation with leading questions that'd get me to spill my guts with nerd signaling and posturing, so that he could rip off whatever we discussed and create competition among all his direct reports.

He did this with everyone that worked for him. He'd get us alone in a one-on-one, frame the conversation as a casual discussion where we pick each others brains and sketch out flow charts and relational UML diagrams on a whiteboard, but sprinkled on top, he'd throw a peer under the bus, sell them out, and ask how I might do it better.

On some level, some of my peers really were rotten bastards; complete shit heels; lazy, arrogant pogues. But if pointy-haired boss is doing it to them, he's doing it to me too. I'm just a rube off the street, an ass in a chair at 9 AM on the dot, a warm body with a pulse after all. Replaceable and modular as a docker container, yes, yes?

So docker as a buzzword gave this boss a boner. A disgusting, throbbing, pulsating, glistening, dog-dick-red, boss boner. He wanted it sucked, and the way to suck it was to containerize bullshit with docker. So I started telling him the lowest effort shit, and planned my escape, because the guy was toxic waste, and I didn't feel like being a toadie.

It's like jesus, anyone who knows what docker is, knows it to be little more than a glorification of shell scripts and tar files. It doesn't do anything revolutionary. It's a plastic milk crate instead of a cardboard box. If you need to pack your widget with styrofoam peanuts, well, that's your problem.

Pointy-haired boss thought that docker was an art form, like package design is to product unboxing vlogs. He wanted to relish and savor the moment of unboxing the most expensive Apple product ever unboxed before the eyes of youtube.

Sorry boss man, spend the rest of your life with the lifers you can't fire. I'm done with your deceptive, hype gobbling office persona.


Seek help or some tea with a valium, you seem like you're in need of both.


“Mixing operational concerns with application concerns like this results in a poor tool for developers who just want to write code and ship it as painlessly as possible.”

Yeah, throw that code over to the ops team. Let them figure stuff out themselves.


Docker’s vision was to throw images over to the ops team so they can figure it out themselves.

Generally speaking I don’t know of many dev teams that enjoy OS and middleware patching or the nuances of Linux security. Some do of course, but many just want to code.

Buildpacks’ vision is that the ops team already knows how to install and run production systems, and that knowledge is encoded and tweaked in a buildpack.


> creating Cloud Native Buildpacks (CNB), a standard for turning source code into Docker images without the need for Dockerfile

This is a solution in search of a problem. Please stop.


I work for Pivotal. I beg to differ. We see problems that Dockerfiles create at massive scale in massive organisations, mostly around predictable upgrades, provenance, ease of CVE remediation, not having tens of thousands of running containers with mystery meat and so forth.

Right now for one of our standard large customers, remediating a critical CVE may take several hours, whether they are using current-generation buildpacks or Dockerfiles and build farms. CNBs will drive that figure down to minutes. Thousands of distinct applications, dozens of sites, several tens of thousands of containers, billions of requests per day, patched a few minutes after the buildpack releases from automation observing hundreds of distinct upstream dependencies across a dozen language ecosystems.

Anyone with enough money and people and patience can build and maintain this kind of a capability for themselves. But it's a lot cheaper and easier to pay someone else to maintain it for you.


I understand and accept that you're fine with Dockerfiles, but your statement is overbroad: Heroku and Pivotal aren't the only ones in the industry frustrated at the shortcomings of Dockerfiles. I'm glad alternatives are growing in number. (Others include Buildah, to be included in RHEL 8; and the Docker image support for Bazel.)


As well as Kaniko, Makisu, Orca and I've genuinely lost track.

Though buildpacks differ from most of these by skipping Dockerfiles altogether.


Thanks for that list. If I'm not mistaken, Buildah and Bazel skip Dockerfiles as well; not sure of the three you just named.


I forgot Jib! It also skips Dockerfiles.


Lol no. Heroku is way too much magic for my taste.



