Forgive me, but I'm not very familiar with the rapidly growing container ecosystem. How do all these pieces fit together? Namely:
* Mesos / Yarn
* Marathon
* Kubernetes
* OpenShift
* Chronos
There are others I'm sure that I just don't recall.
Also, how does the container approach fit into the traditional VM models of OpenStack / AWS / DigitalOcean? Are these systems aiming to ultimately replace them? Do they solve the problems of networking and disk?
Maybe it's time I spent an afternoon looking into all this.
Mesos is a general-purpose framework for scheduling tasks onto a set of machines. Mesos uses a concept of resource 'offers', which custom frameworks can choose to accept or decline. YARN is similar, but it's tightly coupled to the rest of the Hadoop ecosystem and is designed to run distributed MRv2 jobs. Mesos isn't related to Hadoop other than its use of ZooKeeper for leader election and state.
Mesos itself doesn't do much without a framework.
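To make the 'offers' idea concrete, here's a minimal, purely illustrative sketch in Python. None of these class names come from the real Mesos API; the point is just the two-level scheduling pattern, where the master advertises resources and each framework independently accepts or declines:

```python
# Illustrative sketch of offer-based (two-level) scheduling.
# All class/field names are hypothetical, not actual Mesos types.

class Offer:
    """A bundle of free resources on one agent, advertised by the master."""
    def __init__(self, agent, cpus, mem):
        self.agent, self.cpus, self.mem = agent, cpus, mem

class GreedyFramework:
    """A toy framework: accepts any offer big enough for its pending task."""
    def __init__(self, needed_cpus, needed_mem):
        self.needed_cpus, self.needed_mem = needed_cpus, needed_mem
        self.placed = []  # agents where we "launched" a task

    def consider(self, offer):
        if offer.cpus >= self.needed_cpus and offer.mem >= self.needed_mem:
            self.placed.append(offer.agent)  # accept: launch the task here
            return True
        return False  # decline: the master will re-offer elsewhere

# The master offers resources from two agents; the framework decides.
offers = [Offer("agent-1", cpus=1, mem=512), Offer("agent-2", cpus=4, mem=4096)]
fw = GreedyFramework(needed_cpus=2, needed_mem=1024)
accepted = [o.agent for o in offers if fw.consider(o)]
print(accepted)  # only agent-2's offer is big enough
```

The key design point is that the master never decides *what* runs where; it only tracks resources, and placement policy lives entirely in the framework.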
> Marathon
Marathon is a framework for Mesos that runs long-lived tasks and supports interesting features like artifact staging and dependencies.
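For a sense of what "long-lived tasks" look like in practice, a Marathon app is described declaratively and POSTed to its REST API. The sketch below is from memory and the app itself is hypothetical, so check the Marathon docs for your version before relying on the field names:

```json
{
  "id": "/my-web-app",
  "cmd": "python -m http.server 8080",
  "cpus": 0.5,
  "mem": 256,
  "instances": 3
}
```

Marathon then keeps three instances of this command running somewhere on the cluster, restarting them if they die.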
> Kubernetes
Not really sure how Kubernetes differentiates itself from Mesos (besides having Google as a sponsor). I haven't used it myself.
> OpenShift
A PaaS from Red Hat that uses its own scheduling and distribution mechanisms to run the applications it builds (very similar to Heroku / Elastic Beanstalk).
> Chronos
Similar to Marathon, except that it is essentially a distributed cron (with dependencies, etc.). You can use Chronos as a full-fledged distributed job-running system, but it isn't intended for long-running tasks.
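To illustrate the "distributed cron" idea: Chronos jobs are also declared as JSON and submitted over its REST API. This example is a sketch from memory (the job and owner are made up), so verify field names against the Chronos docs:

```json
{
  "name": "nightly-backup",
  "command": "/opt/scripts/backup.sh",
  "schedule": "R/2015-01-01T00:00:00Z/PT24H",
  "epsilon": "PT30M",
  "owner": "ops@example.com"
}
```

The `schedule` field uses ISO 8601 repeating intervals rather than classic cron syntax: here, repeat indefinitely, every 24 hours, starting from the given timestamp.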
When CoreOS abandoned btrfs, it made me seriously start considering the Joyent SDC stack, above all because ZFS can answer the storage question in a way that seemingly nothing on Linux will be able to match in the near future.
I really, really like the Joyent SDC stack... It seems like a really nice solution. Though I wish they had the equivalent of S3 or Azure blob storage. Having to run your own VMs for archive storage seems like a pain, especially relative to the cost/amount of storage you get per VM.
As part of OpenShift and Kubernetes we're building stack integration for Gluster, Ceph, NFS, iSCSI, and others into the core runtime environment (so you can provision storage on demand at a cluster level). Microservices may not care much about storage, but everything else does.
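As a rough illustration of what that integration looks like from the user's side, a Kubernetes pod can mount cluster storage (NFS here) declaratively. This sketch uses current Kubernetes volume syntax, which may differ from the API as it existed at the time of this thread, and the server/path are hypothetical:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: app-with-nfs
spec:
  containers:
  - name: app
    image: nginx
    volumeMounts:
    - name: shared-data
      mountPath: /usr/share/nginx/html
  volumes:
  - name: shared-data
    nfs:
      server: nfs.example.com   # hypothetical NFS server
      path: /exports/data
```

The pod author only names the storage it needs; the cluster is responsible for wiring the mount to the backing system (NFS, Gluster, Ceph, iSCSI, etc.).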
Hi, I work for a company that runs 2.x and am working on 3.x.
Can't say too much, but it's a big shift. The rewrites are significant; what's under the hood I know less about.
It's certainly a big focus for RH. I think it's great that they've embraced Kubernetes and Docker, but I can imagine it's going to frustrate early adopters who have already got used to one set of terminologies.
I trust that RH has their reasons for such a large shift. So far they're doing amazing work contributing back to Kubernetes especially all the shepherding that Clayton Coleman is doing for the broader community. As the CEO of Kismatic I'm excited that more companies are jumping on the Kubernetes bandwagon.
It's the virtuous circle of open source I think :)
If Google and Red Hat are getting behind it, I guess they can both sell more stuff off the back of successful products. Otherwise, vendor-specific container and orchestration solutions are more likely to flounder. But as the CEO of Kismatic I'm guessing you know this already :)
Yup. It's all about communities and building out on top of things that everyone finds value in. And we don't mind doing some of the boring work (testing, reliability work) to make those communities even more successful.
One of the lead developers here - this is a ground-up rewrite. In brief, we felt that the Kubernetes process model (pods and the way containers are grouped) "felt right" - it was the right fundamental bedrock concept for running software processes in the cloud. Realizing that, it was obvious that we wanted to be built in a way that benefited from the work we've contributed to Kubernetes, so we decided to re-architect on top of Kubernetes as microservice components. There have been a lot of productive discussions about Kubernetes being the "kernel of a cloud OS", and we wanted to bring the developer experience / build / hands-off deployment flow pieces on top of that. Combined with supporting Docker images natively, the changes were sufficient to justify a rewrite (as much as we all wanted to avoid one).
Also, we had some pain points around distributing and packaging large Ruby apps into random environments, and so a switch to Go meant we could simplify the model of deploying the system components (clients, masters, node agents) onto systems. The CLI client shares a lot of code with the Kubernetes client, and having that in Go allowed us to deal less with the vagaries of deploying Ruby onto Windows (for Java developers).
And yes, I do realize that we've hit every single HN hot button thread in that list.
I tried OpenShift 2 briefly, but didn't adopt it, partially because it was too opinionated for my taste. I am excited about OpenShift 3 for its adoption of Docker containers and Kubernetes.
I like how OpenShift 3 is building on top of existing open-source technologies and not reinventing the wheel like some other OSS PaaSes have done (e.g. Cloud Foundry).