Why we started CoreOS (coreos.com)
168 points by philips on Sept 18, 2017 | hide | past | favorite | 60 comments


If Equifax had been running a vulnerable version of Apache Struts on CoreOS, the hack would still have happened.

That the CoreOS CEO has chosen to use the Equifax hack as a marketing opportunity is both distasteful and disingenuous.

Also, invoking the founding fathers, the Bill of Rights, and the great Dr. Martin Luther King as part of that marketing pitch is the height of bombast and absurdity.

This blog post could have been written by Gavin Belson.


CoreOS's clair[1] will automatically reject (or alert on) new deployments of containers with known CVEs. So at a minimum, it could have let the administrators know they were running vulnerable images, so they could fix the issue before the hack occurred.

[1] https://github.com/coreos/clair
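For reference, a rough sketch of the workflow (assumes a running Clair instance and the community klar client; the image tags, service wiring, and env vars here are illustrative and vary by version):

```shell
# Run Clair alongside its Postgres backend (configuration omitted for brevity)
docker run -d --name clair-db postgres:9.6
docker run -d --name clair --link clair-db -p 6060:6060 quay.io/coreos/clair:latest

# Scan an image with klar: it submits the image's layers to Clair's API and
# exits non-zero if vulnerabilities are found, so it slots into a CI pipeline
CLAIR_ADDR=http://localhost:6060 klar myorg/myapp:1.0
```

The CI-gate behavior (fail the build on known CVEs) is the part that would have surfaced a vulnerable Struts image before deployment.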


One of the features of their stack is live single-click or even automatic updates of your technology stack while your application keeps running. That doesn't guarantee you will keep your stack up to date, but it makes patching security issues considerably easier and less painful, which means it's far less likely that you will be running a known-vulnerable version. So it's not entirely BS, even though the post never spells out why CoreOS helps security. I only know this because I happened to watch a talk last night where Alex Polvi did spell it out. But yes, the rhetoric is a bit heavy-handed.


This is built-in functionality of Kubernetes (& OpenShift) called Rolling Update [0]/Rolling Deployments [1] and Image Triggers [2].

[0] https://kubernetes.io/docs/tasks/run-application/rolling-upd...

[1] https://docs.openshift.com/container-platform/3.6/dev_guide/...

[2] https://docs.openshift.com/container-platform/3.6/dev_guide/...
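For the curious, the Kubernetes side of a rolling update really is a one-liner (the deployment and image names below are made up):

```shell
# Point the deployment at a patched image; Kubernetes replaces pods gradually,
# keeping the app serving traffic throughout
kubectl set image deployment/struts-app struts-app=myrepo/struts-app:2.3.32-patched

# Watch the rollout progress, and roll back if something breaks
kubectl rollout status deployment/struts-app
kubectl rollout undo deployment/struts-app   # only if needed
```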


It depends on the base Docker image this Struts 2 web app would run on, and whether it would allow running the OS commands involved in the hack. But yeah, CoreOS/Tectonic is not exactly known for any security features that would have helped with this.


"Install kubernetes in 15 minutes or less -> bare metal" leads to this:

https://coreos.com/tectonic/docs/latest/install/bare-metal/r...

There are over 15 non-trivial subtasks there (counting "foo and bar" under one bullet as two subtasks). Worse, these are just the prerequisites before the actual installation!

I really want something like what CoreOS claims to be to exist, but always feel like the victim of some inside joke when I actually try to install it.

Does anyone know of any alternatives that can be setup in a matter of hours and not days?


Big disclaimer, I work for SUSE on the team for this product (though I actually work on upstream projects more than the product itself).

SUSE's "Container as a Service Platform"[1] has a one-page install for each node with an admin panel that does auto-provisioning. It also has transactional updates and a few other nice features similar to CoreOS's "Container Linux". In addition, we're working on openSUSE Kubic[2] which will be the community-developed base for CaaSP in the future.

I believe most if not all of the code is available on GitHub[3] under a free software license (because that's how we do things). Not to mention that you could very easily fork openSUSE Kubic, once it's ready.

[1]: https://www.suse.com/communities/blog/introducing-suse-conta...

[2]: https://news.opensuse.org/2017/05/29/introducing-kubic-proje...

[3]: https://github.com/kubic-project


Given that you work for SUSE and I kinda like you guys, I really hope it's different from the SLES licensing model.

I work at a German govt. institution which uses SLES, and we've recently decided to move away from it for a simple reason: if you want to do something like immutable infrastructure, SLES is painful to use, because you need to do everything through the SMT admin.

You won't even know which packages are available in SLES until you sign up for it. Sure, you can use openSUSE, but it's not the same packages or the same package versions, and the open repo, while nice, is imho not as comfortable to use as Ubuntu packages. IMHO it's not very user friendly.

And yes yast2 is great, but I'm not talking about graphical user interfaces when I say user friendly.

By the way, there are a few more "ContainerOSs" than just CoreOS nowadays.

- There's Project Atomic [1], which became Atomic Host [2]

- There's RancherOS, which I find interesting because it seems to have first-class ZFS support [3]

[1]: http://www.projectatomic.io/docs/kubernetes/

[2]: https://www.redhat.com/en/resources/enterprise-linux-atomic-...

[3]: http://rancher.com/docs/os/v1.0/en/storage/using-zfs/


I'm not involved in sales, so I can't really comment on what the licensing model is like.

> If you want to use something like immutable infrastructure SLES is painful to use, because you need to do everything through the SMT admin.

CaaSP nodes don't really have configurable packages (though I think that's something we're considering adding in the future). Everything is done using Kubernetes specifications, so you won't need to use SUMA to manage your nodes' running containers (though the admin panel is fairly similar when bootstrapping and maintaining a cluster).

> Sure you can use openSUSE, but it's not the same packages, it's not the same package versions, and the open repo, while nice, is imho not as comfortable to use as ubuntu packages.

This is something we've changed quite recently. You can figure out the package versions through Leap (and, from memory, the SLE sources are actually visible in OBS). There's also "SUSE PackageHub", which is basically openSUSE backports that don't invalidate your license but are not directly supported (other than the same support we'd give to openSUSE issues). I think this is something I should raise with Richard Brown though, since on paper you should already have this information from the verbatim SLE sources we release that Leap is based on.

> By the way, there are a few more "ContainerOSs" than just CoreOS nowadays.

Yeah, though I expected (and was right) that people who work on those would be more knowledgeable than me and would comment on them. I work quite a lot with both of those groups on upstream stuff, so I'm well aware that they exist (in fact I've contributed to several Project Atomic projects in order to be able to re-purpose their code and use it with our OCI tooling[1]).

[1]: https://github.com/openSUSE/umoci


Setting up a production bare-metal deployment of a distributed system is going to involve a number of steps the first time. We have found there is a large range of existing automation and basic infrastructure across different people's environments.

Generally folks try out our Tectonic Sandbox (Vagrant-based)[1], or Tectonic on AWS/Azure/etc., before diving into bare metal, to get a sense of the product. This can all be done really quickly because of the consistency.

What sort of data center automation infrastructure do you have in your environment?

[1] https://coreos.com/tectonic/sandbox


I took this route. It took me two days, not 15 minutes. You need to set up a PXE server (coreos/matchbox) beforehand, which I needed to get working with an existing PXE/DHCP deployment. And I had to do some tweaking on the Terraform templates Tectonic generated (my disk names are vd* instead of sd*, since I use virtualization even for "bare metal"). So I needed to learn what Terraform was.

But still a much smoother experience than Kubernetes The Hard Way. It is almost one-click.

The 15 minutes is probably true if you use Azure or AWS. But deploying something in an existing environment always takes planning and time. However, CoreOS is more than willing to help. (I simply asked questions on IRC, but they also have an enterprise support plan.)


Ubuntu 16.04 and kubeadm get you from zero to k8s in about 4 hours or so.

I recently gave up trying to fight the uphill battle of doing things with CoreOS and Tectonic in my homelab. Their requirements are fine if you have high-end, server-class hardware to spare (matchbox /requires/ IPMI).

Started to go the NixOS route, but found it more infuriating than trying to set up Tectonic, mostly because their recipe for k8s is super old.

In the end, I gave up trying to use the big tools and fell back to Ubuntu and kubeadm. Started at 8am and ended at around noon with a fully functional 9 node cluster running v1.6 with RBAC and TLS enabled by default.
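The kubeadm path above boils down to a handful of commands (a sketch; the CIDR and join details below are illustrative, and exact flags vary by Kubernetes version):

```shell
# On the master, after installing docker.io, kubeadm, kubelet, and kubectl:
sudo kubeadm init --pod-network-cidr=10.244.0.0/16   # CIDR shown matches flannel's default

# Install a pod network add-on, e.g. flannel
kubectl apply -f https://raw.githubusercontent.com/coreos/flannel/master/Documentation/kube-flannel.yml

# On each worker, run the join command kubeadm init printed, roughly:
sudo kubeadm join --token <token> <master-ip>:6443
```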


There are some guys working on a new k8s module for NixOS. https://github.com/NixOS/nixpkgs/pull/25426


Try https://www.ubuntu.com/kubernetes. The video there shows an automated deployment of Kubernetes to AWS. For bare metal, choose the "MAAS" option (MAAS is Ubuntu's solution to bare metal deployments by API and replaces AWS in the stack for bare metal).

Disclosure: I work for Canonical.

Edit: I realise this isn't exactly "like" CoreOS. But I think it's useful to post about the traditional alternative for comparison :)
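If memory serves, the short path on Ubuntu at the time of writing is conjure-up (spell and snap names may differ by release):

```shell
# Install the conjure-up deployment tool as a snap
sudo snap install conjure-up --classic

# Launch the Kubernetes spell; pick the MAAS cloud for bare metal, AWS for cloud
conjure-up kubernetes
```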


Docker Swarm. It really is as simple as installing the Docker daemon on a cluster of machines and issuing a command to link them together.
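Concretely, "a command to link them together" looks like this (the IP and token are placeholders):

```shell
# On the first node: create the swarm; this prints a join command with a token
docker swarm init --advertise-addr 192.168.1.10

# On each additional node: paste the join command printed above
docker swarm join --token SWMTKN-1-<token> 192.168.1.10:2377
```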


I have to agree. Swarm is probably the easiest thing to get going besides the Docker engine itself. Honestly I felt like I might have been missing some step or component when I set up a swarm, because it was just really easy. Then I could start swarm services by making some slight modifications to my pre-existing Docker Compose files. It is very very easy.
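For anyone curious, reusing a Compose file as a swarm stack is a one-liner (requires the v3 Compose file format; the stack and service names here are made up):

```shell
# Deploy an existing Compose file as a swarm stack
docker stack deploy -c docker-compose.yml mystack

# Each Compose service becomes a swarm service, named <stack>_<service>
docker service ls
docker service scale mystack_web=3
```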


And it also runs very stably. It's not perfect, but I've certainly run into fewer bugs than when I was trying to get Kubernetes set up.


Agree, for quick and easy Docker clusters Swarm is hard to beat. Although it's not quite the same as Kubernetes or Tectonic (Swarm needs too many DIY things to make it truly usable in more complex setups, with load balancers and whatnot).


I'm working on an approachable (i.e. easy to setup and manage) container orchestrator at http://quilt.io that may be worth checking out. It's still very early beta quality software, and some features are missing (notably bare-metal isn't there yet, but it's on the roadmap). That said, it's designed to solve precisely this problem -- ops shouldn't be this hard.


Cool! Bare metal would be the killer feature for me to adopt this for my use cases.



Already in progress =)


Not sure about true "bare metal" but try https://github.com/kubernetes/kops for 0-to-kubernetes on AWS, including provisioning instances, storage, etc.
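The kops flow is roughly this (a sketch; the state-store bucket, DNS zone, and cluster name are placeholders, and you need AWS credentials configured):

```shell
# kops keeps cluster state in an S3 bucket
export KOPS_STATE_STORE=s3://my-kops-state

# Provision instances, networking, and storage, and bring up Kubernetes
kops create cluster --zones us-east-1a --name k8s.example.com --yes

# Tear everything it provisioned back down
kops delete cluster --name k8s.example.com --yes
```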


Full Disclosure: I'm a Red Hat Consultant for OpenShift.

There are lots of container platforms these days: Docker Swarm, Rancher, OpenShift (and its upstream project, OpenShift Origin), and still others. IMO, the leading-edge ones are based on Kubernetes [00]. I'm (obviously) most familiar with OpenShift, so I'll speak to that, and I'll answer questions honestly if you have them for me.

From field experience, once you have your hardware or VMs ready [0][1][2][3], it takes 10 minutes to stand up a one node cluster or about 30 minutes for an HA cluster with 9 nodes (3 masters, 3 infrastructure nodes, 3 application nodes). We have blogs written by Eric Schabell [4] that show you how install a basic system with a script in minutes. We also have minishift [5] (downstream of minikube [6]) which is another great way to try OpenShift on Linux, MacOS, or Windows with Vagrant and VirtualBox. Paying customers can download CDK [7] on RHEL, MacOS or Windows. You can also try it at OpenShift Online [8] (hosted OpenShift as a Service) or OpenShift.io [9] (IDE and DevOps tools integrated with OpenShift Online).

I hope the above doesn't sound too much like an advertisement but we offer several FOSS ways to try it and I linked to upstreams where applicable.

[00] https://kubernetes.io/

[0] https://docs.openshift.org/3.6/install_config/install/prereq...

[1] https://docs.openshift.org/3.6/install_config/install/host_p...

[2] https://docs.openshift.org/3.6/getting_started/administrator...

[3] https://docs.openshift.org/3.6/install_config/install/advanc...

[4] http://www.schabell.org/2017/08/cloud-happiness-how-to-insta...

[5] https://github.com/minishift/minishift

[6] https://kubernetes.io/docs/tasks/tools/install-minikube/

[7] https://developers.redhat.com/products/cdk/download/

[8] https://www.openshift.com/features/index.html

[9] https://openshift.io/


Disclosure: I work at ContainerShip.

We have a SaaS-based product that can stand up a Kubernetes cluster for you, and it really does take only 10-15 minutes from signup to being live. https://containership.io


I've done the bootkube setup by hand; it's pretty quick but lacks any documentation. It's on the list to document/blog about when time permits.


Joyent Triton.


They specifically referred to bare-metal deployments (presumably local). While you could deploy SmartOS locally and run Triton yourself, I have a feeling that running a control plane like Triton is a bit overkill for a self-hosted deployment. Not to mention it likely takes several days too.


There is "CoaL", or "Cloud on a Laptop", from Joyent, just to get your feet wet with Triton:

http://blog.shalman.org/running-sdc-coal-on-smartos/

This video demonstrates that it doesn't take days, and the description even says it is just as suitable for bare metal as it is for VMware:

https://www.youtube.com/watch?v=apZ-G5LwYiY


Just set up and use Rancher (http://rancher.com/) and you'll have something usable in 10-15 minutes. Then your cluster can just grow as your needs grow.


I wish there were more A-to-Z guides about Rancher: say, setting it up in a local environment without immediately connecting it to AWS, using maybe a Linode host or a Proxmox box, and getting it off the ground. I found Rancher complex, but it seems pretty darn powerful to me, like the Proxmox of Docker orchestration.



I disagree with this article. CoreOS wouldn't have helped at all in this particular case. First of all, Struts is not a system dependency, so it wouldn't be auto-updated.

Well okay, how about Clair, the vulnerability scanner? It scans container layers, i.e. system dependencies. It doesn't scan your Maven dependencies (maybe I'm wrong here). So your entire security still relies on someone updating a project's dependencies regularly, which was already the problem. What did we gain? In this particular case, nothing.

Of course Kubernetes makes rolling out updates a lot easier, and thus a team might patch more often. But that is not the OS.

Of course, for system-level security vulns CoreOS is great and on the right track. But saying they would have caught Equifax is a big stretch. Equifax was not a big hack with multiple zero-days. It was a team that didn't update their application.


While admirable, I don't really see how CoreOS automatically makes the internet more secure as they claim. I'm assuming this blog is targeted at other tech folks so definitely expected a little more detail than a small blurb.

I say this as a fan of CoreOS, and I really appreciate their open-source work.


Their measurement and TPM support is probably the best in the industry at the moment. Matthew Garrett worked for them on this, and effectively you can have completely measured boot of a system and the containers running on it. With IMA this would allow you to have systems that cannot run binaries that are not signed (the kernel simply wouldn't allow it).


In a few words: with CoreOS Tectonic we are reducing the toil teams endure to update both application infrastructure and applications. We believe that security begins with a simple, regular process for getting software updates out the door, and that our tools and products remove toil from that process.

Two sides of the problem:

Infra: CoreOS Tectonic[1] provides one-click updates of the entire app infra stack from the VM/bare-metal Linux instances[2] through the Tectonic control plane including Kubernetes, identity services, monitoring tooling, etc[3]. We call this automated operations.

App: By leveraging Kubernetes APIs, application teams can roll out updates independently. Further, container scanning tools enable app teams to scan in-use containers for CVEs.

[1] https://coreos.com/tectonic

[2] https://coreos.com/blog/introducing-container-linux-update-o...

[3] https://coreos.com/blog/announcing-tectonic-1.7.1


A few years ago I worked at a big company that desperately needed something like this. Without it we more than did our part to make the internet less secure.

Without containers, software releases and infrastructure upgrades were highly interdependent. The result was that the software never released and the infrastructure never got upgraded.

Being able to upgrade individual services, independently of the infrastructure, is a bigger enabler than you would think in a large company. When you enable this, teams are suddenly able to release more often. Features ship faster and the lifetime of application vulnerabilities shortens.

Meanwhile, if Tectonic works as advertised, your infrastructure can auto-update but continue to provide a stable API to the services it supports. Again, the lifetime of vulnerabilities shortens, potentially by a lot.


This is imo a backwards way of doing things properly. Binaries should be sandboxed via a .ctor using seccomp and unshare, preferably in their own section so updating or removal is trivial. The remaining issue is "how can you have multiple tools in an all-in-one container?"
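Not the parent's exact .ctor scheme, but the namespace half of that idea can be sketched with util-linux's unshare (the seccomp filter would be layered on top, e.g. via libseccomp):

```shell
# Run a shell with its own user, PID, and mount namespaces and no network.
# --map-root-user uses a user namespace, so no real root is required
# (assuming the kernel permits unprivileged user namespaces).
unshare --user --map-root-user --pid --fork --mount-proc --net \
    sh -c 'ps aux; ip addr'
# ps sees only the sandboxed processes; ip addr sees only a down loopback
```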


I like CoreOS, and I know this is some post that is meant for marketing purposes. But it's a shame they're propagating a myth that good security is the result of good tools.

It's a common fallacy to think that these security problems have a technical solution first. They don't. CoreOS makes tools that help security-aware companies/organizations implement security, but being sufficiently security aware is the first step. Buying and using CoreOS products and support does not help you at all if you don't know how to use them well, or if your organization doesn't allow the engineers to use them effectively. It's become abundantly clear that mismanagement is the root cause of the Equifax hack, and that the vulnerable Struts server is just a symptom of it. A fool with a tool is still a fool.

It's attractive to think CoreOS's products, or some other vendor's product, would have avoided this. But given the mismanagement it seems unlikely, and at best it would have just plugged one hole until another popped up later. Who knows, they may have already plugged some earlier, worse holes with other security products. Making the tools to make the internet more secure is the easy part. The hard part is getting everyone to use them in the right way. If tools and buying things were the answer, the internet would already be a much more secure place.

So keep doing you, CoreOS; keep making those tools. But please don't oversell yourself.


> I believe in freedom. Founding fathers, Martin Luther King, Jr. – that kind of freedom. It’s the same kind of freedom that motivates my passion for free software.

Oh boy, this CEO is laying it on pretty thick here... I'm a pretty steadfast advocate for free software--and I think open software does make a better society--but even then, I think it's a bit strange to use a comparison to MLK for something that seems an order of magnitude less important.

It's giving me serious flashbacks to Silicon Valley.

https://www.youtube.com/watch?v=J-GVd_HLlps


This looks like some epic PR talk for marketing purposes. Maybe they want to sell it to Equifax-like companies?


And likely not letting a crisis go to waste.


An auto-updating OS doesn't really help you with web application vulnerabilities. Even something like Clair would be unlikely to help - they likely knew about the numerous struts vulnerabilities they faced and simply chose not to care (and prioritize other more exciting business prospects - like selling identity theft protection)


I like their product, but invoking the founding fathers of the US and MLK when talking about a more secure OS seems a bit much.


It does evoke the brilliant:

"We're making the world a better place through constructing elegant hierarchies for maximum code reuse and extensibility."

It's an infrastructure piece, which is several abstraction layers below the things that do impact freedom and all the other abstract values that high school LD debaters love to talk about. That they continue to parrot the "just missed a security update" view of the Equifax hack is both sad and makes me highly doubt their security acumen. True security relies on layers, not an impenetrable outer shell. Equifax's blunder was a fundamental lack of security-conscious architecture, not a failure to install a security update. I'm guessing that CoreOS actually understands this; they're just trying to profit from an event that's going to have real negative consequences for a lot of people whose information was stockpiled without their consent, which is a pretty crappy thing to do.


You're not wrong - this is pretty cringeworthy.


It's no mystery what's wrong with Equifax to those who have accounts with all three bureaus. Two seconds inside the site will tip you off; hint, it's much higher level than choice of hosting platform. CoreOS is certainly trying to get on the radar here for when the other shoe drops and Equifax starts spending serious money on this stuff. The whole thing seems a bit tone deaf in timing though.


You wouldn't think that patching a Java web framework could protect the personal financial information of millions of Americans, but here we are. Security matters.

I used to work in the IT department at an environmental research agency here in the UK. I took enormous pride in the quality and importance of the research conducted by the people I worked with, even though I just helped them work more easily with better tools.


Especially since it took another 80 years for the country that the 'freedom-loving' Founding Fathers created to actually get rid of slavery. Buggered if I know why some people get so religiose about the Founding Fathers when they missed such simple things like "hey, freedom for all means no slaves. And women get the vote as well."


They enacted tremendous social change for their time, but they weren't dictators. Even if they wanted to abolish slavery or enable women to vote, it would have never happened because those were very unpopular views at the time. Only a (benevolent?) dictatorial regime can act against the popular will of the people to enact changes faster than the people can accept them.

Societal change has a hull speed, and attempting to exceed that hull speed will result in massive push-back and ultimately failure even if the change in question is viewed as 'obvious' in historical hindsight.

We can see the exact same thing happening today with the legalization of gay marriage or drugs. Both will be viewed as obvious a generation from now, but today they face tremendous pushback. That's not the fault of governance, that's just governance imposing changes that approach the societal hull speed on those issues.


Also, it took a war to get rid of slavery.

I can understand why a new nation battling off a major world power for independence didn't have a civil war at the same time.

Sometimes, you have to pick your battles.


The entire essay reeks of PR; if they were so security conscious they'd have picked SmartOS zones, with their full isolation, as the basic building block, or even taken OpenBSD and extended it. Instead, they picked an OS with a very wide attack surface and a history of vulnerabilities. If you were setting out to design a security-minded product, would you pick a substrate like Linux or one like OpenBSD?


Usually blog posts like this come at pivotal times in a company's lifetime.

There doesn't seem to be anything yet. Interesting.


My guess is it's easier to get attention for selling cloud security after something like the Equifax breach happens.


For everyone criticizing this because there's not much actual substance, I'm fairly certain this piece wasn't aimed at you - it was aimed at people that get taken by this kind of rhetoric over actual technology. If it comes down to the usual players in enterprise software or CoreOS / Tectonic, I'd be perfectly happy to see that stack as an option in the F500. I'm sure there's a lot of reservations we all have about the tone and content, but as long as the fundamental goal isn't just selling I can let it slide.


As far as I know, if a malicious user compromised the central CoreOS update service and pushed a new rolling update, then everyone using CoreOS who (presumably automatically) received that update would also be hacked, the end. This is an even bigger evil; it's no different from a botnet. I don't know if CoreOS has some kind of update signing/verification, but based on my assertion I wouldn't suggest using it.


"At CoreOS, our aim is to arm these companies with the tools to build their cloud services – and run our digital lives – correctly. We’re also dedicated to making it so easy to run these highly complex systems that they take care of themselves. What if they’ll never miss an update again. They’ll have all the security features turned on by default and new versions of applications will ship quickly and safely." What a load of absolute dreck this article is.


Off-topic minor note to site owner: You have two shortcut icon references in your head, and one of them references the relative ico/favicon.png which 404's. Chrome falls back but many others do not and you get no favicon.


Why do people lie about founding stories? Most of them boil down to two things:

1. We like making money.

2. We enjoy working in the domain the company operates in.

Everything else is usually made-up crap for branding, PR, and to feed tech journalists.


It was my understanding that CoreOS was started as a minimal Linux distro tooled for hosting and orchestrating Docker containers. IMHO, Docker still has a long way to go to impress the security community.



