The parent comment is wondering about the structure of the signature and whether different curve parameters can be specified for it. How could explicit curve parameters be specified in an ECDSA signature? The signature value for ecdsaWithSHA256, at least, is simply two bigints (r and s). There's no spot for specifying explicit parameters.
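To make that concrete, here is a minimal sketch in Python of decoding an ECDSA-Sig-Value as defined in RFC 3279: a DER SEQUENCE holding exactly two INTEGERs. The short-form-length assumption and the function name are mine, just for illustration.

    # Minimal sketch: decode a DER-encoded ECDSA-Sig-Value (RFC 3279).
    # The structure is just SEQUENCE { r INTEGER, s INTEGER }; there is
    # no field that could carry explicit curve parameters.
    def decode_ecdsa_sig(der):
        assert der[0] == 0x30, "expected SEQUENCE"
        # Assumes short-form lengths, which holds for common curves like P-256;
        # a real parser must also handle long-form lengths (e.g. for P-521).
        idx, ints = 2, []
        for _ in range(2):
            assert der[idx] == 0x02, "expected INTEGER"
            length = der[idx + 1]
            ints.append(int.from_bytes(der[idx + 2:idx + 2 + length], "big"))
            idx += 2 + length
        return tuple(ints)  # (r, s)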
Say I'm using ELK for log aggregation. Would Vespa be a good replacement? One pain point is ingest rate. How many "average" log lines per second can Vespa do per node?
It could be a replacement for the 'E', but the APIs are different enough that there's no drop-in replacement for the 'L' and 'K', and creating those or making them compatible would be a significant effort. Would be great if someone did though :-)
Gotcha. On the ingest front, do you have any numbers around that? I see some benchmarks that focus on other (important) aspects like QPS but didn't catch anything on ingest.
Write speed (add or update) is typically between a few thousand and a few tens of thousands of operations per second per node, sustained, depending on the size of the data etc.
Sustaining throughput over a long time is important and often overlooked in benchmarks.
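For a sense of what a single write looks like, here's a rough sketch against Vespa's /document/v1 HTTP API; the "logs" namespace, "logline" document type and fields are made-up placeholders, and a real log feed would batch requests through a feeding client rather than one HTTP call per line.

    # Rough sketch: feed one log line as a Vespa document via /document/v1.
    # Namespace, document type and fields are hypothetical; adjust to your schema.
    import requests

    doc = {"fields": {"timestamp": 1507000000,
                      "host": "web-01",
                      "message": "GET /index.html 200"}}
    resp = requests.post(
        "http://localhost:8080/document/v1/logs/logline/docid/web-01-1507000000",
        json=doc,
    )
    resp.raise_for_status()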
I recently read about how Plex got trusted SSL certificates for all their users in partnership with DigiCert, and was really curious if a similar scheme could be accomplished with Let's Encrypt. The scheme required wildcard certificates so I figured it wouldn't be possible. But with this announcement, maybe it would be! I work on a product that generates a self-signed cert and so our customers always get a cert warning. They can replace the cert with their own if they like, but some customers aren't set up to do that. Offering an alternative where we securely mediate creation of a trusted SSL cert would be fantastic.
If your product consists mainly of an HTTPS service with some particular Internet-accessible fully qualified domain name, say https://benth-app.customername.example/ where your customer owns customername.example, then it's possible already today, although you should of course make sure the customer is told what you're up to.
If your service doesn't provide HTTPS, or customers don't have it accessible from the public Internet, then you'd need their cooperation, unless you yourselves control the DNS records involved.
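As a sketch of the DNS case: if the zone (or a delegated _acme-challenge name) is under your control, a DNS-01 challenge can be completed without touching the customer's web server at all. The snippet below just shells out to certbot with one of its DNS plugins; the domain, contact address and choice of plugin are assumptions for illustration.

    # Hypothetical sketch: issue a certificate via the DNS-01 challenge using
    # certbot's Route 53 plugin (any DNS plugin for a zone you control works).
    import subprocess

    subprocess.run(
        [
            "certbot", "certonly",
            "--dns-route53",                     # requires the certbot-dns-route53 plugin
            "-d", "benth-app.customername.example",
            "--non-interactive", "--agree-tos",
            "-m", "ops@vendor.example",          # placeholder contact address
        ],
        check=True,
    )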
A rooted node has access to everything that lands on that node, and anyone who can reproducibly escape to root on a node from a container can do so on any node they can schedule on.
It's definitely something we'll fix in Kubernetes, but rooting workloads is the primary problem; defense in depth via secondary ACLs is good, but it won't block most attacks for long.
There's no way to schedule anything from a worker node -- Swarm follows a push model for all scheduling decisions; worker nodes never pull anything. This is the best ACL model possible: the one that doesn't exist because worker nodes have zero ability to perform actions.
Default ACLs are clearly the most important line of defense in an orchestrator's security model, because whether a container escape can happen is not something the orchestration system has control over.
I'm not sure I disagree, but pull vs. push with the same ACL rules in place gives the same outcome. A secure Kubernetes configuration would also not be able to schedule from a worker. Partitioning secrets is important, but anyone able to trigger node compromise still sees the secrets and workloads anywhere they can schedule.
At a design level, push removes an entire class of vulnerabilities, full stop. Pull requires good ACLs and properly implemented controls for the lifetime of the orchestration system. Pull leaves the system vulnerable to both misconfiguration and incorrectly implemented ACL code. Pull is clearly inferior.
Being able to trigger node compromise should have nothing to do with being able to schedule.
I agree. I also care more about access control than encryption. But if you obtain a kubelet's credentials, you can read all secrets. It would be nice if a kubelet's access was restricted to only what the kubelet needs to know. That would limit the impact of a node in a cluster being rooted.
> It would be nice if a kubelet's access was restricted to only what the kubelet needs to know.
This seems like a fairly difficult problem to solve with meaningful security confidence. The only easy way I can see this working is if you pre-define groupings of kubelets and specify which pods may run on which kubelets, and enforce secret access the same way. Without this kind of hard separation, any kubelet is a candidate for running any pod and needing any pod's secrets at a moment's notice.
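In today's Kubernetes, the closest approximation to that kind of grouping is node labels plus a nodeSelector on the pod; here's a rough sketch with the Python client, with the label key, pod name and image invented for illustration (and note it constrains scheduling, not secret access).

    # Rough sketch: pin a pod to a pre-defined group of nodes via a node label.
    # Label key/value, pod name and image are hypothetical.
    from kubernetes import client, config

    config.load_kube_config()
    v1 = client.CoreV1Api()

    pod = client.V1Pod(
        metadata=client.V1ObjectMeta(name="payment-api"),
        spec=client.V1PodSpec(
            # Only kubelets labeled secrets-tier=restricted are candidates for this pod.
            node_selector={"secrets-tier": "restricted"},
            containers=[client.V1Container(name="app", image="example/payment-api:1.0")],
        ),
    )
    v1.create_namespaced_pod(namespace="default", body=pod)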
Furthermore, this sort of separation would require some kind of PKI; you could not just trust any given kubelet to accurately claim which groups it is a member of. You'd need to provision the kubelet groups with a secret that proves their membership in the group, and we're back to the secret distribution problem again.
And while it's true that a given kubelet may not be running a certain pod at the moment (and thus doesn't need the secret), that does not seem to be a hard security boundary. An attacker who controls the kubelet can manipulate which pods run on it, such as by terminating pods until the desired one is scheduled on it, or falsely advertising a huge amount of available resources to entice the scheduler to run pods on it, and so on. Ultimately if a kubelet is a candidate for running a pod, then it is simply a matter of coincidence whether it possesses the pod's secrets at a given moment or not.
That said, limiting kubelets to access only the secrets required by their active pods will make things harder for an attacker, and so will provide value. But we should also evaluate the priority of that work in the context of how hard it will be for an attacker to defeat that same restriction (e.g., advertise 1 petabyte of free RAM and disk, and 1000 idle CPU cores).
You raise some good points, like the kubelet killing off pods in hopes of getting a new pod with juicier secrets associated with it, but nevertheless the ticket mentioned by the sibling comment (https://github.com/kubernetes/kubernetes/issues/40476) sounds like a property that Docker's secret handling already has. It would be great to see Kubernetes work this way, too.
Coincidentally, I'm working on a project that uses Kubernetes and it has a very locked down pod placement policy, so the attack you described would be significantly scoped down. But I don't think the same is true of most Kubernetes deployments.
Defending the cluster from malicious nodes is not in the primary threat model of Kubernetes today. A malicious node can do many things, like lie about its capacity, scan container memory for juicy secrets, inject arbitrary outgoing or incoming traffic, and in general be a jerk.
Securing nodes, preventing container escape, subdividing role access, constraining placement, limiting the master's surface area, and end-to-end audits have been the initial focus. Until those are in place, restricting node secret access has been less critical.
It is something that several folks are interested in working on soon.
"Currently, anyone with root on any node can read any secret from the apiserver, by impersonating the kubelet. It is a planned feature to only send secrets to nodes that actually require them, to restrict the impact of a root exploit on a single node."