I agree. I also care more about access control than encryption. But if you obtain a kubelet's credentials, you can read all secrets. It would be nice if a kubelet's access was restricted to only what the kubelet needs to know. That would limit the impact of a node in a cluster being rooted.
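To make "only what the kubelet needs to know" concrete: one could imagine scoping each kubelet's API identity to the specific secrets its pods reference. This is a hypothetical sketch using RBAC — the node name, namespace, and secret names are made up, and Kubernetes does not actually generate per-node roles like this today:

```yaml
# Hypothetical: a Role granting one node's kubelet read access to only
# the secrets referenced by pods scheduled on it. The resourceNames
# would have to be kept in sync by something watching the scheduler.
kind: Role
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  namespace: payments
  name: node-17-secrets          # illustrative name
rules:
- apiGroups: [""]
  resources: ["secrets"]
  resourceNames: ["payments-db-password", "payments-tls-cert"]  # placeholders
  verbs: ["get"]
---
kind: RoleBinding
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  namespace: payments
  name: node-17-secrets
subjects:
- kind: User
  name: system:node:node-17      # the kubelet's client-certificate identity
  apiGroup: rbac.authorization.k8s.io
roleRef:
  kind: Role
  name: node-17-secrets
  apiGroup: rbac.authorization.k8s.io
```

The hard part isn't expressing the policy, it's keeping it accurate as pods move — which is exactly the scheduling problem discussed below.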
> It would be nice if a kubelet's access was restricted to only what the kubelet needs to know.
This seems like a fairly difficult problem to solve with meaningful security confidence. The only easy way I can see this working is if you pre-define groupings of kubelets and specify which pods may run on which kubelets, and enforce secret access the same way. Without this kind of hard separation, any kubelet is a candidate for running any pod and needing any pod's secrets at a moment's notice.
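For illustration, the "pre-defined groupings" approach maps onto node labels plus a nodeSelector. This is a hedged sketch — the label key, pod, image, and secret names are invented for the example:

```yaml
# Pin a pod (and therefore its secret) to a pre-defined group of nodes.
# Nodes would be labeled out of band, e.g.:
#   kubectl label node <node> example.com/secret-zone=billing
apiVersion: v1
kind: Pod
metadata:
  name: billing-api
spec:
  nodeSelector:
    example.com/secret-zone: billing   # only nodes in this group are candidates
  containers:
  - name: billing-api
    image: example/billing-api:1.0
    volumeMounts:
    - name: billing-creds
      mountPath: /etc/creds
      readOnly: true
  volumes:
  - name: billing-creds
    secret:
      secretName: billing-db-creds     # should only ever reach "billing" nodes
```

Placement enforcement is the easy half; enforcing that only "billing" kubelets can *read* `billing-db-creds` from the API is the part that needs the hard separation described above.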
Furthermore, this sort of separation would require some kind of PKI; you could not just trust any given kubelet to accurately claim which groups it is a member of. You'd need to provision the kubelet groups with a secret that proves their membership in the group, and we're back to the secret-distribution problem again.
And while it's true that a given kubelet may not be running a certain pod at the moment (and thus doesn't need the secret), that does not seem to be a hard security boundary. An attacker who controls the kubelet can manipulate which pods run on it, such as by terminating pods until the desired one is scheduled on it, or falsely advertising a huge amount of available resources to entice the scheduler to run pods on it, and so on. Ultimately if a kubelet is a candidate for running a pod, then it is simply a matter of coincidence whether it possesses the pod's secrets at a given moment or not.
That said, limiting kubelets to have access only to the secrets required by their active pods would make things harder for an attacker, and so would provide value. But we should also evaluate the priority of that work in the context of how hard it would be for an attacker to defeat that same restriction (e.g., advertise 1 petabyte of free RAM and disk, and 1000 idle CPU cores).
You raise some good points, like the kubelet killing off pods in hopes of getting a new pod with juicier secrets associated with it, but nevertheless the ticket mentioned by the sibling comment (https://github.com/kubernetes/kubernetes/issues/40476) sounds like a property that Docker's secret handling already has. It would be great to see Kubernetes work this way, too.
Coincidentally, I'm working on a project that uses Kubernetes and it has a very locked down pod placement policy, so the attack you described would be significantly scoped down. But I don't think the same is true of most Kubernetes deployments.
Defending the cluster from malicious nodes is not in the primary threat model of Kubernetes today. A malicious node can do many things: lie about its capacity, scan container memory for juicy secrets, inject arbitrary outgoing or incoming traffic, and in general be a jerk.
Securing nodes, preventing container escape, subdividing role access, constraining placement, limiting master surface area, and end-to-end audits have been the initial focus. Until those are in place, restricting node secret access is less critical.
It is something that several folks are interested in working on soon.