The problem may actually be compliance requirements. SOC2/HITRUST/SOX all mandate the removal of admin rights from computers and an approval process with manager sign-off. Regulated industries, especially banking, have even more security-related compliance requirements, which cause a lot of the pain.
Unfortunately, from a security perspective, devs and system admins are probably the highest-risk targets, since they typically have access to servers and admin rights. At the very least they have source code an attacker could analyze, and they likely have access to external services.
The reality is that compliance, security, and usability are often in direct conflict, and resolving that conflict in a way that makes everyone happy takes significant work.
> SOC2/HITRUST/SOX all mandate the removal of admin rights from computers and an approval process with manager sign-off
I’ve heard this before, but never with any detail. Can you explain further, or point to a resource? For example, clearly SOX doesn’t say that nobody can have admin rights - because IT does. And I highly doubt that the law says that only departments with IT in the title can have admin access. So what does it really say?
SOX doesn't actually say much about IT at all. It mostly says that you need internal controls to maintain the integrity of financial information. Anything more specific than that around IT is just someone extrapolating what they think a good set of internal controls is.
Mostly when you hear "because SOX", that person has never actually read the document.
tyingq summed it up pretty well, and tersely, in the sibling comment. SOX specifically is all about financial information, but that information is stored and manipulated using computers, so IT-related controls end up being part of it. It's easy to get away with sourcing your controls from an industry-standard list of controls appropriate for SOX, but these are often significantly behind the times and don't acknowledge the state of the art.
These things are all based around "controls", which sometimes can be specified locally and sometimes are set by outside entities.
In my experience (SOX evidence collecting, and SOC control writing and evidence collecting), a well written control is one that covers the bases without being overly prescriptive or ambiguous. In the case of admin rights on computers, it is more useful to have the control be worded as "Only appropriate people who have legit reason to have high levels of access do" and the audit step is confirming and providing evidence that the people who do are documented and approved as having it for specific reasons, and that people who aren't supposed to have it don't have it.
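As a concrete illustration of that audit step, here's a minimal sketch of diffing actual group membership against the documented approvals. It assumes a Unix host; the group name and the hypothetical approved_admins.txt file are stand-ins, not real audit tooling.

```python
# Hypothetical access-review sketch: compare the members of a privileged
# group against a documented approval list. The group name and file path
# are assumptions for illustration.
import grp

APPROVED_ADMINS_FILE = "approved_admins.txt"  # one approved username per line
ADMIN_GROUP = "sudo"  # varies by platform; often "wheel" elsewhere

def load_approved(path):
    with open(path) as f:
        return {line.strip() for line in f
                if line.strip() and not line.startswith("#")}

def current_admins(group_name):
    # Note: gr_mem lists supplementary members only; a real review would
    # also catch users whose primary group is the admin group.
    return set(grp.getgrnam(group_name).gr_mem)

approved = load_approved(APPROVED_ADMINS_FILE)
actual = current_admins(ADMIN_GROUP)

# Evidence for the audit step: who has access without approval, and
# whose approval is stale (approved but no longer present).
print("Unapproved admins:", sorted(actual - approved))
print("Approved but absent:", sorted(approved - actual))
```

The output of something like this, run on a schedule and archived, is exactly the kind of evidence that satisfies a well-worded control.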
I've run into crappy controls quite a bit. It's easier to push back on these when you're determining the appropriate controls than it is when you're the sucker who has to collect the evidence that doesn't, and won't, exist. Authentication and authorization controls are often some of the worst. A less useful/less meaningful control is one like "all accounts must have passwords and all passwords must be at least 12 characters long and be composed of a mix of alphanumeric and at least two punctuation characters".
The goal is to say "yes, we do this, and here's the proof" without any qualification.
Sorry, none of our accounts have passwords, because we disable password authentication and use SSH public keys with two-factor via Duo. If you say that, you don't satisfy the control as worded: none of your accounts have passwords, you can see this in the shadow file, and sshd has PasswordAuthentication no. That is difficult to explain to people who are not familiar with ssh, which is, unfortunately, a significant portion of the people who end up being put on audit projects. If you say your accounts do have passwords, you're lying/misrepresenting, which isn't good for an audit either. If you say you don't but have compensating controls, that doesn't look as good as it could, because it gets called out with an addendum/explanation and becomes a potential exception that needs extra consideration.
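For what it's worth, the evidence itself is easy to collect. Here's a rough sketch (standard Unix paths assumed, and this is an illustration, not a real audit tool) that checks for usable password hashes in the shadow file and for the sshd setting:

```python
# Sketch of gathering the evidence described above: locked password
# fields in /etc/shadow and PasswordAuthentication in sshd_config.
import re

def accounts_with_passwords(shadow_path="/etc/shadow"):
    """Return accounts whose password field is a real hash (not locked)."""
    risky = []
    with open(shadow_path) as f:
        for line in f:
            if not line.strip():
                continue
            user, pwfield = line.split(":")[:2]
            # "", "*", "!", "!!", or a leading "!" mean locked / no usable hash
            if pwfield not in ("", "*") and not pwfield.startswith("!"):
                risky.append(user)
    return risky

def password_auth_enabled(sshd_config="/etc/ssh/sshd_config"):
    with open(sshd_config) as f:
        for line in f:
            m = re.match(r"\s*PasswordAuthentication\s+(\S+)", line, re.I)
            if m:
                # sshd honors the first occurrence of a keyword
                return m.group(1).lower() == "yes"
    return True  # sshd defaults to yes if the directive is absent

print("Accounts with password hashes:", accounts_with_passwords())
print("sshd PasswordAuthentication enabled:", password_auth_enabled())
```

The hard part isn't producing this output; it's getting an auditor working from a "all passwords must be 12 characters" checklist to accept "no passwords at all" as the stronger posture.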
These controls should be worded more like "all users have their own accounts and accounts are authenticated using secure methods", with sub-controls specific to the environment/company saying things like "password policy is based on NIST suggestions as of YYYY-MM-DD and enforced via <company-policy-compatible enforcement mechanisms>". The point of the controls is to detect, catch, and remediate anomalies, which is why the controls need to be adjusted as standards change and the state of the art moves forward. The specifics and the rationale for why something is in place belong in policy documents, which means people usually don't understand why a control is worded the way it is, and poorly worded controls make for really rough, drawn-out audits.
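To make the contrast concrete, here's a minimal sketch of what "based on NIST suggestions" might mean in enforcement code, following the spirit of NIST SP 800-63B (minimum length plus screening against known-compromised passwords, and deliberately no composition rules). The blocklist file is a hypothetical stand-in for a real breached-password corpus:

```python
# Minimal password check in the spirit of NIST SP 800-63B: minimum
# length and a compromised-password screen, with no composition rules.
MIN_LENGTH = 8  # 800-63B minimum for user-chosen memorized secrets

def load_blocklist(path="common_passwords.txt"):
    # Hypothetical file: one known-bad password per line.
    with open(path) as f:
        return {line.strip().lower() for line in f}

def password_acceptable(password, blocklist):
    if len(password) < MIN_LENGTH:
        return False, "shorter than minimum length"
    if password.lower() in blocklist:
        return False, "found in known-compromised list"
    # Deliberately no "must contain two punctuation characters" rule:
    # 800-63B advises against composition requirements.
    return True, "ok"

blocklist = load_blocklist()
print(password_acceptable("correct horse battery staple", blocklist))
```

Note what's absent: the punctuation-counting rules from the bad control above. That's the point of sourcing sub-controls from a dated standard rather than baking specifics into the control itself.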
The cargo culting is unfortunately too true. SOX/SOC reporting exists for a reason and it's actually pretty easy to get real value (which is the intent) out of it, as it formalizes what you should be doing anyway. It's a really good feeling when appropriate processes/controls reveal things that fell through the cracks and they get remediated. Prepping for and performing a successful audit needs to involve the company's subject matter experts from multiple departments. If only the CFO is involved early in the process, it makes life harder for the CTO, CISO, and CIO (or whoever they delegate to) later on.