I noticed it too, but it doesn't necessarily bother me. Possibly they're just trying to say, "This incident may have made us look like we're complete amateurs who don't have any clue about security, but it wasn't like that."
Using someone else's library doesn't absolve you of responsibility, but failing to be vigilant about thoroughly vetting and testing external dependencies is a different kind of mistake than creating a terrible security bug yourself because your engineers don't know how to code or everyone is in too much of a rush to care about anything.
Yes, I agree with that sentiment, and I thought precisely the same. I know as an engineer that I would feel compelled to mention that it was an obscure bug in an open source library, if that was the case. Not to excuse myself of responsibility, but because I would feel so ashamed if I myself introduced such an obvious security flaw. I would still of course consider myself responsible for what happened.
A lot of the time when people make mistakes, they explain themselves because they are afraid of being perceived as completely stupid or incompetent for making that mistake, not to excuse themselves from taking responsibility, even though people frequently think that an excuse or explanation means you are trying to absolve yourself of what you did.
There's a huge difference to me between having an obscure bug like this and introducing that type of security issue because you couldn't reason it through. The first can be addressed in the future by introducing processes and making sure all open source libraries come from trusted sources, but the second implies that you are fundamentally unable to think it through, and therefore probably unable to improve on it either.
The result for the end consumer is identical whether they have their PII leaked from "an external library" vs a vendor's own home-baked solution.
It's not really a different kind of mistake, it's exactly the same kind of mistake, because it is exactly the same mistake! This is talking the talk, and not walking the walk, when it comes to security.
Publishing a writeup that passes the buck to some (unnamed) overworked and underpaid open source maintainer is worse, not better!
The dev had such a big ego that they didn't want to say "I was dumb and left open a bug", so the dev says "I was so dumb that I left open a bug in software I was also too dumb or lazy to write or even read".
It's not better.
I agree, it is a different kind of mistake; it is immensely worse than creating a terrible security bug yourself.
Outsourcing your development work without acceptance criteria and without validation for fitness of purpose is complete, abject engineering incompetence. Do you think bridge builders look at the rivets in the design and then just waltz over to Home Depot and pick out one that looks kind of like the right size? No, they have exact specifications, and it is their job to source rivets that meet those specifications. They then either validate the rivets themselves or contract with a reputable organization that legally guarantees they meet the specifications, and it might be prudent to validate them again anyway just to be sure.
The fact that, in software, not validating your dependencies, i.e. the things your system depends on, is viewed as not so bad is a major reason why software security is such an utter joke and why everybody keeps making such utterly egregious security errors. If one of the worst engineering practices is viewed as normal and not so bad, it is no wonder the entire thing is utterly rotten.
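To be concrete about what even the most basic validation could look like, here's a minimal sketch in Python (my own illustration; the file name and pinned hash are placeholders): record the checksum of the exact artifact you reviewed, and refuse to build against anything whose bytes differ.

    import hashlib
    from pathlib import Path

    # Hypothetical example: the path and pinned digest below are placeholders.
    # The point is simply that the artifact you reviewed is the artifact you ship.
    VETTED_ARTIFACT = Path("vendor/some_library-1.2.3.tar.gz")
    PINNED_SHA256 = "replace-with-the-sha256-you-recorded-when-you-reviewed-it"

    def verify_vetted_artifact(path: Path, expected_sha256: str) -> None:
        """Raise if the on-disk artifact no longer matches the reviewed checksum."""
        actual = hashlib.sha256(path.read_bytes()).hexdigest()
        if actual != expected_sha256:
            raise RuntimeError(
                f"{path} has sha256 {actual}, expected {expected_sha256}; refusing to build"
            )

    if __name__ == "__main__":
        verify_vetted_artifact(VETTED_ARTIFACT, PINNED_SHA256)

Package managers can do this for you (pip's hash-checking mode, lockfiles with integrity hashes, and so on), but the point is the same: the specification exists, and something actually checks it.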
I do not believe it's necessarily nefarious in nature, but it feels kind of like they're implying that this is actually a valid escape hatch: "Sorry, we can't possibly audit this code, because who audits all of their open source deps, amirite?"
But the truth is that maybe that hints at a deeper problem. It was a direct dependency of their application code, in a critical path. I mean, don't get me wrong, I don't think everyone can be expected to audit, or fund auditing for, every single line of code that they wind up running in production, and frankly even doing that might not be enough to prevent most bugs anyway. Clearly, every startup fully auditing the Linux kernel before using it to run some HTTP server is just not sustainable.

But let's take it back a step: if the point of a postmortem is to analyze what went wrong in order to prevent it in the future, then this analysis has failed. It almost reads as "A bug in an open source project screwed us over, sorry. It will happen again." I realize that's not the most charitable reading, but the one takeaway I had is this: they don't actually know how to prevent this from happening again.
Open source software helps all of us by providing a wealth of powerful libraries that we can use to build solutions, be we hobbyists, employees, entrepreneurs, etc. There are many wrinkles to the way this all works, obviously including discussions about sustainability, but I think there is still room for improvement. Wouldn't it be nice if we periodically had actual security audits of even just the most popular libraries people use in their service code? Nobody in particular has an incentive to fund such a thing, but in a sense everyone does, and everyone stands to gain from it, too. Today it's not the norm, but perhaps it could become the norm some day.
Still, in any case... I don't really mean to imply that they're being nefarious with it, but I do feel it comes off as at best a bit tacky.
They really skirt around the fact that they apparently introduced a bug which quite consistently initiated Redis requests and then terminated the connection before the result came back.
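For what it's worth, here's a rough sketch of that failure mode as I read it (my own illustration, not their code; it assumes redis-py's asyncio client and a local Redis server): the command gets written to a pooled connection, the caller bails before the reply is read, and the abandoned reply is left sitting on a connection that will later be handed to someone else.

    # Illustrative sketch only, not the vendor's actual code. Assumes redis-py's
    # asyncio client (redis.asyncio) and a Redis server on localhost.
    import asyncio
    import redis.asyncio as redis

    async def main() -> None:
        r = redis.Redis()  # connections are drawn from a shared pool

        try:
            # The GET is written to the socket right away; an aggressive timeout
            # means we are routinely cancelled after sending the command but
            # before its reply has been read back off the connection.
            await asyncio.wait_for(r.get("user:123:session"), timeout=0.001)
        except asyncio.TimeoutError:
            # The reply we never read may still be buffered on that connection.
            pass

        # If the pool later hands that same connection to an unrelated request,
        # the unread reply can end up being returned to the wrong caller.
        await r.aclose()  # redis-py >= 5; older versions spell this close()

    asyncio.run(main())

Whether the client library should survive that pattern gracefully is a fair question, but the cancel-before-reading behavior described above was introduced on their side.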
Doesn't bother me either. All the car companies issue recalls regularly; sometimes an issue only shows up when the system hits capacity or you run into an edge case.