`unwrap()` is only the most superficial part of the problem. Merely replacing `unwrap()` with `return Err(code)` wouldn't have changed the behavior: instead of "error 500 due to panic" the proxy would fail with "error 500 due to $code".
Unwrap gives you a stack trace, while a returned `Err` doesn't, so simply using a `Result` for that line of code could have made the failure even harder to diagnose.
`unwrap_or_default()` or other ways of silently eating the error would be less immediately catastrophic, but could still break the system further down the line, and would likely make it harder to trace the problem back to its root cause.
The problem is deeper than an `unwrap()`: it's about handling rollouts of invalid configurations, and that's not a 1-line change.
We don't know what the surrounding code looks like, but I'd expect it handles the error case that's expressed in the type signature (unless they `.unwrap()` there too).
The problem is that they didn't surface a failure case, which means they couldn't handle rollouts of invalid configurations correctly.
The use of `.unwrap()` isn't superficial at all -- it hid an invariant that should have been handled above this code. The failure to correctly account for and handle those true invariants is exactly what caused this failure mode.
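To make that concrete, here's a minimal Rust sketch (hypothetical names and limits, not the actual proxy code) of what surfacing the failure case could look like: loading a config returns a `Result`, and the rollout path rejects an invalid config and keeps serving with the last known-good one, instead of letting a panic (or a per-request error) take the proxy down.

```rust
// Minimal sketch with hypothetical types, names, and limits -- the point is the
// shape: validate at rollout time, keep the last good config on failure.

#[derive(Debug)]
enum ConfigError {
    TooManyFeatures { got: usize, max: usize },
}

#[derive(Clone)]
struct FeatureConfig {
    features: Vec<String>,
}

const MAX_FEATURES: usize = 200; // hypothetical preallocated limit

fn validate(new: FeatureConfig) -> Result<FeatureConfig, ConfigError> {
    if new.features.len() > MAX_FEATURES {
        return Err(ConfigError::TooManyFeatures {
            got: new.features.len(),
            max: MAX_FEATURES,
        });
    }
    Ok(new)
}

struct Proxy {
    active: FeatureConfig, // last known-good config, always present
}

impl Proxy {
    /// Rollouts of invalid configurations are rejected here, once, instead of
    /// becoming a panic (or a 500) deep inside per-request handling.
    fn apply_rollout(&mut self, candidate: FeatureConfig) {
        match validate(candidate) {
            Ok(cfg) => self.active = cfg,
            Err(e) => {
                // Alert and keep serving with the previous config.
                eprintln!("rejected config rollout: {:?}", e);
            }
        }
    }
}

fn main() {
    let mut proxy = Proxy {
        active: FeatureConfig { features: vec!["baseline".into()] },
    };
    let bad = FeatureConfig {
        features: (0..=MAX_FEATURES).map(|i| format!("f{i}")).collect(),
    };
    proxy.apply_rollout(bad); // rejected, proxy keeps serving with `active`
}
```

The specific types don't matter; what matters is where the error is handled: at rollout time, once, rather than on every request.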
1. Cloudflare is in the business of being a lightning rod for large and targeted DoS attacks. A lot of the incidents they see are actual attacks.
2. Attacks that make it through the usual defences make servers run at rates beyond their breaking point, causing all kinds of novel and unexpected errors.
Additionally, attackers try to hit endpoints/features that amplify the severity of their attack by being computationally expensive, holding a lock, or triggering an error path that restarts a service — like this one.
This was in the middle of a scheduled maintenance, with all requests failing at a single point: a `.unwrap()`.
There should be internal visibility into the fact that a large number of requests are failing at the same LOC, and attention should be focused there instantly, IMO.
Or at the very least, it shouldn't take 4 hours for anyone to even consider that it wasn't an attack.
In situations such as this, where your entire infra is fucked, you should have multiple crisis teams working in parallel, under different assumptions.
If even one additional team had been created to work under the assumption that it was an infra issue rather than an attack, this situation could have been resolved many hours earlier.
For a product as vital to the internet as Cloudflare, it is unacceptable to not have this kind of crisis management.
Apple already has excellent x86 emulation. But Apple has a locked-down GPU with a proprietary API, which adds another unnecessary translation layer where it hurts more.
Partly it's due to a lack of better ideas for effective inter-procedural analysis and specialization, but it could also be a symptom of working around the cost of ABIs.
The point of interfaces is to decouple caller implementation details from callee implementation details, which almost by definition prevents optimization opportunities that rely on the respective details. There is no free lunch, so to speak. Whole-program optimization affords more optimizations, but also reduces tractability of the generated code and its relation to the source code, including the modularization present in the source code.
In the current software landscape, I don’t see these additional optimizations as a priority.
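As a rough illustration (a Rust sketch; the same trade-off applies to C ABIs and dynamic linking, and actual results depend on the compiler and flags): a call through a stable interface hides the callee's body from the optimizer, while a monomorphized call exposes it, at the cost of coupling the generated code to the callee's implementation details.

```rust
// Sketch of the interface-vs-specialization trade-off; not a benchmark.

trait Transform {
    fn apply(&self, x: u32) -> u32;
}

struct AddOne;
impl Transform for AddOne {
    fn apply(&self, x: u32) -> u32 { x + 1 }
}

// "Interface" call: the callee sits behind a vtable, so the caller can't see
// its implementation. This generally can't be inlined or specialized without
// whole-program / devirtualization analysis.
fn run_dyn(t: &dyn Transform, xs: &[u32]) -> u32 {
    xs.iter().map(|&x| t.apply(x)).sum()
}

// Generic call: monomorphized per concrete type, so the optimizer sees the body
// of `apply` and can inline it -- but the generated code is now tied to the
// callee's implementation details.
fn run_generic<T: Transform>(t: &T, xs: &[u32]) -> u32 {
    xs.iter().map(|&x| t.apply(x)).sum()
}

fn main() {
    let xs = [1u32, 2, 3, 4];
    println!("{}", run_dyn(&AddOne, &xs));
    println!("{}", run_generic(&AddOne, &xs));
}
```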
The original DVD was way, way less green than later releases, which were changed to match the more-extreme greens used in the sequels. IDK if it was as subtle as in the theater (I did see it there, but most of my watches were of the first-run DVD), but it was far subtler than later DVD printings, and than every Blu-ray except, IIRC, one fairly recent release that finally dialed it back to something less eye-searing and at least close-ish to the original.
The original has a green tint to the Matrix scenes, it's just relatively subtle and blends into a general coolness of the color temp. The heightened green of later home printings is really in-your-face green, to the point that you don't really notice the coolness, just greeeeen.
The Matrix is an interesting one because it really caught on with the DVD release. So that was most people's first exposure to it, not the theatrical release. Even if incorrect, if that was the first way you saw it, it's likely how you think it "should" look.
It's a bit disingenuous to imply The Matrix did not catch on until the DVD release. The Matrix broke several (minor) box office records, was critically hailed, and was an awards darling for below-the-line technical awards.
Having said all that: one of the most interesting aspects of conversations around the "true" version of films and such is that, just because of the way time works, the vast majority of people's first experience with any film will definitely NOT be in a theater.
I didn't mean to say no one saw it theatrically, but I probably did undersell it there.
The DVD was such a huge seller and coincided with the format really catching on. The Matrix was the "must have" DVD to show off the format and for many was likely one of the first DVDs they ever purchased.
It was also the go-to movie to show off DivX rips.
The popularity of The Matrix is closely linked with a surge in DVD popularity. IIRC DVD player prices became more affordable right around 2000 which opened it up to more people.
2. That's a language feature too. Writing non-trivial multi-core programs in C or C++ takes a lot of effort and diligence. It's risky, and subtle mistakes can make programs chronically unstable, so we've had decades of programmers finding excuses for why a single thread is just fine, and people can find other uses for the remaining cores. OTOH Rust has enough safety guarantees and high-level abstractions that people can slap .par_iter() on their weekend project, and it will work.
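For illustration, this is the kind of one-word change meant above, assuming the rayon crate (`rayon = "1"` in Cargo.toml); the compiler's Send/Sync checks are what make it safe to do this casually:

```rust
// Sequential -> parallel by swapping .iter() for .par_iter().
// rayon's ParallelIterator handles work splitting across cores, and the
// compiler rejects closures that would introduce data races.

use rayon::prelude::*;

fn sum_of_squares(input: &[u64]) -> u64 {
    input
        .par_iter()      // was: .iter()
        .map(|&x| x * x)
        .sum()
}

fn main() {
    let data: Vec<u64> = (1..=1_000_000).collect();
    println!("{}", sum_of_squares(&data));
}
```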
If you're a junior now, and think your code is worth stealing, it's probably only a matter of time before you gain more experience, and instead feel sorry for everyone who copied your earlier code (please don't take it personally, this is not a diss. It's typical for programmers to grow, try more approaches, and see better solutions in hindsight).
The lazy cheaters only cheat themselves out of getting experience and learning by writing the code themselves. It doesn't even matter whether you publish your code or not, because they'll just steal from someone else, or more likely mindlessly copypaste AI slop instead. If someone can't write non-trivial code themselves to begin with, they won't be able to properly extend and maintain it either, so their ripoff project won't be successful.
Additionally, you'll find that most programmers don't want to even look at your code. It feels harder and less fun to understand someone else's code than to write one's own. Everyone thinks their own solution is the best: it's more clever and has more features than the primitive toys other people wrote, while at the same time it's simpler and more focused than the overcomplicated bloat other people wrote.
Fil-C will crash on memory corruption too. In fact, its main advantage is crashing sooner.
All the quick fixes for C that don't require code rewrites boil down to crashing. They don't make your C code less reliable, they just make the unreliability more visible.
To me, Fil-C is best suited for use during development and testing. In production you can use other sandboxing/hardening solutions with lower overhead, after hopefully shaking out most of the bugs with Fil-C.
The great thing about such crashes is that, if you have coredumps enabled, you can just load the crashed binary into GDB, type 'where', and most likely figure out the actual problem immediately by inspecting the call stack. This was/is my go-to method for finding really hard-to-reproduce bugs.
I think the issue with this approach is that it's perfectly reasonable in Fil-C to never call `free`, because the GC will GC. So if you develop on Fil-C, you may be leaking memory if you run in production with Yolo-C.
Fil-C uses `free()` to mark memory as no longer valid, so it is important to keep using manual memory management to let Fil-C catch UAF bugs (which are likely symptoms of logic bugs, so you'd want to catch them anyway).
The whole point of Fil-C is C compatibility. If you're going to treat it as a deployment target in its own right, it's a waste: you get the overhead of a GC language, but with the clunkiness and tedium of C, instead of the nicer language features that ground-up GC languages have.
It depends how much the C software is "done" vs being updated and extended. Some legacy projects need a rewrite/rearchitecting anyway (even well-written battle-tested code may stop meeting requirements simply due to the world changing around it).
It also doesn't have to be a complete all-at-once rewrite. Plain C can easily co-exist with other languages, and you can gradually replace it by only writing new code in another language.