There are other possible explanations, e.g. AVC and HEVC are set to the same bitrate, so AVC streams lose quality, while AV1 targets HEVC's quality. Or they compare AV1 traffic to the sum of all mixed H.26x traffic. Or the rates vary in more complex ways and that's an (over)simplified summary for the purpose of the post.
Netflix developed VMAF, so they're definitely aware of the complexity of matching quality across codecs and bitrates.
I have no doubt they know what they are doing. But it's a strange metric no matter how you slice it. Why compare AV1's bandwidth to the average of H.264 and H.265, without any more details about resolution or compression ratio? Reading between the lines, it sounds like they use AV1 for low bandwidth, H.265 for high bandwidth, and H.264 as a fallback. If that is the case, why bring up this strange average bandwidth comparison?
Yeah it's a weird comparison to be making. It all depends on how they selected the quality (VMAF) target during encoding. You could easily end up with other results had they, say, decided to keep the bandwidth the same but improve quality using AV1.
It depends how you actually use the messages. Zero-copy can actually slow things down. Copying within L1 cache is ~free, but operating on needlessly dynamic or suboptimal data structures can add overheads everywhere they're used.
To actually literally avoid any copying, you'd have to directly use the messages in their on-the-wire format as your in-memory data representation. If you have to read them many times, the extra cost of dynamic getters can add up (the format may cost you extra pointer chasing, unnecessary dynamic offsets, redundant validation checks and conditional fallbacks for defaults, even if the wire format is relatively static and uncompressed). It can also be limiting, especially if you need to mutate variable-length data (it's easy to serialize when only appending).
In practice, you'll probably copy data once from your preferred in-memory data structures to the messages when constructing them. When you need to read messages multiple times at the receiving end, or merge with some other data, you'll probably copy them into dedicated native data structs too.
If you change the problem from zero-copy to one-copy, it opens up many other possibilities for optimization of (de)serialization, and doesn't keep your program tightly coupled to the serialization framework.
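To make the trade-off concrete, here's a minimal sketch using a hypothetical length-prefixed wire format (not any real framework): a "zero-copy" view that has to re-derive field offsets on every access, versus a one-copy decode that pays the parsing cost once and then reads plain owned data.

```rust
use std::convert::TryInto; // in the prelude on edition 2021; explicit for older editions

// Hypothetical wire format: repeated [len: u32 LE][len bytes].

/// Zero-copy view: every field access re-walks the buffer from the start.
struct WireView<'a> {
    buf: &'a [u8],
}

impl<'a> WireView<'a> {
    /// Fetch the n-th field, re-deriving its offset each time (the
    /// "extra pointer chasing / dynamic offsets" cost on repeated reads).
    fn field(&self, n: usize) -> Option<&'a [u8]> {
        let mut pos = 0;
        for i in 0.. {
            let len = u32::from_le_bytes(self.buf.get(pos..pos + 4)?.try_into().ok()?) as usize;
            let start = pos + 4;
            if i == n {
                return self.buf.get(start..start + len);
            }
            pos = start + len;
        }
        None
    }
}

/// One-copy decode into plain owned data: offsets are resolved once,
/// after which reads and mutation use ordinary native data structures.
fn decode(buf: &[u8]) -> Vec<Vec<u8>> {
    let mut out = Vec::new();
    let mut pos = 0;
    while pos + 4 <= buf.len() {
        let len = u32::from_le_bytes(buf[pos..pos + 4].try_into().unwrap()) as usize;
        out.push(buf[pos + 4..pos + 4 + len].to_vec());
        pos += 4 + len;
    }
    out
}

fn main() {
    let msg = [2u32.to_le_bytes().as_slice(), b"hi",
               3u32.to_le_bytes().as_slice(), b"abc"].concat();
    let view = WireView { buf: &msg };
    assert_eq!(view.field(1), Some(&b"abc"[..])); // re-parses on every call
    let owned = decode(&msg);
    assert_eq!(owned[1], b"abc"); // parsing paid once up front
}
```

Real zero-copy formats use per-message offset tables rather than a linear scan, but the shape of the trade-off is the same: deferred, repeated decoding work versus one up-front copy into data structures the rest of the program owns.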
There's no particular reason for an image format based on video codec keyframes to ever support a lot of the advanced features that JPEG XL supports. It might compress better than AVIF 1, but I doubt it would resolve the other issues.
Cargo's cache is ridiculously massive (half of which is debug info: zero-cost abstractions have full-cost debug metadata), but you can delete it after building.
There's a new-ish build.build-dir setting that lets you redirect Cargo's temp junk to a standard system temp/cache directory instead of polluting your dev dir.
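For example, in .cargo/config.toml (a sketch; build.build-dir requires a recent Cargo, and the path here is just a placeholder):

```toml
[build]
# Final artifacts stay under ./target; intermediate build files
# (dependency output, incremental caches, debug info) go here instead:
build-dir = "/tmp/cargo-build"
```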
> There's a new-ish build.build-dir setting that lets you redirect Cargo's temp junk to a standard system temp/cache directory instead of polluting your dev dir.
If it’s just logs, I would prefer to redirect it to /dev/null.
The situation today is very different than what it used to be when people actually used 386 or Amigas because they had no other options (BTW, Rust supports m68k, just not AmigaOS specifically).
Today even the crappiest old PCs that you can fish out of a dumpster are already new enough to have Rust/LLVM support. We have mountains of Rust-compatible e-waste that you can save from landfill. Take whatever is cheapest on eBay, or given away on your local FB marketplace, and it will run Rust, and almost certainly be orders of magnitude faster and more practical than the unsupported retro hardware.
Using actual too-niche-for-Rust hardware today is more expensive. Such machines are often collectors' items, and need components and accessories that are hard to obtain, or need replacements/adapters that can be custom low-volume products.
Even if you can put together something from old-but-not-museum-yet parts, it's not going to make more sense economically than getting an older-gen Raspberry Pi kit or its AliExpress knock-offs (there are VGA dongles more expensive than some of these boards).
It's fine to appreciate SGI and DEC Alpha, have fun using BeOS, or prove that AmigaOS is still a perfectly fine daily driver, but let's not pretend it's a situation that people are in due to economic hardship.
> but let's not pretend it's a situation that people are in due to economic hardship.
I'd encourage you not to strawman my response, because I already said myself that it appears to me it's only hobbyists who are losing support.
My objection isn't that it's dropping support; my objection is that it's dropping support without cause, other than the assumption that this would be more comfortable.
Maintainers are absolutely not required to support everything forever, but I recall a story where someone from Linux paid for a user to upgrade, not because that was required, but because that would make dropping support for that floppy driver feel ethical.
This is the level of compassion everyone should expect from software engineers in critical positions of power.
I have no sympathy for people who lack the compassion to expend the effort to help others. I do have sympathy for people who have to watch their world get worse, even if it's them alone, so that others can avoid a trivial amount of perceived discomfort.
Should this solo maintainer (who understands C) be required to do things exactly the way that I want? Of course not, but I'll be damned if everyone expects me to remain silent while I watch them disrespect other people who were previously depending on their support.
By alleging the switch to Rust was "without cause", and bringing up the concerns of floppy users and retro-hobby hardware, you seem to be seeing the change only from the very narrow perspective of the interests of a very specific group of users.
There are lots of other users, and lots of other ways to care about them. Making software less likely to have vulnerabilities is caring about its users too. Making software work better and faster on contemporary hardware is caring about users too, just a different group (and a way larger one, and including users who really can't afford faster hardware).
Sometimes it's just not possible to make everyone happy, and even just keeping the status quo is not always a free option. Hypothetically, keeping working support for some weird floppy drive may be increasing overall system complexity, and cost dev and testing effort that might have been spent on something else that benefitted a larger number of users more.
Switching to a language with a friendlier compiler, fewer gotchas, less legacy cruft, and less picky dependency management can also be a way of caring about users - lowering the barriers to contributing productively can help get more contributions, fewer bugs, improve the software overall, and empower more users to modify their tools.
It'd be fine to argue which trade-offs are better, and which groups of users should be prioritized, but it's disingenuous to frame not accommodating the retro/hobby use cases in particular as a sign of a lack of compassion in general. It could be quite the opposite: focusing only on the status quo and past problems shows a lack of care about all the other users and the future of the software.
That's just your lack of familiarity with the foreign-to-you language (you may be unable to read Korean too, despite Korean being pretty readable).
Syntactically, Rust is pretty unambiguous, especially compared to C-style function and variable definitions. You get fn and let keywords, and definitions that are generally read left-to-right, instead of starting with an arbitrary identifier that may be a typedef, a preprocessor macro, or part of a type that is read in so-called "spiral" order (which isn't even a spiral, but more complex than that).
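For example, compare how the same function-pointer type reads in the two languages (the C declaration is shown in a comment; the Rust version uses a reference where C would use a pointer):

```rust
// C:  int (*(*fp)(int))[8];
// Read inside-out/"spiral": fp is a pointer to a function taking int,
// returning a pointer to an array of 8 ints.

// The Rust equivalent reads strictly left to right:
fn takes(n: i32) -> &'static [i32; 8] {
    static ARR: [i32; 8] = [0; 8];
    let _ = n;
    &ARR
}

fn main() {
    // `let` + an explicit type, left to right: fp is a function
    // from i32 to a reference to an array of 8 i32s.
    let fp: fn(i32) -> &'static [i32; 8] = takes;
    assert_eq!(fp(1).len(), 8);
}
```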
Cargo isn't satisfied with its own solver either. Solvers are a hard and messy problem.
The problem is NP-complete in theory (it can encode SAT), but in practice even more demanding than that: users also care about picking solutions that optimize for multiple criteria like minimal changes, more recent versions, and minimal duplication (if multiple versions can coexist), all while getting easy-to-understand errors when dependencies can't be satisfied, and performance far better than the exponential worst case. It ends up being complex and full of compromises.
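For a sense of why backtracking search shows up at all, here's a toy sketch. This is nothing like Cargo's actual resolver; it's a deliberately simplified model (only one version of each package may be chosen, and version requirements are explicit lists) just to show a choice being made, invalidated by a later constraint, and undone.

```rust
use std::collections::HashMap;

// Toy registry: package -> available versions (newest first), where each
// version carries its dependencies as (package, allowed versions).
type Registry = HashMap<&'static str, Vec<(u32, Vec<(&'static str, Vec<u32>)>)>>;

// Backtracking resolver: tries the newest allowed version of each package,
// undoing a choice when a later constraint can't be satisfied.
// (Panics on unknown package names — fine for a sketch.)
fn resolve(
    reg: &Registry,
    goals: &[(&'static str, Vec<u32>)],
    chosen: &mut HashMap<&'static str, u32>,
) -> bool {
    let Some(((name, allowed), rest)) = goals.split_first() else {
        return true; // no goals left: success
    };
    if let Some(&v) = chosen.get(name) {
        // Already picked: in this toy model duplicates aren't allowed,
        // so the existing choice must satisfy the new constraint too.
        return allowed.contains(&v) && resolve(reg, rest, chosen);
    }
    for (v, deps) in &reg[name] {
        if !allowed.contains(v) {
            continue;
        }
        chosen.insert(*name, *v);
        // This version's own dependencies become new goals.
        let mut next = deps.clone();
        next.extend_from_slice(rest);
        if resolve(reg, &next, chosen) {
            return true;
        }
        chosen.remove(name); // backtrack: this version led to a dead end
    }
    false
}

fn main() {
    let mut reg: Registry = HashMap::new();
    // "log" v2 needs "core" v2; "log" v1 needs "core" v1.
    reg.insert("log", vec![(2, vec![("core", vec![2])]), (1, vec![("core", vec![1])])]);
    reg.insert("core", vec![(2, vec![]), (1, vec![])]);
    // The app accepts any "log" but pins "core" to v1, so "log" v2 is
    // tried first (newest) and then backtracked in favor of v1.
    let goals = vec![("log", vec![1, 2]), ("core", vec![1])];
    let mut chosen = HashMap::new();
    assert!(resolve(&reg, &goals, &mut chosen));
    assert_eq!(chosen["log"], 1);
    assert_eq!(chosen["core"], 1);
}
```

Everything on top of this, like preferring minimal upgrades, allowing coexisting major versions, or producing a readable explanation of *why* resolution failed, is where the real complexity and the compromises live.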
I'm happy to burden EU companies with responsibilities like securing storage of my private data, having processes to update and delete my data, having to consider whether data collection can be minimized, and getting my consent if they want to repurpose or sell the data they've collected.
It would be much cheaper and pro-business to let them collect everything and secure nothing.