I came here to post this. We make a lot of the same sorts of optimizations for our OS distro (Debian-based) -- disabling frequency scaling, core pinning, etc. Critically, CPU0 carries a bunch of housekeeping work you cannot push elsewhere, so you're better off using one of the other cores as an isolated island.
This is what the scheduler latency looks like on our isolated core (values in µs):

    # Total: 000300000
    # Min Latencies: 00001
    # Avg Latencies: 00005
    # Max Latencies: 00059
    # Histogram Overflows: 00000
My big issue with this study is that it asserts a cause. How can they know the issue is social media, and not, say, climbing atmospheric CO2 or long-COVID-related issues?
The root cause is likely the surge of dopamine in the brain from activities like scrolling social media, fast-paced TikTok videos, porn, etc. The brain becomes so addicted to this dopamine that normal levels are no longer enough to function.
There can be many causes. Aside from social media, games, porn, and AI, it could also be the decline in living standards or the increase in stress in the West due to the shift from social-democratic to neoliberal economic policies.
For anyone else who's not familiar, this is referring to https://molly.im/, which looks like a fork of Signal. It also looks like it interoperates with Signal, so you can talk to your regular Signal contacts as well.
That's very interesting.
My only concern with it would be how sustainable it is in the long term. I'm currently using Threema, which has a plan for enterprises, so it seems more reliable, but it's lacking in features and usability.
In addition, it looks like third-party apps (there are a few) that interface with the official Signal client may be against Signal's TOS. They haven't enforced it yet from what I can see, but it's a possibility, and that's a fairly large risk IMO.
The stepping didn't kill wifi. The boards shipping with the D0 stepping likely have a different wifi chip (it's connected externally) or some similar unrelated board-level change (maybe the wifi chip has a different stepping?).
The D0 stepping boards I have with wifi still work with the Linux kernel.
"Ran a quick search on Raspberry Pi's github linux repo and found where I got my info from re the stuff they took out on D0. From what I can see, they actually removed device tree support for parts of the chip they don't use on C0/C1 that are not present on D0, and folded these changes into the same DTS file. They also seem to have added a DTS specifically for the D0 stepping, which seems to be register changes, i.e. stuff that is present in both variants of the chip but has moved or needs to otherwise be handled differently between C1 and D0. See https://github.com/raspberrypi/linux/pull/5847, specifically for the bits removed see <https://github.com/raspberrypi/linux/pull/5847/commits/8be08...>"
I have a somewhat different problem with io_uring in practice: It's extremely hard to use /correctly/. The management of buffers which bounce across a kernel boundary and may-or-may-not end up in the same original thread lends itself to lots of subtle race conditions, resource exhaustions, and ABA issues. It's not that you can't make it work, and work well--it's that it's hard to do correctly, and very easy to make something which works 99.99% correctly, and then fails spectacularly under load or over time.
I can imagine the security implications are the same.
> and may-or-may-not end up in the same original thread
That sounds like a problem stemming entirely from a decision to share a ring among multiple application threads. Is there a good reason to do so? Each thread that needs to do IO can have its own ring, and submitting IO to another thread's ring seems like unnecessary complexity. The ring buffers are intended to be single-producer, single-consumer.