This was exactly my first thought when I saw the title. And after reading the contents of the blog, it's pretty clear that ARM is laser-focused on getting a piece of their customers' cake by competing with them. This is likely why they are riding the AI hype train hard with their ill-suited name (AGI).
Unfortunately for them, I think hardware vendors will see past the hype. They'll only buy the platform if it is very competitively priced (i.e., much cheaper) since fortune favours long-lived platforms and organizations like Apple and Qualcomm.
In short, this reads like a mix of valid historical pain points and outdated assumptions.
The post frames Wayland security as “you can’t do anything,” but that’s a misunderstanding. Even under X11, any app can log keystrokes, read window contents, and inject input into other apps. Wayland flips this to isolation-by-default: explicit portals/APIs for screen capture, input, etc.
Moreover, the performance argument is weak and somewhat contradictory. The author claims there is no clear performance win, that it's sometimes slower, and that hardware improvements make it irrelevant. But Wayland reduces copies and avoids X11 roundtrips (an architectural win). Actual performance depends heavily on compositor + drivers, and I've found that modern hardware has HUGE performance improvements (especially Intel, AMD, and Apple Silicon via the Asahi driver).
The NVIDIA argument is also dated. Sure, support was historically bad due to EGLStreams vs GBM, but this has improved significantly in recent driver releases.
Many cited issues are outdated too. OBS, clipboard, and screen sharing issues are now mostly (if not entirely) solved in the latest GNOME/KDE.
I've been using Wayland exclusively on Fedora and Fedora Asahi Remix systems for many years alongside Sway (and occasionally GNOME and KDE). Adoption has accelerated in many distros, and XWayland for legacy apps is excellent (although I believe using the word "legacy" here would be a trigger word for the author ;-).
There's no stagnation here... what we're looking at is a slow migration of a foundational layer, which historically always takes a decade or more in the Linux world.
> Actual performance depends heavily on compositor + drivers, and I've found that modern hardware has HUGE performance improvements (especially Intel, AMD, and Apple Silicon via the Asahi driver)
Author’s argument is those hardware improvements could have been had for free with X11 upgrades. I’m not saying it’s a complete argument. But talking about architectural wins sounds like conceding the argument.
> Author’s argument is those hardware improvements could have been had for free with X11 upgrades.
I do NOT miss having tearing all the time with X11. There were always kludgy workarounds. Even if you stopped and said, OK, let's not run NVIDIA, let's do Intel since they have great FOSS driver support, look back at X11 2D acceleration history: XAA, EXA, UXA, SNA? Oh right, all replaced with GLAMOR. OK, run the modesetting driver; right, we still need a compositor on top of our window manager because we don't get vsync without it.
Do you have monitors with different refresh rates? Do you have muxes with different cards driving different outputs? All this stuff X11 sucks at. OK, the turd has been polished well after decades: it doesn't need to run as root/suid anymore and doesn't listen for connections on your network, but the security model still sucks compared to Wayland, and once you mix multiple video cards all bets are off.
But yeah, clipboard works reliably, big W for X11.
It reads like a user who tried Wayland again last week, found the same issues, and wrote a piece trying to summarize why they remain sad after 17 years of waiting for Wayland to address its issues.
In X11, the problem was the X server. Now, X11's design philosophy was hopelessly broken and needed to be replaced, but it wasn't replaced. As you correctly point out, there is no "Wayland"; Wayland is a methodology, a description of how one might implement the technologies necessary to replace X11.
This has led to hopeless fracturing and replication of effort. Every WM is forced to become an entire compositor and partial desktop environment, which they inevitably fail at. In turn application developers cannot rely on protocol extensions which represent necessary desktop program behavior being available or working consistently.
This manifests in users feeling the ecosystem is forever broken, because for them, on their machine, some part of it is.
There is no longer one central broken component to be fixed. There are hundreds of scattered, slightly broken components.
I maintain Red Hat backed it as part of a play to make it harder to develop competing distros that aren’t basically identical to Red Hat’s product.
Their actions on systemd, Wayland, plus gnome and associated tech, sure look like classic “fire and motion”. Everyone else has to play catch-up, and they steer enough incompatible-with-alternatives default choices that it’s a ton of work and may involve serious compromises to resist just doing whatever they do.
Wayland is far more aligned with the Unix philosophy than Xorg ever was. Xorg was a giant, monolithic, do everything app.
The Unix philosophy is fragmentation into tiny pieces, each doing one thing and hoping everyone else conforms to the same interfaces. Piping commands between processes and hoping for the best. That's exactly how Wayland works, although not in plain text because that would be a step too far even for Wayland.
Some stuff should not follow the Unix philosophy, PID 1 and the compositor are chief examples of things that should not. It is better to have everything centralized for these processes.
In X you have the server, window manager, compositing manager, and clients, and all are coupled by a very flexible protocol. This seems nicely split and aligned with the Unix philosophy to me. It also works very well, so I do not think this should be monolithic.
This is quite wrong? There are features that get blocked from being implemented because Wayland refused to define a protocol for everyone to implement. Window positioning is a recent example of how progress can get blocked for many years because of Wayland.
This is the same cop-out people use when talking about "Linux."
"No, Linux isn't bad, your distro/DE is bad, if you used XYZ then you wouldn't have this problem." And then you waste your time switching to XYZ and you just find new problems in XYZ that you didn't have in your original distro.
I'm genuinely tired of this in the Linux community. You can't use the "Wayland" label only for the good stuff like "Wayland is good for security!" and "Wayland is the future" and then every time someone complains about Wayland, it is "no, that's not true Wayland, because Wayland isn't real."
But that's what we signed up for in the Linux world. Linux systems are a smorgasbord of different components by design, and that means being specific. I'm using KDE Plasma 6; that's a different experience than someone using Cosmic or Sway.
Furthermore, Wayland is, first and foremost, a protocol, not standalone software like the Linux kernel. Wayland is no more than an API format transmitted over a wire protocol. So properly criticizing Wayland means criticizing the abstraction this API creates and the constraints it introduces.
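To make "a protocol, not software" concrete, here's a minimal sketch of the Wayland wire format in Python: every message starts with an 8-byte header of object ID plus a word packing total size and opcode. The object ID and opcode values below are purely illustrative (not from any real compositor session), and the real protocol uses host byte order; little-endian is hard-coded here just for the sketch.

```python
import struct

# Wayland wire format: each message begins with an 8-byte header:
#   word 1: the sending object's ID (uint32)
#   word 2: total message size in bytes (upper 16 bits) | opcode (lower 16 bits)
# Arguments follow, padded to 32-bit boundaries.

def pack_header(object_id: int, opcode: int, payload_len: int) -> bytes:
    size = 8 + payload_len  # the header itself is always 8 bytes
    return struct.pack("<II", object_id, (size << 16) | opcode)

def unpack_header(data: bytes):
    object_id, word = struct.unpack("<II", data[:8])
    return object_id, word & 0xFFFF, word >> 16  # (id, opcode, total size)

# Illustrative: a request from object 1, opcode 0, with one 4-byte argument.
hdr = pack_header(1, 0, 4)
print(unpack_header(hdr))  # (1, 0, 12)
```

Everything else, including which opcodes exist on which interfaces, is defined in XML protocol files that each compositor implements; that abstraction layer is the legitimate target of criticism.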
Could you briefly explain in simple terms, why I as a user would care about any of that? I want stuff to work. With Wayland, it largely doesn't. I don't terribly care about the semantics of it.
> Wayland flips this to isolation-by-default: explicit portals/APIs for screen capture, input, etc.
The problem is that old (and even not-so-old) apps don't use those APIs, so interactions like UI automation on Wayland are limited, if not impossible. I'd love to grant a specific permission just for selected GUI apps, but I can't because they don't support it.
There's a reason why RPA software on Wayland is limited to web apps inside a browser. Or something extremely janky like taking screenshots of the entire desktop and doing OCR. But then you can't interact with unfocused apps.
In my experience, I have found xdg-desktop-portal, for whatever reason, to be completely non-functional on Arch/Hyprland. It must be an issue with my config, but on X11 I never had to think about this.
This reads like AI/FSD-bro speak: "no, that's all old news, you clearly haven't tried the new cutting edge model/build bro! it's all fixed now!".
> Wayland security
Okay, that's great, but why would I care? If you can implement those security wins transparently in the background, cool. Otherwise, what I care about is being able to take a screenshot, not about some theoretical "security threat" from already vetted programs I run on my machine.
> OBS, clipboard, and screen sharing issues are now mostly (if not entirely) solved in the latest GNOME/KDE.
Oh, the clipboard works mostly correctly now, after some 17 years of development? Could not have come up with a more damning statement. Complete misalignment of priorities.
> "no, that's all old news, you clearly haven't tried the new cutting edge model/build bro! it's all fixed now!"
Exactly. And it's standard rhetoric for the wayland fanboys. "The fix for this was committed 15 minutes ago! You just need to check out the unstable branch and recompile!"
> what I care about is being able to take a screenshot, not about some theoretical "security threat" from already vetted programs I run on my machine.
Yeah, the security theatre thing is also part of their standard rhetoric. It's a good bit of rhetoric because it scares people who don't know better. They all love to talk about how it's just so insecure to allow us to do things that every desktop environment has been able to do for 30+ years.
But strangely, in decades, I've never seen a single example of anyone taking advantage of this horrible security design and it becoming a widespread problem in the wild. I keep asking the wayland bros to give me an example of this happening in the wild and causing a problem that's even mildly widespread. Strangely when I ask that question they always seem to forget to respond to that part of my post and move on to their next piece of standard rhetoric.
> Oh, the clipboard works mostly correctly now, after some 17 years of development? Could not have come up with a more damning statement. Complete misalignment of priorities
Tsk tsk, now you're just being cynical. We should be celebrating that wayland has managed to kinda-sorta get a feature working which was working just fine in X11 by ~1998, and which worked just fine in Windows <3.1, and which worked just fine in Mac OS in the 1980s. And they've managed to do it in only ~3 years longer than it took to get Duke Nukem Forever into stores! Yay them!
I'm hoping people catch that typo after reading "every single word, phrase, and typo (purposeful or not)", and I've smiled every time someone has posted a PR with a fix for it (which I subsequently reject ;-)
God that machine was terrible - underpowered and undercooled, which led to frequent overheating and component failures. When I first started at Sun, they put one of those on my desk as a joke on my first day (it was quickly replaced so that I could get some real work done).
At work in the 90s we gave tons of old SPARCstation 10s away. They rapidly replaced all the IPCs and IPXs at the computer clubs around Sweden. One Volvo destined for Luleå was really weighed down with a trunk full of pizza boxes.
Yeah it was a real piece of junk, but I guess there's no accounting for nostalgia. People also like to restore the SGI Indy, easily the worst machine that SGI ever shipped.
At one point decades ago there were a lot of these IPXs and their SCSI accessories on eBay and they were a decent source of project boxes because you could use the power supply and stick your project where the hard drive was supposed to be, with the wires coming out the SCSI port. It looks like the model 411 is still $30 or so on eBay but there are few.
The Indy was awesome. One client had 400 of them, as long as you didn’t take the lowest RAM entry level model they were excellent. Hardware was reliable, graphical desktop better than MacOS today, and very low support burden.
So true. Keep in mind, OP said it was the worst machine SGI shipped, not the worst machine Sun shipped. SGI's worst machine could be fixed by adding some RAM. Sun's worst machines were completely unsalvageable.
Hey, don't trash talk the Indy like that. It has... well, it is Web! and it has VRML... and it's your only option for an N64 devkit. So, there's that. Overall you're right, though. Entry-level machine. I have one in working order; it rarely gets use next to the Indigo2 MAX Impact. I do have one SPARC; it hasn't been booted in ages. I have to check whether it's an IPX or a Classic. I'm even afraid to boot it up.
Because the Indy (and O2) are actually attainable. Indigo2, Octane2, Tezro cost 2-3x minimum. Sometimes a Personal IRIS comes up for relatively cheap though.
I managed a lab of them. I _hated them_. They were unreliable, slow, and just absolutely miserable because they created endless complaints.
We were rolling out labs of Windows machines. Except for the lack of a terminal, they were better on every single axis for the common university lab use cases - mostly Netscape/Mosaic and applications.
I also managed NeXT slabs and cubes; they were vastly better than the sun boxes because we had installed HDDs in the cubes and extra memory. The only problem with them was the absolutely terrible, shit behavior when users accidentally browsed the AFS root...
The only positive thing I can say about those Sun boxes is that _one_ behavior was better than NeXT. With NeXT, students would pull the power on them after waiting four or five minutes on the beachball due to AFS I/O.
A younger person who only knows the comparative merits of Windows, macOS, and Linux in this decade probably cannot imagine the relief felt by people when they were finally able to move their technical applications off unix boxes onto Windows NT workstations. The situation was so bad, the computers cost so much and worked so poorly, a Dell with a Pentium Pro was like a miracle, at the time.
I don't have any nostalgia for old machines, I understand the 5- or 6-figure price tags were ridiculous, but I'm curious - in what way did Unix machines back then work poorly?
Windows on a 80486 vs. those boxes felt very much like if you were to compare the latest M5 Macs to, say, a ppc 604 Mac.
No comparison at all. Just every single interactive aspect of them was worse in every possible way and that includes I/O performance. At the time, in that era, people would babble about how much faster SCSI was, but the disks sitting in PCs were blazing fast in practice despite being attached by glorified joystick ports.
That means nothing when everything is either RHEL-bound, Ubuntu LTS, or Docker containers alongside standalone services written in Go, which are everywhere.
Serious GUI software will be written in Qt 5/6, where the jump wasn't as bad as from Qt 4 to Qt 5. Portability matters, now more than ever. The software will run on any OS and several times faster than Electron.
I remember a lab with diskless systems where your disk quota was smaller than the kernel panic dump. So basically if you crashed a machine your account was instantly filled up and basically nothing would work. I believe it affected mail as well. Fun times.
Totally terrible. One place I worked, we all had SPARCs, and the first thing that happened whenever anyone left was a mad shuffle where everyone nicked everyone else's computer, with the IPX being the prize for whoever wasn't there at the time, or the new joiner. So I had the IPX for a while; even just using it as an X client for a remote build server, it was horrible.
Certain companies are well-known for their legal teams. Qualcomm is one (often described as a legal company that employs some engineers). Nintendo is the other.
As a result, Nintendo's legal team is far more likely to ensure they get refunded, and quickly. They could provide a template for others to follow.
macOS is a capable UNIX, but it's not Linux - which has since become the standard platform for most cloud/web/ML development.
As a developer myself who uses Fedora Asahi Remix as my daily driver, I can also tell you that Linux runs 2x faster (often much more) for everything compared to macOS - on the same hardware! And that performance gain is also important for my work :-)
Totally. I have a Minisforum PC running Void Linux that I ssh into from my MacBook Air. My worry is hitching your wagon to a project that could stop working one day through no fault of the devs.
Maybe they're greedy, or maybe they see that the long game is their architecture licensing business being in serious jeopardy from RISC-V. So, if you can't beat 'em, join 'em.
Maybe they'll eventually make their own RV core designs too.
> Maybe they'll eventually make their own RV core designs too.
I am not a deeply technical embedded person, but I actually don't think that would be the death of ARM: my understanding is that they develop a lot of SoC-level interconnect/fabric standards and IP as well. After all, you have to do a lot of work to integrate your ARM cores into an actual platform...
The problem is they go from being at the center of everything outside x86-64 to just another RISC-V provider. And there will be dozens. And the market will not care much which of them succeed or fail, since the ecosystem will not depend on any specific supplier. How does ARM stay at the top of that dogfight? It is a much bigger challenge than any they have faced so far.
The problem for ARM is that there are a dozen RISC-V companies implementing their business model.
You license ARM cores because you want a “custom” chip but do not want to start from scratch. You especially do not want to have to bootstrap a software ecosystem. When ARM had no competition, it was just a question of which ARM core you wanted.
Now, you can get the same thing from any RISC-V design house. Which means having real choice over the features you want. If ARM is just one of those RISC-V shops, how does ARM compete? By being the best? Not likely.
And, in the past, you could not totally outgrow ARM as they own the ISA. The Qualcomm lawsuit was an attempt to maintain tight control over this. With RISC-V, you can pack up and move your whole ecosystem elsewhere including taking it entirely in-house. This includes the ISA to an extent since anybody can add extensions.
Today, we are seeing RISC-V succeed where this flexibility matters most: in microcontrollers and in AI.
But as performance equalizes, volumes go up and costs come down, the use cases where ARM makes more sense dwindle.
That makes backwards compatibility the last real reason to use ARM. But does this matter on mobile where devices download the apps that match their arch? Not really. Does it matter in most embedded cases? Not really. Does it even matter in the server? More, but even there not as much as it used to. Does it matter for anything mostly GPU or NPU driven? No. So that leaves desktop and laptop. And, outside of Apple, ARM has not really built up anything to stay compatible with. RISC-V may have time to grow into that niche before being blocked.
We are going to exit 2026 with RISC-V chips that are fast enough. How fast will the costs come down? Perhaps a year or two?
What markets is ARM well positioned to continue its dominance in?
TSMC already does make a huge amount of money off of being the indispensable fab for all of Apple's latest-and-greatest chips.
Each time TSMC launches a new or incrementally-improved manufacturing process, Apple's latest Ax SoC is nearly always the first chip manufactured on that process to reach consumers.
Imagine if Fedora locked you out of vi because your Red Hat account had an issue.
The unsettling part of stories like this isn’t “Microsoft bad,” it’s the growing assumption that local tools should be downstream of remote identity systems. A text editor is about as offline and fundamental as software gets, yet it’s now possible for account state, sync bugs, or policy enforcement to make it inaccessible on your own machine.
This is where non-macOS UNIX and Linux systems draw the line - if it’s installed locally and you have permission, it runs. Cloud services can enhance that experience (backups, sync, collaboration) but they don’t get veto power over whether vi opens.
When that boundary erodes, we start to see our systems as thin clients, instead of full local OSes, as the author mentions.
Business users want everything online, so anything can be accessed from anywhere. They want central identity, so when someone is hired or fired they only need to look in one place.
If Linux had a revenue stream and model, this would make sense. But the style of open-source is to make good software, and let others gravitate to you as a result.
I did something similar, but with GitHub Discussions because my blog is hosted on GitHub Pages and composited with Hugo, and I wanted all components to run as close as possible to one another: https://jasoneckert.github.io/myblog/github-discussions-blog...