Playing fast FPS games under Wayland/XWayland (and under GNOME/gnome-shell/mutter) is currently suboptimal. That's because gnome-shell passes mouse moves to clients only at your monitor's refresh rate (see https://blogs.gnome.org/shell-dev/2021/12/08/an-eventful-ins... for the explanation).
So, my expensive-ish 1 kHz mouse (or even an 8 kHz one - these exist) will deliver mouse movements with 4 ms+ of delay and at fewer than 240 events per second.
Under Xorg, the XInput protocol passes data at roughly the USB polling rate. For my 2 kHz mouse, the event-freq tool shows ~1700 events/second.
By the way, this ~220 Hz is the speed at which gnome-shell passes data to clients (via XWayland or directly). libinput accepts and passes data at the correct 2 kHz; it's only gnome-shell that aggregates it under Wayland. This can be seen by comparing the evhz tool (direct /dev/input access) against event-freq (gnome-shell in a Wayland session) and xinput (in an Xorg session).
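To make the comparison concrete, here's roughly how tools like evhz and event-freq arrive at an events-per-second figure - just a sketch with a made-up function name, using synthetic timestamps instead of real /dev/input reads:

```python
def event_rate(timestamps):
    """Estimate events per second from a list of event timestamps (seconds).

    Tools like evhz and event-freq do essentially this: collect the
    arrival times of motion events and divide the count by elapsed time.
    """
    if len(timestamps) < 2:
        return 0.0
    elapsed = timestamps[-1] - timestamps[0]
    return (len(timestamps) - 1) / elapsed

# A 2 kHz mouse delivers one event every 0.5 ms; one second of those:
ts = [i * 0.0005 for i in range(2001)]
print(round(event_rate(ts)))
```

Point the same logic at events coming through gnome-shell under Wayland and you see the rate capped near the refresh rate instead.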
I have been playing under sway, which removes many bottlenecks (or at least I haven't noticed any); you could give it a try
https://github.com/swaywm/sway
Yup, other compositors might not have this limitation. I might one day switch to sway, but I'm staying with Gnome for now, since I suppose it'd take a couple of days to get used to it.
BTW, gnome-shell supports so-called compositor bypass for windows that request it (the games I play request it, probably via libSDL) - more on that here https://specifications.freedesktop.org/wm-spec/wm-spec-lates... - but I still wonder what kind of visual delay I'm getting with gnome-shell vs e.g. sway.
My understanding is that with _NET_WM_BYPASS_COMPOSITOR it should be the same (vs non-compositing window manager), but the devil is in the details.
I like gnome-flashback/metacity (non-compositing WM), but there are some strange quirks of it, which make me avoid using it on my 21:9 monitor.
Mice have supported 1 kHz since the 90s; they are merely set to 125 Hz by default. The problem is some can handle low sensitivity and some cannot (due to slow sensor sample rates, AFAIK): http://www.esreality.com/?a=post&id=1265679. This of course is no different from the rest of the "gaming" mouse market. Some can handle fast movement, some cannot.
4 ms of latency is basically nothing (relatively - yes, it should be zero). Games already add tens of ms, and modern monitors all add around 3-10 ms for no reason at a bare minimum. The latency of each pixel also varies depending on its start and end luminance levels, more so than the [0, 8] ms of a 125 Hz mouse being sampled at an arbitrary position in your game cycle (yes, even on many "240Hz" LCDs). Though it's unclear to me whether you get other undesired effects from resampling the output of an already-sampled mouse.
The primary benefit of 1 kHz+ mouse sampling is that panning is smooth (though perhaps on a properly configured system with a CRT and a non-bloated stack, the 8 ms max latency becomes an issue). If you use a 125 Hz polling rate and circle around an object while keeping your crosshair on it, it's easy to see how wobbly this is. At 1 kHz that mostly goes away. Higher mouse DPI at low sensitivity visibly adds to the smoothness as well.
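The latency figures above follow directly from the polling interval: a movement lands at a random point in the interval, so it waits up to one full interval, and half of one on average. A quick back-of-the-envelope calculation (the function name is mine, purely illustrative):

```python
def polling_latency_ms(rate_hz):
    """Worst-case and average added latency (ms) for a given polling rate.

    A motion occurring at a random point in the polling interval waits
    anywhere from 0 ms up to one full interval, averaging half of it.
    """
    interval = 1000.0 / rate_hz
    return interval, interval / 2.0

for hz in (125, 1000, 8000):
    worst, avg = polling_latency_ms(hz)
    print(f"{hz:>5} Hz: worst {worst:.3f} ms, average {avg:.3f} ms")
```

So 125 Hz adds 0-8 ms (4 ms average), matching the "[0, 8] ms" range mentioned above, while 1 kHz brings that down to 0-1 ms.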
Having two monitors with different DPI or physical size. Wayland supports different scaling per monitor, so your windows will look roughly the same in terms of physical or pixel size (depending on your wishes) when moving them across monitors :).
There isn't really a reason this couldn't be done on Xorg too. The window manager would need to keep track of per-monitor scaling information and pass it to applications so they can scale themselves, with a fallback when an application doesn't support scaling (be it client-side or server-side composition, though the latter would also need Keith Packard's patch that implements composition-based server-side window scaling). Note that scaling, physical size, and DPI don't have to be tied together - someone may want a big scale on a low-DPI monitor because it sits across the room, for example; what really matters is a per-monitor scale factor, with DPI used only to pick sane, overridable defaults. And someone would have to specify the messages/events involved.
But really, almost all the tech exists (aside from server-side scaling, which can be ignored for an initial implementation), and toolkits (or at least Qt, and perhaps Gtk) should already support this since Windows scaling works in a similar way. It is largely a matter of getting a bunch of different projects to play nice together.
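As a sketch of the "sane defaults, overridable" idea: a default per-monitor scale factor could be derived from resolution and physical size, with the user free to override it per monitor. Everything below (function names, quarter-step rounding, the example monitor sizes) is illustrative, not any project's actual policy - real desktops tend to use DPI thresholds rather than a strictly proportional scale:

```python
import math

def monitor_dpi(width_px, height_px, diagonal_inch):
    """Physical DPI from pixel resolution and diagonal size in inches."""
    diag_px = math.hypot(width_px, height_px)
    return diag_px / diagonal_inch

def default_scale(dpi, base_dpi=96.0, step=0.25):
    """Pick a default scale factor, rounded to quarter steps; the user
    should be able to override this per monitor."""
    return max(1.0, round(dpi / base_dpi / step) * step)

# A 4K 13.3" laptop panel vs a 1440p 27" desktop monitor (illustrative):
for w, h, d in ((3840, 2160, 13.3), (2560, 1440, 27.0)):
    dpi = monitor_dpi(w, h, d)
    print(f"{w}x{h} @ {d}in: {dpi:.0f} DPI -> default scale {default_scale(dpi)}")
```

The point is only that the per-monitor factor, not the raw DPI, is what the window manager would hand to applications.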
I think Qt supports it in some quirky way, via an envvar with a per-screen notation - something like
QT_SCREEN_SCALE_FACTORS="2;1.5"
But it only means that when moving a window to another monitor, the window suddenly gets re-scaled/re-sized instead of being smoothly drawn using the different scale factor.
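For what it's worth, the semicolon-separated per-screen format (which, if I recall correctly, is what Qt's QT_SCREEN_SCALE_FACTORS accepts, including optional name=factor entries) is easy to parse. This is just an illustrative sketch, not Qt's actual code:

```python
import os

def parse_scale_factors(value):
    """Parse a per-screen scale-factor list like "2;1.5" or
    "eDP-1=2;HDMI-1=1" into a dict of screen name -> factor.
    Unnamed entries get positional placeholder names."""
    factors = {}
    for i, entry in enumerate(value.split(";")):
        if not entry:
            continue
        if "=" in entry:
            name, factor = entry.split("=", 1)
        else:
            name, factor = f"screen{i}", entry
        factors[name] = float(factor)
    return factors

os.environ["QT_SCREEN_SCALE_FACTORS"] = "2;1.5"
print(parse_scale_factors(os.environ["QT_SCREEN_SCALE_FACTORS"]))
```

Which also shows the limitation discussed below: an environment variable is read once at startup, so it can't follow monitors being plugged in or reconfigured.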
Well, yes, this is basically all that applications can do by themselves without support from the rest of the stack. It isn't impossible, but it will be limited to application/library-specific hacks, and those have a limited view of the system anyway. Ideally you want support from the environment itself, with applications only responsible for scaling themselves properly.
The window manager is in the best position to support something like this since it handles window placement. It is possible to have a separate program keep track of window placement and provide the relevant events too (for window managers that won't support this), but it can be tricky to get that working smoothly.
The sudden scale/resize is something that can't really be avoided without support from the compositor (be it the window manager, a separate compositor, or server-side composition), assuming there is one of course (this should work without a compositor too; you just won't get the scaled fallback unless the server-side scaling patch is also added to Xorg). Again, it can be done and the tech is there; it just needs wiring all these things up.
The problem with environment variables is that they don't support dynamically adding and removing monitors, or changing the scale factor of an existing monitor.
Also, personally I think integer coordinate scaling (if any) plus fractional font-only scaling (or fractional blurry server-side scaling) is the only way to support fractional DPI scaling without breaking the fundamental assumption that integer-aligned pixel rectangles don't overlap or partially cover pixels. Qt's fractional coordinate scaling breaks much of current theming and many app painting algorithms.
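A tiny example of the overlap problem: two logical spans that abut on an integer edge stay pixel-aligned under 2x scaling, but under 1.5x the shared edge lands mid-pixel, so one device pixel ends up partially covered by both. The function is mine, purely illustrative:

```python
def scale_rect(x, w, factor):
    """Scale a 1D span of logical pixels [x, x+w) to device coordinates."""
    return x * factor, (x + w) * factor

# Adjacent logical spans [0,3) and [3,6). At integer 2x the shared edge
# maps to device coordinate 6.0 (pixel-aligned, still abutting exactly);
# at fractional 1.5x it lands at 4.5, so device pixel 4 is half-covered
# by each span -- the partial-coverage problem described above.
for factor in (2.0, 1.5):
    a = scale_rect(0, 3, factor)
    b = scale_rect(3, 3, factor)
    print(f"{factor}x: left={a}, right={b}")
```

Anything drawn assuming exclusive ownership of its pixel rectangle (borders, hairlines, themed widget edges) breaks on that half-covered pixel.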
The Qt QPA for Wayland supports this properly and natively since ~2017 or earlier, and will render one single app at multiple DPIs around screen boundaries.
Yes, but the discussion here is how that could be done with Xorg, not Wayland. My point is that the tech is there and at least Qt already has code for it, but it just needs all the relevant projects to work together. Or more accurately, someone to make all the necessary patches for all the affected projects and have these patches be accepted :-P.
I don't plan on getting one, since I find even 1440p/27" too much for my taste (I only got that because it was the only high-refresh-rate monitor with a flat VA panel I could find when I decided to get a new monitor). But if I ever have to get a HiDPI monitor (e.g. someone finally releases an OLED monitor but does the annoying thing of only offering a single configuration that happens to be 4K, because bigger numbers are better, all other concerns be damned), I'll try to go through everything to get it working on my PC (assuming no one else does it before me).
I doubt this will get done on Xorg any time soon; there is very little incentive for anyone to do it when it's already working much better on Wayland. The problem is in finding someone who will do it. It's not really a problem to figure out how it could be done, because we already know that: just copy the Wayland API back into X. A good way to start would probably be to get scaling working in XWayland applications and then backport from there. Amazingly, we're finally at the point where Xorg is playing catch-up to Wayland in terms of features. I can't overstate how huge that achievement is for the Linux display stack.
I don't think so, the X.org server would likely have to be rewritten entirely to get some of the improvements of Wayland. And at that point you might as well use Wayland. People have written alternate X servers before but I've never seen one that was totally feature compatible with X.org.
I am suggesting that Wayland may not reach critical mass because of these features being backported, and Wayland lacking a lot of functionality would prevent me from using it until it reaches parity, or at least enough functionality. I mostly had a great time with it on GNOME and a tablet interface, but I missed widgets and some functionality.
In terms of developer velocity I think it has already reached critical mass. I can't comment on any particular missing features in GNOME, to me it is probably more likely that a GNOME contributor is working on getting those features ported over to Wayland than somebody working on backporting things to X.
That's great; I'll switch when it's not a skeletonized version of the functionality I use now. I hope there's an easy way to bring global hotkeys into my current programs, and to have widgets or GUI support. I tried using Ctrl+Up as a hotkey in the compositor, for instance, and it never worked over D-Bus when I tried last week.
I just mentioned GNOME because I actually really dislike it, but I was forced to use it for good tablet support with Wayland; they broke the plugins I used.
I've been out of the X vs Wayland discussion for ages, but have been on Gnome / Wayland on my work laptop for the last 2 years and wouldn't say that this use case is perfect or even great (though... it gets the job done). Guess your comment means it's even worse on X.
My work laptop is 4k 13" while my second monitor is 1440p 27". I had to enable some gnome custom scaling flag to unlock fractional custom scaling (as you probably also had to do) - not because I want to run my laptop at e.g. 175%, but because setting the laptop scaling to 200% without fractional scaling turned on would mess up all scaling on the external display too (many elements also 200% etc).
So with fractional scaling on, the laptop at 200% scaling and the external monitor at the default 100%, everything does work mostly right... except for all the programs that didn't get the memo: they seem to scale properly but show up all blurred. I know it's a big Electron issue (so it affects Slack, Chrome and VSCode - my 3 most-used programs); I'm unsure if it's a problem with other stuff.
But all in all has been kind of annoying.
I know they are working on fixes from the Electron angle but last I checked none of it was very stable.
Thanks, now I know for sure the Steam Deck will just be another Windows computer that came with Linux. Even these assumptions about its use case were wrong. How could anyone in good conscience support switching to this? I can see noobs who think newer is better, but wow, horrid. Android is pretty smooth; I wonder if running its framework on Linux could work, lol.
Feel free to leave the thread because you can’t handle these truths. Wayland by itself is complete shit and will be enough to ruin the Deck experience.
X.org has many other problems, mostly related to screen tearing or security bugs. It is getting closer to the end of its life, being too complicated and not a well-designed project.
> It is getting closer to the end of its life, being too complicated and not a well-designed project.
X11 is 37 years old. Even taking at face value the rumors of its death, I'd say it was shockingly well designed if it's finally - maybe - starting to hit its limits.
It's not really a rumor, if you check upstream you can see that nearly all work on new features is now happening in Wayland and Mesa and then eventually might get backported to X. That's what the article is about.
> X.org has many other problems, mostly related to screen tearing or security bugs. It is getting closer to the end of its life, being too complicated and not a well-designed project.
That is FUD.
Screen tearing is a subjective issue; personally I never cared about it, and my biggest annoyance with Windows is that its forced compositor also adds vsync, which forces input lag across the desktop. X allowing me to not use that garbage is a big plus. With a high-refresh-rate monitor, tearing is practically invisible anyway. Also, X11 applications can avoid screen tearing themselves; if some do not, that is a bug in the application.
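The vsync input-lag complaint is easy to quantify: every frame a compositor buffers before presenting on vsync adds one refresh interval of latency. A back-of-the-envelope sketch (the function name is mine):

```python
def vsync_added_latency_ms(refresh_hz, buffered_frames=1):
    """Extra input-to-photon latency from a compositor that holds
    `buffered_frames` completed frames and presents them on vsync."""
    return buffered_frames * 1000.0 / refresh_hz

for hz in (60, 144, 240):
    print(f"{hz:>3} Hz: +{vsync_added_latency_ms(hz):.1f} ms per buffered frame")
```

On a 60 Hz desktop that's over 16 ms per buffered frame, which is also why high-refresh monitors make both tearing and compositor lag less noticeable.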
The security-bugs concern is overblown, and Wayland's security features are akin to keeping your computer turned off to get the best security. In practice, if you do not trust an application you can isolate it, though it is really futile since applications can simply use means other than X to "spy" on you. X already provides functionality for doing that, but if you are really paranoid (why are you running untrusted applications in that case?) you can run a nested X... or even X under Wayland under X. Then, once you notice the onion-wrapped application launch your browser or any other application, you'll realize the futility of said onion-wrapping - but hey, until that moment you'll feel safe.
> end of its life
Xorg is an open source program; you can't "end-of-life" an open source program. It isn't Visual Basic 6, which died because Microsoft decided it should die. As long as someone wants to improve it, it will get improvements.
Also a new version was released just a few weeks ago with a new maintainer that offered to make new releases.
> being too complicated
Have you actually checked the source code? If you ignore the drivers and all the satellite libraries (like Xt, Xaw, etc) the X server itself isn't really that big or complicated.
> not well designed project
This is irrelevant, it does what it is supposed to do.
I always hear these same talking points as well. Why do they want to doom X.org so much? Is it some sort of meme spread by people who want everyone to switch to Wayland? I feel like every time I hear of a security issue, almost anywhere, it's paranoia: overblown and generally so improbable that if I had never heard of it I would be fine. I disabled the Spectre and Meltdown mitigations and recommend everyone else with an updated browser do the same.
> In practice, if you do not trust an application you can isolate it, though it is really futile since applications can simply use means other than X to "spy" on you. X already provides functionality for doing that, but if you are really paranoid (why are you running untrusted applications in that case?) you can run a nested X... or even X under Wayland under X.
This exactly is the point. Wouldn’t it be nice to just use apps without extra steps? It would make Linux better for non-experts.
You can’t even isolate X applications without running another X.org instance, because there is no permission control once the application has access to the socket - and with the socket it can see everything.
And you can’t really rework that. Sometimes you need to rewrite the whole thing.
Applications cannot normally spy on you unless you run everything as root. It is X.org which provides access to keystrokes and windows.
> Xorg is an open source program; you can't "end-of-life" an open source program. It isn't Visual Basic 6, which died because Microsoft decided it should die. As long as someone wants to improve it, it will get improvements.
Because the design is bad, it gets harder and harder to add new features. Fixing one bug introduces two new ones. The complexity makes it hard to approach the project and keep everything under control. Red Hat has maintained it for many years with proper funding; otherwise who knows what would have happened.
>This exactly is the point. Wouldn't it be nice to just use apps without extra steps? It would make Linux better for non-experts.
I don't run untrusted apps - so why accept crashes and less overall functionality? It's cutting off the nose to spite the face.
>You can’t even isolate X applications without running another X.org instance, because there is no permission control once the application has access to the socket - and with the socket it can see everything. And you can’t really rework that. Sometimes you need to rewrite the whole thing.
Not a real-world scenario problem.
>Because the design is bad, it gets harder and harder to add new features. Fixing one bug introduces two new ones. The complexity makes it hard to approach the project and keep everything under control. Red Hat has maintained it for many years with proper funding; otherwise who knows what would have happened.
Wayland's design is... good? It's taken over a decade to add a fraction of X.org's features, and it still crashes. If it is so simple and it still sucks, are you calling the programmers incompetent for not making the simple compositor functional?
> I don't run untrusted apps - so why accept crashes and less overall functionality? It's cutting off the nose to spite the face.
The best practice is zero trust: handle everything equally. You can't always tell by yourself what is really trustworthy, and if you can, then you are in the 0.01% of the actual population, and decisions cannot be based on that.
> Not a real world scenario problem.
Of course it is; it is the biggest attack surface for a normal application.
No, I am asking about attacks in the wild. I know Spectre and Meltdown were possible to exploit, but that's different from them existing as actual attacks - like a recipe versus a cooked meal.
I'm not sure what you mean by attacks in the wild. I don't have any news stories talking about how companies lost millions of dollars due to an X.org-based ransomware; but I hope you can see how it's not a good idea to wait for that to happen before fixing a security bug :)
Encryption attacks are mitigated by backups, but Linux servers don't usually run X.org or a GUI, do they? This might be why desktop Linux isn't being adopted by business, but my point was that we hear of encryption attacks often, yet not of a single X.org attack that would make migration more pressing. It's like living in a nuclear shelter when there is no nuclear threat - living in an uncomfortable state out of paranoia.
Linux is said to be safer by the public, but if X.org is that bad, is Windows actually safer since it doesn't use X?
I think it's useful to mitigate problems, but without real-world examples it's hard to care about invisible hypotheticals, especially at the cost of lost functionality.
I'm still not sure I understand. If there is a working proof of concept for the exploit that is published, would you still consider that an invisible hypothetical? To me, it's not, I would like to have those patched. As with meltdown and spectre there may be functional tradeoffs, but when significant money is at risk from security vulnerabilities then I'd usually expect security to win out.
The attack vector for a trojan or ransomware can be a GUI system. It can be anything really, the malware just needs a way to get into the network and then it can cause more trouble and spread to more nodes.
>I'm still not sure I understand. If there is a working proof of concept for the exploit that is published, would you still consider that an invisible hypothetical?
Yes. If deployment is difficult and not applicable in real-world settings, it isn't really a threat. It's like reading, ever since the first iPhone 5S, about TouchID being tricked by copied fingerprints, or FaceID needing a bust of a person to be fooled. Do people still use them? It's a recipe, maybe it's even cooked, but if nobody eats the poisoned food because it smells bad, I'm not worried I might eat it.
>when significant money is at risk from security vulnerabilities then I'd usually expect security to win out.
In practice, sadly, it isn't true - witness the constant leaks of other people's data.
I think updating browsers is a good idea, sandboxing apps can be safer, and using a VM for some functionality could be useful too (if you run XP and the malware detects it's a VM, it doesn't even infect it). Basically, I see most security issues as paranoia when it's academics publishing hypothetical attacks that have never been seen in the wild. If they made super-ebola or anthrax in a lab, I'm not too worried about breathing it in.
I would like it patched if the cost is worth it. Intel's was not: I disabled it, and I religiously update my browser. My computer is faster, and I stay safe despite the exploit never existing as an attack, because it was easy to defeat.
>It's like reading, ever since the first iPhone 5S, about TouchID being tricked by copied fingerprints, or FaceID needing a bust of a person to be fooled
That's not really comparable, these are trivial exploits that can probably be targeted with a 100-line program, or less.
>In practice, sadly, it isn't true - witness the constant leaks of other people's data.
I've known many security people who take their jobs very seriously. If they weren't doing their jobs, you'd see quite a lot more data breaches than you do now :)
>That's not really comparable, these are trivial exploits that can probably be targeted with a 100-line program, or less.
I feel like it would be newsworthy if they were trivial, or we'd see more of them. Today there was a story about malware on the PinePhone, a device almost nobody has, which didn't cost anyone any money. If these exploits were trivial, they'd be utilized more, and we wouldn't need to care about the Wayland issues as much, since the alternative would be worse.
I tried wayland just because of this thread, and I got 2 crashes within minutes.
I will. KDE discourages me somewhat because there wasn't enough relevant information the few times I tried using their crash reporter. I think it has to do with mouse speed, like GNOME.
> You can’t even isolate X applications without running another X.org instance, because there is no permission control once the application has access to the socket - and with the socket it can see everything. And you can’t really rework that. Sometimes you need to rewrite the whole thing.
Actually you can: the X server can run applications in an untrusted state where they cannot see other clients' resources. It is not straightforward to set up though, so running a nested X server is much simpler - at least until you realize that the same "untrusted" app has access to the rest of your system anyway.
> It is X.org which provides access for keystrokes and windows.
You can avoid that if you want but as others have mentioned you are way more likely to need that functionality for legitimate purposes.
You can fine-tune your app's access to the filesystem with AppArmor, for example, but access to X.org is always required, and it remains the biggest attack surface.
Neither are really good options for graphical applications. The focus now is on using container sandboxing, at least with things like flatpak and snap anyway.
I'm not an expert on the wayland/xorg/xinput/gnome-shell stack, but unless someone smarter takes good measurements of various things (visual and input delays), I'd suggest
1). Xorg + gnome-flashback or Xorg + other non-compositing WM
or
2). Non-gnome compositor under wayland
As for 1).
Xorg b/c it can provide pointer movements at ~native speed via XInput, vs Gnome/Wayland's 60-240 Hz. And gnome-flashback b/c it's a non-compositing WM, so there's no frame buffering (unless you enable it in nvidia's panel, or via TearFree in amdgpu).
As for 2).
Sway, I suppose, doesn't aggregate mouse movements, but looking at https://zamundaaa.github.io/wayland/2021/12/14/about-gaming-... the author used some hack to enable "immediate" drawing under KWin. I'm not sure what the default behavior of e.g. Sway is - does it buffer frames?
I have a 240Hz monitor, so gnome-shell passes data to xwayland and then to applications at ~220Hz (as seen in the event-freq from https://blogs.gnome.org/shell-dev/2021/12/08/an-eventful-ins...)