According to [0] there have been several releases in the last 10 years, including one in 2021 that added variable refresh rate and touchpad gesture support.
On the output side: Mixed-DPI displays. HDR/wide color gamuts/LUTs/color management. Non-rectangle workspaces, or workspaces with no absolute coordinate systems (think headsets).
On the input side: touch gestures recognized as touch gestures, not as mouse emulation.
On both: hotplug of all input/output devices. Headless sessions and handover of sessions between console and headless, both directions. Reliable lock screen, even if a popup menu is showing ;).
At some point you will have to either start building it for yourself or get someone to build it for you (when distros drop it from their repos). And that will become harder and harder as time goes by. Just recently, apps I was building using Tauri stopped working on X and forced me to switch to the Wayland Gnome session (on Debian testing) permanently. This will become more and more common, until it becomes untenable.
Building X is probably going to be trivial going into the future, yeah. Building and running future apps written for Wayland will be a different beast though.
Just like there is XWayland today to make it possible to use a Wayland system with X apps, there are already similar solutions to run Wayland apps on an X system.
If you run `cage`, it will run a Wayland application in its own sandboxed window while the rest of the system runs X.
Similarly, if you run `sway` under X, for example, it will start the window manager in its own window, and you can run Wayland applications inside it.
With most installations of X11 / Xlib, there is also an extension library: libXFixes. With this, it is possible to receive an event whenever the clipboard (or any other selection) changes (even if our program is not the owner). [0]
I used this in the past when porting a Windows text editor to Linux that needed to know when the clipboard had changed.
In the C bindings, it corresponds to XFixesSetSelectionOwnerNotifyMask / XFixesSelectionNotifyEvent. There is also a SelectionNotify event in the core X11 protocol, but I'm pretty sure it is different (it occurs in response to XConvertSelection, i.e. when asking for the clipboard contents). Though it has been some time since I wrote the code I'm looking at.
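A minimal sketch of how that looks with Xlib, assuming libXfixes is installed (compile with -lX11 -lXfixes); error handling and cleanup are omitted:

```c
#include <stdio.h>
#include <X11/Xlib.h>
#include <X11/extensions/Xfixes.h>

int main(void) {
    Display *dpy = XOpenDisplay(NULL);
    int event_base, error_base;
    if (!dpy || !XFixesQueryExtension(dpy, &event_base, &error_base))
        return 1;

    Atom clipboard = XInternAtom(dpy, "CLIPBOARD", False);
    Window win = XCreateSimpleWindow(dpy, DefaultRootWindow(dpy),
                                     0, 0, 1, 1, 0, 0, 0);

    /* Ask to be notified whenever the selection owner changes,
       even though we never own the selection ourselves. */
    XFixesSelectSelectionInput(dpy, win, clipboard,
                               XFixesSetSelectionOwnerNotifyMask);

    for (;;) {
        XEvent ev;
        XNextEvent(dpy, &ev);
        if (ev.type == event_base + XFixesSelectionNotify) {
            XFixesSelectionNotifyEvent *sev =
                (XFixesSelectionNotifyEvent *)&ev;
            printf("clipboard owner changed (owner window 0x%lx)\n",
                   (unsigned long)sev->owner);
        }
    }
}
```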
I think the goal here is to be notified when an application accesses the clipboard, not modifies it. You need to be the owner to do that (excluding some system-wide hack like patching the X server or MITM-ing all X11 connections).
I feel as though they should change the name of the paper, because this must be one of the most complex takes on 'defer' that I have seen.
They make the classic mistake of trying to solve a problem by adding more complexity. They cannot decide between by-value and by-reference, so they borrow the convoluted C++ capture syntax to allow specifying either (as well as other semantics?).
It does not fit with C in my opinion. It will also hurt third party tools that wish to understand C source code, because they must inherit this complexity.
I would prefer a solution where they simply pick one (by ref / by value) and issue a compiler warning if the programmer has misunderstood. For example, pick by-value and issue a warning if a variable used inside the defer is reassigned later in the scope.
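As a runnable stand-in (this is not the proposal's defer, just GCC/Clang's cleanup attribute, which behaves like a by-reference defer), here is roughly the case where such a warning would matter:

```c
/* Sketch only: `cleanup` is a GCC/Clang extension used here to mimic
   defer; the cleanup function receives a pointer to the variable, so
   it always sees the *latest* value (reference-like semantics). */
#include <stdio.h>
#include <stdlib.h>

static void free_ptr(char **p) { free(*p); }

int main(void) {
    __attribute__((cleanup(free_ptr))) char *buf = malloc(16);

    /* Reassigning the variable changes what gets freed at scope exit.
       Under by-value defer semantics the original pointer would be
       freed instead; this is exactly the spot where the compiler
       warning suggested above would fire. */
    char *bigger = realloc(buf, 64);
    if (bigger)
        buf = bigger;

    snprintf(buf, 64, "hello defer\n");
    fputs(buf, stdout);
    return 0;  /* free_ptr(&buf) runs here, freeing the current value */
}
```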
I think people are getting hung up on the lambda syntax, but it seems they're just taking what they're given. If c23 adds lambdas, at this point I'd say it's more likely to be c++ syntax than c syntax because c++ is already out there. So instead of "to resolve ambiguity we force user to choose" it's more like "the probable lambda syntax already makes this explicit so we will use it too." I think that makes more sense than two different syntaxes for lambda and defer, whatever the other merits of the proposal.
I had a go at this myself a few years ago [0]. But I wanted a dynamically linked ELF instead of a static one so that I could load SDL, OpenGL, etc. That requires extras like a DYNAMIC section which takes up quite a bit more space.
I ended up at 728 bytes without any self-extracting techniques. It played a nice animation though.
I have not tested it recently; I expect it won't run any more, as it relied on "bad things" like ecx having a specific value when the program started, but the ideas should still be relevant.
to back up CDs/DVDs, along with toc2cue and bchunk to convert to .iso.
I have no idea if the resulting iso is "more accurate" than using dd - but they've always worked, even from CDs with copy protection and obscure file systems (old mac CDs).
Yep, the up vector is not necessarily orthogonal to the direction vector (which is also called “look at” vector in OpenGL [1]). Another approach would be to set
We've already defined z and x by that stage, and we need y to be a unit vector perpendicular to both. So y can only be z.cross(x) (or x.cross(z) for mirrored).
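For illustration, a small sketch of that fix-up (the usual gluLookAt-style basis construction); the vec3 type and helpers are made up for the example, and it compiles with -lm:

```c
#include <math.h>
#include <stdio.h>

typedef struct { float x, y, z; } vec3;

static vec3 cross(vec3 a, vec3 b) {
    return (vec3){ a.y * b.z - a.z * b.y,
                   a.z * b.x - a.x * b.z,
                   a.x * b.y - a.y * b.x };
}

static vec3 normalize(vec3 v) {
    float len = sqrtf(v.x * v.x + v.y * v.y + v.z * v.z);
    return (vec3){ v.x / len, v.y / len, v.z / len };
}

static vec3 sub(vec3 a, vec3 b) {
    return (vec3){ a.x - b.x, a.y - b.y, a.z - b.z };
}

int main(void) {
    vec3 eye    = { 0, 2, 5 };
    vec3 center = { 0, 0, 0 };
    vec3 up     = { 0, 1, 1 };   /* deliberately not orthogonal to the view direction */

    vec3 z = normalize(sub(eye, center)); /* camera looks down -z */
    vec3 x = normalize(cross(up, z));
    vec3 y = cross(z, x);                 /* y = z x x: unit length, perpendicular to both */

    printf("y = (%f, %f, %f)\n", y.x, y.y, y.z);
    return 0;
}
```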
The article doesn't mention SECCOMP_RET_TRAP which was an existing way to inspect syscall pointer arguments during interception (when combined with a SIGSYS signal handler).
I'm curious how the two approaches compare - does USER_NOTIF give a greater range of possibilities, or is it mostly just a different interface?
In a small application that forks once and uses seccomp on the child process, would there be much benefit in moving from RET_TRAP to USER_NOTIF?
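For reference, a minimal sketch of the RET_TRAP + SIGSYS approach described above: the filter traps openat, and because the handler runs in the target's own address space it can read pointer arguments from the register context. This is only an illustration (no architecture check, no error handling), not production-quality filtering:

```c
#define _GNU_SOURCE
#include <fcntl.h>
#include <linux/filter.h>
#include <linux/seccomp.h>
#include <signal.h>
#include <stddef.h>
#include <sys/prctl.h>
#include <sys/syscall.h>
#include <unistd.h>

static void sigsys_handler(int sig, siginfo_t *info, void *ucontext) {
    /* info->si_syscall holds the trapped syscall number; pointer
       arguments can be read from the (arch-specific) register context
       in `ucontext`, since we run inside the target process itself. */
    const char msg[] = "SIGSYS: openat was trapped\n";
    write(STDERR_FILENO, msg, sizeof(msg) - 1);
    (void)sig; (void)info; (void)ucontext;
}

int main(void) {
    struct sigaction sa = { 0 };
    sa.sa_sigaction = sigsys_handler;
    sa.sa_flags = SA_SIGINFO;
    sigaction(SIGSYS, &sa, NULL);

    struct sock_filter filter[] = {
        /* load the syscall number */
        BPF_STMT(BPF_LD | BPF_W | BPF_ABS, offsetof(struct seccomp_data, nr)),
        /* trap on openat, allow everything else */
        BPF_JUMP(BPF_JMP | BPF_JEQ | BPF_K, __NR_openat, 0, 1),
        BPF_STMT(BPF_RET | BPF_K, SECCOMP_RET_TRAP),
        BPF_STMT(BPF_RET | BPF_K, SECCOMP_RET_ALLOW),
    };
    struct sock_fprog prog = {
        .len = sizeof(filter) / sizeof(filter[0]),
        .filter = filter,
    };

    prctl(PR_SET_NO_NEW_PRIVS, 1, 0, 0, 0);
    prctl(PR_SET_SECCOMP, SECCOMP_MODE_FILTER, &prog);

    /* this call now raises SIGSYS instead of executing */
    openat(AT_FDCWD, "/etc/hostname", O_RDONLY);
    return 0;
}
```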
In addition, the trap isn't safely usable with shared libraries because of signals. For example, once glibc adopts rseq it will block all signals during thread creation, making it impossible to use RET_TRAP. That's an issue Firefox/Chromium have already run into, and it is one of the reasons they are interested in switching to the seccomp notifier.
The trap also doesn't allow syscalls to be continued nicely and - as Sargun pointed out - requires ptrace() to be used to inspect syscall arguments and so on. The notifier also has built-in protection against pid recycling, is more secure, and is far more efficient.
This is somewhat unrelated to the seccomp notifier, but since ptrace() came up I want to say a few words about it. (And this is more a criticism of the interface, not the implementation. I love Oleg, who maintains it and is one of the few people who understand all its intricacies!)
As a rule of thumb: you can do almost anything with ptrace(). Which is why people not really putting an effort into kernel patch reviews often come up with the argument "Why do you need a separate api for that? You can already do that with ptrace()." To which the correct answer in my book almost always is: "Because it would be a horrible hack." Effectively, introducing a dedicated api to do something that you can in some shape or form do with ptrace() is the equivalent of moving it from a debugging hack to a (hopefully well-designed) feature.
Hell, the history of CRIU is essentially the history of building apis out of ptrace() hacks (I'm being facetious of course.).
Imho, with ptrace() you're always in non-cooperative mode to some extent, i.e. you force the behavior on the task. The whole kernel code for ptrace() attach is literally "I'm your parent now." whereas features such as the notifier are almost always cooperative since the task itself is doing the work. Specifically for the notifier the nice thing is that all the work is happening in the task itself. This is especially relevant when you e.g. install file descriptors into the task which is a future patchset that is about to be merged.
Shockingly different. The -O3 option made hardly any difference with OjC but more than a 10x difference with simdjson. I'll be removing the claim from the OjC readme.
Thank you for being civil with your reply. Much appreciated.
What I've learned from this (as a simdjson author) is that we need to update the quick start in the README to have -O3. I was so psyched about the fact that we now compiled warning-free without any parameters ... that I didn't stop to think that some people would go "huh I did what they told me and simdjson is slow, wtf." Because we evidently told you to compile it in debug mode in the quick start :)
simdjson relies deeply on inlining to let us write performant code that is also readable.
Sorry to have sent you down a blind alley!
One thing to note: if you want to get good numbers to chew on, we have a bunch of really good real world examples, of ALL sizes (simdjson is about big and small), in the jsonexamples/ directory of simdjson. And if you want to check ojc's validation, there are a number of pass.json and fail.json files in the jsonchecker/ directory.
The structure of simdjson is considerably easier to read because we rely on -O3. To get the same performance at lower optimization levels we would need to do a lot of manual work that makes the source code quite difficult to read and work with.
I was initially surprised that all the lexers treat "[01]" as four tokens, but it makes sense from the state diagram.
In the past I've encountered JSON lexing that only considers token boundaries on "special" characters i.e. ",}]:" and whitespace. This will return a lexing error when it sees "01" (equivalently "truefalse").
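A toy illustration of why "[01]" comes out as four tokens (this is not any particular library's lexer, just a sketch of the state machine): a JSON number starting with '0' cannot be followed by another digit, so the lexer closes the "0" token and starts a new number at '1':

```c
#include <ctype.h>
#include <stdio.h>

/* consume one JSON number token starting at p and return the end */
static const char *lex_number(const char *p) {
    if (*p == '-') p++;
    if (*p == '0') {
        p++;                                  /* leading zero: no further integer digits allowed */
    } else {
        while (isdigit((unsigned char)*p)) p++;
    }
    /* fraction and exponent parts omitted for brevity */
    return p;
}

int main(void) {
    const char *input = "[01]";
    const char *p = input;
    while (*p) {
        if (*p == '[' || *p == ']') {
            printf("token: %c\n", *p++);
        } else if (isdigit((unsigned char)*p) || *p == '-') {
            const char *end = lex_number(p);
            printf("token: %.*s\n", (int)(end - p), p);
            p = end;
        } else {
            p++;                              /* skip whitespace etc. */
        }
    }
    return 0;
    /* prints:
       token: [
       token: 0
       token: 1
       token: ]
    */
}
```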
One could imagine first tokenizing based only on whitespace, and only then figuring out what the tokens are. Which means parsing them individually. Which means another parsing step.
I think this would match human reading more closely: structure is more obvious from visual separation than from detailed analysis.
I guess it wasn't done that way because the current way of operation means one parser to rule all sources, and that parser can handle more complicated cases. That kind of design decision is more surprising later, but is kind of understandable when you draft a language at the same time as your first parser.
[0]: https://en.wikipedia.org/wiki/X.Org_Server