> The reason is entirely technical and I think it has been formulated clearly [..]
He expressed an opinion in technical language, based on his own beliefs, and that's all: there was no reasoning behind it.
He did not even acknowledge the fact that it is going to be maintained separately from the actual generic kernel code (rust/kernel/dma vs kernel/dma).
Anyone can easily formulate a sentence that seems coherent and correct, yet can be proven completely false in 15 seconds with actual data.
IOW: just because someone calls it a technical argument doesn't make it one.
This is a matter of opinion - specifically, the opinion of a single person.
> If there's a single most reason why Rust-in-Linux will fail it is going to be because of the immaturity and entitlement of individuals in Rust community.
Indeed there was immaturity from several individuals.
One question: if you act immaturely towards someone and they react immaturely, whose fault is it? The person who reacted, or you?
I do not believe there is a widespread issue of entitlement: if you follow the discussions on the ML and observe how the R4L project has progressed so far, the only "entitlement" the individuals in the R4L project seem to have in common is the desire to be treated with respect and to have discussions focus on technical arguments.
There is a large difference between "I do not think this is a good idea" vs "do not do this", in particular given the position Hellwig has in the kernel as a listed maintainer of the DMA mapping helpers.
No single technical reason was given besides a non-specific opinion on the "messiness" of multi-language projects.
https://docs.rs/arc-swap/latest/arc_swap/ is a basic RCU mechanism that doesn't have the exotic requirements of urcu (though it's probably not as efficient). If you make the interior nodes of your data structure Arc (assuming they are large or expensive enough to warrant it), then updating is relatively fast. Of course, you also want to be careful to batch as many changes as possible at once if you're doing this at all frequently.
But ultimately I don't recall anything special about the runtime semantics. The first time I came across this technique, it was, if I recall correctly, from the brilliant folks at Azul Systems for their JVM. You just need a way to atomically adjust certain nodes. And if I recall correctly, Azul's supported N concurrent writers that were all wait-free (i.e. every thread participated in forward progress).
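A minimal std-only sketch of the pattern (all names here are mine; arc-swap itself uses a lock-free atomic pointer where this uses an RwLock, but the read/copy/swap shape is the same):

```rust
use std::sync::{Arc, RwLock};

// Hypothetical shared value read by many threads, replaced wholesale by writers.
struct Shared<T> {
    current: RwLock<Arc<T>>,
}

impl<T> Shared<T> {
    fn new(value: T) -> Self {
        Shared { current: RwLock::new(Arc::new(value)) }
    }

    // Readers grab a snapshot: a cheap Arc clone under a very short lock,
    // then use it with no lock held at all.
    fn load(&self) -> Arc<T> {
        self.current.read().unwrap().clone()
    }

    // Writers build a whole new value and swap the pointer in.
    fn store(&self, value: T) {
        *self.current.write().unwrap() = Arc::new(value);
    }
}

fn main() {
    let shared = Shared::new(vec![1, 2, 3]);
    let snapshot = shared.load(); // a reader keeps using this...
    shared.store(vec![4, 5, 6]);  // ...while a writer swaps in a new value
    assert_eq!(*snapshot, vec![1, 2, 3]);      // old snapshot stays valid
    assert_eq!(*shared.load(), vec![4, 5, 6]); // new readers see the update
}
```

The point of the Arc is exactly what the reclamation problem below is about: old snapshots stay alive for as long as some reader holds them, and are freed automatically when the last one drops.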
To be fair, RCU is about as exotic as hazard pointers. Then again, there is plenty of documentation for both and once you are familiar with them they lose a lot of their mystique.
You're thinking of the kernel and urcu implementations. The RCU implementation I linked here isn't exotic at all. It creates a duplicate structure and atomically swaps pointers. If the swap fails because another update occurred, it runs your RCU closure again to create a new duplicate and tries the swap again. Forward progress is eventually made, although it scales poorly if you have a lot of concurrent updaters.
So it's not quite as performant, but in terms of having a writer and lots of concurrent readers, it achieves that requirement. And it's data-structure agnostic, and you only pay this cost if you really have concurrency.
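The duplicate-and-swap retry loop described above can be sketched on a raw atomic pointer like this (names are illustrative, not arc-swap's API; note the sketch deliberately skips reclaiming the old value, which is exactly the hard part discussed further down):

```rust
use std::sync::atomic::{AtomicPtr, Ordering};

// Copy-update-retry loop: duplicate the current value via `f`,
// then try to atomically swap the new copy in.
fn rcu_update<T>(ptr: &AtomicPtr<T>, f: impl Fn(&T) -> T) {
    let mut cur = ptr.load(Ordering::Acquire);
    loop {
        // Duplicate: build a brand-new structure from the current one.
        let new = Box::into_raw(Box::new(f(unsafe { &*cur })));
        match ptr.compare_exchange(cur, new, Ordering::AcqRel, Ordering::Acquire) {
            // Swap succeeded. The old value must be reclaimed later
            // (hazard pointers, epochs, ...); this sketch leaks it.
            Ok(_old) => break,
            // Another updater won the race: discard our copy and retry
            // against whatever value is now current.
            Err(actual) => {
                drop(unsafe { Box::from_raw(new) });
                cur = actual;
            }
        }
    }
}

fn main() {
    let ptr = AtomicPtr::new(Box::into_raw(Box::new(0u64)));
    for _ in 0..100 {
        rcu_update(&ptr, |n| n + 1);
    }
    assert_eq!(unsafe { *ptr.load(Ordering::Acquire) }, 100);
}
```

Every retry rebuilds the duplicate from scratch, which is why this is lock-free but scales poorly under heavy write contention, as noted above.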
This looks rough: it spawns background "worker threads" for tasks like lazy resizing; and even with that it still sometimes takes locks (see resize_mutex).
If you search for "concurrent hash table <language>" or "concurrent map <language>" (where <language> is Rust or C++), you get a number of open-source libraries written using different techniques. I consider "exotic" a matter of opinion.
It is actually quite tricky to lock-free swap a concurrently-accessed reference counted pointer. The reference count is not associated with the pointer, but with the pointee, so a 2CAS is not enough.
Typically you need hazard pointers or similar deferred reclamation tricks.
I don't see any reasoning besides "I think it's not really needed".
The reasoning in the original article is shallow at best, and the presented alternatives are not discussed in the depth I was expecting for an article questioning the whole architectural basis of a very popular IaC stack.
On the second article, I don't see a self-critique of the actual points raised in the original article.
The two articles sound much like a collection of statements based on personal opinion.
I see the value of an article like this for starting a discussion, but not for drawing any sort of conclusion.
I would want this over docker and docker-compose any day.
I've been using docker compose in production for a couple of years now, and it adds another layer on top of systemd that is a continuous source of headaches, especially during updates.
Podman gets it right: no central daemon, can automatically generate systemd services for a whole pod. Updates are seamless.
Seconded on the things Podman gets right. Also, the isolation of all the containers in their own network namespaces makes port management on my workstation super easy. I run many things like Paperless NGX using the same pattern in the start.sh file of my little project. I then use Traefik to route traffic to the right pod. It works great.
Honestly, that sounds plausible, but how would you achieve higher conviction rates?
You're not going to get far with the current US police. If you compare the money put in against the crime stats, the picture is one of extreme inefficiency. They've failed in their mission to such an extent that a large minority of people simply don't trust them enough for them to do their duty to those people. They preside over a third-world crime rate with first-world tax revenues.
At this point, it would be reasonable to start talking about a wholesale reboot of the whole operation.
However, if that's too radical, or too expensive, it would seem pragmatic to try to target crime through some other mechanism - e.g. welfare.
I don't see how this reasoning makes any sense: you just need to look at the actual clock time to find out if the alarm went off or not.
For instance: alarm set for 8 AM.
You wake up, look at the clock: if it says 7:59 AM or earlier, you woke up before it had a chance to ring. If it says 8:00 AM or anything past that, it already went off.
Unless, of course, you have an alarm that does not show you the current time. Otherwise, it's pretty easy to figure out what's happening.
Algebraic effects are going to take OCaml to the next level in terms of expressivity, abstraction, and the ability to decouple separate concerns. It would be like going from a type system like C's, with only concrete types, to parametric polymorphism and/or generics.
Multicore is a very nice addition, but the fact that it is going to be coupled with an effect system is a game changer for the language as a whole.
I, for one, would gladly trade threaded GC and unchecked effects for checked exceptions...
Unless there is a way to implement some kind of poor man's checked exceptions with unchecked effects?
Please see [1]. Around 28:45 the talk moves to effect types. At around 29:25 the proposed mechanism (with throw) is literally described as "checked exceptions".
Care to elaborate?