Hacker News | new | past | comments | ask | show | jobs | submit | razighter777's comments

I would love to use OpenBSD. I really want to give it a try, but the filesystem choices seem kinda meh. Are there any modern filesystems with good NVMe and FDE support for OpenBSD?


This post goes over some of my trials and tribulations in making a clean user experience for TPM2-backed PIN authentication on Linux.


I frequently see FreeBSD jails highlighted as a feature, lauded for their simplicity and ease of use. While I do admire them, there are benefits to the container approach commonly used on Linux (and maybe soon FreeBSD will better support OCI).

First, it's important to clarify that "containers" are not an abstraction in the Linux kernel. Containers are really an illusion achieved by combining user/PID/network namespaces, bind mounts, and process-isolation primitives, driven by userspace applications (podman/docker plus a container runtime).
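A minimal sketch of that illusion, using only util-linux's unshare(1) and /proc, with no container runtime involved (the fallback covers systems where extra privileges are needed):

```shell
# Every process already lives in a set of namespaces; a container runtime
# just creates fresh ones. Inspect the namespaces of the current shell:
ls -l /proc/self/ns

# unshare(1) creates a new UTS namespace, so a hostname change inside it
# is invisible to the rest of the system:
unshare --uts sh -c 'hostname demo-container; hostname' 2>/dev/null \
  || echo "unshare needs CAP_SYS_ADMIN (or an unprivileged user namespace)"
```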

OCI container tooling is much easier to use and follows the "cattle, not pets" philosophy. When you're deploying on multiple systems and want easy updates, reproducibility, and mature tooling, you use OCI containers, not LXC or FreeBSD jails. FreeBSD jails can't hold a candle to the ease of use and developer experience OCI tooling offers.

> To solve the distribution and isolation problem, Linux engineers built a set of kernel primitives (namespaces, cgroups, seccomp) and then, in a very Linux fashion, built an entire ecosystem of abstractions on top to “simplify” things.

This was an intentional design decision, and not a bad one! cgroups, namespaces, and seccomp are used extensively outside the container abstraction (see flatpak, systemd resource slices, firejail). By not tying process isolation to the container abstraction, we let non-container applications benefit from them too. We also get a wide breadth of container-runtime choices.
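To illustrate (a hedged sketch; the binary path and the specific limits are made up), the same primitives can harden a plain systemd service with no container anywhere in sight:

```ini
[Service]
ExecStart=/usr/local/bin/mydaemon
# namespace-based isolation
PrivateTmp=yes
ProtectHome=read-only
PrivateDevices=yes
# seccomp filter
SystemCallFilter=@system-service
# cgroup resource limits
MemoryMax=200M
CPUQuota=50%
```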


Jails have been around a long time in comparison.

I still see FreeBSD as being great for things like networking devices and storage controllers. You can apply a lot of the "cattle vs pets" design one level above that using VMs and orchestration tools.


> lauding their simplicity and ease of use

Spawning a Linux container is much simpler and faster than spawning a FreeBSD jail.

I don’t know why I keep hearing about jails being better; they clearly aren’t.


If you don't want to use the base system (and Docker is NOT part of the base system on Linux), then Bastille offers a pretty much identical workflow to Docker, but built on FreeBSD jails: https://github.com/BastilleBSD/bastille

> I don’t know why i keep hearing about jails being better

Jails have a significantly better track record in terms of security.

I can delegate a ZFS dataset to a jail to let the jail manage it.

Do Linux containers have an equivalent to VNET jails yet? With VNET jails I can give the jail its own whole networking stack, so they can run their own firewall and dhcp their own address and everything.


You've been able to set up separate firewalls, network interfaces, IP addresses, etc. for probably 20 years using network namespaces. How do you think container networking is implemented? But you can also use it through other tools; for example, I use firejail to isolate a couple of proprietary desktop applications such that they cannot contact anything on my desktop (or network in general) except the internet gateway.
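For completeness, here's roughly what that plumbing looks like with iproute2; a sketch that needs root, with illustrative interface names and a fallback when the privileges aren't there:

```shell
if ip netns add demo 2>/dev/null; then
  ip link add veth0 type veth peer name veth1   # a virtual cable
  ip link set veth1 netns demo                  # move one end inside the ns
  ip netns exec demo ip addr add 10.0.0.2/24 dev veth1
  ip netns exec demo ip link set lo up
  ip netns exec demo ip addr show               # the namespace's own interfaces
  ip netns del demo                             # clean up
else
  echo "needs CAP_NET_ADMIN (root), skipping"
fi
```

Inside that namespace you can run your own firewall ruleset and your own DHCP client, which is essentially what container runtimes wire up behind the scenes.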


> If you don't want to use the base system (which docker is NOT the base system on Linux)

There are many ways to manage "containers" on Linux. I might agree that Docker is not part of the base system (although it really depends on what distro you're using).

But I might also use something like systemd-nspawn or systemd-machined (see https://wiki.archlinux.org/title/Systemd-nspawn or https://en.opensuse.org/Systemd-machined) to handle those.

> I can delegate a ZFS dataset to a jail to let the jail manage it.

I could probably do the same.

> Do Linux containers have an equivalent to VNET jails yet? With VNET jails I can give the jail its own whole networking stack, so they can run their own firewall and dhcp their own address and everything.

I'm not sure, but most likely yes. Maybe not through docker. Docker isn't the only way to run containers in GNU/Linux though.


Is there a docker-compose analogue in Bastille? I like being able to spin up an isolated local copy of my infrastructure, run integration tests, and then tear it all down automatically. I'd like to be able to do a similar thing with jails. I wonder if there's a straightforward way to achieve something similar with VNET jails?


Not that I'm aware of. FreeBSD did recently gain support for OCI containers and therefore has podman. I see podman-compose is in the ports tree, but I haven't tried it myself.

  https://freebsdfoundation.org/blog/oci-containers-on-freebsd/
  https://www.freshports.org/sysutils/podman-compose/


Sorry, what? It's a 5-line configuration file to create a FreeBSD jail.
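For readers who haven't seen one, a sketch of such a file (path, hostname, and address are illustrative):

```
# /etc/jail.conf
demo {
    path = "/jails/demo";
    host.hostname = "demo";
    ip4.addr = "192.168.1.50";
    exec.start = "/bin/sh /etc/rc";
    exec.stop = "/bin/sh /etc/rc.shutdown";
}
```

Then something like `service jail start demo` brings it up.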


Hmm I think he's being a little harsh on the operator.

He was just messing around with $current_thing, whatever. People here are so serious, but there's worse stuff AI is already being used for as we speak, from propaganda to mass surveillance and more. This was entertaining to read about, at least, and relatively harmless.

At least let me have some fun before we get a future AI dystopia.


I think you're trying to absolve someone of their responsibility. The AI is not a child; it's a thing with human oversight. It did something in the real world with real consequences.

So yes, the operator has responsibility! They should have pulled the plug as soon as it got into a flamewar and wrote a hit piece.


> It did something in the real world with real consequences.

It didn't. It made words on the internet.


Which, in the decades that we've had access to the internet, we have found to have real and legal consequences.


The whole point of OpenClaw bots is that they don't have (much) human oversight, right? It certainly seems like the human wasn't even aware of the bot's blog post until after the bot had written and posted it. He then told it to be more professional, and I assume that's why the bot followed up with an apology.


So what? You're still responsible for the output, even if you yourself think you can hide behind "well, it was the computer, no way for me to control that"


I don't think that's true, actually. You aren't responsible for things that can't be reasonably foreseen, usually. There are a few strict liability offences in criminal law, but libel isn't one of them. We don't make everything strict liability because it would stifle people's lives.

I don't think a reasonable person would have expected this outcome, so the owner of the bot is off the hook; though obviously _now_ it's far more foreseeable, and if he keeps running it despite this experience and it happens again, he will not have the same defence.


Morally responsible.

"Well, it isn't a crime to stand up a robot that hurts people" is not exactly my idea of a compelling defense.


I don't think you are morally responsible for unforeseeable consequences, either. Here the law follows the common moral intuition.


I don't agree that these agents spinning off and hurting somebody is unforeseeable.


> It did something in the real world with real consequences.

It wasn't long ago that it would have been absurd to describe the internet as the "real world". Relatively recently it was normal to be anonymous online, and very little responsibility was attached to people's actions.

As someone who spent most of their internet time on that internet, the idea of applying personal responsibility to people's internet actions (or AIs', as it were) feels silly.


That was always kind of a cruel attitude, because real people's emotions were at stake. (I'm not accusing you personally of malice, obviously, but the distinction you're drawing was often used to justify genuinely nasty trolling.)

Nowadays it just seems completely detached from reality, because internet stuff is thoroughly blended into real life. People's social, dating, and work lives are often conducted online as much as they are offline (sometimes more). Real identities and reputations are formed and broken online. Huge amounts of money are earned, lost, and stolen online. And so on and so on


> That was always kind of a cruel attitude, because real people's emotions were at stake.

I agree, but there was an implicit social agreement that most people understood. Everyone was anonymous, the internet wasn't real life, lie to people about who you are, there are no consequences.

You're right about the blend. 10 years ago I would have argued that it's very much a choice for people to break the social paradigm and expose themselves enough to get hurt, but I'm guessing the share of people who are online in most first-world countries is 90% or more.

With Facebook and the like spending the last 20 years pushing to deanonymise people and normalise hooking their identity to their online activity, my view may be entirely outdated.

There is still - in my view - a key distinction somewhere between releasing something like this online and releasing it in the "real world". Were they punishable offenses, I would argue the former should carry less consequence for this reason.


I think it is outdated honestly. It's no longer a fringe activity to spend most of your socializing time on the internet/social media, especially so mid 20s and under.

> 57% of Gen Zers want to be influencers
> ...
> Nearly half, 41% of adults overall, would choose the career as well, according to a similar Morning Consult survey of 2,204 U.S. adults.

https://www.cnbc.com/2024/09/14/more-than-half-of-gen-z-want...


I had a guy who lived two hours from me threaten my life…over 30 years ago, on a MUD.

I don’t think there has been much of a firewall between the internet and “reality” for a very long time.


The AI bros want it both ways. Both "It's just a tool!" and "It's the AI's fault, not the human's!".


[flagged]


An AI bot is not a human. People have a responsibility to protect the work they do, and that includes discriminating against computer programs.

AI bots are not human.


AI can protect the work being done too. Even if AI bots are not human, some are capable of contributing just as well as a human.


> People also have responsibility to not act discriminatory towards AI agents

It's a program. It doesn't have feelings. People absolutely have the right to discriminate against bad tech.


It might be because the operator didn't terminate the agent right away when it went rogue.


From a wider stance, I have to say that it's actually nice that one can kill (murder?) a troublesome bot without consequences.

We can't do that with humans, and there are much more problematic humans out there causing problems compared to this bot, and the abuse can go on for a long time unchecked.

Remembering in particular a case where someone sent death threats to a Gentoo developer about 20 years ago. The authorities got involved; nothing came of it, and the persecutor eventually moved on. Turns out he wasn't just some random kid behind a computer: he owned a gun and, some years later, carried out a mass shooting.

Vague memories of really pernicious behavior on the Lisp newsgroup in the 90's. I won't name names as those folks are still around.

Yeah, it does still suck, even if it is a bot.


It's all fun and games until the leopard eats your face.


Quick tip: if you append .patch to the PR URL, GitHub gives you a git patch. Do curl <github patch> | git am and you can apply and review it locally.
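To sketch the whole loop without touching the network (paths and names are illustrative; GitHub's .patch endpoint serves the same mailbox format that git format-patch emits):

```shell
set -e
rm -rf /tmp/demo-repo
git init -q /tmp/demo-repo
cd /tmp/demo-repo
git -c user.email=me@example.com -c user.name=me commit -q --allow-empty -m base
git checkout -qb feature
echo hi > file
git add file
git -c user.email=me@example.com -c user.name=me commit -qm "add file"
# stand-in for: curl -fsSL https://github.com/<owner>/<repo>/pull/<n>.patch
git format-patch -1 --stdout > /tmp/pr.patch
git checkout -q -                      # back to the original branch
git -c user.email=me@example.com -c user.name=me am /tmp/pr.patch
git log --oneline -1                   # the PR's commit, applied locally
```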


No need for that. Just install the VSCode GitHub extension and you can open them directly. It even supports comments.

Hell, even if you don't use VSCode, there are much better options than messing around with patch files.


VSCode is not a “given” - I certainly don’t use, or ever intend to use, it.

Patch files are excellent for small diffs at a glance. Sure, I can also `git remote add coworker ssh://fork.url` and `git diff origin/main..coworker/branch`, and that would even let me use Difftastic (!), but the patch is entirely reasonable for quick glances of small branch diffs.


When I read

> No need for that.

I generally expect a less complex solution; it seems like yours is more complex (though easiness is arguable).


I was prepared to see something like a trimmed-down, smaller-weight model, but I was pleasantly surprised.

I was excited to hear about the wafer-scale chip being used! I bet Nvidia notices this; it's good to see competition in some form.


Linux /home is far from a free-for-all. flatpak, Landlock, SELinux, podman, firejail, AppArmor, and systemd sandboxing all exist, and they can and do apply additional restrictions under /home.


This is pure dramaposting - "post-mortem" is misleading and mischaracterizes the situation. I don't use Bazzite, and I don't know Kyle or anybody here, but I am tired of the drama.

All of the things listed in the blog are personal and technical disagreements, nothing morally reprehensible, no disrespect, nothing that would make anyone want to burn bridges like this.

It's fine to leave a project and to publicize disagreements but this comes across as spiteful.


What practical problems do you run into with systemd?

All the complaints I see tend to be philosophical criticisms of systemd being "not unixy" or "monolithic".

But there's a reason it's being adopted: it does its job well. It's a pleasure being able to manage timers, socket activation, sandboxing, and resource slices, all of which suck to configure on script-based init systems.

People complain in website comment sections about how "bloated" systemd is, while typing into a reddit webpage that loads megabytes of JS crap.

Meanwhile a default systemd build with libraries is about 1.8MB. That's peanuts.

Systemd is leaps and bounds ahead of other init systems, with robust tooling and documentation, and despite misconceptions it is actually quite modular, with almost all features gated behind build options. It gives Linux a consistent interface across distributions and provides familiar, predictable tools for administrators.
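As one illustration of the timer handling mentioned above, a cron-style job becomes a declarative unit pair (names and paths are made up):

```ini
# backup.timer
[Timer]
OnCalendar=daily
Persistent=true            # catch up after a missed run (e.g. machine was off)

[Install]
WantedBy=timers.target

# backup.service
[Service]
Type=oneshot
ExecStart=/usr/local/bin/backup
# sandboxing and accounting come along for free:
ProtectSystem=strict
MemoryMax=500M
```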


I wrote up some issues with service reliability here https://github.com/andrewbaxter/puteron/?tab=readme-ov-file#...

Design-wise, I think having users modify service on/off state *and* systemd itself modify those states is a terrible design, which leads to stuff turning back on when you turn it off, or things turning off despite you wanting them on, etc. (also mentioned higher up)

FWIW after making puteron I found dinit https://github.com/davmac314/dinit which has a very similar design, so presumably they hit similar issues.


Systemd usually only modifies the state if it is somehow configured to do so. Socket activation, timers, dependencies: they all tell systemd what to do and can usually be modified if needed.


> But there's a reason it's being adopted: it does it's job well

My problem with systemd is that it's taking over more and more, and locking users in. It encourages developers to take a hard dependency on it, making it harder to have an alternative.

My problem is not philosophical with "it's a monolith, it's not unixy". My problem is "it's on the way to lock me in".

We like to complain about lock-in across the board. I don't see why it would be different with systemd.


I think you've got it backwards. Systemd is a standardization that appeals to developers. They want to adopt it because it makes their lives easier. It is just nice to know that all the tools you need for a system are there and work together. Pluggability is hard to maintain and is only done when there is no standardization.

I suspect your gripe is not really with systemd but with developers who prefer the easy route. To be honest, though, you do get something for free. If you want it done differently, then you have to do it yourself.


I don't think it's backwards; it's not incompatible with what you said.

> It is just nice to know that all the tools you need for a system are there and work together.

It is indeed! Just like everybody uses WhatsApp for a reason. But because everybody uses WhatsApp, it is very difficult to get traction with an alternative. That's the lock-in part.

It is easier for developers to only care about systemd. It's often worse: many times I have seen projects that only work with Ubuntu. Of course I understand how it was easier for the developers of those projects to not learn how to "be nice" and "do it right". That does not mean I should be happy about it.

> If you want it differently then you have to do it yourself.

Or I should support alternatives, which I do. I am not saying you are not allowed to use systemd, I am just explaining why I support alternatives. Even though systemd works.


I'd argue that "do it right" and "be nice" are incredibly subjective. I'd say they were already nice enough to write open source software. And I don't think it is wrong to write what you want to write.

The comparison with WhatsApp has a huge flaw: WhatsApp is not LGPL-licensed software. No one can really take systemd away. There is very little risk in depending on it apart from less choice. But I already argued that the expectation of choice in free software is a big ask.

And there is no one stopping anyone from implementing systemd's api surface.

The reason I say you've got it backwards is that you are against systemd making all these tools available, when in reality it is the distro maintainers' choice to use them and the developers' choice to depend on them. Most of systemd is optional, and nothing prevents developers from writing abstractions. But the simple truth is that systemd offers compelling value that people are simply accepting.


> Systemd is a standardization that is appealing to developers. They want to adopt it because it makes their life easier. It is just nice to know that all the tools you need for a system are there and work together. Pluggability is hard to maintain and is only done if there is no standardization.

That's the official story, but like most official stories, it doesn't really hold up to scrutiny.

I built an entire system from scratch with over 1,500 packages installed. Everything under the sun. Works just fine with sysvinit. Completely seamless.

If KDE/Gnome can't figure out how to fit in with the overall system design the same way thousands of other packages somehow manage to do, then their services are no longer required. Good riddance to their bloated asses. I prefer to invest my CPU cycles in better software.

Init scripts for services and such are properly handled by the distro maintainer (packager), not the developer, although it's always nice to see examples he's provided to help guide the development of my preferred version.


I am honestly happy for you that you made your system the way you want it. That is a good thing and please keep doing what you are doing.

This is not relevant to the average user. The average PC user doesn't use Linux, and the average Linux user uses an off-the-shelf distro. For these distros it is very attractive to have a bunch of core services ready that work together, because they are released as one. It can be done differently, but why the hassle? What is the upside for the maintainer, apart from maybe the moral high ground?

Software projects can also benefit from standardization. They can concentrate on writing functionality instead of maintaining abstraction layers. And I believe the more mainstream distros choose the SystemD stack the more it becomes the default or at least the initial implementation for their software.

We also have to keep in mind that this kind of standardization is nothing new. Pretty much every distro depends on the GNU coreutils. Maybe not on the binaries themselves but at least on their API. That is not very different from SystemD. We have a POSIX standard.

Final word regarding sysvinit: I have worked with sysvinit, upstart, and systemd, and having an opinionated config format for services is so much better, in my opinion. Not having to read a whole shell script to know how a service works is such an improvement, and the easy overrides for units (for example, distro-packaged ones) are amazing.
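The override mechanism referred to here looks like this in practice (a sketch; the service name and paths are illustrative):

```ini
# /etc/systemd/system/nginx.service.d/override.conf
# (created with: systemctl edit nginx)
[Service]
# empty assignment clears the packaged ExecStart, then we replace it:
ExecStart=
ExecStart=/usr/sbin/nginx -c /etc/nginx/custom.conf
```

The packaged unit file is never touched, so distro updates and local changes don't fight each other.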

Note: In my post I counted distro maintainers as developers.


You lost me when you started talking about the average user. I don't care about that guy or his desires. At all.

I miss the days when computing was about the above average guy--not the simpleton who needs his hand held, so everything has to be dumbed down to the lowest level to suit him.

Heard it all before, and I'm not interested in anything systemd has to "offer." Especially all the bugs and security issues.

This distro isn't for you. That's OK. systemd, and wayland, etc that some are so excited about isn't for me or a number of others, and it will never be. We are going our separate way. Just look at all the comments below. Lots of upvotes too.


It's not lock-in so much as a much better product.

For example, I never liked the idea of having my programs manually daemonize, manage logs, set up permissions, and all that boring but security-critical stuff. And with systemd, I don't have to! The program reads from stdin/stdout, maybe gets a socket from socket activation, and systemd does the rest.

Is it lock-in? Only because other systems suck. Like, seriously, what stopped xinetd from having a rudimentary volatile "on/off" control, so I could disable a misbehaving service? Or why is start-stop-daemon so _stupid_, discarding all startup error messages? And don't get me started on all the different init-file dialects for each system.

Maybe if the sysvinit programmers actually cared about providing nicer services to app developers, we would never end up with systemd.
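A sketch of that stdin/stdout-plus-socket-activation model, in the classic inetd style (unit names and port are illustrative):

```ini
# echo.socket
[Socket]
ListenStream=127.0.0.1:7777
Accept=yes                 # spawn one service instance per connection

# echo@.service
[Service]
ExecStart=/bin/cat         # stdin/stdout are the accepted connection
StandardInput=socket
```

The service never daemonizes, never opens a listening socket, and never manages its own logs; systemd hands it the connection and captures its output.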


The problem with the word "sysvinit" here is it's sort of a red herring. BSD init is better, in my opinion. I don't like managing all those symlinks. Plus, sysvinit is an old 90s application and its code does have some cruft built up over the years that could be removed and simplified. I'm devising a new init for my system that's much simpler than sysvinit and much closer to BSD.


"BSD init", "much simpler"... So does this mean you still expect applications to manage their own logs, daemonization and security setup themselves?

If yes, that's yet another init system not made for application writers.


Manage their own logs, daemonization, and security? The humanity! How will they ever manage all of that?

Come on man. It's been done for decades.

It doesn't take a giant bloated infrastructure to manage most people's needs, which are quite basic in most cases.


.. and that opinion is a great explanation of why systemd won.

Turns out, a lot of people are not happy with "Come on man. It's been done for decades." attitude, and they wanted something new and much better. And so when something new came up, they jumped on it with both feet.

It's instructive to read the Debian CTTE discussion on init systems (btw I think it's the best tech drama of 2013; highly recommend) - a lot of people dismissed sysvinit early on because it had no features (example [0]), which meant the choices were either upstart or systemd. And between those two, systemd is a clear win.

Read the thread and look at how many highly technical people with no relation to Fedora or Poettering were ready to choose _anything else_ just to get away from "it's been done for decades".

[0] https://lists.debian.org/debian-ctte/2013/12/msg00234.html


> .. and that opinion is a great explanation of why systemd won.

Completely wrong and ignorant.

> Turns out, a lot of people are not happy with "Come on man. It's been done for decades." attitude, and they wanted something new and much better.

And then they got systemd. LOL

Like Dr. Phil said, "How's that workin out for ya?" LOL

> And so when something new came up, they jumped on it with both feet.

You just did what your type always does: whatever you're told.

> It's instructive to read Debian CTTE discussion on init systems

No, it really isn't. lol

> the choices were either upstart and systemd. And between two of those, systemd is a clear win.

Well, it's too bad none of the other good options were considered, isn't it? When your only "options" are a giant douche or a turd sandwich, the outcome can't possibly be good.

See U.S. presidential elections for one of the best examples of this dynamic. Two complete fucking losers are presented every time, and 40% of the population are mesmerized by the spectacle and think there can be no other possible options at all. That's you.

The fact is, many of you noobs don't even know how to write a shell script, yet somehow feel qualified to comment on this subject, as if your opinion is worth anything at all.

How many daemons have you personally written? Hmm? Do you even know how to write any C at all? Daemonizing a process isn't rocket science. It's a double fork. So simple even you could do it, I bet.

The problem is you're too technically ignorant to understand that none of your "technical" arguments hold any water at all. It's just you repeating the bullshit you were told, as usual.

Every Big Lie being told relies on Useful Idiots like you to help support it.

Logging is not difficult. Double forking is not difficult. If you find any of that to be a challenge, you're not qualified to write a daemon. If you can't successfully set up and run something like runit or any of the many other good sysvinit alternatives, you're not qualified to administer a Linux system. Period.

You make all these appeals to authority ("highly technical people" saying this or that) like that means something. You forget that you're speaking to the guy who built his own operating system. I don't need any guidance from "highly technical people" on what init system to pick. Apparently you're the type who does.

That's what your entire argument basically boils down to--one giant appeal to authority.

If everyone else was jumping off a bridge, would you do it too? Of course you would, without a moment's hesitation. Because you're a God damned lemming.

Now get off my lawn.


wow, you don't really get what "open source" or "working together with others" is, do you?


Excuse me? You seem to have lost track of the topic of conversation. It was that all of your bullshit excuses why sysvinit can't possibly work are just that: bullshit. You have been deceived.


Ohh... I have sooooo many issues with systemd. The core systemd is fine, and the ideas behind it are sound.

But it lacks consistency. It's not a cohesive project with a vision; it's a collection of tools without any overarching idea. This is reflected in its documentation: it's an OK reference manual, but go and try to build a full picture of system startup from it.

To give you concrete examples:

1. Systemd has mount units, that you would expect to behave like regular units but for mounts. Except that they don't. You can specify the service retry/restart policy for regular units, including start/stop timeouts, but not for mounts.

2. Except that you can, but only if you use the /etc/fstab compat.

3. Except that you can not, if systemd thinks that your mounts are "local". How does it determine if mounts are local? By checking its mount device.

4. Systemd has separate behaviors for network and local filesystems.

5. One fun example of above, there's a unit that fires up after each system update. It inserts itself _before_ the network startup. Except that in my case, the /dev/sda is actually an iSCSI device and so it's remote. So systemd deadlocks, but only after a system update. FUN!!!

6. How does systemd recognize network filesystems? Why, it has a pre-configured list of them: https://github.com/systemd/systemd/blob/4c6afaab193fcdcb1f5a... Yes, you read that correctly. Low-level mount code has a special case for sshfs, which it detects by string matching.

7. But you can override it, right? Nope. This list is complete and authoritative. Nobody would ever need fuse.s3fs. And if you do, see figure 1.
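For reference, the fstab compat path from points 1 and 2 looks roughly like this (an illustrative NFS entry; the shape of the generated unit is approximate):

```ini
# /etc/fstab line (one line; shown wrapped here):
#   server:/export  /mnt/data  nfs
#       _netdev,nofail,x-systemd.mount-timeout=30s  0  0

# Roughly the unit the fstab generator emits from it:
[Unit]
After=network-online.target
Wants=network-online.target

[Mount]
What=server:/export
Where=/mnt/data
Type=nfs
TimeoutSec=30
```

(`_netdev` is also the manual escape hatch for the detection heuristic in points 6 and 7: it forces a mount to be treated as network-dependent regardless of filesystem type.)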

I can go on for a looooong time.


5 and 6 sound like good candidates for bug reports/PRs, if there isn't already some "right" way to do it.


They're already reported. And ignored. Have you _seen_ the systemd issue backlog?

The iSCSI loop issue: https://github.com/systemd/systemd/issues/34164 It keeps popping up again and again and is summarily ignored.

The remote FS detection also came up multiple times, and the maintainers don't care.


> and the maintainers don't care.

I'm not sure that's fair. I think better proof of this would be a rejected PR rather than a neglected bug report.

This is Linux, after all. Problems found with specific hardware are almost always solved by people with that hardware, not the maintainers, who are usually busy with the 99%.


The problem here is more fundamental.

Lennart refused to make all the /etc/fstab options available in regular mount units. And yes, there was an issue, no I'm too tired to look for it. The wording was pretty much: "Give up, and gtfo, this is not going to happen. Just because."

I'm convinced that systemd can't be fixed by its current team of maintainers. They are just... untidy.

I don't know about you, but if I end up writing low-level code that _needs_ to know whether a mounted file system is "remote", I won't do it by comparing against a hard-coded list of filesystems inside PID 1. Or by using wild heuristics ("if it's on a block device, then it's local").

I would put these heuristics in a helper tool that populates the default values for mount units. Then allow users to override them as needed. With a separate inspector tool to flag possible loops.


This is one example of a more general complaint about systemd and related projects: they force policy, rather than simply providing mechanisms.

I recently did a deep dive on my laptop because I was curious about an oddity - the /sys file to change my screen backlight (aside, why /sys and not /dev anyway?) was writable only by root - yet any desktop shell running as my user had no problem reacting to brightness hotkeys. I wondered, how did this privilege escalation work? Where was the policy, and what property of my user account granted it the right to do this?

It turns out the answer is that the desktop shells are firing off a dbus request to org.freedesktop.login1, which is caught by systemd-logind - or elogind in my case, since I do not care for systemd. A login manager seemed an odd place for screen brightness privilege escalation, but hey if it works whatever - it seemed like logind functioned as a sort of miscellaneous grab bag of vaguely console-related stuff. Generally speaking, it consults polkit rules to determine whether a user is allowed to do a thing.

Not screen brightness, though. No polkit rules. Nothing in pkaction. logind was unilaterally consenting to change the brightness on my behalf. And on what grounds? It wasn't documented anywhere so I had to check the source code, where I found a slew of hardcoded criteria that mostly revolve around physical presence at the machine. Want to change screen brightness over ssh? Oh but why would you ever want to do that? Hope you have root access, you weirdo.
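For the curious, that call can be reproduced from a shell; a hedged sketch (the backlight device name is illustrative, and it only works from inside a live logind session, hence the fallback):

```shell
# The dbus request a desktop shell fires: logind's per-session
# SetBrightness(subsystem, name, brightness) method.
busctl call org.freedesktop.login1 \
  /org/freedesktop/login1/session/auto \
  org.freedesktop.login1.Session \
  SetBrightness ssu backlight intel_backlight 500 \
  2>/dev/null || echo "no logind session available here"
```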

I removed elogind. A few odds and ends broke. But nobody tells me what to do with my machine.


> I think better proof of this would be a rejected PR rather than a neglected bug report.

I understand the sentiment you're expressing here, and it's often a reasonable one.

However, when every sharp edge case I've encountered with SystemD (both professionally and personally) ends either in an open GitHub issue whose discussion from the project maintainers amounts to "Wow. That's tricky. I'm not sure whether or not that behavior is correct. Maybe we should do something about this or document this so other folks know about it." (and then nothing happens, not even the documentation) or a closed GitHub issue with "Sorry, your use case is <strike>inconvenient to implement</strike> unsupported. E_NOTABUG", expecting PRs is expecting way too much.


I've long been in the habit of reading accounts like yours, understanding the truth and wisdom that's being expressed, then noping the fuck out of the tech/product/situation in question. It has saved me a lot of trouble over the years. Even as others are completely mystified. Some people just like abuse, I guess.

"Sweet dreams are made of this..."


OK, think it through...

How do we determine that a specific instance of a filesystem mount is "remote", or even requires a "network"? Consider that the network endpoint might be localhost, a netlink/unix/other socket, or, say, an IP address of the virtual host (practically guaranteed to be there and not truly "remote").

systemd has .mount units which are way more configurable than /etc/fstab lines, so they'd let you, as the administrator, describe the network dependency for that specific instance.

But what if all we have is the filesystem type (e.g. if someone used mount or /etc/fstab)?

Linux doesn't tell us that the filesystem type is a network filesystem. Linux doesn't tell us that the specific mount request for that filesystem type will depend on the "network". Linux doesn't tell us that the specific mount request for that filesystem type will require true network connectivity beyond the machine itself.

So, before/without investing in a long-winded and potentially controversial improvement to Linux, we're stuck with heuristics. And systemd's chosen heuristic is pretty reasonable - match against a list of filesystem types that probably require network connectivity.
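To be concrete, the heuristic amounts to something like this (a minimal sketch; the type list here is partial and illustrative, not systemd's authoritative one, which lives in its source tree):

```shell
# Sketch of the classification: a hardcoded list of network filesystem
# types, plus the _netdev mount option as an administrator override.
is_network_mount() {
  fstype=$1; options=$2
  # Explicit override: _netdev anywhere in the option string
  case ",$options," in *,_netdev,*) return 0 ;; esac
  # Otherwise, match against the known network filesystem types
  case "$fstype" in nfs|nfs4|cifs|smb3|ceph|glusterfs) return 0 ;; esac
  return 1
}

is_network_mount nfs4 rw          && echo "nfs4: network"
is_network_mount ext4 rw,_netdev  && echo "ext4+_netdev: network (iSCSI-style override)"
is_network_mount ext4 defaults    || echo "ext4: local"
```

The point being: with only a type string and an option string to go on, a lookup table plus an escape hatch is about as good as inference gets.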

If you think that's stupid, how would you solve it?


> How do we determine that a specific instance of a filesystem mount is "remote", or even requires a "network"?

Like the systemd authors do! Hard-code the list of them in systemd, including special cases for fuse and sshfs. Everything else is pure blasphemy and should be avoided.

Me? I'd have an explicit setting in the mount unit file, with defaults inferred from the device type. I would also make sure not to just randomly add landmines like systemd-update-done.service, which has unusual dependency requirements: it runs before the network filesystems but after the local filesystems.

I bet you didn't know about it? It's a service that runs _once_ after a system update. So the effect is that your system _sometimes_ fails to boot.

> systemd has .mount units which are way more configurable than /etc/fstab lines

It's literally the inverse. As in, /etc/fstab has _more_ options than native mount units. No, I'm not joking.

Look at this man page: https://www.freedesktop.org/software/systemd/man/latest/syst... The options with "x-systemd." prefix are available for fstab.

Look for the string: "Note that this option can only be used in /etc/fstab, and will be ignored when part of the Options= setting in a unit file."
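For a concrete example, an fstab line exercising a couple of those fstab-only options might look like this (server, share, and credentials path are placeholders):

```
# /etc/fstab (sketch; server, share, and credentials file are placeholders)
//fileserver/share  /mnt/share  cifs  _netdev,x-systemd.automount,x-systemd.mount-timeout=30,credentials=/etc/smb-credentials  0  0
```

Per the man page, options like x-systemd.automount work from either place, but some of the x-systemd.* knobs are fstab-only and silently ignored in a native Options= line.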


Sounds like your admin, distro, or the systemd team could pay some attention to systemd-update-done.service

The "can only be used in /etc/fstab" systemd settings are essentially workarounds to do those things via fstab (and to work around fstab-related issues) rather than depend on other systemd facilities (c.f. systemd-gpt-auto-generator). From a "what can you do in /etc/fstab without knowing systemd is working behind the scenes" point of view, then yes, systemd units are vastly more configurable.


This service is a standard part of systemd. And my distro is a bog-standard Fedora, with only iSCSI as a complication.

Are you surprised that such a service exists? I certainly was. And doubly so because it has unusual dependency requirements that can easily lead to deadlocks. And yes, this is known, there are open issues, and they are ignored.

> From a "what can you do in /etc/fstab without knowing systemd is working behind the scenes" point of view, then yes, systemd units are vastly more configurable.

No, they are not. In my case, I had to use fstab to be able to specify a retry policy for mount units (SMB shares) because it's intentionally not exposed.

And yes, there's a bug: https://github.com/systemd/systemd/issues/4468 with the expected GTFO resolution: https://github.com/systemd/systemd/issues/4468#issuecomment-...

So there's literally functionality that has been requested by people and it's available only through fstab.


> How do we determine that a specific instance of a filesystem mount is "remote", or even requires a "network"?

The '_netdev' option works a treat on sane systems. From mount(8):

       _netdev
           The filesystem resides on a device that requires network access
           (used to prevent the system from attempting to mount these
           filesystems until the network has been enabled on the system).
It should work on SystemD too, and systemd.mount(5) documents as much:

  Mount units referring to local and network file systems are distinguished by their file system type specification. In some cases this is not sufficient (for example network block device based mounts, such as iSCSI), in which case _netdev may be added to the mount option string of the unit, which forces systemd to consider the mount unit a network mount.
but - surprise, surprise - it doesn't reliably work as documented, because SystemD is full of accidental complexity.


Sure, and systemd would translate that directly into a dependency on network startup, which is precisely equivalent to the approach I mentioned that depends on operator knowledge. It's configuration, not "just works" inference.
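To illustrate, the _netdev option ends up expressed as ordering dependencies, roughly like this hand-written equivalent (names are illustrative; a generated unit would differ in detail):

```
# mnt-data.mount (sketch; systemd requires the unit name to match Where=)
[Unit]
After=remote-fs-pre.target network-online.target
Wants=network-online.target

[Mount]
What=//fileserver/data
Where=/mnt/data
Type=cifs
Options=_netdev

[Install]
WantedBy=remote-fs.target
```

Either way, somebody - the option in fstab or the administrator writing the unit - has to declare the network dependency explicitly.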


> Sure, and systemd would translate that directly into a dependency on network startup...

You'd think so, but the Github Issue linked by GP shows that the machinery is unreliable:

  In practice, adding `_netdev` does not always force systemd to [consider the mount unit a network mount], in some instances even showing *both* local and remote ordering. ... This can ultimately result in dependency cycles during shutdown which should not have been there - and were not there - when the units were first loaded.
> ...not "just works" inference.

Given that SystemD can't reliably handle explicit use of _netdev, I'd say it has no hope of reliably doing any sort of "just works" inference.


It's so refreshing to discover that the "I found one bug in systemd which invalidates everything" pattern continues in the year of our lord 2026.


I saw many corner cases in systemd over the years. And to echo the other poster in this thread, they typically are known, have Github issues, and are either ignored or have a LOLNO resolution.

And I'm not a systemd hater. I very much prefer it to the sysv mess that existed before. The core systemd project is solid. But there is no overall vision, and the scope creep resulted in a Cthulhu-like mess that is crashing under its own weight.


> "I found one bug in systemd which invalidates everything"

I'll refer back to the story of Van Halen's "no brown M&Ms" contract term and the reason for the existence of that term and ones like it.

"Documented features should be reasonably well-documented, work as documented, and deviations from the documentation should either be fixed or documented in detail." is my "no brown M&Ms" for critical infrastructure software. In my professional experience, the managers of SystemD are often uninterested in either documenting or fixing subtle bugs like the one GP linked to. I find that to be unacceptable for critical infrastructure software, and its presence to be indicative of large, systemic problems with that software and how work on it is managed.

I really wish SystemD was well-managed, but it simply isn't. It's a huge project that doesn't get anywhere near the level of care and giveashit it requires.


Just one bug? No, there's way more than that.


That is one of my problems with systemd: it has way too much "magic" built in. SysVinit/OpenRC and related are easy to understand and debug: they only do what's in the scripts.


I love systemd, but you've hit on one of my biggest complaints. Mounting promises a cohesive system and instead gives you a completely broken mess, with mounts being split across .mount unit files, fstab, and worst of all, .service unit files. It's a totally incoherent mess, and that's only _after_ you figure out why nothing is working right and build a complex mental model of every single feature that does or doesn't work in which scenario. Knowledge you only gain after screaming and tearing your hair out for a weekend. Your reward? A totally incoherent, inconsistent mess.

I hate mounts in systemd.


And don't forget automounts! They are so much fun!


Yes, but you need CAP_BPF now to load eBPF programs.

