
How about just shipping a networkd.conf(.d) config snippet and a .network file with the desired defaults? This is just another instance of I-hate-systemd-for-no-reason whining.
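For instance, a distribution could drop a snippet like this into /usr/lib/systemd/network/; the file name, match pattern and chosen defaults here are purely illustrative:

```ini
# /usr/lib/systemd/network/80-distro-defaults.network (hypothetical)
[Match]
Name=en*

[Network]
DHCP=yes
IPv6AcceptRA=yes

[DHCPv4]
UseDNS=yes
UseNTP=yes
```

Admins can then override it with their own files in /etc/systemd/network/ without touching the shipped defaults.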


Networkd is still missing the most crucial part of the DHCP protocol: the DHCP OPTIONS leg of the negotiation stage.


You've shown up twice to neg about this, but this isn't a problem with systemd-networkd's DHCP client.

Right now there's no rules engine for the DHCP server to dynamically generate options. Sure, you can't embed Lua scripts in the server at this point.

But bro, this has nothing to do with the discussion here. The DHCP client works fine, has no such concerns or issues. Stop blowing smoke.


I have displaced the ISP-supplied DSL/ADSL/SDSL router at many premises using DHCP OPTIONS, with FreeBSD/Linux and ISC dhclient; and that is the client side of DHCP.

And ISPs like to have their custom routers (which are DHCP clients, BTW) respond with all sorts of funky ISP-specific DHCP key/value sets.
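For illustration, ISC dhclient can both send and request such ISP-specific options; everything below (option names, codes, values) is a hypothetical sketch, not any real ISP's requirements:

```conf
# /etc/dhclient.conf -- sketch; real ISPs use their own codes and values
send vendor-class-identifier "SomeISP-CPE-1.0";
send dhcp-client-identifier "00:11:22:33:44:55";

# define the ISP's custom option before requesting it
option someisp-acs-url code 150 = text;
request subnet-mask, routers, domain-name-servers, someisp-acs-url;
```

It's exactly this kind of knob that matters when replacing the ISP's own CPE with your own box.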

Not just smoke. FIRE!


> This is just another instance of I-hate-systemd-for-no-reason whining

This is a person actually examining it and specifically spelling out reasons why it's unsuitable.


> This is just another instance of I-hate-systemd-for-no-reason whining

systemd appears to (whether deliberately, or through incompetence) ignore kernel configuration and do its own thing. That's a good reason to complain, don't you think?


> P.S. I actually run systemd-nspawn in production, but I am probably the only person on earth to do so.

You're not alone, systemd-nspawn is very much underrated. I have used it a lot for machine containers, though I'm using podman+quadlet+systemd more right now. systemd-nspawn with mkosi for generating workload images is still a nice & powerful ecosystem.


I would consider PIV and SSH through PIV/OpenPGP legacy and undesired nowadays. If you're only interested in state of the art second factor instead of passwords for sensitive use cases, a simple FIDO2 security key w/o all the extra features on a yubikey 5 is enough.

You can solve most of those with only FIDO2 nowadays:

Webauthn with fido/u2f is supported on most websites and oidc providers.

SSH with FIDO and resident / non-resident keys is supported.

PAM -> as documented in the guide, although setting origin and type manually isn't necessary and you can save keys in ~/.config/Yubico so non-root users can manage their keys. I would recommend enabling PIN verification with pamu2fcfg --pin-verification.

LUKS hard disk encryption with FIDO2 for unlocking isn't covered but is possible; systemd-cryptenroll can set this up on modern Linux distributions.
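A rough sketch of the commands behind the FIDO2-only setup described above; device paths are examples, and all of these need an attached security key (the cryptenroll step also needs root):

```shell
# SSH: generate a FIDO-backed key (add "-O resident" for a resident key)
ssh-keygen -t ed25519-sk -O verify-required

# PAM: register the key for pam_u2f with PIN verification enabled,
# stored in the user's own config so no root is needed to manage keys
mkdir -p ~/.config/Yubico
pamu2fcfg --pin-verification > ~/.config/Yubico/u2f_keys

# LUKS: enroll the token against your LUKS device (example path)
systemd-cryptenroll --fido2-device=auto /dev/disk/by-partlabel/root
```

The pam_u2f module then needs a matching `auth` line in the relevant PAM service file, as the guide documents.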


> Webauthn with fido/u2f is supported on most websites and oidc providers.

I wish that were true. I’ve found that WebAuthn is becoming more common in the last year, but it is still relatively rare. Many “important” sites and services make use of it. https://www.yubico.com/works-with-yubikey/catalog/ is a great place to see them, but they’re still quite rare as a whole.


While I have a YubiKey NEO with all the features, I prefer the simple FIDO security key. Everything you could want, apart from legacy/special use cases, can be achieved with FIDO.

websites -> fido/u2f
ssh -> native FIDO support in ssh-keygen
login -> FIDO2 for Windows, libpam-u2f for Linux
LUKS encryption -> systemd-cryptenroll


I'm in the same situation (50 Mbps uplink at one place, 100Mbps at another) and that's enough to do all hosting for hobby projects at home, which I really love.

Instead of exposing my home directly using DynDNS, I got a really cheap low-end server (currently a VM with 1 CPU, 512 MB RAM and a 400 Mbps flat-rate connection) for 1€/month that proxies all traffic to the (hidden) servers hosted at home.

The spec is enough to reliably run HAProxy, which can max out the available bandwidth without breaking a sweat, and it allows me to do failover between servers in my two "home datacenters" plus possibly cache assets.
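A minimal sketch of what such a setup could look like; the hostnames and the TCP-passthrough choice are illustrative assumptions, not the poster's actual config:

```conf
# haproxy.cfg -- TCP passthrough with failover between two home sites
frontend https_in
    bind :443
    mode tcp
    default_backend home

backend home
    mode tcp
    option tcp-check
    server site_a home-a.example.net:443 check
    server site_b home-b.example.net:443 check backup
```

In `mode tcp` the proxy never terminates TLS, so the VPS operator can't see the plaintext traffic.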


I use the Oracle Free Tier Arm instances for the same purpose! 4 cores, 24gb RAM, 200gb disk with unlimited 0.5gbps uplink, completely free forever.


many thanks, I managed to miss this one


This is the way.

I don't use it for home services (I don't have any), but I have used it in production for a couple of businesses for a decade.

Another advantage of this solution is that you can have ANY Internet connection, even 3G/4G/99G (and switch between your connections); your clients will still have the same IP to connect to.

With a proper provider and configuration you can even host MS Exchange there.


Why not just use Cloudflare for this?


No specific reason against Cloudflare, just a DIY attitude in this case.

HAProxy is fun, and I also run it as a TCP proxy, so HTTPS is terminated on my (hidden) home server and I don't need to trust the proxy server; I guess that's not possible with Cloudflare.


May I ask where you rent your ultra-cheap server from?


I'm using a Ionos (oneandone) VPS S: https://www.ionos.de/server/vps

1&1/Ionos is one of the largest and best-connected ISPs and hosting providers in Germany, not some small shady shop. I see they doubled the price to 2€ though :)


I made a WordPress site for a friend through Ionos's managed offering. After a couple of months of the site being up, they decided to detach the database for literally no reason. I had to go through support to get it back up. I'm guessing it got erroneously flagged as over its quota, but they couldn't explain why it happened. Just my experience.


I've had nothing but hell from 1&1/ionos ... if your billing has issues, be prepared to be treated like a criminal with few paths to easy resolution.


To be fair, when you pay 1-2€/month for something, you cannot really expect human/timely interaction. Paying a single support employee to look into your case for 10 minutes already erases all the revenue they've made from you over the past years.


Also, ain't this the introductory temporary price for the first six months?


Currently, it is. The "S" offering is at 2€/month long term. I'm still on a "S" contract that stays at 1€.

Can't complain about 1&1; I also had dedicated 1U boxes there for a long time without issues, but I understand the grief in case something goes wrong.

I host DNS elsewhere so I can relocate such a simple proxy within a few minutes; that's what my disaster recovery plan looks like.


Seems to be a common theme with all the "big" hosting providers. Especially if they offer cheap services. They have an abuse problem so they set up automation to deal with it and they don't care about collateral damage; you're just a number, and not worth their time. Actual proper support costs more than race to the bottom allows for. Same deal as with Google et al, one day you might just get fucked.

Turns out finding a good, reliable hosting provider who isn't a tyrant, gives you the benefit of the doubt and treats you like a human when something's up is really, really hard. And probably not very cheap. It's kinda sad that one could host all kinds of interesting things on a $5 device like a Raspberry Pi Zero, but then you have to pay through the nose, twice, and every month, to get proper internet access.


While these tutorials are a great way to teach system administration to those who want to start tinkering, I wish people would put more energy into building well crafted firmware images using frameworks like openembedded or buildroot and teaching people how to use those.

Most of the Raspbian-based tutorials or images out there that treat the Raspberry Pi like a normal server are just going to trash the SD card or fail because system state mutates in unexpected ways.

Start building immutable images that hold temporary data strictly in RAM.


That sounds actually like a good idea. Do you have a resource on how to get started?


While it's true that Buildroot/Yocto-based immutable images are a lot more resilient and prevent regular SD-card death, for most one-off uses you can get 90% of the way by simply using the "overlay file system" option in raspi-config, already built into Raspberry Pi OS.

Essentially, you start with your Raspberry Pi OS, configure it the way you want it (install services etc.), and once you are done, just run "sudo raspi-config", select "Performance Options" and under that enable "overlay file system" (also select "read-only /boot" when asked). Reboot when prompted to complete the setup.

This will cause all changes to go to a temporary RAM-backed filesystem, and these changes will be lost on reboot. Most importantly, this means your SD card won't be written to at all during normal operation. Do note that if you are using one of the older RPis with 1 GB RAM, you might face issues with RAM availability, depending on the amount of changes you make while the overlay is enabled. RPi 4 variants with 4 GB/8 GB RAM work really well, though.

If you do need to make persistent changes, just repeat the process: run "sudo raspi-config", disable the overlay and read-only /boot, reboot, make your changes, then re-enable the overlay. It is a good idea to do an apt update/upgrade every month or so after disabling the overlay.

Another thing you can do is to simply use USB sticks or USB drives as boot media (on the RPi 4). Those have much better lifetimes than SD cards, and are much faster as well.

While this does not compare to the performance/speed/safety/etc. of a fully custom Buildroot/Yocto image, it's a good compromise considering it's almost effortless.

Shameless plug: I build such custom OS images for RPI and other SBCs for a living.


> Another thing you can do is to simply use USB sticks or USB drives as boot media (on RPI4). Those have much better lifetimes than sdcards, and are much faster as well.

> Shameless plug: I build such custom OS images for RPI and other SBCs for a living.

I haven't seen anyone using an RPi to USB-boot other Pis via a USB hub. Do you think that's possible? :-)


That needs the "boot host" RPi to be a USB device or USB OTG; AFAIK, only the Pi Zero and Pi 4 have USB OTG. These can emulate a USB storage device via the USB gadget subsystem. Additionally, only the RPi 4 has USB boot capability, and it does not work with all USB devices.

More importantly, something like this can only be hooked up to _one_ boot device, so the USB hub and multiple Pis are a no-go. I don't see any advantage compared to just using a USB stick instead of making a Pi Zero/RPi 4 pretend to be a USB stick.

You should look into network booting the multiple Pis, if that's suitable for your use case. You will still need an SD card in each Pi to provide the network bootloader, but once the boot is done, the SD card isn't used anymore (until the next boot).


I have some buildroot external trees on github that build images using github actions. It's for personal stuff I needed and only need to update occasionally.

The Buildroot manual is fantastic and it's worth working through the getting started section to get an idea. It boils down to creating a br_external tree that contains everything necessary to create a custom sdcard image as documented here:

https://buildroot.org/downloads/manual/manual.html#outside-b...
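The build itself boils down to pointing Buildroot at the external tree; the defconfig name and paths below are placeholders for whatever your tree defines:

```shell
# fetch Buildroot itself (the external tree lives alongside it)
git clone https://git.buildroot.net/buildroot

# select a defconfig provided by the external tree, then build
make -C buildroot BR2_EXTERNAL="$PWD/my-external-tree" my_rpi4_defconfig
make -C buildroot

# the flashable image ends up under buildroot/output/images/
```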

Building images from a br_external tree is pretty trivial, see the gitub actions in these example repos:

This builds a 64-bit Raspberry Pi 4 image for Tvheadend (I'm using this image for a SAT-IP TV dish with Kodi clients in my sister's house; so far no complaints about a crashed TV server after 1.5 years of uptime). This image runs the whole rootfs from initramfs without mounting a persistent root filesystem. I don't mind the additional ~150 MB of RAM that this uses:

https://github.com/markuslindenberg/dvbheadend

My most recent Buildroot-based Raspberry Pi image is a 32-bit image pulling the binary distribution of openHAB and its recommended JRE into the image, running them from a read-only root filesystem. I'm using this to reliably run openHAB home automation in multiple places. This repo is also a br_external tree and embarrassingly doesn't have a README yet; I really need to write one because I think it's quite useful and mature.

https://github.com/markuslindenberg/habfw


Speaking of building immutable images - I want to do that for one of my projects, but scanning the docs for buildroot and yocto, I don’t see anything about applying updates - which is the part where I most want an off-the-shelf battle-tested solution so I won’t need to fly out and fix it in person.

Specifically, I want to have a disk image with two root partitions - by default (ie, what I flash to the SD card) the first partition has a read-only root FS and the second is empty. When an upgrade is ready, the system downloads the update from the internet, writes it to the second partition, and reboots - if the reboot is successful and the system passes health checks, it marks partition 2 as the default. If something goes wrong, it reboots again back into the original partition. (Then when v3 comes out, v2 downloads and writes it to the first partition, etc)

I keep thinking “surely this must be a solved problem?”, but I can find very little information about it; and the few things I can find are proprietary cloud-based management systems, when I’d much rather have my image hard-coded to poll an update-feed-URL that I control myself...


That is what Android is doing. You can read up on it here: https://source.android.com/devices/tech/ota/ab?hl=en


That is not for the faint of heart though. Here's my vendored Buildroot for an IRC bouncer on an SBC.

https://git.drk.sc/Derecho/irc-sbc-buildroot


You might want to consider using the BR2_EXTERNAL feature[1] to keep your customizations outside of the buildroot tree.

[1]: https://buildroot.org/downloads/manual/manual.html#outside-b...


I wonder why systemd-nspawn isn't mentioned/used more. I've been using it for years for full system containerization and I'm really happy with how lightweight and functional it is. No need for additional LXC/LXD layers.


The point of LXD is the multi host clustering. So, if you don't need that, yes, there are simpler solutions.


It's because it's terribad.


Care to elaborate for people who don't already know what you're talking about?


WG exposes a point to point / l3 network interface like any other to userspace, so an answer would not be specific to wireguard but about networking and routing in general.

Network namespaces and VRFs are the correct way to approach this, I think: https://www.kernel.org/doc/Documentation/networking/vrf.txt
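For the namespace approach, the idea is to move the WireGuard interface into its own netns so everything in that namespace routes through the tunnel; a minimal sketch (interface name, namespace name and addresses are examples, and this needs root):

```shell
# create a namespace and move a WireGuard interface into it
ip netns add vpn
ip link add wg0 type wireguard
ip link set wg0 netns vpn

# configure inside the namespace; processes started in it
# only see wg0 (plus loopback) and use its routes
ip -n vpn addr add 10.0.0.2/24 dev wg0
ip -n vpn link set wg0 up
ip -n vpn route add default dev wg0
```

WireGuard's crypto endpoint keeps working after the move because the encrypted UDP socket stays bound in the namespace where the interface was created.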


sure. freenode for free and open projects and hackint for everything ccc / chaos community related.


That's your most pressing question?


I've read the whole blog announcement and couldn't find anything but PR stuff for investor consumption.

The garage thing was too much for my taste! It's like trying to forcibly resurrect Apple or HP's first years in the 2020s.

Do people still need physical presence in a garage to get things done? Working remotely is now an established practice...


I imagine hardware design doesn't have as much established success as writing software in a "remote worker" set up.


True, but was any hardware prototyping or design actually happening in that garage? Or were they just using the garage to write blog posts and record a podcast?


They have a cult of personality going that they're trying to monetize. Nobody following this cares about a product.


I'm following it and I do care about better big hardware.

