
What's the actual use case for rpi clusters like this? If the goal is to learn Docker or k8s on a realistic setup involving multiple machines, can't that be achieved with multiple VMs on one physical desktop instead of a cluster? The 7-node cluster has only 7GB of RAM in total, which is not much to ask of a desktop.

I don't know what people do with rpi clusters, and I'm honestly puzzled why people bother building them. If you get a beefy, expensive server, I can understand -- you care about performance. If you get a single rpi to serve files & media on a home LAN, it makes sense too -- low footprint, low power and low cost. But I don't know what an rpi cluster excels at. Is it just a toy? Or a learning tool to make computer science & networking concepts concrete and tangible? Or just a way to get a computer system on a shoestring budget, much cheaper than even a single desktop or laptop?

BTW, computer hardware in Singapore is overpriced. $140 for a switch in Singapore vs $73 for a similarly-capable switch on Amazon. https://www.amazon.com/TP-Link-Ethernet-Unmanaged-Rackmount-...



The initial point was to build it just because; there was no real reason behind it. But at this point, I've found quite a lot of use cases for it, specifically hosting web services for family and friends. In the process, though, I've learnt to work under extreme resource/budget constraints, and that alone has been invaluable imo.


What storage do you use for the web services, given that SD cards are unreliable? Do you have one disk per node, or do they share one disk over the network, or something like that?


I’ve been running a 4-node rpi cluster at home for about a year and a half now. Originally it was purely for the fun and interest of standing up a tangible (Kubernetes) cluster (like others have said, VMs just aren’t the same thing...). Now I use it all the time to run things on my home network that I don’t want to pollute my real computers with, e.g. IoT services like MQTT and a heap of one-off weekend experiments.

I also use it as a kind of reverse proxy from the outside internet to whichever computer I need to hit internally - just deploy a new nginx pod with a simple reverse proxy config, and Traefik and some other microservices I have running will go ahead and create Let's Encrypt certs and DNS records for me.

Oh, and I also have some Arduino/hardware sensors hooked up to the GPIO pins on different nodes as well. Taint a node to say it has sensor X attached, and k8s runs the pods that need that sensor on the correct node... need to move things around? Just change the node taints, then scale the pods down/up and everything is running again. Although that’s probably not a normal use case (I do a fair bit of electronics hacking in my spare time).
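
(A rough sketch of that sensor-pinning pattern, for anyone curious: the usual way to attract a pod to the node that has the hardware is a node label plus a nodeSelector, while taints/tolerations are what keep unrelated pods off that node. The snippet below uses the official Kubernetes Python client; the node name, label key, and image are made-up placeholders, not anything from the setup described above.)

    from kubernetes import client, config

    # Load credentials from ~/.kube/config (use config.load_incluster_config()
    # instead when running inside the cluster).
    config.load_kube_config()
    v1 = client.CoreV1Api()

    # Label the node that physically has the sensor wired to its GPIO pins.
    # "rpi-node-2" and the label key are placeholders.
    v1.patch_node(
        "rpi-node-2",
        {"metadata": {"labels": {"sensors.example.com/bme280": "true"}}},
    )

    # Pod that must land on a node carrying that label.
    pod = client.V1Pod(
        metadata=client.V1ObjectMeta(name="sensor-reader"),
        spec=client.V1PodSpec(
            node_selector={"sensors.example.com/bme280": "true"},
            containers=[
                client.V1Container(name="reader", image="example/sensor-reader:latest"),
            ],
        ),
    )
    v1.create_namespaced_pod(namespace="default", body=pod)

(Reading GPIO from inside a container usually also needs the device exposed to the pod, e.g. a privileged container or a host device mount, which is left out here.)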


I built a cluster similar to this a while back. I am using Rock64 boards instead of Raspberry Pis. Each has 4 GB of RAM and 4 cores. They also support USB3, and I am using USB3-to-SATA adapters to attach an external SSD to each.

I could've done this with VMs (and did before). It is more of a science project to learn Kubernetes, etc.


I'm operating an 8x Pi4/8GB cluster here; the 8GB Pi4s make this a bit more interesting. My use case is this: I use WireGuard to backhaul an AWS EIP address, and use k8s to manage instances of various small services (GitLab, websites, WordPress + web application firewall, etc.), simulate AWS Lambda, and just generally have fun. But to your point: no, it certainly isn't something I undertook with pure practicality in mind.


I think Jeff Geerling answers this question succinctly in this video: https://youtu.be/kgVz4-SEhbE?t=307 (starting at the 5-minute mark).

TL;DW - It's fun and you gain the added skills of managing bare metal with physical IO limitations.


The use case is tinkering. I think it’s pretty sweet to be able to build a super low-cost cluster like this. Play with Docker, k8s, etc.


If you only want to play with software, though, VMs are a lot easier (and you get to learn the basics of VMs along the way).


Generally, the point of a cluster like this is also to gain insight into things that involve physical components.

It's a different kind of end result / learning than a purely software approach. :)


This could be very handy for running integration tests of cluster-management software!


> Or a learning tool to make computer science & networking concepts concrete and tangible?

That seems like a perfectly good use case to me?


You see, there is a big emotional difference between unplugging a node's network cable and watching your cluster keep going and re-route traffic, versus clicking a "shut down VM" button.

These small emotional things end up making you more interested and keep you going on projects. It's the little things! After all, this is just for playing around; if you wanted performance, a single machine would give you more.



