
I do have a particular reason: memory use. I'm running Postgres and Redis locally for dev work, and I'd love to use Docker so that I can standardize the setup for my team, but it just takes up too much RAM on M1.


I don't mean to sound flippant, but that sounds like you're using a computer for business that can't handle the work you are trying to do. If 32/64GB isn't enough memory then yeah, I guess you need something else, but if your machine has less than that then it sounds like you need to buy the right computer for the job.

Also, are you using amd64 or ARM images for those?


I dunno about the m1 aspect of this but postgres and redis for typical dev work shouldn't require more than a gigabyte or two of RAM when running together. 32-64GB is more like heavy traffic enterprise DB territory...


Guess the parent is annoyed at spending 2-3 GiB of RAM on the VM itself.

Even if the containers themselves only use 128 MiB total, the VM will cordon off everything it's told to, which most people are unlikely to change until there's an issue; it's configurable down to a 1 GiB minimum.

FWIW Docker Desktop on my machine (M2 MacBook Air, 24 GiB RAM) defaulted to using 8 GiB of RAM.


They're also trivial to install without virtualization. Postgres has been one of the first things I install on a new laptop for over a decade now.


Also, stop using a Mac if it isn't suitable for your use case. There are just so many layers of emulation involved; it's an absolute waste of resources.


This is the right answer.

Scratching my head as to why devs are buying up these new macs knowing up front that they will not work well for the job they were bought for.

Why not make it easier for yourself and have your dev machine match up with prod?


You can set how much RAM Docker is allowed to use. Generally software will use as much as you allow (databases especially).
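For what it's worth, per-container caps can also live in the project's compose file so the whole team inherits them. A sketch, with arbitrary image tags and limits (check your Compose version's support for `mem_limit` vs `deploy.resources.limits`):

```yaml
services:
  postgres:
    image: postgres:16
    mem_limit: 512m   # hard cap for this container
  redis:
    image: redis:7
    mem_limit: 256m
```

The equivalent on the CLI is `docker run --memory 512m ...`. Note these limit the containers, not the size of the Docker Desktop VM itself, which is set separately under Settings → Resources.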


Your team might want to use asdf (https://github.com/asdf-vm/asdf) to run multiple native versions of PostgreSQL and Redis in parallel. Even within one project you might need those tools at different versions in different releases. You standardize by checking a .tool-versions file into the repo. I've been using that with a team targeting Linux and developing on Ubuntu, Mac and WSL (or was that an Ubuntu VM in Windows?)
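Concretely, the pinning lives in a `.tool-versions` file at the repo root; the version numbers here are illustrative:

```
postgres 16.3
redis 7.2.5
```

After adding the corresponding plugins once per machine (plugin names assumed; `asdf plugin list all` shows the real ones), running `asdf install` inside the repo gives everyone exactly these versions.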


Many companies run dev work in a dedicated cloud VM, including well-known companies like Google and Amazon.

You can run a persistent VM with 2/4/8/128 GB of RAM or whatever you need. I've used one at work for years; I think mine has 16 GB of RAM and it's way over-provisioned most of the time. Counterintuitively, treat the cloud VM like a work laptop, not a production service: let people write scripts that stay there, let people keep it on 24/7, available on demand, etc. It's a cloud laptop, not a production VM.


Depending on your workloads that sounds like a very expensive way to do development compared to just having a dedicated but efficiently set up work laptop.

Even giving people a headless Intel NUC to connect their laptops to would be way, way cheaper (assuming you're just doing development).


Where I work, they used to give you desktop computers (pre-Covid) for the workstation purpose, but post-Covid they just provision a VM for you. Honestly, it's probably cheaper in the short term and only mildly more expensive long term. No real IT work is needed (since <cloud provider> handles the hardware), and upgrades are automatic if more RAM/GPU/etc. is needed. Even a really big VM ($80/m) wouldn't be crazy compared to the logistics of managing/storing/powering/networking a bunch of desktops across an office.


This is what I wonder. A powerful local laptop is so useful - instant feedback; works on the train or any other offline/low-connectivity setting; the only engineer you need looking after it really is the one who has it.

Having said that, cloud development means you don't need big capex orders for laptops all the time, and it's probably easier to secure.


I use a 16 GB M1 Air. I'm running Docker Desktop with MySQL, Redis, two containers running Python, a Node container and an nginx container. I'm not noticing any impact on performance; MS Teams hurts more to run. Though I have adjusted the resources Docker uses.


I never understand the "docker takes up too much space/ram" objection. Isn't that configurable/manageable even from the GUI?



