I don't get why giving each customer their own database would be so costly. Each PostgreSQL instance can handle many smaller databases. Are shared buffers allocated per database?
First of all, every database comes with some metadata overhead (catalog tables), which amounts to roughly 30 MB. Take into account that this also may (and should!) be in memory (shared_buffers), so it can become expensive.
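If you want an actual number for your own setup, just create an empty database and ask Postgres how big it is; the baseline is almost entirely catalog tables. A quick sketch with psycopg2 (connection details are placeholders):

    # Measure the catalog-only footprint of a brand-new, empty database.
    # Connection parameters are placeholders for your own setup.
    import psycopg2

    conn = psycopg2.connect(dbname="postgres", user="postgres", host="localhost")
    conn.autocommit = True  # CREATE DATABASE can't run inside a transaction
    cur = conn.cursor()

    cur.execute("CREATE DATABASE empty_tenant")
    cur.execute("SELECT pg_size_pretty(pg_database_size('empty_tenant'))")
    print("size of an empty database (pure catalog overhead):", cur.fetchone()[0])

    cur.execute("DROP DATABASE empty_tenant")
    cur.close()
    conn.close()

Multiply whatever that prints by your tenant count and you get the floor for disk, before any of it even touches shared_buffers.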
But the most significant factor is whether you need to run queries that span multiple users/tenants. If you do, you will need postgres_fdw, and neither performance nor manageability will be good at all.
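For reference, wiring just one remote tenant database in looks roughly like this (server name, database name and credentials are made up), and you have to repeat it per tenant, which is where both the performance and the manageability pain come from:

    # Rough sketch of attaching one tenant database via postgres_fdw.
    # Names and credentials below are invented for illustration.
    import psycopg2

    conn = psycopg2.connect(dbname="reporting", user="postgres", host="localhost")
    conn.autocommit = True
    cur = conn.cursor()

    cur.execute("CREATE EXTENSION IF NOT EXISTS postgres_fdw")
    cur.execute("""
        CREATE SERVER tenant_42_srv FOREIGN DATA WRAPPER postgres_fdw
            OPTIONS (host 'localhost', dbname 'tenant_42')
    """)
    cur.execute("""
        CREATE USER MAPPING FOR CURRENT_USER SERVER tenant_42_srv
            OPTIONS (user 'app', password 'secret')
    """)
    cur.execute("CREATE SCHEMA tenant_42")
    cur.execute("IMPORT FOREIGN SCHEMA public FROM SERVER tenant_42_srv INTO tenant_42")
    cur.close()
    conn.close()

    # Every query against tenant_42.* now goes through the FDW layer,
    # and a join across N tenants means N foreign servers in the plan.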
I assume he wants a proper database, and to be honest, if you have zero interaction between the things you are partitioning on, it's very easy to scale with SQLite or any other database.
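Something like one SQLite file per tenant is about as simple as it gets. A sketch (directory layout and schema are just placeholders):

    # One SQLite file per tenant; paths and schema are placeholders.
    import sqlite3
    from pathlib import Path

    DATA_DIR = Path("tenants")
    DATA_DIR.mkdir(exist_ok=True)

    def tenant_db(tenant_id: str) -> sqlite3.Connection:
        """Open (and lazily initialize) one tenant's database file."""
        conn = sqlite3.connect(DATA_DIR / f"{tenant_id}.db")
        conn.execute(
            "CREATE TABLE IF NOT EXISTS notes (id INTEGER PRIMARY KEY, body TEXT)"
        )
        return conn

    # Each request touches exactly one file, so tenants scale independently
    # and moving a tenant to another server is just copying their file.
    with tenant_db("alice") as conn:
        conn.execute("INSERT INTO notes (body) VALUES (?)", ("hello",))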
There's no reason you can't either run more servers or run multiple instances of Postgres per server, with Docker or otherwise.
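e.g. a few throwaway instances on one box, each with its own port, catalogs and shared_buffers (image tag, ports and password are arbitrary):

    # Spin up several independent Postgres instances on one host.
    # Image tag, ports and password below are arbitrary choices.
    import subprocess

    for i, port in enumerate((5433, 5434, 5435), start=1):
        subprocess.run(
            [
                "docker", "run", "-d",
                "--name", f"pg_instance_{i}",
                "-e", "POSTGRES_PASSWORD=changeme",
                "-p", f"{port}:5432",
                "postgres:16",
            ],
            check=True,
        )
    # Each container is a fully separate instance, so tenants on different
    # instances share nothing at the database level.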
I'm probably going to try using Citus Data instead of moving to Cassandra if I ever get big. We'll see.
I honestly don't see why you wouldn't consider SQLite to be a "proper database." I think it has a reasonably competitive feature set.
In a use case where you consider independent databases, with few interactions between them, for millions of users (and a few MBs plus binary blobs of data per user), I'd certainly consider it a possible solution.
This is an OK article for someone using this product, but there isn't anything interesting here unless you haven't already considered partitioning a dataset to be spatially local to its users.