We're a small company (~50 customers) delivering SaaS using Django/Postgres/uWSGI for a niche B2B market where privacy and data confidentiality are paramount.
Currently we deploy one DB + a unique uWSGI instance for each customer. This has some drawbacks which have made us look into multi-tenancy as well. Everything is served on dedicated hardware, using a common codebase, and each customer is served on a unique sub-domain.
The two primary drawbacks of running unique instances for each customer are ease of deployment and utilization of resources.
When a new customer is deployed we need to set up the database, run migrations, set up DNS, deploy the application, deploy the task runner and configure the HTTP vhost. Most of this is painfully manual right now, but we're looking into automating at least parts of the deployment.
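For what it's worth, the steps above are scriptable even before going multi-tenant. Here's a rough sketch of what the per-customer plan could look like; `createdb` and `manage.py migrate --database` are real commands, but `dns-tool`, `deploy-app`, `deploy-worker` and `vhost-tool` are hypothetical placeholders for whatever tooling actually does those steps:

```python
# Hedged sketch: build the ordered list of provisioning commands for one
# customer. A real script would execute these via subprocess.run(check=True)
# (or hand them to Ansible/Terraform) instead of returning strings.

def provisioning_plan(slug: str) -> list[str]:
    """Return the ordered shell commands to provision one customer."""
    db = f"app_{slug}"
    return [
        f"createdb {db}",                             # set up the database
        f"python manage.py migrate --database {db}",  # run migrations
        f"dns-tool add {slug}.example.com",           # set up DNS (hypothetical CLI)
        f"deploy-app {slug}",                         # deploy the application (hypothetical)
        f"deploy-worker {slug}",                      # deploy the task runner (hypothetical)
        f"vhost-tool add {slug}.example.com",         # configure the HTTP vhost (hypothetical)
    ]
```

Even just generating and reviewing a plan like this per customer takes a lot of the pain out of the manual process.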
In the future, we aim to offer an online solution for signup and onboarding, where (potential) customers can trigger the provisioning of a new instance, even for a limited demo. If we were doing multi-tenancy that would just require a new row in the database + some seed data, which would make the deployment process far simpler.
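To make the "new row + seed data" point concrete, here's a minimal sketch of what self-service provisioning could look like in a shared-schema setup. All names here are hypothetical, and the in-memory dict stands in for a Django model/table:

```python
# Hedged sketch: in a shared-schema multi-tenant app, provisioning a
# customer (or a demo) is just an insert plus some seed data.
from dataclasses import dataclass, field

@dataclass
class Tenant:
    subdomain: str
    name: str
    demo: bool = False
    users: list = field(default_factory=list)

TENANTS: dict[str, Tenant] = {}  # stand-in for a tenants table

def provision_tenant(subdomain: str, name: str, demo: bool = False) -> Tenant:
    if subdomain in TENANTS:
        raise ValueError(f"sub-domain {subdomain!r} already taken")
    tenant = Tenant(subdomain, name, demo)
    # Seed data: e.g. an initial admin account for onboarding.
    tenant.users.append(f"admin@{subdomain}.example.com")
    TENANTS[subdomain] = tenant
    return tenant
```

A signup form would call something like `provision_tenant` in a single transaction, which is about as simple as onboarding gets.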
The other issue is the utilization of resources. Running a few instances of the application with a big worker pool would be much easier to scale than running 50+ instances with their own isolated worker pool.
We're considering a hybrid multi-tenant architecture, where each customer has their own isolated DB, but with a DB router in the application. That would give us a compromise between security (isolated databases, so SQL queries can't cross into another customer's data) and utilization (shared workers across customers). But this would add another level of complexity and new challenges for deployment.
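For reference, Django's database router hooks make this pattern fairly small. Below is a minimal, self-contained sketch: the contextvar, the tenant aliases, and the `"default"` fallback are illustrative assumptions, and a real setup would also need per-tenant entries in `DATABASES` plus middleware that resolves the sub-domain and sets the contextvar per request:

```python
import contextvars

# Holds the current tenant's DB alias for the duration of a request.
# Set by (hypothetical) middleware that parses the sub-domain.
current_tenant = contextvars.ContextVar("current_tenant", default=None)

class TenantRouter:
    """Sketch of a Django database router that sends all reads and
    writes to the database alias of the tenant resolved earlier in
    the request."""

    def db_for_read(self, model, **hints):
        return current_tenant.get() or "default"

    def db_for_write(self, model, **hints):
        return current_tenant.get() or "default"

    def allow_relation(self, obj1, obj2, **hints):
        # Only allow relations within the same tenant database.
        return obj1._state.db == obj2._state.db

    def allow_migrate(self, db, app_label, **hints):
        # Migrations are run explicitly per tenant database.
        return True
```

The router itself stays simple; the real complexity lands in deployment (running migrations across N databases, keeping `DATABASES` in sync with provisioned tenants), which matches the concern above.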
We deploy a dedicated Kubernetes cluster per client, each with its own application, database, task runner and, yes, DNS. Unlike your situation, most of it is completely automated :) on Azure and AWS using Terraform, good old bash scripts and a custom Go CLI we maintain.
Each client is billed for their own resource usage and we can have version disparities between clusters.
On the downside, maintenance, upgrades and deployments take more time, but we are thinking about potential solutions for managing a fleet of k8s clusters.
This approach makes a lot of sense for B2B customers, and I would add that it's better to separate everything down to the infrastructure level rather than stopping at the database schema. I would probably do it again in a similar situation!
Does anyone have a similar case to this?