This is absolutely true - when I was at Bitbucket (ages ago at this point) and we were having issues with our DB server (mostly due to scaling), almost everyone we talked to said "buy a bigger box until you can't any more" because of how complex (and indirectly expensive) the alternatives are - sharding and microservices both have a ton more failure points than a single large box.
I'm sure they eventually moved off that single primary box, but for many years Bitbucket was run off 1 primary in each datacenter (with a failover), and a few read-only copies. If you're getting to the point where one database isn't enough, you're either doing something pretty weird, are working on a specific problem which needs a more complicated setup, or have grown to the point where investing in a microservice architecture starts to make sense.
One issue I've seen with this is that if you have a single, very large database, it can take a very, very long time to restore from backups. Or, for that matter, just to take them.
I'd be interested to know if anyone has a good solution for that.
- you rsync or zfs send the database files from machine A to machine B. You would like the database to be off during this process, which will make it consistent. The big advantage of ZFS is that you can stop PG, snapshot the filesystem, and turn PG on again immediately, then send the snapshot. Machine B is now a cold backup replica of A. Your loss potential is limited to the time between backups.
- after the previous step is completed, you arrange for machine A to send WAL files to machine B. It's well documented. You could use rsync or scp here. It happens automatically and frequently. Machine B is now a warm replica of A -- if you need to turn it on in an emergency, you will only have lost one WAL file's worth of changes.
- after that step is completed, you give machine B credentials to log in to A for live replication. Machine B is now a live, very slightly delayed read-only replica of A. Anything that A processes will be updated on B as soon as it is received.
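Putting those three steps together, a minimal sketch; the hostnames a/b, the ZFS dataset name, the replication user, and the paths are all made up, and details vary by version and distro:

    # step 1: cold copy; stop PG only long enough to snapshot
    pg_ctl stop -D /var/lib/postgresql/data
    zfs snapshot tank/pgdata@base
    pg_ctl start -D /var/lib/postgresql/data
    zfs send tank/pgdata@base | ssh b zfs recv tank/pgdata

    # step 2: WAL shipping; in postgresql.conf on A:
    #   archive_mode = on
    #   archive_command = 'rsync -a %p b:/var/lib/postgresql/wal_archive/%f'

    # step 3: streaming replication; in postgresql.conf on B (PG 12+):
    #   primary_conninfo = 'host=a user=replicator'
    # plus an empty standby.signal file in B's data directory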
You can go further and arrange to load balance requests between read-only replicas, while sending the write requests to the primary; you can look at Citus (now open source) to add multi-primary clustering.
This isn't really a backup, it's redundancy, which is a good thing but not the same as a backup solution. You can't get out of a "DROP TABLE in production" type of event this way.
It was first released around 2010 and gained robustness with every release, hence not everyone is aware of it.
For instance, I don't think it's really required anymore to shut down the database to do the initial sync if you use the proper tooling (pg_basebackup, if I remember correctly).
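If memory serves, that initial sync from a running primary looks roughly like this (host and user names are placeholders):

    # streams a consistent base backup, plus the WAL generated while it runs
    pg_basebackup -h primary.example.com -U replicator \
        -D /var/lib/postgresql/data -X stream -P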
Going back 20 years with Oracle DB it was common to use "triple mirror" on storage to make a block level copy of the database. Lock the DB for changes, flush the logs, break the mirror. You now have a point in time copy of the database that could be mounted by a second system to create a tape backup, or as a recovery point to restore.
It takes exactly the time that it takes, bottlenecked by:
* your disk read speed on one end and write speed on the other, modulo compression
* the network bandwidth between points A and B, modulo compression
* the size of the data you are sending
So, if you have a 10GB database that you send over a 10Gb/s link to the other side of the datacenter, it might be as little as 10 seconds. If you have a 10TB database that you send to a datacenter on the other side of the world over a nominally 1GB/s link that actually has a lot of congestion from other users, that might take a hundred hours or so: at the nominal rate it would be under three hours, but a congested ~30MB/s effective rate works out to roughly a hundred hours.
rsync can help a lot here, or the ZFS differential snapshot send.
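With ZFS, for instance, an incremental send ships only the blocks that changed between two snapshots (dataset and host names are made up):

    zfs snapshot tank/pgdata@today
    # send only the delta between @yesterday and @today
    zfs send -i tank/pgdata@yesterday tank/pgdata@today | ssh backup zfs recv tank/pgdata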
So say the disk fails on your main DB, or for some reason a customer needs data from 6 months ago, which is no longer in your local snapshots. In order to restore, you have to transfer the full database back over.
With multiple databases, you only have to transfer a single database, not all of your data.
Do you even have to stop Postgres if using ZFS snapshots? ZFS snapshots are atomic, so I’d expect that to be fine. If it wasn’t fine, that would also mean Postgres couldn’t handle power failure or other sudden failures.
* use pg_dump. Perfect consistency at the cost of a longer transaction. Gain portability for major version upgrades. (Example after this list.)
* Don't shut down PG: here's what the manual says:
However, a backup created in this way saves the database files in a state as if the database server was not properly shut down; therefore, when you start the database server on the backed-up data, it will think the previous server instance crashed and will replay the WAL log. This is not a problem; just be aware of it (and be sure to include the WAL files in your backup). You can perform a CHECKPOINT before taking the snapshot to reduce recovery time.
* Midway: use SELECT pg_start_backup('label', false, false); and SELECT * FROM pg_stop_backup(false, true); to generate WAL files while you are running the backup, and add those to your backup. (In PostgreSQL 15 these were renamed to pg_backup_start and pg_backup_stop.)
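For the pg_dump route, a minimal sketch (database names are placeholders); the custom format is compressed and lets pg_restore pull out individual tables later:

    pg_dump -Fc -f /backups/mydb.dump mydb
    # restore into a fresh database
    createdb mydb_restored
    pg_restore -d mydb_restored /backups/mydb.dump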
Presumably it doesn't matter if you break your DB up into smaller DBs; you still have the same amount of data to back up no matter what. However, now you also have the problem of snapshot consistency to worry about.
If you need to backup/restore just one set of tables, you can do that with a single DB server without taking the rest offline.
> you still have the same amount of data to back up no matter what
But you can restore/back up the databases in parallel.
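For instance, with pg_dump's directory output format both the dump and the restore can run several jobs at once (paths and job counts are illustrative):

    pg_dump -Fd -j 8 -f /backups/db1.dir db1    # dump with 8 parallel workers
    pg_restore -j 8 -d db1 /backups/db1.dir     # restore with 8 parallel workers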
> If you need to backup/restore just one set of tables, you can do that with a single DB server without taking the rest offline.
I'm not aware of a good way to restore just a few tables from a full db backup, at least not one that doesn't require copying over all the data (because the backup is stored over the network, not on a local disk). And that may be desirable to recover from, say, a bug corrupting or deleting a customer's data.
Try out pg_probackup. It works on database files directly. Restore is as fast as you can write on your ssd.
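If I remember its CLI correctly, the basic flow is roughly this (backup path and instance name are placeholders):

    pg_probackup init -B /backups
    pg_probackup add-instance -B /backups -D /var/lib/postgresql/data --instance main
    pg_probackup backup -B /backups --instance main -b FULL
    # later, to restore onto an empty data directory:
    pg_probackup restore -B /backups --instance main -D /var/lib/postgresql/data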
I've set up a pgsql server with timescaledb recently. Continuous backup based on WAL takes seconds each hour, and a complete restore takes 15 minutes for almost 300 GB of data because the 1 GBit connection to the backup server is the bottleneck.
On mariadb you can tell the replica to enter a snapshottable state[1], take a simple lvm snapshot, tell the database it's over, back up your snapshot somewhere else, and finally delete the snapshot.
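One classic version of that dance, assuming an LVM volume vg0/mariadb (all names here are hypothetical); the read lock only holds while the client session stays open, so the snapshot is taken from inside it:

    mysql <<'EOF'
    FLUSH TABLES WITH READ LOCK;
    system lvcreate --snapshot --size 10G --name mariadb-snap /dev/vg0/mariadb
    UNLOCK TABLES;
    EOF
    # mount the snapshot, copy it off, then drop it:
    # lvremove -f /dev/vg0/mariadb-snap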
That's fair - I added "are working on a specific problem which needs a more complicated setup" to my original comment as a nicer way of referring to edge cases like search engines. I still believe that 99% of applications would function perfectly fine with a single primary DB.
Depends what you mean by a database I guess. I take it to mean an RDBMS.
RDBMSs provide guarantees that web searching doesn't need. For web stuff you can afford to lose a few pieces of data or provide not-quite-perfect results. That would just be wrong for an RDBMS.
What if you are using the database as a system of record to index into a real search engine like Elasticsearch, for a product where you have tons of data to search over (i.e. text from web pages)?
In regards to Elasticsearch, you basically opt-in to which behavior you want/need. You end up in the same place: potentially losing some data points or introducing some "fuzziness" to the results in exchange for speed. When you ask Elasticsearch to behave in a guaranteed atomic manner across all records, performing locks on data, you end up with similar constraints as in a RDBMS.
Elasticsearch is for search.
If you're asking about "what if you use an RDBMS as a pointer to Elasticsearch" then I guess I would ask: why would you do this? Elasticsearch can be used as a system of record. You could use an RDBMS on top of Elasticsearch without configuring Elasticsearch as a system of record, but then you would be lying when you refer to your RDBMS as a "system of record." It's not a "system of record" for your actual data, just a record of where pointers to actual data were at one point in time.
I feel like I must be missing what you're suggesting here.
Having just an Elasticsearch index without also having the data in a primary store like an RDBMS is an anti-pattern, and almost all experts recommend against it. Whether you want to call it a “system of record”, I won't argue semantics. But the point is, it's recommended to have your data in a primary store from which you can index into Elasticsearch.
This is not typically going to be stored in an ACID-compliant RDBMS, which is where the most common scaling problem occurs. Search engines, document stores, adtech, eventing, etc. are likely going to have a different storage mechanism where consistency isn't as important.