You want high-density storage to minimize the number of servers required when data models get large. Limited to 16 TiB per server, large data models commonly require thousands of servers; with high-density storage, the same data can fit in a single rack, and somewhat smaller data models can fit on a single server. This doesn't just save a lot of money: some edge environments -- essentially single servers -- are already PiB+. Storage bandwidth typically exceeds network bandwidth these days, so many workloads scale well this way.
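The server-count arithmetic can be sketched in a few lines; the 10 PiB data model size here is an illustrative assumption, not a figure from the text:

```python
# Back-of-envelope: servers needed to hold a data model at a given
# per-server storage density. The 10 PiB workload is hypothetical.
TIB = 2**40
PIB = 2**50

def servers_needed(data_bytes, per_server_bytes):
    # Ceiling division: a partially filled server still counts.
    return -(-data_bytes // per_server_bytes)

data = 10 * PIB
print(servers_needed(data, 16 * TIB))  # low density: 640 servers
print(servers_needed(data, 1 * PIB))   # high density: 10 servers (one rack)
```

At 16 TiB per server the same workload needs 64x the machines, which is the gap between a rack and a small datacenter floor.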
Quantity has a quality all its own. Fragmenting your database over thousands of servers introduces new classes of failure and greatly amplifies risks you simply don't face with dozens of servers or a single server.
High-density storage does require strategies for recovery that reflect the bandwidth-to-storage ratios. Designers are usually aware of the implications and architect appropriately.
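The bandwidth-to-storage tradeoff can be made concrete with a rough rebuild-time estimate; the node size, NIC speed, and 50% link utilization below are illustrative assumptions:

```python
# Rough recovery-time estimate for a failed high-density node: time to
# re-replicate its data, bounded by network bandwidth. The 1 PiB node,
# 25 Gb/s NIC, and 50% utilization are assumed example figures.
PIB = 2**50
GBIT = 10**9

def rebuild_hours(storage_bytes, net_bits_per_sec, utilization=0.5):
    # Only a fraction of the link is typically usable for rebuild traffic.
    usable_bytes_per_sec = net_bits_per_sec * utilization / 8
    return storage_bytes / usable_bytes_per_sec / 3600

print(round(rebuild_hours(1 * PIB, 25 * GBIT), 1))  # ~200 hours
```

A week-long rebuild window over a single link is why dense designs spread recovery across many peers or lean on erasure coding rather than naive re-replication.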