The 3-2-1 data protection strategy recommends having three copies of your data, stored on two different types of media, with one copy kept off-site.
I keep critical data mirrored on SSDs because I don't trust spinning rust, and I keep multiple Blu-ray copies of the most static data (pics/video). Everything is spread across multiple locations at family members' homes.
The reason for Blu-ray is to protect against geomagnetic storms like the Carrington Event in 1859.
[Addendum]
On 23 July 2012, a "Carrington-class" solar superstorm (solar flare, CME, solar electromagnetic pulse) was observed, but its trajectory narrowly missed Earth.
3-2-1 has been updated to 3-2-1-1-0, by Veeam's marketing at least.
At least 3 copies, on 2 different media, at least 1 off-site, at least 1 immutable, and 0 detected errors in the data written to the backup and during testing (you are testing your backups regularly, right?).
All the data is spread across more than 3 sites, on both SSDs and Blu-ray (which is immutable). I don't test the SSDs because I trust rclone; the Blu-rays are only tested after writing.
There is surely a risk of bit rot on the SSDs, but it's out of sight and out of mind for my use case.
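FWIW, rclone can do that verification itself. A minimal sketch, assuming a local mirror (both paths below are placeholders):

    # compare size and hash of every file in source vs. mirror
    rclone check /data/critical /mnt/ssd-backup
    # or force a full byte-for-byte read of both sides
    rclone check --download /data/critical /mnt/ssd-backup

Running that periodically also doubles as a read pass over the SSD, which matters for the refresh question discussed below.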
I've been considering getting into Blu-ray backups for a while. Is there a good guide on how to organize your files in order to split them across multiple backup discs? And how to catalogue all your physical discs to keep track of them?
I remember about 20 years ago a friend had a huge catalogue of hundreds of discs with media (anime), and he used some kind of application to keep track of where each file was located across the hundreds of discs in his collection. I assume software for that sort of thing must have improved since then?
I don't know about the best way to split things (I mostly do it topically, e.g. each website backup goes to a separate disc). But hashdeep is a great little tool for producing files full of checksums of all the files that get written to a disc, and later for auditing the disc against those checksum files.
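Something like this works, assuming one staging directory per disc (the paths below are placeholders):

    # generate a manifest of relative-path SHA-256 checksums before burning
    cd ~/staging/disc042 && hashdeep -c sha256 -r -l . > ~/catalog/disc042.hashdeep
    # after burning, mount the disc and audit it against the manifest
    cd /mnt/bdrom && hashdeep -c sha256 -r -l -a -k ~/catalog/disc042.hashdeep .

Keeping all the manifests in one directory also gets you a crude catalogue for free: grep across them to find which disc holds a given file.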
Powering on the SSD does nothing. There is no mechanism for passively recharging a NAND flash memory cell. You need to actually read the data, forcing it to go through the SSD's error correction pipeline so it has a chance to notice a correctable error before it degrades into an uncorrectable error. You cannot rely on the drive to be doing background data scrubbing on its own in any predictable pattern, because that's all in the black box of the SSD firmware—your drive might be doing data scrubbing, but you don't know how long you need to let it sit idle before it starts, or how long it takes to finish scrubbing, or even if it will eventually check all the data.
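A full read pass from the host is easy to script, though. A minimal sketch, assuming the backup SSD shows up as /dev/sdb (placeholder device name):

    # read every sector so the controller's ECC path touches all the data
    dd if=/dev/sdb of=/dev/null bs=1M status=progress
    # then check whether the drive logged anything worrying
    smartctl -a /dev/sdb | grep -i -e reallocated -e pending -e uncorrect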
Adding to this... SpinRite can rewrite the bits so their charge doesn't diminish over time. There's a relevant Security Now episode and a GRC article for those curious.
Re-writing data from the host system is quite wasteful of a drive's write endurance. It probably shouldn't be done more often than once a year. Reading the data and letting the drive decide if it needs to be rewritten should be done more often.
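A SMART extended self-test is one way to make the drive do the reading itself, with no host-side I/O. A sketch, assuming smartmontools and the same placeholder /dev/sdb:

    # kick off an extended self-test; the drive scans its whole medium
    smartctl -t long /dev/sdb
    # check progress, then read the result from the self-test log
    smartctl -c /dev/sdb
    smartctl -l selftest /dev/sdb

Whether the firmware rewrites weak cells it finds along the way is, as the parent says, still inside the black box.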
How about a background cron job running diff -br copyX copyY once a week, for each pair X and Y, if they are hot/cold-accessible?
Although, in my case, the original is evolving, and renaming a folder and a few files makes that diff go awry, needing manual intervention. Or maybe I need content-based naming: $ ln -f x123 /all/sha256-of-x123, then compare those /all directories.
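A minimal sketch of that content-addressed compare, assuming bash and two local copies (all paths are placeholders):

    #!/usr/bin/env bash
    # Hardlink every file under a copy into a flat dir named by its SHA-256,
    # so renames and moves inside the copy stop mattering to the comparison.
    index() {
      local src=$1 out=$2
      mkdir -p "$out"
      find "$src" -type f -print0 |
        while IFS= read -r -d '' f; do
          ln -f "$f" "$out/$(sha256sum "$f" | cut -d' ' -f1)"
        done
    }
    index /backups/copyX /backups/allX
    index /backups/copyY /backups/allY
    # identical content => identical sets of hash-named files
    diff <(ls /backups/allX) <(ls /backups/allY)

Hardlinks only work within a single filesystem, so each all* dir has to live on the same volume as its copy; across filesystems you'd skip the ln and just diff two sorted hash lists.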
I've been reading a lot of eMMC datasheets and I see terms like "static data refresh" advertised quite a bit.
You're quite right that we have no visibility into this process, but that feels like something to bring up with the SFF Committee, which maintains the S.M.A.R.T. standard.
Might need to go through the NVMe consortium rather than SFF/SNIA. Consumer drives aren't really following any SFF standards these days, but they are still implementing non-optional NVMe features so they can claim compliance with the latest NVMe spec.