Unfortunately, many of us learn this simple truth the hard way: a mirror is no substitute for a (daily) backup.


Backups that don't verify their data are not a perfect solution here. On filesystems without checksumming (i.e., non-ZFS), files can silently accumulate corruption over time. When you take a backup, you copy that same corruption over to your backup as well.

Going back through years of backups to find a non-corrupt copy can take a lot of time, during which your service is down. Not a perfect solution by a long shot. Discovering which files were legitimately updated and which are corrupt is also non-trivial.
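One mitigation (a sketch, not from the thread; paths are hypothetical) is to write a checksum manifest next to each backup, so you can later tell corrupted files from intact ones:

    # At backup time: record a checksum for every file in the snapshot
    cd /backup/2015-01-01 && find . -type f -print0 | xargs -0 sha256sum > ../2015-01-01.sha256

    # Later: verify the snapshot against its manifest, printing only failures
    cd /backup/2015-01-01 && sha256sum -c --quiet ../2015-01-01.sha256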


Do your daily backups using rsync+hardlinks (rsnapshot, dirvish or something similar) and keep a long history. This is slower than copy-on-write ZFS (obviously), but works reliably on any Linux/Unix file system and the storage cost is roughly the same as for ZFS.
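For reference, a minimal sketch of roughly what those tools do under the hood (dates and paths hypothetical):

    # Files unchanged since yesterday are hardlinked, not copied, so every
    # daily directory looks like a full snapshot but only changes use new space
    rsync -a --delete \
        --link-dest=/backup/2015-01-01 \
        /data/ /backup/2015-01-02/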


This is kind of what I'm doing for backups, but I still don't feel safe (I'm kind of paranoid about my backups): what if an attacker gets into your server and wipes out all your data and backups? And you know Murphy is always ready to strike... I'm currently looking at making regular backups offline, on DVD or Blu-ray discs, and automating the process. I wonder if this might be a service people are interested in. Let me know what you think... (I put a landing page at http://www.offlinebackups to test reactions)


It is never a good idea to keep backup copies in the same place as the source data, so normally it should not be that common for an attacker to be able to wipe both the original and the backup. Regarding offline optical disc backups, they are still ridiculously expensive compared to magnetic spinning drives or tapes. Backup, especially an automated one, is always an extra security risk to consider, but apparently there are no other good ways...


A lot of people put their backups on S3, with a script running on the server. Even if you limit the rights with IAM to only putting files, the attacker can still overwrite existing files on S3. The only way I could think of to prevent that is to grant write access with no listing access, and append a random number to the file name. But who does that? I'm sure 90%+ of the servers backing up to S3 are not safe against this scenario.
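A hedged sketch of that scheme (bucket name hypothetical), assuming an IAM policy that grants the server only s3:PutObject, with no list or read permissions:

    # Random suffix: an attacker who can only PUT, not list, cannot
    # guess existing object names to overwrite them
    SUFFIX=$(openssl rand -hex 16)
    aws s3 cp /backup/dump.tar.gz "s3://my-backup-bucket/dump-$SUFFIX.tar.gz"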

The reason I thought of DVDs is that they're not sensitive to electromagnetic fields the way disks and tapes are. (You never know: http://www.telegraph.co.uk/science/space/9097587/Solar-flare... )


If you turn on file versioning in S3, then you'll be able to get to the data that was "overwritten". I don't think there's a way for someone with only PUT access to work around this.
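For example (bucket name hypothetical), versioning is a one-time bucket setting, and overwritten data stays retrievable by version id:

    # Once enabled, a PUT to an existing key creates a new version
    # instead of destroying the old data
    aws s3api put-bucket-versioning --bucket my-backup-bucket \
        --versioning-configuration Status=Enabled

    # After an overwrite, earlier versions are still listed here
    aws s3api list-object-versions --bucket my-backup-bucket --prefix dump

Deleting old versions requires s3:DeleteObjectVersion, which a PUT-only policy wouldn't grant.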


I make backups using duplicity with its incremental backup option, which is basically built on librsync, and that would allow such restores (since each incremental is a diff).

Not sure why it's not used more. The tool is pretty straightforward.
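A minimal sketch of how it's used (paths hypothetical; duplicity encrypts with GnuPG by default):

    # First run creates a full backup; later runs upload incremental diffs
    duplicity /data file:///backup/duplicity

    # Restore the state from three days ago
    duplicity restore --time 3D file:///backup/duplicity /tmp/restored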


True, but if their mirroring system had been more robust in checking for corruption, they would effectively have had a backup every 20 minutes.


As another post stated, checking which writes are intended and which are corruption is a difficult problem, not a trivial one.


Unless you use svn and svnsync...
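For anyone unfamiliar, a rough sketch of an svnsync mirror (URLs hypothetical); since the mirror carries the full revision history, a bad or corrupt commit can simply be rolled back:

    # One-time setup: empty mirror repo, bound to the source
    svnadmin create /backup/repo-mirror
    svnsync init file:///backup/repo-mirror https://svn.example.com/repo

    # Run from cron: pulls any revisions the mirror doesn't have yet
    svnsync sync file:///backup/repo-mirror

(svnsync init requires the mirror repository to allow revision property changes via a pre-revprop-change hook.)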



