We’re Gonna Need Backup!
Many years ago someone said to me, “There are two types of people in the world: those who take backups and those who haven’t experienced data loss.”
Thinking back, I know that once applied to me. While I used to keep duplicates of data on floppy disks because of their fragile nature, I viewed hard drives as a more permanent, more reliable storage method.
That was until one day in 1998, when I lost a load of irreplaceable data: scans of photos, code and writing.
We can all sit there and say, “Yeah, hardware can fail, but these days you can mitigate that with RAID or by putting your data in the cloud.” That’s all well and good, except that I didn’t suffer a hardware failure; it was filesystem corruption. The filesystem was completely shot. Data recovery tools salvaged bits and pieces, but plenty was lost for good. The disk itself was perfectly fine and gave me a few more years’ service before it was replaced with something larger.
The above is a prime example of a common misconception: that data held in the cloud is safe, and that RAID can act as a substitute for a backup policy. The problem is that as systems become more reliable, people rely on them too much and stop expecting things like filesystem corruption to be an issue. A friend was once told by a support team, in response to his data loss, “Filesystem corruption can occur at any time for a multitude of reasons.” Similarly, it’s entirely possible, and I’ve seen it many a time, for someone to accidentally delete important files.
So where does this leave you? If you’re storing your data in only one place, you are wide open to data loss: from malicious behaviour by a hacker, from accidentally deleting files through a mistyped command or a badly-judged mouse drag, or from an application or operating system error corrupting files or the filesystem itself. That’s why it’s important to make backup copies of any important and irreplaceable data.
Where do you store these backups? Preferably somewhere in a different geographical location to the source. I remember once dealing with a customer whose Linux server suffered filesystem corruption. He’d made backups, but to the same filesystem that subsequently corrupted. With more patience than I have, the customer waited three days while a filesystem check ran on the server, a screen full of errors being repaired, and, much to my surprise, the filesystem ended up mountable and the data was recovered.
Had his backups been stored elsewhere, the customer could have done a fresh OS install on a new RAID array and restored from backup faster than the filesystem check took, and with a far greater degree of known reliability.
You can store backups on the same server, but I’d strongly recommend putting them on a different disk or RAID array from the one where the primary data lives. Even better would be another server in the same location, and best of all is a different building altogether.
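To make the idea concrete, here is a minimal sketch of a backup routine in Python. It simply copies a source directory into a timestamped folder under a backup root, which you would point at a different disk, server or site. The function name and paths are hypothetical; in practice you’d more likely reach for a dedicated tool such as rsync or a proper backup package, but the principle is the same.

```python
import shutil
import time
from pathlib import Path


def make_backup(source: Path, backup_root: Path) -> Path:
    """Copy `source` into a timestamped directory under `backup_root`.

    `backup_root` should live on a different disk from `source`, or
    better still on a different machine or in a different building.
    Timestamped directory names mean old backups are never overwritten.
    """
    stamp = time.strftime("%Y%m%d-%H%M%S")
    dest = backup_root / f"{source.name}-{stamp}"
    shutil.copytree(source, dest)
    return dest
```

Because each run creates a fresh timestamped copy, a corruption or accidental deletion that creeps into today’s data doesn’t silently destroy yesterday’s backup as well.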
Another thing: always test your backups. Once your backup scripts or systems are in place, try building a system of a similar spec elsewhere and see whether you can get a working system when you restore from your backups. You’ll soon see whether your backup policy is sufficient, and it’s much better to find out that it isn’t when it isn’t an emergency.
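Part of testing a restore is checking that what came back actually matches what went in. A minimal sketch of that check, assuming a hypothetical pair of directory trees, is to compare per-file SHA-256 checksums:

```python
import hashlib
from pathlib import Path


def checksum_tree(root: Path) -> dict:
    """Map each file's path (relative to `root`) to its SHA-256 digest."""
    digests = {}
    for path in sorted(root.rglob("*")):
        if path.is_file():
            digests[str(path.relative_to(root))] = hashlib.sha256(
                path.read_bytes()
            ).hexdigest()
    return digests


def verify_restore(original: Path, restored: Path) -> bool:
    """True only if the restored tree matches the original, file for file."""
    return checksum_tree(original) == checksum_tree(restored)
```

A comparison like this catches both missing files and files whose contents were silently corrupted, which a quick eyeball of a directory listing will not.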
I can’t stress enough the importance of properly planning and implementing a backup procedure for your systems. If you currently have no backups of your important or potentially mission-critical systems, it’s probably time to put some serious thought into getting some.