It’s near the top of every IT pro’s list of worst nightmares: one of your servers goes down… and your backups don’t do their job. It can bring an organization, and the careers of the people responsible, to a grinding halt. But what can you do to keep this situation from ever happening to your company, and what can you do if your best-laid plans fail and you’re faced with this particularly awful type of failure?
This is the question ITPro put to several industry leaders, and their responses were illuminating. For this blog, let’s focus on how to prevent this situation from ever happening, drawing on the answer from ClearSky Data co-founder Laz Vekiarides, who offered a couple of best practices for avoiding the feared backup failure.
Best practice 1: Check your backups regularly.
According to Laz, you’d be surprised how often this simple, relatively easy step is ignored. He recalled a particularly excruciating situation where a customer “was backing up data from a legacy system, only to realize that the data they needed to retrieve had actually been lost … five years earlier.”
Simply having the discipline to check your backups regularly can eliminate many potential problems from ever rearing their ugly heads.
Best practice 2: Back up your data… correctly.
All data is not created equal. Critical financial or medical data, for example, must be backed up differently than development or test data.
“The best practices in general involve periodic backups, and you have to make sure that you adjust your backups, either snapshots or physical copies of data … you need to make sure that they are done with the correct periodicity,” Laz told ITPro. “Each application is different. Each application has a particular window of time which is the amount of tolerable data loss, and it really depends on the business need.”
For test data, a company may be able to back up less frequently, knowing that losing a little bit of data isn’t the end of the world. For critical systems, backup may have to be constant, as even a minute’s worth of lost data could be too much. Laz recommends taking the time to evaluate each application to determine the appropriate backup timeframe.
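That per-application evaluation amounts to assigning each system a tolerable-data-loss window and then checking backup age against it. The sketch below shows one way to express such a policy; the application names and the specific windows are hypothetical examples, not recommendations from the article.

```python
from datetime import timedelta

# Hypothetical policy: each application's tolerable-data-loss window.
# Critical systems get tight windows; test data can tolerate much more.
BACKUP_POLICY = {
    "financial-db":    timedelta(minutes=1),   # near-continuous backup
    "medical-records": timedelta(minutes=5),
    "dev-environment": timedelta(hours=24),
    "test-data":       timedelta(days=7),
}

def backup_overdue(app: str, seconds_since_last_backup: float) -> bool:
    """True if the last backup is older than the app's tolerable-loss window."""
    return seconds_since_last_backup > BACKUP_POLICY[app].total_seconds()
```

A monitoring job could walk this table and flag any application whose most recent backup has aged past its window.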
To drive the point home, he relayed another customer example. Researchers were building data sets with hundreds of terabytes of data and “throwing them on storage that is not backed up at all. So if anything bad were to ever happen, they would lose a year’s worth of work.”
Neither of these best practices is earth-shattering – they just require time and discipline. But they pay off a hundredfold, or more, if they prevent even one failure.
To learn more backup best practices, read the whole ITPro article here.