“Today, we’re going to discuss a topic that’s becoming a big problem for IT departments in all industries: the pitfalls of traditional data protection and how to avoid them.” That was a pretty common opening sentence for articles and blogs in 2015, 2010, 2005… So, why are we still talking about it? Because the weaknesses of the traditional data protection model are still catching IT teams off guard when it comes to costs, complexity, management, risk and so on.
While the industry has been talking about these problems for a while, there are finally real solutions out there that can help organizations overcome some of the most common issues that traditional data protection can cause. Let’s look at three of these common and potentially devastating pitfalls, and what’s available out there to help overcome them.
Pitfall: Cost versus performance compromises
The Holy Grail for any data protection plan is optimal performance and reliability at a reasonable price. With the technology available, it seems like this should be possible, right? Unfortunately, as we all know, traditional data protection and disaster recovery too often come with a distinct tradeoff.
Secondary data centers deal with the cost aspect pretty well. They're generally built outside of city centers, where real estate and other overhead costs are low, so they can be cost effective. However, you also have to consider the resources required to manage these secondary sites, and the innate inflexibility of having to add physical capacity (often more than your company needs) as your data load increases.
Performance is another matter. When measured against metrics like recovery point objective (RPO) and recovery time objective (RTO), secondary data centers often fall short. As companies demand RTOs of under a minute, and RPOs of zero, these sites struggle to keep up. And that's before getting into the nitty-gritty of latency and migration times.
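To make those two metrics concrete, here's a minimal illustrative sketch of how a team might sanity-check an outage against RPO/RTO targets. The targets, timestamps and function names are hypothetical assumptions for illustration, not any vendor's API or defaults.

```python
from datetime import datetime, timedelta

# Illustrative targets (assumptions, matching the demands mentioned above):
RPO_TARGET = timedelta(seconds=0)   # zero data loss
RTO_TARGET = timedelta(minutes=1)   # sub-minute recovery

def data_loss_window(last_replication: datetime, failure_time: datetime) -> timedelta:
    """Worst-case data loss: everything written since the last successful replication."""
    return failure_time - last_replication

def meets_objectives(last_replication: datetime, failure_time: datetime,
                     recovery_done: datetime) -> tuple[bool, bool]:
    """Return (RPO met, RTO met) for a single incident."""
    loss = data_loss_window(last_replication, failure_time)
    downtime = recovery_done - failure_time
    return loss <= RPO_TARGET, downtime <= RTO_TARGET

# Hypothetical scenario: nightly replication to a secondary site, afternoon outage.
last_rep = datetime(2017, 6, 1, 2, 0)     # 2:00 a.m. replication completes
failure = datetime(2017, 6, 1, 15, 30)    # 3:30 p.m. outage
recovered = datetime(2017, 6, 1, 16, 15)  # 45 minutes to fail over

rpo_ok, rto_ok = meets_objectives(last_rep, failure, recovered)
print(rpo_ok, rto_ok)  # both False: 13.5 hours of data at risk, 45-minute downtime
```

The point of the arithmetic: with periodic replication to a secondary site, the worst-case data loss equals the full replication interval, which is why an RPO of zero effectively requires continuous replication.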
A hybrid cloud approach can help companies get out of making tradeoffs and into a data protection solution that delivers on all expectations. Making use of a hybrid cloud storage infrastructure that's designed from the ground up to provide highly available access to all your critical data, whether in the cloud or on-premises, can help deliver the high-performance, cost-effective data protection every organization is chasing.
Pitfall: Data, data everywhere
Inadequate performance or high costs aren't the only pitfalls of traditional data protection. Using the common model of a primary site, an offsite backup location and a disaster recovery site introduces a pretty obvious issue: multiple copies of data. Companies need at least two storage arrays, two copies of applications, two to ten copies of data, and so on: one set at the primary site, one at the secondary site. Both sites have to be managed, maintained and scaled as the organization's needs change. That's before even getting into the high-speed networks needed to replicate data between the arrays.
We don't need to tell you what this means. There are added costs for managing these copies, of course. Just as important, extra copies introduce unnecessary complexity into the equation. If data isn't replicated immediately, there's a real question of which copy is the copy of record. On the business side, this can be disastrous.
Stop making copies. Consider eliminating storage infrastructure for data protection and disaster recovery altogether. With a managed service that integrates primary, backup and disaster recovery, you can have a single, completely protected copy of your data that’s accessible on-premises or in the cloud. This reduces your footprint, reduces complexity, eases the burden on IT staff and ultimately reduces costs significantly.
Pitfall: Forgetting about disaster recovery
It's important to remember that disaster recovery is a critical component of data protection. We get it; nobody ever wants to think about a disaster happening, much less plan for it. But when you consider that a "disaster" could be anything from a brief power outage or even a user error all the way up to an actual natural disaster, it becomes clear that it's a "when," not an "if," scenario.
Yet, many IT departments don’t fully focus on disaster recovery until that outage strikes. With all the other business critical tasks facing IT, it’s understandable disaster recovery can slip through the cracks. However, it’s at that unfortunate moment IT teams realize their disaster recovery plans were inadequate.
As remote as the possibility may seem, and as tedious as the process may be, there is simply no way around having a disaster preparedness plan and testing it. What companies using traditional data protection often find when developing these plans and tests is that their RTOs and RPOs are way off base, given the complications and limitations of their data protection infrastructure.
While public cloud solutions can help, they usually don’t get organizations all the way there when it comes to disaster recovery. Migration can be difficult, and getting your data back after an outage can be costly and unpredictable.
Again, this is an area where a hybrid cloud approach can be an effective solution. A service-based model that keeps your data accessible from anywhere, in the cloud or on-premises, is a reality. This lets you have a real DR solution that’s cost effective, without your IT organization having to maintain multiple locations, or really even worry about disaster recovery at all. And that’s really the goal, right?
The limitations of traditional data protection aren't a brand-new topic, and they have yet to be fully solved. With the hybrid cloud options now available, however, there's light at the end of the tunnel. We could be nearing the end of the "data protection pitfalls" era.
Click here to learn more about how ClearSky Data can help improve your data protection.