Data is the lifeblood of the enterprise, particularly as industries across the economic spectrum transition to new business models driven by advanced digital services.
Nevertheless, it is surprising how rigidly many organizations cling to old data management habits, some of which date back to the pre-virtualization era. To compete in the new economy, businesses must update their entire IT infrastructure and operating stacks, including how they utilize their data.
Among the worst of these habits is the incessant need to copy the same data over and over again. Backing up data to protect against failure is, of course, a worthy goal, but over the years the practice has taken on a life of its own. Many IT professionals duplicate data almost instinctively, making multiple copies of every piece of data in the often mistaken belief that it is better to copy than be sorry.
Awash in data
This is the primary driver for the escalating data volumes we are witnessing on a global scale. Despite the prevalence of compression, deduplication and other data reduction techniques, IDC estimates that the worldwide datasphere will grow 10-fold between 2016 and 2025, to top 163 zettabytes.
This relentless accumulation of data leads to the second bad habit of information management. Rather than undertake the difficult task of assessing and analyzing their data, most enterprises choose the easier path of provisioning more storage. A typical enterprise houses data in a primary database for current operations plus a secondary storage system for test and development. In addition, there is usually a near-line backup system and an off-site disaster recovery facility that houses a fully replicated data environment. And finally, there is long-term archival storage, where data is kept in perpetuity.
This knee-jerk reaction to growing data loads inevitably coincides with the third bad habit: the tendency to skimp on resources and, consequently, performance. IT has always been a cost center for the enterprise, so it’s not surprising that decision-makers seek to steadily reduce the budgetary strain of supporting data that has become less valuable over time. This can have detrimental consequences in areas like disaster recovery, which is often neglected until it is needed – and then it’s too late.
And finally, organizations are forced to choose between cost control and rigorous governance: implementing adequate security policies and consistently enforcing the ones they already have. Cost usually becomes the deciding factor because the benefits of governance are difficult to quantify. The danger, of course, is that every copy of data represents a new threat vector at a time when hacking and data theft have ramped up to an industrial scale.
A better way to store data
The ideal solution, of course, is to stop making all those copies. While this was once considered unthinkable, technology has evolved to the point where even distributed, cloud-based ecosystems can employ a unified, high-performance, elastic storage system that serves primary, secondary, backup, DR and archival needs for all users simultaneously.
Using smart caching technologies and advanced automation, enterprises can maintain a single copy of data across a global IT footprint while preserving flash-level performance, dynamic scalability and end-to-end data protection. And this can be achieved at a total cost of ownership (TCO) less than half that of traditional storage platforms.
Under this paradigm, organizations can forget about over- or under-provisioning storage, maintaining complex management stacks and performing tedious migrations. There is one copy of data, fully protected, and available nearly instantaneously wherever it is needed.
For enterprises struggling to maintain their positions in today’s economy, let alone the emerging, fast-paced world of Big Data and the Internet of Things, it’s time to stop managing data and start accessing it.
Want to learn what you should start doing with your data? Check out Gartner’s 2017 Strategic Roadmap for Storage.