This last week, I was able to step away for a few hours to visit the Boston VMware User Group (VMUG) conference and see what’s hot and exciting in the VMware world. As I wandered through the exhibit area and the various sessions, I was struck by how weary the messaging felt, particularly around data storage. It was as if the calendar were frozen in 2005. An IT version of Rip Van Winkle could have woken from a 10-year slumber and found that the storage dialogue hadn’t shifted away from the tried-and-true talking points – performance and cost of acquisition. Maybe I am seeing the past through rose-colored glasses, but it seems to me that it hasn’t always been this way.
Quite a number of years have passed since 2002, when I sat in Nashua with the early EqualLogic engineering team to design our first storage arrays. Over the next decade, the iSCSI products we built redefined what data storage needed to be in the enterprise: integrated, easy to use, and scale-out. These basic ideas helped a whole generation of enterprises embrace shared storage and enabled the use of compute virtualization like VMware in the data center. In the process, we made billions for our investors and, ultimately, our acquirers. However, even the most innovative systems architecture eventually outlives its usefulness, as market requirements continue to change and new waves of technology emerge. It is rare for a market-leading systems vendor to make the leap to the next generation; too often, its success blinds it to the need for change.
For today’s storage to meet the new world of cloud, on-demand, and agility requirements, a traditional systems approach is no longer valid. The fundamentals from the early 2000s still hold – enterprise customers still want integrated, easy-to-use, scale-out solutions – but what that means has changed. We are now at a new inflection point in the evolution of infrastructure. Data growth and increasing complexity have made even the cleverest system architectures of 10 years ago obsolete. We cannot continue to sell the same stuff in the same way we did in the previous decade. The business of data storage cannot continue to be an endless series of repeating capital expenditure (CapEx) transactions.
In the current model, an enterprise will buy storage capacity, stand it up, protect it from hardware failure, protect it from disaster, and back it up to guard against human error. Then, within a few years, it will do it all again – and again. The reason is that the lifetime of the data far exceeds the lifetime of the gear. There are lots of underlying drivers for this, ranging from the insane pace of hardware innovation and obsolescence, to the regulatory climate, to abject fear. If you want to piss off a customer, simply tell them about a mandatory upgrade involving a mass data migration and a forklift gear replacement. It’s not that anyone cares about the equipment; IT users care deeply about the integrity of their data and their internal customers’ access to it.
What’s needed in the new generation of storage is not an updated model of hardware systems, but rather a completely new approach that still addresses the durability, availability, and security of the enterprise’s most prized assets. The problem of building, scaling, retiring, and refreshing physical infrastructure needs to be simplified and managed so that physical limitations no longer hinder the agility of IT.
I’m excited to be working with a next-generation team that keeps legacy problems top-of-mind. Making complicated things easy is probably the most difficult and rewarding task an engineer can ever take on. I’ve always said that the key litmus test is the volume of complaints from the rank-and-file engineering staff; if you don’t hear complaints about how hard the work is, then you probably aren’t succeeding at making anything easier for your customers. Suffice it to say, our office has been very noisy these last 18 months.