Over the past couple of years, I have spent a good deal of time talking about the notion of software-defined storage (SDS) at industry events. What is particularly vexing about this topic is that the marketing of software-defined storage has gotten way ahead of the definition of software-defined storage. Unlike its cousin software-defined networking, which has all sorts of detailed architectures and associated manifestos, the definition of SDS really depends on who is delivering it. The promise of SDS is that it will enable enterprise IT to provide a much more on-demand and agile experience for business users who rely on IT infrastructure. But beyond this basic definition, every vendor seems to have its own product vision. Is SDS a virtual machine that runs storage array firmware? A cloud gateway? Perhaps Linux running on a standard Intel server with NFS? The answer could be “yes” to all of these depending on who’s doing the selling.
I was reminded of my admittedly ambivalent view on SDS last week as I was traveling and meeting with customers and partners. I had a number of interesting conversations about SDS that indicated not much has really changed. There are a lot of enterprises out there trying to build an SDS infrastructure themselves instead of using traditional enterprise storage arrays. But people seem stuck on basic issues such as which disk drives and vendors are more reliable, and which parts perform better. As a systems guy, I'm always happy to chat about these topics—but to be honest, if you're an enterprise IT shop with 20 people testing disk drives, I think you are over-invested in an expensive, non-core activity. How can we say we are software-defined if we are obsessing over the write latencies of different vendors' solid-state drives?
Getting involved in building infrastructure needs to be a deliberate decision, not an accident. It requires a lot of expense—both in people and capital—to effectively de-commoditize cheap parts out of the computer supply chain, not to mention deep system and software expertise. The answer isn't the same for everyone. For some businesses, this may make sense as a long-run investment; in many cases cost structures and scale make the expense prudent. Lots of huge Web businesses are building out their own infrastructure, but they are the minority. If you are thinking about this as a strategy, make sure you are getting something of value for all that investment, and don't assume that a slightly cheaper price alone can rationalize the choice. For many reasons, that slight cost edge can disappear, making it a zero-sum game.
The main business benefit of all the technology introduced in the virtualization and cloud eras of the last 15 years is agility. Business agility is the new currency for valuing technology in the enterprise. Being able to quickly spin up and spin down resources, and move them around, while avoiding procedural hassles translates to real financial benefit. This is what virtualization promised. This is what compute clouds—both private and public—promise. For enterprise IT to enhance its value to the business, teams need to find ways to make the organization more flexible and responsive. Hence, as we ponder the failure rates of SAS versus SATA drives, it's important to be sure what business value we are delivering. Are we making the business more, or less, flexible?