I am an avid motorcyclist. Nothing comes close to the visceral experience of riding a Ducati sports bike. But like all vehicles, bikes require care and feeding, and Ducatis – especially the older ones – require considerably more than most. Servicing is a complex operation that needs to be performed by a specialist every few thousand miles. It’s therefore no surprise that one can buy a really nice mid-’90s Ducati for just a couple of thousand dollars: servicing costs are astronomically high – almost as high as the purchase price. Total cost of ownership (TCO) is an important consideration for this iconic Italian brand, because what you see isn’t necessarily what you get.
TCO is what I’ve been focused on for the past few weeks – spending time with customers to really understand the cost drivers associated with on-site storage arrays, so I can present an apples-to-apples comparison against our managed service. All too often the focus is on measuring storage in dollars per gigabyte ($/GB), but anyone who has managed storage knows there are other cost drivers that must be considered.
To understand some of the cost drivers, it’s worth exploring a few myths about storage TCO.
Myth No. 1: Purchasing the “right” storage capacity is a simple function of requirement over the time period for the budget cycle.
How much storage capacity is enough? A popular approach is to size the “today” requirement and apply a growth factor to determine an end-period requirement, be that three or five years. That’s the “usable” storage, so it typically means purchasing roughly 30 percent more raw capacity, which disappears once the various vendor storage overheads associated with redundancy and OS file systems get configured. And then there’s data replication and disaster recovery, which are typically two separate systems, but must be sized and budgeted using a similar methodology.
Let’s just stick with the primary use case for a moment and assume a “today” requirement of 20 terabytes (TB), then apply a 3 percent month-over-month growth factor for 36 months. With a nominal 30 percent overhead for raw-to-usable storage, the purchased capacity on Day One needs to be close to 90 TB to ensure enough capacity for day 1,095.
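For those who like to check the arithmetic, here’s a back-of-the-envelope sketch of that sizing in Python, using the figures above. The exact raw number depends on how the 30 percent overhead is counted (dividing usable by 0.7 versus multiplying by 1.3), which is why estimates land in the 75–85 TB range, in the neighborhood of the ~90 TB figure:

```python
# Back-of-the-envelope sizing for the example above:
# 20 TB today, 3% month-over-month growth, 36-month budget period,
# ~30% of raw capacity lost to redundancy and file-system overheads.
TODAY_TB = 20
GROWTH = 0.03
MONTHS = 36
OVERHEAD = 0.30  # fraction of raw capacity that isn't usable

usable_needed = TODAY_TB * (1 + GROWTH) ** MONTHS  # requirement at day 1,095
raw_needed = usable_needed / (1 - OVERHEAD)        # raw to purchase on Day One

print(f"Usable needed at month {MONTHS}: {usable_needed:.1f} TB")
print(f"Raw capacity to purchase on Day One: {raw_needed:.1f} TB")
```

Compounding 3 percent monthly nearly triples the requirement over three years – which is exactly why so much capacity has to be bought up front.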
For the sake of illustration, let’s just assume the raw storage price works out to be 10 cents per gigabyte (GB), in monthly costs over the three-year period.
Now consider the effective price per GB based on the actual storage requirement at the end of each year.
The so-called white space is an inherent problem with the purchase-ahead model – utilization is horribly inefficient for the first couple of years. This inefficiency makes the effective $/GB much higher than the advertised raw GB rate (in our case, $0.10 per GB), which is the one that’s typically negotiated with the vendor.
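A quick illustrative calculation makes the white-space penalty concrete. Assuming the figures above – 90 TB raw purchased on Day One at the $0.10/GB monthly rate, against a 20 TB requirement growing 3 percent per month – the effective rate per *used* GB looks like this:

```python
# Effective $/GB under the purchase-ahead model (illustrative figures).
RAW_GB = 90_000   # ~90 TB purchased on Day One
RATE = 0.10       # advertised $/GB monthly rate
monthly_cost = RAW_GB * RATE  # $9,000 every month, whether used or not

for year in (1, 2, 3):
    used_gb = 20_000 * 1.03 ** (12 * year)  # actual requirement at year end
    effective = monthly_cost / used_gb      # what you really pay per used GB
    print(f"Year {year}: {used_gb / 1000:.1f} TB used, "
          f"effective ${effective:.2f}/GB")
```

Under these assumptions the effective rate starts around three times the advertised $0.10/GB and only approaches it in the final year, once growth has filled most of the white space.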
What would this look like if the Day One-required capacity was closer to 50 TB or 100 TB? The utilization inefficiencies would be exactly the same, but imagine how much capital is tied up funding all that white space. As capacity increases, so do inefficiencies.
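That scaling argument can be sketched in a few lines. Utilization is a ratio, so it is identical at every scale, but the dollars funding idle capacity grow linearly with the Day One requirement (figures below are illustrative, reusing the ~4.5x buy-ahead ratio and $0.10/GB rate from the earlier example):

```python
# White space at different Day One requirements (illustrative).
# The utilization percentage is scale-invariant; the idle capital is not.
RATE = 0.10  # $/GB monthly, as in the earlier example

for today_tb in (20, 50, 100):
    raw_tb = today_tb * 4.5           # same buy-ahead ratio as the 20 TB case
    idle_tb = raw_tb - today_tb       # Day One white space
    idle_cost = idle_tb * 1000 * RATE  # monthly spend on unused capacity
    util = today_tb / raw_tb * 100     # identical at every scale (~22%)
    print(f"{today_tb} TB start: {util:.0f}% utilized, "
          f"${idle_cost:,.0f}/month funding white space")
```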
Myth No. 2: You save money by planning your budget up front.
Conversations with various IT executives indicate many enterprise IT groups still follow the traditional capital budget request approach for IT projects, with a “use it or lose it” mindset.
Those executives are forced to think about IT projects three to five years down the line. Given technology advancements in compute, networking and storage, this is almost an impossible task. This is especially true for storage technology – not only do IT pros need to figure out capacity requirements, but the sizing of performance tiers and how much flash versus spinning media they’ll need. And what happens when they are blindsided by a new application, or a machine data project, both with demanding storage needs? So much for trying to determine storage requirements three years away – there is way too much uncertainty.
With so much uncertainty, how can IT executives in capital budget-driven organizations accommodate inevitable changing requirements? The simple answer is they cannot, or are forced to grossly over-request and hope for some flexibility on fund availability.
It’s highly unlikely this old-world budget approach is ever going to align with changing enterprise IT needs – where “going back to the well” time and again isn’t an option, and planning cycles beyond a year are problematic. It is therefore very difficult to save IT money under these constraints. A new approach is required, one that aligns with the dynamic IT needs of an organization.
Myth No. 3: Managed services cost more than they are worth.
Just like the Ducati, every storage solution requires some level of care and feeding. There’s a non-zero figure associated with power, cooling, accommodation and ongoing management during the three- or five-year lifecycle. Gartner suggests matching one full-time employee (FTE) with 300 TB of storage. So even a company with modest capacity requires some level of IT support, even if it’s just to get the system up and running.
Managed storage services make a lot of sense for organizations when storage volumes are in the hundreds of TBs and growing – operational expenses, over a multi-year period, can comprise up to 30 percent of storage TCO. Those ongoing operations include swapping failed drives, adding capacity, configuring additional controllers, data migrations and so on. Not exactly the most riveting of IT tasks, but necessary to keep storage running smoothly. A managed service frees up valuable IT cycles to concentrate on more strategic activities, such as application deployments that enable new products or services (and generate income), or save the organization costs.
Learn how to cut storage TCO by 80 percent over three years.