The public cloud model should be a no-brainer for enterprise data. Rather than build out storage infrastructure yourself, you get infinite capacity on demand in a multi-tenant environment that keeps upfront costs down. In reality, though, the cloud today is mainly relegated to archival data, backup, disaster recovery and other workloads without stringent performance requirements. If the benefits of the cloud are impossible to deny, why aren't enterprises adopting it more widely?
Although there is much discussion about security and control as blockers to cloud adoption, a less-discussed but potentially more serious problem is latency. The distance separating companies and their users from cloud infrastructure turns out to be a big issue, especially when using the public cloud to store data while keeping compute workloads in-house. Even though data travels at the speed of light, users and software can easily perceive the time it takes to move across long distances through complex networks.
Many companies would love to use the cloud as a new approach to storage management, both for the on-demand scalability and because a massive storage infrastructure is onerous to maintain internally. Unfortunately, they find that the high latency and unpredictable performance of internet transport can make the public cloud unsuitable for many production applications. Based on our discussions with industry analysts, we've heard that as many as 50 percent of cloud customers have brought workloads back on-premises due to latency and performance issues.
This latency is due to the economics of public clouds, as well as a little physics. Because of the speed-of-light limit, latency grows the further away data sits. And the public cloud is a regional play, which compounds the problem: Amazon, Google, Microsoft and others build huge data centers where land and power are cheap, usually hundreds or thousands of miles apart and often far from customers' locations. For storing massive volumes of archival data at low cost, the economies of scale are fantastic. But for today's business applications, where users expect real-time response, the delays and unpredictability of the public cloud can make it hard to use for more active and performance-sensitive workloads.
One example of a high-cost solution to this problem can be found in Michael Lewis' Flash Boys: an 827-mile cable was built through mountains and under rivers from Chicago to New Jersey to cut round-trip latency from 17 to 13 milliseconds, giving financial traders a competitive edge. Most enterprises wouldn't dream of going to these extremes (and couldn't afford to even if they wanted to), but they still need low latency for their business application workloads. And they are still eager to get the scalability, agility and on-demand pricing of the cloud.
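That 13-millisecond figure is close to the physical floor. As a rough sketch (assuming light in optical fiber travels at about two-thirds of its vacuum speed, i.e. a refractive index of roughly 1.47), the theoretical minimum round-trip time over a given fiber distance works out like this:

```python
# Back-of-the-envelope check: minimum round-trip time for light in optical
# fiber over the ~827-mile Chicago-to-New Jersey route from Flash Boys.
# The refractive index below is a typical assumed value, not a measured one.

C_VACUUM_KM_S = 299_792   # speed of light in vacuum, km/s
FIBER_INDEX = 1.47        # typical refractive index of fiber (assumption)
MILES_TO_KM = 1.609344

def min_round_trip_ms(distance_miles: float) -> float:
    """Theoretical minimum round-trip time over fiber, in milliseconds."""
    distance_km = distance_miles * MILES_TO_KM
    fiber_speed_km_s = C_VACUUM_KM_S / FIBER_INDEX  # ~204,000 km/s
    return 2 * distance_km / fiber_speed_km_s * 1000

print(f"{min_round_trip_ms(827):.1f} ms")  # ~13 ms round trip
```

In other words, a straight 827-mile fiber run bottoms out at roughly 13 milliseconds round trip no matter how good the hardware is; the original 17-millisecond path was simply longer and less direct. No amount of engineering buys back the speed of light.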
Before giving up on the cloud, companies sometimes look for workarounds to try to get acceptable performance: beefing up their network infrastructure, rebuilding their applications, or finding ways to route traffic more efficiently, such as leveraging cloud exchanges and network hubs in major colocation sites. But all these options require large capital commitments and are still limited by the speed of light. For the cloud to fulfill its promise, enterprises will need better and more cost-effective solutions for latency. Stay tuned for our thoughts on innovative approaches to these challenges.