We have been talking about the notion of edge services, or more recently, edge computing, for quite some time. When Ellen and I were conceptualizing what the next generation of infrastructure might look like, we kept revisiting network concepts – users are on one end of a powerful network, and compute and data can flow seamlessly from the cloud to locations proximal to users.
The rationale for this architecture was simple: latency across long distances will always be a problem, and leveraging a cloud for storage or compute does not work for every application. Highly interactive or transactional workloads often do not perform well when separated from their users. Even more vexing is the problem that users generate huge volumes of data. Getting all this data to the cloud can be complex and prohibitively expensive. In a world where sensors, people and machines are constantly uploading, today’s download-optimized internet is too restrictive – like having a single upstream lane adjacent to five downstream lanes on a highway.
Edge services and edge computing have been part of the carrier network vernacular since at least the 1990s. Carriers have operated out of colocation spaces within metro areas since the days of the Bell System, for reasons very similar to those driving today’s landscape:
- Humans find it hard to interact with infrastructure that is too far away.
- It’s much easier to aggregate thousands of a city’s endpoints into a nearby point than to run thousands of lines to, say, New Jersey.
The telco industry created the metro edge to be the place where all services terminated next to the user. Much of today’s network of highly connected colocation centers dates back to the creation of the telco edge. Even today, everyone from cellular carriers to content delivery networks (CDNs) such as Akamai uses an edge computing approach, leveraging local endpoints to make user experiences more natural and manageable.
Edge computing becomes a reality when you extend the cloud – for data storage, in particular – to a metro edge, where it can be managed and consumed as if it were local infrastructure. This means that all applications can leverage the elasticity and economics of the cloud, even when they must run far from it, close to users. A global storage network can easily aggregate large data feeds and allow local processing at the edge, where local users and data sources need it, while still providing options to store and process the same data in remote locations, and even the cloud, as needed.
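As a rough sketch of what “consumed as if it were local infrastructure” can look like, consider an edge storage service that exposes an S3-compatible API: an application talks to a nearby endpoint through a standard client, with no edge-specific code. The endpoint URL, bucket and key below are hypothetical, for illustration only.

```python
import boto3

# Hypothetical metro-edge endpoint; any S3-compatible storage service
# could stand in here. Credentials are resolved from the environment.
edge = boto3.client("s3", endpoint_url="https://edge.metro.example.net")

# Writes land on nearby edge infrastructure, with local latency...
edge.put_object(
    Bucket="sensor-feeds",
    Key="site-42/readings.json",
    Body=b'{"temp_c": 21.4, "ts": "2019-01-01T00:00:00Z"}',
)

# ...yet the same object can later be read, replicated or processed
# from a remote location or the public cloud, depending on policy.
obj = edge.get_object(Bucket="sensor-feeds", Key="site-42/readings.json")
print(obj["Body"].read())
```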
A global storage network architecture can also help solve the problem of managing and analyzing log and machine data. These data streams carry all of the management problems that attend voluminous, incessant data feeds: heavy performance requirements and long retention periods. Even getting this data to the cloud presents an infrastructure challenge, never mind the need to process and aggregate it along the way. Rather than creating enormous data silos and then managing multiple systems and the data movement between them, users can simply plug into the global storage network and take advantage of public cloud scalability with enterprise-class performance and availability.
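To make the log-data point concrete, here is a minimal sketch of an edge log shipper, assuming the same hypothetical S3-compatible edge endpoint as above; the batch size, bucket and key scheme are illustrative choices, not part of any particular product.

```python
import gzip
import io
import time

import boto3

# Same hypothetical metro-edge endpoint as in the previous sketch.
edge = boto3.client("s3", endpoint_url="https://edge.metro.example.net")

def ship_batch(lines, host="host-01", bucket="machine-logs"):
    """Compress a batch of log lines and write it to edge storage.

    Keying objects by host and timestamp lets downstream jobs, at the
    edge or in the cloud, list and process batches without building a
    separate silo or a second data-movement pipeline.
    """
    buf = io.BytesIO()
    with gzip.GzipFile(fileobj=buf, mode="wb") as gz:
        gz.write("\n".join(lines).encode("utf-8"))
    key = f"{host}/{int(time.time())}.log.gz"
    edge.put_object(Bucket=bucket, Key=key, Body=buf.getvalue())

ship_batch(["2019-01-01T00:00:00Z INFO service started"])
```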
Learn more about choosing the right storage architecture for edge computing and machine data analytics.