When I worked with my first SAN, it was the late nineties and I was leading a storage QA team at Digital. Most servers at that time had hard disks directly attached, either internally or externally via parallel SCSI cables. My team got a FC-AL based array to test.
Figure 1 - Single-site on-premises SAN architectural diagram
Fibre Channel Arbitrated Loop (FC-AL) was the state-of-the-art, and fortunately short-lived, predecessor to the switch-based SAN connectivity methods still in use today. However, the core SAN concepts introduced in the late 1990s remain the core concepts of today's SANs: centralize the management of your storage, eliminate unused islands of capacity, and gain better performance by spreading workloads across multiple disks.
These new ideas, and the companies that backed them, changed the landscape of how enterprises managed their data, and spawned all manner of new jobs and skills dedicated to managing storage. Companies began adopting SANs at a rapid rate. This involved purchasing new infrastructure components, such as:
- Separate switching infrastructure
- External SAN controllers
- Hard drives and SSDs from storage vendors
- Software licenses for extra features
SANs also introduced a new set of management costs into the business environment. IT needed to perform capacity and performance planning to ensure there would always be enough of both to satisfy upcoming end-user demand. This planning had to be done well in advance, because procuring, installing, and setting up new equipment involves lead time if it is to be ready in time to meet user demand. IT also had to manage the hardware lifecycle as components aged out. Still, the benefits of running on-premises SANs far outweighed the associated costs.
Protecting your data with DR as your insurance policy
Figure 2 - Protecting your data with a multi-site architecture
The original core principles, benefits, and costs of SAN technology remain true in today's data centers. The amount of data stored in SANs has grown exponentially since they were first introduced, but that growth came with new challenges. Enterprises on both the customer and the vendor side of the storage industry quickly realized that while SANs offered tremendous benefits, they carried inherent risks. One of the main risks of operating a SAN was, and still is, that you put all of your eggs in a single basket: if the SAN fails, you could lose all of your data. With the world moving into the digital age and more and more business-critical records stored digitally, a partial or total loss of data could have catastrophic effects. Protecting against that potential data loss required new strategies and ideas. The solution evolved into today's costly, fully redundant architecture that has only one job: to act as an insurance policy in case of an outage at the primary site.
Along came array-based replication technology, which offered the ability to create a second copy of data located at a different physical site that would be accessible rapidly in the event of a disaster. The benefits of this insurance policy are tremendous, but there are a lot of associated costs, including:
- The second SAN, including hardware, software, and support
- Replication licenses and possibly additional hardware
- Sufficient bandwidth to support the movement of this data, whether over private Ethernet links or public internet links with appropriate encryption
- Additional expertise and training for the IT staff
What if there was a way to get the performance and security of on-premises SAN technology, combined with the ability to access your data from multiple locations without needing to manage the lifecycle of your SAN or the replication of your data?
The future of enterprise storage
Figure 3 - Connecting multiple customer sites to the ClearSky global storage network
ClearSky Data’s global storage network was created to do just that, delivered as a managed service. The network is accessed inside a customer’s data center in exactly the same manner as an on-premises SAN, through block-level protocols. You simply tell us where your data centers are located, and we place an edge cache device, which we own and manage, inside each one. The edge cache plugs into an existing SAN network, and compute nodes access volumes exported as LUNs just as they would with any other on-premises SAN. That is where the similarities to a traditional SAN end.
Instead of storing your data on-premises, we store your data in our network, where we make multiple copies of it that are automatically geographically dispersed. Inside your data center we store a cached copy of the hottest and most frequently accessed data, and we keep a copy of your warm data a short distance away inside our local POP. This set of cached copies can change in real-time to meet the needs of your applications. This new approach to data storage delivers a number of benefits, such as:
- Capacity planning: We always have as much capacity as you need to scale on demand
- Performance: Your data is automatically placed as close to your applications as needed
- Redundancy: If your entire site goes down, you lose no data, just like synchronous replication
- Access: You can access the same data from another site, without needing to move it there
- Lifecycle: We manage, maintain, and upgrade all aspects of the service non-disruptively
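The hot/warm/cold placement described above behaves much like a multi-tier LRU cache. Below is a minimal, hypothetical Python sketch of that idea (this is not ClearSky's actual implementation; the class, tier names, and capacity parameters are invented for illustration): the hottest blocks sit in a small on-premises edge cache, blocks evicted from the edge demote to a larger nearby POP tier, and every block always has a durable copy in the cloud backing store.

```python
from collections import OrderedDict

class TieredCache:
    """Illustrative sketch only: two LRU tiers (edge and POP) in front
    of a durable cloud backing store that holds every block."""

    def __init__(self, edge_capacity, pop_capacity):
        self.edge = OrderedDict()   # hottest blocks, on-premises
        self.pop = OrderedDict()    # warm blocks, in a nearby POP
        self.cloud = {}             # durable, authoritative copies
        self.edge_capacity = edge_capacity
        self.pop_capacity = pop_capacity

    def write(self, block_id, data):
        self.cloud[block_id] = data  # durable copy is always kept
        self._promote(block_id, data)

    def read(self, block_id):
        if block_id in self.edge:            # edge hit: fastest path
            self.edge.move_to_end(block_id)
            return self.edge[block_id]
        if block_id in self.pop:             # warm hit: pull from POP
            data = self.pop.pop(block_id)
        else:                                # cold miss: fetch from cloud
            data = self.cloud[block_id]
        self._promote(block_id, data)        # recently read data heats up
        return data

    def _promote(self, block_id, data):
        self.edge[block_id] = data
        self.edge.move_to_end(block_id)
        if len(self.edge) > self.edge_capacity:
            old_id, old_data = self.edge.popitem(last=False)
            self.pop[old_id] = old_data      # demote LRU block to POP
            if len(self.pop) > self.pop_capacity:
                self.pop.popitem(last=False) # cold data lives in cloud only
```

Reads that hit the edge return immediately, warm hits promote the block back to the edge, and fully cold blocks are served from the durable cloud copies, which is why losing a site loses no data.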
Join me live on Feb 25th for an interactive discussion and Q&A, and learn how you can stop managing your storage and start choosing how to access your data with ClearSky’s global storage network.