In this special guest feature, Ellen Rubin, CEO and co-founder of ClearSky Data, discusses three ways to extend the use of the cloud as effectively as possible when managing the increasing volume of machine-generated data. Ellen is an experienced entrepreneur with a record of leading strategy, market positioning and go-to-market efforts for fast-growing companies. ClearSky Data’s global storage network simplifies the entire data lifecycle and delivers enterprise storage as a fully managed service. Most recently, Ellen was co-founder of CloudSwitch, a cloud-enablement software company that was acquired by Verizon in 2011.
Machine data growth is like a tidal wave: it’s gaining speed, and it will soon be out of control. According to IDC, 42 percent of all data will be machine generated by 2020, including data from sensors, security systems, networks, servers, storage and applications. Given the scale of data coming from these sources, many organizations are struggling to stay afloat.
The storage systems of the past weren’t built to manage data at this scale. Logically, the public cloud should solve the machine data problem. Moving machine data off enterprise infrastructure and into cloud services gives enterprises a chance to stop over-provisioning physical storage capacity and shrink their data center footprints – a goal that many CIOs and industry leaders are working toward. However, machine data is frequently generated by on-premises applications and processes. Hosting it in the cloud can create latency issues, rack up access fees and negate the cloud’s intended value.