The term cloud storage has been thrown around by so many vendors and industry pundits that it has just about lost any meaning. One vendor says cloud storage and means a service, another says it and means software infrastructure, while yet another vendor means hardware infrastructure. When asked who the cloud storage vendors are, we could name just about anyone in storage because just about every vendor has a cloud strategy and is providing at least one piece of a solution. At the end of the day, however, cloud storage is a service, just like what storage service providers (SSPs) in the earlier part of the decade attempted to do. While the SSP model didn't work in 2002, the technology has significantly evolved; this time, cloud storage is real.
There are many market drivers for cloud storage, and they are pretty much the same ones that existed in 2002. Just like death and taxes, data growth is a sure thing, even with the economy falling off a cliff. And the economics work. In a 2008 research survey of 516 IT executives at midsized companies, 30% of those surveyed cited a lack of physical space in the data center as a top challenge. In another survey of 504 large enterprise storage buyers, 28% of respondents cited running out of power and cooling capacity as a leading concern.
If all of these factors were in play in 2002, why didn't the SSP market thrive? The dot-com bust was only a small part of the equation. Storage as a service was -- and continues to be -- a good idea; it was just a little ahead of its time.
Several factors inhibited storage service provider growth:
Bandwidth cost. Bandwidth cost and availability were major market inhibitors. A T1 line delivers only 1.5 Mbps and, in many cases, users needed far more. Many were looking for Fibre Channel (FC) connectivity over optical networks, usually two FC connections plus a Gigabit Ethernet (GbE) link. That adds up to roughly 2.5 Gbps to 3 Gbps, which translates to an OC-48 connection. Monthly network fees for the bandwidth and distances required to support the model were exorbitant. Sheer availability of network bandwidth was also an issue: in some locations, you couldn't get connectivity at all, or the last mile would cost a small fortune.
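To make that bandwidth gap concrete, here is a back-of-the-envelope sketch in Python. The 1.0625 Gbps figure assumes first-generation 1GFC links, since the article doesn't state which FC speed was typical; the other numbers are the standard nominal line rates.

```python
# Back-of-the-envelope comparison of the link speeds discussed above.
# Assumption: each FC connection is first-generation 1GFC (1.0625 Gbps);
# the article doesn't specify the FC generation.

T1_MBPS = 1.544      # T1 line rate
FC_GBPS = 1.0625     # 1GFC nominal line rate (assumed)
GBE_GBPS = 1.0       # Gigabit Ethernet
OC48_GBPS = 2.488    # OC-48 SONET line rate

# Two FC links plus one GbE link, as in the SSP-era requirement
required_gbps = 2 * FC_GBPS + GBE_GBPS
print(f"Aggregate requirement: {required_gbps:.3f} Gbps")

# A single T1 covers only a tiny fraction of that demand
fraction = T1_MBPS / (required_gbps * 1000)
print(f"One T1 covers {fraction:.3%} of the requirement")

# OC-48 is the nearest carrier-grade circuit in that range
print(f"OC-48 line rate: {OC48_GBPS} Gbps")
```

Under these assumptions the aggregate comes to about 3.1 Gbps, roughly two thousand times what a T1 provides, which is why only an OC-48-class circuit could plausibly carry the load.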
Using the wrong storage platforms. The original SSP model was to take the massive arrays offered by storage vendors like EMC Corp. and Hitachi Data Systems and leverage them as consolidation platforms. The data was absolutely safe in big iron arrays and some economy of scale could be realized through multitenancy -- hosting lots of different customers on a single array -- but the big iron platforms from EMC and Hitachi weren't designed to support these environments. No matter how many tenants were housed in a single array, the break-even economies of scale didn't work.
Targeting the wrong applications. Rather than going after less-used persistent data or remote storage as an archive tier, SSPs focused on offsite primary storage for any and all applications. They ignored the latency issues associated with supporting I/O-intensive applications remotely. The workaround was to put storage points of presence (PoPs) everywhere, which was an extremely expensive proposition.
Today, the Internet reaches every corner of the world, effectively creating a flat global network with few, if any, barriers to connectivity. The combination of wide-area network (WAN) acceleration and ubiquitous network connectivity allows business to be conducted anywhere. On the platform front, scale-out, commodity-based platforms that provide massive scalability, parallel data transfers and economies of scale not just for hardware, but for ease of use and management, are available. And, today, the application profiles that can withstand latency associated with storing data remotely are better understood. Now cloud storage can be part of a storage tiering model for persistent data.
The consumer and small office/home office (SoHo) markets, along with Web 2.0 businesses, will continue to be early adopters. Larger enterprises will proceed with caution, as there is a high degree of risk aversion in this segment. It's more likely that large enterprises will deploy purpose-built private clouds for bulk storage of persistent data and for archive -- the move to disk-based archive clouds within the four walls of IT has been in process for some time. Eventually, even large enterprises will look to the cloud as a storage utility and an integrated part of a storage tiering model. Cloud storage is still in its early growth stage, and it will take a long time to evolve to become core to enterprise IT. But this time, the technology has come far enough to make the dream of a storage utility a reality.
BIO: Terri McClure is a storage analyst at the Enterprise Strategy Group, Milford, Mass.
This was first published in April 2009