Storage pundits (me included) are sometimes surprised by how little of what we talk about actually finds its way into practice. We take for granted that companies use centralized Fibre Channel (FC)-attached storage for most of their data center servers. We assume advanced features such as mirroring, replication and flexible provisioning are widely used. In short, we think what we talk about is what you do.
We're wrong, of course. According to my own research, centralized storage area network (SAN) storage is used by just 20% of data center servers. And many companies have little FC hardware, which is often limited to a few disk trays attached to a single, important server. The fact that Dell, Hewlett-Packard (HP) and Sun Microsystems are among the top five vendors in terms of storage hardware market share should clue us in that advanced enterprise storage just isn't as widespread as many believe it is.
The real state of storage
In the real world, direct-attached storage (DAS) still dominates. Most data centers are full of servers using internal disks or small, dedicated external storage arrays, especially in the Windows market. And with Windows servers accounting for 80% of data center servers, SANs still seem alien to many IT managers.
Even where FC is in use, many sites connect only a few servers to their SAN. SANs are supposed to offer flexibility and resource sharing, but most small SANs don't share resources at all. Instead, FC is used as a high-performance bus for dedicated disk and tape devices.
The advanced features offered by enterprise storage devices are also largely unused. I'm often surprised to find that sites that have purchased array-based mirroring and replication features aren't using them.
But that's not always the case. A few large sites have hundreds of servers on their SANs, use all the latest storage technologies, and pressure their vendors for improved features and performance. Many smaller environments have 10 to 20 servers attached to one or two storage systems, but many others use DAS. The majority of data centers have no enterprise storage resources, relying instead on internal disks and server-based software for RAID, mirroring and replication.
The network effect
The problem, in many cases, comes down to Metcalfe's Law or the network effect. How useful would a phone be if only one person in 10 had one? Similarly, the usefulness of many storage technologies is proportional to their penetration into the data center. In other words, administrators won't learn and use a technology if only one or two systems access it. When technologies become ubiquitous, they grow in importance and usefulness.
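The intuition behind Metcalfe's Law can be sketched numerically. If a network's usefulness is roughly proportional to the number of possible connections among its participants, n(n-1)/2, then attaching the last servers to a SAN adds far more value than attaching the first few. A minimal sketch (the server counts are illustrative, not from this column):

```python
def network_value(n: int) -> int:
    """Possible pairwise connections among n participants (Metcalfe's Law)."""
    return n * (n - 1) // 2

# Illustrative: a 100-server data center at increasing levels of SAN attachment.
for attached in (10, 20, 50, 100):
    print(f"{attached:3d} servers attached -> {network_value(attached):4d} possible connections")
```

Going from 10 attached servers to 100 is a 10x increase in servers but roughly a 110x increase in possible connections (45 versus 4,950), which is why a technology touching only one or two systems never feels worth learning.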
Consider storage replication. It's tricky to set up, eats expensive bandwidth between sites and requires huge storage systems at both ends. Even with the obvious benefit of having an identical copy of key data on hand, is it any wonder few businesses implement it? Add in that only 20% of the servers in the average data center are even attached to replication-capable storage arrays and the network effect looms even larger.
Price is part of the problem. The price of a gigabyte on an array has dropped dramatically, with low-end modular arrays costing just a bit more than plain disk trays. But attaching a server to a SAN still costs about $2,000 in host bus adapters (HBAs) and switches.
Another problem is approachability. FC storage is a world of new terminology and vendors for a busy IT worker to learn. FC is simply too different for many, and others were burned by earlier technologies that failed to catch on, such as Token Ring and Fiber Distributed Data Interface (FDDI).
Bringing storage to Main Street
There's a notable exception to the low rate of networked storage penetration into the small- and medium-sized business (SMB) market: network-attached storage (NAS). Network Appliance (NetApp) built a business selling devices that are simple to set up and provide a high level of functionality.
The argument for NAS is compelling: You can leverage technologies you know, such as the Common Internet File System, Ethernet and IP, and consolidate hard-to-manage Windows file servers for about the cost of a few servers. It's a fright-free prospect and it's blossoming everywhere SANs have refused to take root.
Despite vendor arguments to the contrary, NAS isn't the right storage solution for every server. But iSCSI may bring NAS simplicity to the rest. iSCSI leverages IP and the Ethernet ports in most servers and LANs. Microsoft demonstrated its support by adding iSCSI client software to Windows, and every storage vendor has an iSCSI array on the market or in the works.
iSCSI will bring every server in every data center into the SAN world. Think of the network effect once the application data from every server is consolidated on a smart array: replication would become the rule and drive mirroring just another tool.
Another technology might just bring centralized storage to the rest of the data center, and it's not a storage technology. Server consolidation is the current rage in the systems world, and blade servers and server virtualization are becoming commonplace. The reason is the proliferation of single-application servers. Data centers have become stuffed with rows of servers, each dedicated to a single task and underutilized.
Application owners and vendors balk at sharing a server, so companies are adopting server consolidation technologies. Microsoft's new Virtual Server and VMware, an EMC company, run multiple virtual Windows instances on a single server. Blade servers are tiny servers that huddle together in a frame. Both technologies physically consolidate systems and share precious resources such as power supplies, network adapters and storage.
Imagine having one-tenth as many servers in the data center, each a little larger than today's, but each appearing as a dozen or more virtual servers. In terms of storage, virtual servers can share the same FC hardware, effectively cutting the cost from $2,000 to $200 per server. And iSCSI is still an option.
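The arithmetic behind that estimate is simple. Using the column's rough figure of $2,000 per FC attachment (HBAs plus switches), a host running ten virtual servers yields the $200-per-server number, and "a dozen or more" drives it lower still:

```python
FC_ATTACH_COST = 2_000  # approximate per-host cost of HBAs and switches (from the column)

# Per-virtual-server cost at different consolidation ratios (counts are illustrative).
for vms_per_host in (1, 10, 12, 20):
    per_vm = FC_ATTACH_COST / vms_per_host
    print(f"{vms_per_host:2d} virtual servers per host -> ${per_vm:,.0f} each")
```

At ten virtual servers per host, the cost of FC attachment falls from $2,000 to $200 per server, which is roughly iSCSI territory.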
One particularly interesting feature of VMware is that each virtual server uses a file as its disk image. Want to replicate the entire thing to New Jersey for disaster recovery? Just send the image files. The hardware on the other side doesn't have to be identical--in a pinch, a few bargain PCs or IT laptops would work.
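Because the virtual machine's disk is just a file, "sending the image files" can be as mundane as a copy job. A minimal sketch of the idea (the paths, file names and helper function are hypothetical; a real setup would likely use rsync or a replication product rather than a plain copy):

```python
import shutil
from pathlib import Path

# Hypothetical locations: a local VMware image directory and a mounted
# share at the disaster-recovery site. Neither name comes from the column.
SOURCE = Path("/vmware/images")
DR_TARGET = Path("/mnt/dr-site/images")

def replicate_images(source: Path, target: Path) -> list[str]:
    """Copy every virtual-disk image file to the DR target; return the names copied."""
    target.mkdir(parents=True, exist_ok=True)
    copied = []
    for image in sorted(source.glob("*.vmdk")):
        shutil.copy2(image, target / image.name)  # copy2 preserves timestamps
        copied.append(image.name)
    return copied
```

The receiving hardware only needs to run the virtualization software, not match the original servers, which is what makes the bargain-PC recovery scenario plausible.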
A revolution at hand
No matter how you look at it, there's a revolution brewing that will tear down the barriers to consolidated enterprise storage. NAS is leading the way, gaining acceptance in the places SAN has failed to reach. Server consolidation makes shared storage more enticing, while also making it more critical. And iSCSI promises to remove the obstacles of cost and approachability that have plagued FC storage.
The positioning of the major storage players is also interesting. With last year's acquisition of VMware, EMC continues to act like the New York Yankees of storage--always offering a roster as good as (or better than) anyone else's. Dell and HP have quietly put together lineups that should help lock up a place in the playoffs.
But NetApp appears to have the SMB crown in the bag. They rule the NAS market, and the iSCSI support built into the same arrays is in use today. Like the Red Sox going into game four of the World Series with a 3-0 lead, how could they help but come out on top? And as the Yankees learned, even a big lead can be lost to a determined opponent.