NEW YORK -- According to users and analysts at the Storage Decisions conference this week, storage virtualization products that create an abstraction layer over the entire environment are finally making their way into production.
Most notable among the technology's early adopters is Jeff Boles, IT manager for the City of Mesa, Ariz. He uses startup Incipient Inc.'s split-path virtualization product, the Incipient Network Storage Platform (iNSP), which runs on a blade on the Cisco Systems Inc. MDS director.
Debate continues in the industry about the proper place for virtualization, but Boles said he feels strongly that it belongs in the switch. In fact, he said, he bought Cisco's MDS director, as well as the Storage Services Manager (SSM) blade, specifically for the opportunity to manage his environment from the fabric through a product like Incipient's. The MDS manages virtual storage area networks (VSANs) that separate data paths for each of the city's 24 departments.
Change also happens quickly in his environment, he said -- for example, the city's court system has begun a document-scanning project expected to see the addition of 20,000 document files per day -- and the IT department typically has very little lead time to prepare for new projects. In order to be able to add storage quickly and at the best prices, Boles said, having a fabric-level virtualization product that could take snapshots, migrate data and provision across a heterogeneous environment was ideal.
So far, the City of Mesa is using the iNSP on 150 terabytes (TB) of Hewlett-Packard Co. (HP) EVA6000 arrays, but it is hoping to add a 7 TB EMC Corp. Symmetrix array to the mix.
Though the management of heterogeneous storage remains an unmet goal in his shop for the time being, Boles said he has already seen a benefit within his HP environment. Specifically, he said, he's content with the performance of the EVA6000, but were it not for the ability to pool his virtualized storage, he might have to buy the EVA8000 simply for the added capacity -- an expensive proposition. "And I'd still be managing separate SAN islands," he added.
Replication and disaster recovery are not problems he's been able to solve yet using virtualization, Boles said. For now, he'd rather focus on improving his primary storage environment before branching out into more services.
IBM SVC has 2,200 customers and growing
Speaking to attendees alongside Boles during a panel discussion at the conference was IBM SAN Volume Controller (SVC) user J. Nick Otto, IT manager for Circuit City. Otto said he dove into virtualization headfirst in an effort to keep spending on his primary storage flat. Circuit City has the SVC, an in-band product consisting of software on clustered commodity Intel Corp. servers, in a four-node configuration in its primary data center and a two-node setup in its secondary data center.
In the primary data center, the SVC shuttles data between three tiers of storage, all of it IBM -- a DS8100 and DS4800 array for Tier-1, a DS4500 for Tier-2 and a DS4100 for Tier-3. Tier-1 is Fibre Channel (FC) disk; Tier-2 and Tier-3 use SATA disk. Because of the 16 GB of cache on the SVC, as well as the load balancing it performs across arrays, Tier-2 and Tier-3 performance was improved enough to keep Tier-1 spending flat, a savings of $1 million in his 150 TB environment, Otto said.
While implementing the SVC, Otto said, Circuit City also upgraded its Brocade Communications Systems Inc. switching fabric from 1 Gbps switches to 4 Gbps models and from multiple edge switches to SilkWorm 48000 directors. During this update, 156 TB of data were migrated through the SVC within 60 days without a single outage, according to Otto.
Some attendees still wary
Otto was gung ho in his presentation, but was questioned during a Q&A period by other users, including one who said he had investigated the SVC but hadn't liked it because it was a "black box," and he feared it would cut off his visibility into the environment when he needed it.
"I've bet the farm on virtualization," Otto admitted, but added that IBM's TotalStorage Productivity Center had also been a necessary investment in order to get good monitoring and reporting on his storage environment. "We did have to make a fairly substantial investment."
Both Boles and Otto were questioned on their ability to back out of their virtualization products if necessary. "I wouldn't want to do it because we'd take a huge performance hit on our lower tiers of storage -- it would be a huge headache," Otto said. But in terms of distribution of his data, he said ripping out the SVC, if absolutely necessary, didn't worry him.
"Incipient's product maintains a fairly intact block-data stream," according to Boles. He, too, said he was committed to a virtualized environment and had not encountered any issues so far, "but of course, you want to be cautious about making such a big change, and be able to backtrack if it's absolutely necessary."
The big picture: Thinking outside the box … literally
Analysts at the show also said they saw virtualization making a stronger push in the market, though it's still only beginning.
"It's past the hype and then the low-point stage any new technology goes through, where first there's lots of noise around it and then some letdown," said Arun Taneja, founder and consulting analyst with the Taneja Group.
In his keynote speech on Thursday morning, Enterprise Strategy Group founder and analyst Steve Duplessie predicted that ultimately, IT would rely on an entirely virtualized data center with an overarching operating system (OS) like that on a PC, which would allow the allocation of storage and memory resources as necessary without user intervention.
"You don't need to decide how much virtual memory your PC should allocate to iTunes," Duplessie said.
Though that future is still a long way off, Duplessie urged attendees to start soon in applying new virtualization schemes. "You're already using virtualization in some form if you're using RAID," he said. "Any move up the continuum is worth it."