SAN DIEGO -- To say storage virtualization was a part of the discussion at this spring's Storage Networking World (SNW) would be an understatement -- over the course of two days, users were treated to no fewer than 17 sessions on storage virtualization, including two keynote speeches.
Among the satisfied virtualization users was keynote speaker Mark Douglas, vice president of technology for eHarmony.com, who said his company has 100 terabytes (TB) of 3PARdata Inc. storage area network (SAN) storage deployed behind ONStor Inc. network attached storage (NAS) gateways. Douglas said his company chose the 3PAR array because it virtualizes disks in the subsystem for easier provisioning, and the NAS gateway because it doesn't require SAN expertise to run the storage system.
"Storage virtualization meant we can run our entire storage environment with zero dedicated staff," he told the audience.
Another user-presenter, Alejandro Lopez, storage director of technical support services, information and communication services, for the University of California Davis (UC-Davis) Medical Center, said using Hitachi Data Systems' (HDS) Universal Storage Platform (USP) to front IBM Enterprise Storage Server (ESS) and FAStT arrays allowed him to consolidate mainframe and open systems storage. "This is important for [HIPAA] compliance," he told attendees at his session. "We meet HIPAA security requirements in part by centralizing management and access to storage."
However, attendees who had decided on a specific storage virtualization product, let alone implemented one, were in the minority, though many users said they had intensively evaluated products.
Karl Lewis, storage administrator for the University of Michigan's College of Engineering, said he has used virtualization to some extent on two separate NAS systems in his environment -- one an ONStor clustered gateway in front of low-cost disk, the other an EMC Corp. Celerra box that is carved into virtual file shares for different departments.
Lewis said he had considered pooling his entire file storage environment using Acopia Networks' Adaptive Resource (ARX) switch, but walked away from the company's response to his RFP with what he termed "sticker shock." "It costs me less to continue to manage two separate NAS environments than it would to implement an Acopia switch to manage the whole thing," he said.
Cost was also a sticking point for a manager of systems and networking infrastructure at a content delivery company based in California, who preferred to remain anonymous because he is not authorized to speak with the press. This user said he had considered HDS' TagmaStore for virtualization, but with just under 500 TB of capacity under management, he balked at the capacity-based licensing fees. "It's a single solution that could integrate well with our existing systems," he said of TagmaStore. "But the cost to us would be exorbitant."
Another storage administrator with a large company, who spoke on condition of anonymity for legal reasons, said his company tested StoreAge's Storage Virtualization Manager (SVM) product, now owned by LSI Logic Corp., but that testing revealed the product wouldn't support his particular tiered storage migration scheme. "We use primary storage only for I/O and mirror the data to secondary storage almost immediately," he said. Further complicating matters, some of the consistency groups for production databases in his environment are over 2 TB in size.
"We found it a good product, easy to use and the support was excellent," he said. "But we could just never get it to work for our particular needs."
Meanwhile, UC-Davis' Lopez said he's been happy with his virtualization approach and its cost because he never expected it to solve all, or even most, of his storage problems. Of the 120 TB in his environment, Lopez said, the HDS system is fronting just 22 TB. He also told attendees at his session that if they're looking to virtualization as a silver bullet to solve complexity in a disorganized environment, they'll be disappointed.
"I made sure I had a strong foundation and simplified my environment in other ways before implementing virtualization," he said, adding that he's found the right approach is to work his way into virtualization slowly and selectively.
"No matter what anyone tells you, virtualization is a tool, not a solution, and that's where I think some people get bogged down," he said. "But in the end, you have to find a way to do it because growth is pushing you. That's just a fact."