Storage virtualization products: An important niche

Storage virtualization products help IT shops in need of non-disruptive data migration, tech refreshes and life support for aging storage assets.

Discrete storage virtualization products that provide a single point of control for a pool of block-based disk arrays certainly didn’t catch on the way that hotter technologies such as virtual servers and data deduplication did. And their present status and future outlook can be tricky to gauge, given the mixed signals.

On one hand, the number of storage virtualization products has declined, and adoption trends suggest a possible weakening in interest. And there’s little discussion anymore about managing storage from different vendors as one large pool, as originally promoted. On the other hand, surviving storage virtualization products have become valuable tools, especially for the purpose of non-disruptive data migrations and technology refreshes.

“The market segment will remain a niche but an important one,” said David Vellante, founder of The Wikibon Project, a community that promotes best practices in IT.

Potential customers once found a crowded landscape of storage virtualization products with deployment options in appliances, arrays, switches and servers. Today, a fraction of the choices remain and one category – switch-based virtualization – is near extinction.

Although EMC Corp. still claims to sell and provide maintenance releases for its switch-based Invista storage virtualization product, industry analysts say they’ve encountered few, if any, customers and have seen no substantial product updates for at least 18 months.

Instead, EMC now calls the VPLEX appliance it introduced last year its “primary storage virtualization solution.” VPLEX aims to federate both EMC and non-EMC storage, but its ultimate goal is to enable the pooling of multiple data centers over unlimited distances, in keeping with the model of cloud computing and cloud storage.

Hewlett-Packard Co. last year scrapped development of its StorageWorks SAN Virtualization Services Platform (SVSP) switch-based appliance, which was built on technology from LSI Corp. HP now promotes the virtualization capabilities of its StorageWorks P4000 SAN, StorageWorks P9500 disk array and StorageWorks X9000 NAS for heterogeneous storage.

Non-switch approaches fare better

But storage virtualization appliances, arrays and software remain on the market, and some have proven popular.

Successful storage virtualization products include IBM’s SAN Volume Controller (SVC) and NetApp Inc.’s V-Series appliances. NetApp’s Mike Riley, director of technology and strategy, claimed that V-Series is the company’s fastest-growing product over the past two years.

“The appliance model has won because it’s very straightforward,” said Gene Ruth, research director at Stamford, Conn.-based Gartner Inc. “It is similar to the disk products from the same companies. There are a lot of operational and functional similarities. It’s a simpler install, and it’s just a much more understandable product.”

Ruth said switch-based products, by contrast, turned out to be “too complicated for folks to understand and implement.” He advised potential users to set their sights beyond storage virtualization when selecting a technology vendor, because the appliance often becomes part of a company’s larger storage strategy.

“The virtualization products are Trojan horses, and I don’t mean it in a negative way,” he said. “You put a V-Series product into your environment, and that is opening the door for that NetApp salesman to convince you why you should buy more NetApp equipment to go around that V-Series product, for all good reason. Once you accept the management tools, then it would make good sense to continue to buy NetApp as long as their pricing is good.”

The V-Series can also help NetApp customers get more life out of storage they already own. For instance, the J. Craig Venter Institute in Rockville, Md., was already a NetApp NAS customer when it purchased a V-Series V3070 cluster. Eddy Navarro, the institute’s computer systems manager, said he hoped to make better use of the third-party block-based storage arrays it inherited via a merger. He wanted to present the block storage through NetApp’s WAFL file system and leverage the NFS and CIFS protocols.

“We were already a strong NetApp shop, and we just wanted to leverage additional NetApp technologies,” said Navarro. “It worked seamlessly.”

Practical considerations drove the Canadian city of Coquitlam, British Columbia, to remain loyal to its storage vendor, Hitachi Data Systems. While doing a VMware Inc. ESX Server upgrade, the IT staff discovered the city’s old HDS Thunder 9570 disk arrays weren’t certified with the new version.

Compugen Inc., Coquitlam’s systems integrator, suggested HDS’ Universal Storage Platform VM (USP VM), IBM’s SVC or NetApp’s V-Series to solve the problem. But using a third-party product with the Thunder 9570 would have meant data transfer and downtime, whereas the USP VM could simply ingest the 9570’s LUNs, according to Andrew Tolentino, a senior storage architect at Compugen.
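To make Tolentino’s point concrete, here is a minimal, vendor-neutral sketch in Python (the class and variable names are hypothetical, not any product’s API) of what an external virtualization layer does: hosts see a stable virtual LUN, while the back-end LUN it maps to can be an ingested LUN on an existing array today and a LUN on a replacement array tomorrow, with the copy and cutover handled behind the scenes.

```python
# Conceptual sketch only (no vendor's actual API): an external virtualization
# layer presents a host-facing LUN whose back-end target can change without
# the host ever seeing a different device.

class BackendLUN:
    """Stand-in for a LUN on a physical array."""

    def __init__(self, array_name, block_count=4):
        self.array_name = array_name
        self.block_count = block_count
        self.blocks = {b: f"{array_name}-data-{b}" for b in range(block_count)}

    def read(self, block):
        return self.blocks[block]

    def write(self, block, data):
        self.blocks[block] = data


class VirtualLUN:
    """Host-facing LUN; its identity never changes, its back end can."""

    def __init__(self, name, backend_lun):
        self.name = name            # what the host sees; stays constant
        self.backend = backend_lun  # physical LUN actually holding the data

    def read(self, block):
        return self.backend.read(block)

    def migrate_to(self, new_backend):
        # Copy data in the background, then switch the mapping.
        for block in range(self.backend.block_count):
            new_backend.write(block, self.backend.read(block))
        self.backend = new_backend  # host path and LUN identity are unchanged


# "Ingest" a LUN from an aging array, then refresh to a new array later.
old_array_lun = BackendLUN("legacy-array")
vlun = VirtualLUN("host-visible-lun-0", old_array_lun)  # no data copy needed
print(vlun.read(0))                                     # served from legacy array

vlun.migrate_to(BackendLUN("new-array"))                # background copy + remap
print(vlun.read(0))                                     # same LUN, new back end
```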

“Virtualization was not even on our radar. We didn’t even know there were such products,” said Darren Browett, the city’s technical services manager. But the 2008 USP VM decision, in turn, influenced last year’s choice of HDS’ Adaptable Modular Storage (AMS) 2100s when it came time to replace the 9570s, Browett added.

Virtualization as a feature

Storage systems and storage virtualization have become so intertwined that Marc Staimer, president of Dragon Slayer Consulting in Beaverton, Ore., argues that storage virtualization is no longer a discrete market but rather a product feature. “Today, for all intents and purposes, it’s not sold unless it’s bundled with storage,” Staimer said.

Staimer pointed out that even one of the most successful appliances, IBM’s SVC, is typically sold with IBM storage as a complete system. He also postulated that the most prominent “pure-play” vendor, DataCore Software Corp., with its software-based SANsymphony, is really a “storage stack vendor.”

Indeed, that’s the way one DataCore customer viewed its purchase of SANsymphony about three years ago. For Truly Nolen of America Inc. in Tucson, Ariz., the deployment of DataCore’s software on HP servers, attached to HP Modular Smart Array (MSA) 70s, constituted its first SAN.

“My SAN is my storage virtualization,” said Themis Tokkaris, a systems engineer at the pest control company.

By contrast, one DataCore customer that adopted SANsymphony more than 10 years ago sought out more of the traditional benefits of storage virtualization. Gabriel Sandu, senior director of technical services at Maimonides Medical Center in New York, wanted to swap out storage systems and migrate data between arrays without downtime, and to pick up functionality such as thin provisioning and synchronous mirroring across geographically dispersed data centers.

“We save tons of money with it,” said Sandu. “When EMC and IBM and Hitachi were selling hard drives, and people were utilizing them 40% and 60%, we were utilizing everything 120% or more because we did thin provisioning. We were serving up space that we didn’t purchase yet.”
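Sandu’s arithmetic is easier to follow with a small illustration. The sketch below (hypothetical names, not DataCore’s software) models thin provisioning: volumes are created at their advertised sizes, but physical capacity is consumed only as data is actually written, so the total advertised to servers can exceed the disk that was purchased.

```python
# Illustrative model of thin provisioning (hypothetical, no vendor API):
# provisioned capacity can exceed physical capacity because space is
# consumed only when data is actually written.

class ThinPool:
    def __init__(self, physical_gb):
        self.physical_gb = physical_gb   # capacity actually purchased
        self.provisioned_gb = 0          # sum of advertised volume sizes
        self.written_gb = 0              # space actually consumed

    def create_volume(self, size_gb):
        # No physical space is reserved up front; only the promise is recorded.
        self.provisioned_gb += size_gb

    def write(self, gb):
        if self.written_gb + gb > self.physical_gb:
            raise RuntimeError("pool exhausted: time to buy more disk")
        self.written_gb += gb

    @property
    def overcommit_pct(self):
        return 100 * self.provisioned_gb / self.physical_gb


pool = ThinPool(physical_gb=10_000)   # 10 TB purchased
for _ in range(12):
    pool.create_volume(1_000)         # twelve 1 TB volumes advertised

pool.write(4_500)                     # only 4.5 TB actually written so far
print(f"provisioned: {pool.overcommit_pct:.0f}% of physical")            # 120%
print(f"used: {100 * pool.written_gb / pool.physical_gb:.0f}% of physical")
```

The catch, of course, is the exhaustion branch: an overcommitted pool has to be monitored so that writes never outrun the disk actually on the floor.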

The broad and ever-expanding definition of storage virtualization can confuse administrators more than help them. Some say storage virtualization has always existed in storage arrays in the form of RAID sets. Others equate thin provisioning and similar technologies with storage virtualization. Murky definitions, in turn, leave adoption statistics open to interpretation.

The InfoPro Inc.’s periodic sampling of Fortune 1000 companies shows interest in block virtualization is flat or declining. Those already using storage virtualization appear likely to stick with the technology, but the percentage with no plans for it rose from 43% to 57% between 2006 and 2010, according to Marco Coulter, managing director of the storage practice at the New York research firm.

Likewise, the Storage magazine/SearchStorage.com Spring 2011 Purchasing Intention survey showed the storage array was the most popular place for storage virtualization. HDS’ USP V, USP VM and VSP systems are the main source of this type of virtualization.

Whatever the form of storage virtualization, some analysts still see the near-term future as bright. Arun Taneja, founder and consulting analyst at Taneja Group in Hopkinton, Mass., said the prognosis remains good because the problems that technology addresses continue to be significant pain points.

“Customers are still going to have storage lying around that they could put to use, and non-disruptive migration continues to be a major headache for the industry,” Taneja said. “With more and more of the new products that are being brought into the marketplace, companies are thinking about [building in] non-disruptive migration. Once that happens, then the big use case for external virtualization will disappear, but I don’t see that happening at least for the next five years.”
