Published: 03 Jun 2002
As head of storage management for insurance giant Aetna, Hartford, CT, Steve Pomposi has plenty of reasons to be excited about storage virtualization. After all, he manages a large, heterogeneous storage infrastructure at Aetna that contains well in excess of 100TB of data. Managing that complex infrastructure is an expensive task that demands the attention of senior technicians every time storage is added or reallocated within the environment.
Virtualized storage solutions are supposed to resolve these kinds of thorny issues. By placing a layer between the storage subsystem and the systems accessing it, virtualization should allow IT managers to add, configure and manage multivendor disk arrays with ease, letting applications and systems access a unified pool of storage. Adding a disk array in this environment doesn't require making infrastructure components aware of the new hardware. Rather, the virtual pool of storage grows incrementally and immediately becomes available to all devices on the other side of the abstraction.
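The pooling idea described above can be sketched in a few lines. This is a hypothetical illustration, not any vendor's implementation: backend arrays from different vendors sit behind one pool object, and adding an array simply grows the capacity consumers see, with no change on their side.

```python
class Array:
    """One physical disk array; vendor details are hidden from consumers."""
    def __init__(self, vendor, capacity_gb):
        self.vendor = vendor
        self.free_gb = capacity_gb

class StoragePool:
    """The virtualization layer: presents a single unified pool."""
    def __init__(self):
        self._arrays = []

    def add_array(self, array):
        # New hardware joins the pool; consumers only see more capacity.
        self._arrays.append(array)

    @property
    def free_gb(self):
        return sum(a.free_gb for a in self._arrays)

    def allocate(self, size_gb):
        """Carve a virtual volume; the caller never learns which array backs it."""
        for a in self._arrays:
            if a.free_gb >= size_gb:
                a.free_gb -= size_gb
                return {"size_gb": size_gb}
        raise RuntimeError("pool exhausted")

pool = StoragePool()
pool.add_array(Array("EMC", 500))
pool.add_array(Array("Hitachi", 300))   # pool grows; no consumer changes
vol = pool.allocate(400)                # backed transparently by one array
```

Real products implement this in the switch, controller, host, or an appliance, but the contract is the same: consumers address the pool, never the hardware.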
There's just one problem: Most IT and storage managers at large enterprises aren't convinced that storage virtualization is ready for implementation. They cite a litany of obstacles, including a lack of standards, spotty interoperability, watered-down management functionality and the potential for show-stopping support conflicts.
For Pomposi, those concerns are so great that Aetna won't consider deploying the technology for at least another 18 months. "Basically, none of it is real," he says. "The solutions we see emerging tend to be very point solutions, which may or may not be appropriate for small-scale enterprises. But I've seen nothing that comes close to meeting a heterogeneous environment like Aetna."
Pomposi's experience reflects that of many IT managers who are excited about the potential of storage virtualization but see a technology that's still nascent.
Few things define the state of the market like the confusion that has arisen around the phrase storage virtualization. Traditional storage vendors and upstarts apply varying degrees of virtualization in the switch, controller, host, storage subsystem or dedicated appliance, depending on the specific solution. The way these companies define the concept can vary just as widely.
"Storage virtualization is such a broad term that it's almost impossible to use it to describe any particular product feature," says Harald Skardal, senior consulting engineer at Sunnyvale, CA-based Network Appliance and a member of the technical council of the Storage Networking Industry Association (SNIA).
Fred van den Bosch, executive vice president of product strategy and new product initiatives for Veritas Software, Mountain View, CA, echoes Skardal's view that virtualization has become an overused, confusing term. "Two years ago, I told our marketing folks they should stop using the word virtualization," he says.
However, storage vendors still toss the word around like baseballs at spring training. Dan Tanner, senior analyst of storage and storage management at Aberdeen Group, isn't surprised vendors latched onto the virtualization term.
"If you are a marketer, you will use the latest buzzword to get attention," Tanner says. "Virtualization is any abstraction; there is no legal definition of the term."
As it turns out, SNIA has crafted a formal definition of storage virtualization, although its wording leaves plenty of room for vendor interpretation:
- The act of abstracting, hiding or isolating the internal function of a storage (sub) system or service from applications, compute servers or general network resources for the purpose of enabling application and network independent management of storage or data.
- The application of virtualization to storage services or devices for the purpose of aggregating, hiding complexity or adding new capabilities to lower level storage resources. Storage can be virtualized simultaneously in multiple layers of a system, for instance to create HSM-like systems.
"For me it is a little bit frustrating," Skardal says. "Various companies have products in different corners of the storage infrastructure, and they are very eager to make sure they can reap some of the marketing benefits of doing virtualization." (For a rundown of virtualization products, see "Types of Virtualization.")
Nearly every IT manager interviewed for this piece offered a similar opinion. Ken Horner, vice president of marketing for storage appliance maker DataCore, Ft. Lauderdale, FL, argues that some storage vendors, buying time for their own belated efforts, are poisoning the well.
"I don't think that the poisoning effect is at all unintentional - it's very intentional," says Horner. "Other companies that don't have products and have an installed base of customers they want to protect - they don't want to see this market move quickly. The paralysis and the confusion we've seen is classic FUD."
Compatibility and standards
Bruce Jacobs, director of information services at direct mail and database marketing firm ChoicePoint Direct, Alpharetta, GA, isn't so quick to criticize vendor marketing efforts. But he's wary of solutions that he feels will lock his shop into a single platform or vendor. "EMC and Compaq are implementing virtual technology at the controller level. That's a logical level to do it," Jacobs says. "What I don't like about the Compaq and EMC solutions is that I'm locked into a single vendor for my storage."
Jacobs manages a 9TB storage area network (SAN) built on EMC Clariion and Dell PowerVault boxes, which share an identical architecture for worry-free interoperability. He says EMC has been "knocking my door down for two years now," but he opted instead to test a solution from StoreAge Networking Technologies. StoreAge offered him a cost-effective solution compatible with as many as 20 other vendors.
"I think virtualization is a very proprietary thing," SNIA's Skardal says. "Companies say they do virtualization and do some level of integration or hiding, but the solutions are still very proprietary. The next leg is to start using the existing standards more and more."
In fact, interoperability is one of the most pressing issues facing storage virtualization firms today, says Anders Lofgren, senior industry analyst at Giga Information Group. "There has to be a whole lot of cooperation with a lot of different vendors for it to work flawlessly."
One IT manager, who wanted to remain nameless, oversees a significant storage virtualization build-out for a U.S. government research facility, and says anything less than flawless invites disaster. "It all comes down to product maturity and interoperability. Especially when you are booting from the SAN, absolutely any hiccup or error on the Fibre Channel [FC] connection will have serious consequences. Resetting a FC switch or virtualization server to clear an error, port or other issue can have far-ranging and significant effects."
The challenge for managers is that storage virtualization solutions can present a single point of failure. The government IT manager complains that virtualization solutions lack the maturity for such a mission-critical role, making it difficult, for example, to set up redundant servers in an active-active configuration for instant failover.
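The active-active pattern the manager is after can be sketched simply. This is a hypothetical illustration of the goal, not any product's failover logic: two virtualization servers both serve I/O, so the loss of one is absorbed transparently rather than becoming a single point of failure.

```python
class VirtServer:
    """One of a redundant pair of virtualization servers."""
    def __init__(self, name):
        self.name = name
        self.healthy = True

    def handle_io(self, request):
        if not self.healthy:
            raise ConnectionError(f"{self.name} is down")
        return f"{self.name} served {request}"

def submit(servers, request):
    # Active-active: any healthy server can serve the I/O, so one
    # failure never interrupts access to the virtualized pool.
    for s in servers:
        try:
            return s.handle_io(request)
        except ConnectionError:
            continue
    raise RuntimeError("all virtualization servers down")

pair = [VirtServer("vsrv-a"), VirtServer("vsrv-b")]
submit(pair, "read block 42")   # vsrv-a serves it
pair[0].healthy = False         # simulate a failure
submit(pair, "read block 42")   # vsrv-b takes over
```

The pattern itself is well understood; the manager's complaint is that immature virtualization products make it hard to configure reliably.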
One way to achieve that interoperability is through standards. Today, solutions supporting multiple vendor storage subsystems do so only because the developer has written code specific to each product. An established standard would allow companies to source storage from multiple vendors and be assured of interoperability with their virtualization solution.
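The difference a standard makes can be shown with a small, hypothetical sketch: today a virtualization developer hand-writes an adapter per vendor, while a common model (the role CIM is meant to play) lets one piece of client code work against any compliant device. The class and method names here are invented for illustration.

```python
from abc import ABC, abstractmethod

class StandardStorageDevice(ABC):
    """The role a standard plays: one interface all vendors implement."""
    @abstractmethod
    def create_lun(self, size_gb): ...

class EmcAdapter(StandardStorageDevice):
    # Today: vendor-specific glue code the virtualization developer
    # must write and maintain for each supported product.
    def create_lun(self, size_gb):
        return f"EMC LUN ({size_gb}GB)"

class HitachiAdapter(StandardStorageDevice):
    def create_lun(self, size_gb):
        return f"Hitachi LUN ({size_gb}GB)"

def provision(device: StandardStorageDevice, size_gb):
    # Vendor-agnostic: works with any device honoring the interface,
    # which is what an established standard would guarantee.
    return device.create_lun(size_gb)

for dev in (EmcAdapter(), HitachiAdapter()):
    provision(dev, 50)
```

With a real standard, the adapter layer would ship with the device rather than the virtualization product, and mixing vendors would not require new code.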
The SNIA has attempted to rally a standard around the Common Information Model (CIM), but Skardal says those efforts have failed to gain much traction until recently.
"There's been considerable effort at the SNIA to get CIM going and it has petered out several times. The main reason for the lack of success is that CIM encourages vendors to model their solution down to the finest detail," Skardal says. "There's been some excitement about CIM, but I keep seeing so much complexity coming up. It becomes very difficult to reconcile [vendor] models into a standard model that you can write interoperable software against."
Still, Skardal and Tanner agree that the CIM standards effort is gaining speed. While that momentum should allow IT managers to adopt virtualization with greater confidence, questions still remain.
Aetna's Pomposi isn't as optimistic. "You have all these standards where the protocol is well defined, but the manufacturers of components that plug into these protocols many times will not support other vendors' equipment within that framework," he says. "What an organization like Aetna runs into is that the minute you plug disparate devices into that network - I'm talking Fibre Channel - the support structure goes out the window. In other words, if I throw a product in there, does that mean EMC will no longer support their solution?"
Another issue that concerns IT managers is storage management. Bob MacDougal of Canadian Tire is looking closely at moving to a virtualized storage solution. As senior team leader of storage management, MacDougal worries that any virtualization solution his team adopts might end up hurting its management capabilities.
"My concern is that even with virtualization, there is going to be some software tools that each vendor are going to have that are proprietary and they won't share with other vendors. They are not going to give away the proprietary stuff," MacDougal says. "When you want to really drill down at the disk level and the different software levels, I think there is still going to be some dedicated software out there."
ChoicePoint's Jacobs has similar concerns. While physical changes to hardware are rare in his infrastructure, the StoreAge software offers no ability to manage hardware directly.
Akhbar Tajudeen, director of IT at Alloy Inc., New York, NY, a demographic marketing firm, says he'd like vendors to move management software up their list of priorities. His experience with FalconStor's IPStor appliance-based virtualization has been positive, streamlining access to a 1.2TB data store. But it's clear to him that the management component of the solution needs honing.
"Hardware vendors know RAID technology and Fibre Channel and those sorts of things, but they are not dealing with the issues that an IT manager might be more concerned with," Tajudeen says.
Tajudeen raises a serious issue: critical storage management capabilities such as replication, mirroring, time marks and snapshots are currently implemented at the array level. IT managers who move to virtualization solutions in heterogeneous environments find they are giving up functionality in exchange for the convenience of unifying storage devices. The trade-off - both in capability and in integration challenges - is significant.
Cadence Design Systems, San Jose, CA, a leading maker of engineering software, deployed Veritas software to help rein in 100TB of data spread across both a SAN and a network-attached storage (NAS) device. Mike Forman, director of IT, North American Operations, says Veritas was one of the only solutions that could unify both types of storage. Today, the Veritas software runs on clustered Sun servers and acts as a gateway to both the NAS and a SAN based on an IBM ESS disk array.
Forman says the solution has worked well after an intensive deployment period, but that his team would like to see better management tools.
"I think there are some OK [SAN management] products out there, but they are not quite there yet. In our assessment, we think it is going to be another six months to a year before we have some really good SAN and storage management products out there," Forman says.
In fact, Forman's software-based approach offers access to one of the more mature management solutions on the virtualization market. Veritas' clustering and volume management software, for instance, are widely deployed solutions. But are managers ready to ditch their existing solution set and accept the prospect of introducing a performance bottleneck?
Pomposi believes that's a decision each manager has to make. "In a heterogeneous environment there are going to be some trade-offs," he says. "The big question, of course, is how many of those trade-offs are you willing to take? For myself, I'm willing to take a whole lot of trade-offs."
Taking the plunge
Like many IT managers, MacDougal says his company won't deploy a storage virtualization solution until it has seen similar outfits make the move. Nonetheless, he's busy preparing his infrastructure for the transition. Canadian Tire currently runs an EMC-based SAN with a McData switch, but MacDougal hopes that will change.
"Our intent is to have a virtualized storage concept, so if we do put a Hitachi box in there, nothing is going to change," MacDougal says. "When we first started talking about SANs two or three years ago, we always knew virtualization would come. There's no doubt in my mind this is going to go and it's just a matter of how quickly and how well it is going to get accepted."
Other companies are moving ahead. At Cadence, storage virtualization has enabled the company to unify both SAN- and NAS-based storage and keep up with demands that double yearly. Demographic marketing firm Alloy also took the plunge, deploying FalconStor's IPStor appliance-based virtualization for its 1.2TB build-out. "The solution helped cut the time to assign storage from several days to several hours," says Tajudeen.
Clearly, companies are successfully deploying storage virtualization solutions. But for most IT managers, the question remains open: Is storage virtualization a reality?
"If you had asked that question nine months ago, I would have said that it was all fluff," says Pomposi. But looking ahead, he adds, "During the next 18 months there's going to be an emergence of products and technologies that will make this technology real for more and more companies."