Virtualization breathes new life into old arrays, but at a cost


One of the bonuses of buying a storage virtualization product is that it lets storage managers extend the life of older arrays. Cranky, old systems suddenly get brand-new features such as point-in-time copies, dynamic LUN expansion and additional cache. But there are some traps users should be aware of when weighing the benefits of virtualizing older systems vs. purchasing newer ones. For example, hardware maintenance and environmental costs can negate the savings from virtualization.

Hitachi Data Systems (HDS) found this out recently with a customer in Korea, and HDS storage consultant and solution architect David Merrill blogged about the findings. (HDS has since removed his posting and has declined to comment on the topic any further. A cached copy of the story is still available on the SearchStorage.com blog.) Merrill noted that for this particular customer's environment, which contained fewer than 18TB of storage data, the total cost of virtualizing older storage using a Hewlett-Packard (HP) XP Series virtualization system (the HP XP is a rebranded TagmaStore system) was more than the cost of buying new storage.

"Users should perform a careful total cost of ownership [TCO] analysis with this technology, as the price tag of these products can often be greater than the savings they afford," says Greg Schulz, founder and senior analyst at the StorageIO Group, Stillwater, MN.

Several conditions impacted the storage TCO for this Korean company, including maintenance costs for the old hardware and software; electricity and cooling costs were also far higher for the older systems. In working with this user and others, Merrill found there's a crossover point at which virtualization delivers a better TCO: when capacity growth exceeds 20% to 25% per year.
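
To see why the growth rate matters, here's a minimal back-of-the-envelope sketch of the comparison. Every figure in it (appliance price, per-TB license, maintenance, power and capacity costs) is an invented placeholder, not a number from Merrill's analysis or any vendor's price list. With these particular placeholders, an 18TB shop never breaks even over three years, while a 100TB shop crosses over somewhere between 20% and 25% annual growth, which loosely echoes the pattern described above; with your own figures, the crossover will land elsewhere.

```python
# Back-of-the-envelope TCO comparison: keep old arrays behind a virtualization
# layer vs. replace them with new storage. Every figure is an invented
# placeholder -- substitute your own maintenance, power, license and purchase costs.

YEARS = 3

def tco_virtualize(base_tb, growth, appliance=150_000, license_per_tb=500,
                   old_upkeep_per_tb=1_150,       # maintenance + power on the old arrays
                   cheap_capacity_per_tb=1_500):  # growth lands on commodity capacity
    """Old arrays stay in service behind the virtualization layer."""
    total, tb = appliance, base_tb
    for _ in range(YEARS):
        total += base_tb * old_upkeep_per_tb + tb * license_per_tb
        added = tb * growth
        total += added * cheap_capacity_per_tb
        tb += added
    return total

def tco_buy_new(base_tb, growth, new_price_per_tb=3_500, new_upkeep_per_tb=400):
    """Old arrays are replaced outright; growth is bought on the same premium platform."""
    total, tb = base_tb * new_price_per_tb, base_tb
    for _ in range(YEARS):
        total += tb * new_upkeep_per_tb
        added = tb * growth
        total += added * new_price_per_tb
        tb += added
    return total

for base in (18, 100):
    for growth in (0.10, 0.20, 0.25, 0.50):
        v, n = tco_virtualize(base, growth), tco_buy_new(base, growth)
        print(f"{base:>3}TB base, {growth:.0%} growth: "
              f"virtualize ${v:,.0f} vs. buy new ${n:,.0f}")
```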

Not everyone is struggling to see the cost benefits of storage virtualization. For QBE Regional Insurance in Sun Prairie, WI, virtualizing old storage arrays reduced the company's total storage costs significantly and brought capacity utilization up by more than 10%, according to Loren Eslinger, senior systems engineer at QBE. The company virtualized two old HDS Thunder 9570V storage systems with IBM's SAN Volume Controller (SVC), a fabric-based virtualization product, and added an array from IBM.

Eslinger agrees that there's a point at which virtualization becomes cost-effective. QBE Regional Insurance manages 50TB of storage data and found that virtualization costs decreased as the quantity of virtualized terabytes increased. "Licenses are based on terabytes of usable storage, meaning that smaller environments may not see a significant savings," says Eslinger.
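
As a rough illustration of that scaling effect (the figures below are hypothetical and do not reflect IBM's actual SVC pricing), if the virtualization layer carried a fixed hardware cost on top of a per-usable-TB license, the effective cost per terabyte would fall sharply as the environment grows:

```python
# Hypothetical per-TB cost of a capacity-licensed virtualization layer.
# Both figures are assumptions for illustration, not actual IBM SVC pricing.
FIXED_HW = 60_000        # assumed fixed cost for the virtualization nodes
LICENSE_PER_TB = 400     # assumed per-usable-TB license fee

for usable_tb in (10, 50, 200):
    per_tb = (FIXED_HW + LICENSE_PER_TB * usable_tb) / usable_tb
    print(f"{usable_tb:>4}TB usable -> about ${per_tb:,.0f} per TB")
```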

She also warns users to read the vendor's supported hardware list carefully. Every vendor has its own list of supported older storage that will work behind its virtualization product. "Make sure your storage array is on that list," she stresses. QBE Regional Insurance tried using IBM's request for price quotation process to get some older switches supported on the SVC, but "it didn't pan out," she says. In other words, if your switches and host bus adapters are as old as the storage arrays, they may need to be upgraded as well, which adds to the cost.

Nevertheless, QBE Regional Insurance ended up saving money because it was able to hold off on plans to hire another storage administrator. IBM's SVC made QBE Regional Insurance's upgrades to newer storage easier because the migration process is now hidden from servers and users. The firm moved older storage to lower tiers for archiving, backups, testing and development, extending the life of those systems. Changing drive layouts and RAID configurations also became easier, as data can be moved around without affecting applications. SVC also enables common functionality across all storage arrays, allowing all storage clients to be consistently configured regardless of the age of the actual storage behind the scenes.

But plenty of users are happy to upgrade to new equipment and thus save on the maintenance costs. David Ping, data center storage team lead for information systems and technology services at San Francisco-based Pacific Gas and Electric (PG&E) Co., is managing approximately 2.5 petabytes (PB) of storage data, most of it residing on IBM DS8300s, with about 25% on HDS Universal Storage Platform and Adaptable Modular Storage Model AMS1000 arrays.

"At this time, PG&E has not implemented any virtualization across its storage arrays because it costs less to maintain newer equipment," maintains Ping.

--Jo Maitland

This was first published in September 2007
