

Tutorial: Realizing benefits of storage consolidation sometimes easier said than done

SearchStorage.com

The IT infrastructure at Gwinnett Health System had begun to exhibit signs of excessive sprawl, and Rick Allen, service line director of IS operations for the healthcare network serving Georgia's Gwinnett County, needed to rein it in.

When Allen started at Gwinnett in 2003, "we had about 20 servers in our data center and about a terabyte of spinning disk -- mostly direct attached storage -- with another separate library of tape backup totaling 2-3 TB," he says. But over the past five years, operations grew to about 500 servers with 200 TB of disk storage and 700 TB of tape backup -- growth that is considered normal for a medical facility, Allen laments, "thanks to electronic medical records and things like radiology files."

Normal or not, something needed to be done about all that growing storage. Allen decided to consolidate storage by implementing a SAN from Hewlett-Packard.

Storage consolidation -- with its promise of more efficient management and a potentially reduced machine count -- is attracting widespread attention. When done right, consolidation not only reduces the number of systems and increases storage utilization -- sometimes to 80% or more -- but also takes some time-consuming work off the shoulders of storage administrators. In a consolidated storage infrastructure, management software can let administrators organize and provision storage across multiple storage systems through a single control panel. But sometimes, achieving all the benefits of storage consolidation is easier said than done.

The demands of healthcare, particularly the need to have bullet-proof systems, complicated Allen's task. In addition, his users wanted to keep their silos intact. Different clinical departments had grown accustomed to creating and storing their own information and were wary of efforts to share systems. "We had to push hard to say from now on the SAN is the storage," he says.

The compromise Allen reached with users was to reserve relatively generous levels of storage for specific functions -- say, 400 GB for a pharmacy system when 100 GB might have been sufficient. Still, he adds, overall utilization is now at a respectable 60%.

Faced with a similar challenge, Marty Buening, IT director at Northwest Radiology Network in Indianapolis, tackled his sprawling infrastructure by implementing wholesale server virtualization. In addition to seeking better storage utilization, Buening says his operation wanted to improve reliability and availability. "We determined that the best way to accomplish that was to virtualize systems," he says. "But the first step was infrastructure."

After examining various SAN and NAS options, Buening selected a Fibre Channel-based IBM SAN built with IBM System Storage DS3400 arrays with an initial capacity of 3 TB. Since then, Buening has been able to reduce the number of physical servers to just three, each running multiple virtual servers. And although he doesn't have before-and-after storage utilization numbers, he is certain that the numbers have improved.

Best of all, the deployment of virtualization and the ensuing consolidation have gone smoothly, with the exception of a short-lived connectivity problem involving compatibility with the dozen or so NICs in use on the network. Management, and provisioning in particular, has been trouble-free.

According to analyst David G. Hill, principal at Mesabi Group, storage consolidation is above all a planning issue that requires the storage administrator to determine three things:

  1. What data needs to be migrated, where the data is going and how it is going to get there.
  2. What you are going to do with the empty array(s) from which data was migrated.
  3. How you are going to manage the more tightly packed array and prepare for growth.

Hill points out that a storage consolidation effort is an opportunity to minimize the amount of data that actually needs to be moved, for example, by putting active production data on tier 1 storage and active archived data on less expensive tier 2 storage. The amount of data that must be stored can be further lessened through data reduction techniques, such as single instancing or block-level data deduplication.
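To make the block-level idea concrete, here is a minimal Python sketch of deduplication, assuming fixed-size blocks and an in-memory block store (real products typically use variable block sizes and persistent indexes); the function name and 4 KB block size are illustrative, not any vendor's implementation:

    import hashlib

    BLOCK_SIZE = 4096  # assumed fixed block size; real products often vary it

    def dedup_store(path, store):
        """Split a file into blocks, keeping only one copy of each unique block.

        'store' maps SHA-256 digest -> block bytes; the returned recipe is the
        ordered list of digests needed to reconstruct the file later.
        """
        recipe = []
        with open(path, "rb") as f:
            while True:
                block = f.read(BLOCK_SIZE)
                if not block:
                    break
                digest = hashlib.sha256(block).hexdigest()
                store.setdefault(digest, block)  # duplicate blocks stored once
                recipe.append(digest)
        return recipe

Single instancing works the same way at whole-file granularity: one stored copy per unique file, with pointers replacing the duplicates.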

According to Hill, another consideration for storage administrators consolidating their storage infrastructure is how heavily to load the storage arrays. Although it might be desirable to bring an array up to 80% of capacity, doing so might threaten your ability to handle large temporary jobs (such as end-of-month activities, data warehouse builds or special projects). Then there's the simple fact of growth. You may want to consider acquiring a capacity monitoring tool to make sure you don't run out of storage unexpectedly.
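As a rough illustration of what such a tool does, here is a minimal Python sketch that checks a volume against an alert threshold; the 80% figure and the single mount point are simplifying assumptions:

    import shutil

    ALERT_THRESHOLD = 0.80  # assumed warning level; tune to your environment

    def check_capacity(mount_point):
        """Warn when a volume's used fraction crosses the alert threshold."""
        usage = shutil.disk_usage(mount_point)
        used_fraction = usage.used / usage.total
        if used_fraction >= ALERT_THRESHOLD:
            print(f"WARNING: {mount_point} is {used_fraction:.0%} full "
                  f"({usage.free // 2**30} GiB free)")
        return used_fraction

    check_capacity("/")  # example: check the root volume

A production tool would also trend these readings over time to forecast when capacity runs out, rather than just flagging the current level.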

Even once these matters are settled, there's still the thorny problem of migration, which usually must be scheduled to fit in some available window, must not disrupt normal activities and, above all, must not shut down the network. "Migrations can take an unbelievably long time," says Hill, "and the longer that it takes, the greater the chance that something will go wrong." However, virtualization and migration-specific tools can help solve these problems.

"If a LAN is used internally within a building or a WAN is used when migrating to a remote site, then non-storage traffic is on these networks and that could lead to a bottleneck," Hill says. The solution to this is that the product that is used for migration must have bandwidth-throttling capability so that the speed of the migration can be slowed during peak-load traffic for other applications, and then sped up automatically when other network traffic is low, he notes. "This is where the planning has to be done carefully to make sure sufficient bandwidth will be available," he adds.

Finally, Hill notes, you need to consider what to do with any arrays made surplus by consolidation. "To summarize, you the administrator have to do a lot of planning, budgeting and product selection up-front before the consolidation process can commence."

Gartner analyst Bob Passmore views storage consolidation as an opportunity to deliver more service to customers. Arrays that support thin provisioning are a step in the right direction, he says, but warns, "Unless you are going to go out and buy new storage every week, you have to over-provision to allow for growth." And that means, even with automated processes, that it will be hard to average better than 60%-80% utilization, he says. Still, for most organizations, that's a significant improvement.
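A back-of-the-envelope model suggests why those numbers are hard to beat. Assuming, hypothetically, 40% annual data growth, a once-a-year purchase cycle and a 25% safety margin beyond the forecast, average utilization lands right in Passmore's range:

    def average_utilization(annual_growth=0.40, headroom=0.25):
        """Average used/provisioned ratio over a one-year purchase cycle,
        assuming linear growth and extra headroom beyond the forecast."""
        start = 1.0                          # data at start of year (normalized)
        end = start * (1 + annual_growth)    # data at end of year
        provisioned = end * (1 + headroom)   # buy a year's need plus margin
        return ((start + end) / 2) / provisioned  # linear growth -> simple mean

    print(f"{average_utilization():.0%}")  # ~69% with these assumed figures

Buying more often raises the average, but at the cost of constant procurement -- the trade-off Passmore describes.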

Passmore offers three rules of thumb for an administrator overseeing a storage consolidation effort.

  1. Start with a clear understanding of what you are trying to accomplish, then strive for efficiency and look for opportunities to employ tiered storage.
  2. Put sufficient effort into selecting the vendor, including a look at the partner ecosystem.
  3. Work the details. Good plans are essential, as is strong project management; "then it is up to your people and the professionals you bring in to make it work," he says.

Rick Allen at Gwinnett Health System echoes that sentiment. "The biggest thing to remember is to do your homework up front," he says. "You don't have to buy the most expensive products -- just the right products." And, he adds, "be sure to invest adequately in vendor support plans. Problems with consolidated storage usually have a large impact on an organization."

27 Aug 2008