
New storage capacity management tools can make efficiency a reality

Poor provisioning and a lack of effective capacity management tools lead to underused storage systems. New tools and improved processes can make storage efficiency a reality.


Storage managers rarely admit they have a capacity management problem. Instead, they're more likely to talk about how big a slice of their IT budget storage eats up or the unpleasantness of unplanned purchase requests. In some cases, the conversation focuses on the high cost per gigabyte of storage.

Other managers may be preoccupied with seeking a solution to seemingly unattainable backup windows or impossible disaster recovery scenarios.

Some are looking for capacity management tools or processes that can identify and prune obsolete data, while others are buying storage in large chunks annually to get "quantity discounts."

What do all of these scenarios have in common? In each case, storage managers are trying to address a symptom without taking a holistic view of a fundamental problem: the lack of an effective storage capacity management regimen.

Don't look to the cloud for answers

Let's state up front that cloud storage is not the solution to a capacity management problem. Increasingly, the cloud is portrayed as the cure-all for whatever storage ailments afflict companies. Cloud may mask the pain with a somewhat lower cost per GB, but it does nothing to fundamentally address uncontrolled capacity expansion. Cloud has a role in storage service delivery, but solving capacity problems isn't part of it.

It would be charitable to say that some organizations' storage utilization is less than stellar. Many companies average as little as 20% to 30% utilization as measured by storage actually consumed; those whose consumed utilization tops 50% are the exception. How utilization is measured is itself one of the fundamental obstacles to improving it.

There are three basic ways to measure storage capacity:

  • Formatted (sometimes referred to as raw, though there is a technical difference)
  • Allocated (sometimes expressed as provisioned)
  • Consumed (or written)

When asked what their utilization rate is, most storage administrators will quote the allocated figure. From their perspective, if it's allocated to an application, it's as good as consumed because it's unavailable for new provisioning. That's a legitimate perspective, but it can mask an insidious incentive to overprovision because it allows that portion of storage to be ignored for a long period of time. Some administrators will tout an 85% utilization rate even though perhaps only 20% of the array has actually been consumed. Such poor utilization ultimately drives up the average cost per GB consumed by 2x or more, with management none the wiser. Moreover, most capacity purchases are triggered when allocated capacity hits 85%, regardless of how much is really being consumed. Responsible teams husband an organization's resources more diligently.
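
To make the distinction concrete, here is a minimal sketch in Python, using purely hypothetical numbers, of how the same array yields three very different utilization figures depending on which measure is quoted:

    # Hypothetical capacity figures for a single array (not from any real system)
    formatted_tb = 100.0   # usable capacity after formatting
    allocated_tb = 85.0    # capacity provisioned to applications
    consumed_tb = 20.0     # capacity actually written

    allocated_utilization = allocated_tb / formatted_tb   # the figure most admins quote
    consumed_utilization = consumed_tb / formatted_tb     # what is actually in use
    consumed_of_allocated = consumed_tb / allocated_tb    # how full the provisioned space is

    print(f"Allocated utilization:  {allocated_utilization:.0%}")   # 85%
    print(f"Consumed utilization:   {consumed_utilization:.0%}")    # 20%
    print(f"Consumed vs. allocated: {consumed_of_allocated:.0%}")   # 24%

    # The effective cost per consumed GB is the purchase cost divided by the
    # consumed utilization, so low consumption inflates the real cost per GB.
    cost_per_formatted_gb = 1.00   # normalized
    print(f"Cost per consumed GB: {cost_per_formatted_gb / consumed_utilization:.2f}x nominal")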

Why is data getting so big?

The biggest driver of storage growth is "secondary" data, that is, copies of original (primary) data. Secondary data includes snapshots, mirrors, replication and even data warehouses. The secondary data multiplier can be as high as 15:1. The obvious solution would seem to be reducing the number of data copies, which may indeed be the answer. However, those secondary copies were likely created for a reason, such as data protection or reducing contention for specific sets of data. The unintended consequence of optimizing storage capacity may therefore be reduced data recovery capability or worse performance. Storage managers must be aware that there's an inverse relationship among data recovery, performance and capacity management; improve one and you're likely to impede the others. Consequently, it's important to start with service-level requirements for recovery and performance. Capacity management can be optimized only to the point that other service levels aren't jeopardized.
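
As a rough, purely illustrative calculation (the copy counts below are assumptions, not measurements from any real environment), the secondary multiplier adds up quickly:

    # Hypothetical primary data set and the secondary copies made of it
    primary_tb = 10.0
    secondary_copies_tb = {
        "snapshots (space actually consumed)": 0.3 * primary_tb,
        "local mirror": 1.0 * primary_tb,
        "remote replica": 1.0 * primary_tb,
        "backup copies": 3.0 * primary_tb,
        "test/dev clones": 2.0 * primary_tb,
        "data warehouse extracts": 1.5 * primary_tb,
    }

    secondary_tb = sum(secondary_copies_tb.values())
    multiplier = (primary_tb + secondary_tb) / primary_tb
    print(f"Secondary data: {secondary_tb:.1f} TB")                 # 88.0 TB
    print(f"Total footprint: {multiplier:.1f}x the primary data")   # 9.8x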

Tools to take control of capacity management

Thin provisioning

  • Eliminates overallocation and increases utilized capacity from 30% to 60%
  • Cuts the cost per gigabyte (GB) stored by 50%

Compression

  • A 2:1 compression allows twice as much data in the same array, for another 50% reduction in cost per GB stored

Deduplication

  • A 2:1 deduplication rate further halves the cost per GB of storage and the deduplication rate could be higher for some data types

Storage resource management applications

  • Manages storage as an enterprise, not as individual arrays
  • Measures storage metrics to drive best practices
  • Spots trends that could become serious problems without proper attention

Capacity management toolkits

Fortunately, storage managers have numerous tools to assist them in tackling capacity management. These fall into two general categories: utilities and reporting tools. Array vendors offer a number of useful utilities that now ship with most systems.

Perhaps the most common of these is thin provisioning capability, which is supported by nearly every vendor. Thin provisioning allows administrators to logically allocate storage, but automatically keeps the physical allocation only slightly above the actual capacity used. Storage is automatically allocated from a common pool as a volume demands more space. Because the array itself may be logically overallocated, it's possible to have an out-of-space train wreck if administrators don't ensure that enough physical capacity is available as data grows. This is uncommon, however, as automated alerts should keep administrators on top of the situation. Thin provisioning alone can largely alleviate the problem of high allocation/low utilization. In most cases it's complemented by a space reclamation feature that returns unused space to the common pool. While array vendors may offer this feature, reclamation can also be performed by Symantec Corp.'s Veritas Foundation Suite for those who use that product.
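
The bookkeeping behind thin provisioning can be illustrated with a short sketch; the class, method names and threshold below are hypothetical, not any vendor's implementation:

    class ThinPool:
        """Toy model of a thin-provisioned storage pool (illustrative only)."""

        def __init__(self, physical_gb, alert_threshold=0.85):
            self.physical_gb = physical_gb        # real capacity backing the pool
            self.logical_allocated_gb = 0.0       # sum of logical volume sizes
            self.consumed_gb = 0.0                # space actually written
            self.alert_threshold = alert_threshold

        def create_volume(self, logical_gb):
            # Logical allocation consumes no physical space up front.
            self.logical_allocated_gb += logical_gb

        def write(self, gb):
            # Physical space is drawn from the shared pool only as data is written.
            if self.consumed_gb + gb > self.physical_gb:
                raise RuntimeError("Pool exhausted: the out-of-space 'train wreck'")
            self.consumed_gb += gb
            used = self.consumed_gb / self.physical_gb
            if used >= self.alert_threshold:
                print(f"ALERT: pool {used:.0%} consumed; add physical capacity")

    pool = ThinPool(physical_gb=10_000)
    pool.create_volume(8_000)   # volumes can be logically overallocated...
    pool.create_volume(8_000)   # ...well beyond the 10 TB of physical capacity
    pool.write(8_600)           # alerts fire as real consumption nears the limit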

Another useful and near-universal utility is compression. Most vendors are willing to guarantee a 2:1 compression on primary storage, or a 50% space savings. Compression is normally applied at the LUN or volume level, depending upon the vendor's specific implementation. Compression does incur some performance penalty, though it can be as little as 5%. Of course, your mileage may vary, so a proof of concept is worth the effort. From a management standpoint, the benefit of compression is cutting the cost per GB stored by 50%.
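
A proof of concept need not be elaborate. The following sketch measures the compression ratio achievable on sample files using Python's standard zlib library (the sample directory path is a placeholder); ratios near or above 2:1 on representative data suggest the vendor guarantee is realistic for that workload:

    import zlib
    from pathlib import Path

    def compression_ratio(path, level=6, chunk_size=1 << 20):
        """Return original_bytes / compressed_bytes for one file."""
        original = compressed = 0
        compressor = zlib.compressobj(level)
        with open(path, "rb") as f:
            while chunk := f.read(chunk_size):
                original += len(chunk)
                compressed += len(compressor.compress(chunk))
        compressed += len(compressor.flush())
        return original / compressed if compressed else 1.0

    # Hypothetical sample directory; substitute a representative data sample.
    for sample in Path("/data/samples").iterdir():
        if sample.is_file():
            print(f"{sample.name}: {compression_ratio(sample):.2f}:1")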

Compression is complemented by data deduplication, though deduplication of primary storage isn't yet supported by every vendor; EMC Corp. and NetApp Inc. are examples of vendors that do support it. Here again, deduplication differs in its implementation on primary storage versus backup appliances. On primary storage, data deduplication is an idle-time process and isn't nearly as aggressive in eliminating duplicate blocks as deduping backup appliances. Because it runs in the background, the deduplication itself doesn't impact operations. Reading deduplicated data back, known as "rehydration," may have a minimal or significant effect on performance, so a proof of concept is advised; rehydration is less a decompression than a reassembly of parts. Unlike compression, where vendors make efficiency guarantees, there are no such guarantees with deduplication because results are highly dependent on data type. Media files generally dedupe poorly, whereas text files may dedupe quite well.
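
Deduplication ratios can likewise be estimated ahead of time. This rough sketch hashes fixed-size blocks and counts unique ones; real arrays use more sophisticated, often variable-length, chunking, so treat the result only as an indicator:

    import hashlib

    def estimated_dedupe_ratio(path, block_size=4096):
        """Estimate a dedupe ratio by hashing fixed-size blocks of one file."""
        total_blocks = 0
        unique_hashes = set()
        with open(path, "rb") as f:
            while block := f.read(block_size):
                total_blocks += 1
                unique_hashes.add(hashlib.sha256(block).digest())
        return total_blocks / len(unique_hashes) if unique_hashes else 1.0

    # Text-heavy data and VM images tend to score well above 1:1, while
    # already-compressed media files typically hover near 1:1.
    # print(estimated_dedupe_ratio("/data/samples/example.vmdk"))  # hypothetical path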

Capacity management reporting tools

The other category of tools is reporting tools, or more accurately, storage resource management (SRM) products. Both array vendors and independent vendors offer SRM products, examples of which include EMC ControlCenter, Hewlett-Packard (HP) Co.'s HP Storage Essentials, NetApp OnCommand Insight (formerly SANscreen) and Symantec's Veritas CommandCentral Storage. All of them offer the ability to comprehensively manage and monitor an enterprise storage environment. Yet few organizations leverage them, largely because SRM has gained a reputation as being unwieldy and resource-intensive. These limitations can be overcome by focusing on only those aspects of an SRM application that are truly beneficial, otherwise known as the 80/20 rule. In the context of storage capacity management, you should focus on the following:

  • Thresholds. Individual arrays provide threshold alerts, but SRM applications can consolidate them and give an enterprise-wide picture to administrators. This allows far more comprehensive planning and provisioning to prevent one array from being oversubscribed while another is undersubscribed, for example.
  • Utilization. Again, SRM consolidates information that otherwise must be manually aggregated (and who has the time to do that?). Utilization figures to monitor include:
    • Consumed as a percent of raw. Know how much the array is truly utilized. Target 55% or higher as a best practice, though this will vary with the age of the array and growth rates.
    • Consumed as a percent of allocated. Know whether or not the array is overallocated. Target greater than 70% (85% if thin provisioning is used) as a best practice. Allocations lower than 70% may be acceptable for newly provisioned LUNs or those with high, unpredictable growth.
    • Secondary data. Know how much data is consumed by snapshots, mirrors and the like. Target no more than 3x the primary storage. More than 3x may be justifiable for various reasons, but this ensures that space isn't consumed unnecessarily. This feeds into data/information lifecycle management.
  • Trends. Thresholds and utilization are point-in-time measurements; identifying trends is the key to optimizing capacity.
    • Growth rates. Knowing growth rates fosters accurate forecasting, thereby avoiding unnecessary "safety factor" purchases. Storage prices decline approximately 10% per quarter on a per-GB basis, so delaying an organization's purchases can yield substantial savings over time.
    • "Days storage in inventory." Using growth rates, calculate how many days of storage growth capacity is on the floor. Target 90 to 180 days. Less than 90 days doesn't give purchasing enough time to do their job most effectively. More than 180 days and you could have purchased the storage later at a cheaper price.

Organizations can dramatically cut the cost per gigabyte stored by using array utilities that, in many cases, are already paid for. Implementing thin provisioning, compression and deduplication (where applicable) can reduce this cost by 50% to 75%, which isn't bad by any measure. However, the best organizations will also implement an SRM product to take their storage management to the next level. With it, storage managers can balance and optimize performance, data protection and capacity utilization simultaneously.
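
As a back-of-the-envelope check on that range, the savings multiply rather than add. The factors below simply assume each technique delivers the figures cited earlier, which won't always hold; deduplication in particular may not apply or may fall short of 2:1:

    baseline_cost_per_gb = 1.00   # normalized cost per consumed GB before optimization

    savings_factors = {
        "thin provisioning (utilization 30% -> 60%)": 0.5,
        "compression (2:1)": 0.5,
        "deduplication (2:1, where applicable)": 0.5,
    }

    cost = baseline_cost_per_gb
    for step, factor in savings_factors.items():
        cost *= factor
        print(f"After {step}: {cost:.3f} per GB ({1 - cost:.0%} total reduction)")

    # Thin provisioning plus compression alone reach the 75% end of the range;
    # deduplication, where it applies, pushes the reduction further still.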

About the author
Phil Goodwin is a storage consultant and freelance writer.

This was first published in April 2013
