Managing and protecting all enterprise data


Get control of capacity

Although storage resource management tools can be complicated to implement, they're a better alternative to breaking the bank and rushing out to purchase more storage. Get precise with your vendors on what you need and you'll wind up with better results.

Despite years of constrained budgets and limited buying, most organizations believe they can mine more value out of their existing storage assets. Disparate storage area network (SAN) islands and multiple servers--combined with limited reporting capabilities--reinforce this belief. The IT management mandate is clear: Before adding more capacity, get more mileage out of what's already on the floor.

A storage resource management (SRM) tool can identify the capacity you have, control how it's being used and forecast how much more storage you'll require. Any SRM tool will collect data on things such as file aging and disk and network performance, as well as customer requirements such as application performance expectations. Correlating that information helps determine which pain points to address first. Most organizations will find that their storage infrastructures aren't optimized and that their resources aren't shared.
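To make the forecasting piece concrete, here's a minimal sketch (in Python, with invented numbers) of the kind of trend analysis an SRM tool performs on collected utilization samples: fit a line to usage readings and estimate when an array fills.

```python
# Hypothetical example: fit a linear trend to equally spaced capacity
# samples and estimate how many periods remain until the array is full.
def forecast_full(samples_gb, capacity_gb):
    """samples_gb: used-capacity readings at equal intervals (e.g. weekly)."""
    n = len(samples_gb)
    xs = range(n)
    mean_x = sum(xs) / n
    mean_y = sum(samples_gb) / n
    # Least-squares slope of usage vs. time
    slope = sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, samples_gb)) \
            / sum((x - mean_x) ** 2 for x in xs)
    if slope <= 0:
        return None  # usage flat or shrinking; no exhaustion forecast
    intercept = mean_y - slope * mean_x
    # Solve intercept + slope * t = capacity_gb for t (periods from sample 0)
    return (capacity_gb - intercept) / slope

# Usage: 400GB growing ~25GB per week toward a 1TB array
periods = forecast_full([400, 425, 450, 475, 500], 1000)  # -> 24.0 periods
```

Real SRM products layer far more on top (per-server, per-file-type trends), but the underlying correlation of samples into a forecast is this simple in principle.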

Keep in mind, however, that all enterprise SRM tools are still maturing. Most work best at the departmental level or within a single geographic location, although an increasing number can now roll data up to provide enterprise reporting. But there's a big difference between enterprise reporting and enterprise management, and the ability to do both from the same central console isn't here yet, at least for most products.

Five Steps Toward Capacity Management
Bringing capacity management to your company is an evolutionary process. "Solve daily problems first," says Ron Riffe, a storage strategy manager at IBM. He believes that most capacity management problems fall into five categories.

1 Hardware Coming and Going. If you are constantly dealing with new arrays coming and going from your environment, solve the problem with network-based block- and file-level virtualization solutions, which move data between storage arrays with minimal or no server intervention.
2 Overworked Administrative Staff. If your staff is struggling to keep up with the daily tasks of LUN masking, zoning, volume discovery and extending file systems, deploy a tool that lets you manage any vendor's hardware from a central console.
3 Storage Growth. Use hierarchical storage management (HSM) software that classifies and moves data to the correct tier of storage.
4 Regulated Environment. Deploy archive management software. While similar to HSM software, it also provides hooks into e-mail and database applications, as well as the ability to store and fetch data from tape.
5 Recovery Management. Look for backup and recovery software with complementary features such as snapshots and caching to disk.
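As an illustration of the HSM approach in step 3, the classify-and-move policy can be sketched in a few lines of Python. This is a toy sweep under assumed tier paths, not any vendor's product:

```python
import os
import shutil
import time

# Illustrative HSM-style sweep: files untouched for more than `age_days`
# are demoted from the primary tier to a cheaper secondary tier.
def migrate_stale(primary, secondary, age_days=90, now=None):
    now = now or time.time()
    cutoff = now - age_days * 86400
    moved = []
    for root, _dirs, files in os.walk(primary):
        for name in files:
            path = os.path.join(root, name)
            if os.stat(path).st_atime < cutoff:  # classify by last access
                dest = os.path.join(secondary, os.path.relpath(path, primary))
                os.makedirs(os.path.dirname(dest), exist_ok=True)
                shutil.move(path, dest)          # move to the lower tier
                moved.append(dest)
    return moved
```

Production HSM software also leaves stubs or links behind so applications can transparently recall demoted files; that plumbing is omitted here.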

Organizations must first decide what capacity they want to measure. That decision will largely determine what tool or set of tools to choose. A tool should feature:

  1. Disk utilization reporting
  2. File-level reporting
  3. Database reporting
  4. Visualization of the storage network infrastructure
  5. Storage array reporting

An increasing number of products now report on tape libraries, but that feature isn't critical for an organization just starting to deploy an SRM product. So how do SRM tools differ? Basically, in how they gather the data and what they're architected to accomplish. Each product's architecture directly determines the size of the agents deployed, how many get deployed, their level of activity and whether any agents need to be deployed at all.

Tools like Fujitsu Softek's Storage Manager and Tek-Tools' Storage Profiler for OS provide one small, easy-to-deploy agent that meets the first two criteria. Their agents can be installed and configured in just a few minutes, consume minimal disk space (30MB maximum) and remain dormant except during scheduled collection times. They measure disk utilization filtered by criteria such as servers, users and groups. For file-level reporting, the agents provide more granularity, letting users identify specific file types such as .mpg, .jpg and .xls, determine the age of those files and view historical trending and forecasting information that shows how fast each file type is growing and predicts future growth.
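The kind of file-level report described above can be approximated with a short script. A hedged sketch (Python, standard library only; the 180-day threshold and hot/stale buckets are arbitrary choices for the example):

```python
import os
import time
from collections import defaultdict

# Roll up bytes per file type, split into "hot" (modified within
# age_days) and "stale" buckets -- the raw material for aging reports.
def file_type_report(root, age_days=180, now=None):
    now = now or time.time()
    cutoff = now - age_days * 86400
    report = defaultdict(lambda: {"hot": 0, "stale": 0})
    for dirpath, _dirs, files in os.walk(root):
        for name in files:
            path = os.path.join(dirpath, name)
            ext = os.path.splitext(name)[1].lower() or "(none)"
            st = os.stat(path)
            bucket = "hot" if st.st_mtime >= cutoff else "stale"
            report[ext][bucket] += st.st_size
    return dict(report)
```

Running this periodically and storing the results per server is essentially how an SRM agent builds the historical trend data described above.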

Similar tools, from EMC, for example, aren't as simple to install and configure because of the way they're architected. EMC currently offers two tools: VisualSRM, an easier-to-install point solution that gathers less information; and its flagship product, Control Center (CC), which requires the deployment of a 100+MB agent and licensing for four products: Control Center, StorageScope, StorageScope FLR and Automated Resource Manager (ARM). The CC agent can consume anywhere from 1% to 15% of a host's CPU while it performs storage capacity monitoring and reporting functions.

Some companies, such as AppIQ, collect device-level and host-level capacity information without an agent. Its StorageAuthority Suite software--written to the Storage Networking Industry Association's Storage Management Initiative Specification (SMI-S) and the Distributed Management Task Force's Common Information Model (CIM) standards--gathers this information simply by querying SMI-S-compliant operating systems. The query may take place over either a Fibre Channel (FC) fabric or an Ethernet network. The operating system vendor must provide or ship a CIM Object Manager (CIMOM)--such as Microsoft's Windows Management Instrumentation (WMI)--and devices must offer SMI-S interfaces.
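A rough sketch of how such agentless collection fits together. The CIM property names (ConsumableBlocks, BlockSize) come from the standard CIM_StorageVolume schema; the connection object is stubbed out, since talking to a live CIMOM (for example, via the third-party pywbem package) is environment-specific:

```python
# Hedged sketch of agentless, SMI-S-style collection. The roll-up logic
# is isolated from the wire protocol so it works with any object that
# exposes an EnumerateInstances method.
def total_allocated(volumes):
    """Sum capacity from CIM_StorageVolume-like records using the CIM
    schema's sizing properties: ConsumableBlocks * BlockSize (bytes)."""
    return sum(v["ConsumableBlocks"] * v["BlockSize"] for v in volumes)

def collect(conn):
    # With a real WBEM client this call would go over FC or Ethernet to
    # the CIMOM, e.g. (pywbem): conn.EnumerateInstances("CIM_StorageVolume")
    return total_allocated(conn.EnumerateInstances("CIM_StorageVolume"))
```

The point of the standard is exactly this: the collection code doesn't care which vendor's array answers the query, as long as it speaks SMI-S.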

SMI-S compliance should no longer be considered optional when making new storage purchases. Users should remain skeptical of any vendor's claim that its capacity management tool can manage a storage infrastructure through the SMI-S standard alone, but they can certainly expect these products to deliver fundamental reporting capabilities. Any capacity management product without an SMI-S road map shouldn't be viewed as a viable long-term solution.

Gaining visibility into databases beyond their raw size requires deploying database-specific agents; an Oracle database, for example, requires an Oracle agent. Other products, such as Storability's Global Storage Manager (GSM), require two agents--a core database agent and one specific to the database being monitored.

What to look for in a capacity management tool


Complex agent installs. While larger, complex agents do more, they generally take longer to deploy, configure, understand and manage. So when thinking agents, think small (10MB or less).
Functional dependencies. Product functionality should stand on its own. For instance, gaining visibility into an array or the storage infrastructure shouldn't first require the deployment of that vendor's storage arrays or switches. Similarly, file-level reporting shouldn't require the deployment of a specific vendor's volume manager or file system. Only accept dependencies if you know you will need more advanced reporting and management features.
Point solutions. If the product does only file-level and database reporting, but doesn't integrate with other virtualization or backup management solutions, avoid it. While product functionality should stand on its own, don't discount the importance of integration with complementary software from the same vendor.
Enterprise reporting. Look for a tool that offers the ability to deploy servers at different geographic sites. The tool should be able to roll all the information up to provide a global view of the enterprise.
Storage infrastructure visibility. Make sure you can measure and report on the storage infrastructure itself. The tool should include the ability to report on the availability, capacity and performance of Fibre Channel switches and storage arrays. Reporting on tape drives and libraries is nice to have, but is outside of the scope of most current product offerings.
SMI-S compliance. Simply put, if a tool isn't already SMI-S compliant or doesn't have a road map for compliance, don't waste your time looking at it. You will only lock yourself into a more costly, proprietary product with functionality that you can obtain through a lower cost product.
Grouping. Make sure you can report and manage by almost any grouping criteria imaginable. Administrators should be able to group storage by application, department, file size and age, geographic location, storage type used and user for reporting, management and chargeback purposes.
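To illustrate the grouping and chargeback idea, here is a minimal roll-up sketch; the record fields ("dept", "gb") and the per-GB rate are invented for the example:

```python
from collections import defaultdict

# Group allocation records by any key (department, application,
# location...) and price the result per GB for chargeback.
def chargeback(records, key, rate_per_gb):
    totals = defaultdict(float)
    for rec in records:
        totals[rec[key]] += rec["gb"]
    return {group: round(gb * rate_per_gb, 2) for group, gb in totals.items()}

# Usage: bill departments at $5/GB
bills = chargeback(
    [{"dept": "eng", "app": "oracle", "gb": 120.0},
     {"dept": "eng", "app": "exchange", "gb": 30.0},
     {"dept": "sales", "app": "crm", "gb": 50.0}],
    key="dept", rate_per_gb=5.0)
# bills == {"eng": 750.0, "sales": 250.0}
```

Because the grouping key is a parameter, the same records can be re-cut by application or location, which is what "report by almost any grouping criteria imaginable" means in practice.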

Integrated reports
In addition to database reporting, most vendors offer tools that report on which switch ports are free and used, which volumes on storage arrays are allocated and unallocated, and how much traffic is traversing the SAN. How well each product does this depends on how it's architected and whether the vendor integrates this function into its overall reporting.

The products that best integrate these reports have the largest agents and invariably the most complicated installs. However, this is changing as newer, better-designed products with small agents can read the worldwide name (WWN) from a server's host bus adapter (HBA) and, with only that information, identify that WWN's path through the SAN and all of the storage assigned to it on any storage array. Older products use more cumbersome methods to gather the same information.

For instance, products such as CreekPath Systems' Storage Operations Manager, EMC's Control Center and Veritas' SANPoint Control collect this information by putting agents on every server attached to the storage network. These agents perform active monitoring on the servers where they reside, querying the server and SAN for updated information based on parameters set by the user. Because the agents remain active on the server, users need to decide how current they want their information to be--that is, how often the agents should run. They also need to consider how long it will take to configure these agents and whether they will realistically use all of the information the products provide. Look for products whose agents can be deployed and configured in less than two minutes; anything longer isn't worth the effort.
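The WWN-based correlation described above amounts to a lookup against fabric zoning and LUN-masking inventories. A simplified sketch with hypothetical data shapes (real products discover these maps from switches and arrays rather than taking them as dictionaries):

```python
# Given a host HBA's WWN, derive its fabric path and assigned storage
# from inventory maps. The map shapes here are illustrative only.
def storage_for_wwn(wwn, zones, lun_masks):
    """zones: wwn -> list of switch ports; lun_masks: wwn -> list of LUNs."""
    return {
        "wwn": wwn,
        "fabric_path": zones.get(wwn, []),
        "luns": lun_masks.get(wwn, []),
    }

# Usage with sample inventory data
zones = {"10:00:00:00:c9:3a:1b:2c": ["switch1/port4", "switch2/port7"]}
luns = {"10:00:00:00:c9:3a:1b:2c": ["array1/LUN5"]}
view = storage_for_wwn("10:00:00:00:c9:3a:1b:2c", zones, luns)
```

The appeal of the small-agent design is visible here: the host agent only has to report one identifier, and everything else is resolved centrally.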

Tape utilization and library monitoring remains below most users' radar screens. Products like Storability's GSM do give visibility into StorageTek's tape environments and most tape vendors provide native utilities to measure and report on library capacity. Yet with the increasing use of ATA disk backup, the relatively low cost of tape and the scarcity and proprietary nature of tools that manage tape, users should concentrate on getting disk under control before worrying about tape.

Measuring the assets
The time it takes to measure storage assets will depend on the tool selected, the size of the environment, what's being measured and the number of agents deployed. All of these factors will contribute to the quantity, quality and level of detail gathered on the environment. Users should temper their reporting expectations in accordance with how long the tool has been implemented. Depending on the components deployed, expect initial reports to provide statistics on storage utilization, the sizes and types of files on individual servers, available ports on FC switches and allocated and unallocated storage on each storage array.

These usage reports can have an immediate impact on the bottom line. They help organizations determine whether they need to purchase more storage arrays or switch ports, whether they have enough available to meet current and forecasted needs and whether they're buying the right kind of storage. For instance, the reports may confirm that the organization needs to purchase more storage, but also reveal that it doesn't need an expensive monolithic array--a more affordable modular array will do because the data is primarily reference data.

After the tool is up and running for a few months, expect more comprehensive enterprise reports. Storage utilization may now be grouped by such criteria as application, operating system, location or department.

Capacity management
Making information life cycle management (ILM) a reality requires intervention. Actively managing your storage to achieve an effective ILM program should be an important consideration in choosing a capacity management tool. Right now, only the best-managed environments can report on and measure storage capacity, and even they remain hard-pressed to redeploy storage assets quickly. Next-generation technologies--newer storage arrays, iSCSI SANs and network-based file- and block-level virtualization, along with host-based replication software--will help organizations realize the benefits of a centrally managed storage utility.
