To manage storage effectively, storage administrators must know how much storage is available, how much is being used, what applications are accessing the storage, what service levels are required for each application and how fast storage needs are growing. Many storage professionals are turning to a new generation of storage resource management (SRM) tools to help them understand the configuration of their storage environment and the behavior of the individual storage components.
The scope and capabilities of SRM have expanded dramatically over the years. A decade ago, SRM primarily involved storage provisioning and allocation -- the creation and management of LUNs. This was coupled with capacity measurement and utilization features.
But SRM has grown to encompass automation, performance measurement and monitoring, capacity planning and change management. Today, SRM can also be connected to backup, business continuity, and disaster recovery technologies. But because of this broad scope, "It gets really confusing for companies trying to buy it [SRM products]," says Robert Laliberte, analyst at GlassHouse Technologies Inc. Adding to the confusion is that some vendors provide SRM features as specific point products, while others deliver SRM as a more complex (and expensive) software framework.
SRM tools still emphasize capacity utilization and reporting, but don't mistake "allocated" storage for "used" storage. An SRM utility that reports on allocated storage may not reveal the true amount of used storage. For example, if 95% of a storage array is allocated to applications, the array may seem full, but if only 30% of that allocated storage contains files and other data, the array may actually be poorly utilized.
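The arithmetic above can be made concrete with a short sketch. The helper name and figures below are hypothetical, not taken from any real SRM product; they simply restate the 95%-allocated, 30%-of-allocation-used scenario:

```python
# Hypothetical figures illustrating the allocated-vs-used distinction:
# an array can look nearly full by allocation while holding little data.

def utilization_report(raw_capacity_tb, allocated_tb, used_tb):
    """Return (allocated %, truly used %) of an array's raw capacity."""
    allocated_pct = 100.0 * allocated_tb / raw_capacity_tb
    used_pct = 100.0 * used_tb / raw_capacity_tb
    return allocated_pct, used_pct

# 100 TB array, 95 TB allocated, but only 30% of the allocation
# (28.5 TB) actually contains files and other data.
allocated_pct, used_pct = utilization_report(
    raw_capacity_tb=100, allocated_tb=95, used_tb=28.5)
print(f"Allocated: {allocated_pct:.0f}%  Actually used: {used_pct:.1f}%")
# prints: Allocated: 95%  Actually used: 28.5%
```

An SRM report built on the second number, not the first, is what reveals the poorly utilized array.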
SRM tools should also emphasize heterogeneity. The Storage Management Initiative Specification (SMI-S) is key to heterogeneous operation, but success will depend on the individual storage environment. "SMI-S isn't quite there yet," Laliberte says. "So typically the [storage] vendor's [SRM] products will work better with their equipment." SRM must also allow administrators to identify virtual machines and trace their links to storage and performance.
Ideally, an SRM tool will allow a storage administrator to see the entire storage environment at a glance, then drill down into any area of the environment. The tool may also offer performance analysis, real-time reporting and trended behaviors of the storage environment. Storage administrators can use performance data to tune applications and find bottlenecks. Performance and utilization should be evaluated in tandem. It's bad policy to scale up utilization, only to find that performance suffers. "Balance performance, availability, capacity and energy," says Greg Schulz, founder and senior analyst at the StorageIO Group.
Even though today's SRM products are relatively mature, complex tools may not be worth the cost and trouble of deployment for some shops. When complex tools are implemented, underutilization is a serious concern. Some organizations simply don't have the time, staff or expertise to fully integrate and utilize all the features of an SRM tool. Smaller shops may opt for SRM products that focus on a limited number of tasks.
How are SRM products deployed?
SRM only makes sense in some type of SAN or NAS environment. Deploying SRM for a handful of disassociated network servers makes no sense -- the more complex the storage environment, the more SRM makes sense. You should also make sure that a prospective SRM tool is compatible with your storage infrastructure. Not only should the tool interoperate with your storage platforms, it should also be supported by your network infrastructure. Since SRM tools must store the information that they collect, be sure you have the storage resources available to meet the tool's repository needs. For most tools, the additional network traffic imposed by SRM should not hurt network performance.
SRM products are normally deployed on a server in the enterprise. Most SRM tools rely on agents that are installed on other servers or storage platforms. "The benefit of an agent is getting you more information and more application-aware information," Schulz says. "The downside is there's an agent you have to put on every server." It is possible to simplify the deployment by placing agents only on key servers where more granular information is needed.
Tools that don't use agents are typically getting their information through some type of management information base, an application programming interface (API), SMI-S or executing a shell script command against a server port. This can add complexity to the storage environment because software agents, scripts and other means of transferring data can suffer compatibility problems or require periodic upgrades over time. Some of the newest SRM tools are agentless, but this can limit the depth of information acquired from the storage platform.
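As a rough illustration of agentless collection, the sketch below parses the output of a POSIX `df -k`-style command, as a collector might after running it remotely (over SSH, for example). The function name and sample figures are assumptions for illustration, not the mechanism of any particular SRM product:

```python
# Minimal sketch of agentless data gathering: parse df -k style output
# into per-filesystem records, as a collector might after executing the
# command remotely. Sizes follow df -k conventions (1K blocks).

def parse_df(output):
    """Parse df -k output into a list of per-filesystem dicts."""
    rows = []
    for line in output.strip().splitlines()[1:]:  # skip the header row
        fields = line.split()
        rows.append({
            "filesystem": fields[0],
            "size_kb": int(fields[1]),
            "used_kb": int(fields[2]),
            "mount": fields[-1],
        })
    return rows

# Sample output as it might be captured from a remote host.
sample = """Filesystem 1K-blocks Used Available Mounted on
/dev/sda1 103081248 30924374 72156874 /
/dev/sdb1 515604480 412483584 103120896 /data"""

for fs in parse_df(sample):
    pct = 100.0 * fs["used_kb"] / fs["size_kb"]
    print(f'{fs["mount"]}: {pct:.0f}% used')
# prints: /: 30% used
#         /data: 80% used
```

Real collectors add error handling, credentials and scheduling; the compatibility and upgrade burden the article describes comes from maintaining exactly this kind of glue across many platforms.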
The biggest mistake in SRM deployment is underestimating the tool's complexity, which can result in features being abandoned or underutilized. Point products, like capacity planning tools, may be simple to deploy, but for large frameworks or other complex SRM suites, storage administrators can engage the vendor's professional services group to assist with the initial installation and configuration.
While most SRM vendors have a return on investment (ROI) story to tell, the value of the tool will vary depending on your environment. However, capacity utilization is the low-hanging fruit for SRM payback. An SRM tool that reports unexpectedly low utilization can save money by precluding the need to purchase more disks. Conversely, a report of unexpectedly high utilization might result in disk purchases that keep key applications running smoothly.
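A back-of-the-envelope sketch of that capacity-utilization payback, with entirely hypothetical figures (the cost per terabyte, utilization target and sizes are illustrative assumptions):

```python
# Hypothetical ROI arithmetic: if an SRM report shows an array is far
# less full than allocation suggests, a planned capacity purchase can
# be deferred, and the avoided spend is the payback.

def deferred_purchase_savings(planned_tb, used_tb, total_tb,
                              target_utilization, cost_per_tb):
    """Dollar value of planned purchases made unnecessary by headroom."""
    headroom_tb = total_tb * target_utilization - used_tb
    avoided_tb = min(planned_tb, max(headroom_tb, 0.0))
    return avoided_tb * cost_per_tb

# 100 TB array with only 30 TB truly used, a 70% utilization ceiling,
# and a planned 20 TB purchase at an assumed $1,000/TB.
print(deferred_purchase_savings(20, 30, 100, 0.70, 1000))
# prints: 20000
```

Here the reclaimed headroom (40 TB below the 70% ceiling) covers the entire planned purchase, so the full $20,000 is deferred.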
SRM ends storage allocation nightmare
SRM technology has become crucial to National Medical Health Card Systems (NMHC) Inc., a national independent pharmacy benefit manager. NMHC manages a spectrum of data, such as pharmacy card benefits processing, co-payment information, drug eligibility data, pharmacy payment details and comprehensive drug utilization interaction reporting. NMHC's data volumes have approached 90 TB with 40 TB of raw storage on two Hewlett-Packard Co. (HP) XP24000 enterprise arrays and another 20 TB across two HP EVA8000 midlevel arrays. The remaining 25 TB are spread across three AS/400 servers used as JBOD arrays.
Limited funding made the promise of visibility and automation particularly appealing to NMHC, allowing more work with fewer IT staff. "Allocating storage for 200-plus systems and managing 40 TB to 60 TB of SAN storage was becoming a nightmare," says Babu Kudaravalli, senior director of operations, business technology services at NMHC. "We made an executive decision -- rather than spend money on humans, we would start using automated tools." NMHC uses HP's Storage Essentials SRM offering, building on experience with the automated tools already included with HP's XP-class arrays.
NMHC evaluated SRM carefully, implementing the tools across a single XP-class storage array and four servers running Windows, HP Unix, Red Hat Inc. Linux and IBM AIX operating systems. Kudaravalli found the correlation and statistical data developed between servers and storage to be particularly useful, along with the single-pane-of-glass interface that allowed servers, storage and backups to be monitored from one location. SRM also provided cost accounting, allowing storage administrators to put a monetary value on the storage being allocated. From there, the organization built out SRM to encompass most of the storage environment.
Although Kudaravalli cannot offer a specific ROI for SRM, he has seen a significant reduction in management labor. "We [now] have half of a storage administrator here," he says. "We eliminated at least one full-time employee." That may not sound like a lot, but the ability to grow the storage environment into the foreseeable future with little, if any, increase in management labor is a huge benefit for NMHC.
SRM did present one unforeseen implication: a loss of direct control over specific storage content, such as LUNs and disk locations. For example, the automation of SRM may make it impossible for a storage administrator to direct a specific LUN into a desired RAID group. "It's an inherent problem with any automated tools," Kudaravalli says, noting that the advantages of automation and visibility more than compensate for any loss of direct insight into storage locations.
Future of SRM
Aside from the ongoing move to support data center virtualization, SRM is becoming an increasingly important part of data center automation -- not just in provisioning or thin provisioning, but in integrating with and feeding information to data movers, classification utilities and other tools that can make decisions about stored data.
SRM may ultimately evolve into a data center operating system. "SRM will be that piece of the puzzle that fulfills the whole storage environment and expand even up to the applications," Laliberte says, citing moves by HP, EMC Corp. and other major SRM vendors. In the meantime, it's important to understand the SRM roadmap being followed by your vendor. If you don't want large comprehensive frameworks in your organization, it may be better to stay with innovative startups that deliver capable point tools.