How much does your storage actually cost? This question has vexed IT departments and storage managers for decades.
Most IT departments budget storage purchases based on the needs (and desperation) of various application users. So if a capacity planning tool projects that the Oracle database will grow another 500 GB in the coming year, IT budgets the necessary storage purchases. In other cases, additional storage is purchased only on an emergency basis, such as if an application performance problem appears.
But these outdated paradigms overlook the business value of storage to the enterprise. Users demand unlimited storage growth and peak performance for all of their data because they're not the ones footing the bill.
The emergence of chargeback storage techniques is rectifying this problem. With chargeback storage tools, IT departments can assign real values to the storage assets that are actually being used by applications and departments. This allows IT to not only identify the true costs of storage but possibly recover them, as well.
Chargeback storage: More than an accounting tool
Contrary to what chargeback storage implies, it isn't just an accounting or billing tool. Chargeback storage is part of a management process -- a means of applying realistic values (or costs) to all the storage assets in a company, then tracking the utilization of those assets by application or department. IT departments use chargeback data to recover storage costs from application or departmental users, as well as (often in conjunction with capacity planning) to budget for the future.
"There's a big disconnect between actually charging and using [chargeback storage] as an elevated form of storage resource management or storage resource reporting," says Greg Schulz, founder and senior analyst at the StorageIO Group.
While some storage resource management (SRM) tools offer basic chargeback storage features, vendors of standalone chargeback products emphasize reporting and billing capabilities. Most chargeback products can track which storage resources are being used, along with the department or application using the storage, then assign a value or cost to the resources. The same model works with other enterprise resources beyond storage capacity, including charging for CPU time, network access and storage performance delivered under service-level agreements (SLAs).
"Even if you're not going to charge [recover costs] for storage, if you need to plan, manage or monitor how you're doing, you need to be able to account and report on storage," Schulz says. "Ironically, SRM tools are a step in that direction." Real utilization data from the chargeback tool can be compared to utilization estimates to see how the department, application or overall organization is performing against forecasts.
Heterogeneity is a requirement for chargeback storage tools. A chargeback model based simply on the amount of storage used has no real bearing on the level of service being delivered or paid for. Consequently, the notion of performance plays a role in chargeback costing. Unless you're operating in a single-vendor environment, heterogeneity in the chargeback tool is needed to collect information and gauge performance across the many storage types, interfaces and applications in an organization.
Determining the true cost of storage
While the concept of chargeback storage is simple, the correct use of chargeback storage relies on realistic costing for each storage tier and system. In short, you have to establish the appropriate costs before you can charge those costs back to a user. This step often poses the biggest roadblock to deployment.
Many IT organizations simply guess at costs or else assign costs across the business arbitrarily. "The criteria that's used for the allocation typically involves the number of people that are in each business unit, or maybe it's the percentage of revenues contributed," says Mike Hogan, director of services development at GlassHouse Technologies Inc. "It has nothing to do with the consumption of IT services."
While no single formula is used to set costs, the three principal cost factors are technology, process and people. Technology costing involves the storage itself, starting with an evaluation of each service level needed in the organization. There may be multiple service levels that mix storage systems with data protection schemes to establish a "menu" of services. For example, service level one may provide Tier 1 storage with a recovery point objective (RPO) of one hour, while service level two may use the same Tier 1 storage with an RPO of 24 hours. In such a case, level one would require more aggressive data protection, e.g. continuous data protection (CDP), and would be far costlier per gigabyte or other "unit" of storage.
But costing should also take into account the process and the people needed to implement and support each service level. Processes may include the staff time needed to partition, back up, recover and maintain each service level. Since it's easy to determine such staff costs, it's a simple matter to shift labor costs to each corresponding service level.
For example, 40% of the total staff cost may be assigned to service level one, 25% may be assigned to level two and 15% may be assigned to level three, leaving the remaining 20% free to handle lower levels or other tasks. There may be other costs, but ultimately the storage administrator can arrive at a "unit cost" for each service level. Now, those costs can be plugged into a chargeback tool and allocated to users based on real usage.
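The costing approach above can be sketched in a few lines of code. This is a hypothetical model, not a vendor formula: the hardware costs, total staff cost and capacities are invented figures, and only the staff-share percentages come from the example split in the article.

```python
# Hypothetical unit-cost model blending technology and people costs.
# All dollar figures and capacities are illustrative assumptions.

def unit_cost_per_gb(hardware_cost, annual_staff_cost, staff_share, capacity_gb):
    """Blend the storage hardware cost with an allocated share of labor,
    then spread the total over the service level's capacity."""
    allocated_staff = annual_staff_cost * staff_share
    return (hardware_cost + allocated_staff) / capacity_gb

ANNUAL_STAFF_COST = 400_000  # total storage-team labor cost (assumed)

service_levels = {
    # name: (hardware cost, staff share from the example split, capacity in GB)
    "level1": (250_000, 0.40, 20_000),  # Tier 1 storage with CDP, 1-hour RPO
    "level2": (150_000, 0.25, 40_000),  # Tier 1 storage, 24-hour RPO
    "level3": (60_000, 0.15, 60_000),   # midrange storage
}

for name, (hw, share, cap) in service_levels.items():
    cost = unit_cost_per_gb(hw, ANNUAL_STAFF_COST, share, cap)
    print(f"{name}: ${cost:.2f}/GB")
```

Run against these assumed inputs, the model yields a per-GB rate for each service level that a chargeback tool could then apply to measured usage.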
Total storage utilization also plays a role in consumption-based cost allocation. "The other thing I have to factor into that unit cost is the cost of unallocated capacity," Hogan says, noting that unallocated storage adds the cost of inefficiency to the unit storage costs being passed back to the business. This is why increasing the utilization of a service level actually drives down storage costs. Hogan underscores the importance of accurate costing, even if you're not ready to implement chargeback technologies soon. "It still makes total sense to take this approach and get to unit-based costing and management because now I can influence how my customers are consuming. It's an indirect way to drive out costs now."
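Hogan's point about unallocated capacity can be made concrete with a short sketch. The total cost and capacity below are assumed figures; the idea is simply that the full cost of a service level is spread over only the gigabytes actually allocated, so a half-empty tier roughly doubles the effective rate.

```python
# Sketch of how unallocated capacity inflates the chargeback rate.
# The annual cost and purchased capacity are assumed for illustration.

def effective_rate(total_cost, total_capacity_gb, utilization):
    """Spread the full cost of a service level over only the GBs in use."""
    allocated_gb = total_capacity_gb * utilization
    return total_cost / allocated_gb

TOTAL_COST = 410_000  # annual cost of one service level (assumed)
CAPACITY = 20_000     # GB purchased for that level (assumed)

for util in (0.50, 0.75, 0.90):
    rate = effective_rate(TOTAL_COST, CAPACITY, util)
    print(f"{util:.0%} utilized -> ${rate:.2f}/GB")
```

The output falls as utilization rises, which is exactly the lever Hogan describes: driving up utilization drives down the unit cost passed back to the business.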
Deploying chargeback tools
Chargeback tools consist of several elements that must be deployed and configured. The core data collection and reporting tool requires installation on a server, whether it's a dedicated appliance, a server set aside for storage/system management, or a virtual server carved from a physical server. Chargeback systems also require data collectors (agents) installed on various storage systems. The agents collect information about storage utilization and service levels. The collected data is then stored in a repository -- some amount of storage set aside for the chargeback system. The chargeback tool accesses the repository, performs analytics and deposits the results back to the repository, where they are later delivered to IT as reports.
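The collect-then-report flow described above reduces to a simple aggregation step. This is a minimal sketch, not any vendor's implementation: the record format, department names, usage figures and per-GB rates are all hypothetical.

```python
# Minimal sketch of the repository -> analytics -> report step of a
# chargeback tool. All records, names and rates are hypothetical.

from collections import defaultdict

# Utilization records as agents might deposit them in the repository:
# (department, service level, gigabytes in use)
repository = [
    ("finance",   "level1", 1_200),
    ("finance",   "level2", 3_000),
    ("marketing", "level2", 5_500),
    ("marketing", "level3", 9_000),
]

# Per-GB rates for each service level, in dollars (assumed).
rates = {"level1": 20.50, "level2": 6.25, "level3": 2.00}

def run_chargeback(records, rates):
    """Roll repository records up into a per-department bill."""
    bills = defaultdict(float)
    for dept, level, gb in records:
        bills[dept] += gb * rates[level]
    return dict(bills)

print(run_chargeback(repository, rates))
```

A real product layers scheduling, heterogeneous collectors and report formatting on top, but the analytics pass is essentially this rollup of usage records against a rate table.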
As with any agent-based system, there are concerns about processing and network overhead. Ideally, a chargeback system's agents and analytics should introduce little performance penalty to the network, but chargeback agents need to coexist along with other tools vying for network time. It's common to see the server group, network group and storage group all deploying agents and gathering information from across the infrastructure to determine utilization and performance. All this activity can hurt performance.
"You might start out with a chargeback focus for storage, but what are your peers doing in other groups?" Schulz asks. "Is there any way to roll some information up?" It may be possible to consolidate some data acquisition to support multiple analytical systems, including chargeback, while maintaining performance.
Software agents also require processing time on each system where they're installed, along with periodic updates, patches and fixes. Intrusive agents can be difficult to maintain and complicated to manage. "If the benefit of putting that agent on a system is outweighed by the complexity to manage it, then agents probably aren't a good thing," Schulz says, noting that it may be acceptable to place agents onto key systems only, rather than all systems. This also ties into the biggest oversight in chargeback storage deployment, which is excessive complexity -- implementing features and functionality that aren't needed. For example, agents may be unnecessary if basic chargeback reporting is all that's really required.
Chargeback tools counter unrealistic user demands
At National Medical Health Card Systems (NMHC) Inc., an independent pharmacy benefit manager, the storage infrastructure supports pharmacy card benefits processing, copayment information, drug eligibility data, pharmacy payment details and drug utilization interaction reporting. About 90 TB of total storage is spread across three storage tiers, including 40 TB on two Hewlett-Packard XP24000 enterprise arrays, 20 TB on two HP EVA8000 midlevel arrays and 25 TB distributed among three AS400 servers.
All this storage had evolved into a ubiquitous utility that users often undervalued. As such, user demands had become a serious problem, resulting in costly, long-term storage requests that didn't make much sense for the organization. "The user community always thinks that there's enough storage out there in the company, and they can keep saving stuff forever," says Babu Kudaravalli, senior director of operations, business technology services at NMHC. In addition to longer retention demands, Kudaravalli points to increasing file sizes and repetitious file storage as major storage drains.
The challenge for Kudaravalli was to rein in unrealistic storage requests, while maintaining adequate (and cost-effective) capacity and performance. The company's initial response was to implement storage quotas, such as limiting the size of Exchange mailboxes. But this solution offered mixed results because of the many exceptions required.
So Kudaravalli turned to the chargeback features in HP's Storage Essentials SRM software already in use in the NMHC data center. By applying realistic costs to each disk partition (LUN) and tracking utilization by department, Kudaravalli was finally able to attach a meaningful cost to each department's storage use.
"When you are aware of the cost, people make more sensible decisions," Kudaravalli says. While NMHC does not actually recover costs from users, the "cost awareness" initiative has impacted department heads, and he has seen new storage requests significantly curtailed.
Although early versions of HP's chargeback module proved time consuming to load and difficult to set up, recent updates to the software have made chargeback features far easier to implement, Kudaravalli says. Chargeback tracking in HP Storage Essentials is limited to each disk partition, so it's impossible to subdivide the partition between multiple users -- a possible disadvantage in some situations. However, the chargeback module does support tiered storage, allowing NMHC to offer users real-cost comparisons between service levels. "Sometimes we can justify an application on Tier 1 storage, even though it costs much more than Tier 2 based on service levels," he says. "If I move an application from A to B, 'this' is the reduction in cost." Ideally, he'd like to see a more detailed cost analysis broken down by file type.
Although there were no formal return on investment (ROI) calculations applied to the adoption of chargeback storage, Kudaravalli notes that the cost savings in moving just 10 TB from Tier 1 to Tier 2 has paid for the chargeback tool in under a year. By analyzing file types and access patterns, NMHC is revisiting quota limits on common storage applications, like email, and migrating large amounts of infrequently accessed data to Tier 3 archives. "By distributing the data to three different platforms, we've actually minimized a single point of failure," he says. "We've spread the risk from one storage system to three."