Managing and protecting all enterprise data


Run storage as a utility

Converting from a traditional decentralized IT and storage infrastructure to running IT services and storage like a utility isn't a trivial task; it requires a big shift for both business units and IT. But mandates to lower costs and meet compliance requirements will undoubtedly result in an increasing number of organizations opting for centralized storage models with tiered storage offerings.



You can achieve considerable cost savings if different tiers of storage are parceled out to business units on a pay-for-use basis.

Running storage as a centralized service has been touted as a good way to reduce costs, simplify storage administration and aid in compliance, but most companies are still far from parceling out storage to different business units in the way an electric company delivers electricity to homes.

Going from a distributed to a centralized storage infrastructure, which is needed to dispense different levels of storage services throughout the company, typically entails a multiyear effort that requires overcoming people, process and technological issues. Not all CIOs and IT managers are ready for such a radical change and, in some cases, it takes a revamp of the entire IT organization to make it happen.

There's little doubt about the merits of centralized storage services from an enterprise perspective. However, the goals and activities of business units and departments will almost always be viewed as a higher priority than a corporate initiative to lower storage costs or streamline storage management. A primary objection of business units is concern that they might not receive the same type and quality of storage they would if storage were under their auspices. "For business units, their storage is always more important than other groups' storage," says Doug Chandler, research director, infrastructure services research group at Framingham, MA-based IDC.



Six tips for running a successful storage utility
Set goals. From the get-go, have a plan in place that includes the end goal. Because it may take years to replace storage silos, it's crucial to define a timeframe for replacing each existing silo with centrally managed storage.

Money matters. Letting storage users know how much storage costs is critical; otherwise, storage is assumed to be free and wasteful storage usage will continue.

Define processes. Well-defined processes that govern all aspects of the centralized storage service are a must when offering storage as a utility.

Always on. High availability and sufficient redundancy to ensure continuous service are imperative.

Consolidate management. A storage management application that provides a single way for all storage administrators to perform storage management tasks is a key ingredient to running storage like a utility.

Virtualize it. Harness virtualization to allow easy scaling of storage services.

Changing job functions
Negotiating with business units and departments is a crucial factor when migrating to centralized storage services. Converting from a distributed to a centralized storage architecture is typically initiated by the IT organization, mainly for budgetary or compliance reasons, or as an initiative to streamline and simplify storage management. "It was the main role of our CIO to work out the ROI, sell it to the executive staff and, once approved, negotiate service-level agreements [SLAs] with department heads and users," reports Mark Deck, senior VP of technology, National Medical Health Card (NMHC) Systems Inc., Port Washington, NY.


People issues within the storage team will likely surface as the roles of some IT members change due to the different skill set required to manage a centralized storage environment rather than storage islands. "Initially, most IT administrators felt that their function and relevance within the IT organization had been reduced," says Deck.

Unlike storage silos, centralized storage that's operated like a utility requires strong processes. As a result, a big part of transitioning from a distributed to a centralized storage model is related to defining, redefining and adopting processes that regulate all aspects of storage: from how storage is acquired, provisioned, backed up and monitored, to how it's budgeted and costs are allocated to end users. The challenging aspect of introducing new processes isn't the actual process definition, but getting people to adopt and abide by them. "For less process-oriented companies, of which the large majority are smaller and midsized firms, this may require a change in culture; in all reality, it'll take time to permeate and it can only happen if it is mandated and championed by the executive staff," says Greg Schulz, founder and senior analyst at StorageIO Group, a technology analyst and consulting firm in Stillwater, MN.

It typically takes companies several years to complete the transition to a centralized storage infrastructure. "It took us two and a half years to replace existing storage silos; it would have been too costly to replace existing storage islands that weren't fully depreciated," explains Deck.

The cost of the migration--and how to phase out legacy systems cost-efficiently--has a major impact on the length of the project. Having seen many companies successfully adopt centralized storage models, Nimish Shelat, outbound marketing manager for Hewlett-Packard (HP) Co.'s Storage Essentials, cautions that "multiyear projects are susceptible to getting off track ... the project plan needs to include an end-of-life schedule for each storage island and this schedule needs to be rigorously executed."


Although the majority of people and process issues occur while transitioning toward centralized storage, they continue to be relevant once the new storage infrastructure is in place. Refining processes and ensuring that service levels are met takes an ongoing effort. Users are tempted to associate centrally provided storage services with "free storage," which could lead to wasteful storage-usage behavior that counteracts one of the main purposes of centralized storage: lowering overall storage costs. Therefore, making users aware of the cost of the storage they consume through chargeback or cost reports is pertinent. "One of the surprises and main challenges since we centralized storage has been a change in users' attitudes toward storage; as they didn't have to pay for it directly, the amount of disk space they used increased significantly," says NMHC Systems' Deck.

Is chargeback necessary?
In environments that charge storage services back to end users, and where cost and storage services are measured by meeting negotiated SLAs, staffing is a very important factor. "If you charge for your services, you need to keep cost and services closely aligned; it definitely has an impact on staffing, as better controls are required," says StorageIO Group's Schulz.

To ensure uninterrupted availability, redundancy is another key attribute of a centralized storage service. From storage controllers, disks and RAID levels to servers and connectivity, most aspects of the storage infrastructure need to be designed with redundancy. Backups and disaster recovery need to be architected so that storage services can resume with minimal service interruption. Meeting negotiated recovery time objectives (RTOs) and recovery point objectives (RPOs) is also essential.
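Negotiated recovery objectives translate directly into concrete checks on the backup design. The sketch below (the class, function and SLA values are hypothetical, for illustration only) tests whether a backup interval can satisfy a tier's RPO:

```python
from dataclasses import dataclass

@dataclass
class ServiceLevel:
    """Hypothetical SLA terms for a storage tier (illustrative only)."""
    rpo_minutes: int   # maximum tolerable data loss
    rto_minutes: int   # maximum tolerable downtime

def meets_rpo(sla: ServiceLevel, backup_interval_minutes: int) -> bool:
    # Worst-case data loss is the gap between protection points, so the
    # backup (or replication) interval must not exceed the negotiated RPO.
    return backup_interval_minutes <= sla.rpo_minutes

tier1 = ServiceLevel(rpo_minutes=15, rto_minutes=30)
print(meets_rpo(tier1, 60))   # hourly backups miss a 15-minute RPO; prints False
```

In practice the same check drives the choice of protection technology: an RPO measured in minutes pushes a tier toward snapshots or replication rather than nightly backup.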

Besides more stringent storage hardware requirements, storage management software harnessed to manage, monitor and analyze the storage infrastructure is indispensable in a centralized storage model. "Having HP Storage Essentials as the single tool for all storage management tasks not only gives us a single view into all aspects of our storage infrastructure, it ensures that all storage management tasks are performed the same way," says Deck.


In addition to storage resource management (SRM) applications like EMC Corp.'s ControlCenter, HP's Storage Essentials and Symantec Corp.'s Veritas CommandCentral Storage that attempt to address all storage management aspects within a single application, smaller, more targeted applications have found a place in many centralized storage environments, complementing or taking the place of full-blown SRM apps. Akorri Inc.'s BalancePoint for performance management, MonoSphere Inc.'s Storage Horizon for storage capacity management, Onaro Inc.'s SANscreen for SLA and change management, and TeraCloud Corp.'s Storage Analytics for storage utilization analysis are a few of the examples of tools companies have deployed for storage management and reporting (see "Storage management products," PDF below).



A comparison of storage management products is available as a PDF.


Besides simplified administration and the ability to standardize storage throughout the enterprise, the biggest incentive to running storage like a utility is the cost savings in procuring and maintaining storage. One of the big drawbacks of storage silos in the form of DAS, or SAN and NAS islands, is that disks and arrays are dedicated to a single server or department; regardless of how much available storage there is, it can't be extended beyond its dedicated use, resulting in dismal overall storage utilization. With centralized storage run like a utility, storage is provisioned from a storage pool and utilization is determined by how much spare capacity is needed. "We were able to increase storage utilization from under 60% to over 90%," says NMHC Systems' Deck.
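Deck's utilization figures can be illustrated with simple arithmetic. The usage numbers below are invented, but they show why pooling raises utilization even while keeping healthy headroom: dedicated silos must each carry their own spare capacity, while a shared pool needs only one margin over aggregate demand.

```python
# Ten silos, each provisioned with 1 TB but using only 0.5-0.7 TB.
used_tb = [0.5, 0.7, 0.6, 0.55, 0.65, 0.5, 0.6, 0.7, 0.55, 0.6]

silo_capacity = 1.0 * len(used_tb)        # 10 TB of dedicated storage
silo_util = sum(used_tb) / silo_capacity  # ~60% -- stranded headroom per silo

pooled_capacity = sum(used_tb) * 1.1      # one shared pool, 10% headroom
pooled_util = sum(used_tb) / pooled_capacity  # ~91%

print(f"silos: {silo_util:.0%}, pooled: {pooled_util:.0%}")
```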

Determining how much spare capacity is required is very company specific, and the amount of spare capacity should be based on a combination of historical storage growth and storage forecasts. "In the past, we bought storage ad hoc and we were 95% reactive; now, we extend storage based on annual forecasts and we are 95% proactive, and we almost never have to procure storage ad hoc," says Dan Trim, director of information technology infrastructure support at Health Alliance Plan of Michigan in Detroit.
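The proactive approach Trim describes amounts to projecting next year's demand from observed growth and adding a safety margin. A minimal sketch (the function name, growth rate and margin are illustrative assumptions, not figures from the article):

```python
def capacity_plan(current_tb: float, annual_growth_rate: float,
                  safety_margin: float = 0.15) -> float:
    """Project next year's capacity need from historical growth, plus a
    safety margin, so purchases happen on forecast rather than ad hoc."""
    forecast = current_tb * (1 + annual_growth_rate)
    return forecast * (1 + safety_margin)

# e.g. 100 TB today with 30% observed annual growth and a 15% margin
print(round(capacity_plan(100, 0.30), 1))   # 149.5 TB to procure for next year
```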


Storage tiers
Maximizing storage utilization and growing storage on an as-needed basis results in significant storage savings. Storage costs can be further reduced by providing different tiers of storage to applications and users by segregating the storage pool into different classes based on performance, service level and cost. Offering three different storage tiers is the most popular approach (see "Tiers offer service and savings," below).


Tiers offer service and savings
An important aspect of running storage as a utility is the ability to provide different tiers of storage with varying service levels and cost. The number of tiers, their characteristics and the size of each tier depends on the specific environment. For instance, 65% of National Medical Health Card (NMHC) Systems Inc.'s storage is Tier 1 because a large percentage of it is used for database apps. Conversely, environments with a large number of file stores are likely to have a higher percentage of Tier 2 or Tier 3 storage.

In addition, the definition of storage tiers is very relative. What's considered Tier 1 storage in one company may serve as Tier 3 in other places. "Tiering is very customer specific; some customers consider [EMC] Clariion as Tier 1, while others view only [EMC] Symmetrix as Tier 1," explains Kevin Gray, product marketing manager for storage products at EMC Corp.'s resource management group.

Although storage tiers can't necessarily be pinned down in absolute terms, storage tiers--especially Tier 1 storage--do have common characteristics. Tier 1 storage is typically characterized by high performance, high availability, scalability, expandability and higher cost. Tier 1 storage is likely to offer more advanced features than lower tier storage, such as a broader choice of supported interfaces, cloning, snapshots, and synchronous and asynchronous replication. Array vendors have structured their offerings to support a three-tier approach to storage and, in general, high-end arrays like EMC Symmetrix, Hitachi Data Systems' TagmaStore Universal Storage Platform or Hewlett-Packard Co.'s StorageWorks XP arrays are likely to serve the critical Tier 1 storage layer.

Besides the choice of the storage array, Tier 1 storage must provide the most aggressive service levels related to backup, restore and disaster recovery (DR). Although the objectives for recovery and DR are company specific, Tier 1 storage is likely to serve mission-critical apps that may have zero tolerance for downtime. As a result, Tier 1 storage frequently uses snapshots, replication or continuous data protection that provides real-time backup, close to instantaneous restores and the ability to restore to any point in time. For the most critical of applications, synchronous replication to a secondary array or nodes in a clustered architecture can provide instantaneous failover with zero downtime.


Tier 1 storage usually consists of expensive high-performance storage and provides the highest degree of redundancy and availability. Tier 1 storage typically comprises high-end Fibre Channel (FC) arrays and is used for mission-critical apps such as enterprise resource planning systems or databases. While Tier 1 storage frequently has four-nines of availability, Tier 2 storage is slightly less reliable, isn't as fast and is less expensive than Tier 1 storage. It's used for less mission-critical applications and applications that don't require the performance of Tier 1 storage. Tier 3 storage usually comprises large, inexpensive disks, such as SATA disks, and is frequently used as file or archival storage.

NMHC Systems' Deck implemented a three-tier storage architecture that delivers the aforementioned service levels. As an HP shop, the three tiers are served by HP storage arrays: Tier 1 storage includes HP StorageWorks XP Disk Arrays with FC drives; Tier 2 is made up of HP Enterprise Virtual Arrays (EVAs) with FC drives; and HP StorageWorks 1000 Modular Smart Arrays (MSA1000) with SATA drives serve as Tier 3.

"While XP-based Tier 1 storage is priced at about $7/GB per month, Tier 2 storage runs at about $3/GB per month and Tier 3 storage at about $1/GB per month," explains Deck. Because users have a tendency to ask for more storage than they need, having a cost associated with each tier is a crucial instrument to manage and provide users with the right level of storage that meets application and cost requirements. "Initially, users wanted Tier 1 storage for almost all of their applications; only after we priced each tier and users became aware of the cost implications [did] they accept the idea of lower tier storage for less mission-critical applications," says Deck.


Having different storage tiers greatly lowers overall storage cost, but storage tiers per se won't prevent valuable Tier 1 storage from being filled with inappropriate content and files that haven't been used for ages. To get the most out of storage, an increasing number of firms deploy data migration and information lifecycle management tools that automatically move data among different tiers based on predefined rules. For instance, rules can be defined to move all files that haven't been modified for more than three months to less expensive Tier 3 storage, or to purge redundant copies of certain file types. Automatic data migration, however, isn't without challenges and usually breaks down when there's a need to move data among different vendors' products. Most significantly, as files get moved among tiers, the cost of the storage service changes and it becomes more difficult to relay the correct cost of storage back to users. Fortunately, some SRM tools like EMC ControlCenter 6.0 can track storage cost as files move among tiers.
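A tiering rule like the one described, moving files untouched for three months to cheaper storage, reduces to a scan over modification times. The sketch below is illustrative only (the function name and 90-day threshold are assumptions); production ILM tools express such rules through policy engines and handle the actual data movement.

```python
import os
import time

NINETY_DAYS = 90 * 24 * 3600  # ~3 months, in seconds

def files_to_demote(root, now=None):
    """Return paths under `root` not modified in ~3 months: candidates
    for migration to less expensive Tier 3 storage."""
    now = time.time() if now is None else now
    stale = []
    for dirpath, _dirs, names in os.walk(root):
        for name in names:
            path = os.path.join(dirpath, name)
            if now - os.path.getmtime(path) > NINETY_DAYS:
                stale.append(path)
    return stale
```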

Storage management and reporting
Storage management plays an instrumental role in running storage services. Provisioning storage to end users, managing changes and being able to monitor, report and analyze the storage infrastructure are key areas that need to be properly addressed. Some companies have automated the provisioning process, harnessing process management apps like EMC IT Process Centre, which allows a storage administrator to design the provisioning process with a Visio-like tool and then turn it into an application to enforce the designed process. In other words, it generates all elements of a storage provisioning app, from request and approval forms to the workflow that enforces the actual process.


Properly managing changes is a key requirement to ensure uninterrupted IT services. Changes always involve a certain level of risk. A well-defined change management process that ensures all changes are reviewed, tested (if possible), approved and scheduled puts the proper controls around changes. If possible, changes should also have a provision for rollback that allows returning to the configuration prior to the change. Due to the inherent risk of changes and their potential detrimental impact, SRM vendors have added rudimentary change management features to their suites. Onaro SANscreen takes change management to another level by analyzing the impact of changes to the storage infrastructure prior to enacting a change.

Visibility into all aspects of the storage infrastructure, including dependent services like servers and the network, is a prerequisite to running storage like a utility. Only complete visibility into the full data path, from the application that uses the storage service and servers that host the apps, to involved storage switches, arrays and spindles, will enable the storage service to live up to agreed service levels. Performance, availability, utilization, errors, data analysis and usage patterns are some of the attributes a storage management app needs to report on along the full data path.

While SRM applications can report on all of these areas, they don't necessarily provide the depth and breadth of tools that focus on one specific aspect. For instance, all SRM applications can analyze storage performance, but Akorri BalancePoint provides cross-domain performance data that includes all IT infrastructure dependencies.

"Cross-domain storage management tools with predictive analysis capabilities that allow troubleshooting and analyzing of all facets of the storage infrastructure are instrumental to delivering committed service levels," says StorageIO Group's Schulz.


Virtualization of IT resources is probably the most significant trend in today's data centers, and within a few years most storage services will be provided on a virtualized infrastructure that runs on highly redundant server farms and storage arrays. Virtualization addresses the age-old problem of managing heterogeneous storage and servers but, more importantly, it enables utility-based computing by providing an easily scalable platform. As a side effect of increased virtualization and the resulting interdependencies, the boundaries between storage and servers will fade, leading to common teams, management and budgets for server and storage infrastructure.

On the other hand, most storage management tools were designed to manage resources for a single server. With one server hosting multiple virtual server instances, many storage management tools have lost the ability to manage all virtual server instances and provide visibility into the full data path. Fortunately, vendors realize the problem and are working to tightly integrate their applications with VMware, today's leading virtual server software.

Converting from a traditional decentralized IT and storage infrastructure to running IT services and storage like a utility isn't a trivial task; it requires a big shift within the business and IT organization. However, a continuous mandate to lower costs and meet compliance requirements will undoubtedly result in an increasing number of organizations opting for centralized storage models with tiered storage offerings. This will enable them to provide storage on a what-is-required basis vs. the wasteful and overprovisioned manner inherent to distributed storage environments.


