You can achieve considerable cost savings if different tiers of storage are parceled out to business units on a pay-for-use basis.
Going from a distributed to a centralized storage infrastructure, which is needed to dispense different levels of storage services throughout the company, typically entails a multiyear effort that requires overcoming people, process and technology issues. Not all CIOs and IT managers are ready for such a radical change and, in some cases, it takes a revamp of the entire IT organization to make it happen.
There's little doubt about the merits of centralized storage services from an enterprise perspective. However, the goals and activities of business units and departments will almost always be viewed as a higher priority than a corporate initiative to lower storage costs or streamline storage management. A primary objection of business units is concern that they might not receive the same type and quality of storage they would if storage were under their auspices. "For business units, their storage is always more important than other groups' storage," says Doug Chandler, research director, infrastructure services research group at Framingham, MA-based IDC.
People issues within the storage team will likely surface as the roles of some IT members change due to the different skill set required to manage a centralized storage environment rather than storage islands. "Initially, most IT administrators felt that their function and relevance within the IT organization had been reduced," says Deck.
Unlike storage silos, centralized storage that's operated like a utility requires strong processes. As a result, a big part of transitioning from a distributed to a centralized storage model is related to defining, redefining and adopting processes that regulate all aspects of storage: from how storage is acquired, provisioned, backed up and monitored, to how it's budgeted and costs are allocated to end users. The challenging aspect of introducing new processes isn't the actual process definition, but getting people to adopt and abide by them. "For less process-oriented companies, of which the large majority are smaller and midsized firms, this may require a change in culture; in all reality, it'll take time to permeate and it can only happen if it is mandated and championed by the executive staff," says Greg Schulz, founder and senior analyst at StorageIO Group, a technology analyst and consulting firm in Stillwater, MN.
It typically takes companies several years to complete the transition to a centralized storage infrastructure. "It took us two and a half years to replace existing storage silos; it would have been too costly to replace existing storage islands that weren't fully depreciated," explains Deck.
The cost of the migration--and how to phase out legacy systems cost-efficiently--has a major impact on the length of the project. Having seen many companies successfully adopt centralized storage models, Nimish Shelat, outbound marketing manager for Hewlett-Packard (HP) Co.'s Storage Essentials, cautions that "multiyear projects are susceptible to getting off track ... the project plan needs to include an end-of-life schedule for each storage island and this schedule needs to be rigorously executed."
Although the majority of people and process issues occur while transitioning toward centralized storage, they continue to be relevant once the new storage infrastructure is in place. Refining processes and ensuring that service levels are met takes an ongoing effort. Users are tempted to associate centrally provided storage services with "free storage," which could lead to wasteful storage-usage behavior that counteracts one of the main purposes of centralized storage: lowering overall storage costs. Therefore, making users aware of the cost of the storage they consume through chargeback or cost reports is pertinent. "One of the surprises and main challenges since we centralized storage has been a change in users' attitudes toward storage; as they didn't have to pay for it directly, the amount of disk space they used increased significantly," says NMHC Systems' Deck.
Redundancy is another key attribute of a centralized storage service because it ensures uninterrupted availability. From storage controllers, disks and RAID levels to servers and connectivity, most aspects of the storage infrastructure need to be designed with redundancy. Backups and disaster recovery need to be architected so that storage services can resume with minimal service interruption. Meeting negotiated recovery time objectives (RTOs) and recovery point objectives (RPOs) is also essential.
Besides more stringent storage hardware requirements, storage management software harnessed to manage, monitor and analyze the storage infrastructure is indispensable in a centralized storage model. "Having HP Storage Essentials as the single tool for all storage management tasks not only gives us a single view into all aspects of our storage infrastructure, it ensures that all storage management tasks are performed the same way," says Deck.
In addition to storage resource management (SRM) applications like EMC Corp.'s ControlCenter, HP's Storage Essentials and Symantec Corp.'s Veritas CommandCentral Storage that attempt to address all storage management aspects within a single application, smaller, more targeted applications have found a place in many centralized storage environments, complementing or taking the place of full-blown SRM apps. Akorri Inc.'s BalancePoint for performance management, MonoSphere Inc.'s Storage Horizon for storage capacity management, Onaro Inc.'s SANscreen for SLA and change management, and TeraCloud Corp.'s Storage Analytics for storage utilization analysis are a few examples of tools companies have deployed for storage management and reporting (see "Storage management products," PDF below).
Determining how much spare capacity is required is highly company-specific; the amount of spare capacity should be based on a combination of historical storage growth and storage forecasts. "In the past, we bought storage ad hoc and we were 95% reactive; now, we extend storage based on annual forecasts and we are 95% proactive, and we almost never have to procure storage ad hoc," says Dan Trim, director of information technology infrastructure support at Health Alliance Plan of Michigan in Detroit.
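The arithmetic behind this kind of forecast-driven planning is straightforward. The sketch below illustrates it in Python; the growth rate, safety margin and starting capacity are hypothetical numbers chosen for the example, not figures from the article.

```python
# Hypothetical sketch of forecast-driven capacity planning: project
# next year's demand from an observed annual growth rate, then add a
# spare-capacity buffer so ad hoc purchases stay rare.

def annual_forecast(current_tb: float, yearly_growth_rate: float,
                    safety_margin: float = 0.20) -> float:
    """Capacity (in TB) to provision for the coming year."""
    projected = current_tb * (1 + yearly_growth_rate)
    return projected * (1 + safety_margin)

# Example: 100 TB in use today, 35% observed annual growth,
# plus a 20% spare-capacity buffer on top of the projection.
needed = annual_forecast(100.0, 0.35)
print(f"Provision {needed:.0f} TB for next year")  # Provision 162 TB for next year
```

The safety margin is what turns a reactive shop into a proactive one: it absorbs forecast error so that unexpected growth doesn't force an unplanned procurement cycle.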
Maximizing storage utilization and growing storage on an as-needed basis results in significant storage savings. Storage costs can be further reduced by providing different tiers of storage to applications and users by segregating the storage pool into different classes based on performance, service level and cost. Offering three different storage tiers is the most popular approach (see "Tiers offer service and savings," below).
Tier 1 storage usually consists of expensive high-performance storage and provides the highest degree of redundancy and availability. Tier 1 storage typically comprises high-end Fibre Channel (FC) arrays and is used for mission-critical apps such as enterprise resource planning systems or databases. While Tier 1 storage frequently has four-nines of availability, Tier 2 storage is slightly less reliable, isn't as fast and is less expensive than Tier 1 storage. It's used for less mission-critical applications and applications that don't require the performance of Tier 1 storage. Tier 3 storage usually comprises large, inexpensive disks, such as SATA disks, and is frequently used as file or archival storage.
NMHC Systems' Deck implemented a three-tier storage architecture that delivers the aforementioned service levels. As an HP shop, the three tiers are served by HP storage arrays: Tier 1 storage includes HP StorageWorks XP Disk Arrays with FC drives; Tier 2 is made up of HP Enterprise Virtual Arrays (EVAs) with FC drives; and HP StorageWorks 1000 Modular Smart Arrays (MSA1000) with SATA drives serve as Tier 3.
"While XP-based Tier 1 storage is priced at about $7/GB per month, Tier 2 storage runs at about $3/GB per month and Tier 3 storage at about $1/GB per month," explains Deck. Because users have a tendency to ask for more storage than they need, having a cost associated with each tier is a crucial instrument to manage and provide users with the right level of storage that meets application and cost requirements. "Initially, users wanted Tier 1 storage for almost all of their applications; only after we priced each tier and users became aware of the cost implications [did] they accept the idea of lower tier storage for less mission-critical applications," says Deck.
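The per-GB rates Deck quotes make the cost conversation with users concrete. The sketch below shows a hypothetical monthly chargeback calculation using those published figures; the business-unit allocation in the example is invented for illustration.

```python
# Monthly chargeback sketch using the per-GB tier prices quoted in
# the article ($7, $3 and $1 per GB per month). The business unit's
# allocation figures below are hypothetical.

TIER_PRICE_PER_GB = {"tier1": 7.00, "tier2": 3.00, "tier3": 1.00}

def monthly_chargeback(allocations_gb: dict) -> float:
    """Sum each tier's allocated gigabytes times its monthly rate."""
    return sum(gb * TIER_PRICE_PER_GB[tier]
               for tier, gb in allocations_gb.items())

# A unit holding 50 GB of Tier 1, 200 GB of Tier 2 and 500 GB of Tier 3:
bill = monthly_chargeback({"tier1": 50, "tier2": 200, "tier3": 500})
print(f"Monthly charge: ${bill:,.2f}")  # Monthly charge: $1,450.00
```

Seeing that the same 50 GB costs seven times as much on Tier 1 as on Tier 3 is precisely what nudges users toward lower tiers for less critical applications.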
Having different storage tiers greatly lowers overall storage cost, but storage tiers per se won't prevent valuable Tier 1 storage from being filled with inappropriate content and files that haven't been used for ages. To get the most out of storage, an increasing number of firms deploy data migration and information lifecycle management tools that automatically move data among different tiers based on predefined rules. For instance, rules can be defined to move all files that haven't been modified for more than three months to less expensive Tier 3 storage, or to purge redundant copies of certain file types. Automatic data migration, however, isn't without challenges and usually breaks down when there's a need to move data among different vendors' products. Most significantly, as files get moved among tiers, the cost of the storage service changes and it becomes more difficult to relay the correct cost of storage back to users. Fortunately, some SRM tools like EMC ControlCenter 6.0 can track storage cost as files move among tiers.
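To make the age-based rule concrete, here's a minimal sketch of the logic an ILM tool applies: flag files unmodified for more than 90 days as candidates for migration to Tier 3. The directory path and threshold are assumptions for illustration; a real tool would execute the move (and track the cost change) rather than merely report candidates.

```python
import os
import time

# Sketch of a simple age-based migration rule: files not modified in
# the last 90 days become candidates for a move to cheap Tier 3
# storage. The 90-day threshold and scanned path are illustrative.

def stale_files(root: str, max_age_days: int = 90) -> list:
    """Return paths of files last modified more than max_age_days ago."""
    cutoff = time.time() - max_age_days * 86400
    candidates = []
    for dirpath, _dirnames, filenames in os.walk(root):
        for name in filenames:
            path = os.path.join(dirpath, name)
            if os.path.getmtime(path) < cutoff:
                candidates.append(path)
    return candidates

# Usage (hypothetical mount point): list migration candidates on a
# Tier 1 volume.
# for path in stale_files("/mnt/tier1"):
#     print("migrate to tier3:", path)
```

Last-modified time is only one possible criterion; commercial tools typically combine it with file type, owner and access-time rules.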
Properly managing changes is a key requirement to ensure uninterrupted IT services. Changes always involve a certain level of risk. A well-defined change management process that ensures all changes are reviewed, tested (if possible), approved and scheduled puts the proper controls around changes. If possible, changes should also have a provision for rollback that allows returning to the configuration prior to the change. Due to the inherent risk of changes and their potential detrimental impact, SRM vendors have added rudimentary change management features to their suites. Onaro SANscreen takes change management to another level by analyzing the impact of changes to the storage infrastructure prior to enacting a change.
Visibility into all aspects of the storage infrastructure, including dependent services like servers and the network, is a prerequisite to running storage like a utility. Only complete visibility into the full data path, from the application that uses the storage service and servers that host the apps, to involved storage switches, arrays and spindles, will enable the storage service to live up to agreed service levels. Performance, availability, utilization, errors, data analysis and usage patterns are some of the attributes a storage management app needs to report on along the full data path.
While SRM applications can report on all of these aspects, they don't necessarily provide the depth and breadth of tools that focus on one specific aspect. For instance, all SRM applications can analyze storage performance, but Akorri BalancePoint provides cross-domain performance data that includes all IT infrastructure dependencies.
"Cross-domain storage management tools with predictive analysis capabilities that allow troubleshooting and analyzing of all facets of the storage infrastructure are instrumental to delivering committed service levels," says StorageIO Group's Schulz.
Virtualization of IT resources is probably the most significant trend in today's data centers, and within a few years most storage services will be provided on a virtualized infrastructure that runs on highly redundant server farms and storage arrays. Virtualization addresses the age-old problem of managing heterogeneous storage and servers but, more importantly, it enables utility-based computing by providing an easily scalable platform. As a side-effect of increased virtualization and resulting interdependencies, the boundaries between storage and servers will fade, resulting in common teams, management and budgets for server and storage infrastructure.
On the other hand, most storage management tools were designed to manage resources for a single physical server. With one server hosting multiple virtual server instances, many storage management tools can no longer see all virtual server instances or provide visibility into the full data path. Fortunately, vendors recognize the problem and are working to tightly integrate their applications with VMware, today's leading virtual server software.
Converting from a traditional decentralized IT and storage infrastructure to running IT services and storage like a utility isn't a trivial task; it requires a big shift within the business and IT organization. However, a continuous mandate to lower costs and meet compliance requirements will undoubtedly result in an increasing number of organizations opting for centralized storage models with tiered storage offerings. This will enable them to provide storage on a what-is-required basis vs. the wasteful and overprovisioned manner inherent to distributed storage environments.