If so, you aren't alone. About one-third of large and medium-sized companies in the U.S. are implementing tiered storage subsystems, according to International Data Corp. in Framingham, Mass. Another one-third plan to do so in the next 12 months.
"Not all your data needs to be on expensive storage anymore," says Tony Asaro, a storage analyst with Enterprise Strategy Group in Milford, Mass. "We're talking all the time with customers that are putting 'non-critical' data on serial ATA drives, which cost a lot less to buy and manage. Plus, tiered storage enables them to retain data longer, which is important because of mandated new regulatory requirements."
A typical tiered architecture might use a storage-area network (SAN) for transactional or production data. Data of lesser value shifts to secondary arrays of low-cost disk that sit behind primary storage networks. This could include compliance data that must be retained but is seldom accessed.
Virtual tape libraries could provide an additional tier for bulk storage, perhaps archived e-mail or reference copies of old business processes. Storage virtualization, which presents different storage resources as a single pool, is also gaining traction, Asaro says.
Tiered storage potentially makes databases nimbler and more stable, according to Mike Casey, a data-retention analyst with Contoural Inc. in Los Altos, Calif. "If done intelligently, tiered architecture ought to give you two things: better service levels on the data you really care about and lower costs overall," he says.
Cooperative data classification
To achieve those results, storage administrators need input from various stakeholders, experts say. The aim is to quantify the value of different datasets, which in turn determines the class of storage each dataset requires.
"We've found that companies that have successfully implemented tiered storage got cooperation and agreement among the server group, database group, applications group and business units," Asaro says.
Central to planning is the concept that data changes over time, in relation to evolving customer demands and business conditions. "Build an architecture that allows you to move data to secondary tiers after its access patterns drop off. To do that, you need to think about data classification at the time the data is created," Casey says.
For instance, this could mean segmenting test datasets and development files from production databases, archiving them on near-line or midrange storage devices. Establishing effective service levels and policies around each application is cumbersome, yet necessary, Casey says. It often requires winnowing hundreds of datasets to a manageable number. "You don't want one class of data, but you don't want 100, either," he says.
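The lifecycle approach Casey describes — classify data at creation, then demote it as access drops off — can be sketched as a simple age-based policy. The tier names and day cutoffs below are hypothetical, chosen only to illustrate the idea of a small, manageable number of classes:

```python
from datetime import datetime, timedelta

# Hypothetical thresholds: data untouched for 90 days moves to near-line
# storage, for 365 days to archive. The cutoffs and tier names are
# illustrative, not drawn from any specific product or policy.
TIER_RULES = [
    (timedelta(days=365), "archive"),   # e.g., virtual tape / bulk storage
    (timedelta(days=90), "near-line"),  # e.g., low-cost serial ATA arrays
]

def assign_tier(last_accessed: datetime, now: datetime) -> str:
    """Pick a storage tier based on how long ago the data was accessed."""
    age = now - last_accessed
    for cutoff, tier in TIER_RULES:
        if age >= cutoff:
            return tier
    return "primary"  # recently accessed data stays on the SAN
```

In practice such rules would be set per application class after the stakeholder negotiation described above, rather than applied uniformly.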
To minimize turf wars, replace subjective value judgments about data with objective measures, says Stephen Foskett, director of the storage practice at Glasshouse Technologies in Framingham, Mass. Instead of the "report-card method" of grading data (A, A-, A+, B, etc.) -- "that's the worst way to do it" -- Foskett advises storage administrators to take a service-level approach.
If data needs to be replicated and mirrored several times, Fibre Channel primary storage is probably the way to go, even if the business value of the data is low. Conversely, low-end, low-availability storage might suffice for other types of data, Foskett says.
"Talk about data in concrete terms and ask 'users' specific questions," Foskett says. "Do they need data replication? Is 10 milliseconds access time acceptable? Does their data require full daily backup offsite? If you start dealing in things you can actually do, you avoid the fuzzy 'is it valuable/not valuable' question."
Setting policies around business drivers
Casey notes that implementations often stall because enterprises fail in three ways: they don't set policies, don't clearly establish the business drivers and don't develop appropriate service levels. "So they don't have a clear vision of what they want to do from an architectural point of view," he says.
By reclassifying data, enterprises have an opportunity to reap other efficiencies. Automating processes, especially for heterogeneous storage environments, eases administrative headaches, says Rick Villars, a storage analyst with IDC.
"Having a common system of software and intelligence in your storage devices means you are able to start setting common enterprise-wide policies" for how data is managed, Villars says.
Before buying technology, Foskett suggests developing a specific list of attributes for each class of storage to avoid paying for unnecessary features or functionality. Products offering high availability or data replication may serve your primary tier well, yet those functions probably aren't necessary for lower tiers. "You can cut through a lot of the vendor hype by knowing ahead of time exactly what you need," he says.
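Foskett's attribute list can be thought of as a per-class checklist that a candidate product's feature set is compared against. The class names and feature labels below are hypothetical placeholders, not vendor data:

```python
# Hypothetical required attributes for each storage class. In a real
# evaluation these would come from the service-level exercise above.
CLASS_ATTRIBUTES = {
    "primary": {"high_availability", "replication", "snapshots"},
    "secondary": {"snapshots"},
    "bulk": set(),
}

def unnecessary_features(tier: str, product_features: set) -> set:
    """Features a product offers beyond what the class actually needs
    (features you would be paying for without using)."""
    return product_features - CLASS_ATTRIBUTES[tier]

def missing_features(tier: str, product_features: set) -> set:
    """Required attributes the product fails to cover."""
    return CLASS_ATTRIBUTES[tier] - product_features
```

A product that shows many "unnecessary" features for a lower tier is exactly the vendor hype Foskett warns about: capability you'd pay for but never use.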
Reading the fine print on licensing shouldn't be overlooked, either, Villars says. "When consolidating data, be sure a vendor doesn't slip in a clause that locks you up for several years and prohibits you from scaling your architecture," Villars says.
Just as SANs emerged as a way to centralize backup, tiered storage enables companies to radically reduce their cost per gigabyte, Asaro says. Not to be overlooked is storage virtualization. He notes that early adopters of storage virtualization report cost savings on storage hardware and software of 24% and 19%, respectively -- almost double what they anticipated.