Storage administrators are increasingly revisiting tiered storage strategies to lower asset and operational costs, and to improve application performance. This is accomplished by establishing and assigning tiers according to drive cost and performance. In its simplest form, this means assigning non-critical data to higher-capacity and lower-cost drives.
Recent technology improvements aiding the success of tiered storage include new solid-state disks (SSDs), improved Serial Attached SCSI (SAS) and more efficient data movement, classification and quality of service (QoS) software. These developments provide more tiers of storage to choose from, and facilitate moving data to the optimal available storage tier.
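The tier-assignment idea described above can be sketched as a simple policy: frequently accessed or critical data lands on fast, expensive drives, while cold data sits on high-capacity, low-cost drives. The tier names, thresholds and helper types below are hypothetical illustrations, not drawn from any vendor's product:

```python
from dataclasses import dataclass

# Hypothetical tiers, ordered fastest/most expensive to slowest/cheapest.
TIERS = ["ssd", "fc_15k", "sata"]

@dataclass
class Dataset:
    name: str
    reads_per_day: int
    critical: bool

def assign_tier(ds: Dataset) -> str:
    """Assign a dataset to a storage tier by access rate and criticality.

    Thresholds are illustrative; a real policy would also weigh cost
    per GB, IOPS requirements and service-level agreements.
    """
    if ds.critical or ds.reads_per_day > 10_000:
        return "ssd"      # I/O-intensive or business-critical data
    if ds.reads_per_day > 100:
        return "fc_15k"   # warm data on performance drives
    return "sata"         # cold, non-critical data on capacity drives

print(assign_tier(Dataset("billing_db", 50_000, True)))   # -> ssd
print(assign_tier(Dataset("old_archives", 3, False)))     # -> sata
```

In practice the interesting part is not the lookup itself but re-evaluating it as access patterns change, which is where the migration tools discussed later come in.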
Many storage vendors have scrambled over the last year to put SSDs in their storage systems. While solid-state disk technology offers better performance and the opportunity for lower overall system costs, it currently carries a high price premium.
"From a system-level design perspective, solid-state disks certainly have every advantage compared to hard disk drives -- better performance, more throughput, lower power consumption and more resistance to harsh environmental conditions," said Stan Zaffos, research vice president at Stamford, Conn.-based Gartner Inc.
SSDs serve two primary functions in storage systems: hosting I/O-intensive applications such as credit card validation and transaction logging, and acting as a shared high-performance resource so systems can make greater use of back-end, high-capacity, low-performance drives.
Historically, the drawbacks of solid-state disks have been higher acquisition costs, lower capacity and questions about long-term reliability. Limiting SSDs to I/O-intensive applications, or using them to reduce overall system costs, can lower total cost of ownership, especially once power and cooling savings are factored in, and sidesteps the capacity concerns.
While SSDs still cost considerably more than even the most expensive Fibre Channel (FC) drives, that will change as the technology becomes more widely implemented and vendors devise better ways to manage data on solid-state drives.
"The economics of SSDs can be dramatically improved if the storage system implements some sophisticated management techniques to maximize the technology's usefulness," Zaffos said.
Sun Microsystems Inc.'s Sun Storage 7000 Unified Storage Systems use such techniques to group SSDs into large "super" disk caches that hold frequently accessed data. The systems' high-capacity hard disk drives handle I/O more slowly, but they can store immense data sets and lower the total system cost.
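The "super cache" approach can be illustrated with a toy read path: check a fast SSD-backed cache before touching the slow capacity disks. The names here (SsdCache, read_block) and the LRU policy are invented for illustration; this is a sketch of the general technique, not Sun's implementation:

```python
from collections import OrderedDict

class SsdCache:
    """Toy model of an SSD read cache fronting slow capacity disks.

    The cache keeps the most recently used blocks (an LRU policy);
    misses fall through to the hard-disk tier and populate the cache.
    """
    def __init__(self, capacity_blocks: int, backing_store: dict):
        self.capacity = capacity_blocks
        self.backing = backing_store          # stands in for the HDD tier
        self.cache = OrderedDict()            # stands in for the SSD tier
        self.hits = 0
        self.misses = 0

    def read_block(self, block_id: int) -> bytes:
        if block_id in self.cache:
            self.cache.move_to_end(block_id)  # mark as recently used
            self.hits += 1
            return self.cache[block_id]
        self.misses += 1
        data = self.backing[block_id]         # slow HDD read
        self.cache[block_id] = data
        if len(self.cache) > self.capacity:
            self.cache.popitem(last=False)    # evict least recently used
        return data

store = {i: bytes([i]) for i in range(100)}
cache = SsdCache(capacity_blocks=8, backing_store=store)
for block in [1, 2, 3, 1, 1, 2]:              # repeated "hot" blocks
    cache.read_block(block)
print(cache.hits, cache.misses)               # -> 3 3
```

The payoff is the same one the article describes: a small, fast tier absorbs the hot working set, so the bulk of the data can live on cheap, slow capacity drives.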
EMC Corp. was the first vendor to incorporate SSDs into its enterprise storage systems, and now offers 3.5-inch SSDs with its Symmetrix and Clariion storage-area network (SAN) arrays and Celerra unified storage systems. Scott Delandy, a senior product manager at EMC, said the firm's enterprise flash drives can plug into devices that already feature SATA and Fibre Channel drives. Enterprise flash drives can also plug into existing SANs as a tier 0 option, Delandy said.
With support for 6 Gbps and other enterprise features on the way, SAS drives are emerging as an alternative to FC drives and a way to lower overall system costs by giving manufacturers more flexibility in designing disk trays and back-end systems. SAS has traditionally been used primarily in servers and for direct-attached storage (DAS), but will become another option in enterprise arrays along with performance Fibre Channel and capacity SATA drives.
"Now I can choose between different drive types, different drive sizes, different performance characteristic drives and different tiers of drives, all with the same interface," said Greg Schulz, founder and senior analyst at StorageIO Group in Stillwater, Minn.
Hitachi Data Systems' midrange Adaptable Modular Storage (AMS) 2000 series offers SAS drives ranging from 146 GB to 450 GB, in 10K rpm and 15K rpm spindle speeds. The series also supports 500 GB or 1 TB SATA II drives, and FC or iSCSI host connectivity.
"SAS drives in the 3.5-inch form factor are showing up in more and larger arrays," Schulz said. "They're working their way from the mid to the upper market."
Small form factor (SFF) 2.5-inch SAS drives will also allow vendors to sell denser arrays to save footprint in the data center.
"What's really ramping up is the 2.5-inch, high-performance SAS drives," Schulz said. "We're seeing them right now in the entry-level and mid-market, and I would give them another 18 months to 24 months to make it to the high end."
Policy-based migration tools are finally allowing tiered storage to move from the planning stage to reality, said Mark Peters, an analyst at Enterprise Strategy Group in Milford, Mass.
"Now we have many virtualized migration tools that allow migration to actually occur," he said.
Prior to the development of sophisticated and dynamic migration tools, IT shops (for the most part) only migrated data when they absolutely had to, Peters said.
Dynamic migration tools allow IT staff to move data while users remain online, a cheaper alternative to restricting migrations to windows when users don't need system resources. That means storage shops no longer have to call in team members to work around the clock on weekends.
"The ops people stay over the weekend," Peters said. "They have to be paid overtime. Then they get days off in loads. Then they're useless Monday morning, and the migration probably didn't work anyway."
Heterogeneous migration solutions such as Dynamic Storage Tiering in Symantec Corp.'s Veritas Storage Foundation "helps [administrators] automate the movement of information from one storage tier to another," said Sean Derrington, Symantec's director, storage and availability management group. "We've engineered a multivolume file system so everything is transparent to the application, database, data protection application and end users," he said.
Policies can be devised according to business or IT goals, Derrington said. "IT can include the business organization as much or as little as it wants."
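A tiering policy of the kind Derrington describes can be expressed as ordered rules that map file attributes to target tiers, with a planner deciding which files should move while they remain accessible. The rule format, tier names and thresholds below are a hypothetical sketch, not Symantec's actual policy syntax:

```python
import time

DAY = 86_400  # seconds

# Hypothetical policy: (predicate, target_tier) pairs, evaluated in order.
POLICY = [
    (lambda f: f["last_access"] > time.time() - 7 * DAY,  "tier0_ssd"),
    (lambda f: f["last_access"] > time.time() - 90 * DAY, "tier1_fc"),
    (lambda f: True,                                      "tier2_sata"),
]

def target_tier(file_meta: dict) -> str:
    """Return the first tier whose rule matches the file's metadata."""
    for predicate, tier in POLICY:
        if predicate(file_meta):
            return tier

def plan_migrations(files: list) -> list:
    """List (name, current_tier, target_tier) for files that should move."""
    moves = []
    for f in files:
        dest = target_tier(f)
        if dest != f["tier"]:
            moves.append((f["name"], f["tier"], dest))
    return moves

now = time.time()
files = [
    {"name": "orders.db", "tier": "tier1_fc",  "last_access": now},
    {"name": "q1_report", "tier": "tier0_ssd", "last_access": now - 30 * DAY},
    {"name": "old_logs",  "tier": "tier1_fc",  "last_access": now - 365 * DAY},
]
for move in plan_migrations(files):
    print(move)
```

Separating the policy (what should live where) from the mover (how blocks or files actually relocate) is what lets the business side tune the rules without touching the migration machinery, as Derrington notes.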
The next evolution in moving data across tiers will be to couple the data movement solutions with emerging e-discovery data classification technologies, said Jeff Boles, senior analyst and director, validation services at Hopkinton, Mass.-based Taneja Group.
"As an industry, we don't have a rich history of being able to deeply classify and structure our storage," Boles said. "So when you introduce tiered storage -- whether at the file-system level or at the block level -- to an environment, there are some real challenges."
But newer e-discovery data classification tools have Boles thinking the marriage between movement technologies and classification technologies is imminent.
"I have no doubt that in the next two years we're going to see an evolving marketplace where these solutions do become coupled together," Boles said.
Data movement technologies are driving tiered storage growth, and quality of service (QoS) tools may not be far behind. Boles decried the "total lack of QoS capabilities" in today's market.
"It still amazes me today that nobody has been able to bring a real storage software stack to the market that can help you understand and manage your enterprise according to policies around those different tiers of service and even do things within the storage fabric to optimize your performance," Boles said.
Virtual Instruments' VirtualWisdom is one product that seeks to conquer the QoS problem. It provides "on the wire" instrumentation and measurement capabilities to optimize virtualized environments.
Boles believes more QoS solutions will emerge in the next few years. "It will happen because we're seeing people focused on predictive frameworks," he said.