The rise of the ultra-dense array

Disk drives are getting smaller and smaller even as their capacities rise. Now storage vendors are packing more disks than ever into smaller spaces, which saves costly data center real estate. But the denser arrays also have a downside: higher power consumption and more heat.

Higher disk densities and smaller form factors will affect power and performance.

Even if business demands haven't driven you to classify data and implement tiered storage, changes in technology and relentless data growth will force you to consider these initiatives. Ever-higher data density on disk platters will slow performance, just as applications demand more. To maintain performance, we'll soon have ultra-dense arrays with massive numbers of tiny drive mechanisms that guzzle power and spew heat. It will end only when we get real about data requirements and force the bulk of our storage onto big, slow, efficient drives. Tiered storage is coming whether we like it or not.

Packing more into less
Data is growing at an alarming rate. It's certainly compounding, and some recent studies suggest the growth rate is accelerating. So far, disk capacities have kept up with the growth of space usage, even outpacing Moore's Law. In a past column (see "Five axioms for storage," Storage, June 2004), I noted that while disk capacity isn't necessarily subject to the same technical improvements that inspired Moore's Law, it has kept pace since the mid-1990s, doubling every 18 months. Will this pace continue? I think so.

Until now, the only way to pack more bits onto a disk platter (or ribbon of tape) was to make them smaller and squeeze them in like tinier and tinier puzzle pieces. The big idea of the 1990s that enabled disk storage to jump from 25% to 60% compound annual growth was magneto-resistive heads, which allowed much smaller bits to be written. Today's big idea is perpendicular recording. The name refers to standing magnetic regions (and thus bits) vertically, perpendicular to the surface of the disk, instead of laying them out flat. Current estimates suggest this will increase disk density by a factor of 10.

Although the technique was invented in 1976, perpendicular recording saw its first commercial application just this year, in a 160GB 2.5-inch disk drive from Seagate. Hitachi Global Storage Technologies also plans to introduce the technology, promising 20GB Microdrives and a 1TB 3.5-inch mechanism. It's clear that perpendicular recording will enable at least a few more years of capacity growth, so what's the problem?

The performance crunch
As I pointed out back in 2004, the issue with these dense drives is performance. As every database or storage engineer will tell you, "High performance comes from lots of spindles." The more disk mechanisms you can access in parallel, the higher your aggregate performance, so these massive drives will hurt performance. Denser media does speed sequential transfers, but all of that capacity still has to squeeze through one narrow pipe (the drive's interface), and there's still only a single arm sweeping the heads over the platters; random I/O per gigabyte falls as capacity per spindle climbs. This is an important point because we're already seeing 500GB drives offered in enterprise arrays.
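The spindle math behind this crunch can be sketched in a few lines of Python. The drive capacities and per-drive IOPS figures below are illustrative assumptions, not vendor specifications; the premise is simply that a drive's random-I/O rate, bounded by seek and rotational latency, barely improves as platter density grows.

```python
# Sketch: how many spindles does a workload need, for capacity vs. for IOPS?
# Per-drive IOPS (~180) and capacities are hypothetical round numbers.

def spindles_needed(capacity_tb, target_iops, drive_gb, drive_iops):
    """Spindles required to satisfy BOTH the capacity and the IOPS demand."""
    for_capacity = -(-capacity_tb * 1000 // drive_gb)  # ceiling division
    for_iops = -(-target_iops // drive_iops)
    return max(for_capacity, for_iops)

# 10 TB of data that needs 20,000 random IOPS:
old = spindles_needed(10, 20_000, drive_gb=73, drive_iops=180)   # -> 137
new = spindles_needed(10, 20_000, drive_gb=500, drive_iops=180)  # -> 112

print(old, new)
```

With 73GB drives, capacity dictates the spindle count (137); with 500GB drives, capacity alone would need only 20 spindles, yet the IOPS target still demands 112 — the extra capacity per drive buys almost nothing once performance is the constraint.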

The answer to this problem is to use smaller drives. Array vendors have long since switched to 3.5-inch drive mechanisms, and the next big jump will be to 2.5-inch so-called "laptop drives." Seagate offers its Savvio family of 2.5-inch enterprise-class drives, while Hitachi and Samsung are targeting the enterprise with their little disks. These mini drives consume about half as much power per gigabyte as their larger brethren, and thus produce approximately half the heat as well.

But their biggest win is weight. It takes approximately 22 pounds of 73GB 3.5-inch disk drives to make up a terabyte of capacity. Swap those 3.5-inch mechanisms for 2.5-inch units and you're down to about 6.5 pounds. Today's enterprise arrays typically pack about 250 drives into each cabinet, or more than 400 pounds of disk drives! Indeed, the drive mechanisms themselves make up more than half the weight of a full enterprise storage unit. With 2.5-inch mechanisms, we'd have closer to 115 pounds of drives, and the disks would shrink to approximately 25% of the storage device's total weight.
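A quick back-of-the-envelope check reproduces the figures above. The per-drive weights here are rough assumptions for mid-2000s mechanisms (about 1.6 pounds for a 3.5-inch drive, about 0.45 pounds for a 2.5-inch drive), not measured values.

```python
# Pounds of drive hardware per terabyte, given a drive's capacity and weight.
# Per-drive weights are assumed, not measured.

def pounds_per_terabyte(drive_gb, drive_lb):
    drives = 1000 / drive_gb        # drives needed to reach 1 TB
    return drives * drive_lb

print(round(pounds_per_terabyte(73, 1.60), 1))  # 3.5-inch: ~21.9 lb/TB
print(round(pounds_per_terabyte(73, 0.45), 1))  # 2.5-inch: ~6.2 lb/TB
```

Roughly 14 drives of 73GB each make a terabyte, so the weight difference per terabyte is just 14 times the weight difference per drive — consistent with the article's 22-pound and 6.5-pound figures.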

Heading off trouble
So if heat, cooling and weight go down while performance goes up, what's the problem? The issue is that we'll have to use more and more spindles to keep performance up as the density of each disk platter increases. I expect smaller, cooler 2.5-inch drives will soon be the standard in enterprise arrays, but performance and space demands will mean that the number of "spindles" per array will double or triple in short order. Soon, all of your power and weight savings will go right out the window. In fact, power consumption and heat dissipation for these new ultra-dense storage arrays will begin to increase rapidly, with cost driven more by the number of spindles than the amount of capacity.
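The claim that cost tracks spindle count rather than capacity can be illustrated with assumed wattages — roughly 12 W for a spinning 3.5-inch drive and 7 W for a 2.5-inch drive, hypothetical figures rather than vendor specs.

```python
# Sketch: total array power when spindle counts triple to hold performance.
# Per-drive wattages (12 W, 7 W) are illustrative assumptions.

def array_watts(spindles, watts_per_drive):
    return spindles * watts_per_drive

today = array_watts(250, 12.0)  # 250 x 3.5-inch drives -> 3000 W
dense = array_watts(750, 7.0)   # triple the spindles, 2.5-inch -> 5250 W

print(today, dense)
```

Even though each 2.5-inch drive draws far less power, tripling the spindle count to maintain performance pushes total draw well past the original array — the per-drive savings go "right out the window," as the paragraph above puts it.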

These little disk drive platters will always lag behind the capacity of larger 3.5-inch units. As density increases, we'll look to ever-smaller form factors to offset the performance hit. What about enterprise arrays with 1.8-inch disks or even Microdrives? We're looking at a future where high-performance arrays use vast numbers of tiny disks while weight and power consumption continue to rise. Even though each little disk is smaller and lighter, we'll be overwhelmed by the sheer numbers.

As the mice of the disk industry invade the data center, the density increases are creating elephants: massive drives with capacities measured in terabytes. These big disks are much more environmentally efficient than 2.5-inch drives; today's 500GB disk is lighter (two pounds per terabyte) and uses less power per gigabyte than the latest 2.5-inch mechanism. But they're much slower, too.

The real problem is figuring out how to use the larger, slower, more efficient disks. We need to strike a balance between performance and capacity. Why is data growing so rapidly? Does all of this new data truly need the performance we're demanding?

Determining realistic data requirements is a vexing problem, and will be the most important topic in storage management for the foreseeable future. We have to figure out how to reach an agreement with the rest of the business on metrics and policies, but this discussion is bigger than the performance issues outlined in this article. Compliance, data protection and availability also have their own metrics and policies, and we'll have to overcome the different technological issues associated with these facets of storage.

This was first published in June 2006