Best Practices: Pull the plug on high energy costs

Spiraling energy costs are taking an increasingly big chunk of the data center budget. Data centers are grappling with rising electrical bills and, in some locations, limitations on the amount of available power are forcing IT managers to rethink their basic processes.

This article can also be found in the Premium Editorial Download: Storage magazine: New rules change data retention game.

Storage systems account for a big chunk of the data center electric bill. Here are some ways to reduce your costs.


There's been a flurry of activity among IT vendors to develop energy-efficient storage products, and with good reason--the cost of electricity continues to rise in every part of the country. Energy experts' opinions differ on the size of the rate hikes, but one thing remains clear: None of them expects rates to hold steady or decrease. Some even foresee a dire future in which the total cost of powering and cooling a server for four years exceeds what it cost to acquire the server.

For some data centers located in metropolitan areas, it's not just the spiraling cost of electrical power that's a concern; availability may be an even greater problem. They simply can't pull any more power off the existing electrical grid. To bring in new hardware, they have to unplug older hardware first, making the migration effort to updated hardware difficult and risky.

Cutting the cost of energy
Companies concerned about high energy costs are evaluating many options to reduce their monthly bill. Some have taken drastic measures by relocating their large data centers near hydroelectric facilities or other regions of the country where electricity is (relatively) inexpensive. But for many other companies, the relocation option is too expensive or impractical to consider.

But there are other ways to reduce electrical costs. For example, semiconductor chip manufacturers continue to develop cooler, more powerful chips, while server manufacturers design more energy-efficient servers. Much of the attention has focused on energy-efficient servers; several large data centers report that their server farms account for approximately 60% of their electricity consumption. But as these data centers consolidate apps onto larger and cooler servers, the emphasis on saving energy must soon shift to disk storage systems. The same companies with server consolidation plans underway will soon face the next energy challenge, one inextricably linked to the data growth rates of 70% or more per year they report. That high growth, coupled with the need to keep data for longer periods to comply with regulations, means that more and more storage devices must be added. The unrelenting demand for more storage means that the cost to power and cool storage devices will soon exceed the energy requirements of servers in the data center.

Disk storage engineers, like their server counterparts, are developing new and innovative ways to reduce energy consumption. Lower energy consumption chips are finding their way into high-performance storage arrays that are designed to meet the stringent performance demands of online applications.

Higher capacity, lower cost disk storage (usually populated with SATA disk drives) is commonly used to store application data with less-stringent performance demands and as the target for backup and archival applications.

Because the performance requirements of application data stored on these higher capacity disks are less rigorous than those of mission-critical online applications, engineers have more flexibility in designing energy-efficient storage. Evaluating some of the new innovations in high-capacity disk arrays can help reduce energy costs.

Trimming the tab for storage energy
There's a direct correlation between the number and speed of disk drives and the electricity required to power these devices. It takes electricity to spin up disk drives and continuous power to keep them spinning. If we spin these disks faster, more power is needed. We also need to cool these devices. The more disks there are spinning, the more cooling is required. Reducing the energy requirements for disk systems is therefore relatively straightforward. We must reduce the number of spinning disks or spin the disks at a slower rotational rate. These techniques work well to store infrequently accessed data, but the same techniques may wreak havoc with applications that demand fast response times.

Saving energy can be as simple as migrating to larger capacity disk drives. For example, if backups today are written to 250GB SATA disks and we now write those backups to 500GB SATA disks, half the number of disks will be needed and energy--and floor space--will be saved.
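The arithmetic behind that consolidation is easy to sketch. The short Python example below estimates drive count and steady-state power draw for a hypothetical backup pool; the 8 watts per SATA drive is an illustrative assumption, not a vendor spec.

```python
# Rough estimate of drive count and power draw when consolidating backups
# onto larger-capacity disks. The 8 W per drive figure is an illustrative
# assumption, not a measured or vendor-published number.
import math

def drives_needed(total_tb: float, drive_gb: int) -> int:
    """Number of drives required to hold total_tb terabytes."""
    return math.ceil(total_tb * 1000 / drive_gb)

def power_watts(num_drives: int, watts_per_drive: float = 8.0) -> float:
    """Approximate steady-state power draw for that many spinning drives."""
    return num_drives * watts_per_drive

capacity_tb = 50  # hypothetical backup pool
for size_gb in (250, 500):
    n = drives_needed(capacity_tb, size_gb)
    print(f"{size_gb}GB drives: {n} drives, ~{power_watts(n):.0f} W")
```

Doubling the drive size halves both the spindle count and the wattage--and the same ratio carries through to cooling load and floor space.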

Other techniques have been developed to make storage "greener" or more energy efficient. Some storage systems detect when a disk drive hasn't been accessed for several minutes. Once the preset period of inactivity is reached, the drive is "spun down" or rotated at a slower speed to save energy costs. A disk drive designed to spin at 7,200 rpm in normal operating mode may spin at half that speed during its inactive period. When a request to access data on the disk is received, the drive is then "spun up" to its normal speed.

You can also keep disks powered off until they're needed. These systems are sometimes called massive arrays of idle disks (MAID). The concept is simple. Some data stored on disks, such as archival data or backups, may be seldom, if ever, accessed. Disks storing this inactive data are powered off to reduce energy costs. When a request to access the data is received, the disks are powered up; when they're idle again, they're powered down.

Some vendors have implemented both technologies in a single disk system. After the initial threshold of inactivity is met, drives are rotated at a slower speed. These drives are then periodically powered up to run diagnostics, usually monthly, to ensure they continue to function properly.
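The tiered behavior described above--full speed, then slow spin after a period of inactivity, then powered off MAID-style--can be sketched as a simple state machine. The thresholds and state names below are illustrative assumptions, not any vendor's implementation.

```python
# Minimal sketch of tiered drive power states: full speed -> slow spin
# after an inactivity threshold -> powered off after a longer threshold.
# Thresholds are illustrative assumptions, not vendor defaults.
FULL, SLOW, OFF = "full_speed", "slow_spin", "powered_off"

SLOW_AFTER_S = 10 * 60   # drop to half rotational speed after 10 min idle
OFF_AFTER_S = 60 * 60    # power off entirely after 1 hour idle

class Drive:
    def __init__(self) -> None:
        self.state = FULL
        self.idle_s = 0

    def tick(self, seconds: int) -> None:
        """Advance idle time and demote the power state as thresholds pass."""
        self.idle_s += seconds
        if self.idle_s >= OFF_AFTER_S:
            self.state = OFF
        elif self.idle_s >= SLOW_AFTER_S:
            self.state = SLOW

    def access(self) -> None:
        """An I/O request spins the drive back up to full speed."""
        self.state = FULL
        self.idle_s = 0

d = Drive()
d.tick(15 * 60)   # 15 minutes idle
print(d.state)    # slow_spin
d.tick(50 * 60)   # now past the 1-hour mark
print(d.state)    # powered_off
d.access()
print(d.state)    # full_speed
```

The trade-off is latency: an access that lands on a powered-off drive must wait for spin-up, which is why these techniques suit archival and backup data rather than online applications.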

One advantage of tape systems is that their power and cooling requirements are much lower than those of similar-capacity disk systems. Tape cartridges, whether stored on a shelf or in an automated library, require no electricity. Several vendors now also offer removable disk cartridges with many of the same properties as tape cartridges. They look like tape cartridges, but each contains a small hard drive within its plastic enclosure. These cartridges are inserted into a reader that powers up the drive. After data has been read from or written to a cartridge, it's unloaded from the reader and stored in an automated tape library or on a shelf, or possibly transported to a remote site. Like tape cartridges, removable disk cartridges require no power when idle; and like tape drives, the cartridge readers draw little power when idle.

The solutions noted here save energy by spinning drives slower or not at all. But there's another way to save power: store less data. We could ask everyone to delete data they no longer need and aren't required to keep for regulatory compliance. However, many of us feel comfortable keeping old data around ... just in case.

There's an easier way to reduce the amount of data we store: Use software that recognizes and eliminates duplication. Some products eliminate duplicate files, while others detect duplication at the block level. The amount of data reduction varies greatly depending on the type of data and the data deduplication technology. Some data centers report data-reduction ratios of 10:1 or 20:1, while others report triple-digit ratios. Bottom line: Storing less data requires fewer disks and saves energy.
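The block-level approach can be illustrated in a few lines: split the stream into blocks, hash each block, and store each unique block only once. This toy sketch assumes fixed-size blocks and a SHA-256 content hash; real products vary (variable-length chunking, different hashes, and so on).

```python
# Toy block-level deduplication: fixed-size blocks, SHA-256 content hashes.
# Illustrative only -- commercial dedupe engines differ considerably.
import hashlib

BLOCK_SIZE = 4096

def dedupe(data: bytes) -> tuple[dict, list]:
    """Split data into fixed blocks; store each unique block exactly once."""
    store: dict[str, bytes] = {}   # hash -> block contents (stored once)
    recipe: list[str] = []         # ordered hashes to rebuild the stream
    for i in range(0, len(data), BLOCK_SIZE):
        block = data[i:i + BLOCK_SIZE]
        h = hashlib.sha256(block).hexdigest()
        store.setdefault(h, block)
        recipe.append(h)
    return store, recipe

# A highly repetitive stream (like nightly full backups) dedupes well:
data = b"A" * BLOCK_SIZE * 9 + b"B" * BLOCK_SIZE
store, recipe = dedupe(data)
print(f"{len(recipe)} blocks written, {len(store)} stored")  # 10 blocks written, 2 stored
```

Here a 10-block stream collapses to 2 stored blocks plus a small recipe, a 5:1 reduction on this contrived input; backup data with many near-identical full copies is where the 10:1 and 20:1 ratios come from.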

Always consider energy costs
When evaluating new disk systems, acquisition cost will always be important. But energy costs should also be a concern for those data centers grappling with rising electrical bills, or those in locations where there are limitations on the amount of available power. Vendors should supply the power and cooling requirements for all storage devices under consideration, and users should include questions about energy requirements in all requests for proposals. Determine how your future expansion and upgrades will change energy requirements, and ask vendors what they're doing to improve energy efficiency in future versions of their products.

Choosing energy-efficient storage saves money on today's electrical bills and continues to reap savings for the life of the product.

This was first published in September 2007
