Jon William Toigo's five-part series on the best tips for maximizing efficiency in complex data storage environments...
continues with a look at storage energy efficiency, including green data storage technologies that can help control energy costs.
The fascination with “green IT,” which dominated the trade press and agendas of tech conferences only a few years ago, has waned. The earliest conversation around green data storage focused on server power consumption and was leveraged by vendors to make a case for their latest products, many of which delivered energy savings benefits only when you squinted to read the bottom line. As a result, any vendor pitch using the term “green” is often disregarded by consumers who have developed a case of well-deserved cynicism.
However, the chief problems green IT was designed to address -- rising utility power costs and, in some areas of the U.S.A., difficulties obtaining additional power for the data center at all -- are still with us. For a growing number of firms today, power efficiency has become an important dimension of any assessment of data storage technologies.
The reason for the connection is simple. Data storage gear accounts for between 33% and 70% of overall IT hardware spending. As early as 2005, companies like Dell were reporting that storage gear was the biggest power pig in their data centers. The ascendancy of storage to the forefront of the green discussion was a by-product of server consolidation and server virtualization, the latter accounting for data storage infrastructure power spikes that weren't originally predicted. Going to virtual machines required many companies to increase their storage capacity exponentially and to break up Fibre Channel (FC) fabrics in favor of more hypervisor-friendly direct-attached storage (DAS) or network-attached storage (NAS) topologies.
Not surprisingly, Gartner recently more than doubled IDC’s estimates for storage growth over the next three years. While IDC pegged capacity demand increases at 300% over the next three years, Gartner raised the bar to 650% in a recent estimate. Both firms attribute the demand spike to the increasing adoption of server virtualization.
Read the entire Toigo tip series on storage efficiency
Prevent disk capacity problems through efficient allocation
Capacity utilization tips: Cleaning up your storage operation
Achieving data protection efficiency: Five suggestions
Storage performance and acceleration aren't the same thing
Individually, disk drives aren't huge power users, consuming between 3.5 watts and 11 watts in the latest “green” technology units. However, drives are typically placed into arrays that include a server/controller, power supplies, fans and other gear to deliver a kit to the consumer featuring whatever “value-add” capabilities the vendor is promoting this week. Late last year, Hewlett-Packard's 3PAR IOPS record in the Storage Performance Council’s SPC-1 Benchmark required the parallel spinning of 1,900 drives.
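To put a configuration like that in perspective, a back-of-the-envelope estimate helps. The sketch below assumes a midpoint of the 3.5 W to 11 W per-drive range cited above, and the 30% enclosure overhead for controllers, power supplies and fans is an illustrative assumption, not a measured figure:

```python
# Rough sketch: estimating array power draw from drive count.
# The per-drive wattage midpoint and 30% enclosure overhead are
# illustrative assumptions, not measured vendor figures.

DRIVES = 1_900            # drive count from the SPC-1 configuration cited above
WATTS_PER_DRIVE = 7.0     # assumed midpoint of the 3.5 W - 11 W range
ENCLOSURE_OVERHEAD = 1.3  # assumed 30% extra for controllers, PSUs and fans

drive_power_kw = DRIVES * WATTS_PER_DRIVE / 1000
total_power_kw = drive_power_kw * ENCLOSURE_OVERHEAD

print(f"Drives alone: {drive_power_kw:.1f} kW")
print(f"With enclosure overhead: {total_power_kw:.1f} kW")
```

Even under these conservative assumptions, the spinning media alone draws on the order of 13 kW around the clock, before cooling is counted.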
Bottom line: As storage infrastructure continues to grow in capacity, so does equipment power cost (and related air conditioning and ventilation power costs that increase as the need to transfer heat away from all the storage gear does). Given an upward 15% to 22% trajectory in the annual cost of a kilowatt-hour of utility power, the cost of energy is no longer a trivial matter. And according to the North American Electric Reliability Corporation (NERC), which furnishes the U.S. government with reports on utility power generation and distribution statistics, the problems companies are experiencing obtaining additional power in saturated areas of the U.S. power grid (mainly Southern and Northern California, and the New England Corridor) may soon spread to new geographies, including the Midwest.
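To see why a 15% to 22% annual rate increase compounds into real money, consider a quick sketch. The starting utility rate and the continuous load are illustrative assumptions; the growth rate is the midpoint of the range cited above:

```python
# Sketch: compounding a utility rate increase over three years.
# Starting rate, growth rate midpoint and load are illustrative assumptions.

rate_per_kwh = 0.12       # assumed starting utility rate, $/kWh
annual_growth = 0.18      # midpoint of the 15%-22% range cited above
load_kw = 20.0            # assumed continuous storage-related load
hours_per_year = 24 * 365

for year in range(1, 4):
    rate_per_kwh *= 1 + annual_growth
    annual_cost = rate_per_kwh * load_kw * hours_per_year
    print(f"Year {year}: ${annual_cost:,.0f} at ${rate_per_kwh:.3f}/kWh")
```

At that pace, the annual bill for the same load grows by more than a third within three years with no change in the infrastructure at all.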
Accurate estimates: IOPS per watt, capacity per watt
The consideration of power efficiency has many moving parts. The most obvious starting point in any evaluation of the current state of (in)efficiency is to collect data. Every company should be collecting data on real power consumption: at the system level, rack level and infrastructure (HVAC) level. This is baseline data required to predict and evaluate the impact of power cost containment strategies.
Gaining power usage data requires instrumentation. For guidance on how best to gather the data, a good starting point is your utility company. As counterintuitive as that may seem (they want to sell more energy, after all), many utility power companies today have standing programs to help consumers reduce their power demands. In a few cases, they deliver measurement services free of charge or at a minimal cost.
With baseline data in hand, data storage planners are in a better position to discuss power issues with vendors. When evaluating equipment for acquisition, vendors' data center energy efficiency claims should be tested as a condition of the contract. While a new array generally consumes less power than a fully burdened workhorse, the impact of any deployed system on overall energy consumption needs to be tested and measured rather than assumed.
In general, newer arrays that leverage flash solid-state drives (SSD) as read caches (not as write targets) to augment disk performance deliver better power economics than do array architectures that use hundreds of parallelized disk platters or, worse yet, “short-stroked” disks to achieve performance. Short-stroking a disk uses only a few tracks of each media surface to reduce read-write head movement and, by extension, to improve I/O performance; however, the platter motor must nonetheless spin the entire surface. Going to SSD-assisted arrays may have a meaningful impact on the power demands of your performance storage.
This is the idea behind a metric that some folks are calling IOPS per watt. Optimizing IOPS per watt should be a guiding principle when building “capture storage,” storage sporting sufficient performance to handle the read/write requirements of your most demanding business applications.
In most organizations, approximately 70% of the data stored on performance or capture storage doesn’t need to be there at all. Per my previous tip on capacity utilization efficiency, an average of approximately 40% of the data stored on disk is of archival quality and very rarely re-referenced. This is especially the case for most user files, which tend never to be re-referenced after 30 to 90 days.
This suggests that a second class of storage, call it retention storage, may be needed to store low-access data. Given the usage parameters of the files being stored, a solution like tape NAS may provide just what the doctor ordered: high capacity, reasonable access speed and extremely low power consumption. The power efficiency of such a retention storage platform can be measured in terms of capacity per watt.
Bottom line: Making power considerations part of the criteria used to build a data storage infrastructure is the foundation of a power efficiency strategy. IOPS per watt and capacity per watt are metrics that can help you determine (and communicate to management) the value of the strategy you adopt to economize on power.
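Both metrics reduce to simple ratios, which makes them easy to compute and to present to management. The sketch below compares hypothetical tiers; every figure in it is an illustrative assumption for the sake of comparison, not measured vendor data:

```python
# Sketch: comparing storage tiers on IOPS per watt and capacity per watt.
# All figures below are illustrative assumptions, not vendor measurements.

def iops_per_watt(iops: float, watts: float) -> float:
    return iops / watts

def capacity_per_watt(terabytes: float, watts: float) -> float:
    return terabytes / watts

# Hypothetical capture tier: SSD-cached array serving a demanding workload
capture = iops_per_watt(iops=200_000, watts=4_000)

# Hypothetical short-stroked disk farm delivering the same IOPS
short_stroked = iops_per_watt(iops=200_000, watts=15_000)

# Hypothetical retention tier: tape NAS, measured on capacity instead
retention = capacity_per_watt(terabytes=500, watts=300)

print(f"SSD-cached capture tier: {capture:.1f} IOPS/W")
print(f"Short-stroked disk farm: {short_stroked:.1f} IOPS/W")
print(f"Tape NAS retention tier: {retention:.2f} TB/W")
```

The point of the exercise isn't the absolute numbers, which will vary by shop, but that the ratios let dissimilar tiers be compared on a common power-efficiency footing.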
Self-test for job protection
Power efficiency isn’t just about right-sizing storage to address utility energy costs. It's about ensuring that clean and consistent energy is provided to power the storage infrastructure to prevent unwanted downtime. Several studies indicate that power issues play a significant role in business interruptions, which have been estimated to occur with surprising frequency. In a recent survey, respondents reported experiencing an average of 5.12 power-related data center outages in the past two years, each lasting an average of an hour and 46 minutes at a cost of $5,600 per minute.
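Taken at face value, those survey figures imply a substantial two-year exposure; the arithmetic is straightforward:

```python
# Sketch: two-year downtime exposure implied by the survey figures above.

outages = 5.12             # average power-related outages over two years
minutes_per_outage = 106   # one hour and 46 minutes
cost_per_minute = 5_600    # dollars per minute of downtime

total_cost = outages * minutes_per_outage * cost_per_minute
print(f"Implied two-year exposure: ${total_cost:,.0f}")
```

That works out to roughly $3 million over two years for the average respondent, which puts the cost of power protection gear in perspective.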
Power efficiency also means reducing power-related downtime, so now is the time to refresh your knowledge of how power supplies are protected in your data center, including surge protection, uninterruptible power supplies (batteries) and self-generation capabilities. Even if you think you're on top of the challenge, perform a self-test on the gear you've deployed. It may save your job.
BIO: Jon William Toigo is a 30-year IT veteran, CEO and managing principal of Toigo Partners International, and chairman of the Data Management Institute.