Escalating storage demands and energy costs are creating a dilemma for many storage administrators.
According to a recent report from Gartner, half of today's data centers won't have enough power and cooling capacity to operate their storage equipment by 2008. What's more, by the following year, energy costs will emerge as the second highest operating cost in 70% of data centers worldwide.
In some parts of the U.S. and Europe, storage growth is restricted or even prohibited by inadequate power generating capacity -- there is simply no more electricity to be bought at any price. Storage administrators are trying to mitigate the growing problems of power cost and availability by turning to green storage policies: a broad mix of technologies and tactics intended to reduce power consumption not just in the data center, but across the entire enterprise.
This article explains the concepts of green storage, examines deployment issues, looks at one user's struggles and considers the future of green storage practices.
What is green storage?
Most industries are responding to increased energy costs and limited energy supplies by adopting a series of conservation measures, which save money by reducing energy consumption. Efficiency measures -- performing the same work with less energy, or more work with the same energy -- complement straightforward conservation. Taken together, these measures are dubbed "going green."
Green storage is a combination of technologies, practices and policies that lead to lower and more efficient energy use. Energy is getting more expensive, and many organizations can't buy any more power. This potential inability to grow represents substantial risk, and green storage is largely seen as one way of mitigating that risk. "The cost of not responding [to limited power] is losing your ability to operate," says Rick Hayes, senior data center consultant at GlassHouse Technologies Inc. Hayes cited a Gartner report noting that more than 70% of U.S. data centers will experience disruptions due to floor space, power constraints or costs by 2011.
Green storage also includes more traditional ecological considerations, such as reducing hazardous materials, requiring more recycling of storage systems and components, and more energy-efficient data center designs. This may include using environmentally friendly batteries for storage cache or eliminating batteries entirely in favor of capacitor-based energy storage devices. Some storage vendors may even take back old equipment for recycling. "There are regulations out there about the recyclability of equipment," says Mark Peters, analyst with Enterprise Strategy Group. "They're more dominant in Europe, but they're coming here." One example is the Restriction of Hazardous Substances (RoHS) Directive, with which all electronic equipment sold in Europe must comply.
The altruistic message of environmental initiatives doesn't always correlate to the business realities of budget and performance. While vendors say you can save the planet, the real issue for a storage administrator may be to expand storage another 20% without increasing the organization's power costs. "Green 'anything' is not a strategy," Hayes says. "Who would go out and replace storage without a significant business driver?"
Furthermore, because green storage is not a single product or system, there is no single solution to reducing power demands or increasing power efficiency. Consequently, green storage is adopted systematically through the purchase of new disk drives, more intelligent controllers and advanced storage systems. In other words, the "greening" of a storage environment takes place over the course of months and years through technology refreshes.
What are some approaches to green storage?
Hard drives are getting larger and more power-efficient. For example, one green strategy may involve replacing a series of existing 250 GB hard drives with 750 GB or larger models. Capacity is dramatically expanded, but the actual number of drives stays the same, keeping power demands roughly unchanged.
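The consolidation math behind this strategy can be sketched in a few lines. The per-drive wattage below is an assumed figure for illustration only, not a vendor specification:

```python
# Hypothetical illustration of the drive-consolidation arithmetic:
# same drive count, same assumed per-drive power draw, tripled capacity.

def watts_per_tb(capacity_gb: float, drive_watts: float, drive_count: int):
    """Return (total TB, total watts, watts per TB) for a drive population."""
    total_tb = capacity_gb * drive_count / 1000
    total_watts = drive_watts * drive_count
    return total_tb, total_watts, total_watts / total_tb

# 100 drives at an assumed 10 W each, before and after the refresh.
old = watts_per_tb(250, 10, 100)   # 25 TB at 1000 W -> 40 W per TB
new = watts_per_tb(750, 10, 100)   # 75 TB at 1000 W -> ~13.3 W per TB
```

Total draw stays flat while the watts needed per terabyte drops to a third, which is the efficiency gain the strategy is after.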
Drive designs are also changing to reduce power demands. Some drives support variable spindle speeds, allowing drives to slow down when they're not being accessed. Hybrid drives include significant amounts of flash memory on the drive itself, which reduces regular platter access and allows the drive to spin down more frequently and save power. There is also renewed attention to "no-power" media, such as tape and optical/holographic storage technologies, for long-term offline storage.
Disk systems and arrays are also evolving. Drive controllers use less power, and the systems can manage low-power features across greater numbers of drives. In one example, a massive array of idle disks (MAID) keeps the majority of its disks powered down at any given time. While MAID technology isn't recommended for online storage, it's an interesting tactic for nearline and archival storage systems. However, experts still question the total power savings and the long-term reliability of drives within a MAID system.
Server virtualization allows data center administrators to place more virtual machines on fewer physical servers, increasing server resource utilization and improving power efficiency.
Also, there are techniques that reduce the amount of data that needs to be stored. Data deduplication can vastly reduce data storage demands by ratios of up to 50 to 1. While deduplication has mainly appeared in archive and virtual tape library (VTL) systems, it's appearing more frequently in primary storage. This is often coupled with tiered storage so that the online data from expensive, high-performance (power hungry) storage is moved off to slower, high-capacity nearline storage as soon as possible. (See the data deduplication special report for more information.)
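The idea behind deduplication can be illustrated with a minimal fixed-block sketch: store each unique block once and keep references to it. Real products use variable-length chunking and more sophisticated fingerprint stores; this is only a toy model of the concept:

```python
# Minimal sketch of fixed-block data deduplication. Each unique block is
# stored once, keyed by its SHA-256 fingerprint; the original byte stream
# is an ordered list of fingerprints.
import hashlib

def dedup(data: bytes, block_size: int = 4096):
    store = {}   # fingerprint -> unique block, stored once
    refs = []    # ordered fingerprints needed to reconstruct the data
    for i in range(0, len(data), block_size):
        block = data[i:i + block_size]
        fp = hashlib.sha256(block).hexdigest()
        store.setdefault(fp, block)
        refs.append(fp)
    return store, refs

def restore(store, refs) -> bytes:
    return b"".join(store[fp] for fp in refs)

# Highly redundant data (e.g., repeated backups) dedupes dramatically:
data = b"A" * 4096 * 50 + b"B" * 4096 * 50
store, refs = dedup(data)
# 100 logical blocks collapse to 2 unique blocks in this contrived case.
```

The high ratios quoted for backup workloads come from exactly this redundancy: successive backups repeat most of their blocks.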
When properly evaluated and implemented, green storage equipment should have no impact on agreed-upon service levels. "If they're impacting your service, you've made a mistake in your basic architecture and technology selection," Hayes says. Large drives are a possible source of storage contention because capacity has grown without a corresponding boost to I/O speeds. When significantly more data is stored on a given drive, more users will likely attempt to access that data simultaneously, possibly resulting in contention and reduced performance.
How do you implement green storage and measure returns?
Since green storage capabilities are normally integrated into the data center over time, it's important to consider power reduction and power efficiency capabilities as a normal part of any product evaluation. "Vendors that are not able to demonstrate some sort of ongoing strategy to deliver green equipment will fall off the selection list," Hayes says. In virtually every case, the power-related features included with a new disk, controller or array will not substantially add to the cost of that product. It's just part of the technology refresh cycle.
Management support can be easy or difficult to obtain, depending on how the new purchase is presented. If your company already mandates power conservation initiatives in the data center, management buy-in should be a quick and easy proposition, especially when implemented as part of normal technology refreshes. However, management buy-in can be more problematic when purchases are motivated by emergency circumstances and the resulting changes can prove disruptive to the business.
"If you don't know what you're storing, where it is, what you're doing with it, then your chance of doing it in a smarter way is extremely limited," Peters says. There is no single tool for green storage. In many cases, there are numerous storage resource management (SRM), hierarchical storage management (HSM) or other analytical storage tools already running in the organization that can help users understand the composition and performance of storage resources.
When determining return on investment (ROI) on green storage initiatives, experts suggest measuring against criteria that are meaningful to the business, such as energy consumption, IOPS per kilowatt or even revenue per IOPS. The objective is not to compromise the organization's revenue for the sake of kilowatts. For example, reducing power consumption might save money on power costs, but if the total work performed is also reduced, any loss in revenue due to lost work (fewer IOPS) may cost more than the energy that has been saved. This is why the notion of power efficiency is emphasized over simple power cost savings.
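A metric like IOPS per kilowatt makes this tradeoff concrete. The figures below are invented for illustration, assuming two candidate configurations:

```python
# Hypothetical comparison of two storage configurations by power
# efficiency (IOPS per kilowatt) rather than raw power draw alone.

def iops_per_kw(iops: float, watts: float) -> float:
    """Work delivered per kilowatt of power consumed."""
    return iops / (watts / 1000)

current  = iops_per_kw(iops=50_000, watts=8_000)  # 6250 IOPS/kW
proposed = iops_per_kw(iops=40_000, watts=5_000)  # 8000 IOPS/kW
```

The proposed system draws less power and is more efficient per kilowatt, but it also delivers 10,000 fewer IOPS. If the business needs all of that work, the lost revenue may exceed the energy savings, which is precisely why efficiency, not raw savings, is the recommended yardstick.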
Experts debate the importance of green storage from third-party storage providers. If you're not particularly concerned with the efforts of your storage provider, then it's probably a nonissue. However, power efficiency and other environmental considerations are increasingly appearing in RFPs and business-to-business (B2B) sales, causing demand for green storage to cascade down the supply chain. "When you're outsourcing, you look at service level and cost," Peters says. "Beyond that, if it matters to you then it matters; if it doesn't, then it doesn't."
How are IT organizations adopting green storage?
Beth Israel Deaconess Medical Center in Boston is an EMC shop with more than 200 TB of data divided evenly across three distinct tiers: an EMC Symmetrix DMX with Fibre Channel drives for high-performance Tier 1 patient data, an EMC Clariion with SATA drives for nearline data storage and an EMC Centera for deep archive storage.
According to John Halamka, chief information officer at Harvard Medical School and Beth Israel, the real power challenge is to hold the line on energy utilization (costs) while maintaining storage performance and still meet the needs for high-performance computing and 25% annual storage capacity growth. "Can I keep power under 200 kW for several years to avoid a quantum leap in data center operating expense?" Halamka asks. "After that, I'd have a $1.5 million investment in UPS, new power feeds, etc."
For the hospital and other organizations, there is no single solution to green storage. The answer to power conservation has come at several different levels. Server virtualization has helped, making the most of server hardware in the data center. Environmental management has also been important, using more efficient cooling systems and settling on slightly higher temperatures to save electricity. Halamka has also adopted more energy efficient equipment, recently replacing 7500 CRT monitors with flat panel LCDs, reducing display power consumption by a factor of 10.
On the storage side, the initial emphasis is simply storing data on fewer drives using larger disks (e.g., 750 GB drives rather than 500 GB), data deduplication and implementing policies that restrict unnecessary or personal data storage. This reduces the total amount of "spinning storage" and holds growth rates to a manageable level. Archiving is another level of savings that Halamka is currently exploring, moving data that has not been read in 90 days to archival storage or offline storage, such as optical or tape media. The next push is to manage drive operation at a more granular level, only powering the drives that are currently in use while slowing or stopping drives that are idle. "It may be that I have certain types of storage, like a virtual tape library, that's really only backing up data a couple of hours a day. Spindown 20 hours a day; no problem," he says.
For Halamka, the ultimate goal is not in energy savings or traditional ROI, but rather in long-term energy/demand management -- avoiding power increases rather than enjoying power reductions. So far, green initiatives have been successful without compromise to existing service level agreements. Halamka will be keeping a close eye on emerging technologies for additional power-saving capabilities into the future.
Future of green storage
The main thrust of green storage is not to save money, but to maximize efficiency. This means storing more data without incurring more energy costs, since any savings will be absorbed by storage growth. Accomplishing this will involve many technological facets that extend beyond the realm of storage.
But on the storage front, expect to see more energy-efficient hard drives, such as hybrid and fully solid-state drives, along with storage systems and controllers that actively monitor and manage power based on storage tier and access frequency. Expect also to see data reduction technologies, such as deduplication and compression, become standard features of the operating system. Storage systems should also move beyond tiering to become more application-aware so that data is stored most effectively.
"If we have knowledge of what we're storing and we have the management ability in a separate storage server to put the right data in the right place, that's when the efficiencies can really leap forward," Peters says.