How much is a terabyte of enterprise storage worth?
$800,000? $250,000? $100,000? Currently, there are vendors who will sell you a terabyte of storage for $12,000, $10,000 or even $7,000, and prices are expected to drop further.
Nobody pays retail
A survey of 152 storage professionals from large companies done earlier this year reveals that deep discounting on disk subsystems and network switches is rampant.
"We sell [disk] storage at 0.7 cents/MB, and we're profitable," says Diamond Lauffin, senior executive vice president, Nexsan Technologies Inc., Woodland Hills, CA. That works out to $7/GB, or $7,000/TB, and the company expects to have products that threaten to break the $1,000/TB barrier.
"And our cost includes no charge for support for three years," he adds. At that price, you can just toss it out after three years and buy new storage, which will likely cost even less.
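Prices like these are simple unit conversions on a per-megabyte figure (decimal marketing units assumed throughout: 1 GB = 1,000 MB, 1 TB = 1,000 GB). A quick sketch of the arithmetic in Python:

```python
# Convert a storage price quoted in dollars per megabyte into $/GB and $/TB.
# Decimal (marketing) units are assumed: 1 GB = 1,000 MB, 1 TB = 1,000 GB.

def dollars_per_gb(dollars_per_mb):
    return dollars_per_mb * 1_000

def dollars_per_tb(dollars_per_mb):
    return dollars_per_mb * 1_000_000

# The $7,000/TB price point works out to $0.007/MB (0.7 cents/MB):
print(round(dollars_per_gb(0.007), 2))  # 7.0 -> $7/GB
print(round(dollars_per_tb(0.007), 2))  # 7000.0 -> $7,000/TB
```

The same two functions reproduce every per-unit figure cited in this article.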
Granted, this may not fit your definition of highly reliable, highly available, high-performance storage, but those numbers throw into sharp relief the volatile nature of storage costs today. You might keep a terabyte or two in reserve, ready to be switched on in the event of an emergency, or lavishly throw an extra several hundred gigabytes of cheap capacity at a database application to boost performance.
Has a terabyte of enterprise storage become a standard commodity? Not really, if you want a full package of high performance, availability, reliability, support and--most importantly--manageability along with raw capacity. You also want to be assured that it will work with your various servers, switches and networking infrastructure. And if there's a problem, you want skilled service technicians on the case fast. For this, most enterprise storage managers are willing to pay a premium.
But you can get a lot of those enterprise elements for a lot less in today's market, and not just because vendors are cutting deals. Just as the PC annihilated mainframe and minicomputer price models, the relentless march of electronic miniaturization has yielded standardized storage components that allow subsystem manufacturers to package a lot of power for shockingly low costs. And if you define your requirements carefully, you can be the beneficiary.
Survival of the cheapest
If you're willing to pay a premium over the cost of the cheapest storage, you should ask: "How much?" and "For what?"
The perceived cost of a terabyte of enterprise storage is about 15 to 25 cents/MB ($150,000 to $250,000/TB). Depending on the specifics and various bells and whistles, the cost could shoot up to $800,000/TB, but even that price is significantly below the $1 million-plus cost of just a few years ago. Add to that the price pressures of a down market and "we're seeing deals come in at $60,000 per terabyte," says Dan McCormick, vice president, Xiotech Corp., Eden Prairie, MN.
Vendor fear may be the biggest factor driving storage pricing today, but that hasn't been the case until recently. "The biggest factor in storage pricing in the past has been human greed, but now we're seeing the law of supply and demand catch up," says James Porter, the founder of Disk/Trend, Mountain View, CA. "IBM and Hitachi have improved their high-end products, which has forced EMC to reduce its margins," says Porter, who has tracked storage disk pricing for decades.
The most immediate storage pricing action is taking place at the boundary between high-end and midrange storage systems. Midrange storage systems, such as EMC's Clariion, Hitachi Data Systems' Thunder, Hewlett-Packard's Enterprise Virtual Array, LSI Logic's E-series (resold as IBM's FAStT and StorageTek's D series), Sun's T3, Xiotech's Magnitude, and others are offering performance, availability and manageability features that meet--and in some cases exceed--what high-end systems deliver. And they do so at a fraction of the cost.
"We have functionality that competes feature to feature with the high-end systems at 80% of the cost," says McCormick.
HP--now the overall leader in storage following the Compaq merger--is looking to modular, midrange storage as the mainstream future. HP's Enterprise Virtual Array--now equipped with broad OS support and services such as snapshot and cloning--fits the bill for most users who don't require mainframe ESCON or FICON connections, according to Mike Feinberg, chief technology officer.
"The midrange is growing up fast," concedes Chuck Hollis, vice president, EMC Corp. The specifications of today's midrange systems "look like a checklist of the feature sheet of a high-end system," he says.
Within the midrange itself, prices continue to drop while features multiply, as the new Clariions illustrate. In addition to many features formerly found only on EMC's Symmetrix, the CX400 lists at a starting price of $62,000 for 180GB raw capacity, including installation, services and warranty. At full capacity--4.4TB--the list price is $217,500, which drops the per-gigabyte cost below $50 (5 cents/MB). Although that's still pretty pricey compared to the newest disruptive technologies, it's a bargain compared to what the high end traditionally goes for.
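The CX400's per-gigabyte figure reduces to a simple division over the list price and full capacity quoted above:

```python
# Per-gigabyte cost of a fully configured CX400, using the list price above.
list_price = 217_500        # dollars, full 4.4TB configuration
capacity_gb = 4.4 * 1_000   # decimal units: 4,400 GB

per_gb = list_price / capacity_gb
cents_per_mb = per_gb / 1_000 * 100   # $/GB -> cents/MB

print(round(per_gb, 2))        # 49.43 -> just under $50/GB
print(round(cents_per_mb, 1))  # 4.9 -> roughly 5 cents/MB
```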
Perhaps even more indicative of the dynamics of today's market is the CX200, a product conceived by EMC's new partner, Dell. One of the champions of PC economics, Dell pushed for this new entry-level Clariion, which starts at $28,000.
But some experts still see a broad role for high-end monolithic storage. Steve Jeffreys, general manager for hardware and software qualification at Storage Networks, Waltham, MA, says, "Clariion performs very well for streaming audio and video. Symmetrix still holds an advantage when it comes to broad interoperability and connectivity."
Hu Yoshida, chief technology officer for HDS, sees increasingly important core functions that can still only be found in boxes such as HDS's high-end Lightning series.
"The kind of global cache architecture that we have is the only way to guarantee data integrity with remote replication," he says. Yoshida expects Federal regulations and pressure from insurance companies to make mirroring at a distance mandatory for the financial sector. EMC's Hollis ticks off three more reasons why enterprise storage managers still need to consider high-end subsystems:
- They can handle more consolidation, meaning they can connect with more hosts--typically hundreds, and in some cases thousands--and provide higher storage capacity, normally up to 20TB and sometimes more.
- They have more processors than midrange systems, enough to ensure that service quality doesn't degrade even under extreme loads or in the event of failure. If one processor fails, the others can pick up the slack without a noticeable drop in performance.
- They can recover much faster than even the fastest midrange systems, seconds or minutes rather than hours. Like Yoshida, Hollis points to architectural differences between the high end and the midrange that will always result in a performance gap. A high-end architecture includes a different backplane and channel and bus design that substantially increases the amount of host connections and disk array capacity the storage system can support.
For example, midrange systems like Clariion deliver very high I/O rates per channel compared to a Symmetrix channel, but Symmetrix has greater aggregate rate due to far more channels, says Jeffreys. Similarly, a Symmetrix is architected for redundant everything, adds Porter. If any element fails, another can take over immediately. Midrange systems have some redundant components, but not everything.
"If you are a bank, an airline or a phone company, and you can't have a service slowdown, then you need the high end," says Hollis.
Most everyone else, by implication, can probably get away with midrange storage.
It's not all gravy
But not all the technological development of storage networking is driving costs down, at least not in the short run. Storage area networks (SANs), in fact, are a mixed blessing. Although they can reduce the total cost of ownership over time through consolidation, centralized management and administration, they increase the acquisition cost.
And until vendor implementations of SAN standards are as consistent as SCSI standards, storage system vendors will have to endure the cost of extensive compatibility and interoperability engineering and testing, costs they will pass on to users. Shops that also mix storage, switches and HBAs from multiple vendors--whether from a philosophical imperative or circumstance--will also have to bear similar costs.
"It is not enough to just conform to a spec. You have to test and certify that it works, and that costs a lot," says William Pinkerton, director/storage solutions at Pioneer-Standard Electronics Inc., Atlanta, GA, a distributor of high-end storage systems. In the past, high-end systems had an advantage in compatibility and interoperability as a result of the extensive certification testing the systems have undergone, he says, but midrange systems are catching up in this area.
Another complicating cost factor is service and support. Organizations requiring more hand holding will find themselves forced to buy more costly storage systems. The high-end vendors excel at service and support, but it comes at a premium price. The new, modular storage technologies promise to be easier to set up and use right out of the box, minimizing this particular cost. "As time goes by, things will get easier to set up," says Dave DuPont, vice president, LeftHand Networks, Inc., Boulder, CO.
LeftHand, for example, offers modular storage consisting of units almost 500GB in size (list price: $12,500). Through virtualization technology--an extra cost--you can set up multiple modules to appear as a single disk drive, making them easier to manage and support. The company offers a comprehensive service program, but rarely expects to get called, says DuPont. "We also have an installation charge, but most customers skip it. They can get it up by themselves in minutes," he adds.
Storage managers at large enterprises, however, are unlikely to start assembling their own modular storage, at least not as primary storage for their biggest, most mission critical applications. "Enterprise storage managers are willing to pay a premium price for performance, availability and service," says Jerome Wendt, senior information technology analyst at First Data Corp., Denver, CO.
In terms of performance, the high-end storage systems have more specialized tools for troubleshooting problems, such as when an Oracle database inexplicably slows down. For availability, the high-end systems offer redundant everything and the automatic call-for-help feature when something goes wrong. And vendor service "helps us to lower our staffing costs. Those are people we don't have to keep on staff," says Wendt.
Still, Wendt isn't optimistic about the future of today's high-end systems in the face of the midrange onslaught. "The midrange is a very viable solution now, and the gap between the midrange and the high end is closing. So, I seriously doubt the future of the large boxes."
Lower costs ahead
But despite the countervailing factors, cost pressures on both high-end and midrange systems will only increase going forward.
"Even lower cost storage is coming. Some company will combine serial ATA disk drives and iSCSI to create midrange storage," says Robert Gray, research director/storage systems at IDC, Framingham, MA.
Serial ATA drives--originally destined for PCs--assembled into large disk arrays will deliver terabytes of storage to the data center at a cost per megabyte of well under a penny. Many of these arrays will have Fibre Channel (FC) connectivity to fit into existing SANs. Beyond that, IP in the form of iSCSI or some other storage-over-IP standard can eliminate the cost of the entire FC infrastructure.
"These technologies have the potential to be disruptive," says Tony Prigmore, senior analyst, Enterprise Storage Group, Milford, MA. Disruptive technologies first enter a market at the low end or in specialized situations, but increasingly penetrate the high end of the market as performance improves and features are added, ultimately driving down prices and undercutting the dominant technologies.
But the biggest cost--the cost of the labor to manage all this storage--still remains, Wendt points out. Even here, advances in software, particularly storage virtualization, promise to simplify the task of managing terabytes of heterogeneous storage.
New view of storage
Ultimately, the distinctions between low end, midrange and high end--once grounded in capacity and features--are becoming less useful in making decisions about what works for you. New distinctions may be more instructive.
The difference in architecture, more than capacity or features, has led Enterprise Storage Group to revise how it classifies storage. Now it views the high end as monolithic storage compared to the midrange, which it identifies as modular storage.
The features will be similar, but with modular storage "you can pay as you grow," says Prigmore. Organizations simply buy what they need and easily add more storage later as they need it. With monolithic storage, you get a huge capacity at a high price all at once.
But the distinction between monolithic and modular storage goes beyond expandability. It also is about usage. Modular storage will be used as secondary storage or as primary storage for organizations willing to accept occasional service slowdowns and slower recovery.
IDC too is shifting its focus to storage usage. "Over time, we will see more storage system product categories," says Gray. The storage categories will align with how the storage is used, rather than with the size of the organization or the amount of storage. For example, Gray envisions a category of storage for data that won't change often, although it's not quite ready for long-term archiving. Another category of storage might handle stored digital media that needs to be streamed out of storage at a steady rate. Each type of storage will combine the appropriate mix of hardware and software features to do its particular job.
As a result, a new view of storage is emerging. Instead of high, mid and low ends, storage managers are being encouraged to look at their storage in terms of primary, secondary and specialized, such as backup or archiving. The primary applications may need the rich features of the high end or the midrange, but secondary applications can get away with less capable storage that also happens to cost dramatically less.
That kind of tiered approach may be the only way for companies to begin expanding into the petabyte range. Pennies and even fractions of pennies loom large at those volumes. At $0.007/MB, a petabyte of storage costs $7 million. That's too high, but all signs point to it coming down.
Online resources from SearchStorage.com: "Assessing the cost/benefit ratio for storage quotas," by Paul Hilton.