Hot storage technologies for 2010

In our annual assessment, we pick five technologies we think will impact your storage operations in 2010. Read how VMware backup, solid-state storage, thin provisioning, 8 Gbps Fibre Channel and data dedupe for primary storage can change how you manage storage.

 By Rich Castagna, Todd Erickson, Chris Griffin, Ellen O'Brien, Beth Pariseau, Carol Sliwa, Sue Troy

 

VMware backup, solid-state storage, thin provisioning, 8 Gbps Fibre Channel and data deduplication for primary storage: Are these on your 2010 storage to-do list? If not, they should be.

"Hot" -- in reference to enterprise data storage technologies -- can be interpreted in many ways. Hot technologies could be the stuff of dreams that engineers are cooking up in research labs -- but that often takes years, if ever, for real products to emerge. You could also define hot as those emerging technologies that may still be on the cusp of maturity but can have a significant impact on current storage environments.

We favor the latter definition because we think you're more likely to be fighting the storage wars than Star Wars, and would like to be armed with the latest technology available. The five technologies we think will be hot in 2010 may be familiar, but they're still cutting edge while being advanced enough to be practical.

Data backup is still one of the toughest chores in most storage shops, and it got even tougher when server virtualization upset the balance of traditional backup practices. We predict virtual machine backup technologies, already in high gear, will shift even higher with enhanced and new products emerging. Borrowing from the backup world, data deduplication for primary storage systems will become more pervasive to help storage admins cope with spiraling disk capacities. And additional disk system efficiencies will be realized as more vendors offer -- and more shops implement -- capacity management tools like thin provisioning.

With solid-state storage, touted by many as the logical evolution from magnetic media, we might be sticking our necks out a bit. But we think the proliferation of new products, dropping prices and intense interest will result in many more deployments in 2010. Our final hot technology is far more evolutionary than revolutionary: 8 Gbps Fibre Channel (FC). Although storage array vendors have some catching up to do with 8 Gig, we think this is the year they'll do it.

Backup for virtual servers

VMware Inc. may rule the data center, but virtual server backup was little more than an afterthought at many companies as they embarked on server virtualization, leaving storage administrators to cope. Virtual machine (VM) backup is still in its adolescence but maturing fast, with significant developments that should offer some relief for beleaguered backup admins in 2010.

Traditional backup software vendors were slow to respond to the specialized needs of VM backup. Still, many IT organizations stuck with their traditional backup apps for their VMs, which may have given those vendors little incentive to change as long as the prospect of selling multiple agent licenses remained.

But other technologies have emerged to better address the unique needs of virtual server backup. Source-side deduplication and continuous data protection (CDP) products are well-suited to virtual machine backup because they reduce the volume of backup data and therefore lessen the likelihood of I/O contention.

John Merryman, services director at Framingham, Mass.-based GlassHouse Technologies Inc., sees source-side deduplication in products like CommVault Systems Inc.'s Simpana, EMC Corp.'s Avamar and Symantec Corp.'s NetBackup PureDisk as delivering "some pretty tight integration with the ESX environment from a backup perspective."

W. Curtis Preston, TechTarget's Storage Media Group executive editor and independent backup expert, agrees that both source-side dedupe and CDP are good approaches to VMware backup. They both follow an incremental-forever backup model that produces far less data than traditional backup tools.
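
To see why the incremental-forever model moves so little data, consider a stripped-down sketch of source-side deduplication: the backup client fingerprints each chunk of a VM image and ships only the chunks the backup target hasn't already stored. This is a generic Python illustration, not the inner workings of Avamar, Simpana or PureDisk; the fixed chunk size and SHA-1 fingerprint are assumptions made for clarity.

import hashlib

CHUNK_SIZE = 64 * 1024   # assumed fixed chunk size; real products often use variable-length chunking

def backup_image(path, server_index, server_store):
    """Send only chunks whose fingerprints the backup target hasn't stored yet."""
    recipe = []                  # ordered list of fingerprints that reconstructs the image
    sent = skipped = 0
    with open(path, "rb") as f:
        while chunk := f.read(CHUNK_SIZE):
            fp = hashlib.sha1(chunk).hexdigest()   # chunk fingerprint
            if fp not in server_index:             # target has never seen this chunk
                server_store[fp] = chunk           # ship it over the wire once
                server_index.add(fp)
                sent += 1
            else:
                skipped += 1                       # duplicate: record a reference only
            recipe.append(fp)
    return recipe, sent, skipped

# Each nightly "full" transfers only chunks that no prior backup has sent,
# which is what keeps I/O contention on the virtualization hosts so low.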

VM-specific backup products, such as PHD Virtual Technologies' esXpress, Veeam Software's Backup and Replication, and Vizioncore Inc.'s vRanger Pro were designed from the ground up to handle VMware backup. Their advantages include per-socket rather than per-server licensing fees (though experts and users caution that this doesn't always equate to lower costs), and recovery of the virtual machine disk (VMDK) image for greatly simplified disaster recovery (DR) preparedness, as well as recovery of individual files within the VMDKs. Traditional backup tools operate from within the VM, so they're adept at file-level restore but require multiple steps to restore entire VMDKs. And the VM-specific tools are adding deduplication capabilities.

These products are gaining traction. "[With these VM-specific backup tools] it's faster to recover, it's easier to recover and it's easier to move things around because everything's encapsulated," said Edward Haletky, a virtualization consultant and author of two books about VMware.

Nathan Johnson, manager of IT services at NAI Utah, a commercial real estate company in Salt Lake City, avoided traditional backup tools early on. His company implemented Veeam's Backup and Replication software at the same time it rolled out server virtualization. Johnson said he didn't consider a traditional tool "because of how convoluted VCB [VMware Consolidated Backup] was. It's gotten better, but I want something simple. If I get run over by a bus, I want someone from my company to follow the procedures that I've written so that it can come back up easily." (In vSphere 4, VCB has been superseded by new storage integration capabilities and VMware Data Recovery, which addresses some of VCB's limitations.)

Welch's, the Concord, Mass., grape juice company, took a different route. George Scangas, manager of IT architecture, said the company initially used CommVault's Simpana to back up its VMs. "With traditional backup, if we had to restore files and folders within the virtual machine, that worked great. If we had to restore the entire virtual machine, that was a 50/50 shot," he said. The company now uses vRanger Pro to back up its virtual machines in combination with Simpana on nonvirtual servers. vRanger Pro backs up the VMDKs to disk, and Simpana includes that disk when it backs up physical servers to tape, a practice followed by many IT organizations.

The traditional backup vendors aren't sitting still. Hewlett-Packard (HP) Co. and Symantec, for example, are working on updates that promise to deliver end-to-end backup for VMware environments and nonvirtual servers. "With Symantec [Veritas NetBackup] and HP Data Protector getting into the market as strongly as they are, [PHD Virtual, Veeam and Vizioncore] have to start looking over their shoulder for their backup product," consultant Haletky said.

In 2010, VM backup won't disappear as a chore at many IT organizations, but better tools are emerging. A year from now, simpler and more effective VM backup processes should be within reach for most storage administrators.

Solid-state storage

Flash memory has been around for decades, but it's only been in the last 18 months or so that the persistent solid-state storage medium has made its way into enterprise data storage products.

EMC introduced solid-state drives (SSDs) into its Symmetrix array in January 2008; following that, most major IT vendors, including HP, IBM Corp., Hitachi Data Systems and NetApp Inc., made some form of solid-state storage available in server and storage products. Smaller players like Compellent Technologies Inc. and emerging companies like Atrato Inc. have also incorporated solid state with software that automatically migrates data between flash- and disk drive-based tiers of storage.

Even with that level of activity, there's still substantial work to be done to integrate solid-state storage into the rest of the IT environment, particularly with SSDs, which typically consist of flash memory fronted by a disk interface. Other implementations, such as Fusion-io's PCIe cards, offer an alternative to the disk interface and reside in servers rather than disk arrays.

MySpace is familiar with the pros and cons of solid-state storage. The social networking site recently replaced all of the Serial Attached SCSI (SAS) hard drives in one of the massive server farms that serve its Web portals with solid-state devices from Fusion-io.

Although solid-state storage is generally thought of in terms of high performance, Richard Buckingham, vice president of technical operations at MySpace, said the big benefits were savings in power, cooling and server hardware. "Instead of eight $6,000 servers, we can go with one $2,000 box and the cost of the Fusion-io devices doesn't even make up the difference," he said. "The ROI is immediate."
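
Buckingham's math is easy to sanity-check. The Fusion-io card price below is purely an assumed figure for illustration -- the article doesn't quote one -- but it shows how the server consolidation alone can cover the cost of the cards.

old_servers = 8 * 6000         # eight conventional servers at $6,000 apiece
new_server = 2000              # the single $2,000 replacement box
assumed_card_cost = 30000      # hypothetical Fusion-io spend; not a figure from MySpace

savings = old_servers - (new_server + assumed_card_cost)
print(savings)                 # 16000 -- still positive before counting power and cooling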

Buckingham remains open to SSD as well as the PCIe cards, but said the technology hasn't yet proven to be mature enough in his internal tests for production deployments. "It seems like it would be a simple step to pull out one hard drive and put in another that's faster, but under our real-life workload we found that SSDs just didn't perform as well behind a drive interface," he said.

Buckingham also said MySpace won't be replacing its Fibre Channel storage-area network (SAN) infrastructure with solid state anytime soon. "SSDs have a bright future, and flash will almost certainly take over in the future," he said. "But the SAN infrastructure is something we've invested a lot of time and money in and won't be tearing out and replacing for a very long time."

Jeff Boles, senior analyst and director, validation services at Hopkinton, Mass.-based Taneja Group, agrees that SSDs have some way to go before they become a fully integrated part of enterprise storage systems, and that much of that integration work will take place in 2010.

Boles said the market has taken a "massive step forward" in the last six months with systems that intelligently integrate solid state, providing a more efficient way to share solid-state capacity among hosts and automatically move data among multiple pools of storage media. Boles cited IBM's addition of SSDs to its SVC storage virtualization device, new offerings from startups Avere Inc. and StorSpeed that offer granular automated tiered storage, and hints of products to come related to developments such as Texas Memory Systems' acquisition of storage virtualization player Incipient.

"This trend will carry forward in 2010," Boles said, but so far most automated tiered storage and storage virtualization devices handle moving data in and out of solid-state devices at the LUN or volume level, when the most efficient method would be at the block level. "It may be 2011 before we see solid-state storage applied in more unique ways at increased densities," he said.

Not quite hot … yet

 

Cloud storage. Truth is, cloud storage was already struggling for clarity in the marketplace before the October mess with Microsoft and Sidekick carrier T-Mobile. Maybe it wasn't as bad as it first seemed, but some users still lost data -- not exactly a boost for the cloud storage cause. Still, cloud storage is getting plenty of good publicity, which has resulted in a lot of buzz, some new fans and a long line of experts predicting eventual success. But all of that hasn't translated to a prime-time slot for cloud storage.

Disaster recovery (DR) testing software. All the DR gurus keep warning us that solid DR strategies -- and regular DR testing -- should be top priorities, but these products just can't seem to get the respect they deserve. These apps have gotten some traction, but they're still viewed as luxury items at a time when storage pros are spending only on necessities. The bottom line is that DR testing software isn't likely to take off until budgets loosen up.

FCoE. Talk about Fibre Channel over Ethernet (FCoE) and it's easy to get smart people to agree on two things. Yes, it has tangible, proven benefits. No, they don't want to overhaul their data center to accommodate it. Despite all the chatter about FCoE, most storage arrays don't support it yet and most vendors aren't rushing to add it. Experts say FCoE won't heat up until 2011 -- and then the fun will really start, as storage and networking teams duke it out over control of the converged infrastructure.

Tape encryption. If you have tape media going offsite, encryption makes sense, right? Especially with encryption built into LTO-4/5 drives. But key management and the challenges of encrypting at the client remain obstacles. As the security pros like to say, if you lose your keys, you lose your data. Despite hardware and software technology improvements, tape encryption still can't squeeze its way into the spotlight.

 

8 Gbps Fibre Channel

IT organizations haven't made a mad dash to get to 8 Gbps Fibre Channel, but they'll certainly move steadily in that direction as they refresh or add new host bus adapters (HBAs), switches and storage arrays. The pace will accelerate when the cost of the faster technology nears parity with the price of current 4 Gbps gear.

For instance, when Atomic Energy of Canada Ltd. (AECL) needed to increase the port count of its core switch infrastructure, it found the cost of new 8 Gbps 64-port switches from Brocade Communications Systems Inc. to be close to what it had paid for 4 Gbps switches the prior year.

Simon Galton, manager of IT infrastructure services at the Mississauga, Ont.-based company, said the decision to go to the higher speed switches was opportunistic rather than highly strategic, as AECL has no plans at this time to go to 8 Gbps in its HBAs and disk arrays. Because 8 Gbps FC is compatible with earlier generations of the technology, a forklift upgrade isn't required. You just won't get the full benefit of the higher speed until you have 8 Gbps capability across the board.

Moving to 8 Gbps can improve I/O response time and prove especially useful with bandwidth-intensive applications, such as backup and data warehousing, and for virtualized server environments.
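
As a rough guide to what the extra headroom amounts to, per-direction payload bandwidth can be estimated from the line rate and the 8b/10b encoding Fibre Channel uses; treat the figures as approximations, since framing overhead shaves a little more off.

def fc_payload_mb_per_s(line_rate_gbaud):
    # 8b/10b encoding carries 8 data bits in every 10 bits on the wire
    data_bits_per_s = line_rate_gbaud * 1e9 * 8 / 10
    return data_bits_per_s / 8 / 1e6          # bits -> bytes -> MB/s

print(fc_payload_mb_per_s(4.25))   # 4 Gbps FC: roughly 425 MB/s (usually quoted as 400 MB/s)
print(fc_payload_mb_per_s(8.5))    # 8 Gbps FC: roughly 850 MB/s (usually quoted as 800 MB/s)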

Ryan Perkowski, the SAN manager at a large financial institution, has justified 8 Gbps switch ports only for backups. His company purchased a pair of Brocade 5100 switches with three 8 Gbps ports each to link its disk/tape backups and Brocade DCX core.

But the connections between the host servers and the DCX are still 4 Gbps, as are the links between the DCX and storage arrays. Perkowski said he won't expand the 8 Gbps footprint until the firm's major storage vendor offers native ports. "There's no business need for it," he said. "We're having trouble saturating a 4 Gig link. I'm not going to buy stuff just to have it."

The pace of the shift from 4 Gbps to 8 Gbps has been slower than it was from 2 Gbps to 4 Gbps among the Fortune 1000, according to Robert Stevenson, managing director of storage technology at TheInfoPro Inc., a New York City-based research firm. He attributed the sluggish uptake, in part, to the economy's effect on IT spending.

Other contributing factors include the increasing interest in 10 Gigabit Ethernet (10 GbE) for file-based network-attached storage (NAS) or iSCSI SANs, as well as curiosity about Fibre Channel over Ethernet (FCoE). Any major FCoE adoption, however, will likely happen beyond 2010.

Meanwhile, 8 Gbps technology will likely see a marked uptick as the price gap with 4 Gbps continues to narrow. Seamus Crehan, vice president of network adapters and SAN market research at Dell'Oro Group, noted that 8 Gbps switch-side port shipments grew 50% quarter over quarter and became a majority of total Fibre Channel port shipments for the first time since the technology started shipping.

Also, 8 Gbps HBA port shipments doubled between the first and second quarters to nearly 60,000. Crehan cited the March launch of Intel Corp.'s Xeon 5500 (previously codenamed Nehalem-EP) server platform, which offers substantially higher server I/O throughput, as a major driver.

Robert Passmore, research vice president at Stamford, Conn.-based Gartner Inc., predicted that 2010 will be a big year for 8 Gbps FC, with the majority of HBA, switch and storage array purchases going to the faster technology. "We're in the beginning of a very rapid transition," he said.

 

 


Thin provisioning

Thin provisioning has moved beyond its management and application issues of the past to become a must-have feature on many storage systems, and interest should only intensify in 2010.

Brian Garrett, technical director, ESG Lab at Milford, Mass.-based Enterprise Strategy Group (ESG), said vendors have mostly worked out implementation and management issues related to defining separate logical pools and having to reserve capacity for thin-provisioned volumes. Garrett said thin provisioning works smoothly in most cases and is becoming a "feature check-off item" in the storage systems he evaluates.

The benefits of thin provisioning are evident, especially as tightening budgets bump up against ever-growing capacity demands. Releasing provisioned but unused disk capacity to a virtual storage pool and making it available to other applications can significantly increase utilization rates. John Michaels, chief technology officer at Maxim Group, a New York City brokerage firm, used his thin-provisioned FalconStor Software Inc. IPStor and Network Storage System (NSS) units to increase capacity utilization by 59.87%. Michaels said he "could see a difference right away."
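
The mechanics behind those gains are simple: a thin-provisioned array draws physical extents from a shared pool only when an application actually writes, instead of reserving them when the volume is created. Here's a toy sketch of the idea in Python; the extent accounting is illustrative and doesn't mirror FalconStor's or any other vendor's design.

class ThinPool:
    """Shared capacity pool that hands out physical extents only on first write."""
    def __init__(self, physical_extents):
        self.free = physical_extents        # extents actually purchased
        self.allocated = {}                 # (volume, logical extent) -> physical extent

    def create_volume(self, name, logical_extents):
        # Advertise the full logical size to the host; consume nothing from the pool yet.
        return {"name": name, "size": logical_extents}

    def write(self, volume, logical_extent):
        key = (volume["name"], logical_extent)
        if key not in self.allocated:       # first write to this extent
            if self.free == 0:
                raise RuntimeError("pool exhausted -- time to add disk")
            self.free -= 1
            self.allocated[key] = object()  # stand-in for a real physical extent
        # later writes simply reuse the extent already mapped

pool = ThinPool(physical_extents=100)
vol = pool.create_volume("broker_db", logical_extents=500)   # 5x oversubscribed
pool.write(vol, 0)
print(pool.free)    # 99 -- capacity is consumed by writes, not by provisioning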

3PAR Inc. was a thin provisioning pioneer, rolling out the technology in 2003. Since then, most major storage vendors have jumped on the bandwagon: EMC's Virtual Provisioning for Clariion, Symmetrix and Celerra systems; HP's StorageWorks XP Thin Provisioning Software; IBM's space-efficient virtual disks for its SAN Volume Controller (SVC); and NetApp's FlexVol. And there are many others, including Compellent Technologies Inc.'s Dynamic Capacity software and DataCore Software Corp.'s SANmelody software, which converts standard servers, blades or VMs into virtualized storage servers.

User interest in thin provisioning is growing, too. In the 2009 Storage magazine/SearchStorage.com Storage Priorities survey, 14% of respondents said they had already implemented thin provisioning, 21% planned to deploy it by year's end and 35% planned evaluations.

Mark Peters, an ESG senior analyst, noted that thin provisioning will continue to evolve as vendors add the capability to easily convert "fat" storage volumes to thin-provisioned volumes. Last October, 3PAR announced the release of Thin Conversion, a technology the company said will thin previously fat volumes. 3PAR also announced Thin Persistence to reclaim deleted thin capacity, and Thin Copy Reclamation to recapture unused virtual-copy snapshots and remote copy volumes.
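
A common way to picture fat-to-thin conversion -- and this is a generic sketch, not 3PAR's implementation -- is a migration pass that copies only blocks containing real data into the thin pool, skipping the zero-filled space a fully provisioned volume typically reports for regions that were never written.

ZERO_BLOCK = bytes(4096)     # assumed block size, for illustration only

def fat_to_thin(fat_blocks, thin_pool):
    """Copy a fully provisioned volume into a thin pool, skipping all-zero blocks."""
    for lba, block in enumerate(fat_blocks):
        if block != ZERO_BLOCK:          # genuinely used block: allocate and copy it
            thin_pool[lba] = block
    return thin_pool                     # untouched space consumes no capacity in the thin copy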

Compellent and DataCore already offer fat-to-thin and reclamation technologies with their storage systems. As thin provisioning finds more and more users, other vendors will likely follow suit and upgrade their offerings to compete.

Initially, some storage vendors may have been reluctant to offer a technology like thin provisioning that could conceivably cut into their disk sales. But the successes of early entrants and eager acceptance by users persuaded them to get on board.

Data deduplication for primary storage

The rate of growth of digitally stored information is putting many storage managers on the defensive as they struggle to address the operational risks and costs associated with unchecked data growth. In 2010, a variety of data-reduction technologies for primary storage, including deduplication, will provide some relief in hard-pressed storage shops.

"Business are finding that it's taking a lot less time to reach that second terabyte or petabyte than it did to reach the first," said Tory Skyers, a senior infrastructure engineer at a leading credit issuer. "Primary dedupe will allow any business to increase the density of data on their existing disks by at least twofold."

A fixture in backup environments, dedupe can also be applied to primary storage, thus helping to cut space, power and cooling costs. But primary dedupe won't yield the dramatic results common with backup dedupe.

Performance is another concern. "With backups, as long as the virtual tape loads and the backup works, everything is fine. With primary storage, performance isn't as cut and dry," said TechTarget's Preston. "If a restore [of a backup system] goes slowly, it's not the same as a system where you have thousands of people accessing files that they expect to open immediately."

The key to primary dedupe may come from finding the right balance between benefits and costs. "I'm looking to reduce my cost for storage and it's all about maximizing it with online compression and online dedupe," said Greg Schulz, founder and analyst at Stillwater, Minn.-based StorageIO Group. "Primary dedupe is not good for data that you're frequently working on, but it's good where you can trade time for money savings."

Both inline and post-processing dedupe can be applied to primary storage. For applications that can afford the performance hit, inline dedupe is perfect. If the data in those systems can be held in cache and then deduped before it hits a disk, fewer disks are required on the back end of the system, which ultimately cuts costs. "While inline is currently the slowest performer, I have a feeling with the advent of [solid-state storage] and larger inline caches, it's eventually going to catch up with post-process," Skyers said.

Some major storage vendors, including EMC and NetApp, are now offering primary data-reduction capabilities. NetApp's dedupe is built into its Data ONTAP operating system. It works by storing a cyclic redundancy check (CRC) fingerprint for every block written to storage, comparing the fingerprints, and then eliminating matching blocks and replacing them with pointers.
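
In outline, that block-level process looks like the following Python sketch. It's a generic illustration of fingerprint-based deduplication rather than NetApp's actual code; the CRC-32 fingerprint and the byte-for-byte comparison that guards against fingerprint collisions are assumptions about how such a scheme is typically made safe.

import zlib

fingerprints = {}    # fingerprint -> id of the physical block holding the first copy
stored = []          # physical blocks actually kept on disk
block_map = []       # logical block number -> physical block id (the "pointer")

def write_block(data):
    """Keep one physical copy per unique block; duplicates become pointers."""
    fp = zlib.crc32(data)
    phys = fingerprints.get(fp)
    if phys is not None and stored[phys] == data:    # byte-compare guards against collisions
        block_map.append(phys)                       # duplicate: record a pointer, store nothing
    else:
        stored.append(data)                          # new block: store it once
        fingerprints[fp] = len(stored) - 1
        block_map.append(len(stored) - 1)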

"NetApp is doing real dedupe and they're doing it essentially without a change in performance," Preston said. "When the actual dedupe process is running there's a change in the performance. But once the data has been deduped and you're just running your database or VMware, there's essentially no change in performance."

Ocarina Networks and Storwize Inc. also had early primary data-reduction entries. Ocarina's ECOsystem is an out-of-band appliance with software that's tuned to the data types associated with specific applications. Storwize's STN appliances work with NAS devices to compress and uncompress the data inline. Both of these startups have garnered a lot of attention that has led to partnerships with a variety of storage vendors.

This was first published in December 2009