Hot technologies for 2011

In our annual feature, we list the six hottest storage technologies that are likely to show up in data centers in 2011.

If you don't have at least one of these six hot technologies in your 2011 plans, it might be time to go back to the drawing board.

By Andrew Burton, Rich Castagna, Todd Erickson, Megan Kellett, Sonia Lelii, Dave Raffo and Carol Sliwa

Each time we present our Hot Technologies special coverage, we're quick to point out how our definition of "hot" may differ from others' interpretations. We think of technologies that are mature enough to be real data center alternatives but have yet to make it into the storage mainstream. So whether you consider yourself an early adopter or an inveterate skeptic, our Hot Technologies list has something for you.

Putting efficiency back into storage management has been a mantra at many companies for the last couple of years, and automated storage tiering is poised to be one of the keystone efficiency technologies as it makes quick work of putting data in its proper place. Similarly, multiprotocol storage arrays can be far more cost effective than those one-trick-pony, single-protocol systems that are beginning to seem oh-so old-fashioned.

With annual data growth typically at 50% or higher, most companies should be interested in taming their network-attached storage (NAS) sprawl with the new breed of scale-out NAS systems. And capacity-conscious storage managers will look to data reduction for primary storage for some relief in the coming year.

Virtualized servers have been a boon to the systems side of the house, but a bane for storage managers. With VMware in the lead, hypervisors offer new hooks that will make configuring storage for virtual machines (VMs) and backing them up much easier and more reliable.

A new concept has crept into the storage consciousness: Why buy when you can rent what you need when you need it? That's the basis of cloud storage services and, if our research proves right, they're ready to take their place as viable alternatives to more traditional data storage infrastructure.


1. Automated storage tiering

With all the major storage vendors offering it, and solid-state storage fueling its need, the conditions are ripe for automated storage tiering to take off in 2011.

Until now, moving data between storage tiers such as Fibre Channel (FC) and SATA disks has largely been a manual or semi-automatic process. A Storage magazine poll done earlier this year revealed that 54% of respondents migrated data by manual or only partially automated means, and only 32% used automation tools.

But IT shops that adopt solid-state drives (SSDs) for their I/O-intensive applications might want to roll out the welcome mat for automated tiering. Given their price, ultra-fast SSDs generally make economic sense only for application workloads with the highest performance requirements.

"The increasing usage of SSDs is going to be a big driver for automated tiering," said Arun Taneja, founder and consulting analyst at Taneja Group in Hopkinton, Mass. "The moment you bring in SSD, the power and the performance of that tier is so high relative to Fibre Channel that good usage of that SSD tier is all dependent on auto tiering."

"It was very difficult to be able to afford enough SSD if you were purely going to use it as a static storage device," noted Mark Peters, a senior analyst at Milford, Mass.-based Enterprise Strategy Group (ESG). "Now that people will be able to combine tiering with a smaller amount of SSD, I think the two go hand in glove."

IT shops will find plenty of automated storage tiering options that promise to migrate data to the right place at the right time. Some carry a fee; others are built into storage systems. Product differentiators include the level of granularity at which the data moves between tiers, the degree of automation and the extent to which users can define policies.

"Everyone does it differently," said John Webster, a senior partner at Broomfield, Colo.-based Evaluator Group Inc. "And there's a lot of variability in the way you buy this."

For instance, Compellent Technologies Inc., which in 2005 pioneered block-level automated tiering, can move data in page sizes of 512 KB, 2 MB or 4 MB, depending on the user's needs. Compellent also touts its integration with features such as thin provisioning, boot from storage-area network (SAN), pointer-based snapshots and remote replication.

In 2009, EMC Corp. began shipping its Fully Automated Storage Tiering (FAST) technology for its high-end Symmetrix V-Max, Clariion midrange systems and Celerra NAS boxes. Symmetrix can move data in sub-megabyte chunks, Clariion does it in 1 GB chunks and Celerra performs this task at the individual file level. EMC's future plans include automated tiering capabilities between arrays, rather than simply within arrays, according to Scott Delandy, a senior product manager at EMC.

Hitachi Data Systems (HDS), which began offering volume-based automated tiering in 2006, recently introduced 42 MB page-based automated tiering (known as Hitachi Dynamic Tiering) for its Virtual Storage Platform (VSP). HDS plans to offer page-based automated tiering for external third-party storage early in 2011.
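
However a vendor slices the granularity, the underlying mechanic is similar: count I/Os per page over a sampling window, then migrate the hottest pages up and the coldest pages down. The Python sketch below illustrates that policy loop in miniature; the 2 MB page size, the thresholds and the two-tier SSD/SATA layout are illustrative assumptions, not any vendor's actual design.

```python
from dataclasses import dataclass

PAGE_SIZE = 2 * 1024 * 1024  # 2 MB pages; real products range from sub-MB to 1 GB

@dataclass
class Page:
    page_id: int
    tier: str = "sata"     # "ssd" or "sata" in this two-tier sketch
    access_count: int = 0  # I/Os observed in the current sampling window

def retier(pages, promote_at=100, demote_at=5, ssd_pages=1000):
    """One tiering pass: promote hot pages, demote cold ones, reset counters."""
    ssd_used = sum(1 for p in pages if p.tier == "ssd")
    # Hottest pages first, so limited SSD capacity goes to the busiest data.
    for page in sorted(pages, key=lambda p: p.access_count, reverse=True):
        if page.tier == "sata" and page.access_count >= promote_at and ssd_used < ssd_pages:
            page.tier = "ssd"   # a real array queues a background migration here
            ssd_used += 1
        elif page.tier == "ssd" and page.access_count <= demote_at:
            page.tier = "sata"
            ssd_used -= 1
        page.access_count = 0   # start a fresh sampling window

volume = [Page(i) for i in range(10)]
volume[3].access_count = 500    # simulate a hot database page
retier(volume)
print(volume[3].tier)           # -> ssd
```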

"We're in the early stages," said Richard Villars, vice president, storage systems and executive strategies at IDC in Framingham, Mass. "Thin provisioning two years ago was seen as sort of a risky thing, and now it's almost a de facto requirement especially in virtualized environments. I think you'll see automated tiering is going to be the same thing, in about the same cycle, over the next two years or so."

2. Data reduction in primary storage

Primary storage data reduction is back from our 2010 Hot Technologies list, which means we were a year early in our last predictions. But maybe we weren't so far off: it was a hot topic in 2010 as vendors positioned themselves to deliver the technology. In 2011, we'll see a lot more primary data reduction in shipping products.

Primary data reduction has taken one large step toward becoming mainstream by going from a technology provided mainly by startups to one dominated by major vendors. In 2010, Dell Inc. acquired primary data deduplication vendor Ocarina Networks and IBM bought primary compression vendor Storwize Inc. EMC delivered block-level compression for its Clariion midrange storage systems and Hewlett-Packard (HP) Co. said it would expand its StoreOnce dedupe software from backup to primary data beginning with its X9000 scale-out NAS product.

Permabit Technology Corp. struck OEM deals for its embedded deduplication software with NAS vendors BlueArc Corp. and Xiotech Corp. Permabit CEO Tom Cook said more partnerships are coming.

"We're seeing a clear objective from all storage vendors to have data optimization products in the market in 2011," Cook said. "We're seeing equal demand from block-storage vendors and file-based vendors. Momentum is increasing."

We can expect more product jockeying in early 2011 before primary data reduction becomes common in storage systems. Dell and IBM have yet to embed their new reduction technologies in their storage systems, and HP probably won't deliver StoreOnce on any primary storage before mid-2011. NetApp, which has offered primary dedupe since 2007, is expected to address customer requests for increased volume sizes as well as dedupe across volumes. And Hitachi Data Systems, LSI and smaller storage system vendors have yet to declare their data reduction plans.

There's also still a need for education on the types of reduction, and how they work differently on primary data than they do with backup data. Dedupe and compression yield different results depending on the data they're used on, and can even be used in combination.

"Underneath the covers the technology is quite different," said Brian Garrett, vice president, ESG Lab. "One [compression] reduces the size of the data while the other [deduplication] works over redundant chunks. The effects can be different. Deduplication is great for backup and will give you much better reduction if you're storing the same data over time. Compression does a good job on databases and emails. But some data, like video and audio files, is already compressed, so compression's not going to give you a big bang for the buck."

Greg Schulz, founder and senior analyst at StorageIO Group in Stillwater, Minn., emphasizes that there's no one-size-fits-all approach to data reduction.

"Vendors like EMC with the Clariion and IBM via its Storwize acquisition are demonstrating that effective data footprint reduction includes using different technologies," he said. "They range from archiving to compression to dedupe, along with thin provisioning, RAID and space-saving snapshots to meet various needs across many tiers of storage."

3. VMware APIs for storage

The VMware vStorage APIs for Data Protection, successor to the much maligned VMware Consolidated Backup (VCB), have had the backup world buzzing since their release in 2009. "VCB was kind of a mess," said Lauren Whitehouse, a senior analyst at ESG. "[VMware] built the hypervisor without thinking about the implication of I/O-intensive applications like backup, and VCB was like a tumor on the hypervisor. It was an afterthought."

"VCB was fairly limited," said Venu Aravamudan, senior director of server product marketing at VMware. "The traditional approach of putting agents in each virtual machine just didn't work and VCB was kind of a stop-gap measure to provide some backup functionality."

The vStorage APIs for Data Protection, however, aren't a standalone product like VCB. Instead, the APIs allow third-party backup applications to directly interface with the VMkernel without the need for scripts or agents. The APIs provide a sort of baseline, and then it's in the hands of each backup vendor to develop functionality around that. With these APIs, VMware essentially stepped aside and let the backup software vendors do what they do best -- develop backup products.

"As soon as they started doing VCB and having all of the crazy issues with the vendor partner community, they [VMware] went down the road of trying to build out the APIs," ESG's Whitehouse said. "It was really to make it easier for themselves, easier for their vendors and to drive adoption of their platform. It was a critical problem for them to solve and APIs are the best way to do it."

According to VMware's Aravamudan, while his firm worked closely with third-party software partners in developing the APIs, VMware isn't involved in testing or certifying the third-party products. "There was a ton of joint work leading up to the actual release of vSphere," he said. "However, there's not an actual class of certification for these types of products. Because it's a very clearly defined API set, there are no third-party products that sit in the hypervisor kernel."

Vendor integration, so far, varies. Not surprisingly, backup products designed specifically for virtualized servers were the first to jump on the bandwagon, and others have yet to fully integrate the APIs. "We saw day-one support from CA, Veeam and [Quest], and TSM still hasn't integrated all of the features of the APIs," ESG's Whitehouse said. "I think once users see the increased efficiency that's possible with the APIs, they'll push their vendors to get there."

In addition to the vStorage APIs for Data Protection, vSphere includes vStorage APIs for Array Integration, Multipathing and Site Recovery Manager (SRM). The vStorage APIs for Array Integration improve vSphere efficiency by allowing the storage array to perform tasks such as snapshots and replication. The vStorage APIs for Multipathing allow for array-based multipathing, which improves storage I/O throughput. The vStorage APIs for Site Recovery Manager integrate SRM with array-based replication for SAN and NAS, which allows SRM to access and control the array-based replication it relies on.
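
The payoff of offloading work to the array is easy to picture: without it, every copied block crosses the SAN to the host and back; with it, the host issues a single command and the array moves the data internally. The toy Python model below (the class, counter and method names are invented for illustration, not VMware's or any array's API) shows the difference in host-side I/O for a full-copy style operation.

```python
class Array:
    """Toy array that counts blocks crossing the SAN between host and array."""
    def __init__(self):
        self.luns = {"src": {i: b"x" * 512 for i in range(1000)}}
        self.host_io = 0

    def read(self, lun, i):
        self.host_io += 1
        return self.luns[lun][i]

    def write(self, lun, i, block):
        self.host_io += 1
        self.luns.setdefault(lun, {})[i] = block

    def full_copy(self, src, dst, n):
        # Offloaded copy: the array moves data internally, with no host I/O.
        self.luns[dst] = {i: self.luns[src][i] for i in range(n)}

array = Array()
for i in range(1000):                     # the traditional, host-driven copy
    array.write("dst1", i, array.read("src", i))
print("host I/Os without offload:", array.host_io)   # 2000

array.host_io = 0
array.full_copy("src", "dst2", 1000)      # one offloaded command instead
print("host I/Os with offload:  ", array.host_io)    # 0
```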

Another feature of vSphere worth mentioning, though it's not specifically part of the APIs for Data Protection, is Changed Block Tracking. This feature tracks the changed blocks of a virtual machine's virtual disk, allowing backup applications to immediately identify changes since the last backup and to copy only those changes, thereby reducing backup time and network traffic. "It's part of the bigger picture of making backup more efficient," ESG's Whitehouse said.
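
In miniature, the idea looks like the following Python sketch: the disk records which blocks were written since the last backup, and an incremental backup copies only those. The 64 KB tracking granularity and the data structures are assumptions for illustration, not VMware's implementation.

```python
BLOCK = 64 * 1024  # track changes at 64 KB granularity (an assumed value)

class TrackedDisk:
    def __init__(self, size: int):
        self.data = bytearray(size)
        self.changed = set()   # block numbers dirtied since the last backup

    def write(self, offset: int, payload: bytes):
        self.data[offset:offset + len(payload)] = payload
        first = offset // BLOCK
        last = (offset + len(payload) - 1) // BLOCK
        self.changed.update(range(first, last + 1))

    def incremental_backup(self) -> dict:
        """Return only the dirty blocks, then reset the tracking map."""
        delta = {b: bytes(self.data[b * BLOCK:(b + 1) * BLOCK])
                 for b in sorted(self.changed)}
        self.changed.clear()
        return delta

disk = TrackedDisk(16 * 1024 * 1024)            # a 16 MB virtual disk
disk.write(5 * BLOCK, b"guest OS writes a few bytes")
delta = disk.incremental_backup()
print(len(delta), "of", len(disk.data) // BLOCK, "blocks copied")   # 1 of 256
```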

For users, efficiency and reliability are obviously critical. "If you can't have data protection in place without being disruptive or causing issues in the environment, no one is going to roll out production applications in the hypervisor," Whitehouse said. By allowing third-party vendors to interface with the hypervisor directly, the vStorage APIs and other vSphere features go a long way in improving the overall storage and data protection picture for VMware users.

4. Scale-out NAS

Scale-out NAS has been a proven technology waiting for the right problem to solve. That problem has emerged amid a perfect storm of rampant unstructured data growth and the limitations of traditional NAS systems. The technology's ability to scale capacity and performance with relative ease has attracted organizations coping with massive unstructured data stores driven by the increased use of rich-media digital information and by regulatory compliance requirements.

According to Jeff Boles, a senior analyst and director of validation services at Taneja Group, scale-out NAS, also often called clustered NAS, can solve a lot of problems. "Scale-out NAS has been out there for a while, and certainly offers the ability to serve a wide range of needs from a single unified repository," Boles said. "You can do the primary NAS just as much as you can do the archive stuff."

While scale-out NAS deployments are certainly increasing, they've yet to span multiple vertical markets. "There are specific use cases [that] drive people to scale-out NAS today -- that's still very much the pattern," Boles said. "For [these deployments], we're seeing much wider-spread adoption this year than we ever have in the past."

Those use cases include media and entertainment, telecommunications, cloud services providers, life sciences, and energy exploration and simulation -- environments with very large data sets and the need to drive down per-gigabyte storage costs. "Scale-out NAS can do a lot to unify a storage infrastructure," Boles said. "[It can] create one big storage infrastructure you can manage through a single view or set of tools."
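
A rough Python sketch shows the placement idea at the heart of many scale-out designs: files hash onto a ring of nodes, so adding a node grows capacity and aggregate bandwidth while relocating only a fraction of existing data. The node names and the use of consistent hashing here are illustrative; real clustered file systems also stripe, replicate and rebalance.

```python
import bisect
import hashlib

class Cluster:
    """Toy consistent-hash placement of file paths onto NAS nodes."""
    def __init__(self, nodes):
        self.ring = sorted((self._h(n), n) for n in nodes)

    @staticmethod
    def _h(key: str) -> int:
        return int.from_bytes(hashlib.md5(key.encode()).digest()[:8], "big")

    def node_for(self, path: str) -> str:
        keys = [k for k, _ in self.ring]
        i = bisect.bisect(keys, self._h(path)) % len(self.ring)
        return self.ring[i][1]

    def add_node(self, node: str):
        # Only paths that hash near the new node move; the rest stay put.
        bisect.insort(self.ring, (self._h(node), node))

cluster = Cluster(["nas-01", "nas-02", "nas-03"])
print(cluster.node_for("/video/raw/take-042.dpx"))
cluster.add_node("nas-04")   # capacity and throughput grow, no forklift upgrade
print(cluster.node_for("/video/raw/take-042.dpx"))  # may now land on nas-04
```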

According to StorageIO Group's Schulz, the emergence of scale-out NAS has shifted the industry's perception of near-line storage. Instead of automatically archiving data after 30 days, scale-out NAS allows companies a low-cost alternative. "It's the new near-line," Schulz said. "The new model is to move [data] onto lower cost bulk storage where it's accessible but at a slower speed and lower cost because there's value in having it out there. It might be highly compressed, it might be highly optimized, it might be deduplicated, but it's not tying up prime storage real estate."

Some of the leading scale-out NAS products include HP's StorageWorks X9000 storage system, which includes technology gained from the company's July 2009 acquisition of Ibrix Inc.; IBM's Scale Out Network Attached Storage (SONAS), which uses the company's General Parallel File System (GPFS) for high-performance computing; Isilon Systems Inc.'s S-Series and X-Series scale-out storage platforms, which are favored by the media and entertainment industries; and NetApp's Data Ontap 8 storage operating system, which incorporates the scalable file system technology the company acquired when it bought Spinnaker Networks Inc. in 2003. Both Taneja Group's Boles and StorageIO Group's Schulz said it will be interesting to see what Dell does with its EqualLogic product line and the scale-out technology it gained by buying Exanet Inc. in February 2010.

Not hot yet

Fibre Channel over Ethernet (FCoE)
Last year, FCoE was also on our "Not Yet" list, but we're not just picking on this technology -- we truly think it's going to heat up one of these days and become the hottest storage network technology around. We just don't think it's going to happen soon. Most of the parts that make up FCoE are here, but storage array support still lags. Fibre Channel networks are the Rodney Dangerfield of storage environments; they don't get a lot of respect and nobody ever relishes a network upgrade.

Virtualized Networks (or Virtualized I/O)
This is one of the coolest new technologies around. It does for HBAs and NICs what VMware did for servers by turning them into shared, and virtual, devices. By adding a layer between your servers and their network hookups, you can share those interfaces and allocate them dynamically or based on policies. With servers and storage virtualized, why ignore the network? We think all I/O virtualization may need is a little boost from one of the big vendors, but we don't see that happening in 2011.

Self-Healing Systems
There's something a little spooky about storage arrays that know more about themselves than you do, but if they can use that knowledge to avoid time-draining disk failures, we're all for it. Although a fair number of array vendors offer systems with some self-healing capabilities, it currently sounds a little more like a science project than the right stuff for your company's data. But it shouldn't be long before the list of self-healing systems grows. It's a win-win deal: You get some peace of mind and the vendor gets some service and support relief.

Unified Computing (Integrated Storage Stacks)
It's IT in a box. Everything you need to fill your data center with servers and storage, and the network to tie it all together, all on a single SKU. Vendors like the idea so much that they're following EMC's lead and partnering up so they can offer soup-to-nuts packages, too. Some call it convenience, but for others the word is "proprietary." The "stack attack" has been tried before with less than awesome results; we'll see how it fares this time.

5. Multiprotocol storage arrays

Multiprotocol storage arrays have been around for quite some time, but this class of storage system has whipped up renewed interest among users who are keen on taking advantage of the technology because of its flexibility and cost effectiveness. The multiprotocol approach lets users consolidate storage systems as they seek new efficiencies in the face of spiraling storage capacity requirements.

Research from the Enterprise Strategy Group has revealed that multiprotocol storage adoption is growing. Of the more than 300 respondents to an ESG survey, almost 50% are planning a deployment, while nearly 25% have already deployed multiprotocol storage. Similar research done by ESG in 2008 showed that only 18% of those surveyed had gone the multiprotocol route.

Vendors such as EMC and NetApp prominently feature multiprotocol storage in their product lines. And according to Terri McClure, a senior analyst at ESG, multiprotocol storage is more of a "checkbox" item than an exotic option these days. "It's becoming more and more of an expectation, mostly driven by NetApp's push for unified storage," McClure said. "And when you look at where NetApp's headed and where EMC talks about going with their Clariion platform, I think users are planning their storage requirements holistically rather than [saying] 'I need x for block and x for file, and I'll pay a significant penalty if I guess wrong.'"

Although multiprotocol storage has the potential to simplify a data storage environment of any size, smaller businesses may be more attracted to this technology than larger environments that often have well-established (and distinct) block and file storage infrastructures. "Instead of going out to buy a block device or a NAS device, [SMBs] can buy one storage system that gives them that capability," StorageIO Group's Schulz said. "And with these systems coming down in price and increasing in functionality that, in turn, is aligning with the smaller environments and growing with their needs."

Elvis Cernjul, vice president of IT at fashion retailer Spiegel, currently uses unified storage to consolidate eight different devices. Besides no longer having to manage those separate systems, Cernjul said he now has "more storage space and triple redundancy" on his data.

Kevin Fitzpatrick, IT director at San Diego-based ROEL Construction Co., doesn't use multiprotocol storage in his environment, but he's interested in it. His cloud storage provider decided to switch to a unified storage system from NetApp, and since that change he has "seen some great improvements in storage functionality." Impressed by those performance results, Fitzpatrick said he will consider a multiprotocol storage system when he needs to upgrade his storage capacity.

Multiprotocol storage is definitely a hot technology, but you'll still have to gauge just how much sizzle your company can expect. "If it makes sense . . . or allows you to leverage your dollars more effectively, absolutely take a look at it," StorageIO Group's Schulz said. "But first and foremost, make sure it can do something for your business. In other words, let the technology work for you instead of you having to work for the technology."

6. Cloud storage services

If you need any evidence that cloud storage is a hot technology, just look at the number of companies rushing to market with some type of offering or strategy. But even as technology giants like Amazon, Google, Iron Mountain, Microsoft and numerous hosting service providers are in the process of a massive build-out in this area, to date cloud storage has been more of an emerging technology as users test the waters by moving some of their backup applications into the cloud.

"Customers worry about security," said Ashar Baig, senior director of product marketing at Asigra Inc. "Adoption of the cloud has been slow. It could be much faster."

One reason behind the slow pace of adoption is that there's still much discussion about what types of data would be suitable for cloud storage. For now, it's mainly unstructured data that's being moved to the cloud, and the majority of users have chosen backup for their initial foray into cloud storage because there's less perceived risk than using the cloud for primary storage.

But as users become more willing to test the cloud, startups are offering cloud gateways -- devices that act as cloud access points for non-backup data. Some of the vendors offering these primary storage cloud gateways include Cirtas Systems Inc., Nasuni Corp., Panzura, StorSimple Inc. and TwinStrata Inc.; all have launched either hardware or virtual appliances in the past year.
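
Conceptually, a gateway is a local cache in front of a remote object store: writes pass through to the cloud, and reads are served at LAN speed when the data is cached. The Python sketch below models that behavior with an in-memory stand-in for the object store; the class and method names are invented for illustration and don't reflect any vendor's API.

```python
class ObjectStore:
    """In-memory stand-in for a cloud object store (PUT/GET by key)."""
    def __init__(self):
        self._blobs = {}

    def put(self, key, data):
        self._blobs[key] = data   # a real gateway does HTTPS PUTs with auth, retries

    def get(self, key):
        return self._blobs[key]

class Gateway:
    def __init__(self, store, cache_limit=100):
        self.store, self.cache, self.limit = store, {}, cache_limit

    def write(self, path, data):
        self.store.put(path, data)   # write-through keeps the cloud copy current
        self._cache(path, data)

    def read(self, path):
        if path not in self.cache:   # miss: fetch from the cloud, then cache it
            self._cache(path, self.store.get(path))
        return self.cache[path]

    def _cache(self, path, data):
        if len(self.cache) >= self.limit:      # crude FIFO eviction for the sketch
            self.cache.pop(next(iter(self.cache)))
        self.cache[path] = data

gw = Gateway(ObjectStore())
gw.write("/finance/q4.xls", b"spreadsheet bytes")
print(gw.read("/finance/q4.xls"))   # served from the local cache
```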

Furthermore, Storage magazine research shows growing interest in the technology, with more users planning to deploy non-backup cloud services. Although the numbers are still modest, the gains are significant: Almost 10% of respondents to a fall 2010 survey use the cloud for data center primary storage vs. just 4% six months earlier. And another 10% said they plan to start using cloud services for near-line data.

In addition, industry experts say compliance, reference and archived data are other obvious choices for cloud storage.

Iron Mountain has built a good part of its cloud strategy around data protection, governance and archiving. "Cloud makes it cheaper for keeping information that needs to be preserved for a long time," said T.M. Ravi, chief marketing officer at Iron Mountain Digital. "The next step in the evolution of cloud storage is infrastructure as a service."

Staci Cross, CIO for the City of Bradenton, Fla., found cloud storage services to be a good fit for her organization's needs. With a small IT staff and limited expertise, the City of Bradenton looked to the cloud to handle several storage functions. It has been using cloud provider Yotta280 to handle its backups for approximately a year, and Elephant Outlook to manage email for more than two years. Plus, the city has been migrating public data from its document management system to cloud service provider SpringCM.

"We've had no issue with performance and security," Cross said. "We have had continuing constraints in budgets and staff, and they have economies of scale that I don't have."

This was first published in December 2010
