Hot storage technologies for 2012

These six cutting-edge storage technologies are ready for prime time, and can help transform your data center.

By Andrew Burton, Rich Castagna, Todd Erickson, John Hilliard, Rachel Kossman, Sonia Lelii, Ellen O’Brien, Dave Raffo, Francesca Sales, Carol Sliwa, Sue Troy

What makes a storage technology “hot”? In our book, anything new that makes storing your company’s data a faster, better or more efficient process is worthy of the “hot” label. But it must also represent a new approach to dealing with nagging issues. Essentially, it has to be the answer to the question, “If they can send a man to the moon, why can’t they . . .?”

We found six technologies that can store massive amounts of data effortlessly, perform at lightning speeds, turn old assets like tape and server disks into something new, and put the cloud within reach of enterprise data centers.

As usual, our hot techs mix equal parts of cutting-edge cool and practicality. Object-based storage throws out tradition and replaces file systems with a simplified flat-file approach to managing data. Linear Tape File System goes the other way and adds a file system to tapes to make them look like disks. Both are timely arrivals on a storage scene that’s beginning to buckle under the weight of too much data.

With multi-level cell (MLC) flash storage the medium is the message as rapidly developing technologies have turned this relatively inexpensive type of solid-state storage into an enterprise mainstay. But even as flash hogs the headlines, server-based storage is making a comeback with innovative ways to share directly attached assets.

Last year we predicted that cloud storage services would emerge, but this year we’re putting a finer point on that prediction and singling out two technologies that will make it easy to integrate on-premises systems with cloud-based storage resources that essentially treat the cloud as just another tier.

1. Object-based storage

Network-attached storage (NAS) isn’t the only way to handle file storage. It’s not even always the best way.

Object-based storage systems are gaining a lot of attention and starting to make inroads as an alternative to scale-out NAS. Object storage has unlimited scalability, is less reliant on processing and high-speed networks, and is a fundamental building block of public and private cloud storage.

But it’s not perfect. Object storage generally isn’t a high-performance technology, and it lacks the standardization of file systems, making it tougher to move from one vendor’s object storage system to another. It’s also poorly suited to data that changes frequently, and it often consumes more capacity than traditional data storage. But the technology makes it possible to archive huge data stores at lower cost, with less power and within a smaller footprint than high-performance NAS.

Object storage uses unique identifiers to access data instead of physical addresses. An application presents an object’s name and unique ID, and the storage system uses the object’s metadata and ID to locate and retrieve the data. There’s no need for a single global namespace, cache coherency or high-speed networks.
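
To make that access model concrete, here’s a minimal sketch in Python of a flat, ID-keyed object store. The class and field names are invented for illustration and don’t reflect any vendor’s actual API:

```python
import hashlib
import uuid

class ObjectStore:
    """Toy object store: a flat namespace keyed by object ID.
    No directory tree, no block addresses -- just IDs and metadata."""

    def __init__(self):
        self._objects = {}  # object ID -> (metadata, payload)

    def put(self, payload: bytes, **metadata) -> str:
        object_id = str(uuid.uuid4())  # unique, location-independent ID
        metadata["checksum"] = hashlib.md5(payload).hexdigest()
        self._objects[object_id] = (metadata, payload)
        return object_id  # the caller keeps the ID, not a path

    def get(self, object_id: str) -> bytes:
        metadata, payload = self._objects[object_id]
        return payload

store = ObjectStore()
oid = store.put(b"...image bytes...", patient="12345", modality="MRI")
data = store.get(oid)  # retrieval needs only the ID, not a file-system location
```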

Object storage products are sold by a mix of established vendors and innovative startups. Products from established vendors include EMC Atmos, DataDirect Networks Web Object Scaler (WOS), Dell DX Object Storage, NetApp StorageGrid and Rackspace OpenStack. Products from startups include Amplidata AmpliStor, Basho Riak, Caringo CAStor, Cleversafe Slicestor, Mezeo Cloud Storage and Scality Ring.

“Objects let you have a shared-nothing architecture where every node and controller mechanism doesn’t have to know where every piece of data resides,” said Andrew Reichman, principal analyst at Cambridge, Mass.-based Forrester Research. “You can scale bigger for cheaper. As we talk more and more about massive multi-petabyte repositories of data, that scalability gets relevant.”

Object storage’s characteristics -- particularly its scalability, location independence and accessibility via HTTP -- make it well suited for storage clouds. The metadata allows administrators to apply rules that can deliver built-in multi-tenancy, encryption and chargeback. Amazon Simple Storage Service (S3), Microsoft Azure and Nirvanix Cloud Storage are storage clouds based on object storage.

Other uses for object storage include archiving (particularly medical images) and file storage that scales to multiple petabytes.

The École Polytechnique Fédérale de Lausanne (EPFL) in Switzerland turned to Amplidata to help digitize more than 5,000 hours of video archives from the Montreux Jazz Festival dating to 1967. When the project began two years ago, Alexandre Delidais, EPFL’s director of operations, said he couldn’t find a disk storage system that met his needs and price point. EPFL bought LTO tape and continued to search.

Delidais said he was looking for storage that could scale to petabytes with low energy and power consumption, had faster restore times than tape and fit his budget.

“Nothing was really available with those requirements,” he said. “We couldn’t find a disk-based technology; they were all too costly or used too much energy.”

Delidais discovered Amplidata AmpliStor in late 2010. EPFL bought 1 PB of storage to start, and will split it between two locations and replicate between them. Delidais said he’s approximately 20% through his digitization project.

Of course, there still aren’t that many multi-petabyte storage implementations, so object storage isn’t yet mainstream.

“Not a lot of buyers really need hundred-petabyte repositories now,” Forrester Research’s Reichman said. “But it seems to me the long-term outlook will be object storage. It’s a better way to do file storage.”

@pb

2. MLC flash storage

The signs are everywhere that MLC NAND flash will continue its upward trajectory next year and officially overtake higher-cost single-level cell (SLC) flash in enterprise systems, ushering in a new era of more affordable solid-state storage.

Manufacturers have ramped up production of MLC-based solid-state drives (SSDs), and even major storage vendors that were once hesitant to use them are joining the ranks of early adopters such as IBM and Hewlett-Packard (HP) Co. Framingham, Mass.-based IDC predicts that MLC-based drives will command 52% of enterprise solid-state revenue next year and climb to 60% in 2013.

Jeff Janukowicz, research director of solid-state storage at IDC, said MLC-based SSDs have reached a point where they’re better able to handle the read/write mixes that traditional IT requires, thanks to advancements in their architectures, algorithms and controllers.

Online marketplace eBay Inc. turned heads this year with its 100 TB deployment of Nimbus Data Systems Inc.’s S-class solid-state storage, which uses a more industrial-strength flash variant known as enterprise MLC (eMLC).

One of the main distinctions between the different types of flash is endurance. The consensus is that SLC wears out after approximately 100,000 erase/write cycles, eMLC after 30,000 and MLC after 10,000 or fewer. But the differentials are becoming less important as MLC drive makers and third-party controller vendors improve their products.
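
A back-of-the-envelope calculation shows why the endurance gap matters less than the raw numbers suggest. The sketch below uses the consensus cycle counts cited above with an illustrative workload; the drive size, daily write volume and write amplification factor are assumptions, not vendor specs:

```python
# Rough drive lifetime from the endurance figures above, assuming perfect
# wear leveling. Workload figures are illustrative, not from a spec sheet.
capacity_gb = 200
daily_writes_gb = 50
write_amplification = 2.0  # extra internal writes per host write

for cell_type, cycles in [("SLC", 100_000), ("eMLC", 30_000), ("MLC", 10_000)]:
    total_writes_gb = capacity_gb * cycles          # data the flash can absorb
    effective_daily_gb = daily_writes_gb * write_amplification
    years = total_writes_gb / effective_daily_gb / 365
    print(f"{cell_type}: roughly {years:,.0f} years at this workload")
```

Even MLC’s 10,000 cycles translate to decades at this (admittedly light) workload, which is why controller improvements, rather than raw cycle counts, have become the deciding factor.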

“These controller manufacturers have found that they can actually watch the flash’s behavior, and as long as they keep track of every single block and how it performs, they can use certain blocks well beyond the 10,000 limit and up into the hundreds of thousands of uses,” said Jim Handy, founder and chief analyst at Object Analysis in Los Gatos, Calif.
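
The approach Handy describes amounts to per-block bookkeeping. A toy sketch of the idea -- not any controller vendor’s actual firmware -- might look like this:

```python
class Block:
    """One erase block, with its observed wear history."""
    def __init__(self):
        self.erase_count = 0
        self.retired = False

def pick_block_to_write(blocks):
    """Wear leveling: always write to the least-worn live block."""
    live = [b for b in blocks if not b.retired]
    return min(live, key=lambda b: b.erase_count)

def erase(block, verify_ok: bool):
    """Retire a block only when it actually misbehaves, not at a fixed
    cycle count -- so healthy blocks can run far past the 10,000 'limit'."""
    block.erase_count += 1
    if not verify_ok:
        block.retired = True
```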

Even the erase/write differential between eMLC and MLC is starting to become less of an advantage, and eMLC doesn’t appear to be gaining the level of popularity the industry expected, according to Handy.

“eMLC costs more than MLC and it’s slower,” he said, “and since everybody is buying SSDs for speed, a slower solution is a really tough sell.”

Dan Mulkiewicz, IT director at Carlsbad, Calif.-based High Moon Studios, a division of Activision Blizzard Inc., which produces popular video games such as “World of Warcraft” and “Guitar Hero,” said even the least expensive MLC SSDs have performed impressively.

High Moon started with 10 MLC-based SSDs nearly three years ago in workstations, and soon added another 60 or 70 after application build times dropped from 30 or 40 minutes to four minutes.

“I had to come up with a cheap solution, and we took a chance, and it paid off,” Mulkiewicz said, noting that SLC flash never became a consideration.

He said the failure rate on the workstations’ MLC drives was less than 5%, and manufacturer warranties have improved from a year to three years, comparable to that of hard drives.

So Mulkiewicz had no qualms about using an MLC-based cache from GridIron Systems Inc. to address an I/O bottleneck in his VMware server farm, and the dramatic performance boost impressed him. In worst-case scenarios, programmers and artists once waited 70 minutes for code to recompile after they submitted a change. With the MLC-based cache, the wait fell to less than 10 minutes.

“We’re not only comfortable with [MLC] now,” Mulkiewicz said, “we’re dependent on it.”

@pb

3. LTFS

The Linear Tape File System (LTFS) is expected to help usher in a tape renaissance. It’s the first technology that lets users search for information on tape via a file-tree directory, making the process similar to searching disk storage. Users can drag and drop files to and from a mounted LTFS-formatted tape, opening up new possibilities for incorporating tape into workflows and making long-term archiving easier.

LTO-5, the first tape format to support LTFS, has media partitioning so a drive can write two variable-length partitions on each tape: one holds a self-contained hierarchical file system index and the other holds the content. LTFS presents a file-system-style interface for managing the files on tape. All a user has to do is load the tape into a drive, and the data can be browsed with a file manager or any application attached to the drive.
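
In practice, working with an LTFS tape can be as simple as mounting it and copying files. The sketch below assumes IBM’s single-drive LTFS edition, where an ltfs command performs the mount; the device name and paths are placeholders, so check your drive vendor’s documentation for the exact invocation:

```python
import os
import shutil
import subprocess

# Mount the LTO-5 tape through LTFS so it appears as an ordinary file
# system. Command, device and mount point are assumptions for illustration.
subprocess.run(["ltfs", "-o", "devname=/dev/st0", "/mnt/ltfs"], check=True)

# From here on, standard file operations work -- no tape-specific tooling.
shutil.copy("/video/project_reel1.mov", "/mnt/ltfs/archive/")
print(os.listdir("/mnt/ltfs/archive"))  # browse the tape like a disk
```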

HP and IBM are the main developers of LTFS software, and the LTFS open standard is backed by the LTO Consortium. HP supports LTFS in HP StoreOpen Automation, while IBM released library support last May with its IBM System Storage LTFS Library Edition. Other companies have delivered LTFS support as well: Crossroads with its StrongBox device, and Cache-A with its flagship Pro-Cache5, Power-Cache and Prime-Cache5 appliances. In addition, Atempo’s Digital Archive (ADA) file archiving product is now fully compatible with the LTFS platform.

LTFS is still in the early adoption phase, with media and entertainment as its sweet spot. Robert Smith, founder of 2PopDigital.com, which provides editorial system support for post-production in the media and entertainment industry, said LTFS will hit the mainstream when more archiving management software supports the open standard.

[Sidebar: How our predictions for 2011 fared]

“It will be the catalyst to using LTFS,” he said. “Then you can see what’s on the tape instead of depending on a database that’s telling you by the tape number or barcode. If you have LTFS, you can search for a file the same way you do in a file system. LTFS has lots of benefits in that regard.”

Randy Kerns, storage strategist at Boulder, Colo.-based Evaluator Group, said the media and entertainment industry has the most immediate need for LTFS because of its requirement to transport data efficiently. Archiving management software can be layered on top of LTFS so users can set retention periods and data access controls, he said, and the technology will go mainstream once more of that software supports it. “It’s really a managing archive rather than a collection of backups,” he said. “Companies have another alternative with tape that’s more functional than just doing backups.”

@pb

4. Cloud gateway appliances

Cloud gateway appliances are garnering attention as a great way to introduce cloud storage into an organization. These devices are easy to set up and relatively inexpensive, and users can start small and scale up easily.

The premise of a cloud gateway is simple: the appliance installs in a data center and acts as a bridge between on-premises storage systems and a cloud storage service. The bridge is required because public cloud storage providers rely on Internet protocols such as REST APIs over HTTP rather than conventional storage-area network (SAN) or NAS protocols. By connecting on-premises storage to the cloud via a gateway, the cloud storage service can seamlessly integrate with existing systems.
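
A minimal sketch of that translation layer might look like the following; the endpoint, authentication scheme and method names are placeholders rather than any gateway vendor’s real API, and production gateways add local caching, compression and encryption on top:

```python
import requests

class CloudGateway:
    """Toy bridge: presents a file-style read/write interface locally and
    translates it into REST calls against an object storage service.
    Endpoint and auth header are placeholders, not a real provider API."""

    def __init__(self, endpoint: str, api_key: str):
        self.endpoint = endpoint
        self.headers = {"Authorization": f"Bearer {api_key}"}

    def write_file(self, name: str, data: bytes):
        # What looks like a NAS write to the client becomes an HTTP PUT
        resp = requests.put(f"{self.endpoint}/{name}", data=data,
                            headers=self.headers)
        resp.raise_for_status()

    def read_file(self, name: str) -> bytes:
        resp = requests.get(f"{self.endpoint}/{name}", headers=self.headers)
        resp.raise_for_status()
        return resp.content

gw = CloudGateway("https://objects.example.com/bucket", api_key="...")
gw.write_file("finance/q4.bak", b"...backup bytes...")
```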

Although the adoption rate of cloud gateways is still relatively modest, a number of products have been released in the last two years and the attention they’ve received, along with their potential to grab market share, is what has landed them on our Hot Technologies list.

“There’s a huge interest in moving data to the cloud, and I don’t think there’s any risk tolerance for user companies to deal with cloud provider APIs,” Forrester Research’s Reichman said. Within the cloud storage market, there’s a need to improve local performance, mitigate the added latency and strengthen security.

“Those are some of the key attributes of cloud gateways that I think will make them big enablers for user companies to use cloud storage,” he said.

Cloud gateways can be integrated or combined with other products, and some vendors have already set up partnerships with backup and storage virtualization vendors; for example, TwinStrata is partnering with Veeam and DataCore, while StorSimple has joined forces with Microsoft. The challenge is that many users are unaware of cloud gateways and their benefits.

“I think cloud gateways will find a home eventually as users get more comfortable with cloud storage technology. I see them getting more embedded and leveraged as cloud storage appliances,” said Terri McClure, a senior analyst at Milford, Mass.-based Enterprise Strategy Group (ESG).

Some industry experts expect data storage vendors will build this technology into their arrays if the concept catches on. That would make it easy for onsite storage systems to treat a cloud storage service as simply another storage tier without having to take an intermediate step.

“I think the jury is still out as to whether this should be a fully separate product and a fully separate vendor, or if what we’re really talking about is a feature of other products,” Forrester Research’s Reichman said.

Another factor that has slowed acceptance of gateways is that the vendors offering these products are mostly startups.

“It’s going to take some market education and probably some effective partnering on the part of those smaller vendors to really get their product to be considered by buyers,” Reichman noted.

Taylor Higley, director of information services at the American Federation of Government Employees (AFGE), recently deployed TwinStrata’s CloudArray cloud gateway. When asked why he chose a TwinStrata gateway, his answer was simple.

“Really, it was the ability to leverage cheap Amazon S3 storage but still have the security and reliability of Veeam’s Backup & Replication system,” he said. “TwinStrata was the missing piece to make it all work.”

@pb

5. Virtual storage appliances

With server virtualization firmly entrenched in data centers, server-based shared storage is making a significant mark in the form of virtual storage appliances (VSAs). These software-based systems enable the advanced capabilities of server virtualization without expensive, dedicated storage hardware. They run inside a virtual machine (VM) and create shared storage from the disks attached to the physical server the VM runs on. In 2012, we expect to see more companies -- especially small- and medium-sized businesses (SMBs) -- turning to server-based storage as an inexpensive way to support server virtualization.
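
As a rough illustration of the concept -- not any vendor’s actual implementation -- a VSA pools each host’s local disk into one shared volume and mirrors writes across hosts so the pool survives a host failure:

```python
class Host:
    """A physical server contributing its direct-attached disk to the pool."""
    def __init__(self, name: str, capacity_gb: int):
        self.name = name
        self.capacity_gb = capacity_gb
        self.blocks = {}  # logical block ID -> data

class VirtualStorageAppliance:
    """Toy VSA: pools hosts' local disks into one shared volume and
    mirrors every write to two hosts for availability."""

    def __init__(self, hosts):
        self.hosts = hosts

    def usable_capacity_gb(self) -> int:
        return sum(h.capacity_gb for h in self.hosts) // 2  # mirroring halves raw

    def write(self, block_id: str, data: bytes):
        primary, mirror = self.hosts[0], self.hosts[1]
        primary.blocks[block_id] = data
        mirror.blocks[block_id] = data  # survives the loss of either host

pool = VirtualStorageAppliance([Host("esx1", 10_000), Host("esx2", 10_000)])
print(pool.usable_capacity_gb())  # 10,000 GB usable from 20,000 GB raw
```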

Virtual storage appliances -- such as HP’s StorageWorks P4000 VSA Software and DataCore’s SANsymphony -- have been on the market for a number of years, but VMware Inc.’s rollout of its vSphere Storage Appliance is expected to inject more interest into the technology. vSphere Storage Appliance, which is targeted specifically at SMBs, runs across multiple hypervisors, aggregating direct-attached storage (DAS) into pooled storage.

“We’re going to see incredible evolution happen inside the virtual infrastructure by . . . vendors like VMware,” said Jeff Boles, senior analyst and director of validation services at Hopkinton, Mass.-based Taneja Group. “And the virtual storage appliance technology coming out for VMware . . . [is] just one more very innovative technology from one of the hypervisor vendors, and storage vendors better be reading between the lines.”

Others see VMware as leveraging its skill set in virtualization to broaden its technology’s appeal to more companies. “I think that VMware sees storage as a stumbling block keeping people from expanding their server virtualization capabilities, and to some extent [VMware] has decided to take it in their own hands to try and fix it,” said George Crump, founder and lead analyst at Storage Switzerland.

Virtual storage appliances make more sense for SMBs because they’re based on iSCSI rather than Fibre Channel (FC).

“You don’t find a whole lot of traditional businesses, traditional enterprises deploying VSAs,” Taneja Group’s Boles said. “If they’re traditional enterprises, [and] they’re doing some private cloud or public cloud stuff, you might find them deploying [virtual storage appliances] there.”

One vendor with a big footprint in the virtual storage appliance market is DataCore, with its SANsymphony-V software, which virtualizes storage across pools of heterogeneous systems, turning commodity servers into SANs.

Barren County Schools in Glasgow, Ky., turned to SANsymphony-V after a server consolidation project. The IT department had consolidated 30 physical servers to four running 30 VMware VMs. It then replaced EMC SANs with two Dell servers running DataCore software.

“[The EMC system] didn’t have the processes, the caching to do some of these more advanced features, so [the key] in looking at SANsymphony was the fact that they didn’t care what hardware [was used],” said Cary Goode, district technology service specialist at the Barren County Technology Office.

Barren’s IT department makes use of DataCore’s high-availability mirroring capabilities, resulting in 10 TB of usable storage in high-availability mode for redundancy. “We can use an entire node because the other node has the exact same data in a live environment,” Goode explained. “To get that out of a hardware system -- I can’t even imagine what the price tag on that would be.”

@pb

6. Integrated cloud backup

Cloud backup has been around as a consumer service for years, and it makes a good deal of sense -- keeping an offsite copy of data is pretty much disaster recovery 101. However, in the enterprise space a number of roadblocks have hindered widespread cloud backup adoption. One of the biggest issues limiting its acceptance was that it required an entirely new backup approach and a cloud-specific backup application. But that’s changing, and a number of major backup software vendors now allow users to back up directly to the cloud.

CommVault Systems Inc.’s Simpana lets you back up to any cloud vendor that supports REST APIs, such as Amazon, Microsoft Azure, Nirvanix or Rackspace. Symantec Corp.’s Backup Exec can back up to Symantec’s cloud, while the firm’s NetBackup has an option to back up to Nirvanix. EMC NetWorker can ship backup data to EMC Atmos-based cloud storage services.

“They’re trying to promote this whole style of data protection to customers who don’t have a second site where they might replicate data to,” said Lauren Whitehouse, a senior analyst at ESG.

Cloud integration with traditional backup products allows users to create onsite disk backups for fast restores, but rather than sending copies offsite on tape, they can now easily choose the cloud as an alternative for disaster recovery. In the event a large amount of data must be restored from the cloud, many services can ship data on an appliance or disk to a customer site. Other users might see it as an archiving tier, sending older data to the cloud for long-term retention.
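
The workflow is straightforward to sketch: write the backup to local disk for fast restores, then push a second copy to the cloud tier for disaster recovery. The cloud_client below is a stand-in for whatever interface a backup application exposes, not a real product API:

```python
import shutil
from pathlib import Path

def backup(source: Path, local_target: Path, cloud_client):
    """Two-tier protection: a local disk copy for fast restores plus a
    cloud copy for disaster recovery. cloud_client is a hypothetical
    object with write_file/read_file methods, as sketched earlier."""
    local_copy = local_target / source.name
    shutil.copy2(source, local_copy)                    # onsite: quick restores
    cloud_client.write_file(f"dr/{source.name}",
                            local_copy.read_bytes())    # offsite: DR copy

def restore(name: str, local_target: Path, cloud_client) -> bytes:
    local_copy = local_target / name
    if local_copy.exists():                  # prefer the fast local copy
        return local_copy.read_bytes()
    return cloud_client.read_file(f"dr/{name}")  # fall back to the cloud tier
```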

Some data storage managers opt to continue to rely on somewhat outdated backup technologies mainly because switching backup products can be complicated. And there may be resistance to new data protection technologies because using them means adding a separate tool with the associated management requirements. But when cloud storage is so tightly integrated with a company’s existing backup application, those apprehensions quickly disappear.

The addition of a cloud option to legacy backup tools may be just the push that some firms need to give cloud backup a shot. Another backup technology -- continuous data protection (CDP) -- provides a good analogy. As a standalone product, CDP saw little adoption, but when it was integrated with backup software products that users were familiar with, it went mainstream.

“Backup and long-term retention use cases are good examples of where IT organizations can dip their toe into the cloud pool,” Whitehouse said. “It’s replacing the need to create tapes and ship those tapes offsite.”

 

This was first published in December 2011
