Feature

Hot data storage technologies for 2013

Andrew Burton, Rich Castagna, Todd Erickson, John Hilliard, Sonia Lelii, Dave Raffo and Carol Sliwa


These six data storage technologies will play pivotal roles in transforming data centers in 2013. We also review our hot storage tech predictions from last year.

Our annual hot data storage technologies forecast cites the practical applications of techs that are available and ready now, rather than oohing and aahing over a list of science projects that may never leave the lab. That's not to say our tech picks lack pizzazz; they represent some of the most exciting technologies at the core of data center transformation, including solid-state storage, storage clouds, virtualization and data protection.

In 2013, we think a lot of data storage shops will sidestep spinning disk in favor of all-flash arrays -- the prices are plunging and the performance is jaw-dropping. Solid-state will also become a key tool for caching apps and data to help speed up hard disk systems.

Cloud storage services will figure prominently in many companies' disaster recovery (DR) plans, offering inexpensive virtual colocations and near-instantaneous recoveries. But as file share and sync services continue to proliferate, the cloud will also create a little stress for storage managers.

Nightlies and weeklies may disappear from many backup operations in the coming year as more companies turn to snapshot-based backups. And a lot of the data they'll be backing up will be stored on systems specifically designed for virtualized server environments.

 

All-flash storage arrays

With price the major obstacle to implementing solid-state storage, arrays packed exclusively with flash have taken time to catch on. But a bevy of startups offering lower prices have made all-flash arrays a reality, and acquisitions by storage giants could push them even deeper into enterprises in the coming year.

Simply put, the need for speed has created a market for flash systems. Top-tier, all-solid-state drive (SSD) arrays can deliver 500,000 IOPS to 1,000,000 IOPS, and even "second-tier" arrays offer 100,000 IOPS to 200,000 IOPS at a fraction of the price for a top-tier box.

"When you think of an all-SSD array, you're thinking about how you can pack the greatest amount of IOPS or storage performance into the smallest form and with the smallest investment," said Jeff Byrne, a senior analyst and consultant at Hopkinton, Mass.-based Taneja Group.

The cost of an all-SSD array is exorbitant on a dollar-per-GB basis, but the scales tip in its favor if dollars per IOPS is the measurement. So the best use cases for all-SSD storage arrays are environments that rely on applications requiring sustained high performance.

"Those would be things like data analytics, digital imaging, VDI [virtual desktop infrastructure], database applications, financial trading systems and gaming websites," Byrne said. The applications also feature high transactional volume and highly random I/O, which justifies an all-SSD array's high cost per GB.

All-SSD storage platforms have come from startups, including Kaminario, Nimbus Data, Pure Storage, SolidFire, Skyera, Tegile Systems, Violin Memory and Whiptail Technologies, among others. But they'll soon have company. EMC Corp. acquired XtremIO in May 2012 and plans to release its all-flash "Project X" system in mid-2013. And IBM is already in the market with a slate of all-flash systems it acquired when it bought Texas Memory Systems in August 2012.

Arrays from the top-tier vendors will set you back between $16 and $20 per GB. Midtier flash arrays still fetch a premium price at $3 to $8 per GB. Today's enterprise spinning disk array prices are typically below $2 per GB.
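
A quick back-of-the-envelope calculation makes the point. This sketch (in Python) uses illustrative figures drawn from the ballpark prices above, plus an assumed IOPS number for the disk array, to show how the economics reverse when the yardstick shifts from dollars per GB to dollars per IOPS.

    # Rough cost comparison using the ballpark figures cited above.
    # All numbers are illustrative, not vendor quotes; the disk
    # array's IOPS figure is an assumption.
    arrays = {
        # name: (usable capacity in GB, price per GB, IOPS)
        "top-tier all-SSD": (10_000, 18.00, 750_000),
        "midtier all-SSD":  (10_000,  5.50, 150_000),
        "enterprise disk":  (10_000,  1.75,  20_000),
    }

    for name, (capacity_gb, per_gb, iops) in arrays.items():
        total = capacity_gb * per_gb
        print(f"{name:18s} ${per_gb:5.2f}/GB   ${total / iops:5.2f}/IOPS")

    # The flash arrays cost roughly 3x to 10x more per GB, but the
    # top-tier box works out to about $0.24/IOPS vs. $0.88/IOPS for disk.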

All-SSD storage vendors are working to bring down the cost of their arrays while refining their overall designs for greater efficiency and cost effectiveness.

One way of reducing cost is to use multi-level cell (MLC) flash instead of the more expensive single-level cell (SLC) flash. SLC flash is more durable and reliable, with a lifecycle of approximately 100,000 write cycles. MLC has a lifecycle of approximately 10,000 write cycles, but vendors have improved the performance and durability of MLC through software and better ways of writing data. Data reduction technologies can also help lower the price by effectively increasing the amount of usable storage space.
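
Data reduction changes the math in a similar way. Here's a minimal sketch of that arithmetic; the 4:1 reduction ratio is purely an assumption for illustration, since real ratios vary widely by data set.

    # Effective price per GB after data reduction. The 4:1 combined
    # dedupe/compression ratio is an assumed figure for illustration.
    raw_price_per_gb = 5.50      # midtier all-SSD price from above
    reduction_ratio = 4.0        # assumed dedupe + compression ratio

    effective_price_per_gb = raw_price_per_gb / reduction_ratio
    print(f"Effective price: ${effective_price_per_gb:.2f}/GB")
    # At 4:1, a $5.50/GB array effectively costs about $1.38/GB --
    # in the neighborhood of the spinning-disk prices cited earlier.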

As flash prices drop and the use of MLC becomes more pervasive in 2013, we'll see flash arrays move from niche environments to traditional enterprise applications, even replacing spinning disk systems.

Cloud-based disaster recovery

Cloud-based DR may be an ideal disaster-proofing option for both small and medium-sized businesses (SMBs) and enterprise-scale companies. Any organization can easily and inexpensively ship copies of its data to a cloud storage service; with server virtualization, new virtual servers can be stood up in the cloud to access that data when local operations are disrupted.

"Essentially, cloud-based disaster recovery takes traditional recovery assets, such as storage systems dedicated for data backup, and relocates them into a cloud-based storage environment provided by a third-party firm," said Paul Kirvan, an independent consultant and DR expert.

Storage magazine's recent Purchasing Intentions survey found companies are still approaching cloud storage with caution, but approximately 12% of respondents said they're using the cloud for DR.

And a March 2012 Forrester Research Inc. survey commissioned by IBM noted that large organizations can benefit by looking to an outside cloud DR vendor. According to the Cambridge, Mass.-based research firm, 23% of enterprises were expanding or upgrading implementations of cloud DR (also referred to as disaster recovery as a service) or planning to implement it within 12 months.

Forrester also found that an additional 36% of those surveyed expressed interest in the technology, and that more than half of respondents considered it a "top hardware/IT infrastructure priority."

"IT managers who have not yet investigated the possibility of sending some of their recovery to the cloud are behind the times; it's time to start planning," Forrester concluded in its study.

Turning to an outside vendor frees cloud DR users from having to construct and maintain the infrastructure needed to support a DR plan -- an advantage for smaller organizations that may not have the staff or resources to build such a system on their own.

Kirvan said turning to cloud DR still requires resolving how data needs to be stored before taking action. That includes deciding whether to use synchronous or asynchronous replication to the cloud (which could be critical if bandwidth is at a premium), whether to retain any backup tapes that are in use and what type of data -- for example, databases, applications or other critical information -- should be copied to the cloud, he said.
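
The bandwidth question in particular lends itself to a quick feasibility check. The sketch below estimates whether a WAN link can keep pace with a given daily change rate under asynchronous replication; every input here is an assumption for illustration.

    # Can the WAN link keep up with the daily change rate? If not, the
    # recovery point objective (RPO) will drift further behind each day.
    # All inputs are assumed figures for illustration.
    daily_change_gb = 200     # data changed per day
    link_mbps = 100           # nominal WAN bandwidth
    efficiency = 0.7          # protocol overhead, contention, etc.

    gb_per_hour = link_mbps * efficiency * 3600 / 8 / 1024
    hours_needed = daily_change_gb / gb_per_hour

    print(f"Effective throughput: {gb_per_hour:.1f} GB/hour")
    print(f"One day of changes takes {hours_needed:.1f} hours to replicate")
    # ~30.8 GB/hour here, so 200 GB ships in about 6.5 hours -- feasible.
    # If the result exceeded 24 hours, the link couldn't sustain the load.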

"[Cloud-based DR services] provide a cost-effective secondary backup and recovery solution that supplements existing backup and recovery arrangements," Kirvan said. "An ideal strategy is to establish a hybrid configuration that blends both on-site and cloud DR resources."

Snapshot-based backups

The integration of array-based snapshots with backup software allows users to manage snapshots as part of the backup process. Historically, array-based snapshots have relied on management software sold by the storage hardware vendor, and as such, had to be managed separately. In 2012 this changed, with a number of backup software vendors announcing the ability to manage array-based snapshots.

"More and more backup and recovery suites are including the capabilities to control and catalog array-based snapshots," said Rachel Dines, an analyst at Forrester Research. "Furthermore, some solutions can recover individual files and objects from a snapshot. That means, for the first time, snapshots can be truly integrated into the data protection strategy."

According to Greg Schulz, founder and analyst at Stillwater, Minn.-based StorageIO, this development addresses challenges users have faced when using each technology.

"The challenge with snaps has been managing what is protected, something that legacy tools do a good job with," Schulz said. "On the flip side, the challenge with traditional backup is the time and resources needed to capture or collect the data and then copy it somewhere."

Forrester Research's Dines said that snapshot backup helps users meet backup windows and recover data faster -- two issues IT pros have cited as growing problems for years. "Snapshot backups allow users to virtually eliminate backup windows by taking a snapshot and using that as the backup," she said. "Snapshot backups can also be rapidly mounted and used almost immediately -- much faster than restoring from a traditional backup."

The concept of using snapshots as part of a data protection strategy isn't new. Many IT shops have used array-based snapshots as a rapid recovery strategy while also creating backups of their data. However, the integration of snapshot and backup allows users to streamline that approach.
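
In outline, that streamlined approach looks something like the sketch below. The array and catalog objects and all of their methods are hypothetical stand-ins for whatever a given backup suite and array actually expose, not any vendor's real API.

    import datetime

    RETENTION = 14  # keep two weeks of snapshots (an assumed policy)

    def snapshot_backup(array, volume, catalog):
        """Take an array-based snapshot and register it with the backup
        catalog so files can later be browsed and restored from it."""
        snap = array.create_snapshot(volume)   # near-instant on the array
        catalog.register(volume=volume,
                         snapshot_id=snap.id,
                         created=datetime.datetime.utcnow())

        # Rotate out the oldest snapshots once we exceed the retention
        # count, much as a backup suite rotates traditional media.
        # catalog.snapshots() is assumed to return oldest-first.
        for old in catalog.snapshots(volume)[:-RETENTION]:
            array.delete_snapshot(old.snapshot_id)
            catalog.remove(old)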

"2012 and 2013 are important for snap-backs because the technology is now there for some vendors, and [early adopters] have seen the products mature," Dines said. "The time is right to rethink backups using snap-backs as part of modernizing data protection to find and fix problems rather than swapping out media like flat tires on a car."

StorageIO's Schulz agreed that snapshot backups were an important step toward modernizing data protection and said that "snapshot backups are, or should be, hot because they [enable users to] rethink how, when and what information is protected." He went on to say that the marriage of snapshot technology with the management capabilities of backup software is a "perfect example of technology convergence" taking advantage of the best aspects of each.

Server-based flash cache

Server-based flash cache became a hot topic in the storage world with the launch of EMC's VFCache, and the performance-boosting technology picked up momentum as additional major vendors, such as Dell Inc. and NetApp Inc., unveiled similar offerings.

The push toward server-side flash cache by storage vendors essentially validated the market staked out by trailblazers such as Fusion-io Inc. -- with its ioTurbine software for virtual environments and directCache for physical servers -- along with LSI Corp., OCZ Technology Group Inc., SanDisk Corp. and VeloBit Inc.

"It's hot now, and it's only going to get hotter because it's a relatively simple addition to a server," said Dennis Martin, president at Demartek LLC in Arvada, Colo. "It doesn't require application changes or changes to the back-end storage system, and it provides a significant boost in storage performance."

Putting the cache in the application server rather than the storage system reduces the latency associated with the network hop. To further minimize latency, server-based flash caches often use PCI Express (PCIe) cards connected directly to the CPU and system memory rather than SAS/SATA-based SSDs. Caching software generally determines the most frequently accessed data and automatically shifts a copy to the flash cache. Algorithms differ by vendor, but read caches typically require a warm-up period to achieve optimal performance.

For instance, EMC's VFCache write-through (read) cache might need 30 to 60 minutes to fill with data from an Oracle Corp. database. The initial data writes go from the application server to the storage array, and the PCIe card populates asynchronously to prevent application slowdown. I/O filter driver software installed on the server determines whether a data request can be fulfilled from the PCIe card.
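
Stripped of vendor specifics, the general pattern looks something like this sketch of a write-through read cache: reads are served from flash when possible, writes always go to the array first, and the cache warms up as I/O flows through it. (For simplicity, the cached copy here is updated inline rather than asynchronously as described above.)

    from collections import OrderedDict

    class WriteThroughCache:
        """Generic illustration of a server-side write-through cache;
        not any vendor's implementation. `backend` stands in for the
        back-end storage array."""

        def __init__(self, backend, capacity_blocks):
            self.backend = backend
            self.capacity = capacity_blocks
            self.cache = OrderedDict()          # LRU order: oldest first

        def read(self, block):
            if block in self.cache:             # hit: served from flash
                self.cache.move_to_end(block)
                return self.cache[block]
            data = self.backend.read(block)     # miss: go to the array
            self._insert(block, data)           # warm the cache as we go
            return data

        def write(self, block, data):
            self.backend.write(block, data)     # write-through: array first
            self._insert(block, data)           # then refresh the cached copy

        def _insert(self, block, data):
            self.cache[block] = data
            self.cache.move_to_end(block)
            if len(self.cache) > self.capacity:  # evict least-recently used
                self.cache.popitem(last=False)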

Another, more complex type of server-based flash cache, such as Dell's Fluid Cache (due out next year), aims to accelerate both reads and writes. A read/write cache is more work for the vendor than a read-only cache because writes take place before the data is written to the back-end storage system, so the software needs to ensure the data is protected, Martin said.

One of the key questions surrounding server-based caches is the degree to which they work with third-party storage systems. VFCache, for instance, technically works with any server or external storage system, but EMC spelled out plans to deeply integrate the cache with its storage management and Fully Automated Storage Tiering (FAST) technologies. Industry analysts expect most server-based flash cache software to ultimately work best and afford the most sophisticated features when used with the same vendor's storage systems.

 

Storage systems for virtual environments

Server virtualization prompts organizations to adopt networked storage and pushes storage vendors to change the way storage is provisioned and managed. That trend is accelerating as virtual servers become more ingrained in the data center, virtualization spreads to desktops and VMware Inc. gives server administrators more control over storage.

Major storage vendors are working more closely than ever with VMware to tie into its storage features, startups are developing storage systems that can be set up from within vCenter for easy provisioning and management, and converged stacks keep popping up to better integrate virtual machines (VMs) and storage.

More than ever, control of storage is being shared by virtual servers and server administrators. VMware is driving this trend with its virtual storage appliances (VSAs) and future features such as vFlash, vSAN and vVols.

Almost every storage vendor has changed the way its products are sold and managed as a result of server virtualization. The major storage vendors support vStorage APIs for Array Integration (VAAI) and are working on ways to provision storage without LUNs, RAID groups and mount points. They also have reference architectures, integrated stacks or both that combine storage, compute, networking and server virtualization to make it easier for users to manage storage for VMs.

Startups Nutanix Inc., Scale Computing and SimpliVity Corp. sell what they call "hyper-converged" systems that put capacity, computing and pre-installed VMs in one box. Other newcomers, such as Tegile Systems Inc. and Tintri Inc., designed storage systems specifically to support VMs.

Adoption of another hot storage technology, solid-state, has been driven largely by the need for better storage performance for VDI. Storage performance had been the biggest obstacle to implementing VDI, but companies are getting around that now by implementing SSDs in storage arrays dedicated to virtual desktops.

VSA technology has been around for a while, with DataCore Software, Hewlett-Packard Co.'s LeftHand and others offering similar products for years. But VMware's VSA push will likely prompt organizations to take a closer look. VSAs use a virtual machine in the host to connect to an onboard RAID controller and make that storage available to other hosts through iSCSI or NAS.

The need to manage storage for VMs has brought about a new industry buzz phrase, "software-defined storage," a takeoff on software-defined networking. Software-defined storage has no agreed-upon definition yet, but you can expect vendors to commonly use it to describe how they work with VMs.

Cloud-based file-sharing and sync services

File-sharing and syncing services are growing at such a rate that more than 30 vendors now offer products. The driving force behind cloud-based file sharing is the mobile worker who is becoming more dependent on portable devices, such as smartphones and tablets. Those users want to collaborate and access documents stored on desktops or laptops using any mobile device, from any location at any time.

Box, Citrix (with its ShareFile service), Dropbox, Egnyte Inc., Nomadix Inc., SugarSync Inc., Syncplicity Inc. and YouSendit Inc. are some of the early companies providing cloud-based apps that sync data across desktops, laptops and mobile devices for instant access or collaboration.

New companies continue to join the market. Startup Maginatics Inc. came out of stealth recently with its MagFS online file-sharing platform that uses a distributed file system and cloud storage so end users with multiple end-point devices can access data from a shared namespace. Nasuni Corp. and Scality are two of the latest vendors to deliver offerings as part of their larger platforms for cloud-based mobile access.

SMBs were the first to show interest in cloud file-sharing/syncing services, but now it's pervasive in enterprises too. Egnyte CEO Vineet Jain said he's finding interest in cloud file sharing from larger organizations, prompting the launch of the company's enterprise version, EgnytePlus, to increase syncing capabilities and support more users.

"We're getting pulled more and more into larger enterprises," Jain said.

Like most cloud deployments, online file-sharing services can be implemented in three ways: public, private or hybrid. File-sharing vendors offer a public option in which the provider takes full responsibility for the service. They also offer a software license option in which users install their own hardware behind the firewall to ensure security. The hybrid approach melds on-premises file sharing with a public cloud file-sharing service.

While many companies originally turned to cloud file sharing just to allow employees to access files remotely from smartphones and tablets, IT managers found that as an added benefit they could replace some on-premises file servers. This reduces virtual private network (VPN) costs and the challenge of managing geographically remote workers.

It's clear mobile devices are changing the way employees collaborate and access documents, and companies now are forced to accommodate this change. "The old way of file sharing doesn't work for the mobile workforce," said Terri McClure, a senior analyst at Milford, Mass.-based Enterprise Strategy Group.

Andrew Burton, Rich Castagna, Todd Erickson, John Hilliard, Sonia Lelii, Dave Raffo and Carol Sliwa contributed to this article.

This was first published in December 2012
