If you've been wondering what newfangled technology will show up in your data center in 2015, read on. For 12 years, Storage magazine has celebrated the rite of passage into a new year by highlighting the half-dozen or so hot storage technologies we think will have a real impact on data center operations in the coming year.
As in years past, our list veers sharply in the direction of practicality -- most of our hot techs are "newish" rather than brand-spanking new because we want to focus on those technologies that have attained a level of maturity that shows us they're proven and generally available.
This year's list reflects the profound impact solid-state has had on storage systems with enterprise-class all-flash arrays, flash caching and hybrid storage arrays all among 2015's hot technologies.
Rounding out our bevy of noteworthy technologies are VMware Virtual Volumes (VVOLs), which may revolutionize storage provisioning and configuration; affordable and speedy cloud-based disaster recovery (DR); and server SANs that transform servers into arrays.
VMware Virtual Volumes
Virtual Volumes is a natural fit as a hot data storage technology for 2015, and could probably qualify for a few other lists, such as most eagerly anticipated and most long-awaited storage technologies. Who wouldn't want something that eliminates the need to use LUNs and NAS mount points to provision storage? That's what VMware and storage array vendors promise VVOLs will do, and they say VVOLs are due any day now. They were part of the VMware vSphere 6 beta, which is expected to become generally available in the first quarter of 2015.
VVOLs give each virtual machine (VM) its own volume on the storage array, so services such as snapshots, replication and thin provisioning can be applied per VM. That allows each VM to have its own storage services and policies.
VVOLs build on VMware vStorage APIs for Array Integration (VAAI) and vStorage APIs for Storage Awareness (VASA) initiatives. VAAI allows hypervisors to offload functions to storage systems, while VASA provides visibility between the hypervisor and the array. VVOLs talk to the storage system directly through VASA instead of using LUNs or NAS mount points, and work as storage containers with a data store, storage services and metadata. The containers align with individual VMs, so VVOLs change the main unit of storage management from a LUN to a VM object.
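To illustrate the shift from LUN-centric to VM-centric management described above, here is a minimal Python sketch of per-VM storage containers that carry their own service policies. The class and attribute names are invented for illustration; they are not VMware or array-vendor APIs.

```python
from dataclasses import dataclass, field

@dataclass
class StoragePolicy:
    """Per-VM data services; in a LUN world these applied to the whole LUN."""
    snapshots: bool = False
    replication: bool = False
    thin_provisioned: bool = True

@dataclass
class VirtualVolume:
    """A storage container aligned with one VM, addressed through the array's
    management path (VASA-like) rather than a LUN or NAS mount point."""
    vm_name: str
    capacity_gb: int
    policy: StoragePolicy = field(default_factory=StoragePolicy)

class Array:
    def __init__(self):
        # The unit of management is the VM object, not the LUN.
        self.containers = {}

    def provision(self, vm_name, capacity_gb, policy):
        vvol = VirtualVolume(vm_name, capacity_gb, policy)
        self.containers[vm_name] = vvol
        return vvol

array = Array()
array.provision("web01", 100, StoragePolicy(snapshots=True, replication=True))
array.provision("db01", 500, StoragePolicy(snapshots=True))

# Each VM carries its own service policy instead of inheriting a LUN's.
print(array.containers["web01"].policy.replication)  # True
print(array.containers["db01"].policy.replication)   # False
```

The point of the sketch is the data model: policies attach to VM-level objects, which is exactly the change VVOLs make to the management unit.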
NetApp (FAS), Hewlett-Packard (3PAR) and Dell (EqualLogic) say they'll have arrays with VVOLs enabled as soon as VMware makes the technology generally available. EMC, the majority owner of VMware, is sure to follow and plans to support VVOLs in its ViPR software-defined storage platform. Smaller vendors have also disclosed VVOLs strategies. For instance, all-flash array vendor SolidFire plans to enable its quality of service to guarantee storage performance to every VM through VVOLs.
"If you manage storage, VVOLs need to be in your conversation," said Greg Schulz, founder and senior advisor at StorageIO in Stillwater, Minn. "You need to get up to speed on it. Every storage vendor better have a VVOLs story. Having VVOLs will be table stakes, just like having a LUN or a file share."
Newer storage companies, such as VM-centric array vendor Tintri and hyper-converged vendors such as Nutanix and SimpliVity, architected their systems from the start to avoid using LUNs and mount points to provision storage. VMware's Virtual SAN (vSAN) hyper-converged software will support VVOLs in its next version. But vendors of legacy storage systems need to rework their arrays to support VVOLs with services such as snapshots, replication and thin provisioning.
"VVOLs are an inevitable progression of per-VM storage capabilities proven out by Tintri, and now embraced by Virtual SAN and others," said Mike Matchett, a senior analyst at Taneja Group in Hopkinton, Mass. "Unfortunately, layering or retrofitting VVOLs support onto traditional arrays has proven challenging in the details."
Enterprise-class all-flash arrays
Performance-boosting all-flash arrays (AFAs) are poised for greater adoption across a wider range of workloads now that most of the major vendors and startups have bolstered their products with additional capacity options and enterprise storage and data reduction features.
Capabilities such as snapshots, clones and replication have become commonplace in AFAs. Plus, the combination of inline compression and deduplication, and the declining cost of flash have lowered the price of AFAs to the point they may be considered for general-purpose workloads.
The Great Atlantic & Pacific Tea Company supermarket chain -- better known as A&P -- made a long-term investment in IBM's FlashSystem V840 in mid-2014 to replace end-of-life disk arrays. A&P expects to run multiple databases for mission-critical applications on the V840 and see benefits in performance and a reduced data center footprint, according to Richard Angelillo, the company's vice president of information services. A&P licensed IBM's optional inline compression to potentially increase the capacity from 40 TB usable to 200 TB effective.
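The capacity arithmetic behind that claim implies roughly a 5:1 data reduction ratio. The quick calculation below derives the ratio from the figures quoted above; it is illustrative arithmetic, not an IBM specification.

```python
usable_tb = 40      # physical usable capacity quoted above
effective_tb = 200  # effective capacity quoted with inline compression enabled

# The implied data reduction ratio from the two quoted figures:
reduction_ratio = effective_tb / usable_tb
print(f"Implied data reduction ratio: {reduction_ratio:.0f}:1")

# Effective capacity scales with whatever ratio a workload actually achieves:
def effective_capacity(usable_tb, ratio):
    return usable_tb * ratio

print(effective_capacity(40, 5))  # 200.0
```

The achieved ratio varies by data type, which is why vendors quote "effective" capacity separately from usable capacity.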
"The value of AFAs relative to pure [hard disk drive] HDD boxes is much more evident -- and you hit ROI faster -- if you're loading multiple applications onto the array as opposed to just buying it to speed up a single application," wrote Eric Burgener, a research director in IDC's storage practice, in an email. Framingham, Mass.-based IDC predicts all-flash arrays will ultimately replace traditional arrays with their HDDs, Burgener noted.
Tim Stammers, a senior analyst at New York-based 451 Research, said the AFA market will show a 42% compound annual growth rate through 2018, when it reaches an estimated $3.4 billion. A 2013 survey of more than 200 enterprise storage professionals done by 451 Research's InfoPro service showed just 8% had deployed or piloted all-flash arrays. This year, the percentage rose to 11%, and another 19% said they expect to deploy AFAs within 18 months, Stammers said.
All-flash array vendors claim potential users need to consider the total cost of ownership (TCO) and price per IOPS rather than simply the price per gigabyte (GB). But Marc Staimer, president of Dragon Slayer Consulting in Beaverton, Ore., said the usable price per GB will need to fall to the ballpark range of HDDs, especially in public perception -- and not simply with the "hand-waving voodoo magic of dedupe and compression" -- for AFAs to take off.
Arun Taneja, founder and consulting analyst at Taneja Group, said the battleground for all-flash arrays and hybrid systems is the traditional array running 15,000 rpm HDDs. "Nobody should be buying HDD-only systems anymore. They're all going to be hybrids or all-flash arrays," he said.
Cloud-based disaster recovery
Disaster recovery is one of the more costly and critical projects for IT, which makes the cloud a particularly attractive alternative to in-house deployments. As users have become more comfortable with cloud storage services such as backup, cloud-based DR offerings have proliferated for those who want to step up their use of cloud data protection services.
A cloud-based disaster recovery service requires replicating full data sets or entire VMs to the cloud. The services use server virtualization to access the storage in the cloud to effectively create a secondary data center. These offerings support server images and production data backup from a customer's site to the provider's cloud. Prepackaged disaster recovery as a service (DRaaS) offerings make failing over to the cloud even easier and potentially less costly with pay-per-use pricing models.
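The replicate-then-fail-over workflow those services automate can be sketched in a few lines of Python. Everything here is a toy model: the site names, image dictionaries and function shapes are invented for illustration, not any provider's API.

```python
import copy

class Site:
    def __init__(self, name):
        self.name = name
        self.vm_images = {}  # vm_name -> replicated image (data + config)
        self.running = set()

def replicate(primary, cloud):
    """Ship full VM images to the provider's cloud; the copies sit cold
    (and cheap, under pay-per-use pricing) until a disaster occurs."""
    for vm, image in primary.vm_images.items():
        cloud.vm_images[vm] = copy.deepcopy(image)

def fail_over(cloud):
    """On disaster, boot the replicated images in the provider's cloud,
    turning it into a temporary secondary data center."""
    for vm in cloud.vm_images:
        cloud.running.add(vm)

primary = Site("on-prem")
cloud = Site("draas-provider")
primary.vm_images = {"erp": {"disk": "..."}, "mail": {"disk": "..."}}

replicate(primary, cloud)    # runs continuously during normal operation
fail_over(cloud)             # invoked only when the primary site is down
print(sorted(cloud.running))  # ['erp', 'mail']
```

Real offerings add the hard parts the analysts discuss below: hypervisor conversion, network reconfiguration and user access to the recovered environment.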
"With disaster recovery, the TCO seems to hold steady in favor of using the cloud," said Taneja Group's Matchett. "Until you need it, the data is cold. One thing people are talking about is restoring in the cloud; if you have virtualization and backup virtual machines, then you can restore that VM in the cloud if the primary site is unavailable."
Matchett said inroads have been made with tools that convert or migrate VMs to the cloud.
"There are tools that work at the level of the application blueprint where more complex application architectures can be spun up," he said.
James Bagley, a senior analyst at Storage Strategies Now in Austin, Texas, said there's been an increase in the past year in the number of DRaaS offerings, and they're more upmarket with features such as automation, network replication and the ability to convert VMs from the customer's on-site hypervisor to the one running in the cloud.
"There can be issues with taking an existing environment and having it stand up in the cloud," Bagley said. "Different hypervisors and network settings are usually the bugaboo there."
Dragon Slayer Consulting's Staimer said disaster recovery is more than just recovery of the data, meaning users need to broaden their evaluations of DRaaS offerings.
"It's more than just mounting the data," Staimer said. "How are you connecting to the user? Do they do network manipulation to allow access? Are they providing network recovery user access? What percentage of customers can they take care of at one time and for how long? A lot of people who are getting into this don't know what they're getting into."
Nonetheless, cloud-based DR can offer astounding recovery time objectives and recovery point objectives that are within the financial reach of even the smallest companies.
Flash cache software
Flash storage has the ability to reduce latency and boost IOPS, but solid-state hardware alone won't necessarily do the trick. That's where flash cache software comes in, providing intelligence and automated management that enables critical applications to be served from a higher performing tier of storage.
The emergence of flash cache as a hot technology parallels the increased density of applications, particularly in data centers with large installations of transactional or analytic databases.
Flash caching vendors are winning converts by demonstrating that they can reduce the management burden while boosting overall system performance, said Jim Handy, a semiconductor analyst at research firm Objective Analysis in Los Gatos, Calif.
"Enterprises that have postponed adding flash to their systems are now becoming convinced that flash caching software can take away the last of the problems they worried about," Handy said.
Momentum in 2014 came from disruptive vendors like PernixData, which added the capability to pool server RAM for cache in virtualized environments, and from established hardware vendors like HGST, which unveiled its ServerCache software for Windows Server and Linux operating systems.
Flash cache can be deployed in tandem with HDDs in a single server, as a component within a shared storage array or aggregated in a virtual pool across multiple servers. The flash software uses algorithms that examine historical access patterns of applications and targets flash at data blocks most in need of acceleration. The cache mechanism temporarily stores a copy of the hottest data on NAND memory chips, enabling files to be quickly retrieved while also freeing up production bandwidth.
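One simple way to realize the "hottest data" heuristic the paragraph describes is frequency-based admission: count accesses per block and keep only the most-accessed blocks on the flash tier. The sketch below is a toy model of that idea, not any vendor's algorithm.

```python
from collections import Counter

class FlashCache:
    """Toy read cache: a block is served from flash once its access count
    ranks it among the hottest blocks that fit in the cache."""
    def __init__(self, capacity_blocks):
        self.capacity = capacity_blocks
        self.hits = Counter()  # historical access pattern per block
        self.cached = set()    # blocks currently held on flash

    def read(self, block):
        self.hits[block] += 1
        if block in self.cached:
            source = "flash"   # served from the fast tier
        else:
            source = "disk"    # served from HDD; a copy may be promoted
        # Re-rank: keep only the hottest blocks that fit in the cache.
        self.cached = {b for b, _ in self.hits.most_common(self.capacity)}
        return source

cache = FlashCache(capacity_blocks=2)
workload = ["a", "b", "a", "b", "c", "a", "b"]
served = [cache.read(b) for b in workload]
print(served)  # ['disk', 'disk', 'flash', 'flash', 'disk', 'flash', 'flash']
```

Production implementations layer write-back or write-through handling, eviction hysteresis and block-size tuning on top of the basic hot/cold ranking shown here.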
Stamford, Conn.-based analyst firm Gartner estimates the market for flash cache software could top $350 million by 2019, with a compound annual growth rate in the teens. The high cost of dedicated storage provided the impetus that gave rise to software-based flash cache, said David Russell, a Gartner vice president of storage technologies and strategies.
"People are tired of overprovisioning. They don't want to buy more Fibre Channel disk just to be able to meet the IOPS," Russell said. "We live in a scarcity world and more of the spotlight is on storage, especially the server vendors whose margins have been hit hard."
As all-flash arrays struggle to gain broad traction, flash caching has emerged as an interim method for speeding up performance on specific application workloads.
"In most environments, only about 10% to 15% of data is active at any point in time," said George Crump, president of IT analyst firm Storage Switzerland. "Buying 10% to 15% of your capacity in flash, and having it automatically move the write data to cache at the right time is a very economical way to deploy flash."
Networking server-based storage
Traditional shared storage poses a number of problems in today's virtualized world. The management of disparate storage entities is cumbersome, buying hardware to accommodate growing data is maxing out IT budgets and VMs have to battle each other for adequate IOPS. Those are all difficulties networking server-based storage technology can help ease, and a reason why more enterprises will be considering it in 2015.
Also referred to as server-attached storage or server SAN, this technology uses software to abstract the components of a traditional shared storage architecture away from the hardware. The storage is directly attached to the host server, while the software runs as a virtual machine, pooling the physical capacity so that all VMs have access.
That means expensive hardware is no longer a necessity; commodity servers, storage and networking can be used while still attaining adequate performance and capacity, and scaling becomes much more cost effective.
But perhaps the biggest draw of server-based storage technologies is the management capability. In traditional SAN environments, management features are specific to arrays. Server SANs abstract those features, spreading them across the aggregated capacity.
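The pooling idea behind server SANs can be sketched in a few lines: each node contributes its direct-attached capacity, and the software layer presents one aggregate from which VM volumes are carved. This is a toy model for illustration, not any vendor's implementation.

```python
class ServerNode:
    def __init__(self, name, local_capacity_gb):
        self.name = name
        self.local_capacity_gb = local_capacity_gb

class ServerSAN:
    """Software layer that aggregates direct-attached storage across
    commodity nodes into one pool visible to every VM in the cluster."""
    def __init__(self, nodes):
        self.nodes = nodes
        self.allocated_gb = 0

    @property
    def pool_gb(self):
        # The pool is simply the sum of every node's local capacity.
        return sum(n.local_capacity_gb for n in self.nodes)

    def provision_vm_volume(self, size_gb):
        if self.allocated_gb + size_gb > self.pool_gb:
            raise RuntimeError("pool exhausted; add a node to scale out")
        self.allocated_gb += size_gb
        return size_gb

cluster = ServerSAN([ServerNode("node1", 4000),
                     ServerNode("node2", 4000),
                     ServerNode("node3", 4000)])
cluster.provision_vm_volume(1000)
print(cluster.pool_gb, cluster.allocated_gb)  # 12000 1000

# Scaling out is just adding another commodity server:
cluster.nodes.append(ServerNode("node4", 4000))
print(cluster.pool_gb)  # 16000
```

The management win the analysts describe falls out of this structure: there is one pool and one policy surface, rather than per-array features to configure on each box.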
"The basic trend comes down to simplicity," said Stuart Miniman, principle research contributor at research firm Wikibon. "Having just one platform layer that handles the whole infrastructure without having to manage it is what's attractive."
In a 2013 report, Wikibon put enterprise server SAN revenue for that year at $270 million, and predicted a rapid migration from traditional to server SAN environments beginning in 2018.
One thing that's apparent today is that more vendors, both established and startups, are continuing to make networking server-based storage plays.
According to Miniman, much of that activity can be attributed to VMware hyper-converged products. "VMware has a pretty important place in the ecosystem, so when they say 'Let's get rid of the storage array and have this new way of simplifying IT,' people start to notice," he said.
VMware last year launched vSAN, highly anticipated hyper-converged software that pools physical capacity to store VMs.
"There are a ton of startups in this space," Miniman said. "There's everything from the big players like [Hewlett-Packard] HP and EMC, to Dell doing almost every single solution in the space through partnerships and OEMs, and then there's Nutanix, Nexenta and Fusion-io."
At VMworld this year, VMware expanded on its server-based storage software platform in a way that allows hardware vendors to get on board with EVO:RAIL. The reference architecture provides a form factor for hardware partners to build on while using the vSAN architecture for management and provisioning.
Hybrid storage arrays
Hybrid flash arrays that mix HDDs and solid-state drives (SSDs) are the leading option for enterprise flash deployments today -- still well ahead of all-flash arrays and server-side flash.
According to a recent IDC report, 51% of enterprises with at least 1,000 employees have already added flash to their storage environment. Of that group, 84% have deployed some kind of hybrid system. Sixty-six percent said they took a DIY approach by adding SSDs to existing arrays, while 18% opted for a new hybrid array.
Purpose-built hybrid flash array deployments will likely increase this year. Whether designed from the ground up or re-architected for flash, these arrays offer better performance and reliability than a DIY hybrid array because they're designed to make the best use of flash rather than treating the drives as if they were traditional spinning disks. Every major storage vendor offers hybrid flash arrays today, and most offer a variety of choices. EMC sells scalable hybrid VNX and VMAX systems in a variety of capacity and performance levels. The company also offers hybrid flash systems aimed at specific workloads such as the EMC Isilon Solutions for Hadoop Analytics and the EMC Isilon Video Surveillance Solution. And depending on the configuration, hybrid systems are less expensive than all-flash arrays.
Other than cost, the main limitation of all-flash arrays is capacity. Until recently, all-flash arrays offered enough capacity to handle certain application workloads but not enough to serve an entire enterprise. That's changing, but it's still far from the norm. Capacity is, of course, much less of an issue in hybrid systems running high-performance flash alongside hard disk storage. NetApp's FAS8080 EX scales to 5.76 PB of spinning disk and 36 TB of flash, for example.
Most organizations have one or two applications, such as virtual desktop infrastructure, which require very high performance, while the rest of their apps are perfectly happy accessing data on traditional disk drives. This makes hybrid arrays appealing to many organizations today. As the price of flash continues to decline and capacity grows, all-flash arrays may take the lead, but for now the hybrid array is king.
Andrew Burton, Rich Castagna, Garry Kranz, Sonia Lelii, Dave Raffo, Carol Sliwa and Sarah Wilson are all members of TechTarget's Storage Media Group.