Bringing focus to the software-defined storage market

Jon Toigo shares two reasons why you should not lose faith despite all the hype surrounding the software-defined storage market.

For a few years now, folks have been pushing the idea that server virtualization -- and its correlatives, software-defined networks and software-defined storage -- is a panacea for everything that's been holding back IT service delivery.

The mantra is easy to grasp: less hardware plus centralized management equals a lower cost of operation and a reduced total cost of ownership. But few shops have actually seen that value proposition delivered, and many operators -- myself included -- have been losing our collective religion.

What's hurt the software-defined storage market?

In the storage realm, the software-defined storage (SDS) market has been hampered by the limited vision and proprietary objectives of the leading hypervisor vendors.

On the one hand, the software-defined storage market has been advanced as a purported fix for the disappointing performance of virtualized applications -- though the actual performance issue rarely has anything to do with storage I/O latency.

On the other hand, hypervisor vendors have advanced their interpretation of SDS functionality -- including some storage services, excluding others -- in an apparent effort to create and reinforce silos of technology that only work with data from their virtualized workloads and preferred vendor hardware.

What's impressive about the software-defined storage market?

There are really only two things I've seen in the software-defined storage market over the past year that have even remotely impressed me:

  • The delivery of specialized archival gateway appliances as virtual machines (for those who want to keep their hardware footprint on the smallish side while still observing the commonsensical and essential practice of data archiving)
  • The introduction of adaptive parallel I/O technology, which leverages a full SDS stack to optimize raw storage I/O throughput and delivers ridiculously fast I/O processing at an extraordinarily low cost per I/O

The archival gateway virtual machine (VM) is an innovation from Crossroads Systems, formally announced at the end of 2015. Crossroads has been delivering archive gateways for quite some time under the brand name StrongBox. They were among the first to see the promise of the Linear Tape File System (LTFS) as a means to bridge production file system-based storage to file and object archive, preferably using tape or a tape cloud. The latter was pioneered by Crossroads' partner Fujifilm, a company that continues to do the hard work of improving the efficiency, capacity and resiliency of magnetic tape.

Crossroads smartly outfitted a generic server with a preinstalled version of LTFS, giving it the ability to take files and objects from production storage and move them transparently to extremely high-capacity tape in accordance with archival policy. They initially targeted the two leading repositories of files, NetApp filers and Microsoft file servers, and made short work of bridging production storage to archive storage.
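Because LTFS presents a tape library to the operating system as an ordinary mounted file system, the policy side of such a gateway is conceptually simple. Here is a minimal sketch of the idea -- not Crossroads' actual implementation -- that sweeps files untouched for a set period from a production share to an LTFS mount; the paths and the 180-day threshold are illustrative assumptions:

```python
import os
import shutil
import time

# Illustrative assumptions, not Crossroads' code: a production NAS share
# and an LTFS-formatted tape library mounted as a regular file system.
PRODUCTION_ROOT = "/mnt/production_share"
ARCHIVE_ROOT = "/mnt/ltfs_tape"   # LTFS exposes tape as a POSIX mount
AGE_THRESHOLD_DAYS = 180          # archival policy: untouched for ~6 months

def archive_cold_files():
    cutoff = time.time() - AGE_THRESHOLD_DAYS * 86400
    for dirpath, _dirnames, filenames in os.walk(PRODUCTION_ROOT):
        for name in filenames:
            src = os.path.join(dirpath, name)
            # Policy check: last access time older than the cutoff.
            if os.stat(src).st_atime < cutoff:
                rel = os.path.relpath(src, PRODUCTION_ROOT)
                dst = os.path.join(ARCHIVE_ROOT, rel)
                os.makedirs(os.path.dirname(dst), exist_ok=True)
                # Since LTFS is just a mount, the archive "move" is an
                # ordinary copy-then-delete; LTFS handles tape placement.
                shutil.move(src, dst)

if __name__ == "__main__":
    archive_cold_files()
```

A real gateway layers cataloging, checksumming and transparent recall on top of this, but the mount-and-move model is what makes LTFS such a natural bridge.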

The relationship with Fujifilm gave them the ability to extend the bridge across a WAN to an archival cloud service called the Dternity Media Cloud. Tape provides a means to seed the Dternity cloud when there is too much data to transfer cost-effectively across a wire, and also a means to retrieve a substantial amount of data from the archive and redeploy it into production storage when necessary.

How two vendors helped the software-defined storage market

The StrongBox appliance and the Dternity gateway and media cloud service were a win for both software-defined storage vendors and users. But there was still a challenge. Some customers didn't want to deploy another server -- even a cool archive gateway appliance. Given their efforts to consolidate and reduce the number of servers via server virtualization, deploying a specialized server seemed to be a bit of backsliding. So, Crossroads innovated again, delivering its StrongBox for Fujifilm Dternity gateway appliance as a virtual machine capable of running under your favorite hypervisors, starting with ESXi. That bit of out-of-the-box thinking may well make archive much more commonplace in virtual server settings. You can download a free 90-day trial version of StrongBox VM from Crossroads Systems to test in your own shop.

That takes care of what I call "retention storage," which is the second part of the contemporary storage paradigm. Retention storage is where we put data that we need to retain but rarely, if ever, access or update. As much as 70% of your current data probably belongs in retention storage, and tape archive is clearly the cost-effective choice.
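Some back-of-the-envelope arithmetic shows why that 70% figure matters. The per-gigabyte prices below are illustrative assumptions, not vendor quotes, but the shape of the result holds across a wide range of realistic prices:

```python
# Back-of-the-envelope tiering math with assumed, illustrative prices.
total_data_tb = 100        # total data under management
retention_share = 0.70     # portion that is rarely accessed
primary_cost_per_gb = 0.50 # assumed $/GB for disk/flash primary storage
tape_cost_per_gb = 0.02    # assumed $/GB for tape archive

retention_tb = total_data_tb * retention_share
active_tb = total_data_tb - retention_tb

all_primary = total_data_tb * 1000 * primary_cost_per_gb
tiered = (active_tb * 1000 * primary_cost_per_gb
          + retention_tb * 1000 * tape_cost_per_gb)

print(f"All on primary storage:   ${all_primary:,.0f}")
print(f"Tiered with tape archive: ${tiered:,.0f}")
print(f"Savings: {100 * (1 - tiered / all_primary):.0f}%")  # roughly 67%
```

Even if the assumed prices are off by a factor of two in either direction, moving the rarely touched 70% to tape still removes the bulk of the acquisition cost.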

Make room for "capture storage" in your infrastructure

The other part of your infrastructure is "capture storage" -- the storage that's optimized for high-performance access and fast IOPS. There's no shortage of kit makers in the software-defined storage market who want to sell us the fastest flash arrays or the cleanest and most stove-piped VMware Virtual Volumes (VVOLs) products to expedite I/O handling. But the breakthrough we saw with the publication of SPC-1 benchmark results for DataCore Software's Adaptive Parallel I/O in December 2015, and the delivery this month of software-defined storage enabled with that technology, takes the cake.
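The core idea behind parallel I/O is easy to demonstrate: rather than funneling every request through one serialized I/O path, requests are fanned out across many workers -- in DataCore's case, across the idle cores of a multicore server. The sketch below is a conceptual illustration of that principle, not DataCore's implementation; the file path and request counts are assumptions for the demo:

```python
import concurrent.futures
import os
import time

CHUNK = 4096              # 4 KB reads, a typical small-block I/O size
COUNT = 2000              # number of read requests to issue
PATH = "/tmp/parallel_io_demo.bin"  # hypothetical test file

# Create a test file large enough to satisfy every read.
with open(PATH, "wb") as f:
    f.write(os.urandom(CHUNK * COUNT))

def read_chunk(offset):
    # Each request opens its own descriptor so reads don't serialize
    # on a shared file position.
    with open(PATH, "rb") as f:
        f.seek(offset)
        return f.read(CHUNK)

offsets = [i * CHUNK for i in range(COUNT)]

# Serial path: one request at a time, like a single-threaded I/O stack.
t0 = time.perf_counter()
for off in offsets:
    read_chunk(off)
serial = time.perf_counter() - t0

# Parallel path: the same requests spread across worker threads.
t0 = time.perf_counter()
with concurrent.futures.ThreadPoolExecutor(max_workers=os.cpu_count()) as pool:
    list(pool.map(read_chunk, offsets))
parallel = time.perf_counter() - t0

print(f"serial: {serial:.3f}s  parallel: {parallel:.3f}s")
```

On a cached file the gap is modest; it is against real device latency, with many requests in flight at once, that parallelizing the I/O path pays off.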

Go visit the Storage Performance Council report for yourself and see how DataCore managed to deliver the lowest cost per IOPS in history ($0.08 per IOPS) using commodity disk, flash and server equipment along with its own storage virtualization software. That story is poised to improve over the next few months, as the company has its second round of SPC-1 benchmark tests certified, showing how you can squeeze over a million IOPS out of an economical server/storage kit of your own choice.
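SPC-1 price-performance is simply the total price of the tested configuration divided by the SPC-1 IOPS it sustains. A quick illustration with assumed round numbers (not the figures from DataCore's actual submission) shows how a commodity kit gets down to pennies per IOPS:

```python
# SPC-1 price-performance: tested system price / sustained SPC-1 IOPS.
# Assumed round numbers for illustration, not DataCore's submission.
system_price_usd = 40_000   # commodity servers, disk and flash, plus software
spc1_iops = 500_000         # sustained benchmark result

cost_per_iops = system_price_usd / spc1_iops
print(f"${cost_per_iops:.2f} per IOPS")   # -> $0.08 per IOPS
```

The lever is obvious from the formula: hold the hardware bill down with commodity parts while the software pushes the IOPS number up, and the ratio collapses.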

After listening to the woo peddlers of software-defined everything bend the idea to serve their proprietary ends, it is rewarding to see some reasons to keep the faith in the software-defined storage market. These are two of them.
