

Software-defined storage vendors make claims numbers don't support

Industry analysts claim software-defined storage is lowering cost and increasing efficiency, but the fine print says otherwise.

This article can also be found in the Premium Editorial Download of Storage magazine: "Copy management system saves storage space, cash."

I have recently been inundated with claims by software-defined storage vendors regarding the tremendous cost savings and administrative efficiencies of "converged" and "hyper-converged" storage products enabled by their preferred storage topologies and software-defined storage stacks. Whether direct-attached (converged) or internal (hyper-converged), software-defined storage vendors go on and on about how they provide the tools to consolidate, simplify and automate storage infrastructure so that fewer administrators are required.

Some cite industry analyst claims that software-defined storage is already reducing the cost per raw terabyte of storage capacity and increasing the number of raw terabytes an individual server administrator can manage. According to Gartner's IT Key Metrics Data 2016 report, for example, the average cost per raw terabyte -- not including facility costs -- has dropped to $2,009, 50% less than what it was in pre-SDS days, circa 2011. The same report claims that an individual admin can now manage 344 TB of raw storage capacity, up from 132 TB in 2011.
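Taken at face value, those figures imply storage that costs roughly half as much per raw terabyte and admins who each handle roughly two and a half times the capacity. A quick back-of-the-envelope check, using only the numbers cited above:

```python
# Sanity check on the cited Gartner figures (illustrative arithmetic only).
cost_per_tb_2016 = 2009                            # USD per raw TB, excluding facility costs
cost_per_tb_2011 = cost_per_tb_2016 / (1 - 0.50)   # "50% less" implies roughly double in 2011

tb_per_admin_2016 = 344                            # raw TB managed per admin, 2016
tb_per_admin_2011 = 132                            # raw TB managed per admin, 2011
admin_gain = tb_per_admin_2016 / tb_per_admin_2011 # growth in capacity per admin

print(round(cost_per_tb_2011))   # 4018 -- implied 2011 cost per raw TB
print(round(admin_gain, 1))      # 2.6  -- each admin handles ~2.6x the capacity
```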

These numbers would seem to bolster the vendor argument for SDS everywhere. That is, until you read the fine print.

The fine print

Gartner and others (vendors, analysts and so on) have sort of buried the lede on the storage efficiencies presumably enabled by SDS. Their numbers do show significant gains in the cost and management efficiencies of direct-attached or internal storage over networked storage topologies like "legacy" Fibre Channel fabric SANs and network-attached storage, but the story becomes a lot uglier when you look at utilization efficiency across the infrastructure.


By Gartner's calculations, we are becoming less efficient in how we use the capacity we buy, not more efficient. In fact, in the last five years, utilization efficiency has declined by 10%. One way to explain the discrepancy between these findings is to acknowledge that the cost per terabyte and number of terabytes per admin look at storage from the perspective of the server to which it is attached, and not storage that's shared among many servers (e.g., SAN or NAS).

With the tools provided by hypervisor vendors and SDS independent software vendors, a server administrator with relatively little storage smarts can probably do a decent job of managing the storage he or she sees -- that is, what's in the same chassis or in the same rack as the server host they manage. Consequently, the admin may feel pretty good about the efficiency he or she is achieving, maintaining a lean capacity cushion of, say, 25% to 30%. However, if every server in a large data center underutilizes its bundled storage by 25% to 30%, that adds up to a lot of wasted capacity.
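To see how that per-server slack compounds, here is a minimal sketch with hypothetical fleet numbers (the server count and per-server capacity below are illustrative assumptions, not figures from the Gartner report):

```python
# Hypothetical illustration: modest per-server slack strands a lot of capacity
# fleet-wide. Server count and per-server TB are assumptions, not cited data.
servers = 500          # hypothetical number of converged/hyper-converged hosts
tb_per_server = 20     # hypothetical raw TB of bundled storage per host
slack_pct = 30         # the "lean" 25% to 30% headroom each admin maintains

wasted_tb = servers * tb_per_server * slack_pct / 100
print(wasted_tb)       # 3000.0 raw TB sitting idle across the fleet
```

Each admin sees only a tidy 30% cushion on his or her own hosts; nobody sees the three petabytes of stranded capacity in aggregate.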

The inefficiencies of silos

Going further, the way hypervisor vendors have implemented their SDS and virtual SANs contributes to this inefficiency. If you run different hypervisors on different servers, their converged or hyper-converged infrastructure usually cannot be shared. VMware virtual machine disks (VMDKs), for instance, can't be used in a Microsoft Hyper-V environment unless you convert them to virtual hard disks (VHDs) first, and vice versa. This reflects the siloing of storage that is happening within hypervisor-controlled SDS models, as every vendor seems to have delusions of becoming what IBM was back in the day.

Silos are inefficient because they restrict the ability to share resources. They also make it difficult to manage storage assets across the enterprise in a holistic way. Many SDS independent software vendors have seized on this problem, enabling their products to manage storage even when it's deployed behind heterogeneous hypervisors, which is a step in the right direction. For a few, however, this takes the form of selling a hardware kit with the SDS software stack, a product that becomes a lock-in for the consumer, just like the "evil legacy storage" the hypervisor vendor advised us to replace.


In short, it is starting to feel like there is a tire stuck in the mud somewhere. Software-defined storage vendors villainize monolithic proprietary storage only to suggest an equally siloed and monolithic storage replacement. They decry complex, expensive, networked storage when, in fact, all storage is direct-attached, and complexity and expense are a function of a failure of the industry to enable the common management of heterogeneous infrastructure.

Nothing new about SDS

As for SDS, all storage is software-defined. It always has been, from the creation of System-Managed Storage on IBM mainframes in 1993 to the most overbuilt EMC controller-based storage arrays. The trick is to find a common, efficient way to manage all of it.

Some software-defined storage vendors appear to be getting the point. I expect Lenovo will introduce storage resource management capabilities that span its extensive product line (including discrete, converged and hyper-converged products) going forward. Products such as SANsymphony from DataCore and SAN Volume Controller from IBM subordinate hardware by using a virtualization service that can pool storage. Newcomers such as ioFABRIC, NooBaa and Pivot3, meanwhile, are among a group of vendors envisioning an SDS 2.0 with more heterogeneous management capabilities. Eventually, all software-defined storage vendors will get sensible about storage management or disappear.


A recent ioFABRIC survey found that storage capacity limitations and high cost were the leading concerns of 63% of 200 respondents, far outweighing concerns about performance limitations (16%). Most respondents said they wanted to save money and extend the life of existing infrastructure; only a few said they wanted to refresh their storage infrastructure right now.

Clearly, "disruptive" is not as compelling an argument as software-defined storage vendors originally thought.

About the author:
Jon William Toigo is a 30-year IT veteran, CEO and managing principal of Toigo Partners International, and chairman of the Data Management Institute.


This was last published in September 2016
