

Scale-out software-defined storage market menaces traditional storage

Scale-out software-defined storage is on the rise, to the detriment of traditional storage products and arrays.

This article can also be found in the Premium Editorial Download, Storage magazine: 2017 IT spending trends for data storage.

The pressure has never been greater on conventional storage vendors. A resurgence of on-premises offerings in the software-defined storage market combined with public cloud storage -- also based on scale-out SDS -- is causing the market to shrink for traditional storage products such as Dell EMC VNX, HPE Smart Array, and NetApp E-Series and FAS. I will also contend that external storage based solely on simple dual-controller architectures may be facing long-term extinction. Here's why.

Flash-first, scale-out SDS's secret weapon

The following graphic illustrates the massive pressure scale-out storage is placing on traditional storage across all workloads. The transition to SDS is taking time as businesses continue to validate the different use cases, but momentum toward the software-defined storage market is accelerating thanks to a pair of big trends happening simultaneously.

First, organizations are modernizing legacy applications, or writing new cloud-friendly ones, that lend themselves naturally to scale-out storage environments. Second, core technology trends are enabling the software-defined storage market to meet mission-critical workloads. For example, ubiquitous flash in all form factors means you can get nonvolatile memory technology closer to compute, speeding up application performance. Also, inexpensive networking with remote direct memory access (RDMA) protocols allows the latency of scale-out shared storage to approach that of most hardened, purpose-built external arrays.

[Graphic: How scale-out storage intersects with traditional storage -- SDS encroachment on traditional storage]

Additional factors in the SDS shift

Improved workload performance alone isn't enough justification to embark on the massive shift in storage architecture preference required by the software-defined storage market. The following attributes are equally important factors leading us to scale-out SDS.

The days of forklift upgrades based on large Capex projects every three to five years are over. Cloud economics have led to a preference for pay-as-you-grow infrastructure, which aligns perfectly with the modular nature of scale-out SDS.
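The pay-as-you-grow point hinges on scale-out data placement: adding a node should redistribute only a fraction of the stored data, not force a wholesale migration. A minimal consistent-hashing sketch illustrates the idea; this is purely illustrative and not any vendor's actual placement algorithm, and the node names and virtual-node count are assumptions.

```python
import hashlib
from bisect import bisect

def _h(key: str) -> int:
    # Hash a string key onto the ring's integer space.
    return int(hashlib.md5(key.encode()).hexdigest(), 16)

class Ring:
    """Toy consistent-hash ring with virtual nodes (illustrative only)."""
    def __init__(self, nodes, vnodes=64):
        self.vnodes = vnodes
        self.ring = []  # sorted (hash, node) pairs
        for n in nodes:
            self.add(n)

    def add(self, node):
        # Each physical node owns many points on the ring for smoother balance.
        for v in range(self.vnodes):
            self.ring.append((_h(f"{node}#{v}"), node))
        self.ring.sort()

    def locate(self, key):
        # A block lands on the first ring point at or after its hash.
        i = bisect(self.ring, (_h(key),)) % len(self.ring)
        return self.ring[i][1]

ring = Ring(["node-a", "node-b", "node-c"])
before = {k: ring.locate(k) for k in (f"blk{i}" for i in range(1000))}
ring.add("node-d")  # pay-as-you-grow: buy one more node, not a new array
after = {k: ring.locate(k) for k in before}
moved = sum(before[k] != after[k] for k in before)
print(f"{moved / 10:.0f}% of blocks moved")  # typically around a quarter, not 100%
```

Growing from three nodes to four moves only the blocks the new node takes ownership of, which is why scale-out clusters can expand incrementally instead of via forklift upgrade.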


Enterprises today expect storage technology refreshes to be continuous, simple and seamless. Software-defined storage, done correctly, enables these seamless perpetual upgrades. Scale-out SDS also enables more flexible storage product offerings covering broader sets of workloads, from all-flash storage for tier-one workloads to globally distributed object storage for tier-two and tier-three workloads.

The rise of hyper-converged infrastructure (HCI), driven by the large number of compute cores available on standard servers, has also led to a rise in flash-first SDS technology for tier-one workloads. Software-defined virtualization, meanwhile, is gaining popularity in the networking and compute infrastructure layers, leaving the storage layer as a natural extension of the broader trend.

While it is impossible to name all the vendors putting pressure on traditional storage architecture, here are a few examples:

Vendors mostly going after tier-one/two workloads:

  • Hewlett Packard Enterprise (HPE) SimpliVity with its OmniStack Data Virtualization Platform
  • Nutanix with its HCI products and Distributed Storage Fabric
  • Microsoft Azure Stack and Storage Spaces Direct
  • VMware Cloud Foundation and vSAN technology
  • startups Datera, Hedvig and Datrium (based on hybrid SDS)
  • other HCI vendors (e.g., Pivot3, Cisco, Dell EMC)

Vendors mostly going after tier-two/three workloads:

  • Cloudian Hyperstore
  • IBM Cloud Object Storage (Cleversafe)
  • Qumulo Scale-Out NAS
  • Scality RING Storage
  • startup Igneous Data Service

Vendor mostly going after tier-three/four workloads:

  • Cohesity Hyper-converged Secondary Storage

These are just some of the many vendors in the software-defined storage market available to fulfill on-premises needs. The list doesn't even include the biggest public cloud storage providers, such as Amazon Web Services, Azure and Google.

Traditional storage can stem the SDS tide, if done right

All is not dire for traditional purpose-built external storage. There's still a good chance that external all-flash arrays (AFAs) will be the products of choice for tier-zero workloads and many tier-one workloads. In order to maintain a place of preference in modern data centers, these products must meet the following criteria:

  • Be designed flash-first. Even with hybrid arrays, workload performance requirements should be predominantly met with a flash tier first and spinning media second.
  • Have capacity optimization built in with minimal performance impact (deduplication and compression are now table stakes for AFAs).
  • Architecturally scale beyond dual controllers.
  • Demonstrate the value of purpose-built hardware, such as ASIC offload or perhaps even lower cost.
  • Have advanced resilience features not as easily replicated in SDS technology: hardware-based replication, end-to-end data checksums and encryption, to name a few.
  • Offer quality of service and availability unmatched by most architectures in the software-defined storage market.
  • Provide perpetual upgrades like those already inherent to SDS; storage-as-a-service consumption and evergreen licensing are two ways to do that.
  • Focus on ease-of-use enhancements, such as the virtual machine-centricity that is so popular in HCI environments.
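The capacity-optimization criterion above -- deduplication and compression as table stakes -- can be sketched in a few lines. This is a toy illustration of the concept only, not how any array implements its inline data path; the 4 KB block size, SHA-256 fingerprints and zlib compression are assumptions for the sketch.

```python
import hashlib
import zlib

BLOCK = 4096  # assumed fixed block size for the sketch

def ingest(data: bytes):
    """Split a volume into blocks, dedupe by fingerprint, compress unique blocks."""
    store = {}   # fingerprint -> compressed unique block (physical capacity)
    layout = []  # ordered fingerprints to rebuild the logical volume
    for off in range(0, len(data), BLOCK):
        block = data[off:off + BLOCK]
        fp = hashlib.sha256(block).hexdigest()
        if fp not in store:                    # dedupe: store each block once
            store[fp] = zlib.compress(block)   # then compress the single copy
        layout.append(fp)
    return store, layout

def rebuild(store, layout) -> bytes:
    # Reassemble the logical volume from the deduplicated, compressed store.
    return b"".join(zlib.decompress(store[fp]) for fp in layout)

data = b"A" * BLOCK * 8 + b"B" * BLOCK * 8   # highly redundant toy volume
store, layout = ingest(data)
assert rebuild(store, layout) == data        # lossless round trip
raw = len(data)
stored = sum(len(v) for v in store.values())
print(f"{raw} bytes logical, {stored} bytes physical")
```

The sketch shows why the performance caveat matters: every write costs a hash and a compression pass, which is exactly the overhead real arrays must hide with inline, hardware-assisted implementations.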

With all the changes going on in the IT industry, the disruption to the storage market may be the most profound. The shift to public and private clouds has altered the way companies prefer to purchase technology, putting enormous pressure on traditional IT vendors and, disproportionately, the storage industry.

When an HCI system is installed, for example, the product still contains servers and networking; what's missing is a traditional external storage device. The other big change has been flash. For the first time, performance is in balance with capacity, allowing for more efficient storage devices. The days of massively over-provisioned storage are coming to an end.

There's light at the end of the tunnel for traditional storage vendors, however. The amount of data we need to store is growing so fast that, once we get through the architectural transition to scale-out SDS and eliminate inefficient storage devices, the overall storage market will rise once again.


This was last published in April 2017
