This content is part of the Essential Guide: Planning and management for a software-defined storage architecture

How SDS influences data storage infrastructure planning

Even though software-defined storage abstracts functionality from hardware, understanding the underlying infrastructure is still important.

What you will learn in this tip: According to Jon Toigo, even though software-defined storage abstracts functionality from hardware, planning for the underlying data storage infrastructure is still important.

The goal of software-defined storage (SDS) is to divorce storage functionality from the physical data storage infrastructure. Ideally, this enables the "agile" allocation, re-allocation and de-allocation of storage resources. Put another way, SDS provides a method for separating storage services from the storage kit, providing volume persistence even as underlying hardware and interconnects change.

This capability is particularly apropos to applications that are abstracted from server hardware or otherwise "virtualized" and capable of moving from one server, network or storage stack to another.

In reality, storage has proven to be an impediment to accomplishing such effortless workload transitions in anything approaching a smooth and efficient manner -- particularly in settings where physical storage is connected in a Fibre Channel fabric SAN. Physical SANs are complex infrastructures with hard-coded routes established between servers and storage gear. Moving an application to another set of server gear will likely require changes to application configuration settings to reflect modified pathways to the same storage resources. The only alternative is to break up the SAN, return to direct-attached (or internal) storage configurations, and then rely on synchronous data replication between every storage array that supports every server that may potentially host a particular guest machine. The result is usually a costly and difficult-to-maintain mess.

With SDS, the storage volume presented to a virtualized workload or guest machine is itself an abstraction rather than a physical connection to physical resources. This SDS volume can move with workloads from host to host, with SDS services brokering new routes to the same storage resources on the fly. Thus, the need to replicate data behind every prospective host is eliminated.
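The idea of a volume that keeps its identity while its physical route changes can be sketched in a few lines. This is a hypothetical illustration, not any real SDS product's API; the class, volume IDs and path strings are all invented for the example.

```python
# Illustrative sketch only: SdsVolume, its IDs and path strings are invented,
# not drawn from any real SDS product or API.

class SdsVolume:
    """A logical volume that persists while its physical paths change."""

    def __init__(self, volume_id, backend_lun):
        self.volume_id = volume_id      # stable identity seen by the workload
        self.backend_lun = backend_lun  # physical resource behind the volume
        self.current_path = None

    def attach(self, host):
        # The control layer brokers a fresh route from the new host to the
        # same backend resource; the workload's view of the volume never changes.
        self.current_path = f"{host}->{self.backend_lun}"
        return self.current_path


# A workload migration: same volume identity, new path, no replica needed
# behind the destination host.
vol = SdsVolume("vol-001", "lun-42")
print(vol.attach("host-a"))  # host-a->lun-42
print(vol.attach("host-b"))  # host-b->lun-42
```

The point of the sketch is the division of labor: the workload holds onto `volume_id` while the control layer re-brokers `current_path` on each move.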

In addition to brokering capacity, the SDS layer should broker performance: balancing I/O among all available paths to data and selecting the pathways that serve each application's I/O most efficiently, based on application priority. Intelligent load balancing should be part of the SDS control layer.
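One simple way a control layer might combine path efficiency with application priority is sketched below. Everything here is an assumption for illustration: the path names, latency and queue-depth figures, the 3.0 ms tier cutoff, and the scoring rule are all invented, not taken from any real SDS implementation.

```python
# Hedged sketch of priority-aware path selection; all names, numbers and the
# tier threshold are invented for the example.

def pick_path(paths, high_priority):
    """Choose a path for an application's I/O.

    paths: list of (name, latency_ms, queue_depth) tuples.
    High-priority apps may use every path; low-priority apps are steered
    away from the fastest tier (here, latency under 3.0 ms) so critical
    I/O keeps headroom on those routes.
    """
    candidates = paths if high_priority else [p for p in paths if p[1] >= 3.0]
    # Among eligible paths, pick the one with the least latency + congestion.
    return min(candidates, key=lambda p: p[1] + p[2])[0]


paths = [("fc-0", 2.0, 8), ("fc-1", 1.5, 2), ("iscsi-0", 6.0, 1)]
print(pick_path(paths, high_priority=True))   # fc-1
print(pick_path(paths, high_priority=False))  # iscsi-0
```

A real control layer would of course weigh live telemetry rather than static tuples, but the shape is the same: score every eligible path, pick the cheapest.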

Since the services afforded to application data should include data protection guarantees appropriate to the restore priority and criticality of the application itself, the SDS layer should provide volumes to which protective services are also linked. An "always-on" application, for example, may need to be provided with a virtual volume that synchronously or asynchronously replicates its data to another volume on a separate data storage infrastructure, creating a highly available active/active cluster configuration. Less critical apps with a higher tolerance for downtime may not require such protection services and may be adequately protected by nightly backups to tape or virtual tape at an off-site location.
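The tiering described above can be expressed as a policy table that the SDS layer consults at provisioning time. This is a minimal sketch under assumed names: the tier labels, policy fields and `provision_volume` helper are hypothetical, though the tiers mirror the examples in the paragraph (synchronous/asynchronous replication for always-on apps, nightly off-site backup for tolerant ones).

```python
# Hypothetical policy table: tier names and fields are invented to mirror the
# protection tiers discussed in the text.

PROTECTION_POLICIES = {
    "always-on": {"replication": "synchronous",
                  "target": "second-infrastructure",
                  "topology": "active/active"},
    "business":  {"replication": "asynchronous",
                  "target": "second-infrastructure",
                  "topology": "active/passive"},
    "tolerant":  {"replication": None,
                  "backup": "nightly-to-offsite-tape"},
}

def provision_volume(app_name, criticality):
    """Return a volume descriptor with protection services linked in."""
    policy = PROTECTION_POLICIES[criticality]
    return {"volume": f"{app_name}-vol", "protection": policy}


print(provision_volume("orders-db", "always-on"))
```

Binding protection to the volume at creation, keyed on application criticality, is what lets the SDS layer deliver "services appropriate to restore priority" without per-array configuration.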

The bottom line is that SDS doesn't change the fundamentals of data storage infrastructure planning, which always begins with understanding one's applications before designing the infrastructure. Simply deploying a one-size-fits-most storage virtualization layer is a good way to drive storage costs up and to the right.

