Let's get straight to the point: Software-defined storage is a meaningless term that underscores, but sidesteps, the failure in most IT shops to manage the physical storage infrastructure. It's little more than a rebranding of terms vendors have been tossing around for the last few years, such as storage hypervisors, private storage clouds and even storage virtualization. The idea of a software-defined storage architecture is to let anyone slice, dice and provision storage capacity and services such as various types of data protection. It's also supposed to solve the problem of keeping storage volumes connected to virtual workloads as those workloads move around the infrastructure.
But software-defined storage (SDS) doesn't correct or even address the underlying issue. What actually impairs the efficiency of storage allocation, hampers storage resiliency and persistence, and drives the cost of storage so high is the lack of infrastructure monitoring and management. We react to hardware faults; we don't manage them. So, let's look at how we landed in this SDS craze and what it means for storage professionals.
The concept of software-defined storage is simple. The act of storing data to a volume is an inherently software-based function. The hardware for storing data is secondary and leverages commodity kit: all disks come from one of two vendors, hardware controllers are increasingly server motherboards running a commodity OS, and so on. Therefore, SDS advocates say, abstracting software functionality away from hardware is a natural or evolutionary advancement in storage architecture.
The core objective of SDS is to make it much easier to provision and use storage resources. Gone is the need to worry about physical LUNs, World Wide Names or port addresses. In a virtualized storage infrastructure, aka software-defined storage architecture, that complexity is masked from users who require a storage volume resource that provides the capacity and performance attributes that are suited to the application workload they're running.
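To make the provisioning idea concrete, here is a minimal sketch of the capability-based request model the article describes. Everything here is hypothetical and illustrative (the `VolumeRequest`, `PhysicalLun` and `StoragePool` names are not from any real product): the consumer asks for capacity, performance and protection attributes, and the SDS layer keeps LUNs, World Wide Names and port addresses to itself.

```python
from dataclasses import dataclass

@dataclass
class VolumeRequest:
    """What the consumer specifies: workload attributes, not hardware details."""
    capacity_gb: int
    min_iops: int
    protection: str  # e.g. "mirror" or "raid5"

@dataclass
class PhysicalLun:
    """What the SDS layer manages internally: hardware-specific identity."""
    wwn: str
    port: str
    capacity_gb: int
    iops: int

class StoragePool:
    """Hypothetical SDS abstraction that matches a capability request to a LUN."""
    def __init__(self, luns):
        self.luns = luns

    def provision(self, req: VolumeRequest) -> str:
        for lun in self.luns:
            if lun.capacity_gb >= req.capacity_gb and lun.iops >= req.min_iops:
                # The consumer receives an opaque volume handle; the WWN and
                # port address never leave the pool.
                return f"vol-{hash((lun.wwn, req.capacity_gb)) & 0xFFFF:04x}"
        raise RuntimeError("no LUN satisfies the request")

pool = StoragePool([PhysicalLun("50:01:43:80:12:0b:2a:10", "fc0", 500, 20000)])
handle = pool.provision(VolumeRequest(capacity_gb=100, min_iops=5000, protection="mirror"))
```

Note that the matching logic above is deliberately naive; the point is only where the abstraction boundary sits, not how placement decisions are made.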
It's important to note that implicit in the description of (and case for) software-defined storage is the idea that expert storage administrators aren't as available (or affordable) in the current do-more-with-less climate that pervades today's IT shops. Virtual server administrators, who tend to know little about storage hardware or connection technologies, are being called upon to ensure that the right storage resources are allocated to applications and their data. Just as the operation of a coin-operated coffee machine doesn't require the skills of a barista, SDS advocates argue that storage resource provisioning shouldn't require any special skills in storage.
This concept is extremely dangerous, engendering greater dependency on hardware vendors to configure gear, tune it when problems arise and repair it when components fail -- all without involving the customer IT folks (except when it comes to handling the bill). It could also be argued that outsourcing responsibility for the physical infrastructure to external agents (vendors) blunts the ability of consumers to innovate their storage architecture by limiting their ability to manage what they build. IT managers already complain of skills shortages in job applicants; SDS doesn't resolve the problem, it just layers on a better user interface.
Another argument for software-defined storage is that it makes the storage resource more agile. When a virtualized workload transitions from server host to server host (via vMotion, for example), its connections to back-end storage should update automatically. That way, the consequences of re-hosting workloads (for example, adjusting to different physical routes to storage) are transparent to the application and its users.
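The re-hosting behavior described above can be sketched as a tiny path-mapping layer. This is a hypothetical illustration, not any vendor's mechanism: the `PathManager` name and the device paths are invented, and the point is simply that the volume identity the workload uses stays stable while the host-side path behind it changes.

```python
class PathManager:
    """Hypothetical sketch of SDS path re-mapping during workload migration."""
    def __init__(self):
        # volume id -> (current host, host-side physical path); illustrative only
        self.paths = {}

    def attach(self, volume: str, host: str) -> None:
        self.paths[volume] = (host, f"/dev/mapper/{host}-{volume}")

    def migrate(self, volume: str, new_host: str):
        # The volume id the application references never changes; only the
        # path behind it is re-resolved for the new host.
        self.attach(volume, new_host)
        return self.paths[volume]

pm = PathManager()
pm.attach("vol-01", "host-a")          # workload starts on host-a
host, path = pm.migrate("vol-01", "host-b")  # workload vMotions to host-b
```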
Not surprisingly, the current fascination with SDS follows the 2012 acquisition (at an extremely high dollar cost) of software-defined networking company Nicira Inc. by VMware Inc. Now, virtually all infrastructure software is hyped as software-defined.
Many forms of storage virtualization exist in storage systems today, including RAID, file systems and various types of storage virtualization software. Current-generation storage virtualization software (such as DataCore Software's SANsymphony-V) and hardware/software appliances (such as IBM's SAN Volume Controller) are more or less both hardware-agnostic (it doesn't matter whose branding is on the physical hardware) and workload-agnostic (it doesn't matter what hypervisor or application software is running on the server). By contrast, current SDS offerings tend to be part of a proprietary software stack, such as vSphere.
The goal of software-defined storage is to divorce the storage control plane from the hardware plane so that resources can be presented simply to end users and apps. To realize the purported value of SDS, consumers would be well advised to buy technology that's truly independent from both hardware and server hypervisors to avoid costly lock-ins.
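One way to picture the "divorce the control plane from the hardware" goal, and the lock-in risk of failing at it, is an interface that any backend must implement. This is a hypothetical sketch (the `StorageBackend` and `CommodityDiskBackend` names are invented): if consumers code only against the abstract interface, the vendor behind it becomes swappable.

```python
from abc import ABC, abstractmethod

class StorageBackend(ABC):
    """Hypothetical hardware- and hypervisor-agnostic control-plane interface.
    A consumer that codes against this never touches vendor-specific calls."""
    @abstractmethod
    def create_volume(self, name: str, size_gb: int) -> str: ...
    @abstractmethod
    def attach(self, volume_id: str, host: str) -> None: ...

class CommodityDiskBackend(StorageBackend):
    """One possible implementation; any vendor could supply another."""
    def __init__(self):
        self.volumes = {}

    def create_volume(self, name: str, size_gb: int) -> str:
        vol_id = f"{name}-{len(self.volumes)}"
        self.volumes[vol_id] = {"size_gb": size_gb, "host": None}
        return vol_id

    def attach(self, volume_id: str, host: str) -> None:
        self.volumes[volume_id]["host"] = host

# The consumer's code depends only on the abstract type:
backend: StorageBackend = CommodityDiskBackend()
vid = backend.create_volume("app-data", 200)
backend.attach(vid, "esx-host-01")
```

The design point is the type annotation on `backend`: swapping `CommodityDiskBackend` for any other implementation leaves the consumer's two calls unchanged, which is exactly the independence the article argues buyers should insist on.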
SDS architecture: Examining what's underneath
Even though the goal of a software-defined storage architecture is to separate storage management and functions from the hardware, data storage professionals won't see the best possible results without proper planning of the underlying infrastructure. Because hardware features and limitations translate up to the software layer, it's important to understand the storage platform running underneath to avoid high costs and ensure top performance.
Even though software-defined storage abstracts functionality from hardware, understanding the underlying infrastructure is still important.
Spreading SDS functionality across arrays
Most storage pros would agree that software-defined storage is simply a way to provide persistent storage with sufficient capacity and performance to an application, while the underlying physical volume is masked from the end user. To help describe how SDS separates the storage application from the hardware, Jon Toigo explains the technology in the context of storage hardware and server virtualization in the following tip.
Products, tips and tricks for effective software-defined storage implementation
SDS might make management easier in the end, but implementation isn't necessarily one-size-fits-all. Instead, storage pros need to consider whether it will be implemented via application programming interfaces or mount points, and whether they will run into any support issues with their hypervisor or hardware vendors. Determining these aspects of the architecture beforehand helps companies save money and improve performance so that their investment in software-defined storage will pay off.
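The API-versus-mount-point decision mentioned above can be illustrated with a short sketch. Both functions are hypothetical (the `client` object stands in for whatever REST API or SDK a given vendor exposes, and the directory path stands in for a mounted volume); the contrast is in who does the work, a program calling an interface or an application simply writing to a filesystem path.

```python
import os
import tempfile

def provision_via_api(client, size_gb: int) -> str:
    """API-style consumption: an orchestrator or application requests a
    volume programmatically. `client` is a placeholder for a vendor SDK."""
    return client.create_volume(size_gb=size_gb)

def provision_via_mount(base_dir: str) -> str:
    """Mount-point-style consumption: the SDS layer surfaces the volume as an
    ordinary directory; the application needs no storage awareness at all."""
    path = os.path.join(base_dir, "sds-volume")
    os.makedirs(path, exist_ok=True)
    return path

# Mount-point path demonstrated with a temp directory standing in for the volume:
mount = provision_via_mount(tempfile.gettempdir())
with open(os.path.join(mount, "data.txt"), "w") as f:
    f.write("hello")
```

The trade-off sketched here is the one the section raises: the API route ties you to a vendor's interface (a potential support issue), while the mount-point route keeps applications oblivious but pushes all policy decisions down into the SDS layer.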
Jon Toigo explains how to set up SDS products in an environment so users can achieve the best performance and capacity.
SRM tools complement SDS management options
One of the main reasons storage administrators look to software-defined storage is that it centralizes management. But that doesn't mean they can overlook ensuring that hardware capacity and performance are up to par. That's because SDS does nothing to manage the physical storage, just the storage application. To effectively monitor both the software and hardware aspects of a software-defined storage architecture, it's a good idea to include storage resource management (SRM) tools for everyday monitoring.
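The kind of everyday hardware monitoring SRM tools provide can be reduced to a minimal example. This sketch assumes local filesystem paths standing in for managed volumes (real SRM products poll arrays over vendor APIs or SNMP, which is out of scope here); it simply flags any volume whose used capacity crosses a threshold.

```python
import shutil

def check_capacity(path: str, warn_pct: float = 80.0) -> dict:
    """Minimal SRM-style capacity check over a local path standing in for a
    managed volume: report used percentage and whether it crosses warn_pct."""
    usage = shutil.disk_usage(path)
    used_pct = 100.0 * (usage.total - usage.free) / usage.total
    return {
        "path": path,
        "used_pct": round(used_pct, 1),
        "warn": used_pct >= warn_pct,
    }

report = check_capacity("/")
```

A real deployment would run checks like this on a schedule across every physical volume behind the SDS layer, which is precisely the hardware-side visibility the article argues SDS alone does not give you.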
While software-defined storage eases management of storage applications, effective SRM is still necessary to monitor the hardware.