Guide to software-defined everything in the data center
While software-defined storage is receiving lots of buzz, it isn't as new an idea as it may seem; storage virtualization vendors have been working toward it for years.
Since this column will be published shortly after the 2013 VMworld conference, I'm quite confident that the trendiest marketing buzzword among us now -- and post-VMworld -- is "software-defined."
The idea is that virtualized, software-based services will create an infrastructure that can be dynamically defined and can adapt to changing business needs faster than infrastructures built on independently managed physical systems. It will be possible to provision optimal logical services irrespective of the makeup, or even the location, of the physical infrastructure. Software-defined everything is a hot topic, so it's no surprise that vendors have been touting software-defined storage (SDS).
Is software-defined storage for real?
The concept of software-defined storage strikes many data storage veterans as a pretty radical one. We've long been able to time-slice, dice and virtualize CPUs, memory and even Ethernet networks so that they can be shared and used flexibly by many different applications.
But it's a stretch to overlay that same model on storage. In contrast to a CPU or DRAM, which processes or holds a transient digital bit that's on its way after a few microseconds, storage is intrinsically physical. Digital storage permanently parks bits on disk, and we collect bunches of them over time. Essentially, a bucket of data accumulates in our environment and must be managed for the long term.
So, how is it possible to software-define something that is so inherently physical?
Software-defined storage not such a new idea
In reality, we've been chasing the possibility of software-defined storage for years, and we're finally approaching a point where it might be practical. Moreover, the route we've followed to get where we are now makes the idea of SDS seem a little less scary, while shedding some light on its real potential.
The industry started its SDS quest when the first storage virtualization pioneers rolled out products. Those innovators were trying to make storage more malleable in the face of ongoing data growth amid constant environmental change. Arguably, storage virtualization had some rough spots for a number of years. Many vendors didn't seem to get the basic recipe right, but a number of them prevailed and are still running strong today, most notably Hitachi Data Systems with its Universal Storage Platform line, IBM's SAN Volume Controller and NetApp with its V-Series arrays. All three are molding heterogeneous storage virtualization offerings into tools that can work more closely with a virtual infrastructure. On the software-only side, DataCore and FalconStor had early and successful storage virtualization entries.
But storage virtualization still faces a couple of obstacles when it comes to creating "software-defined" storage. The biggest one is that storage virtualization is still pretty physical. Virtualizing storage might make heterogeneous collections of storage more dynamic and capable, but storage remains connected to a specific physical point -- an appliance or controller -- in the fabric. For many users, this is no longer OK; they need storage that can match the newfound mobility and fluidity of the rest of their infrastructure.
Storage virtualization has made its mark
Fortunately, virtualization has influenced nearly all storage system architectures and, combined with another architectural trend, is bringing us closer to the possibility of software-defined storage.
First, storage virtualization has changed how storage systems of all types handle physical controllers and disks, even when they're just inside a single array. This homogeneous, in-array storage virtualization has let storage vendors make much better use of the devices inside the array, leaving them less tied to the underlying physical controllers and disks.
Second, over the past couple of years, storage systems have moved increasingly toward a software-centric architecture, dispensing with requirements for specialized hardware and running entirely on standard x86 hardware. While there's still specialized hardware at the high end for systems that are built to operate at extreme scale and performance, a majority of midrange storage systems run on standard x86 hardware.
Those two evolutions in storage systems appear poised to usher in the age of virtualized storage and further the quest for software-defined storage. A number of vendors now offer their storage systems as virtual machines that run within the virtual infrastructure. Because the storage system no longer depends on any particular type of disk and runs on standard x86 hardware, virtualizing the entire storage system is an easy step for the storage vendor. Today, most of these implementations are packaged as virtual storage appliances (VSAs). Among the vendors offering VSAs are FalconStor, Hewlett-Packard, NetApp, Nexenta Systems, StorMagic and VMware. The idea is that a VSA can be provisioned on top of a larger pool of physical storage -- often direct-attached storage, though it can also be a SAN or network-attached storage. A VSA makes it easy to carve up storage space, reclaim stranded capacity and deliver enhanced storage functionality that's more easily managed within the virtual infrastructure.
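To make the pooling idea concrete, here's a minimal sketch of the model a VSA exposes: physical capacity from direct-attached disks, SAN or NAS is aggregated into one pool, logical volumes are carved from it on demand, and freed capacity is reclaimed. All names here (StoragePool, carve, reclaim) are illustrative assumptions, not any vendor's actual API.

```python
class StoragePool:
    """Hypothetical model of a VSA's capacity pool (illustrative only)."""

    def __init__(self, capacity_gb):
        self.capacity_gb = capacity_gb
        self.volumes = {}  # volume name -> provisioned GB

    def free_gb(self):
        # Unallocated capacity remaining in the shared pool
        return self.capacity_gb - sum(self.volumes.values())

    def carve(self, name, size_gb):
        """Provision a logical volume from the shared pool."""
        if size_gb > self.free_gb():
            raise ValueError("insufficient free capacity in pool")
        self.volumes[name] = size_gb

    def reclaim(self, name):
        """Return a volume's capacity to the pool (stranded-space recovery)."""
        return self.volumes.pop(name)


# Example: pool 1 TB of direct-attached disks, carve two volumes,
# then reclaim one so its capacity returns to the pool.
pool = StoragePool(capacity_gb=1000)
pool.carve("vm-datastore", 400)
pool.carve("backup-target", 300)
pool.reclaim("backup-target")
print(pool.free_gb())  # → 600
```

The point of the sketch is the indirection: applications see logical volumes, while the pool underneath can be backed by whatever physical storage the VSA sits on.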
Storage is still physical, but more flexible
VSAs don't make storage any less physical, but they offer several important benefits.
- Storage can become more mobile. While storage might still be tied to physical bits in a virtual storage appliance, it can be moved around (often without disruption), which could help put an end to disruptive hardware changes and data migrations.
- VSA storage can be more adaptable than physical storage systems. Capacity expansion can look just like expanding the capacity of any VM, rather than the disruptive process that expanding a physical storage system requires. Moreover, if the VSA can scale out, adding capacity can be as easy as deploying another VSA.
- Users gain the ability to deploy advanced storage capabilities anywhere a workload needs it, whether on the premises or in a remote cloud.
Recent hands-on testing in the Taneja Group's labs has demonstrated that VSAs aren't just the toys or small-shop storage products they were initially perceived to be. VSAs can compete with their hardware brethren, and they make efficient use of virtual infrastructure resources. While they may not be the epitome of software-defined storage that's highly orchestrated and programmatically operated, they come pretty close. More importantly, VSAs are here now and are a practical enabler of software-defined storage that can add agility to a data center infrastructure.
About the author:
Jeff Boles is a senior analyst at Taneja Group.