
Jon Toigo explains software-defined storage

Like nearly everyone else, Jon Toigo has grappled with the definition of software-defined storage ... until he realized there are three.

What I most wish for in 2015 is some sort of lucidity in the terminology we're using to describe storage stuff. Heck, one of the things that originally attracted me to IT was the disciplined use of language -- the precise, though sometimes stilted-sounding application of terminology to describe products and processes. It was a sharp contrast to the amorphous doublespeak of marketing or politics, where up might also mean down and hot might just be something that is downright cool.

Such linguistic discipline has long since vaporized, and we're left with the doubly hard challenge of not only comparing rival solutions to technical problems as we seek the best fit for our environment, but trying to understand exactly what vendors are trying to sell us.

In 1999, the narrative around storage held up direct-attached storage (DAS) as the bogeyman. By attaching storage directly behind servers, we limited access to the data residing there. If the server was misconfigured or lost power, the doorway to the data was closed.

NAS and SAN solve DAS shortcomings ... or not

One solution to the DAS "island of storage" problem was to deploy network-attached storage (NAS), an appliance with tremendous sharing potential. Except NAS was DAS. Under the covers, the "thin server" head on the NAS appliance was (and is) a motherboard with an operating system and some application software. The disks are direct-attached to this server and shared across a network via software running on the NAS "head." NAS was no different from a file service delivered by a server connected to a DAS storage array, but vendors could bundle the two pieces and sell them as a unified solution, pitching it as superior to a do-it-yourself cobble of server hardware from one vendor and storage hardware from another.
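To make that point concrete, here's a minimal sketch in Python of what a NAS "head" amounts to: an ordinary server exporting a local, direct-attached directory over the network. The export path and port are hypothetical, and HTTP stands in for the NFS/SMB services a real appliance would run.

```python
# A toy NAS "head": a plain server with direct-attached disks, plus
# software that shares them over a network. HTTP is a stand-in here
# for NFS/SMB; the directory path is a hypothetical mount point.
from functools import partial
from http.server import HTTPServer, SimpleHTTPRequestHandler

# The "direct-attached" storage: just a local filesystem path.
handler = partial(SimpleHTTPRequestHandler, directory="/mnt/local_disks")

# The "thin server" head: a network service exporting that local storage.
HTTPServer(("0.0.0.0", 8080), handler).serve_forever()
```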

Then there was that other DAS solution called a storage-area network (SAN). Only, a SAN wasn't a network by any technical definition of the term; a SAN provided a fabric interconnect, based on the Fibre Channel (FC) protocol, that used a rudimentary physical layer switch to make and break DAS connections at high speed, thereby delivering some network-like functions. An elegant 10-page argument to this effect, written at the time by a highly respected senior engineer at Cisco Systems, can be found in the historical records of the ANSI T11 Fibre Channel standards committee. When Cisco entered the FC SAN switch market a couple of years later, though, the company fell in line with market speak and referred to FC as a "network" protocol.

SANs, again, deliver storage to servers as DAS. This is true whether the fabric is created using FC, iSCSI, SAS, InfiniBand, or paper cups and string. So, like simple DAS and NAS before it, all SAN storage is ultimately direct-attached.
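A hypothetical illustration of that point: to the host operating system, a SAN LUN is indistinguishable from a local disk. The Linux device path below is an assumed example; the read call would be identical either way.

```python
# The host cannot tell (and does not care) whether /dev/sdb is a local
# SATA disk or a LUN delivered over FC or iSCSI -- the block interface
# is the same. Device path is a hypothetical Linux example.
import os

BLOCK_SIZE = 4096

fd = os.open("/dev/sdb", os.O_RDONLY)
try:
    first_block = os.read(fd, BLOCK_SIZE)  # identical call for DAS or SAN
    print(f"Read {len(first_block)} bytes from block 0")
finally:
    os.close(fd)
```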

Sharable storage was the differentiator

What differentiated SAN and NAS from traditional DAS was that the storage could be shared. A few DAS rigs featured sharing architectures of one type or another, but sharing was chiefly the domain of SAN and NAS appliances. As storage engineers, we selected a product based on the number of storage devices we needed to connect (serial SCSI and its variants, including FCP, iSCSI and SAS, provided lots of connection points, while parallel SCSI was much more limited) and how many servers or desktops needed to share the storage capacity (traditional DAS was considered a bit less friendly to multiuser sharing than NAS or SAN). We also considered cost, vendor affinity and other miscellaneous factors.

SANs go virtual -- sort of

Fast forward to today, and we're being told that virtual SANs are the new topology, fixing the inflexibility and expense of SANs by returning all storage to a direct-attached configuration behind each virtual server host. Moreover, such storage is software-defined in that the value-added services that would normally be placed on the storage array controller are abstracted into a freestanding software layer that lives on the server, reducing the kit to just a bunch of disks (JBOD). Instead of sharing storage with different hosts, we'll replicate data continuously between several copies of these server/storage complexes that we now call cluster nodes.
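A toy sketch of that replication idea follows: each write commits to the node's own direct-attached storage, then is mirrored to the other cluster nodes. The peer addresses, wire format and send_to_peer() helper are all invented for illustration -- no vendor's actual protocol is implied.

```python
# Hypothetical sketch: instead of many hosts sharing one array, each
# write lands on the local node and is mirrored synchronously to peers.
import socket

PEERS = [("10.0.0.2", 9000), ("10.0.0.3", 9000)]  # hypothetical cluster nodes

def send_to_peer(addr: tuple, payload: bytes) -> None:
    """Push one write to a peer node over TCP (toy wire format)."""
    with socket.create_connection(addr, timeout=5) as sock:
        sock.sendall(len(payload).to_bytes(8, "big") + payload)

def replicated_write(local_path: str, data: bytes) -> None:
    # 1. Commit to this node's own direct-attached storage...
    with open(local_path, "ab") as f:
        f.write(data)
    # 2. ...then mirror to every peer before acknowledging the write.
    for peer in PEERS:
        send_to_peer(peer, data)
```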

Only, that definition of software-defined storage (SDS) or virtual SANs doesn't provide clarity. VMware places the storage value-added services into a software layer that is part of its server hypervisor, adding software expense on par with the cost of the attached physical flash and disk storage. From a recent industry test lab report, we learned that the software license and hardware acquisition costs for a minimum three-node VMware Virtual SAN (vSAN) cluster work out to between $10,000 and $15,000 per node. That's a bit pricey for smaller -- and many larger -- organizations.
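Working out the arithmetic those per-node figures imply -- three nodes at $10,000 to $15,000 apiece -- puts the minimum cluster somewhere between $30,000 and $45,000:

```python
# Cluster cost math from the figures cited above.
NODES = 3
LOW, HIGH = 10_000, 15_000  # per-node license + hardware cost range

print(f"Minimum cluster cost: ${NODES * LOW:,} to ${NODES * HIGH:,}")
# Minimum cluster cost: $30,000 to $45,000
```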

Moreover, in the VMware approach, the storage is dedicated only to VMware hypervisor-instantiated workloads. Indications are that in a growing number of shops only part of the infrastructure operates a VMware hypervisor. Chances are good you'll also have some Microsoft Hyper-V, perhaps some Citrix or KVM and almost certainly some un-virtualized transaction processing workload -- all with storage requirements as well -- that can't partake of the VMware vSAN storage.

Hypervisor-agnostic apps offer hope for software-defined storage

But there is some good news. Maxta, StarWind Software and StorMagic all appear to have hypervisor-agnostic software kits for doing the same thing VMware does with vSAN (a concept StarWind invented, by the way), but in a manner that will enable the storage to be shared among multiple workloads running under multiple hypervisors. But is that still software-defined storage?

More good news: With storage virtualization wares from DataCore Software and IBM, we're seeing the ability to service not only the storage needs of all hypervisor-virtualized workloads, but that recalcitrant group of mission-critical transaction processing applications that no one wants to virtualize. But is that software-defined storage?

Perhaps, as we enter the New Year, some effort will be made to refine the terminology of storage so we can get to a more useful set of descriptions. Right now, I see three centers of gravity in the debate around the definition of software-defined storage:

  • Hypervisor-constrained SDS: Storage services are managed by hypervisor software directly and used by the workload virtualized by that hypervisor exclusively.
  • Hypervisor-agnostic SDS: Storage services are managed by a third-party software product that enables managed storage to be shared across more than one type of hypervisor.
  • Workload-agnostic SDS: Storage services and capacity are managed by a third-party product that enables the allocation and sharing of storage assets across all workloads, whether virtualized or not.
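One way to make those three definitions concrete is a toy model of which workloads each flavor of SDS can serve. The enum values and the can_serve() helper below are illustrative assumptions, not anyone's product taxonomy.

```python
# A toy model of the three proposed SDS categories.
from enum import Enum
from typing import Optional

class SDSFlavor(Enum):
    HYPERVISOR_CONSTRAINED = "hypervisor-constrained"
    HYPERVISOR_AGNOSTIC = "hypervisor-agnostic"
    WORKLOAD_AGNOSTIC = "workload-agnostic"

def can_serve(flavor: SDSFlavor, hypervisor: Optional[str], owner: str) -> bool:
    """Can this SDS flavor serve a workload running under `hypervisor`
    (None = not virtualized), where `owner` is the hypervisor that
    manages the storage services?"""
    if flavor is SDSFlavor.HYPERVISOR_CONSTRAINED:
        return hypervisor == owner        # e.g. vSAN: that hypervisor's VMs only
    if flavor is SDSFlavor.HYPERVISOR_AGNOSTIC:
        return hypervisor is not None     # any hypervisor, but virtualized only
    return True                           # workload-agnostic: everything

# Example: a bare-metal transaction processing app under each definition.
for f in SDSFlavor:
    print(f.value, "serves bare-metal app:", can_serve(f, None, "VMware"))
```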

I don't care what the final terminology is, as long as the industry leaves behind meaningless discriminators like "direct-attached" and stops with the nonsense that tries to equate hypervisor-constrained approaches with positive-sounding attributes like "unified" or "hyper-converged." In truth, agnostic approaches are more cost-effective and I/O-efficient.

That's my two centavos.

About the author:
Jon William Toigo is a 30-year IT veteran, CEO and managing principal of Toigo Partners International, and chairman of the Data Management Institute.
