There might be a debate over what constitutes a software-defined data center (SDDC), but whatever topics that argument covers, it needs to include disaster recovery.
Lately, I've been preparing seminar materials for a three-hour presentation on the disaster recovery (DR) requirements for software-defined data centers (SDDCs). That's a combination of topics you rarely find: The software-defined crowd likes to say that the need for DR and business continuity planning goes away once you implement a software-defined infrastructure. In an SDDC, everything is virtualized and highly available by design. So, we'll have to see how a mash-up of SDDC and DR resonates with seminar attendees. The seminar is in London, and next to New York, they're probably the toughest crowd out there -- big on architecture over marketecture.
As part of my preparation, I'm studying all the source materials I can scavenge and talking to many startups seeking the golden ring of SDDC. (That ring is less about the realization of an SDDC vision than some outcome in which a venture-capital-funded whiteboard-and-slideware operation magically translates into a billion dollar acquisition by name-brand vendors and a profitable exit for early investors and founders.) What I'm finding is that just making sense of software-defined is itself a huge challenge. Beyond the silly nomenclatural issues that I've written about here in the past, there are significant problems with the architectural components themselves, or at least with the views of different SDDC advocates about what the components are and what they should do.
Take storage for example. SDDC thinkers posit that the storage used in a software-defined data center should be software-defined storage (SDS) -- a virtualized resource that, like software-defined networks and software-defined processors, can be allocated quickly to build application hosting platforms on the fly.
Well, that sounds great to me; I'm a huge advocate of storage virtualization. First, it lets you aggregate the capacity of a lot of storage rigs (regardless of the vendor names on their bezels) and serve that capacity up more nimbly as virtual volumes. Second, the storage virtualization engine provides a location where value-added storage services can be hosted and deployed with the virtual volumes themselves in an agile and cost-effective way. Storage virtualization pays for itself by breaking vendor hardware lock-in and reversing the trend toward overpriced islands of storage -- array products with lots of value-added software isolated on proprietary controllers.
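To make the capacity-aggregation idea concrete, here is a minimal sketch in Python of what a storage virtualization engine does conceptually: pool free capacity from heterogeneous arrays and carve vendor-neutral virtual volumes out of it. The class and pool names are hypothetical, invented purely for illustration; real products (IBM SVC, DataCore and the like) layer far more on top of this.

```python
# Hypothetical sketch: a virtualization engine that pools capacity from
# arrays of different vendors and carves vendor-neutral virtual volumes.

class BackendArray:
    def __init__(self, vendor, capacity_gb):
        self.vendor = vendor
        self.free_gb = capacity_gb

class VirtualizationEngine:
    def __init__(self, arrays):
        self.arrays = arrays   # heterogeneous physical arrays behind the engine
        self.volumes = {}      # volume name -> list of (vendor, gb) extents

    def total_free_gb(self):
        # Capacity is presented as one pool, regardless of bezel.
        return sum(a.free_gb for a in self.arrays)

    def create_volume(self, name, size_gb):
        # Satisfy the request from whichever arrays have free space;
        # the consumer never sees the vendor boundaries.
        if size_gb > self.total_free_gb():
            raise ValueError("insufficient pooled capacity")
        extents, needed = [], size_gb
        for a in self.arrays:
            take = min(a.free_gb, needed)
            if take:
                a.free_gb -= take
                extents.append((a.vendor, take))
                needed -= take
            if needed == 0:
                break
        self.volumes[name] = extents
        return extents

engine = VirtualizationEngine([BackendArray("VendorA", 100),
                               BackendArray("VendorB", 50)])
engine.create_volume("vm-datastore", 120)  # spans both vendors' arrays
```

The point of the sketch is that the volume's consumer asks for 120 GB and gets it, even though no single array could supply it -- which is exactly the aggregation capability some SDS definitions read out of the picture.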
But that view of software-defined storage isn't shared by many of the SDDC folks. There's a debate over the meaning of software-defined storage, with many leading voices (mostly from the server virtualization hypervisor crowd) saying that SDS doesn't have anything to do with capacity aggregation; it's only about the centralized management and administration of value-added storage services. So, no, IBM with your SAN Volume Controller, and no, DataCore Software with your top-notch storage virtualization software, and, no, anyone else with a storage virtualization play, you're not software-defined storage. The SDS club says you're old school and NOKD. ("Not our kind, dear.")
Why not virtualize capacity as part of SDS? Nobody seems to have a reason, so I'll suggest one. Server hypervisors have already run afoul of storage as they've sought to "revolutionize computing." VMware's storage problems are legendary, pressing the company to introduce proprietary SCSI commands to offload certain storage functions to array controllers in an effort to alleviate a huge bottleneck in its own hypervisor microkernel stack. That effort gave rise to talk about a VMware-only storage hypervisor that would require firms that had just spent a decade building a Fibre Channel fabric to segregate all the storage resources used by VMware -- and presumably those used by Hyper-V, Citrix, Oracle and so on -- further aggravating storage management issues. When that idea fell flat, the company began promoting an interesting variation of a direct-attached-but-shared-among-multiple-clustered-servers storage contraption called a vSAN. (No, not that Cisco VSAN thing; this is a different VSAN.)
It would seem to me that this distinction between capacity aggregation and service aggregation suggests the SDDC crowd knows as much about storage as any server administrator who comes to work to find that all the storage administrators have been laid off. Now, the server admin is in over his or her head, trying to learn all the nuances of storage terminology and technology in a New York minute, and ultimately giving in to the reassuring voice of a self-interested vendor promising to deliver a completely automated storage kit, storage cloud, or some kind of vSAN contraption that separates the control plane (services) from the data plane (hardware).
For me, the disagreement within the SDDC community over the meaning of SDS crystallizes the continuing need for DR planning, especially as early adopters seek to roll out their SDDC visions. There's little depth of understanding about storage at the component, LUN, volume, system or infrastructure levels. There is, however, a big trend toward doublespeak, prevarication and oversimplification of the challenges, risks and results of reshaping physical infrastructure into some sort of coin-operated coffee machine -- a sort of Starbuckification of everything IT, including storage.
In my 30 years of working in IT, I've never seen a situation that made me want to schedule a test of my DR plan so urgently.
About the author:
Jon William Toigo is a 30-year IT veteran, CEO and managing principal of Toigo Partners International, and chairman of the Data Management Institute.