I am trying to put together a trip to Israel this summer. While the history is amazing, my mission will primarily be about business -- to visit the country's many storage technology startups. A recent briefing with Yuval Dimnik, the technological brain behind stealth-startup NooBaa in Tel Aviv, piqued my interest. His company is doing some exciting work on scalable file storage, leveraging RESTful protocols like S3.
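Part of the appeal of S3 as a storage access method is that it is just authenticated HTTP, which is why so many startups can build S3-compatible stores. As a sketch (my own illustration, not NooBaa's code), here is the AWS Signature Version 4 signing-key derivation that sits on the authentication path of any S3-compatible REST interface:

```python
import hashlib
import hmac


def _hmac_sha256(key: bytes, msg: str) -> bytes:
    """One step of the SigV4 HMAC chain."""
    return hmac.new(key, msg.encode("utf-8"), hashlib.sha256).digest()


def derive_signing_key(secret_key: str, date: str, region: str,
                       service: str = "s3") -> bytes:
    """Derive an AWS Signature Version 4 signing key.

    date is YYYYMMDD. The chained HMACs scope the key to one day, one
    region and one service, so a leaked signing key is far less dangerous
    than a leaked secret key.
    """
    k_date = _hmac_sha256(("AWS4" + secret_key).encode("utf-8"), date)
    k_region = _hmac_sha256(k_date, region)
    k_service = _hmac_sha256(k_region, service)
    return _hmac_sha256(k_service, "aws4_request")


# Placeholder credentials (the well-known example values from AWS docs):
key = derive_signing_key("wJalrXUtnFEMI/K7MDENG/bPxRfiCYEXAMPLEKEY",
                         "20150830", "us-east-1")
print(key.hex())
```

The full protocol then signs a canonical form of each HTTP request with this key; the takeaway is simply that "S3 compatibility" reduces to well-specified HTTP plus this kind of signing, not to any proprietary wire protocol.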
After tweeting about NooBaa innovations, I heard from a bunch of other Israeli startups, all of which want to brief me on what they're doing. Along the way, I learned that one of the brightest guys in storage, Erik Eyberg, is now working for up-and-comer Infinidat, also located in Israel (and Needham, Mass.). Infinidat offers a petabyte-scale storage platform with a "self-healing" architecture. And, of course, IBM has a nifty R&D operation in Haifa that's been doing groundbreaking work on tape media capacity. I am intrigued to see what the next evolution in storage technology will be, and I am likely to find at least part of that roadmap in Israel. Particularly intriguing is what will happen to the software-defined storage market. According to some of the startup presentations I've seen, next-generation SDS (or SDS 2.0) is right around the corner.
Let's unpack that.
Software-defined storage is hot
The idea of operating a bunch of storage peripherals from software running on the server dates back to at least 1993, with IBM's System Managed Storage on its mainframes. For newbies, a better point of reference might be cluster-based supercomputers, Blue Gene/P and that ilk, in which data movement (and the friction it creates) is despised, so emphasis is placed on storing data locally on internal media or direct-attached flash or HDD shelves.
Bottom line: When VMware or someone else claims to have invented this shiny new thing called software-defined storage, I have to choke back laughter.
The software-defined storage market has caught fire, mainly in the form of hyper-converged infrastructure (HCI) appliances. Just about every server vendor, sick and tired of being characterized as a "commodity" hardware supplier, is partnering with independent software vendors of SDS stacks to glue together pre-integrated HCI kits. In ActualTech Media's 2015 State of Hyperconverged Infrastructure survey of 500 small, medium and large firms, about a quarter of respondents said they had already begun to deploy SDS/HCI, while over 50 percent of the remainder said they were preparing to do so over the next 24 to 36 months.
I am heading over to Europe soon, where I am slated to speak on HCI and the software-defined storage market in at least five countries, likely to enthusiastic audiences.
HCI: More tactical than strategic
Hyper-converged infrastructure strikes me, at least initially, as a tactical play rather than strategic architecture.
From a tactical perspective, it's a lot easier for companies that lack the expertise to build storage infrastructure to deploy a hypervisor, a storage services software stack, and brain-dead flash and disk as a prepackaged kit. Plus, since most of these kits are integrated with third-party SDS software (not, that is, from VMware or Microsoft), the appliances are generally less costly.
To put it simply, HCI is a quick way to roll out a server and some storage.
What would make HCI strategic is if:
- The software-defined storage market did more than just provide capacity management, data reduction and data protection.
- HCI appliances evolved into "atomic units of compute" that could be quickly personalized, deployed and managed through a single pane of glass to support all data from all workloads, virtualized or not, regardless of hypervisor.
I am concerned by the marketecture that substitutes for architecture in most discussions of SDS and HCI. Not only is this technology not new, it isn't a guarantor of the cost reduction, improved VM performance, greater consolidation or agile IT promised by the literature. Virtually none of these promises are verifiable, and it's very likely your mileage from SDS/HCI will vary, a lot.
SDS stack needs to be reimagined
The SDS stack should not simply be a port of all the value-added software that used to reside on array controllers into a server-side stack. That approach leaves out the monitoring and management of the hardware, which is the biggest cost of storage ownership today. It also ignores the power of another server-side element in storage: the file system (or object system).
And while we are at it, we might want to revisit the canard that drove SDS/HCI to the fore in the first place. HCI and SDS were supposed to address poor VM performance, which VMware laid at the doorstep of "legacy" SAN and network-attached storage. As DataCore continues to demonstrate with its breakthroughs in IOPS via parallelization of I/O, VM performance is a function not of storage I/O latency but of how raw I/O is processed on the server, which most stacks still serialize.
So maybe the SDS stack needs to include the virtualization of the underlying storage environment and the acceleration of raw I/O as well.
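The shape of that parallelization argument is easy to demonstrate with a toy sketch (my own illustration, not DataCore's implementation): simulate eight I/O requests that each wait 50 ms on the "device," then compare serialized host-side processing against parallel dispatch. The device latency is identical in both runs; only the host-side scheduling changes, yet elapsed time drops dramatically.

```python
import time
from concurrent.futures import ThreadPoolExecutor

IO_LATENCY = 0.05   # simulated device latency per request, in seconds
REQUESTS = 8


def fake_io(req_id: int) -> int:
    """Stand-in for one storage I/O: the device takes IO_LATENCY to answer."""
    time.sleep(IO_LATENCY)
    return req_id


# Serialized host-side I/O processing: one request at a time.
start = time.perf_counter()
serial_results = [fake_io(i) for i in range(REQUESTS)]
serial_elapsed = time.perf_counter() - start

# Parallelized host-side dispatch: same device latency, overlapped waits.
start = time.perf_counter()
with ThreadPoolExecutor(max_workers=REQUESTS) as pool:
    parallel_results = list(pool.map(fake_io, range(REQUESTS)))
parallel_elapsed = time.perf_counter() - start

print(f"serial:   {serial_elapsed:.2f}s")
print(f"parallel: {parallel_elapsed:.2f}s")
```

The "storage" is equally slow in both cases; only the serialization on the host changes, which is the crux of the argument that the SDS stack, not the array, is where the performance battle gets won.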
The startups in Israel have begun to question the design and efficacy of the software-defined storage market as it exists right now. I love the debate and the camaraderie of smart folks who are trying to execute computer science for a change.