Ahead of a scheduled update to Sanbolic Melio 5, SearchStorage sat down with Sanbolic Inc. CEO Momchil Michailov to discuss giving enterprises a way to work around a single point of failure for devices that use PCI Express flash, solid-state storage and spinning disk. Sanbolic provides host-based distributed software that creates storage nodes on commodity hardware, clusters the nodes and provides RAID data protection.
I want to start by talking about Sanbolic Melio. One of the things it does is enable distributed storage. Talk to us about what it is and how it works.
Momchil Michailov: Storage -- for the past 30 years -- hasn't changed much, other than slightly faster RAID controllers and slightly bigger hard drives. Storage has become this monolithic thing that's killing customers, both in terms of upfront purchase cost and ongoing management cost.
If you take a look at the newer data centers, if you take a look at companies like Facebook, eBay and Google, they don't take that approach. Instead of those monolithic, expensive storage devices, they now have distributed storage architectures. That requires a different way of approaching the infrastructure. That really is what Melio is going after. Software-defined storage is how most people position Melio now. What we focus on is creating distributed storage architectures so you can use commodity devices; you can use flash or spinning disk inside the servers, or external JBODs [just a bunch of disks]. With that, you eliminate the upfront cost of monolithic enclosures. You create a much more agile -- and more distributed -- platform.
You mentioned JBODs. Are customers running Melio with a number of vendor arrays to pool storage resources? Or are they using it with JBODs in place of brand-name arrays?
Michailov: Well, we don't believe in rip and replace. If you take a look at some of these data centers, customers have millions, if not hundreds of millions, of dollars of infrastructure. So it's a question of augmenting customer infrastructure, not ripping and replacing it. In that regard, Melio really has three layers: a storage management and volume layer; a data management layer, which is the file system; and a clustering layer.
If you think about the bottom layer, the volume management or storage management technology, that allows customers to utilize any existing storage infrastructure, whether that is direct-attached storage or legacy SANs, from any vendor. It allows them to use JBODs for consolidated storage, whether that is SSDs [solid-state drives] or spinning disk. It allows them to use server-side storage.
The general idea is that, through the resiliency enabled by our volume management technology, customers avoid single points of failure. You can take the existing infrastructure -- JBODs for massive scalability, scale-out and server-side storage -- augment it and scale it out without [having to] rip and replace anything.
Whether you take a sophisticated, high-end storage array, a JBOD or just flash inside servers, we would then allow you to do snapshots, distributed RAID or replication for disaster recovery. All of the advanced storage features are now in a software layer that can be applied across any one of these different types of infrastructure.
Let's talk about where you see the flash market going and the influence of software-defined storage in that market.
Michailov: Personally, I believe flash is the future. It's a perfect coincidence, a perfect storm, [with] the push from customers to reduce IT spending. Storage vendors today still operate on double-digit profit margins. You tell me one aspect of IT that has the margins that storage vendors have today. There isn't anything even close. Then you have flash, which is now delivering low latencies, high performance and high bandwidth. It's still cost-inefficient for data-heavy environments, but that cost is coming down quickly.
Today, customers in corporate data centers use flash as caching. Let's think this through: In order for me to have a caching device, I have to have a storage device. That storage device is a multimillion-dollar storage array. Not only do I have to have the Capex of the storage array and the Opex of the storage array, now I have to have the Capex and Opex of a cache card -- and they're no joke. They are tens of thousands -- and sometimes hundreds of thousands -- of dollars. Then I have to have caching software that's going to move my data from my legacy [system] into my caching cards because the caching card is a single point of failure. Then I have to have the IT staff to do that.
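The cost stacking Michailov describes can be made concrete with a back-of-the-envelope sketch. Every dollar figure below is an illustrative assumption, not a number from the interview or actual vendor pricing:

```python
# Hypothetical cost comparison: flash-as-cache (array + cache card + caching
# software) vs. flash as a persistent tier on commodity hardware.
# All dollar figures are illustrative assumptions, not vendor pricing.

def total_cost(capex, annual_opex, years):
    """Total cost of ownership over a number of years."""
    return capex + annual_opex * years

# Caching approach stacks costs: the array, the cache card and the
# caching software each bring their own Capex and Opex.
array_cost = total_cost(capex=2_000_000, annual_opex=300_000, years=3)
cache_cost = total_cost(capex=100_000, annual_opex=20_000, years=3)
software_cost = total_cost(capex=50_000, annual_opex=10_000, years=3)
caching_total = array_cost + cache_cost + software_cost

# Persistent-tier approach: commodity servers/JBODs with flash, no array.
tier_total = total_cost(capex=600_000, annual_opex=150_000, years=3)

print(f"flash-as-cache TCO over 3 years: ${caching_total:,}")
print(f"flash-as-tier TCO over 3 years:  ${tier_total:,}")
```

Whatever the real numbers, the structural point survives: the caching approach pays for the array *and* the cache on top of it, so its total can only exceed the array's cost alone.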
First, that explanation on its own is making me want to become suicidal. Second, the Capex is going to kill me. Third, I don't know of a lot of applications that have the elasticity to fit that, [considering] that this is now a single point of failure. That is why we don't see everybody in a corporate data center running to buy this.
To get flash to become a mainstream storage platform, to make it a little more broadly available, flash has to become a tier-zero persistent storage device. It can't be a caching device.
Our focus is to create this distributed layer that provides the agility and high availability of a distributed storage array. You can actually create a highly available infrastructure; you can provide all of the traditional high-end storage array functionality -- snapshots, Quality of Service, dynamic expansion in that infrastructure -- and employ flash as a persistent, tier-zero storage device instead of a caching device.
Sanbolic Melio is often deployed as server-side flash. Is there room for flash in other places throughout the array?
Michailov: Absolutely. There are significant benefits in having data inside the server and having the PCIe [PCI Express] flash card for a number of applications; that's an important area. The flip side of that is how many applications do you know that can benefit from 9 million IOPS? Don't get me wrong; 9 million IOPS sounds sexy on a press release, but can you really use it? The short answer to that is no. You certainly can't use it in a single server that also automatically represents a single point of failure. I think it's a question of balance for some applications.
We can now understand the metadata stream and place metadata streams on a flash card -- in a server -- or on SSD storage. We understand what user data is, and we can place it on SSDs in storage or on spinning disk. If you take a look at that, it makes the total available market for flash a lot bigger than it is today. If an average user takes the hybrid environment of flash and spinning disk, and deploys it in file-serving environments -- small amounts of flash, say 15% flash and the rest spinning disk -- then they can multiply their average workload capability by 10. We're talking about an insignificant capital expenditure compared to the existing infrastructure to improve their workload by 10 times.
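The placement idea described above -- metadata on the fastest flash, hot user data on SSD, cold user data on spinning disk -- can be sketched as a simple policy. The device classes and classification rules here are assumptions for illustration, not Melio's actual placement algorithm:

```python
# Minimal sketch of tiered data placement across three assumed device
# classes. The rules below are illustrative, not a real product's policy.

from dataclasses import dataclass

@dataclass
class Block:
    kind: str   # "metadata" or "user"
    hot: bool   # frequently accessed?

def place(block: Block) -> str:
    """Pick a device class: metadata goes to the fastest tier,
    hot user data to SSD, cold user data to spinning disk."""
    if block.kind == "metadata":
        return "pcie-flash"
    return "ssd" if block.hot else "spinning-disk"

blocks = [Block("metadata", True), Block("user", True), Block("user", False)]
placements = [place(b) for b in blocks]
print(placements)
```

The leverage comes from the skew: metadata and hot data are a small fraction of capacity but attract most of the I/O, which is why a modest flash fraction can disproportionately lift overall throughput.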
Going back to your original question, there's going to be flash that will live in the servers in a purely server-side architecture. That would be either smaller clusters that don't have a lot of data, or some really I/O-intensive application that can take advantage of PCIe cards. I think we're going to continue to see flash predominantly as SSDs in storage. It just makes more sense for general-purpose applications; it's definitely a lot more economically viable. Then the question there is: How do we get around the constraints that traditional RAID controllers have put in place?
This was first published in November 2013.