Solid-state storage gets attention for the speedy performance it brings to applications and data storage, and the technology's form factors give users a number of options for deploying solid-state in their environments. A wide variety of SSD storage products exist today, from server-side PCIe SSDs to all-flash arrays to SSD caching appliances. Each offers unique benefits and drawbacks, so there is a lot to consider before deploying solid-state storage. In this 2012 Storage Decisions TechTalk interview, Dennis Martin, founder and president of Colorado-based Demartek LLC, explains solid-state storage technology and how it can be put to work for an organization.
What are the advantages of putting a solid-state storage device directly into a single server, rather than sharing it?
If you have an application or environment that really needs a big boost in performance and you want to dedicate that resource to it, then that's a great way to go. You can put it in the server. The application can take advantage of it. Or, if you're doing caching, then multiple applications running on that server can take advantage of it fairly easily.
What are the alternatives in terms of form factors and interfaces for server-based solid-state storage devices?
So you have the PCIe cards. That's one form factor. You have the drive form factor, same as disk drives, and you can certainly put those in a server. So, that's another, and there are a lot of variations. And then there's a memory slot form factor that goes into memory slots, but it's actually storage. [It] typically has a SATA port on it, so it goes into the memory DIMM socket, but it looks like storage.
Will PCI Express Version 3.0 have an impact on server-side solid-state storage?
Yes, PCIe Gen3 does a couple of things for you. First of all, it doubles the speed, or the throughput. So, if you have a x8 or an 8-lane card with Gen3, that'll give you 8 GB [per] second of throughput, which is really quite a lot. Gen3-capable servers also give you more lanes of PCI Express than the previous generations. So, you can get up to 40 lanes of PCIe in a server per processor. If you have a two-processor box, that means you have up to 80 lanes. If you have enough slots, you could put a lot of PCIe SSDs in there.
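The throughput figures above can be checked with back-of-the-envelope arithmetic. This is a rough sketch; the line rates and encoding overheads (5 GT/s with 8b/10b encoding for Gen2, 8 GT/s with 128b/130b for Gen3) are the standard PCIe figures, not numbers from the interview:

```python
# Back-of-the-envelope PCIe throughput per lane and per slot.
# Assumptions: Gen2 runs at 5 GT/s with 8b/10b encoding (80% efficient);
# Gen3 runs at 8 GT/s with 128b/130b encoding (~98.5% efficient).

def lane_throughput_gb_s(gt_per_s: float, encoding_efficiency: float) -> float:
    """Usable throughput per lane in GB/s (1 GT/s carries ~1 Gb/s raw)."""
    return gt_per_s * encoding_efficiency / 8  # bits -> bytes

gen2_lane = lane_throughput_gb_s(5.0, 8 / 10)     # 0.5 GB/s per lane
gen3_lane = lane_throughput_gb_s(8.0, 128 / 130)  # ~0.985 GB/s per lane

# An x8 Gen3 card lands at roughly the "8 GB/s" cited in the interview,
# about double the equivalent Gen2 slot.
print(f"x8 Gen2: {8 * gen2_lane:.1f} GB/s")
print(f"x8 Gen3: {8 * gen3_lane:.1f} GB/s")
# And a two-socket server at 40 lanes per processor:
print(f"lanes in a 2-socket box: {2 * 40}")
```

The doubling comes from two compounding changes: a higher line rate (8 GT/s vs. 5 GT/s) and a far more efficient encoding (128b/130b vs. 8b/10b).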
Some server-based SSDs act like normal solid-state storage devices, while others are used as a cache. What are the pros and cons of each approach?
For what I call primary storage or persistent direct storage, as soon as you put that in there and you direct an application to the data there, it gets an immediate performance improvement. It's dramatic. You do have to decide when to put the data there, and what data to put there. So, that's a little bit of a management step that you have to do, because you have to decide, 'OK, if I've got multiple applications, this one really is the one that needs it. No, this one needs it.'
You have to have that argument and figure out what's what, because there's a limited amount of capacity.
If you go the caching route, then the caching says, 'Whatever I/Os are hot go in there, and you don't have to think about it.' The management of a caching solution is a little easier, but, with caching, the performance ramps up over time. It's not instant like it is when you put the whole app's data on there, as on the persistent storage side. So, there's a cache warm-up time. There are tradeoffs. It just depends on which way you want to go. If you go caching, then you have multiple applications that can take advantage of it. Any application that's hot can take advantage. Whereas, on the persistent side, only the apps you choose will be accelerated.
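The warm-up effect described above can be illustrated with a toy simulation: an LRU cache starts empty, so early I/Os all miss, and the hit rate climbs as hot blocks are pulled in. All sizes and the access pattern here are made-up assumptions for illustration, not measurements:

```python
# Toy cache warm-up: cumulative hit rate of an empty LRU cache rises
# over time, whereas dedicated persistent flash is fast from the first I/O.
from collections import OrderedDict
import random

random.seed(42)
CACHE_BLOCKS = 200   # assumed SSD cache size, in blocks
WORKING_SET = 250    # assumed hot working set, slightly bigger than the cache

cache = OrderedDict()  # OrderedDict as a minimal LRU
hits = 0
samples = []
for i in range(1, 4001):
    block = random.randrange(WORKING_SET)  # uniform "hot" traffic
    if block in cache:
        hits += 1
        cache.move_to_end(block)           # mark as most recently used
    else:
        cache[block] = True                # fill on miss
        if len(cache) > CACHE_BLOCKS:
            cache.popitem(last=False)      # evict least recently used
    if i in (500, 1000, 2000, 4000):
        samples.append(hits / i)
        print(f"after {i:4d} I/Os: cumulative hit rate {hits / i:.0%}")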
When you use a solid-state storage device in a server, is it better to use SLC or MLC?
There is a difference. SLC, of course, is single-level cell. That's the high-end, high-performance and much more expensive option. You get better performance there, but you aren't going to get as much capacity. If you really want the super high end, go with SLC.
If you don't need the super high end and you can afford to, or you'd rather go with more capacity, then MLC, or multi-level cell, is a good choice. Then the only question is, do you go with what I would call consumer-grade MLC, or do you go with something a little better? There's something called eMLC, which has been called enterprise MLC. I would say, more accurately, it's really endurance MLC. It gives you endurance closer to what you would typically see with SLC-type products, but not necessarily the same performance as SLC.
So, that's a nice tradeoff -- the eMLC. You have to decide: how much performance do you want to pay for? Or do you want to go with a little less performance and maybe a little more capacity?
Given the high price of flash, how can vendors offer all-SSD arrays at an affordable price?
With an all-solid-state array, if you just look at the price per gigabyte of a single small quantity of it, it's going to be more expensive than the equivalent capacity of a hard drive. But you're not just buying a single thing. You're buying a whole array.
By the time you look at the whole thing -- the cost of the controllers, the drives and everything else that's in there -- it actually is a lot closer to what you would expect to pay for a high-end disk array full of 15K drives. So, when you put the whole package together, it's still maybe a little more expensive, but it's not that far out of line, really.
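The system-level pricing argument can be sketched with simple arithmetic. Every figure below is an illustrative assumption, not vendor pricing; the point is only that fixed array costs (controllers, enclosure, software) are the same for both media, so they shrink the relative gap:

```python
# Illustrative-only $/GB arithmetic (all prices are made-up assumptions)
# showing why a whole all-flash array is closer in cost to a 15K-rpm disk
# array than raw media prices alone would suggest.

ARRAY_OVERHEAD = 60_000             # assumed controllers/enclosure/software ($)
SSD_PER_GB, HDD_PER_GB = 5.0, 1.0   # assumed raw media $/GB
CAPACITY_GB = 20_000                # assumed usable capacity

ssd_system = (ARRAY_OVERHEAD + SSD_PER_GB * CAPACITY_GB) / CAPACITY_GB
hdd_system = (ARRAY_OVERHEAD + HDD_PER_GB * CAPACITY_GB) / CAPACITY_GB

# Raw media gap is 5x; at the whole-array level it shrinks to about 2x.
print(f"raw media gap:   {SSD_PER_GB / HDD_PER_GB:.1f}x")
print(f"whole-array gap: {ssd_system / hdd_system:.1f}x")
```

The larger the shared overhead relative to media cost, the smaller the effective premium for going all-flash, which is the "not that far out of line" effect Martin describes.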
How big of a difference is there between hybrid SSD arrays and all-SSD arrays?
We've done some testing on both all-flash and hybrids. You get great performance on both. [With] the hybrid approach, of course, [you can] say, "We have SSD for performance, and we have hard drives for capacity." Depending on the vendor, they can do different things to manage the difference between the two and where you put the data and how you put the data on there.
We've seen very nice performance on hybrids as long as everything fits on the SSD. As soon as it doesn't fit, then of course things are going to be different. With an all-flash array, it's just a matter of how much capacity you get. Everything runs great on all-flash arrays.
Can you expect all-SSD arrays to last as long as traditional arrays?
You can. It all depends on the components that are in there. If you take it down to the drive level, first of all, there are SSD flash drives available that are enterprise quality that have five-year warranties from the manufacturer. If you look at a hard drive array, the high-end enterprise drives also come with five-year warranties.
So, you've got similar warranties, and then the rest of the controller components are very similar in both. If all the components have a high warranty on them, and the array vendor gives you a similar warranty on it, then, yes, you can expect it to last just as long.
What kinds of applications benefit the most from all-SSD arrays?
Anything you put on an all-SSD array will run much better. But, certainly, the first obvious one we've seen is databases. Any kind of database app will run a lot better on an all-SSD array. If you can put your email system on there, that'll run much better too, because internally it's really a database -- as long as you have the capacity for it. Pretty much any application will run much better on an all-flash array.
Some all-flash arrays have NVRAM or DRAM installed. How do they use that?
So, the non-flash memory -- the NVRAM and the DRAM -- is typically used as a cache to help accelerate things. Because NVRAM, of course, is nonvolatile, some systems will do writes in the NVRAM in some cases. Some systems will actually take all of that and put it together and say, 'We've got some DRAM, we've got some NVRAM, we've got some flash. Let's manage it all together, and let's do the right thing with the right type of media.' So, it's definitely a benefit.
Are solid-state storage appliances that sit in front of a standard hard disk array a viable alternative to all-SSD arrays?
Yes, they are. If you want to let multiple servers take advantage of this cache, and you want to accelerate multiple existing arrays that you already have, you can put these caching appliances in the middle, and they can be both file-based and block-based. And, they can accelerate stuff in front of all your existing storage.