"So, what's a good workload to put on an SSD? I've been quoted as saying just about every workload would do better with SSD. For the most part, that's true … most everything does better on an SSD because it's a faster device," Martin said. "No mechanicals, no motors -- it's just silicon."
Databases "do very well" using SSDs, Martin said, but noted that databases are also running an application. "It's not just doing I/O and storage. It's doing stuff in the memory, it's using the CPU, using the cores -- throwing SSDs at it will work well, but it's not the only thing a database needs," he said.
Martin noted that database environments deploy large amounts of RAM to reduce the number of I/Os, which means they may never push some solid-state devices to their maximum performance. At Demartek, he said, some of the company's testing artificially reduced the amount of available memory in storage systems in order to "stress-test" the SSDs.
Martin also addressed latency and how solid-state technology can reduce it, especially compared to hard disk drives. "Some applications are much more sensitive to latency -- IOPS is great, bandwidth is great, but they want turnaround -- so, if you have a database that's got successive queries that have to happen before the final answer comes out, every trip out there is another turn on the latency crank. You want latency to be small," he said.
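Martin's "latency crank" point can be sketched with simple arithmetic: when each query depends on the previous one's result, the round trips serialize and their latencies add up, so shrinking the per-trip latency shrinks the whole chain. The figures below are illustrative round numbers, not measurements from any benchmark or from Demartek's testing.

```python
# Illustrative only: per-trip latencies are assumed round numbers,
# not benchmark results.
HDD_TRIP_MS = 10.0   # assumed round trip dominated by seek/rotation
SSD_TRIP_MS = 0.1    # assumed round trip for a flash read

def total_turnaround_ms(trip_ms: float, dependent_queries: int) -> float:
    """Dependent queries serialize: each must finish before the next
    is issued, so per-trip latencies add rather than overlap."""
    return trip_ms * dependent_queries

queries = 5  # a chain where each query needs the prior result
print(total_turnaround_ms(HDD_TRIP_MS, queries))  # 50.0 ms under the HDD assumption
print(total_turnaround_ms(SSD_TRIP_MS, queries))  # 0.5 ms under the SSD assumption
```

The multiplier is the depth of the dependency chain, which is why latency-sensitive applications feel per-I/O improvements far more than raw IOPS or bandwidth numbers suggest.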
He noted that users need to determine whether deploying SSDs exposes performance bottlenecks elsewhere in the system. "A properly designed storage system -- I'm talking a big storage or enterprise array -- will be such that controllers will never get overrun by the drives. … [I]n the early days of SSDs, if you took out all the hard drives and put in SSDs, the controller would get overrun because all the drives were moving faster."