Using solid-state storage either as a cache or as primary storage can increase I/O performance, but each approach has strengths and weaknesses, according to Dennis Martin, founder and president of Demartek LLC.
With SSD caching, a caching controller -- positioned in front of an array or inside the server -- monitors for hot I/O and places a copy of that data on the SSD, where it can be served faster than from spinning disk, said Martin.
"The nice thing here is that any application that is busy gets a performance benefit … [and] performance in this environment generally improves over time as the cache fills up," said Martin in his Storage Decisions presentation.
Because the system must observe I/O patterns before the cache is built up, the performance improvements take hold gradually as data is accessed, he said.
"After the cache gets built up, the load on the hard drives is less, because they're being asked fewer times for I/O. So what that means is when they are asked for I/O that isn't in the cache, they can respond more quickly, so your hard drive performance goes up a little bit."
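The behavior Martin describes -- promoting data to the cache only after repeated reads, so the benefit grows over time -- can be sketched as a simple read-through cache. The class name, promotion threshold, and capacity below are illustrative assumptions, not any vendor's actual caching policy.

```python
class HotDataCache:
    """Toy read-through SSD cache: blocks read often enough are
    promoted from the slow (HDD) tier to the fast (SSD) tier.
    Illustrative sketch only, not a real controller algorithm."""

    def __init__(self, hdd, promote_after=2, capacity=4):
        self.hdd = hdd                  # backing store: {block_id: data}
        self.ssd = {}                   # fast cache tier
        self.read_counts = {}           # per-block read tally
        self.promote_after = promote_after
        self.capacity = capacity

    def read(self, block_id):
        if block_id in self.ssd:        # cache hit: fast path
            return self.ssd[block_id], "ssd"
        data = self.hdd[block_id]       # cache miss: go to spinning disk
        self.read_counts[block_id] = self.read_counts.get(block_id, 0) + 1
        if (self.read_counts[block_id] >= self.promote_after
                and len(self.ssd) < self.capacity):
            self.ssd[block_id] = data   # block is hot: copy it to SSD
        return data, "hdd"

cache = HotDataCache({"a": b"cold block", "b": b"hot block"})
cache.read("b")   # first read comes from the HDD
cache.read("b")   # second read still hits the HDD, but promotes the block
cache.read("b")   # subsequent reads are served from the SSD cache
```

Note that the first reads of a hot block still go to disk; only after the cache "fills up" with promoted data does the HDD see fewer requests, matching Martin's point that hard drive response improves once the cache is warm.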
By comparison, manually placing specific data on SSDs used as primary storage boosts only the applications using that SSD, but the improvement is immediate, he said. "You don't have to wait for the second read or the third read … [you] get instant performance boost," said Martin.
However, Martin does not recommend that users manually migrate data and workloads to SSDs. Instead, automated tiering software can determine which data should be moved to SSDs and which should stay on spinning disk, he said.
"So what might be a good choice today for … putting data on [SSD] primary storage might not be the right choice tomorrow, or next week or next month. That's where this automated tiering software comes in, because it will watch the I/O rates and move the right stuff there at the right time," said Martin.
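The tiering decision Martin outlines -- watch I/O rates and move the right data at the right time -- amounts to periodically re-ranking data by recent activity. The function below is a minimal sketch of such a pass; the name `retier`, the inputs, and the hottest-first policy are assumptions for illustration, not how any particular tiering product works.

```python
def retier(io_rates, ssd_slots):
    """Toy automated tiering pass.

    io_rates: {volume_name: recent_iops} measured over some window.
    ssd_slots: how many volumes fit on the SSD tier.
    Returns (hot, cold): volumes to place on SSD vs. spinning disk.
    Illustrative policy only, not a vendor algorithm.
    """
    ranked = sorted(io_rates, key=io_rates.get, reverse=True)
    hot = set(ranked[:ssd_slots])    # hottest volumes go to SSD
    cold = set(ranked[ssd_slots:])   # the rest stay on spinning disk
    return hot, cold

# Run the same pass as workloads shift and the placement changes,
# echoing Martin's point that today's right choice may be wrong tomorrow.
retier({"db": 900, "mail": 400, "logs": 50}, ssd_slots=2)
retier({"db": 100, "mail": 400, "logs": 800}, ssd_slots=2)
```

In a real product this pass would run continuously against measured I/O statistics; the point of the sketch is simply that placement is recomputed from observed rates rather than fixed by hand.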