In-memory database technology is catching on for its performance and analytics capabilities, but what are some considerations when it comes to choosing storage for such an environment? To find out, SearchStorage asked David Floyer, co-founder and chief technology officer at Marlborough, Mass.-based research and analysis firm Wikibon, to assess the hot technology of in-memory databases from the vantage point of data storage.
From a storage perspective, what is the greatest challenge associated with an in-memory database?
David Floyer: The biggest single mistake made with in-memory databases is having slow recovery times. Let's say you had a 10 TB [terabyte] database. Something goes wrong, and you have to restart. If you have to reload that database in 4 KB blocks with an average latency of 20 milliseconds, it would be a minimum of 14 hours (assuming no recovery problems) before you could get your database back up again.
One of the challenges of recovering an in-memory database is that you have to refresh all that memory, and you have to get it from disk. You need a very high-bandwidth recovery mechanism so that you can reload all the data you've got in [dynamic RAM]. Usually it's not an I/O problem. It's a bandwidth problem, getting all that data in recovery mode.
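Floyer's 14-hour figure is worth a quick sanity check. Purely serial 4 KB reads at 20 ms each would take far longer than 14 hours for 10 TB; the figure only works out if the storage system services on the order of a thousand reads concurrently. The sketch below reproduces the arithmetic; the degree of parallelism is my assumption for illustration, not something Floyer states.

```python
# Back-of-the-envelope estimate of how long it takes to reload an
# in-memory database from disk using fixed-size random reads.
# The 10 TB size, 4 KB block size, and 20 ms latency come from the
# interview; the concurrency figure is an illustrative assumption.

def recovery_hours(db_bytes, block_bytes, latency_s, concurrent_reads):
    """Hours to reload db_bytes, overlapping concurrent_reads requests."""
    blocks = db_bytes / block_bytes
    return blocks * latency_s / concurrent_reads / 3600

TB = 10 ** 12
KB = 1024

# Strictly serial reads: over 13,000 hours -- clearly impractical.
serial = recovery_hours(10 * TB, 4 * KB, 0.020, 1)

# With roughly 1,000 reads in flight, the estimate lands near the
# 14 hours Floyer cites.
parallel = recovery_hours(10 * TB, 4 * KB, 0.020, 1000)

print(f"serial:   {serial:,.0f} hours")
print(f"parallel: {parallel:.1f} hours")
```

The gap between the two numbers is the point of the quote: random small-block reads are dominated by latency, so recovery needs either massive concurrency or a high-bandwidth sequential restore path.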
Still wondering about the benefits of in-memory databases? Find some additional insights in these stories:
Learn how in-memory databases can extend the possibilities of analytics
See how an in-memory database helped this bank boost productivity
How to achieve the speed you need with in-memory data grids
In-memory data catches on in the big data world
IDC analyst explains why in-memory technology is beneficial
Oracle capitalizes on in-memory technology with database search feature
This was first published in February 2014