How storage IT pros can support the recovery of in-memory databases

Wikibon CTO David Floyer provides advice on how storage IT pros can support the recovery of an in-memory database to avoid hours of downtime.

SearchStorage senior writer Carol Sliwa called on David Floyer, co-founder and chief technology officer at Marlborough, Mass.-based research and analysis firm Wikibon, to supply advice for storage-focused IT professionals on the hot topic of in-memory databases, which store data in main memory to facilitate faster response times.

How can a storage IT professional support the recovery of an in-memory database as opposed to a traditional database?

Recovery is different on in-memory databases than on traditional databases because you need to design recovery with a much more aggressive architecture. Even though it won't be used very often, you need to overdesign the recovery mechanism because the reason you have in-memory databases is that you want to do things fast. There's a business value to doing them fast. I used to have a job where we got reports in the morning, and we would have to wait until the next day to get a refresh. If we could have gotten a refresh five times a day or 10 times a day, we would have done a much better job of managing the end-of-year books for the company.

For in-memory databases, make sure there is as little contention as possible on the persistent storage for writes. Overprovision direct Fibre Channel ports, and use the highest-quality solid-state drives or PCI Express flash cards you can to keep latency low. You want significant redundant performance and as little network switching as possible.
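
That advice boils down to verifying that the write path to persistent storage stays fast under pressure. The following Python sketch is my illustration rather than anything from Floyer: it probes synchronous write latency on the device that would hold the database log, and the mount point, sample count and 500-microsecond p99 target are assumptions you would replace with your own numbers.

```python
"""Probe synchronous write latency on the persistent log device.

Illustrative sketch only: LOG_PATH, SAMPLES and TARGET_P99_US are
assumed values, not recommendations from the article.
"""
import os
import statistics
import time

LOG_PATH = "/mnt/flash/latency_probe.bin"   # hypothetical mount on the SSD/PCIe flash card
BLOCK = b"\0" * 4096                        # one 4 KiB synchronous write per sample
SAMPLES = 1000
TARGET_P99_US = 500                         # assumed latency target, in microseconds

def sample_write_latency():
    """Time SAMPLES small O_SYNC writes and return the latencies in microseconds."""
    latencies = []
    fd = os.open(LOG_PATH, os.O_WRONLY | os.O_CREAT | os.O_SYNC, 0o600)
    try:
        for _ in range(SAMPLES):
            start = time.perf_counter()
            os.write(fd, BLOCK)
            os.fsync(fd)                    # make sure the write reached persistent media
            latencies.append((time.perf_counter() - start) * 1e6)
    finally:
        os.close(fd)
        os.unlink(LOG_PATH)
    return latencies

if __name__ == "__main__":
    lat = sample_write_latency()
    p99 = statistics.quantiles(lat, n=100)[98]
    print(f"median {statistics.median(lat):.0f} us, p99 {p99:.0f} us")
    if p99 > TARGET_P99_US:
        print("WARNING: p99 write latency exceeds target; check for contention on the log device")
```

Running a probe like this while the normal workload is active gives a rough sense of whether other traffic is contending for the same ports and devices.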

Most in-memory databases come as pre-designed appliances. Such an appliance has a certain amount of DRAM capacity, maybe some flash memory, and everything is designed to be in balance with the software. If you use a pre-designed appliance for your in-memory database, test it under load. Test it for both throughput and recovery on the worst possible day in the worst possible year. Then tell everybody what that recovery time is and get sign-off that it's fast enough for the business. You don't want surprises on the last day of the year or on some critical business day because you can't recover. You spend a lot of money on in-memory databases, and you expect them to perform fast and recover fast.
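
To make the "test it under load" advice concrete, here is a small Python sketch of a recovery drill: it times how long it takes to read a persisted copy of the data back from storage and compares that against an agreed recovery-time objective. The snapshot directory, file pattern and 15-minute target are hypothetical placeholders, and a real drill would restart the actual database rather than just re-read files.

```python
"""Rough recovery drill: time a full re-read of the persisted data set.

Illustrative sketch only: SNAPSHOT_DIR, the *.dat pattern and
SIGNED_OFF_RTO_SECONDS are assumptions, not part of any vendor appliance.
"""
import pathlib
import time

SNAPSHOT_DIR = pathlib.Path("/data/imdb_snapshot")  # hypothetical persisted copy of the in-memory data
SIGNED_OFF_RTO_SECONDS = 15 * 60                    # assumed recovery-time objective the business signed off on

def simulate_reload():
    """Read every snapshot file end to end as a crude stand-in for a cold restart."""
    total_bytes = 0
    start = time.perf_counter()
    for path in sorted(SNAPSHOT_DIR.glob("*.dat")):
        with path.open("rb") as f:
            while chunk := f.read(8 << 20):         # 8 MiB reads
                total_bytes += len(chunk)
    return total_bytes, time.perf_counter() - start

if __name__ == "__main__":
    size, secs = simulate_reload()
    rate = size / 2**20 / max(secs, 1e-9)
    print(f"reloaded {size / 2**30:.1f} GiB in {secs:.0f} s ({rate:.0f} MiB/s)")
    if secs > SIGNED_OFF_RTO_SECONDS:
        print("FAIL: drill exceeded the signed-off recovery time; revisit the design")
    else:
        print("OK: within the signed-off recovery time")
```

Whatever number the drill produces on the worst-case data volume is the figure to publish and get signed off, so no one is surprised on a critical business day.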

SAP HANA and other in-memory databases are getting smarter about how they recover, and very smart about how they place data, so they can optimize throughput and recover as quickly as possible. They do smart things to avoid bringing the system down. The worst thing you can do is let an I/O failure bring down the whole in-memory system. If you take shortcuts and cause an outage by not thinking through how to optimize for recovery speed, you will bear the brunt of it.

This was first published in January 2014
