Storage Bin: The devil is in the details

"Close" isn't good enough for disaster recovery and security planning. Overlooking what might seem to be a minor detail could result in major consequences.

I owe everyone a bit of an apology. Each month I have this forum to rant and rave about what users and vendors should do--and, of course, I'm always right. But I often forget to mention some of the details I either didn't think about or that I thought you knew. I'm going to try to stop doing that.

For example, I've been yapping at you for years about disaster recovery. You think you can't afford it. You think you can't manage it. But you're wrong. You can't afford not to do it.

Case in point: An IT manager from a "small" shop (a $6 billion hedge fund, but a small IT shop--go figure) told a very interesting story at a meeting I recently attended. His company had smartly invested in what it thought was "good enough" disaster recovery--some clustering and a giant uninterruptible power supply (UPS) system. One day, the entire building lost power and the UPS system kicked in exactly the way it was supposed to. All of the company's systems that needed to be up, stayed up. The UPS was designed to handle an outage for up to eight hours.

Awesome, except that an hour after the power outage, this poor guy was scrambling around the data center shutting machines down as fast as he could, all the while sweating like he just ate a fistful of habaneros. Why? Because the computers were up and running, but the air conditioning wasn't. In no time, the data center began to resemble the Earth's core. Power wasn't the issue; the issue was servers turning into liquid magma and firing off the life-sucking Halon system.

Who thinks about air conditioning? Well, that IT manager does now. If the company had replicated its applications and data to a remote disaster recovery site, it could have switched over and not suffered any noticeable downtime. I'm not suggesting they should have bought another data center in New Mexico, staffed it and stuffed it full of gear to be used only under such drastic conditions. But they could have replicated asynchronously over a cheap connection to a co-lo facility for pennies and been far better off.
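
To make that concrete, here's a minimal sketch of the kind of asynchronous replication I'm talking about. The host name, paths, and interval are all hypothetical, and it leans on plain rsync over SSH rather than any particular vendor's replication product:

```python
#!/usr/bin/env python3
"""Minimal asynchronous replication sketch: periodically push local data
to a co-lo host over SSH using rsync. Hostnames, paths, and the interval
are hypothetical placeholders, not a real deployment."""

import subprocess
import time

SOURCE_DIR = "/data/production/"                   # trailing slash: sync contents
REMOTE_TARGET = "colo-host:/replica/production/"   # hypothetical co-lo box
INTERVAL_SECONDS = 300                             # every 5 minutes; tune to taste

def replicate_once() -> bool:
    """Run one rsync pass; returns True on success."""
    result = subprocess.run(
        ["rsync", "-az", "--delete", SOURCE_DIR, REMOTE_TARGET],
        capture_output=True, text=True,
    )
    if result.returncode != 0:
        print(f"replication failed: {result.stderr.strip()}")
        return False
    return True

if __name__ == "__main__":
    while True:
        replicate_once()
        time.sleep(INTERVAL_SECONDS)  # asynchronous: the primary never waits
```

The point isn't the tooling; it's that the copy is asynchronous and off-site, so a melted data center doesn't take your only copy with it.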

This reminds me of another little detail that recently came to light. You may have heard how Bank of America "lost" some backup tapes. Backup is great if it's recoverable. Recovery is great if it's private. We're not all schoolchildren and we no longer believe in the tooth fairy, so we should also come to grips with the idea that if someone can steal something, they will. You'll never prevent bad things from happening--you can only minimize the damage. Do you still think security isn't an issue when it comes to storage? I enjoyed getting Ashlee Simpson's phone number from Paris Hilton's cell phone hacker, but it would bum me out if my stuff were out there for everyone to access. As far as Bank of America goes, I suppose it can downplay the whole episode by saying, "The backups probably didn't work anyway."

What's the bottom line? Let's tell everyone we give our information to that we want them to at least encrypt their backup tapes. Why on earth shouldn't backup data be encrypted? It's not primary storage, it's usually removable and it's easily pocketed. Want to stay out of trouble? Encrypt your backup tapes. It's smart, it's easy, it's cheap and it'll keep you out of jail. How's that for detail?
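
If you want a feel for how little is involved, here's a minimal sketch of encrypting a backup archive before it ever touches tape. It uses the third-party Python cryptography package's Fernet recipe (AES under the hood); the file names are hypothetical, and a real tape workflow would stream and chunk rather than read whole files into memory:

```python
"""Sketch: encrypt a backup archive before writing it to tape.
Uses the third-party 'cryptography' package (pip install cryptography).
File names are hypothetical; a real pipeline would stream in chunks."""

from cryptography.fernet import Fernet

# Generate once; store somewhere SAFE and SEPARATE from the tapes.
# Losing the key means the backup is unrecoverable -- by design.
key = Fernet.generate_key()
fernet = Fernet(key)

with open("nightly-backup.tar", "rb") as f:      # hypothetical archive
    plaintext = f.read()

ciphertext = fernet.encrypt(plaintext)           # authenticated encryption

with open("nightly-backup.tar.enc", "wb") as f:  # this is what goes to tape
    f.write(ciphertext)

# Restore path: Fernet(key).decrypt(ciphertext) returns the original bytes,
# and tampering or corruption raises an exception instead of decrypting garbage.
```

The design point: a stolen tape is then just ciphertext, and the whole episode becomes a key-management story instead of a front-page one.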

This was first published in April 2005
