This article can also be found in the Premium Editorial Download "Storage magazine: Adding low-cost tiers to conserve storage costs."
The four fundamentals
Now, let's talk about some actual best practices for the storage industry. I'll start with four fundamental best practices from the wider IT world, and then zoom in on how they inform the kind of storage decisions I find myself making.
Best practice 1: Minimize complexity. IT is a tough world. We're ignored as long as everything is working well, and we're in trouble when things start going wrong. In most cases, we have accountability, but not control. The best we can do is to minimize the complexity of the solutions we must support. At least then we have a fighting chance.
This is where the current move toward consolidation and standardization comes from. I've mentioned before how a single administrator can manage a far larger environment if it's standardized. This is also an outcome of tiered storage models--we can stick with a single technology for each of our few tiers of offerings.
Unfortunately, minimizing complexity is the antithesis of a true hacker's heart. Who wouldn't want to play with the latest gadgets? Why not build an elegant interconnected system? But these solutions fail our tests--they're risky, unusual and just plain imprudent.
Best practice 2: Use the right tool for the job. As any woodworker or mechanic can tell you, it pays to use the right tool for any job. Even if it works, it's just a bad idea to force a network-attached storage (NAS) filer to look and act like a SAN.
You see, this best practice is trumped by the first. It's better to minimize complexity and stick to what you know than to press for the optimum solution every time. It wouldn't be right to introduce a SAN to an all-NAS environment for a single database. In that case, DAFS, or another NAS protocol, would be the right solution. But for most people, this is still akin to driving in a screw with a hammer.
Best practices: Consolidate file servers into NAS boxes. Don't use oddball square-peg technologies.
Best practice 3: Prepare for failure. The hacker in me wants to build everything so it will work, but the realist in me knows that I have to make it twice as good. You can't prepare for every eventuality, but you can look for the probable points of failure and harden them.
Many of the fundamental best practices in storage come down to this. The redundancy of RAID 1, the widespread adoption of multipathing software and the use of redundant SAN fabrics are all examples of preparing for the worst. We wouldn't have adopted these practices if we didn't know that failures happen with alarming regularity and that storage is extremely sensitive to failure.
Best practices: Build redundant everything: SAN fabrics, data centers, network links. Rotate tapes off site daily. Have spare parts handy. Don't listen when someone claims that maintenance won't affect production.
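The case for redundant fabrics, links and spares comes down to simple probability. A minimal sketch of that arithmetic, assuming independent failures (a real simplification -- fabrics can share power, firmware and human-error failure modes) and illustrative availability figures of my own choosing:

```python
# Hypothetical illustration: how much availability redundancy buys.
# The 99% single-component figure is an assumption for the example,
# and independence of failures is assumed -- often optimistic.

def combined_availability(single: float, copies: int) -> float:
    """Availability of `copies` redundant components, any one sufficing."""
    return 1 - (1 - single) ** copies

# One fabric at 99% availability means roughly 3.7 days of downtime a year.
print(f"{combined_availability(0.99, 1):.4%}")  # → 99.0000%

# A second, independent fabric cuts that to about 53 minutes a year.
print(f"{combined_availability(0.99, 2):.4%}")  # → 99.9900%
```

The same reasoning motivates RAID 1 mirrors and dual-pathed hosts: each extra independent copy multiplies the unavailability by the (small) chance of a single failure.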
Best practice 4: Align expectations with reality. User A has 100GB of data to be restored. So you tell him you can restore it in 10 hours. But what if users B through G come asking for their data at the same time? It's time to adjust everyone's expectations.
This is where concepts like SLAs and business impact analyses come from. Although we don't have real control over user demands, we can at least control their expectations by demonstrating what we can and can't do. And it's up to us to do this in words that they understand, rather than through technical jargon.
Best practices: Write out your SLAs. Get a reporting tool to help you understand the reality of your infrastructure. Don't let your users labor under their misconceptions of your capabilities.
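The restore scenario above is easy to put numbers to. A minimal sketch, using the article's 100GB-in-10-hours figure and my own assumption that simultaneous restore requests queue up behind a single sequential drive:

```python
# Hypothetical sketch of the expectation gap. The 100GB/10-hour rate
# comes from the scenario in the text; the single shared restore
# drive (requests served one after another) is an assumption.

THROUGHPUT_GB_PER_HOUR = 100 / 10  # user A's quoted restore rate

def completion_hours(requests_gb: list[float]) -> list[float]:
    """Finish time of each restore when requests share one sequential drive."""
    finished, elapsed = [], 0.0
    for size in requests_gb:
        elapsed += size / THROUGHPUT_GB_PER_HOUR
        finished.append(elapsed)
    return finished

# Users A through G each ask for 100GB at the same moment.
times = completion_hours([100] * 7)
print(times[0])   # user A still sees the promised 10 hours...
print(times[-1])  # ...but user G waits 70.
```

That 10-versus-70-hour gap is exactly what an SLA should spell out before the bad day arrives, not after.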
This was first published in August 2004