Feature

SAN pioneer: start small, but smart

Four tips for SAN scaling

1. Even if you only have a few switches and arrays, make sure they're connected in a way that can scale. Daisy chain at your own risk.
2. Your SAN is only as good as your backup capability.
3. Invest in the physical plant (cable trays, racks, etc.) to make expansion easy.
4. Get good management tools in place as soon as possible.

Building your first storage area network (SAN) may seem like the biggest hurdle to realizing the benefits of storage networking, but if you don't build it correctly, you may find that scaling your SAN infrastructure is even more difficult.

That's what we learned at Intuit Inc. when we decided to implement a SAN three years ago. With business requirements and storage doubling - sometimes tripling - every year, the advantages of achieving greater storage resource utilization through centralization, consolidation and availability were incentive enough to go ahead and be one of the early adopters of SAN technology. As our SANs grew from around 20TB, 128 ports and 60 DLT tape drives to approximately 200TB, 900 switch ports and 140 DLT drives, we encountered unforeseen problems that can plague you if you're not prepared.

One of the challenges was sharing SAN resources and achieving 100% utilization while trying to avoid both high costs and a large team to manage the SAN. We also had to figure out how to protect our initial investment while expanding - you don't want to have to throw out the infrastructure you built when you were relatively small in order to expand.

You can avoid these landmines by not boxing yourself in with a SAN design that can't scale effectively. Understanding what that means concretely, however, is far from obvious.

The right stuff
SAN veterans can frequently be heard muttering "need more tools." You may think that's only a concern for those with large, complex environments, but bringing the right tools in early can yield immediate benefits in several areas.

Interoperability. When adding hardware or software to the SAN, or upgrading what's already there, you need to know the firmware and driver versions of every switch, storage array and HBA deployed so you can verify the combination is supported. Manually checking HBA firmware and driver versions server by server in a large SAN spread across multiple networks with different security levels quickly becomes tedious, so we needed a tool that could produce an accurate report. Without that information, any change to the environment puts the network at risk.
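To give a feel for what such a report involves, here's a minimal Python sketch, assuming an inventory of servers with their HBA model, firmware and driver versions has already been collected and a support matrix is on hand. The server names, models and version strings below are hypothetical examples, not output from any real tool.

    # Flag HBA firmware/driver combinations that fall outside a supported matrix.
    # Inventory entries and the matrix below are hypothetical examples.
    inventory = [
        {"server": "db01", "hba_model": "LP9002", "firmware": "3.90a7", "driver": "5.2.1"},
        {"server": "web03", "hba_model": "LP9002", "firmware": "3.82a1", "driver": "5.0.3"},
    ]

    # Supported (firmware, driver) pairs per HBA model
    supported = {
        "LP9002": {("3.90a7", "5.2.1"), ("3.90a7", "5.1.0")},
    }

    def unsupported_hbas(inventory, supported):
        """Return servers whose firmware/driver pair isn't in the support matrix."""
        return [h for h in inventory
                if (h["firmware"], h["driver"]) not in supported.get(h["hba_model"], set())]

    for h in unsupported_hbas(inventory, supported):
        print(f"{h['server']}: {h['hba_model']} fw {h['firmware']} / drv {h['driver']} unsupported")

Run against a complete inventory, a report like this shows at a glance which servers need attention before the next change is made.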

Planning. When planning scheduled maintenance on your SAN, you need to know the downstream dependencies so you can schedule downtime where it's unavoidable, or fail over the affected paths in each fabric one at a time and avoid downtime altogether. The critical issue is identifying which applications will be disrupted by a break in the fabric. A good tool spares you the manual work of determining the paths for each server and which storage arrays serve it.
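As a rough illustration, here's a minimal Python sketch of that dependency check, assuming each server's paths have been recorded as (fabric, array) pairs. The hosts, fabrics and arrays named here are hypothetical.

    # List servers that lose all connectivity if a given fabric is taken down.
    # Path data below is illustrative, not from a real discovery tool.
    paths = {
        "erp01":  [("fabric_a", "array1"), ("fabric_b", "array1")],
        "mail02": [("fabric_a", "array2")],   # single-fabric host
    }

    def disrupted_by(fabric_down, paths):
        """Servers with no surviving path once fabric_down is offline."""
        return [srv for srv, plist in paths.items()
                if all(fabric == fabric_down for fabric, _ in plist)]

    print(disrupted_by("fabric_a", paths))   # ['mail02'] -> schedule downtime or fail over first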

Scaling. When implementing a SAN or redesigning your current one, it helps to have a visual diagram of your environment. Without a network node diagram, it's hard to scale or redesign the current architecture effectively.
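If you don't have a tool that draws the diagram for you, even a small script can turn connection records into something renderable. Here's a minimal Python sketch that emits Graphviz DOT text from a list of connections; the node names are made up for illustration.

    # Emit a Graphviz DOT description of the SAN from (node, node) connections;
    # `dot -Tpng san.dot -o san.png` can then render it as a diagram.
    connections = [
        ("erp01", "edge_sw1"), ("edge_sw1", "core_sw1"), ("core_sw1", "array1"),
        ("mail02", "edge_sw2"), ("edge_sw2", "core_sw1"),
    ]

    def to_dot(connections):
        lines = ["graph san {"]
        for a, b in connections:
            lines.append(f'    "{a}" -- "{b}";')
        lines.append("}")
        return "\n".join(lines)

    print(to_dot(connections))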

Reclamation. As projects are added and removed and applications are modified, storage can sit unused or be retired without ever being reclaimed. Without a reporting tool to track the allocation and usage of the disks in the array, big dollars can go to waste.

There's also money to be saved on storage that's allocated and in use for a particular project: compare the size of the application to the total allocation and calculate the percentage actually used. This is especially true for databases.
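The arithmetic is simple enough that a small script can surface reclamation candidates. Here's a minimal Python sketch, with made-up project names, figures and a 40% utilization threshold chosen purely for illustration:

    # Report allocation efficiency per project and flag reclamation candidates.
    allocations_gb = {"billing_db": 500, "dev_sandbox": 300, "web_logs": 200}
    used_gb        = {"billing_db": 450, "dev_sandbox": 60,  "web_logs": 0}

    RECLAIM_THRESHOLD = 0.40   # flag anything under 40% utilized (arbitrary example)

    for project, alloc in allocations_gb.items():
        used = used_gb.get(project, 0)
        pct = used / alloc if alloc else 0.0
        flag = "  <- reclamation candidate" if pct < RECLAIM_THRESHOLD else ""
        print(f"{project}: {used}/{alloc} GB ({pct:.0%}){flag}")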

Avoid false economies
Our SAN implementation began as a few isolated monolithic and modular storage arrays with redundant fabrics made up of a few switches. Unix servers running a mix of operating systems were connected mostly to the monolithic arrays, while Intel servers were connected to the modular arrays. The modular arrays were less expensive, but at the time they lacked the availability, caching and multiple mirror copy capabilities of the monolithic arrays, so they were mostly used for smaller applications such as databases running on Intel platforms. Although the software for the modular arrays eventually became competitive with the features of the monolithic arrays, we continued to use the monolithic arrays for the most critical applications.

Ultimately, decisions at a higher level forced us to adopt a method for replicating data, which meant moving more applications to monolithic storage and scrapping some of the modular systems. Always try to anticipate your future needs when you choose your primary storage (see "Scaling backup").

Some of our initial SAN implementations were performed by adding Fibre Channel host bus adapters (HBAs) to servers and migrating the direct-attached servers from maxed-out arrays to new ones, with Fibre Channel switches placed in between. These isolated SAN islands were designed and laid out in a simple fashion. Management was manual, but relatively easy: a few Excel spreadsheets showed the switch and disk configurations for each server. The infrastructure was nothing more than several strands of fiber laid throughout the data center under the floor in the network trays. Switches were racked and located centrally between the servers and storage. Backups ran daily. Soft and hard zones were configured, and our SAN implementations were a success.

This initial configuration worked well while things were relatively small and isolated. However, some of the benefits of a SAN weren't being fully utilized in this design. New servers were added to these SAN islands, but once we grew beyond the capacity of the switches or arrays in the initial design, the various components of the SAN started to become obstacles. One by one, each component needed to be addressed.

Design your SAN with a topology that scales regardless of how small you initially start out. Spending money up front will save you both soft and hard costs down the road.

The soft costs saved include the time it takes to manage, redesign and then implement a core-edge topology later down the road. The hard costs saved come from protecting your initial hardware investment over a longer period.

You can protect your initial investment if you correctly anticipate faster hardware speeds for tape, switches, servers and storage. With speeds increasing and the ability to create trunks between switches, you'll have more flexibility if you've designed an architecture that lets you move older and slower technology out to the edge and put the newer, faster hardware at the core. You'll also reduce the amount of downtime you experience in the future. With our SAN islands, we had to bring down the fabric in order to merge islands - and perhaps the servers as well - to bring firmware levels in sync, or upgrade them to the latest version to support more or newer drives.

We found it was better to schedule downtime for most major changes to a SAN. Given the infancy of SAN technology at the time, interoperability problems, along with older versions of software, firmware and drivers, could result - and had resulted - in unplanned outages. Ensuring data integrity and uptime for our customers was the main objective, so scheduling downtime for maintenance was sometimes necessary.

SAN maturity and issues with interoperability have improved, so you may not need to bring everything down to make changes now.

But without the right architecture, you may be forced into awkward configurations just to utilize all the available resources across multiple islands. In our case, we sometimes ended up with small switches daisy-chained to each other through a single inter-switch link (ISL) to achieve this. As a result, our SAN was vulnerable to single points of failure.
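A quick sanity check for that kind of exposure is to count the ISLs between each pair of switches and flag any pair joined by only one link. Here's a minimal Python sketch, with a hypothetical link list:

    # Flag switch pairs connected by only one ISL, i.e. a single point of failure.
    from collections import Counter

    isls = [
        ("edge1", "core1"), ("edge1", "core1"),   # trunked pair, two ISLs
        ("edge2", "core1"),                       # only one ISL
    ]

    link_counts = Counter(tuple(sorted(pair)) for pair in isls)

    for (a, b), count in link_counts.items():
        if count < 2:
            print(f"single ISL between {a} and {b}: single point of failure")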

This was first published in January 2003
