How many HBA adapters are required in a SAN environment for backup and restore over the SAN? Is it true that the best approach is to dedicate one HBA solely to backup?
Determining how many host bus adapters (HBAs) you should have in each system is fairly simple. Almost all SAN installations these days include at least two adapters in each server. Two HBAs let your operations staff take down one half of the SAN at a time for maintenance, and most SANs have at least two fabrics for failover. So if most installations have two HBAs, that's the way to go, right? Well, it may be the way most SANs are set up, but once you add backup to the mix, you have to start being careful.
The whole idea behind backup is to:
1) Get your data safe
2) Get your data safe while not impacting production
3) Get it done as fast as possible
When you design a backup solution, getting rid of bottlenecks is a major priority.
SAN backup lets you move your backup stream off the corporate network, and onto the faster SAN. The problem with SAN backup shows its face when you start building out your SAN environment and start adding more and more servers to the fabrics.
Let's say you have two 64-port director-class switches in your SAN, giving you two 64-port fabrics. Let's also say you're using an enterprise-class storage array that shipped with 16 Fibre Channel ports. This gives you eight storage ports per fabric, which allows a "fan-in ratio" of eight servers for every storage port under this configuration.
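The fan-in arithmetic above can be sketched in a few lines. This is a minimal illustration using the example figures from the text (64-port directors, a 16-port array, two fabrics), not a sizing recommendation:

```python
# Fan-in ratio sketch using the example figures from the text.
director_ports_per_fabric = 64      # one 64-port director per fabric
array_fc_ports = 16                 # total Fibre Channel ports on the array
fabrics = 2

storage_ports_per_fabric = array_fc_ports // fabrics
fan_in_ratio = director_ports_per_fabric // storage_ports_per_fabric

print(storage_ports_per_fabric)     # 8 storage ports per fabric
print(fan_in_ratio)                 # 8 servers per storage port
```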
When your SAN first started out and you only had a few servers per director, everything was fast. Your backup library could back things up quickly, since your disk "feed speed" (the speed at which you can read from your disks) was fast enough to keep your tape drives streaming. If the feed speed is too slow, the tape buffers empty, and the library has to constantly reposition the tape over the heads as it writes.
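The streaming condition described above boils down to a simple comparison. This is a hedged sketch; the throughput numbers below are illustrative assumptions, not figures from the text:

```python
def drive_streams(disk_feed_mb_s: float, drive_native_mb_s: float) -> bool:
    # The drive keeps streaming only while the disks can refill its buffer
    # at least as fast as the heads write; otherwise the buffer empties
    # and the drive must stop and reposition the tape.
    return disk_feed_mb_s >= drive_native_mb_s

print(drive_streams(80.0, 40.0))    # True: feed keeps the drive streaming
print(drive_streams(12.5, 40.0))    # False: buffer empties, tape repositions
```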
As you add more servers and share more and more bandwidth at the storage ports, backup starts getting slower and starts impacting the other servers on those ports. If you shared a single 100MB-per-second port among eight servers and tried to back them all up at once, there would be contention for the port's bandwidth. If one server on that port were running production while you were backing up another on the same port, you would see an impact on production performance. This is why adding a third HBA to each server, plus another fabric just for backup, makes sense.
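The contention scenario above is easy to quantify, assuming all eight servers stream through the port simultaneously and bandwidth divides evenly (a simplification):

```python
port_bandwidth_mb_s = 100    # a single storage port, per the example above
servers_per_port = 8         # the 8:1 fan-in ratio

# Worst case: all eight servers back up through the same port at once.
per_server_mb_s = port_bandwidth_mb_s / servers_per_port
print(per_server_mb_s)       # 12.5 MB/s per server instead of the full 100
```

At 12.5 MB/s per server, the feed speed can drop below what a tape drive needs to keep streaming, which ties back to the buffer-emptying problem above.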
Not many people have the budget for a third backup fabric, though. As an alternative, you can connect your tape library to only one fabric and use that fabric as the backup fabric during the day. You can use zoning in the fabric to make sure your backup traffic does not impact production servers. Another alternative is to add multiple HBAs only to your backup server and keep a couple of storage ports open just for backup. You can then use array-based snapshots to make copies of production disks and assign them to the backup ports. The backup server can mount the snapshots and back them up.
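The snapshot-based flow described above can be outlined in code. The class and method names here are illustrative stand-ins, not a real array or backup API; every vendor exposes these steps differently:

```python
# Hypothetical outline of the array-snapshot backup flow. All names are
# placeholders for vendor-specific tooling.

class Array:
    def create_snapshot(self, lun: str) -> str:
        # Point-in-time copy; production I/O to the source LUN continues.
        return f"snap-of-{lun}"

    def map_to_backup_ports(self, snap: str) -> None:
        # Present the snapshot on the storage ports reserved for backup,
        # keeping the traffic off the production ports.
        print(f"{snap} presented on dedicated backup ports")

    def delete_snapshot(self, snap: str) -> None:
        print(f"{snap} deleted")

class BackupServer:
    def backup_to_tape(self, snap: str) -> None:
        print(f"backing up {snap} to tape over the SAN")

def backup_via_snapshot(array: Array, server: BackupServer, lun: str) -> None:
    snap = array.create_snapshot(lun)
    array.map_to_backup_ports(snap)
    server.backup_to_tape(snap)
    array.delete_snapshot(snap)

backup_via_snapshot(Array(), BackupServer(), "prod-lun-01")
```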
Editor's note: Do you agree with this expert's response? If you have more to share, post it in our discussion forums.