As enterprises entrust more and more of their data to SANs, high availability becomes increasingly important. According to the Fibre Channel Industry Association (FCIA), there are a number of best practices for building a high-availability SAN.
Redundancy is the foundation of high availability. That includes having dual or alternate paths from HBAs to devices, and dual switch fabrics or directors. Remote mirroring or replication protects data and allows quick recovery of lost data. Although mirroring and replication can be done on-site, storing the data at a remote site offers more protection in the event of a disaster. Beyond the hardware, a high-availability SAN must be designed for the job. That includes intelligent use of LUN or volume mapping and mirroring, as well as path management for HBA failover.
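The failover idea behind dual paths can be sketched in a few lines of code. This is an illustrative model only, not a real multipath driver API: the `Path` and `MultipathDevice` names are invented for the example, which shows a host tracking redundant paths to the same LUN and switching to an alternate path when the active one fails.

```python
# Hypothetical sketch of multipath failover: the host keeps a list of
# paths (HBA -> switch fabric) to the same LUN and fails over to an
# alternate path when the active one goes down. Names are illustrative.
from dataclasses import dataclass

@dataclass
class Path:
    hba: str          # host bus adapter on the server
    fabric: str       # switch fabric the path traverses
    healthy: bool = True

class MultipathDevice:
    """A LUN reachable over several redundant paths."""

    def __init__(self, lun, paths):
        self.lun = lun
        self.paths = paths

    def active_path(self):
        # Return the first healthy path, or None if all have failed.
        for path in self.paths:
            if path.healthy:
                return path
        return None

    def fail(self, hba):
        # Mark every path through the given HBA as failed.
        for path in self.paths:
            if path.hba == hba:
                path.healthy = False

# A LUN with dual paths through separate HBAs and separate fabrics:
dev = MultipathDevice("lun0", [
    Path("hba0", "fabric-A"),
    Path("hba1", "fabric-B"),
])
print(dev.active_path().hba)   # hba0 carries I/O initially
dev.fail("hba0")               # hba0 or fabric A goes down
print(dev.active_path().hba)   # I/O fails over to hba1
```

In a real deployment this logic lives in multipathing software on the host, but the principle is the same: because each path uses a separate HBA and a separate fabric, no single component failure cuts the host off from its storage.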
Finally, high availability involves more than the network. Clustering servers and hosts improves reliability by enabling failover if the server or host has a problem.
Obviously all this costs money. As a first approximation, a high-availability SAN will cost at least twice as much as a simple SAN of the same capacity because of the redundancy. Often SAN administrators have to work with users to decide what is most important and what doesn't need high availability. Some measures, such as mirroring and mapping the LUNs or volumes, are relatively inexpensive. Other parts of a highly available SAN, such as dual switch fabrics, are quite costly and might be implemented only for the most critical data.
A white paper discussing this and other aspects of SAN scalability is available from the Fibre Channel Industry Association (FCIA) at data.fibrechannel-europe.com/technology/whitepapers/060202_9.html.
Rick Cook has been writing about mass storage since the days when the term meant an 80K floppy disk. The computers he learned on used ferrite cores and magnetic drums. For the last twenty years he has been a freelance writer specializing in storage and other computer issues.
This was first published in November 2010