SAN performance over distance

By Christopher Poelker

SearchStorage.com

We are a large hospital with two computer rooms in separate buildings about a quarter of a mile apart (let's call them computer room A, which is unmanned, and computer room B, which is manned). There are about 200 servers in computer room A, and we are planning to put in a SAN. I want to put the SAN in room A with the servers, but one of my co-workers wants most of the equipment in room B with the operators.

I have concerns about the quality of the fiber, the distance and the eventual speed if the SAN and most of the devices it services are geographically separated. I can't seem to find any articles on speed vs. distance, but I know from experience with Cisco that we have problems with short-haul gigabit interface converters (GBICs) at more than 700 feet apart and have to go to long-haul.

Good question. It all comes down to speed, distance, latency and signal loss.

First of all, these are the accepted standards:

  • 1 Gbit Fibre Channel using 50 µm multi-mode cable plant = 500 m max
  • 2 Gbit Fibre Channel using 50 µm multi-mode cable plant = 300 m max
  • 4 Gbit Fibre Channel using 50 µm multi-mode cable plant = 150 m max
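
A minimal sketch of those limits as a lookup, using the distances from the list above, shows how they apply to your quarter-mile (roughly 402 m) run:

```python
# Maximum 50 µm multi-mode link distance by Fibre Channel speed,
# taken from the standards listed above.
MAX_DISTANCE_M = {1: 500, 2: 300, 4: 150}  # Gbit/s -> meters

def within_spec(speed_gbit: int, distance_m: float) -> bool:
    """True if a 50 µm multi-mode run is within the rated distance."""
    return distance_m <= MAX_DISTANCE_M[speed_gbit]

# A quarter mile is roughly 402 m: inside spec at 1 Gbit,
# out of spec at 2 and 4 Gbit over multi-mode.
for speed in (1, 2, 4):
    status = "OK" if within_spec(speed, 402) else "out of spec"
    print(f"{speed} Gbit over 402 m: {status}")
```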

Signal loss should be no more than 0.5 dB per connection, and total loss no more than 20 dB.
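
As an illustration of how that budget gets consumed, here is a rough loss-budget check; the fiber attenuation figure (about 3.5 dB/km for 50 µm multi-mode at 850 nm) is an assumed typical value, not part of the guideline:

```python
# Rough Fibre Channel optical loss-budget check (illustrative figures).
MM_ATTENUATION_DB_PER_KM = 3.5   # assumed multi-mode fiber attenuation
LOSS_PER_CONNECTION_DB = 0.5     # guideline: max loss per connection
TOTAL_BUDGET_DB = 20.0           # guideline: max total loss

def link_loss_db(distance_km: float, connections: int) -> float:
    """Estimated end-to-end optical loss for a fiber run."""
    return (distance_km * MM_ATTENUATION_DB_PER_KM
            + connections * LOSS_PER_CONNECTION_DB)

# A quarter mile (~0.4 km) with 4 patch-panel connections:
loss = link_loss_db(0.4, 4)
verdict = "OK" if loss <= TOTAL_BUDGET_DB else "over budget"
print(f"Estimated loss: {loss:.1f} dB -> {verdict}")
```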

The longer the distance, the greater the latency. The faster the speed, the shorter the distance allowed.
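
To put a number on the latency side: light in glass travels at roughly two-thirds the speed of light in a vacuum, which works out to about 5 microseconds per kilometer one way (a rule of thumb, not a figure from the standards above):

```python
# Propagation latency over fiber: ~5 us per km one way. Every
# acknowledged write over Fibre Channel costs at least one round trip.
US_PER_KM_ONE_WAY = 5.0  # assumed rule-of-thumb propagation delay

def round_trip_us(distance_km: float) -> float:
    """Round-trip propagation delay over a fiber run, in microseconds."""
    return 2 * distance_km * US_PER_KM_ONE_WAY

for km in (0.4, 10, 100):  # quarter mile, metro, long haul
    print(f"{km:6.1f} km -> {round_trip_us(km):8.1f} us round trip")
```

At a quarter of a mile the propagation delay is only a few microseconds, so for your two rooms it is the cable plant and optics, not latency, that set the limit.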

For hooking up the servers, you should have the switches and storage within the same computer room. If you are concerned about disaster recovery and would like one computer room to back up the other, then a quarter of a mile is probably not far enough: both buildings would be on the same power grid, and a single storm would span both.

For fire protection, you could run dark fibre (9 µm single-mode) between the buildings for data replication and keep half of everything in each computer room. If one room burns down, you would still have a copy of your data in the other, and the surviving servers could take up some of the load for critical applications. This would require your storage arrays or fabric to support some form of data replication.

Since SAN management is mostly Web-based these days, putting the equipment in one building in a lights-out environment is possible. With a SAN, you no longer have to open up servers to add disks; all storage management can be done remotely.

If all your servers are in one place and all your storage in another, and each server has two HBAs for path failover, you would need to run 400 cables between the buildings. Spanning that distance may also require more expensive single-mode fibre and lasers, which is probably neither cost-justified nor feasible.
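
The cable count is straightforward arithmetic from the numbers in your question. The sketch below also contrasts it with keeping edge switches next to the servers and trunking a few inter-switch links (ISLs) across instead; the ISL count shown is a made-up example, not a sizing recommendation:

```python
# Inter-building cable runs if servers and storage are split across rooms.
servers = 200
hbas_per_server = 2  # dual HBAs for path failover, per the scenario above

direct_runs = servers * hbas_per_server
print(f"Direct server-to-fabric runs between buildings: {direct_runs}")  # 400

# By contrast, edge switches placed with the servers need only the
# trunked ISLs to cross between buildings (example figure only).
isl_pairs = 8
print(f"Trunked ISL fibre pairs with edge switches kept local: {isl_pairs}")
```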

06 Jun 2005