I have concerns about fiber quality, distance and eventual speed if the SAN and most of the devices it serves are geographically separated. I can't find any articles on speed vs. distance, but I know from experience with Cisco that we have problems with short-haul gigabit interface converters (GBICs) at distances over 700 feet and have to go to long haul.
First of all, these are the accepted standards:
Signal loss should be no more than 0.5 dB per connection, and total loss no more than 20 dB.
The longer the distance, the greater the latency. The faster the speed, the shorter the distance allowed.
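The two rules of thumb above can be sketched in a few lines of code. This is only an illustration under stated assumptions: the 0.5 dB per connection and 20 dB total figures come from the standards above, and the roughly 5 microseconds of one-way propagation delay per kilometre of fibre reflects light travelling at about two-thirds the speed of light in glass.

```python
# Rough SAN link-budget and latency sketch (illustrative figures only).
# Assumptions: 0.5 dB loss per connection, a 20 dB total loss budget,
# and ~5 us of one-way propagation delay per km of fibre.

LOSS_PER_CONNECTION_DB = 0.5
TOTAL_BUDGET_DB = 20.0
DELAY_US_PER_KM = 5.0

def link_ok(connections: int, fibre_loss_db: float) -> bool:
    """Check connector loss plus fibre attenuation against the budget."""
    total_loss = connections * LOSS_PER_CONNECTION_DB + fibre_loss_db
    return total_loss <= TOTAL_BUDGET_DB

def one_way_latency_us(distance_km: float) -> float:
    """Propagation delay only; switching and protocol overhead are extra."""
    return distance_km * DELAY_US_PER_KM

# Example: six patch-panel connections and 3 dB of fibre attenuation
print(link_ok(6, 3.0))            # True: 6 dB total, within the 20 dB budget
print(one_way_latency_us(0.4))    # 2.0 us one way for a quarter-mile run
```

The latency figure is why faster link speeds tolerate shorter distances: the same propagation delay consumes a larger fraction of each frame time as the bit rate rises.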
For hooking up the servers, you should have the switches and storage within the same computer room. If you are concerned about disaster recovery and would like to have one computer room back up the other, then a quarter of a mile is probably not far enough. Both buildings would be on the same power grid, and a single storm could span both.
For fire protection, you could run a dark-fibre cable (9-micron single-mode) between the buildings for data replication and keep half of everything at each computer room. If one burns down, you would have a copy of your data in the other, and the remaining servers could take up some of the load for critical applications. This would require your storage arrays or fabric to support some form of data replication.
Since SAN-based management is mostly web-based these days, putting the equipment in one building in a lights-out environment would be possible. With a SAN, one no longer has to open up servers and add disks. All storage management can be done remotely.
If all your servers are in one place and all your storage in another, and each server has two HBAs for path failover, you would need to run 400 cables between the buildings. Spanning that distance may also require more expensive single-mode fibre cables and lasers, which is probably not cost-justified or feasible.
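The cable count above is simple arithmetic worth making explicit. A minimal sketch, assuming the 200 servers implied by the 400-cable figure (the server count is an assumption, not stated in this answer):

```python
# Cable-count arithmetic behind the 400-cable figure.
# Assumption: 200 servers, each with dual HBAs for path failover,
# so every server needs two fibre runs between the buildings.

servers = 200            # assumed server count
hbas_per_server = 2      # dual HBAs for path failover

cables_between_buildings = servers * hbas_per_server
print(cables_between_buildings)   # 400
```

The count scales linearly with servers and HBAs, which is why consolidating switches and storage in one room, and running only a replication link between buildings, is usually the cheaper design.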
This was first published in June 2005