There are several reasons for considering a long-distance storage area network (SAN): You have a large campus environment or separate data centers that you want to link for improved manageability. You have specific disaster recovery or business continuity requirements. Or, maybe you're hearing from vendors who want to demonstrate some unique feature in hopes that it will win them business in a competitive market.
Before you can consider any of these options, you must recognize that applications, operating systems and storage arrays have different requirements. Applications differ in terms of latency and bandwidth. Read and write characteristics vary greatly; some applications read more data than they write, some write in small data blocks, others large, and some have serial activity while others are random. You also have to consider what caching is being done in the server, array or application.
Cross-site replication techniques vary considerably as well. Are you mirroring synchronously or asynchronously? Is the storage array doing the replication, or is it being done by the operating system or application? Is a virtualization appliance handling the replication? Are we making full replicas of data or just snap copies?
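The synchronous/asynchronous distinction above is the one that most directly interacts with distance, because synchronous mirroring makes the host wait on the round trip to the far site. Here is a minimal sketch of that difference; the class names and the single-number link latency are invented for illustration and stand in for a real array's replication engine:

```python
import queue
import threading
import time

class RemoteArray:
    """Stands in for the replication target at the far site."""
    def __init__(self, link_latency_s):
        self.link_latency_s = link_latency_s  # crude model of the WAN round trip
        self.blocks = {}

    def write(self, lba, data):
        time.sleep(self.link_latency_s)
        self.blocks[lba] = data

class MirroredVolume:
    """Toy mirror: 'sync' waits for the remote ack before acknowledging the
    host; 'async' acknowledges at once and replicates in the background."""
    def __init__(self, remote, mode="sync"):
        self.remote = remote
        self.mode = mode
        self.local = {}
        self._q = queue.Queue()
        if mode == "async":
            threading.Thread(target=self._drain, daemon=True).start()

    def _drain(self):
        # Background replication: drains queued writes to the remote site.
        while True:
            lba, data = self._q.get()
            self.remote.write(lba, data)
            self._q.task_done()

    def write(self, lba, data):
        self.local[lba] = data
        if self.mode == "sync":
            self.remote.write(lba, data)  # host write time includes the link
        else:
            self._q.put((lba, data))      # host ack is immediate; RPO > 0
```

The trade-off falls straight out of the sketch: synchronous mode adds the full link latency to every write, while asynchronous mode keeps host latency low at the cost of a window of un-replicated data.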
Lengthy articles could be written on each of these subjects alone, entirely ignoring the mechanics of what also needs to happen in the SAN fabric -- and it is that area I am going to talk about today. Remember, too, to talk to your suppliers about support: just because something works does not mean that everyone will support it.
Speed over distance
A normal shortwave GBIC (gigabit interface converter) or SFP (small form-factor pluggable) will drive 500m on 50-micron cable (less on 62.5-micron) at 1 Gbps, but only 300m at 2 Gbps. If or when 4 Gbps and faster links arrive, these distances will drop still further. Quite simply, the short-wavelength signal blurs over distance, and the faster the link, the shorter the distance at which it can still be read.
A normal long-wave GBIC or SFP will drive 10km on 9-micron (single-mode) cable at either 1 or 2 Gbps. Discussions on 10 Gbps are still happening in the Fibre Channel world. Although Ethernet previously borrowed standards from Fibre Channel, the opposite seems to be happening for 10 Gbps; there may be a six-month lag between 10 Gbps Ethernet and 10 Gbps Fibre Channel.
What if you want to go further than 10km? You need to do some clever optical work to get the data from one end of the cable to the other. There are also some neat bridges, routers and other devices that let us use existing network infrastructure, which already works over long distances. These data links can carry traffic from one SAN island to another, encapsulating our Fibre Channel frames inside IP packets, or converting FCP into iSCSI and back again.
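The encapsulation idea is simple to picture: the FC frame rides as payload inside something the IP network can carry. The sketch below is purely illustrative -- the header fields are invented, and real FC-over-IP protocols such as FCIP define their own, more involved encapsulation format:

```python
import struct

# Invented 6-byte header: 2-byte magic marker + 4-byte payload length.
# This only demonstrates the wrap/unwrap idea, not any real wire format.
MAGIC = 0xFCF0

def encapsulate(fc_frame: bytes) -> bytes:
    """Wrap a raw Fibre Channel frame for transport over a TCP/IP link."""
    return struct.pack("!HI", MAGIC, len(fc_frame)) + fc_frame

def decapsulate(packet: bytes) -> bytes:
    """Recover the original FC frame at the far-side gateway."""
    magic, length = struct.unpack("!HI", packet[:6])
    if magic != MAGIC:
        raise ValueError("not an encapsulated FC frame")
    return packet[6:6 + length]
```

The key property, preserved even in this toy version, is that the frame emerges byte-for-byte identical at the far side, so the two SAN islands behave as if directly connected.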
Approaches for a long-distance SAN
To drive longer distances, there are three main approaches:
- extended wavelength GBICs or SFPs
- optical repeaters/signal boosters
- DWDM (dense wavelength division multiplexing) and related technologies
Extended-wavelength GBICs and SFPs are just like normal GBICs or SFPs, but they have more power and operate on a slightly different, longer wavelength. This allows them to drive a longer link, typically from 20km to 70km depending on specification. I should add that extended-wavelength 2 Gbps SFPs were fairly hard to find last year.
The short-wave, long-wave and extended-wavelength GBIC and SFP distances do have some flexibility. In reality, the receiving GBIC or SFP simply requires a certain signal strength and quality to understand the signal. The standard lengths I mention assume a certain quality of cable, a certain number of joins and so on. This means that if you have a good, clean cable run, your signal may still be usable at longer distances.
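That trade-off between distance, joins and cable quality is really an optical power budget: the transmit power minus all the losses along the run must still exceed the receiver's sensitivity. A minimal sketch follows; the default loss figures (0.35 dB/km fiber attenuation, 0.5 dB per connector, 0.1 dB per splice) are typical illustrative values, not taken from any particular vendor's datasheet:

```python
def link_margin_db(tx_power_dbm, rx_sensitivity_dbm, distance_km,
                   fiber_loss_db_per_km=0.35,
                   connectors=2, connector_loss_db=0.5,
                   splices=0, splice_loss_db=0.1):
    """Remaining optical margin in dB: positive means the receiver
    still sees a usable signal. Loss figures are illustrative."""
    total_loss = (distance_km * fiber_loss_db_per_km
                  + connectors * connector_loss_db
                  + splices * splice_loss_db)
    return (tx_power_dbm - total_loss) - rx_sensitivity_dbm

# A clean 20km run with only two connectors has comfortable margin...
clean = link_margin_db(-3, -20, 20)
# ...while the same optics over a heavily patched 50km run fall short.
patched = link_margin_db(-3, -20, 50, connectors=8)
```

This is why a "10km" optic can sometimes run further on a pristine, fusion-spliced route, and why a patch-panel-heavy campus run can fail well inside the rated distance.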
An optical repeater or signal booster is fairly self-explanatory. It takes the signal from the optical cable and retransmits it as a new, clean and possibly more powerful signal, allowing you to drive much longer distances.
DWDM and related technologies
WDM (wavelength division multiplexing) is not so much about distance as it is about putting multiple signals on one optical cable. Everything I have talked about so far works on the principle of a single device at each end of an optical cable, sending data using a single optical signal.
WDM takes multiple optical signals, putting each one on a unique frequency or color of light, and sends them down a single cable. Since the cost of putting in optical cables is quite high, any technology that allows you to multiply the effective number of cables you have saves you money.
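The "unique frequency per signal" idea maps onto a standardized channel plan: the ITU-T G.694.1 DWDM grid defines channels at fixed spacings (commonly 100 GHz) around a 193.1 THz anchor. A small sketch of how channel numbers translate into wavelengths, with the channel range chosen arbitrarily for illustration:

```python
C_NM_THZ = 299_792.458  # speed of light expressed in nm * THz

def dwdm_channel_nm(n: int, spacing_thz: float = 0.1) -> float:
    """Wavelength (nm) of ITU grid channel n: f = 193.1 THz + n * spacing."""
    f_thz = 193.1 + n * spacing_thz
    return C_NM_THZ / f_thz

# Each tributary (Ethernet, ESCON, Fibre Channel...) gets its own channel,
# and all of them travel down the same physical fiber.
channels = [round(dwdm_channel_nm(n), 2) for n in range(-2, 3)]
```

Note the inverse relationship: higher channel frequencies mean shorter wavelengths, all packed into the low-loss band around 1550 nm where long-haul fiber performs best.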
Since DWDM has become more common, prices have fallen. These days, you can buy an expensive system that runs many signals down a single cable, or a cheaper system that runs fewer signals -- plain WDM, or what is sometimes called CWDM (coarse wavelength division multiplexing). There are also companies that will simply lease you the service.
One benefit of DWDM is that it is, for the most part, protocol agnostic. You can use one box to transmit a multitude of different optical signals and so share the cost between the network team and their Ethernet, the mainframe team and their ESCON, and the open systems storage team and their Fibre Channel.
Another benefit is that, being an external, specially designed box, the DWDM system can push the signal over hundreds or even thousands of kilometers -- and not just over a single cable, but over a complex optical network, which in turn provides resilience, rerouting and so on.
An interesting European spin here is that deregulation of the telecom industry is at different stages in different countries, so the availability of long-distance optical cable and services varies from country to country. Indeed, there are other local differences even within Europe. I once heard that the reason long-distance dark fibre was so hard to come by in Africa, including South Africa, was that as fast as one company put it in, someone else would come along with a truck and a big drum, pull the cable out and sell it on!
This article is the first of a two-part series; the conclusion follows in part two.
About the author:
Simon Gordon is a senior solution architect for McDATA based in the UK. Simon has been working as a European expert in storage networking technology for more than 5 years. He specializes in distance solutions and business continuity. Simon has been working in the IT industry for more than 20 years in a variety of technologies and business sectors, including software development, systems integration, Unix and open systems, Microsoft infrastructure design and storage networking. He is also a contributor to and presenter for the SNIA IP-Storage Forum in Europe.