Q


We are doing some testing on two SAN islands linked by a CWDM link. However, we have observed the performance dropping as the distance gets longer. What could be the reason?

Performance measurement:

- 0 km (3 meters): OK
- 12.5 km: 10-15% lower
- 25 km: 20-30% lower

Possible trouble spots:

1. I thought the FC 10 km limitation only applied to the FC optics itself.

2. For the CWDM system, each lambda is 2.5G, so each channel can actually take two 1G FC links and "mux" them onto a 2.5G lambda for the CWDM filter. However, the CWDM system "should" be a transparent box that passes the FC lambdas through, as the vendor claims.

3. The HBA/RAID controllers and disk enclosures are all 2G but use multi-mode optics.


A

The problem, believe it or not, is the speed of light.

Fibre Channel uses optical fibres for data transmission. Data is transmitted over the cables using the serial SCSI-over-FC protocol. This means that every bit of that block of data you just transmitted gets converted into an optical pulse of light and sent over the cables, through your CWDM multiplexer, and over to the other SAN island. (WDM, or wavelength division multiplexing, is a technology that "multiplexes" multiple sessions over a single fibre cable by slicing the light into many wavelengths, each of which can carry its own data stream at the same time.)

Light travels at around 300,000 km/second in a vacuum, and at around 200,000 km/second over a single-mode fibre cable.

That gives us a "latency" (the time it takes for the light pulse to travel across the cable) of around 50 microseconds per 10 km. Therefore, latency over long-haul transmission cables (9-micron single-mode fibre, or "dark fibre" as it is called) is approximately 1 millisecond for every 200 km traveled.
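
To put numbers on it, here is a quick back-of-the-envelope sketch (Python, purely illustrative) that works out the one-way and round-trip propagation delay for the distances in the question, using the 200,000 km/second figure above. The round trip matters because a SCSI exchange is not complete until the response makes it back across the link.

    # Propagation delay over single-mode fibre, assuming light covers
    # roughly 200,000 km/second in glass (about 2/3 of its vacuum speed).
    SPEED_IN_FIBRE_KM_PER_S = 200_000

    def propagation_delay_us(distance_km: float) -> float:
        """One-way propagation delay in microseconds."""
        return distance_km / SPEED_IN_FIBRE_KM_PER_S * 1_000_000

    for km in (0.003, 12.5, 25, 100):
        one_way = propagation_delay_us(km)
        # The application effectively waits for the round trip,
        # since a write is not complete until the response returns.
        print(f"{km:7.3f} km: one-way {one_way:8.3f} us, round trip {2 * one_way:8.3f} us")

At 25 km that works out to roughly 250 microseconds of added wait per round trip, on top of whatever time the storage itself takes.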

As your distance increases, so does the latency of getting your data to the other side. We have not found a way to speed up light yet. (By the time we do, we will all be carrying around "phasers" and saying things like "beam me up, Scotty!")

Data transmission over long distances affects the performance of production applications, especially when that transmission is done "synchronously." A better method is to use asynchronous transmission, where you "spoof" the application into thinking the I/O is complete while the actual data transmission happens behind the scenes in the hardware.

A number of storage vendors have this capability, mostly built into "replication" firmware.
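
As a rough illustration only (a toy model, not any vendor's actual replication firmware; the class and method names here are invented), the difference between the two approaches looks something like this:

    import queue
    import threading

    class Replicator:
        """Toy model of synchronous vs. asynchronous remote writes.
        'send_to_remote' stands in for the long-haul transfer and its latency."""

        def __init__(self, send_to_remote):
            self._send = send_to_remote
            self._pending = queue.Queue()
            threading.Thread(target=self._drain, daemon=True).start()

        def write_sync(self, block):
            # The application blocks for the full round trip to the remote island.
            self._send(block)
            return "complete"      # acknowledged only after the remote ack

        def write_async(self, block):
            # The application is "spoofed": the I/O is acknowledged right away,
            # and the data moves behind the scenes.
            self._pending.put(block)
            return "complete"      # acknowledged before the data has travelled

        def _drain(self):
            while True:
                self._send(self._pending.get())

The trade-off, of course, is that with asynchronous replication the remote copy lags slightly behind the production copy.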

Connecting two SAN islands together using wavelength division multiplexers is usually only viable up to about 100 km. Make sure the switches you are using include support for "extended fabrics." Brocade has an optional license for this that increases the total number of buffer credits available for your data. McData includes this capability with its switches. You need enough buffer credits to "fill up the pipe," if you will. Sixty-four buffer credits should be the minimum when connecting islands together.
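
If you want a rough way to size that number, the sketch below (my own arithmetic, assuming a 2,148-byte full-size FC frame, 8b/10b encoding and the 200,000 km/second figure above) estimates how many buffer-to-buffer credits it takes to keep a link of a given length full of frames:

    import math

    FIBRE_SPEED_KM_PER_S = 200_000   # light in single-mode glass, as above
    FRAME_BYTES = 2148               # assumed full-size Fibre Channel frame
    ENCODED_BITS_PER_BYTE = 10       # 8b/10b encoding on the wire

    def buffer_credits_needed(distance_km: float, line_rate_gbaud: float) -> int:
        """Estimate the buffer-to-buffer credits needed to keep a link full."""
        frame_time_s = FRAME_BYTES * ENCODED_BITS_PER_BYTE / (line_rate_gbaud * 1e9)
        round_trip_s = 2 * distance_km / FIBRE_SPEED_KM_PER_S
        # One credit is held for every frame still awaiting its R_RDY,
        # so the pipe stays full only if the credits cover the whole round trip.
        return math.ceil(round_trip_s / frame_time_s)

    for km in (12.5, 25, 100):
        print(f"{km:6.1f} km at 2G FC: ~{buffer_credits_needed(km, 2.125)} credits")

By this estimate, sixty-four credits keeps roughly 60 to 65 km of 2G fibre full; any longer and the link sits idle waiting for credits to come back, no matter how good the optics are.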

Chris

Editor's note: Do you agree with this expert's response? If you have more to share, post it in one of our discussion forums.


This was first published in August 2002