# Fibre Channel, part 2: Calculating data speed over distance

## An overview of the basic components of Fibre Channel, how it works and some basic math for running FC over long distances.

The following is the conclusion of this two-part series on using Fibre Channel over long distances.

Before you implement a Fibre Channel storage area network (SAN), it's important to understand some of its basic components, buffer credits and how it all works.

A frame in Fibre Channel is a bit like a packet in IP. A Fibre Channel frame is approximately 2 KB in size. If you do some complex mathematics involving the speed of light, you can calculate that, at 1 Gbps, a Fibre Channel frame is something like 4 km long as it runs through optical cable; at 2 Gbps, your frame is squashed to 2 km.
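The back-of-the-envelope version of that calculation can be sketched as follows. This is a minimal sketch: the 2,148-byte full frame size, the 8b/10b encoding overhead and the ~2×10^8 m/s propagation speed in glass are assumptions used for illustration, not figures from the text.

```python
LIGHT_SPEED_FIBER_M_S = 2.0e8   # ~c / 1.5, refractive index of glass assumed ~1.5
FRAME_BYTES = 2148              # assumed: ~2 KB payload plus headers and CRC
BITS_PER_BYTE_ON_WIRE = 10      # 8b/10b encoding puts 10 bits on the wire per byte

def frame_length_km(line_rate_gbps: float) -> float:
    """Physical length one frame occupies in the fibre, in km."""
    bits_on_wire = FRAME_BYTES * BITS_PER_BYTE_ON_WIRE
    serialisation_time_s = bits_on_wire / (line_rate_gbps * 1e9)
    return serialisation_time_s * LIGHT_SPEED_FIBER_M_S / 1000.0

print(frame_length_km(1.0))  # ~4.3 km at 1 Gbps
print(frame_length_km(2.0))  # ~2.1 km at 2 Gbps
```

The results land close to the article's round numbers of 4 km and 2 km; the exact values shift slightly with the assumed frame size and fibre refractive index.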

Using an analogy like cars on a road or trains on a railroad track, you can see that the bandwidth you get from the cable depends on keeping it full of frames, all nose-to-tail down the cable. This is very different from latency, which depends on the length of the cable and, of course, on any delays introduced by the various boxes the frames pass through.

Fibre Channel works through devices talking to each other -- each one telling the other how big a buffer it has. This means each device knows how many frames it can send non-stop. Once the device at the other end has received the data and moved it out of the buffer space, it sends back a signal to say that some space in the buffer is free again. Hence the term buffer credits.

Now, of course, the acknowledgments have to travel all the way back down the cable. This means that to keep the cable full, and so get maximum bandwidth, we need enough buffer credits for a round trip. So a 10 km cable means a 20 km round trip. If we are running at 2 Gbps, each frame is 2 km long, so we need 10 buffer credits to get full bandwidth. Similarly, a 100 km cable would need 100 buffer credits at 2 Gbps.
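Using the article's round numbers (a 4 km frame at 1 Gbps, halving in length as the line rate doubles), the credit requirement is simply the round trip divided by the frame length. A sketch:

```python
import math

def credits_for_full_bandwidth(cable_km: float, line_rate_gbps: float,
                               frame_km_at_1gbps: float = 4.0) -> int:
    """Buffer credits needed to keep the fibre full of frames,
    using the rule of thumb that a frame is ~4 km at 1 Gbps and
    halves in length each time the line rate doubles."""
    frame_km = frame_km_at_1gbps / line_rate_gbps
    round_trip_km = 2 * cable_km
    return math.ceil(round_trip_km / frame_km)

print(credits_for_full_bandwidth(10, 2.0))   # 10 credits for a 10 km link at 2 Gbps
print(credits_for_full_bandwidth(100, 2.0))  # 100 credits for a 100 km link at 2 Gbps
```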

## Limitations to buffer credits

First, when working over long distances, it is really important to think about what you are actually using the long-distance link for. If you are doing a 50 km off-site synchronous replica of your mission-critical database, and the database is undergoing 70% reads and only 30% writes, then the amount of bandwidth you actually need may be a fraction of the 2 Gbps.

So, back to the math: 2 Gbps over 50 km is a 100 km round trip with 2 km-long frames, so we need 50 buffer credits to get an actual 2 Gbps. If we only give the link 25 buffer credits, we can only keep the cable half full. This means that even though we are transmitting at 2 Gbps, we are only sending data half the time, so we get an effective 1 Gbps. Of course, if we only ran the link at 1 Gbps, each frame would be 4 km long, and the same 25 buffer credits would deliver the full 1 Gbps.
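That utilisation argument can be put in code as well -- an idealised sketch, using the same assumed 4 km-at-1 Gbps rule of thumb and ignoring all the real-world factors covered next:

```python
def effective_gbps(cable_km: float, line_rate_gbps: float,
                   credits: int, frame_km_at_1gbps: float = 4.0) -> float:
    """Idealised throughput: the line rate scaled by how full the
    round trip can be kept with the credits available."""
    frame_km = frame_km_at_1gbps / line_rate_gbps
    credits_needed = 2 * cable_km / frame_km
    return line_rate_gbps * min(1.0, credits / credits_needed)

print(effective_gbps(50, 2.0, 50))  # 2.0 -- fully fed 2 Gbps link
print(effective_gbps(50, 2.0, 25))  # 1.0 -- half the credits, data flows half the time
print(effective_gbps(50, 1.0, 25))  # 1.0 -- 25 credits fully feed a 1 Gbps link
```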

In reality, actual results will depend on many factors, some of which are hard to consider. So the best route is to pilot and measure what happens in your actual environment.

There are, of course, other aspects that may limit your use of buffer credits. For instance, if the connection is switch-to-switch, and the design of the switch is such that frames cannot necessarily be taken immediately from the input port into buffer space or out through the output port, then we may not get our buffer space back immediately. Maybe the switch has an oversubscribed backplane; maybe it uses a crossbar, in which case we have to wait for the connection to the other side to be established. These are all complex factors that may prevent you from getting as much use out of your buffer credits as you would expect.

As another example, you may have a slow disk array with limited cache, which cannot take data from the switch fast enough. Buffer credits are then gradually starved back up the path the data is travelling, which could quite easily cause performance problems on a link.

A little-known problem, particularly in a DWDM (dense wavelength division multiplexing) environment, is lost frames causing a loss of buffer credits. If the DWDM system reroutes, it will probably do so fast enough that we do not actually lose our Fibre Channel connection. If the lost frames are data frames, then higher levels of the protocol -- if nothing else, SCSI -- will detect the loss and retransmit, so we won't actually corrupt our data. However, if we lose an acknowledgement, we may never get that buffer credit back. Indeed, ordinary transmission errors can cause this even without DWDM. So, over time, we gradually get fewer and fewer buffer credits, and so less and less bandwidth from the link -- until we reset it by hand.
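As an illustration only -- a toy model, not anything from the Fibre Channel standard -- the slow decay can be expressed in the same terms as the earlier arithmetic, again assuming a 4 km frame at 1 Gbps:

```python
def gbps_after_leakage(cable_km: float, line_rate_gbps: float,
                       start_credits: int, frames_sent: float,
                       ack_loss_rate: float,
                       frame_km_at_1gbps: float = 4.0) -> float:
    """Toy model: each lost acknowledgement permanently removes one
    buffer credit, so throughput decays until the link is reset."""
    credits = max(0.0, start_credits - frames_sent * ack_loss_rate)
    frame_km = frame_km_at_1gbps / line_rate_gbps
    credits_needed = 2 * cable_km / frame_km
    return line_rate_gbps * min(1.0, credits / credits_needed)

# A 50 km, 2 Gbps link that starts with exactly the 50 credits it needs:
print(gbps_after_leakage(50, 2.0, 50, frames_sent=0, ack_loss_rate=1e-9))     # 2.0
print(gbps_after_leakage(50, 2.0, 50, frames_sent=1e10, ack_loss_rate=1e-9))  # 1.6
```

The point of the model is only that a link provisioned with exactly enough credits has no headroom: every lost acknowledgement shows up directly as lost bandwidth.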

Fibre Channel standards bodies are working on a solution that lets devices double-check with each other and so regain these lost credits, but that may be a long way off. On a long-distance link, where every buffer credit may already matter, this is a vital point and therefore an area where it is worth monitoring performance over time.

## Speed negotiation

One tip: my personal view is that on any of these unusual links, if you have a device that can run at different speeds and auto-negotiate -- don't. Set the speed manually instead. There are some interesting little signaling details that can cause problems, and it's simply better to set the link to the speed you intend to use.

Another tip: if you have a 500 m run and a pair of 2 Gbps switches, set the ports in question to 1 Gbps and you'll be within the Fibre Channel specification again.

## Summary

So, there you have it. The reality is that long distance Fibre Channel is actually not all that difficult once you understand a few bits and bobs. The real difficulties come from not being able to be in two places at once (actually quite important for a 100km link), understanding the data flow (server to tape, server to storage, storage to storage, synchronous, asynchronous, etc.) and understanding the nature of the operating system, application and storage array.

Happy photon pushing.

About the author: Simon Gordon is a senior solution architect for McDATA based in the UK. Simon has been working as a European expert in storage networking technology for more than 5 years. He specializes in distance solutions and business continuity. Simon has been working in the IT industry for more than 20 years in a variety of technologies and business sectors, including software development, systems integration, Unix and open systems, Microsoft infrastructure design and storage networking. He is also a contributor to and presenter for the SNIA IP-Storage Forum in Europe.

This was last published in March 2003
