I hoped to overcome this latency by leveraging the extra buffer credits of the Connectrix. Activating those extra BCs is merely a matter of setting one field (the "10-100 km" field) on the Connectrix. That's what I did, but the elapsed time of the job did not improve at all. I tried setting the field on the E-port only, and then on both the E-port and the F-port. Any ideas?
First, did you benchmark the target disk on the local switch (i.e., with no ISL at all) to establish a local baseline, and if so, how, if at all, did it differ from the baseline ISL test? Second, what speed (1 Gb or 2 Gb) are the Fibre Channel ports from the server to the switch, on the ISLs between the switches, and from the switch to the storage? What model of Connectrix are you using? Were you able to note how many buffer credits were actually assigned to each port? How did you simulate the 20 km distance? Was it a spool of 20 km of single-mode fiber (SMF), or was it over an actual circuit?
The Connectrix model number determines how many buffer credits are dedicated or available to each port. Buffer credits have an impact on throughput, particularly as you go farther and faster. A general rule of thumb is about 1 buffer credit per 2 km at 1 Gb, 1 buffer credit per km at 2 Gb, 1 buffer credit per 1/2 km at 4 Gb, and so forth. As you found, DWDM should not introduce any bandwidth degradation until you encounter droop (loss of flow control and loss of protocol efficiency due to a lack of buffers) at longer distances. Buffer credits are about keeping the network pipe full, and are thus a measure of bandwidth. Latency, on the other hand, increases with distance over any network interface or protocol: the speed of light in fiber gives a propagation delay of about 5 microseconds per km, so the round-trip latency over 20 km would be about (5 usec * 20 km [distance]) * 2 [round trip] = 200 usec. You should be able to measure any additional delay introduced by the switches, DWDM, or other components against your baseline tests.
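The rules of thumb above can be sketched as a quick back-of-the-envelope calculation. This is a minimal illustration using only the figures given in this answer (credits scale linearly with link speed; propagation delay is about 5 usec/km); the function names are my own, not a vendor tool.

```python
def buffer_credits_needed(distance_km, speed_gb):
    """Approximate buffer credits needed to keep the pipe full.

    Rule of thumb from the text: ~1 credit per 2 km at 1 Gb,
    1 per km at 2 Gb, 1 per 1/2 km at 4 Gb -- i.e. credits
    scale linearly with both distance and link speed.
    """
    return distance_km * speed_gb / 2


def round_trip_latency_us(distance_km, us_per_km=5):
    """Propagation delay alone: ~5 usec per km, doubled for the round trip."""
    return distance_km * us_per_km * 2


# The 20 km scenario from the question:
print(buffer_credits_needed(20, 2))   # credits needed at 2 Gb
print(round_trip_latency_us(20))      # round-trip delay in usec
```

For the 20 km link in question this gives 20 credits at 2 Gb (10 at 1 Gb) and 200 usec of unavoidable round-trip propagation delay, regardless of how many credits are configured.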
You may also hear about something called dB loss, which impacts the effective transmission distance more than it impacts performance, unless you have a lot of disruptions, breaks and connections in your network. Put simply, dB loss is the attenuation of the optical signal: the weakening of the light beam in the fiber optic circuit caused by cable connectors, cable quality, devices in the circuit and so forth. Buffer credits do not help with latency; rather, they help with bandwidth. Given your environment and application, I'm not sure whether a distance compression device like those from Adva, Ciena, Cisco, CNT, McData, Nortel, etc. would be of benefit or not.
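To make the dB-loss point concrete, here is a hedged sketch of a simple optical link budget. The per-km attenuation and per-connector/splice loss constants are typical assumed values for single-mode fiber, not figures from this answer; check your cable plant's actual specifications.

```python
# Assumed, typical values -- not from this article:
FIBER_LOSS_DB_PER_KM = 0.35   # SMF attenuation near 1310 nm
CONNECTOR_LOSS_DB = 0.5       # per mated connector pair
SPLICE_LOSS_DB = 0.1          # per fusion splice


def link_loss_db(distance_km, connectors, splices):
    """Total dB loss: cable attenuation plus each connector and splice."""
    return (distance_km * FIBER_LOSS_DB_PER_KM
            + connectors * CONNECTOR_LOSS_DB
            + splices * SPLICE_LOSS_DB)


# A 20 km run with two patch-panel connections and two splices:
print(round(link_loss_db(20, connectors=2, splices=2), 1))  # total dB loss
```

The point of such a budget is that every connector, splice or in-line device eats into the optical margin and therefore into achievable distance, which is why dB loss limits reach rather than throughput on an otherwise healthy link.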