F_port vs. FL_port protocol: What's the real-world difference?
This is a two-part question.
Please assume that a prospect is researching a modular vs. monolithic storage solution for his FC SAN.
Please also assume that -- except for one key difference -- the prospect found a modular system and a monolithic system that, for all intents and purposes, appeared to address all his business and application needs.
If one of the storage solutions communicated out to hosts through a FC switch using true switch/fabric/mesh F_port protocol and the other storage solution communicated out to hosts using classic arbitrated loop FL_port protocol, what would be the real-world differences and impacts?
(No need to talk about the oft-heard theoretical metrics of FC-AL loop vs. fabric, like 127-node limit in loop vs. thousands of nodes in fabric, etc. I'm looking for practical, in-the-trenches differences.)
Part 2: What if the data sheet for the monolithic storage array in Question 1 above states that it supports only FC-AL to hosts (i.e., it doesn't indicate support for fabric; put another way, it doesn't indicate support for F_port protocol)?
And what if the same data sheet for the same monolithic system states that it is internally architected with switching technology that supports a Fibre Channel-based fabric of, say, 64 internal point-to-point paths between the disk, cache and server interface controllers?
Will this monolithic storage device support true fabric/F_port protocol in an FC SAN environment? Does the "internal, fabric-based switching technology" take the place of an external fabric switch? Or, does the FC SAN communications protocol remain FL_port as opposed to true F_port protocol?
OK, I'll have a go at this.
I'm glad the prospect was looking at both monolithic and modular. Using both types of storage in a SAN allows a scaled service level agreement for access to the different classes of storage within the same SAN. An application with a higher SLA need can be provisioned with the higher-end storage, and an application with fewer requirements can get away with a lower cost-per-MB SLA by provisioning storage from the modular solution.
Most FC disk-based storage arrays, whether modular or monolithic, use FC-AL internally to connect the disks to their internal controllers. There are three types of disks available in monolithic storage arrays today:
FC-AL -- Fibre Channel, dual-ported, active/active
SSA -- IBM-only solution
SCSI -- dual-ported, active/passive
Using true Fibre disks allows for the entire subsystem to be more like a "packet switched" network using Fibre Channel frames throughout. So the "backend" of the subsystem you propose would use FC-AL internally and then connect through the cache and internal switch to the front end processors which could use either FC-AL or FC-SW protocols for connection to the external fabric.
So what is the impact, then, of setting the storage port up as an F_port versus an FL_port?
If you're connecting the subsystem to a hub, you would set the port as an L_port, not an FL_port. If you were connecting to a switch, you would set the port up as an F_port.
In the proposed storage arrays you are talking about, if one used FC-AL and the other used FC-SW, then there would be an impact on how you architect the connections. Storage ports will only be either L_port or F_port. It's the switch port that becomes an FL_port when connecting to a loop based solution.
This means the L_port-only storage array could only be connected to a switch that is auto-sensing and has the capability to map FC-AL to FC-SW internally to talk to a loop (like Brocade's QuickLoop), or you would need a hub. The storage array that can connect via F_port protocol would not have this problem.
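The pairing rules above can be summarized in a small sketch. This is illustrative only, not any vendor's CLI or API; the function name and mode strings are mine:

```python
# Sketch (assumption: simplified model) of the port-pairing rules described
# above. A storage port runs as an L_port on a loop; the switch side of a
# connection comes up as an F_port (fabric login) or an FL_port (loop).

def switch_side_port(storage_mode: str, attached_to: str) -> str:
    """Return how the connection comes up.

    storage_mode: "loop-only" (FC-AL) or "fabric" (FC-SW capable)
    attached_to:  "hub" or "switch"
    """
    if attached_to == "hub":
        # Hubs are loop devices; the storage port joins as an L_port
        # and no switch port is involved at all.
        return "L_port on a shared loop"
    if storage_mode == "fabric":
        # Point-to-point fabric login: the switch port is an F_port.
        return "F_port (full fabric login)"
    # Loop-only storage on a switch needs an auto-sensing port that can
    # translate loop traffic into the fabric (a QuickLoop-style feature).
    return "FL_port (switch maps FC-AL to the fabric)"

if __name__ == "__main__":
    for mode in ("loop-only", "fabric"):
        for target in ("hub", "switch"):
            print(f"{mode:9s} -> {target:6s}: {switch_side_port(mode, target)}")
```

The point of the sketch: only the loop-only array forces you into the FL_port (or hub) path; a fabric-capable array always gets a plain F_port.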
The only other difference the user (host) will experience in the connection (besides the connectivity issues you mentioned in your question) is the slight overhead of mapping fabric (FC-SW) traffic to FC-AL through the switch port. Assuming the switch's FL_port is communicating with a single storage array port and not a bunch of devices sharing a hub, performance to the disks behind that port should be fine.
-- Answered by SAN expert Christopher Poelker