Feature

Unsnarl port traffic


Ports and Fibre Channel HBAs

Host bus adapters (HBAs) enable servers to connect to switches and arrays. They're typically PCI, PCI-X or, more recently, PCI Express add-in cards with one, two or four Fibre Channel (FC) ports. Servers are connected redundantly to switches or arrays via at least two connections, using Multipath I/O (MPIO). While multiport HBAs enable multipathing through a single HBA, placing the redundant connections on physically separate HBAs eliminates the HBA as a single point of failure, so the host doesn't lose access to its storage when one adapter fails.
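To make the single-point-of-failure argument concrete, here's a minimal sketch in Python, not tied to any particular MPIO implementation; the component names and topology are hypothetical. It flags a host whose paths all run through one physical HBA.

# Hypothetical host-to-array paths; each tuple is (hba, fabric, array_port).
paths = [
    ("hba0", "fabric_a", "ctrl_a_port0"),
    ("hba1", "fabric_b", "ctrl_b_port0"),
]

def hba_is_single_point_of_failure(paths):
    """True if every path runs through the same physical HBA."""
    return len({hba for hba, _, _ in paths}) < 2

print("HBA is a single point of failure:",
      hba_is_single_point_of_failure(paths))

With two ports of the same multiport HBA, e.g. [("hba0", ...), ("hba0", ...)], the check reports a single point of failure, which is exactly the configuration the paragraph warns against.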

Multiport HBAs with more than two ports, such as QLogic Corp.'s SANblade QLA2344, face an oversubscription problem similar to that of high-density blades. The aggregate bandwidth of four FC ports can exceed the capacity of the server's I/O bus, so all four ports won't be able to operate at full speed simultaneously. PCI Express, with its higher bandwidth, can alleviate if not resolve this issue. High port-count HBAs, however, can be beneficial in virtualized environments where multiple virtual servers run on a single physical server. By assigning virtual machines that run high-performance applications and those that run low-performance applications to different HBA ports, the impact of the low-performance workloads on the high-performance ones can be minimized.
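A rough back-of-the-envelope check illustrates the mismatch. The figures below are approximate, and the port speed and slot types are assumptions for illustration, not specifications of the QLA2344.

# Approximate throughput check for a quad-port FC HBA (illustrative figures).
FC_PORT_MBPS = 400                 # ~400 MB/s usable per 4Gb FC port
PORTS = 4

slot_bandwidth_mbps = {
    "PCI-X 64-bit/133MHz": 1064,   # ~1 GB/s shared bus
    "PCI Express x8 (Gen1)": 2000, # ~250 MB/s per lane x 8 lanes
}

aggregate = FC_PORT_MBPS * PORTS   # 1,600 MB/s if every port runs flat out
for slot, bw in slot_bandwidth_mbps.items():
    verdict = "oversubscribed" if aggregate > bw else "fits"
    print(f"{slot}: {aggregate} MB/s of FC vs {bw} MB/s -> {verdict}")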

Virtualization is becoming increasingly important. Until recently, all virtual servers on a physical host had to connect to storage through the physical HBA's single worldwide name (WWN). Now that some HBAs support N_Port ID Virtualization (NPIV), you can assign a separate WWN to each virtual server; as a result, each virtual server can access only the volumes specifically assigned to it. NPIV also simplifies the creation of new virtual servers and the migration of existing virtual servers to different physical servers.
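The effect of per-virtual-server WWNs is easy to picture as a mapping. The sketch below is illustrative only; the WWPNs, LUN numbers and virtual machine names are invented, and real LUN masking is configured on the array, not in host code.

# With NPIV, each virtual server presents its own WWPN, so the array can
# mask LUNs per virtual machine rather than per physical HBA.
npiv_wwpns = {
    "vm_web": "21:00:00:1b:32:aa:aa:01",
    "vm_db":  "21:00:00:1b:32:aa:aa:02",
}

# Array-side masking: which LUNs each WWPN (and therefore each VM) may see.
lun_masking = {
    "21:00:00:1b:32:aa:aa:01": [0, 1],
    "21:00:00:1b:32:aa:aa:02": [2, 3, 4],
}

def visible_luns(vm):
    return lun_masking.get(npiv_wwpns[vm], [])

print("vm_web sees LUNs:", visible_luns("vm_web"))
print("vm_db sees LUNs:", visible_luns("vm_db"))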

The most important aspect of designing a well-performing storage system is creating a balanced system that has no bottlenecks along the data path: from HBAs and switches to array host ports and array back-end ports, capacity needs to match the throughput of the system. Although this logic applies to all storage components, it's especially crucial for arrays and switches.
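The balance rule boils down to "the path is only as fast as its slowest stage." The sketch below makes that explicit; the stage names and throughput figures are hypothetical.

# End-to-end throughput is capped by the slowest stage in the data path.
data_path_mbps = {
    "host HBA ports (2 x 4Gb FC)":  800,
    "inter-switch links":          1600,
    "array host ports (2 x 4Gb)":   800,
    "array back-end ports":         600,
}

bottleneck = min(data_path_mbps, key=data_path_mbps.get)
print(f"Bottleneck: {bottleneck} at {data_path_mbps[bottleneck]} MB/s")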

Data enters the array through host ports, passes through the storage controller for processing and reaches the spindles in the disk enclosure through the storage controller's back-end ports. Obviously, the number and speed of host and back-end ports is only one of many factors influencing array performance. The controller's internal bandwidth, cache, number of processors, disk performance, and the type of interconnect between the controller and disk enclosure (switch vs. loop) determine the array's performance, and all of these pieces need to be balanced.
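As a hypothetical sizing check, the snippet below compares the aggregate bandwidth of an array's host ports with what its spindles can deliver; every number is invented for illustration and says nothing about any particular array.

# Is this array front-end or back-end limited? (Illustrative figures only.)
disks           = 60
mb_per_disk     = 50           # rough per-spindle streaming figure
host_ports_mbps = 4 * 400      # four 4Gb FC host ports, ~400 MB/s each

back_end_mbps = disks * mb_per_disk
print("front end:", host_ports_mbps, "MB/s; back end:", back_end_mbps, "MB/s")
if back_end_mbps < host_ports_mbps:
    print("back end limits throughput; add spindles or back-end links")
else:
    print("front end limits throughput; add or upgrade host ports")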

A similar thought process needs to be applied to switches and directors. The throughput of a switch or director is determined by its internal bandwidth, and connected storage devices tap this bandwidth through switch ports. If the number of ports multiplied by the port speed exceeds the switch's internal capacity, the switch is oversubscribed, and the more a switch is oversubscribed, the more performance will suffer. In other words, a switch can only carry traffic up to its internal capacity; if the concurrent aggregate bandwidth of connected devices is greater than that internal bandwidth, some devices will operate at a fraction of full port speed.
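The ratio the paragraph describes can be written down directly. This is just arithmetic on assumed figures, not a model of any specific switch.

def oversubscription_ratio(ports, port_speed_gbps, internal_gbps):
    """Offered load from the ports divided by what the switch can carry."""
    return (ports * port_speed_gbps) / internal_gbps

# Example: 32 ports of 4Gb FC sharing 64 Gbps of internal bandwidth -> 2:1
print(f"{oversubscription_ratio(32, 4, 64):.0f}:1")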

To keep costs under control, most switches and directors operate with some level of oversubscription. When combined with traffic prioritization techniques like quality of service (QoS) and bandwidth reservation, oversubscription is a useful tool for balancing cost against performance requirements. To give storage managers a choice between high port counts and guaranteed performance, switch vendors offer their line cards with varying numbers of ports. For instance, while a 12-port 4Gb FC line card in a Cisco Systems Inc. MDS 9500 series multilayer director will operate at full line rate under any circumstances, a 48-port line card in the same director operates at an oversubscription ratio of 4:1.
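Applying the same arithmetic to the two line cards mentioned above shows what the ratios mean per port. The 48 Gbps per-slot figure is inferred from the stated 4:1 ratio, not quoted from Cisco documentation, so treat it as an assumption.

# Worst-case per-port bandwidth on a 12-port vs. a 48-port 4Gb FC line card,
# assuming each slot can carry about 48 Gbps.
slot_gbps = 48
for ports in (12, 48):
    offered  = ports * 4                  # 4Gb FC on every port
    ratio    = offered / slot_gbps
    per_port = min(4, slot_gbps / ports)  # when every port is saturated
    print(f"{ports}-port card: {ratio:.0f}:1 oversubscription, "
          f"~{per_port:.0f} Gbps per port under full load")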

This was first published in April 2007
