Unsnarl port traffic

Configuring the number of ports on storage arrays and switches shouldn't be a guessing game that results in an excess of ports and a big dent in your budget. To properly size a switch or storage array, you need to analyze the average and peak bandwidth requirements of each device. Monitoring current utilization rates will help you determine effective bandwidth requirements.

You'll need to juggle the right number of ports on your switches and arrays to maximize performance and reduce the management complexity of your applications.


At first glance, it might seem that the number of ports is a relatively insignificant factor when choosing a storage array, switch or host bus adapter (HBA). Application requirements, operating system support, device features, performance and scalability are all factors to consider--the number of ports is almost an afterthought. But the number of ports plays an increasingly important role in the application's availability and performance, as well as the cost of the storage device and how complex it is to manage.

Think of ports as gates into arrays and switches; if these gates are congested, the fastest arrays and switches won't be able to live up to their potential. Therefore, the number of required ports into a storage system is determined by how much traffic the storage system must handle.

Proper sizing of a switch or array begins by analyzing the connected storage devices. The average and peak bandwidth requirements of each device need to be taken into consideration. Let's assume you need to size a storage switch for 25 servers that connect to three storage arrays. The device count dictates the minimum number of ports, which in our example is 28 ports. Depending on the storage bandwidth requirements of the applications running on each server, the aggregate bandwidth to the arrays will likely require multiple trunked connections, increasing the minimum number of ports.

Unfortunately, it's not always easy to accurately project the average and peak performance of all connected devices; the only way to get an accurate assessment is to monitor and analyze utilization in production, which will reveal performance issues. To be able to add extra ports to a congested link or attach additional devices later, it's crucial to include spare ports in the required port count. As a rule of thumb, you should plan for a port count that meets your requirements for the next 12 months. "Typically, we see companies size their infrastructure up to a factor of twice today's needs," says James E. Opfer, research vice president at Gartner Inc., Stamford, CT. "Sizing infrastructure beyond a factor of two is expensive and, in most cases, uneconomical."
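
To make the sizing arithmetic concrete, here's a minimal Python sketch based on the example above. The aggregate bandwidth figure and the growth factor are hypothetical placeholders; substitute measured numbers from your own monitoring data.

```python
# Minimal port-sizing sketch. All inputs are hypothetical; use measured
# average/peak bandwidth from your own environment.

def required_switch_ports(num_servers, num_arrays,
                          aggregate_server_gbps, port_speed_gbps=4,
                          growth_factor=1.5):
    """Estimate the port count for a simple single-switch fabric."""
    server_ports = num_servers  # one connection per server, per fabric
    # Each array needs enough trunked links to carry the aggregate load
    # headed its way, rounded up to whole ports.
    links_per_array = -(-aggregate_server_gbps // (num_arrays * port_speed_gbps))
    array_ports = num_arrays * max(1, links_per_array)
    base = server_ports + array_ports
    # Headroom for the next 12 months; per the Gartner rule of thumb,
    # keep the factor at or below two.
    return int(base * growth_factor)

# The article's example: 25 servers and 3 arrays set a 28-port floor;
# trunking and growth headroom raise the count.
print(required_switch_ports(25, 3, aggregate_server_gbps=30))
```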

Ports and Fibre Channel HBAs
Host bus adapters (HBAs) enable servers to connect to switches and arrays. They're typically PCI, PCI-X or, lately, PCI Express add-in cards with one, two or four Fibre Channel (FC) ports. Servers are connected redundantly to switches or arrays via at least two connections, using Multipath I/O (MPIO). While multiport HBAs enable multipathing through a single HBA, placing the redundant connections on physically separate HBAs eliminates the HBA as a single point of failure.

Multiport HBAs with a port count beyond two, such as QLogic Corp.'s SANblade QLA2344 HBA, face an oversubscription problem similar to that of high-density switch blades. The aggregated bandwidth of four FC ports exceeds the capacity of the host's I/O bus; as a result, all four ports won't be able to operate at full speed simultaneously. PCI Express, with its higher bandwidth, can alleviate if not resolve this issue. High port-count HBAs, however, can be beneficial in virtualized environments with multiple virtual servers running on a single physical server. By assigning instances that run high-performance and low-performance applications to different HBA ports, the impact of low-performance applications on high-performance ones can be minimized.
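
A quick back-of-the-envelope calculation illustrates the mismatch. The bus figures below are nominal, single-direction approximations and vary by implementation:

```python
# Why a quad-port 4Gb/sec FC HBA oversubscribes its host bus.
# Bus bandwidth figures are nominal approximations.

FC_PORT_GBPS = 4                    # nominal 4Gb/sec FC line rate
BUS_GBPS = {
    "PCI-X 133MHz/64-bit": 8.5,     # ~1,064 MB/sec shared bus
    "PCIe 1.0 x8": 16.0,            # ~2 GB/sec per direction
}

hba_aggregate = 4 * FC_PORT_GBPS    # four ports running flat out
for bus, capacity in BUS_GBPS.items():
    ratio = hba_aggregate / capacity
    print(f"{bus}: {hba_aggregate} Gb/sec demand vs {capacity} Gb/sec bus "
          f"-> oversubscription {ratio:.1f}:1")
```

On PCI-X, four ports demand roughly twice what the bus can deliver; a PCI Express x8 slot closes most of that gap, which is why it alleviates the problem.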

Virtualization is becoming increasingly important. Until recently, all virtual servers on a physical host had to connect to storage through a single shared worldwide name (WWN). Now that some HBAs support N_Port ID Virtualization (NPIV), you can assign a different WWN to each virtual server; as a result, each virtual server can access only the volumes specifically assigned to it. Furthermore, NPIV simplifies the creation of new virtual servers and the migration of existing virtual servers to different physical servers.
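
The sketch below illustrates the NPIV concept only; the WWNs, virtual server names and volume names are invented, and real LUN masking is configured on the array and fabric, not in application code.

```python
# Conceptual illustration of NPIV-style access control: each virtual
# server presents its own WWN, and volumes are masked to that WWN.
# All identifiers here are made up for the example.

npiv_wwn = {
    "vm-db01":  "21:00:00:e0:8b:00:00:01",
    "vm-web01": "21:00:00:e0:8b:00:00:02",
}

lun_masking = {
    "21:00:00:e0:8b:00:00:01": {"db_data", "db_logs"},
    "21:00:00:e0:8b:00:00:02": {"web_content"},
}

def can_access(vm, volume):
    """A virtual server sees only the volumes masked to its own WWN."""
    return volume in lun_masking.get(npiv_wwn.get(vm, ""), set())

print(can_access("vm-web01", "db_data"))  # False: masked to another WWN
print(can_access("vm-db01", "db_logs"))   # True
```

Because the WWN travels with the virtual server, masking and zoning don't have to change when the server migrates to a different physical host.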

The most important aspect of designing a well-performing storage system is creating a balanced system with no bottlenecks along the data path: from HBAs and switches to array host ports and array back-end ports, capacity needs to match the throughput of the system. Although this logic applies to all storage components, it's especially crucial for arrays and switches.

Data enters the array through host ports, passes through the storage controller for storage processing and reaches the spindles in the disk enclosure through the storage controller's back-end ports. Obviously, the number and speed of host and back-end ports is only one of many factors influencing array performance. The controller's internal bandwidth, cache, number of processors, disk performance, and the type of interconnect between the controller and disk enclosure--switch vs. loop--determine the array's performance, and all of these pieces need to be balanced.

A similar thought process needs to be applied to switches and directors. The throughput of a switch or director is determined by its internal bandwidth, and connected storage devices tap this bandwidth through switch ports. If the number of ports multiplied by the port speed is greater than the internal capacity of the switch, the switch is oversubscribed; the more a switch is oversubscribed, the more performance will suffer. In other words, a switch can only operate up to its internal capacity; if the concurrent aggregate bandwidth of connected devices exceeds that capacity, some devices will operate at a fraction of full port speed.

To keep costs under control, most switches and directors operate with some level of oversubscription. When combined with traffic prioritization techniques like quality of service (QoS) and bandwidth reservation, oversubscription is a great tool to balance cost with performance requirements. To give storage managers a choice between high port counts and guaranteed performance, switch vendors offer their line cards with varying numbers of ports. For instance, while a 12-port 4Gb/sec FC line card in a Cisco Systems Inc. MDS 9500 series multilayer director will operate at full line rate under any circumstances, a 48-port line card in the same director operates at an oversubscription ratio of 4:1.
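
The ratio itself is simple arithmetic: demanded port bandwidth divided by the backplane bandwidth allocated to the card. The sketch below assumes the 48-port card receives the same backplane allocation as the line-rate 12-port card, which is what produces the 4:1 figure.

```python
# Oversubscription ratio of a line card: bandwidth the ports can demand
# divided by the backplane bandwidth the card is allocated.

def oversubscription(ports, port_gbps, backplane_gbps):
    return (ports * port_gbps) / backplane_gbps

backplane = 12 * 4  # a 12-port 4Gb/sec card running at full line rate
print(oversubscription(12, 4, backplane))  # 1.0 -> full line rate
print(oversubscription(48, 4, backplane))  # 4.0 -> the 4:1 ratio cited above
```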

Midsized arrays
Choosing the right array is a complex process that's largely determined by application and scalability needs. One of the key decisions you'll face is whether you need the performance, features, modularity and scalability of a high-end array or whether a midsized array suffices. Midsized arrays are typically dual-controller arrays with four to eight host ports and two to eight back-end ports.

While a single back-end port per controller is sufficient for entry-level arrays to redundantly attach disk enclosures, two host ports per controller are imperative to enable cluster-type configurations with two servers or switches attached to each controller.

In addition to the number of ports and port speed, arrays differ in how the back-end ports are connected to the disk enclosure. Vendors have been transitioning from an arbitrated loop design with shared bandwidth to a switched or point-to-point-type architecture to connect disks to controllers. Switched connectivity not only provides higher performance than a loop, "it also simplifies dealing with individual disk drive failures," says Craig Butler, manager of disk, SAN and NAS product marketing at IBM Corp. "That's why all our enclosures have a switched back end, all the way from the storage controller to the enclosure and drives."

For backward-compatibility reasons, some arrays ship with a hybrid loop/switched architecture. For instance, EMC Corp.'s recently released Clariion CX3 UltraScale series arrays attach the disk enclosure via an arbitrated loop, but the disks within the enclosure are connected via a switched connection that enables the UltraPoint enclosures to work with pre-CX3 UltraScale series arrays. Similarly, the Hewlett-Packard (HP) Co. StorageWorks Enterprise Virtual Array (EVA) family deploys a hybrid interconnect to the spindles.

"We use a point-to-point connection between the controllers and the disk enclosure; but within the enclosure itself, the drives are connected via an arbitrated loop," reports Kyle Fitze, director SAN marketing, HP's StorageWorks Division.

While midsized arrays from EMC, HP and IBM have a relatively fixed number of host ports--from four to eight, depending on the model--and a relatively fixed back-end port count (two to eight), these arrays don't allow you to mix host-side and back-end ports. Network Appliance Inc. arrays, on the other hand, let you configure each available array port as target (host port) or initiator (back-end port), thereby giving storage managers the option to designate any number of ports as host or back-end ports.

Performance Tuning
Determining when to move to the next level is never easy, and it's no different with storage arrays and switches. If money didn't matter, oversizing the infrastructure could be one approach, but tight IT budgets and a mandate to do more with less make this a no-go for most environments. Fortunately, there's a systematic approach to right-sizing your storage infrastructure, and it starts with planning, designing and architecting your storage landscape. An analysis of the hosts/servers and applications that connect to arrays and switches will provide the information you need to determine the average and peak bandwidth each server requires; this, in turn, will enable you to determine the number of ports and the bandwidth per port your switches and arrays need to provide.

This analysis must identify which servers can accept some level of congestion and which absolutely need to operate at full line speed. Armed with this data, you can assign critical as well as storage-intensive servers like database servers, transactional application servers and backup servers to full line-rate ports, and have less storage-dependent servers like DNS, fax and other auxiliary application servers leverage oversubscribed ports. The more you take advantage of oversubscription, the more difficult this exercise becomes.
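
Here's a sketch of that tiering exercise. The server names are invented, and the classification rule (anything critical, or peaking above half of a 4Gb/sec link, gets a line-rate port) is an assumption; use whatever criteria your environment dictates.

```python
# Split servers between guaranteed line-rate ports and oversubscribed ones.
# Sample inventory: name -> (peak Gb/sec on a 4Gb/sec link, mission critical?)
servers = {
    "oracle-prod": (3.6, True),
    "backup01":    (3.9, True),
    "dns01":       (0.10, False),
    "fax-gw":      (0.05, False),
}

LINK_GBPS = 4
line_rate, oversubscribed = [], []
for name, (peak_gbps, critical) in servers.items():
    # Critical or bandwidth-hungry servers get guaranteed line-rate ports.
    if critical or peak_gbps > 0.5 * LINK_GBPS:
        line_rate.append(name)
    else:
        oversubscribed.append(name)

print("line-rate ports:", line_rate)
print("oversubscribed ports:", oversubscribed)
```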

Once the storage system is in production, the single most important tool for tuning or adding ports and capacity is performance monitoring. Nothing beats actual performance, utilization and latency data to isolate and remedy bottlenecks. For oversubscribed ports, the monitoring data will provide a clear picture of how often a port reaches maximum utilization. You'll be able to identify the servers causing these spikes, and the I/O data will help you decide whether to move a server to a different port. User feedback is a crucial element, especially in environments with oversubscription. A storage manager may deem it acceptable for a port to reach maximum utilization a few times a day, but it may not be acceptable to a business user performing a critical task.
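
A minimal sketch of that analysis, assuming you can export per-port utilization samples from your monitoring tool (the sample data below is fabricated):

```python
# Flag ports that repeatedly hit maximum utilization.
SATURATED = 0.95  # treat >=95% of line rate as "at maximum"

# port -> utilization samples as fractions of line rate (e.g., 5-minute polls)
samples = {
    "fc1/1": [0.42, 0.97, 0.99, 0.51, 0.96],
    "fc1/2": [0.10, 0.12, 0.08, 0.11, 0.09],
}

for port, utilization in samples.items():
    hits = sum(1 for u in utilization if u >= SATURATED)
    if hits:
        print(f"{port}: at maximum in {hits} of {len(utilization)} samples; "
              f"identify the attached server and consider a line-rate port")
```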

High-end arrays
For high-end arrays, the guiding principles are performance and scalability and, naturally, complexity increases. High-end arrays are very modular, and the number of host ports and back-end ports is determined by business requirements. Ports are added on an as-needed basis. For instance, if you need additional disk enclosures, you simply add back-end disk directors or cards to support the additional spindles. Similarly, if you need more ports and bandwidth, you just add front-end host directors or cards.

Unlike with midsized arrays, the equilibrium of a balanced high-end system can easily be broken. For instance, if you add host ports without beefing up the back end with more ports and controllers, the array back end will be unable to keep up with the increased host load. Performance and utilization monitoring are recommended for arrays of any size, but they're imperative to ensure that high-end arrays stay tuned. Some of the elements to monitor include front-end host ports (fan-out ratios and host traffic), the cache subsystem, back-end disk processors, back-end I/O paths, and array or RAID processors.
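
A simple front-end vs. back-end balance check might look like the following sketch; all port counts and speeds are hypothetical placeholders.

```python
# Balance check for a high-end array: host-facing bandwidth shouldn't
# outrun what the back end can deliver. Figures are hypothetical.
host_ports, host_port_gbps = 32, 4        # front-end host directors
backend_ports, backend_port_gbps = 16, 4  # back-end disk directors
attached_hosts = 128

front_end = host_ports * host_port_gbps
back_end = backend_ports * backend_port_gbps
print(f"fan-out ratio: {attached_hosts / host_ports:.1f} hosts per host port")
print(f"front end {front_end} Gb/sec vs back end {back_end} Gb/sec")
if front_end > back_end:
    print("front end can outrun the back end -> add back-end directors/ports")
```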

High-end arrays can easily scale beyond 100 ports. To support this level of scalability, array vendors have implemented advanced array architectures. EMC's Direct Matrix Architecture in its Symmetrix array family employs a point-to-point design that directly connects front-end channels with array cache and back-end channels, eliminating any elements that could introduce delay. Array capacity is scaled by simply adding channel directors for host communication, back-end disk directors (two to eight, depending on the model, with eight ports per director), and global memory directors for I/O delivery from hosts to disk directors.

Unlike EMC, the Hitachi Data Systems (HDS) Corp. TagmaStore Universal Storage Platform (USP) implements a parallel crossbar switch with its Universal Star Network architecture, connecting front-end ports and back-end disk controllers to cache. Similar to the EMC Symmetrix array, port count is scaled by simply adding front-end directors and back-end directors, supporting up to 192 FC ports, 96 ESCON ports or 96 FICON ports.

IBM's TotalStorage DS8000 array family--formerly known as Shark storage servers--takes yet another architectural approach, leveraging clusters of IBM System p servers that act as storage controllers. Two (DS8100) or four (DS8300) of these systems are connected via a high-speed internal bus (RIO-G), supporting a total of 64 (DS8100) or 128 (DS8300) ports.

Switches
The first question to ask yourself when choosing a switch is whether you need a modular switch that lets you add ports simply by adding line cards, or whether a fixed-port, nonmodular switch will suffice.

Nonmodular switches are available from vendors like Brocade Communications Systems Inc., Cisco and QLogic Corp., and port counts typically range from eight to 64 ports. All vendors, including Cisco with its MDS 9124 Multilayer Fabric Switch, now support ports on demand, which enables you to buy a fraction of the total number of available ports and simply activate the remaining ports by purchasing a license when additional ports are needed.

If the port count of a single nonmodular switch isn't high enough, multiple nonmodular switches can be cascaded using single or multiple trunked inter-switch link (ISL) connections. QLogic, with its SANbox 5200 and 5600 switches, made cascading a more acceptable option by offering a 10Gb/sec interconnect between switches, eliminating the performance and management issues of switches linked via standard ISLs.
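
Keep in mind that cascading has a port cost of its own: every ISL consumes a port on both switches it joins. A small sketch of the arithmetic, assuming a simple daisy chain with parallel trunked ISLs between neighbors:

```python
# Usable ports left over after cascading fixed-port switches in a chain.
def usable_ports(switches, ports_per_switch, trunk_width):
    isl_ports = (switches - 1) * trunk_width * 2  # both ends of each ISL
    return switches * ports_per_switch - isl_ports

# Three 32-port switches with 2-way trunks between neighbors:
print(usable_ports(3, 32, 2))  # 96 total ports - 8 ISL ports = 88 usable
```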

Furthermore, QLogic's recently released SANbox 9000 series stackable chassis switch connects to QLogic's modular switches via a 10Gb/sec high-speed FC link, providing a smooth transition path from nonmodular switches to director-level switches. The SANbox 9000 series stackable chassis switch differs from traditional directors from Brocade, Cisco and McData Corp. (recently acquired by Brocade) by limiting the total number of ports to 128, offering only nonblocking 16-port 4Gb/sec blades and four-port 10Gb/sec blades.

It's easy to add ports to a modular, chassis-based switch: Simply add line cards. Both Cisco and Brocade offer line cards with two, three or even four times (Cisco) the number of ports that originally came with their switch or director. Higher port-count line cards have a lower price per port, but they also have a higher level of oversubscription that can result in a less-predictable storage performance. "If you deal with oversubscription, you have to be mindful where you use it and you may need monitoring to deal with it," says Mario Blandini, Brocade's director of product marketing. "The use of oversubscription definitely makes storage management more complex, but at the same time, it lowers cost," he says.

High port-count line cards and oversubscription can also make cable management more challenging, and some of the existing racks may not be able to deal with it. "Large density can become a big problem, and if your cable management system isn't able to deal with it, you may end up just using every other port," says Blandini.

[Sidebar: Port strategy comparison]

High availability
One strategy in designing redundant storage systems is to connect each storage node to the next via dual or multiple paths. This is accomplished by connecting servers and storage arrays to separate fabrics. For the highest level of availability, the redundant ports on each storage node typically originate from physically separate components, eliminating downtime due to component failure. For storage arrays, this means having primary and secondary ports on separate controllers; in the case of switches and directors, it's achieved by putting ports in the same multipath group on physically or virtually separate fabrics; on the host side, ports on separate HBAs provide higher availability than ports in the same multipath group on a single multiport HBA (see "Ports and Fibre Channel HBAs").
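
A redundancy check along these lines can be automated. The path records below are illustrative; in practice they would come from your multipath software's inventory.

```python
# Verify that each host's multipath group spans separate HBAs, fabrics and
# array controllers, so no single component failure severs every path.
paths = {
    "host01": [
        {"hba": "hba0", "fabric": "A", "array_ctrl": "SPA"},
        {"hba": "hba1", "fabric": "B", "array_ctrl": "SPB"},
    ],
}

for host, host_paths in paths.items():
    for component in ("hba", "fabric", "array_ctrl"):
        if len({p[component] for p in host_paths}) < 2:
            print(f"{host}: all paths share one {component} "
                  f"-> single point of failure")
            break
    else:
        print(f"{host}: paths are fully redundant")
```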

A redundant storage design doubles the number of required ports and significantly increases storage costs. A mandate to lower storage costs without compromising availability has caused some storage managers to reduce the number of ports by eliminating storage switches, moving from a networked storage architecture to a more directly connected design. "With a dual-controller Clariion AX150 priced at about $10,000 and FC switches priced at about $5,000, being able to build a highly available environment without switches can literally cut the cost in half," says Jay Krone, director of Clariion platforms at EMC. However, a directly connected architecture isn't the norm for large storage environments; the more servers you need to connect to an array, the stronger the case for SAN switches.
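
Using the list prices quoted above, the arithmetic is straightforward (HBAs, cabling and other costs are assumed equal and omitted):

```python
# Cost of a redundant direct-attach design vs. a dual-fabric switched design,
# using the rough list prices cited above.
array = 10_000   # dual-controller Clariion AX150
switch = 5_000   # per FC switch; two needed for redundant fabrics

direct_attach = array
switched = array + 2 * switch
print(f"direct-attach: ${direct_attach:,} vs switched: ${switched:,}")
# -> $10,000 vs $20,000: dropping the switches halves the cost
```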

Ports aren't necessarily the primary concern when buying an array or switch, but they play an instrumental role in building a balanced storage system (see "Port strategy comparison," this page). As a general rule, the number of required ports is determined by the amount of data per second a storage system needs to handle; the system will only perform well if its port bandwidth is in balance with its internal capacity.

This was first published in April 2007