Integrating iSCSI and FC storage

There are five basic ways to bring iSCSI to a Fibre Channel SAN, each with its own benefits, shortcomings and "best-fit" environments.

Until recently, iSCSI was deployed mostly by small and midsized firms migrating from direct-attached to networked storage; large organizations remained loyal to more mature and proven Fibre Channel (FC) SANs. But the technology's low cost and simplicity have an increasing number of Fortune 1000 firms complementing their FC infrastructure with iSCSI SANs for data protection and Tier-2 storage.

Marrying iSCSI to FC isn't without its challenges. Bringing iSCSI into existing FC SANs not only raises integration issues, it leads to a somewhat more complex storage infrastructure that requires IP and FC knowledge, as well as the ability to manage and troubleshoot a multiprotocol storage environment. Storage architects attempting to go down this path must be familiar with iSCSI design options as they affect performance, security and availability.

Performance
Some of the challenges of iSCSI stem from the nature of the IP protocol. While FC is optimized for networks dedicated to connecting servers to arrays, IP-based iSCSI SANs may have to compete with nonstorage IP traffic. To minimize the impact of disruptive IP traffic, data center managers isolate iSCSI traffic from nonstorage traffic, either on dedicated iSCSI networks that have no physical connection to the rest of the network or through Ethernet isolation techniques like access control lists (ACLs) and virtual LANs (VLANs). "To avoid interferences from the in-house LAN, we decided to run our iSCSI network on a standalone Foundry Networks Inc. FastIron 48-port Ethernet switch," reports Kevin Mount, senior network administrator at Spokane Public Schools in Washington.
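
For shops that opt for VLAN isolation rather than a physically separate switch, the host side of the setup can be as simple as a tagged VLAN interface reserved for storage traffic. The following is a minimal sketch for a Linux host using the standard iproute2 tools; the interface name, VLAN ID and subnet are illustrative assumptions, not details taken from the environments described here.

    # vlan_iscsi.py -- sketch: carve out a dedicated iSCSI VLAN on a Linux host.
    # eth1, VLAN 100 and the 10.10.100.0/24 subnet are illustrative values.
    import subprocess

    def run(cmd):
        print("+", " ".join(cmd))
        subprocess.run(cmd, check=True)

    # Create a tagged VLAN subinterface reserved for iSCSI traffic.
    run(["ip", "link", "add", "link", "eth1", "name", "eth1.100",
         "type", "vlan", "id", "100"])
    # Address it on the storage-only subnet and bring it up.
    run(["ip", "addr", "add", "10.10.100.21/24", "dev", "eth1.100"])
    run(["ip", "link", "set", "eth1.100", "up"])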

Physical and virtual segregation greatly enhance security and performance, but storage managers may still need to resort to advanced Ethernet techniques, such as jumbo frames and flow control on network switches and adapters, to alleviate congestion and optimize throughput. Where the bandwidth of a single gigabit link doesn't suffice, multiple links can be combined into a single aggregated link through Ethernet trunking or link aggregation, sidestepping the need to deploy an expensive 10Gb Ethernet infrastructure to overcome bandwidth limitations.
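
On a Linux host, jumbo frames and flow control come down to one-line settings, but they only pay off if every switch and adapter in the iSCSI path supports them end to end. A brief sketch, reusing the hypothetical eth1/eth1.100 interfaces from the previous example (802.3x pause support varies by NIC driver):

    # tune_iscsi_net.py -- sketch: enable jumbo frames and Ethernet flow control
    # on the iSCSI interfaces. MTU 9000 only helps if every hop supports it.
    import subprocess

    def run(cmd):
        subprocess.run(cmd, check=True)

    run(["ip", "link", "set", "dev", "eth1", "mtu", "9000"])      # physical NIC
    run(["ip", "link", "set", "dev", "eth1.100", "mtu", "9000"])  # iSCSI VLAN
    # Ask the NIC to honor and send 802.3x pause frames (driver permitting).
    run(["ethtool", "-A", "eth1", "rx", "on", "tx", "on"])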

On the host side, TCP offload engines (TOEs) and iSCSI HBAs can save valuable CPU cycles, especially on slower or performance-critical application servers. TCP and iSCSI overhead on 1Gb/sec connections is less than 10% on state-of-the-art server hardware, and 85% of today's iSCSI deployments use plain software iSCSI initiators, according to David Dale, chairman of the SNIA IP Storage Forum; but TOEs and iSCSI HBAs will play a much larger role once 10Gb iSCSI becomes more prevalent. Besides the I/O performance boost, iSCSI HBAs come with added services like the ability to boot from a SAN and encryption.
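
Whether an offload card is worth the money comes down to how many CPU cycles the software initiator actually consumes under a representative workload. One rough way to check on a Linux host is to sample /proc/stat while an I/O job runs against the iSCSI LUN; this sketch assumes nothing beyond a Linux /proc filesystem:

    # cpu_sample.py -- rough sketch: measure overall CPU utilization while an
    # iSCSI benchmark runs, to judge whether a TOE or iSCSI HBA would pay off.
    import time

    def cpu_times():
        with open("/proc/stat") as f:
            values = list(map(int, f.readline().split()[1:]))
        not_busy = values[3] + values[4]   # idle + iowait
        return not_busy, sum(values)

    idle0, total0 = cpu_times()
    time.sleep(30)                         # run the I/O benchmark meanwhile
    idle1, total1 = cpu_times()
    busy = 1.0 - (idle1 - idle0) / (total1 - total0)
    print(f"CPU utilization during test: {busy:.1%}")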

In mixed-protocol environments, storage managers need to be aware of Ethernet oddities like erroneous speed/mode autonegotiation between Ethernet switches and network interface cards (NICs), which can have a detrimental impact on iSCSI network performance. "To eliminate the possibility of autonegotiation problems, we hardcode all switch and server Ethernet ports," says Mount.
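
One wrinkle worth noting: 1000BASE-T requires autonegotiation for master/slave clock resolution, so on gigabit copper, "hardcoding" a port in practice means advertising only the single desired mode rather than switching autonegotiation off outright. A sketch with ethtool on Linux; the interface name is assumed, and 0x020 is ethtool's bitmask for 1000baseT/Full:

    # pin_link_mode.py -- sketch: eliminate speed/duplex surprises on the iSCSI
    # NIC by advertising only 1000baseT/Full. The matching switch port must be
    # pinned identically, or you trade one mismatch for another.
    import subprocess

    subprocess.run(["ethtool", "-s", "eth1", "autoneg", "on",
                    "advertise", "0x020"], check=True)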

Security
The different approaches iSCSI and FC take to secure storage access are probably the biggest hurdle multiprotocol storage architects have to deal with. While FC leverages FC switches for zoning, arrays for LUN presentation and host identification through worldwide names, iSCSI secures storage access through a combination of the aforementioned physical and virtual isolation of the iSCSI network, as well as access restrictions by IP address, initiator-target name, and one-way or mutual CHAP authentication.

Although the multiple iSCSI authentication options may seem confusing, there's a simple rule of thumb: For isolated IP-based iSCSI networks, initiator-target name authentication will typically suffice. Where the iSCSI network is physically connected to the LAN, the stronger CHAP authentication should be deployed, removing the threat of spoofed IP addresses accessing iSCSI LUNs. In environments with a large number of iSCSI devices, central authentication via a RADIUS server eliminates the need to manage user credentials on each iSCSI target.
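
As an illustration of the CHAP case, here's how the configuration looks with the open-source open-iscsi initiator found on most Linux distributions; the target IQN, portal address and credentials are placeholders:

    # chap_login.py -- sketch: enable one-way CHAP on an open-iscsi node record,
    # then log in. The IQN, portal and credentials below are placeholders.
    import subprocess

    TARGET = "iqn.2001-05.com.example:array01"
    PORTAL = "10.10.100.50:3260"

    def iscsiadm(*args):
        subprocess.run(["iscsiadm", "-m", "node", "-T", TARGET,
                        "-p", PORTAL, *args], check=True)

    iscsiadm("-o", "update", "-n", "node.session.auth.authmethod", "-v", "CHAP")
    iscsiadm("-o", "update", "-n", "node.session.auth.username", "-v", "host01")
    iscsiadm("-o", "update", "-n", "node.session.auth.password", "-v", "s3cretCHAPkey0")
    iscsiadm("--login")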

Mike Layton, director of enterprise services and information systems at Baylor College of Medicine in Houston, applied a similar strategy when he opted for initiator-target name-based iSCSI authorization for a small number of servers accessing his Hitachi Data Systems (HDS) Corp.-based FC SAN via a Network Appliance (NetApp) Inc. FAS980c gateway over an isolated iSCSI LAN.

One of the big benefits of iSCSI over FC is IP's native support for IPsec encryption, which should be used whenever IP traffic may fall into the wrong hands. But the overhead of IPsec encryption on a busy server can be significant. In environments with IPsec turned on, servers and bandwidth-hungry desktops should be equipped with iSCSI HBAs or NICs with hardware encryption, available from companies like Cavium Networks Inc. At the network level, encryption appliances from companies such as Decru Inc. (now a NetApp company) and NeoScale Systems Inc. sit in the data path and encrypt FC and iSCSI data before it reaches the storage arrays.
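
For the common case of software IPsec between a Linux initiator and its target, a transport-mode ESP policy is enough to encrypt the iSCSI session. The sketch below drives the ipsec-tools setkey utility with manual keying purely for illustration; the addresses and keys are placeholders, and a production deployment would negotiate keys via IKE instead:

    # ipsec_iscsi.py -- sketch: ESP transport-mode policy between an iSCSI
    # initiator (10.10.100.21) and target (10.10.100.50). Manual keying is for
    # illustration only; use IKE in practice and keep keys out of scripts.
    import subprocess

    POLICY = """
    flush;
    spdflush;
    add 10.10.100.21 10.10.100.50 esp 0x201 -E aes-cbc "0123456789abcdef";
    add 10.10.100.50 10.10.100.21 esp 0x301 -E aes-cbc "fedcba9876543210";
    spdadd 10.10.100.21 10.10.100.50 any -P out ipsec esp/transport//require;
    spdadd 10.10.100.50 10.10.100.21 any -P in ipsec esp/transport//require;
    """

    subprocess.run(["setkey", "-c"], input=POLICY.encode(), check=True)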

Storage management interfaces are highly vulnerable to security lapses, yet they're often neglected during storage design. Using default passwords, a single password for all storage devices or passwords that are never changed puts an otherwise well-designed SAN at risk. Making management interfaces accessible only from designated systems and VLANs, enforcing strong password policies and, in environments with a large number of IP devices, using a centralized RADIUS authentication server all reduce the risk of unauthorized management changes.

Availability
Availability is the most important requirement for any SAN, including iSCSI SANs, and it needs to be architected at the server, network and array levels. At the network level, redundancy is achieved by deploying switches in pairs and leveraging Ethernet failover techniques like spanning tree and dynamic routing. At the server level, high availability is achieved by dual-connecting servers to Ethernet switches; since the 2.0 release of Microsoft Corp.'s iSCSI Initiator in 2005, multipath I/O (MPIO) has enabled iSCSI hosts to connect redundantly to the iSCSI network.

"All our hosts run MPIO and are dual-connected into the iSCSI SAN," explains Kuljit Dharni, director of infrastructure at Babson College, Wellesley, MA. "While one of the onboard NICs is used for regular LAN access, the second onboard Ethernet port and an additional Intel PCI gigabit network adapter connect to the iSCSI SAN."

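On the Linux side, the equivalent of that setup boils down to opening a session to the same target through each storage port and letting the multipath layer coalesce the paths into a single device. A minimal sketch with open-iscsi and device-mapper multipath; the IQN and portal addresses are placeholders:

    # dual_path_login.py -- sketch: establish two iSCSI sessions to one target
    # through separate portals, MPIO-style; dm-multipath (or Microsoft MPIO on
    # Windows) then presents the two paths as a single redundant device.
    import subprocess

    TARGET = "iqn.2001-05.com.example:array01"
    PORTALS = ["10.10.100.50:3260", "10.10.101.50:3260"]  # one per switch/VLAN

    for portal in PORTALS:
        subprocess.run(["iscsiadm", "-m", "node", "-T", TARGET,
                        "-p", portal, "--login"], check=True)

    # Confirm both paths are visible to the multipath layer.
    subprocess.run(["multipath", "-ll"], check=True)
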
Redundancy options in iSCSI targets vary by vendor and product type. iSCSI gateway appliances, intelligent storage switches and server-based iSCSI targets are typically available in cluster configurations in which two devices run in active-active or active-passive mode. Some midrange storage arrays with iSCSI support, such as EMC Corp.'s Clariion CX3-20 and CX3-40, provide redundancy via dual-controller architectures. High-end arrays like the EMC Symmetrix DMX family are chassis-based, and redundancy is obtained by simply adding multiple iSCSI blades.

iSCSI integration options
iSCSI can be integrated into an existing FC SAN at various levels and to varying degrees, depending on the existing storage environment and integration goal. At one end of the spectrum are storage architects like Baylor College's Layton, whose main goal is to access all existing FC storage via iSCSI. At the other end are storage managers like Dharni, who have completely migrated to iSCSI and eliminated their FC SANs. Another option is to run FC and iSCSI SANs side by side, the approach taken by Spokane Public Schools' Mount and by Dan Schneidemantle, data systems manager at Logs Financial Services Inc., Northbrook, IL.

In the latter case, the key design consideration is the extent of the iSCSI-FC SAN integration. In many mixed-protocol environments, storage architects deploy an additional iSCSI SAN that runs in parallel to the existing FC infrastructure, each managed separately, to avoid complex integration issues. "Most of our customers run the iSCSI SAN separate from their FC SAN," says Eric Schott, director of product management at EqualLogic Inc. "Mapping iSCSI LUNs and FC LUNs, which employ very different security models, isn't a trivial task, and to many storage architects the integration benefits aren't worth the added complexity."

Unified management of iSCSI and FC SANs is doable with multiprotocol storage arrays from a single vendor. EMC, HDS, Hewlett-Packard (HP) Co. and NetApp all offer multiprotocol arrays that manage iSCSI, FC and, in some cases, NAS under a single umbrella. Unified storage management can also be accomplished with iSCSI virtualization products from companies such as FalconStor Software Inc., NetApp and Sanrad Inc.

Storage managers who need to bring iSCSI into an existing SAN have several integration options to consider:

  • iSCSI gateways
  • FC switches and directors with iSCSI support
  • Intelligent storage switches and gateways
  • Array-based iSCSI integration
  • Server-based iSCSI integration

iSCSI gateways
iSCSI gateways perform protocol translation from iSCSI to FC and vice versa. They typically come with at least two FC ports to connect to back-end FC storage and a minimum of two 1Gb Ethernet ports to provide IP connectivity to servers. iSCSI gateways present FC LUNs as iSCSI targets, making FC storage accessible over IP and obviating the need for FC HBAs in servers. Examples of these gateways include Brocade Communications Systems Inc.'s iSCSI Gateway, Cisco Systems Inc.'s MDS 9216i, Emulex Corp.'s 725/735 iSCSI Storage Routers and QLogic Corp.'s SANbox 6140 Intelligent Storage Router.

Storage provisioning in SANs with iSCSI gateways is a two-step process: First, FC storage admins provision LUNs to the iSCSI gateway; then iSCSI access to those FC LUNs is restricted on the gateway to defined IP addresses or initiator-target names, or via CHAP credentials. Once this is accomplished and the iSCSI initiators have been properly configured on the clients, hosts see their assigned storage as local disk drives.
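
From the server side, the second step reduces to discovering the gateway's iSCSI portal and logging in. A short sketch with the open-iscsi initiator on Linux; the gateway's portal address is a placeholder:

    # gateway_login.py -- sketch: discover the iSCSI targets an iSCSI gateway
    # presents (sendtargets) and log in to each. The gateway IP is a placeholder;
    # the gateway-side LUN mapping and access rules happen outside this script.
    import subprocess

    GATEWAY_PORTAL = "10.10.100.60:3260"

    subprocess.run(["iscsiadm", "-m", "discovery", "-t", "sendtargets",
                    "-p", GATEWAY_PORTAL], check=True)
    subprocess.run(["iscsiadm", "-m", "node", "--loginall=all"], check=True)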

The main benefit of iSCSI gateways is their simplicity and unobtrusiveness when added to an existing FC SAN. They don't require architectural changes to the FC network and allow servers to access FC storage simply by configuring iSCSI initiator software. But iSCSI gateways aren't cheap; street prices start at approximately $10,000 for a single iSCSI gateway appliance. With FC HBAs costing at least $400 for Windows and Linux systems, the cost-benefit analysis is straightforward: An iSCSI gateway can only be justified if a relatively large number of servers use it. For a small number of servers, it's more economical to add FC HBAs and connect the servers directly to the FC SAN. "You need about 100 servers before you see a real price-performance benefit of iSCSI gateways," says Mario Blandini, Brocade's director of product marketing.
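
Using the figures quoted here, the raw hardware break-even is easy to sanity check; Blandini's 100-server estimate clearly folds in performance and operational costs well beyond this simple division:

    # breakeven.py -- back-of-envelope check using the street prices above:
    # a ~$10,000 gateway versus ~$400 per FC HBA (one HBA per server, no
    # redundancy). Real sizing also weighs throughput and management overhead.
    GATEWAY_COST = 10000
    FC_HBA_COST = 400

    print(f"Hardware-only break-even: {GATEWAY_COST / FC_HBA_COST:.0f} servers")  # 25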

FC switches with iSCSI support
If you run high-end FC switches and directors, there's a good chance your switch vendor offers an iSCSI gateway as a blade option, eliminating the need to deploy standalone gateway appliances. By putting the iSCSI gateway into an FC director, you'll get a single management console and inherit all the redundancy and performance benefits of a director-class switch.

Brocade offers the SilkWorm FC4-16IP iSCSI blade for its SilkWorm 48000 Director with eight 4Gb/sec FC ports and eight 1Gb Ethernet ports with an aggregate throughput capacity of 64Gb/sec. Similarly, Cisco provides the IP Storage Services Module (eight 1Gb Ethernet ports) and the Multiprotocol Services Module (14 2Gb/sec FC ports and two 1Gb Ethernet ports) for its MDS 9200 Series Multilayer Fabric Switches and MDS 9500 Series Multilayer Directors.

By putting the iSCSI protocol in the FC switch, vendors can add intelligence and features not found in standalone iSCSI gateway appliances. For instance, Cisco's support of the Virtual Router Redundancy Protocol (VRRP) enables storage architects to configure alternate paths for Ethernet connections into iSCSI blades, enabling iSCSI sessions to resume on a standby Ethernet port if the primary port becomes unavailable. Cisco's iSCSI Server Load Balancing (iSLB) allows storage managers to configure all servers with a single iSCSI target-portal IP address, delegating the actual Ethernet port assignment to the switch. Letting the switch assign Ethernet ports to iSCSI clients not only simplifies storage management, it enables highly redundant iSCSI network designs by allowing the iSLB feature to automatically reassign defunct ports through VRRP.

Intelligent storage switches and gateways
While iSCSI gateways are limited to protocol translation, intelligent storage switches and gateways add storage services like virtualization, snapshots, replication and mirroring. In other words, they're very similar to multiprotocol storage arrays, except that they don't have storage attached.

These devices let storage architects aggregate existing storage, add multiprotocol support (including iSCSI), and provide virtualization and storage management under a single management console. NetApp's V-Series gateways are the most prominent product line in this category. V-Series gateways are NetApp storage controllers without storage attached, providing iSCSI, FC and NAS connectivity to a central storage pool. "We front-ended our HDS Fibre Channel SAN with a NetApp [FAS] 980c gateway to add both iSCSI and NAS capabilities to our SAN," says Baylor College's Layton.

Sanrad's iSCSI V-Switch connects to a blend of SCSI, FC and iSCSI arrays on the back end, and provides iSCSI connectivity to servers on the front end. Similar to the NetApp V-Series gateway, the V-Switch storage service platform supports volume virtualization, provisioning, mirroring, snapshots and replication. "The Sanrad V-Switch not only provides iSCSI connectivity to our Xiotech [Corp.] FC array, it also enables us to manage the FC storage and SCSI-attached SATA storage under a single umbrella," says Spokane Public Schools' Mount.

But this approach may have its drawbacks. "Virtualization in a switch with no storage attached is quite expensive because you have to pay for the switch and you need to get the storage," says Pete Caviness, solution marketing manager at LeftHand Networks Inc.

Array-based iSCSI integration
Large storage array vendors like EMC, HDS, HP and NetApp tout their arrays as the best place for iSCSI-FC integration. Leveraging multiprotocol arrays is especially attractive for companies that have standardized their infrastructure on one of these vendors, as it provides a single management point for iSCSI and FC. "[EMC] Navisphere Management Suite transparently manages storage for both iSCSI and FC, all the way from provisioning to setting up replication," says Peter Lavache, senior manager of storage products marketing at EMC.

Multiprotocol arrays eliminate the need for iSCSI bridges or gateways, allowing hosts to connect to storage arrays directly via iSCSI. But not all FC arrays sold by these vendors are iSCSI enabled; iSCSI support depends largely on array type and age. NetApp has been an iSCSI proponent for some time, and all of its arrays have supported iSCSI since 2003; today, all NetApp boxes ship with iSCSI as a default protocol. Other vendors have taken a more cautious approach. EMC has supported iSCSI in its high-end Symmetrix arrays since the protocol's inception, but only recently began offering native iSCSI support in its midrange Clariion CX3-20 and CX3-40 arrays. Similarly, HP has long supported iSCSI in its high-end StorageWorks XP arrays, but only added native iSCSI support to its midrange StorageWorks EVA arrays in 2006.

Server-based iSCSI integration
Microsoft's Windows Storage Server 2003 R2 now supports iSCSI. FalconStor upped the ante by adding FC support to its IPStor platform, which supports iSCSI, FC and NAS all in one box. Similar to intelligent storage switches, IPStor offers advanced storage features like storage virtualization, snapshots, mirroring and replication.

IPStor closely resembles the Sanrad iSCSI V-Switch, with one difference: The IPStor software can run on any server platform in almost any conceivable configuration. "We chose FalconStor IPStor for its flexibility, which allows us to integrate our existing HP MSA1000 FC array with FalconStor SATA storage, all managed through IPStor," says Logs Financial Services' Schneidemantle. He runs primary and secondary data centers, with an IPStor cluster in the primary data center replicating over iSCSI to an IPStor cluster in the secondary data center.

The biggest benefit of server-based iSCSI integration is the flexibility to accommodate a wide range of integration requirements. The downside is that software running on a standard server is more susceptible to problems, as Schneidemantle discovered. "We changed the automatic active-active IPStor cluster failover to manual failover after a power failure, during which neither of the two cluster nodes came back," he says.

10GbE on the horizon
10Gb Ethernet is expected to drop in price enough to become an affordable option for iSCSI networks. This will allow storage architects to use iSCSI for high-end applications that would have ended up on FC storage in the past. "Existing apps will stay on FC," says Robert Grey, research vice president, worldwide storage systems research at IDC, Framingham, MA. "But new apps will increasingly choose iSCSI."

This was first published in February 2007