
Get your iSCSI game on: Best Practices


This article can also be found in the Premium Editorial Download "Storage magazine: Betting on an enterprise-level virtual tape library (VTL)."


You don't have to be an iSCSI cheerleader, but you should know the score when it comes to this maturing protocol.

It's generally accepted that IP-based SANs are here to stay. iSCSI, the flagship IP SAN protocol, now enjoys, or at least should enjoy, the same respect as Fibre Channel (FC). But with respect comes responsibility: even if some storage administrators remain hesitant to bring iSCSI into their environments, it's a mature protocol and warrants the same administrative discipline applied to FC networks. Many storage practices built around FC transfer readily to IP SANs and iSCSI networks. Let's examine some of these practices and see how they can make your iSCSI experience comparable to the one you have with Fibre Channel.

In the past year, the number of iSCSI installations has surged. Many have been native installations, as most storage vendors now offer some form of iSCSI support on their arrays. On the server side, most clients have run Microsoft Windows or Linux, thanks largely to free iSCSI initiator software. Bridged iSCSI installations haven't grown as quickly, but they remain an alternative for users whose arrays don't support iSCSI or who aren't yet ready to make a new investment, with the caveat that the bridge adds a layer that increases overhead.

Technologies such as blade servers and server virtualization benefit from iSCSI because it allows you to minimize the number of connections required. Because everything is IP-based, there's no need to waste slots on host bus adapters (HBAs). No HBAs means fewer cables, which simplifies your configuration.

Because Gigabit Ethernet (GbE) is now generally available on all systems, and most systems come with multiple network interfaces, the lack of additional HBAs doesn't present a problem.

What iSCSI means for storage management
In the early days of IP, collision-based networks were common. They posed a scalability challenge and were eventually replaced by more scalable switched networks. But one lesson we all learned from those networks is the importance of traffic segregation: traffic that has no business sharing the pipe with other forms of traffic needs its own network. Backup traffic is a good example; it's common practice to put it on its own network to minimize its impact on other types, such as application traffic. IP SANs are no different. If you put your iSCSI traffic alongside your other traffic, it's bound to cause a performance issue.

A separate iSCSI network can be a physically separate network or a subcomponent of a larger network (such as a VLAN on a corporate network). The key is to ensure that iSCSI traffic is contained within that network or VLAN, and that there's enough dedicated bandwidth for initiators to reach the storage port, which may sit on a different switch. In general, it's a good idea to keep the storage and initiator ports on the same physical switch to minimize disruptions. While iSCSI networks are generally Gigabit Ethernet (and sometimes 10GbE), traffic should remain balanced; make sure that storage fan-in ratios stay within vendor-recommended limits. Unlike other IP networks, traffic on IP storage networks aggregates at the storage port, not just on inter-switch links (ISLs). This is a consequence of how storage is accessed: it's an initiator-target model, in which initiators don't communicate with each other and neither do targets. (With non-storage traffic, by contrast, servers can communicate with one another.)
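To make the fan-in guidance concrete, here is a minimal sketch in Python of checking whether a storage port is oversubscribed. The numbers, and the 7:1 limit in particular, are hypothetical placeholders; use your array vendor's documented recommendation.

```python
# Sketch: estimate the fan-in (oversubscription) ratio at an iSCSI storage port.
# The 7:1 limit below is a placeholder; substitute your vendor's recommendation.

def fan_in_ratio(initiator_link_gbps, storage_port_gbps):
    """Ratio of aggregate initiator bandwidth to storage-port bandwidth."""
    return sum(initiator_link_gbps) / storage_port_gbps

# Example: twelve GbE-attached hosts sharing one GbE storage port.
initiators = [1.0] * 12          # twelve 1 Gb/s initiator links
ratio = fan_in_ratio(initiators, storage_port_gbps=1.0)

VENDOR_LIMIT = 7.0               # hypothetical vendor-recommended maximum
print(f"fan-in ratio {ratio:.0f}:1, within limit: {ratio <= VENDOR_LIMIT}")
# prints: fan-in ratio 12:1, within limit: False
```

A ratio like 12:1 on a single GbE port is exactly the kind of aggregation at the storage port that the initiator-target model produces, and it's why fan-in deserves attention that ordinary server-to-server networks don't demand.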

Don't make the decision to accept or reject iSCSI strictly on the basis of speeds and feeds. Unless the FC or iSCSI camp surrenders, there will always be a war raging to decide who runs faster. If there's any comparison to be made, it should be on the basis of how many I/Os or connections can be sustained at high speeds by each of the respective protocols.
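As a back-of-the-envelope illustration of why sustained I/O matters more than raw link speed, this sketch converts link bandwidth and I/O size into a theoretical ceiling on I/Os per second. The figures are idealized arithmetic, not benchmark results, and ignore protocol overhead, latency and queuing.

```python
# Sketch: theoretical upper bound on I/Os per second for a given link speed
# and I/O size, ignoring protocol overhead, latency and queuing. Real FC vs.
# iSCSI comparisons should come from measured workloads, not line rates.

def max_iops(link_gbps, io_size_kb):
    """Line-rate ceiling on I/Os per second."""
    bytes_per_sec = link_gbps * 1e9 / 8      # bits to bytes
    return bytes_per_sec / (io_size_kb * 1024)

# A GbE iSCSI link moving 8 KB I/Os tops out near 15,000 IOPS on paper.
print(round(max_iops(1.0, 8)))   # prints: 15259
```

The point of the exercise is that two links with very different headline speeds can sustain similar I/O rates for small-block workloads, which is why the comparison should be made on sustained I/Os and connections rather than speeds and feeds.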

This was first published in August 2008
