Published: 11 Aug 2008
You don't have to be an iSCSI cheerleader, but you should know the score when it comes to this maturing protocol.
It's generally accepted that IP-based SANs are here to stay. iSCSI, the flagship IP SAN protocol, now enjoys, or at least should enjoy, the same respect as Fibre Channel (FC). But with respect comes responsibility. While some storage administrators may still hesitate to bring iSCSI into their environments, it's a mature protocol and deserves the same level of administrative discipline applied to FC networks. Plenty of storage practices built around FC transfer easily to IP SANs and iSCSI networks. Let's examine some of these practices and see how they can make your iSCSI experience comparable to the one you have with Fibre Channel.
In the past year, the number of iSCSI installations has surged. Many have been native installations, as most storage vendors now offer some form of iSCSI support on their arrays. On the server side, most of the clients have been Microsoft Windows or Linux, thanks mostly to free iSCSI-initiator software. While bridged iSCSI installations haven't grown as quickly, they offer an alternative for users whose arrays don't support iSCSI or who aren't yet ready to make a new investment, although bridging introduces an additional layer that adds overhead.
Technologies such as blade servers and server virtualization benefit from iSCSI because it minimizes the number of connections required. Because everything is IP-based, there's no need to consume slots with host bus adapters (HBAs). No HBAs means fewer cables, which simplifies your configuration.
Because Gigabit Ethernet (GbE) is now generally available on all systems, and most systems come with multiple network interfaces, the lack of additional HBAs doesn't present a problem.
What iSCSI means for storage management
A separate iSCSI network can be a physically separate network or a sub-component of a bigger network (like a VLAN on a corporate network). The key is to ensure that storage traffic is contained within that network or VLAN, and that there's enough dedicated bandwidth for devices to communicate with the storage port, which may be on a different switch. In general, it's a good idea to keep the storage and initiator ports on the same physical switch to minimize disruptions. While iSCSI networks are generally Gigabit Ethernet (and sometimes 10GbE), traffic should remain balanced. It's a good practice to make sure that storage fan-in ratios are within vendor-recommended limits. Unlike other IP networks, traffic on IP storage networks is aggregated at the storage port and not just on inter-switch links (ISLs). This is due to the nature of how storage is accessed--it's an initiator-target model. In this model, initiators don't communicate with each other and neither do targets. (When it comes to non-storage traffic, servers can communicate with one another.)
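On a Linux initiator, carving out a dedicated storage VLAN is a short exercise. The sketch below uses iproute2; the parent interface (eth1), VLAN ID (100) and addressing are assumptions--substitute the values for your own environment.

```shell
# Create a tagged VLAN sub-interface dedicated to iSCSI traffic
ip link add link eth1 name eth1.100 type vlan id 100

# Address it on the storage subnet and bring it up
ip addr add 192.168.100.10/24 dev eth1.100
ip link set dev eth1.100 up
```

The matching VLAN must, of course, be configured on the switch ports carrying the storage traffic.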
Don't make the decision to accept or reject iSCSI strictly on the basis of speeds and feeds. Unless the FC or iSCSI camp surrenders, there will always be a war raging to decide who runs faster. If there's any comparison to be made, it should be on the basis of how many I/Os or connections can be sustained at high speeds by each of the respective protocols.
The basics: Availability, security
Most networks offer some type of high-availability or failover mechanism. They can be hardware-based (like Ethernet bonding) or software-based (like IP multipathing). Because IP traffic can switch between different interfaces, network access itself can be made highly available.
iSCSI benefits from this characteristic, hence, every iSCSI implementation should have some kind of high-availability mechanism built in. Storage vendors have also upgraded their multipathing software to support iSCSI and FC. The use of such software is recommended in addition to the IP mechanisms cited above.
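As a concrete illustration, Linux device-mapper multipathing can sit on top of two iSCSI sessions to the same LUN. This is a sketch for a RHEL-family system; the WWID and alias are placeholders for illustration.

```shell
# Enable multipathing with a default configuration (RHEL-family tool)
mpathconf --enable

# Give the iSCSI LUN a friendly alias; the WWID below is a placeholder
cat >> /etc/multipath.conf <<'EOF'
multipaths {
    multipath {
        wwid  360a98000c0ffee0000000000deadbeef
        alias iscsi_lun0
    }
}
EOF

systemctl restart multipathd
multipath -ll    # verify that both paths to the LUN are listed
```

With both an IP-level mechanism (bonding or a second interface) and multipathing in place, the loss of a single NIC, cable or switch port shouldn't take the LUN offline.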
FC SANs enjoyed an intrinsic sense of security because the networks were isolated from the rest of the infrastructure, minimizing external and internal intrusion. Basic security, such as LUN masking and zoning, is also available in iSCSI networks. But that's not all. By default, iSCSI configurations are insecure and, by their IP-based nature, are exposed to much more risk than FC SANs. In the IP security world, the common categories are authentication, authorization and auditing. Most iSCSI configurations can be secured by the use of IPSec, a protocol commonly used in virtual private networks. iSCSI networks should never be configured for open access, just as a server should never allow remote access without login credentials.
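On the authentication front, the open-iscsi initiator supports CHAP, the standard iSCSI login authentication mechanism. The sketch below enables one-way CHAP for a target; the IQN, portal address, username and secret are all placeholders.

```shell
# Placeholder target and portal -- replace with your array's values
TARGET=iqn.2008-08.com.example:array.lun0
PORTAL=192.168.100.20

# Require CHAP on the session and set the credentials
iscsiadm -m node -T "$TARGET" -p "$PORTAL" \
    -o update -n node.session.auth.authmethod -v CHAP
iscsiadm -m node -T "$TARGET" -p "$PORTAL" \
    -o update -n node.session.auth.username -v initiator01
iscsiadm -m node -T "$TARGET" -p "$PORTAL" \
    -o update -n node.session.auth.password -v 'a-strong-shared-secret'

# Log in with the new settings
iscsiadm -m node -T "$TARGET" -p "$PORTAL" --login
```

The same credentials must be configured on the array side, and mutual (bidirectional) CHAP is available where the target verifying itself to the initiator matters.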
Booting from SAN
Booting from the SAN allows you to centrally locate all operating system images and eliminates individual images stored internally on servers. This is especially helpful in diskless blade and virtualized environments. But boot from SAN may be a stretch if you're converting to an iSCSI environment and your systems currently boot from local disk. In that case, you're better served keeping your legacy environment as is and having your newer servers boot from SAN.
In the IP world, TCP/IP tuning can be an exhaustive exercise, and because iSCSI is IP-based, it isn't spared this exercise. While the list of tunable parameters could easily fill a few pages, not all parameters apply to all situations; the storage and/or iSCSI initiator vendor will generally supply a list of parameters to examine. An untuned iSCSI system will function, but even a modest load may expose performance issues.
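To give a flavor of what such a list looks like, the TCP socket buffer limits below are among the parameters commonly examined for GbE storage traffic on Linux. The values shown are purely illustrative; use the figures your storage or initiator vendor recommends.

```shell
# Raise the ceiling on socket buffer sizes (illustrative values)
sysctl -w net.core.rmem_max=16777216
sysctl -w net.core.wmem_max=16777216

# Min / default / max TCP receive and send buffers
sysctl -w net.ipv4.tcp_rmem="4096 87380 16777216"
sysctl -w net.ipv4.tcp_wmem="4096 65536 16777216"
```

Settings applied with `sysctl -w` don't survive a reboot; persist them in /etc/sysctl.conf once you've validated them under load.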
Once you've deployed your first batch of iSCSI storage systems, you can build on top of that comfort level to deploy more complex solutions such as clusters. The basic concepts of clustering on the SAN remain the same as for FC, except the LUNs are now accessed via iSCSI.
So there you have it. iSCSI is a mature protocol for accessing storage and a solid alternative to FC. Fibre Channel may not be going away anytime soon, but iSCSI is definitely here to stay.