
Guide to high-level administration of SANs

By Stephen J. Bigelow, WinIT

25 Oct 2007 | SearchStorage.com

Storage area networks (SANs) concentrate storage into one specialized network. But SAN deployments can be complex. An administrator must configure the SAN hardware elements to achieve the proper mix of performance and reliability. Then the storage resources must be organized and allocated to the users or applications across the organization.

You've already learned the basics of storage networks from our earlier guide. Now let's look at the higher-level issues in a SAN architecture: SAN components, SAN connectivity, SAN protocols and SAN management.

SAN cabling

Whether optical or copper, the cables used for SAN implementations should always be labeled in a clear, consistent manner throughout the plant. It doesn't matter which cable labeling scheme you adopt, but it should be followed faithfully. Once cabling is installed or replaced, take the time to record it in SAN (and physical plant) documentation so that other installers or technicians can trace the SAN later.
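
As one illustration, the hypothetical Python helper below builds labels from site, rack, patch panel and port. The scheme itself is arbitrary -- what matters is applying one scheme consistently to both ends of every run.

```python
# Minimal sketch of a hypothetical cable-labeling convention:
# <site>-R<rack>-P<panel>-<port>, applied to both ends of every run.
# The format is illustrative; any consistent convention works.

def cable_label(site: str, rack: int, panel: int, port: int) -> str:
    """Build a label such as 'DC1-R04-P02-17' for one cable end."""
    return f"{site}-R{rack:02d}-P{panel:02d}-{port:02d}"

# Label both ends of a run and record the pair in plant documentation.
a_end = cable_label("DC1", 4, 2, 17)   # 'DC1-R04-P02-17'
b_end = cable_label("DC1", 9, 1, 3)    # 'DC1-R09-P01-03'
print(f"run: {a_end} <-> {b_end}")
```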

Poor cable installation practices also include tight runs, excessive bends (crimps) and inadequate protection across floors and other exposed areas. Optical cable is particularly vulnerable to this kind of abuse, which can lead to premature failure. It is critical for administrators or other IT staff to oversee the work performed by professional cable installers and verify that reasonable installation practices are followed.

SAN HBAs

Normally, a single host bus adapter (HBA) is enough to establish a connection to each SAN server or storage device, but traffic congestion and connection reliability are always concerns. A single HBA link handles all of the traffic to a storage device, so excess traffic can congest the HBA and reduce performance. Also, if an HBA fails, communication is cut off and the affected portion of the SAN can become unavailable.

Storage administrators can overcome these problems by implementing multiple HBAs at the host server and storage device. Multiple HBAs can improve traffic handling through load balancing and provide redundancy in the event of a fault. Software tools are used to define the load-balancing and failover behaviors of the HBAs. These tactics also help to avoid single points of failure in the SAN, and are central to the notion of high-availability storage networks.
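
To make the load-balancing and failover idea concrete, here is a minimal Python sketch of the behavior that multipathing software provides. It is a conceptual model only; the class, path names and round-robin policy are illustrative, not any vendor's actual driver code.

```python
# Conceptual sketch of HBA multipathing: round-robin load balancing
# across healthy paths, with automatic failover when a path faults.
from itertools import cycle

class MultipathDevice:
    def __init__(self, paths):
        self.healthy = list(paths)          # e.g. ["hba0", "hba1"]
        self._rr = cycle(self.healthy)

    def fail(self, path):
        """Mark a path failed; traffic fails over to the survivors."""
        self.healthy.remove(path)
        if not self.healthy:
            raise RuntimeError("all paths down: storage unavailable")
        self._rr = cycle(self.healthy)

    def next_path(self):
        """Pick the path for the next I/O (round-robin policy)."""
        return next(self._rr)

dev = MultipathDevice(["hba0", "hba1"])
print([dev.next_path() for _ in range(4)])  # alternates hba0/hba1
dev.fail("hba0")                            # simulate an HBA fault
print([dev.next_path() for _ in range(2)])  # all I/O now on hba1
```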

SAN switches

Switches are taking on more functionality in storage networks, supporting management features and intelligent functions such as switch-level storage virtualization, storage tiering and data migration. The main benefit of this development is heterogeneity -- since switch traffic is largely independent of a particular manufacturer's servers or storage devices on the SAN, a switch can oversee a much larger suite of systems without interoperability concerns.

If you're considering an iSCSI SAN, opt for Ethernet (IP) switches rather than Fibre Channel (FC) switches. Ethernet switches intended for iSCSI SAN deployment should offer high performance and low latency while avoiding the port oversubscription that is typical of commodity Ethernet switch products. Gateway devices can also bridge the two worlds; for example, the V-Switch family from Sanrad is designed to interface iSCSI host servers to FC SAN resources.
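
Oversubscription is easy to quantify: divide the aggregate host-facing bandwidth by the uplink bandwidth. A quick Python sketch, with illustrative port counts rather than any product's actual specifications:

```python
# Sketch: computing port oversubscription for an Ethernet switch used
# as an iSCSI fabric. Figures are illustrative, not vendor specs.

def oversubscription(ports: int, port_gbps: float, uplink_gbps: float) -> float:
    """Ratio of aggregate host-facing bandwidth to uplink bandwidth."""
    return (ports * port_gbps) / uplink_gbps

# 48 GigE host ports feeding a single 10 GigE uplink: 4.8:1 oversubscribed.
print(f"{oversubscription(48, 1.0, 10.0):.1f}:1")
```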

SAN connectivity and protocols

SAN connectivity is gaining bandwidth, allowing more users and applications to access burgeoning volumes of data. FC connectivity is particularly noteworthy, building on traditional 1 gigabit per second (Gbps) and 2 Gbps speeds with 4 Gbps support available today and even some 10 Gbps ports.

However, 10 Gbps FC is currently only found in high-end SAN devices such as McData Corp.'s Intrepid series or the MDS 9513 director-class switch from Cisco Systems Inc. Since 10 Gbps FC is not backward-compatible with slower FC port speeds, 10 Gbps is normally reserved for inter-switch links (ISLs). Once 8 Gbps FC appears, it is expected to offer the backward compatibility needed to gain user acceptance.

Storage switches typically support common protocols such as the Fibre Channel Protocol (FCP), which carries SCSI traffic on open systems, and FICON for IBM mainframes. More and more storage switches also include Ethernet/IP ports to handle the iSCSI, iFCP and FCIP protocols.

As a SAN grows in size, the number of switches deployed in the SAN also increases, and this can eventually lead to performance degradation due to interswitch latency (the delay added each time traffic passes from switch to switch). Switches normally interconnect using dedicated ISL ports. You can reduce ISL latency by choosing switches with fast ISL ports, or you can trunk multiple ISL ports together to improve performance and redundancy.
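
A rough Python sketch of both effects -- per-hop ISL latency and trunked ISL bandwidth -- using illustrative figures rather than measured switch specifications:

```python
# Sketch: interswitch latency accumulates per hop, so fabric design
# matters. Per-hop delay and port speeds below are illustrative.

def path_latency_us(hops: int, per_hop_us: float) -> float:
    """End-to-end fabric latency for a frame crossing `hops` ISLs."""
    return hops * per_hop_us

# Traffic crossing three switches (two ISL hops) at ~2 us per hop:
print(path_latency_us(2, 2.0), "microseconds of added fabric latency")

# Trunking: four ISLs aggregated behave like one fatter logical link.
trunk_gbps = 4 * 4.0   # four 4 Gbps ISLs -> 16 Gbps logical ISL
print(trunk_gbps, "Gbps aggregate ISL bandwidth")
```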

Deployments of iSCSI SANs are gaining ground in mid-sized businesses and enterprise departments, largely due to the simplicity and low cost of Ethernet technology, as well as the ready availability of Gigabit Ethernet (GigE) components. Storage administrators can bolster iSCSI performance through the use of network interface cards (NICs) with TCP/IP offload engine (TOE) features.

In addition, iSCSI HBAs are available from major component makers such as Adaptec Inc., Intel Corp. and QLogic Corp. With 10 GigE looming, iSCSI may soon become a serious contender against Fibre Channel for enterprise SAN deployments. Furthermore, iSCSI initiator (client) and target (server) software is now readily available for most enterprise operating systems, including Windows, AIX, NetWare and Linux. The main issue with iSCSI is security, and network administrators must take great care to keep iSCSI SAN traffic segregated from everyday user traffic using a virtual LAN (VLAN) or a separate physical network.

Whether you're using FC or IP connectivity in the SAN, you need to consider the connection speeds at every link between a server and storage device. As SAN deployments grow, administrators sometimes forget that data in a given path moves only as fast as the slowest link. Investing in a 10 Gbps switch may not make sense if the main storage system is only communicating at 2 Gbps.
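
Here's that arithmetic as a minimal Python sketch; the link speeds are illustrative:

```python
# Sketch: the effective speed of a storage path is the minimum of its
# link speeds. Speeds below are illustrative.

def effective_gbps(*link_speeds_gbps: float) -> float:
    """A path is only as fast as its slowest link."""
    return min(link_speeds_gbps)

# 4 Gbps HBA -> 10 Gbps ISL -> 2 Gbps array port: the array port wins.
print(effective_gbps(4.0, 10.0, 2.0), "Gbps effective path speed")
```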

Auto-negotiation can also be an issue if one link fails to shift its speed properly to accommodate another data rate. Many administrators choose to eliminate potential connectivity problems by configuring the speed of each link manually.

Connectivity is tied closely to reliability, and many SAN architectures implement multiple simultaneous connections between HBAs, switches and storage systems. By creating redundant connections along different paths, SAN architects eliminate single points of failure that could otherwise cut off storage from mission-critical applications: when a fault occurs on one path, traffic can "fail over" to another. Multiple paths can also be aggregated to improve performance.

SAN management

At most companies, the number of IT staff is not keeping pace with storage capacities that continue to spiral upward. Many administrators who were managing 3 TB of data just a few years ago are now often responsible for 15 TB. This trend has put a new emphasis on SAN management, especially in areas of process control and automation.

RAID platforms often include management tools, and the goal is to give administrators maximum flexibility with minimal downtime. In the past, RAID arrays typically required downtime for administrators to add disks or change the RAID group size, and the entire RAID group had to be rebuilt when changing RAID levels (e.g., migrating from RAID 5 to RAID 6). Today, look for RAID platforms and management tools that support these changes on the fly. RAID controllers are also touting advanced drive diagnostics, launching pre-emptive rebuilds of disks that report questionable behavior.
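
To see why a RAID-level change has traditionally forced a rebuild, consider how the parity overhead changes the on-disk layout. The Python sketch below (drive counts and sizes are illustrative) compares usable capacity for the same disk group under RAID 5 and RAID 6.

```python
# Sketch: RAID 5 dedicates one disk's worth of capacity to parity,
# RAID 6 dedicates two, so migrating between them changes the data
# layout across the whole group. Figures are illustrative.

def usable_tb(disks: int, disk_tb: float, parity_disks: int) -> float:
    """Usable capacity after subtracting parity overhead."""
    return (disks - parity_disks) * disk_tb

group = (8, 0.5)                               # eight 500 GB drives
print("RAID 5:", usable_tb(*group, parity_disks=1), "TB usable")  # 3.5
print("RAID 6:", usable_tb(*group, parity_disks=2), "TB usable")  # 3.0
```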

Storage resource management tools

A big part of SAN management is storage resource management (SRM) tools. SRM tools can analyze and report on available storage systems and utilization, and improved analytical features can even help administrators ease bottlenecks and other performance trouble spots. For example, Softek Storage Solutions Corp.'s Performance Tuner first establishes a performance baseline across the SAN, alerts IT staff when performance falls outside the norm and then suggests improvements.
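
To illustrate the baseline-and-alert pattern, here is a conceptual Python sketch. The three-sigma threshold and sample data are assumptions for illustration, not Softek's actual method.

```python
# Conceptual sketch of baseline-and-alert monitoring in the spirit of
# an SRM performance tool. Threshold (3 sigma) and data are illustrative.
from statistics import mean, stdev

def baseline(samples):
    """Establish a performance baseline from historical samples."""
    return mean(samples), stdev(samples)

def out_of_norm(value, mu, sigma, k=3.0):
    """Alert when a reading falls more than k sigma from the baseline."""
    return abs(value - mu) > k * sigma

history = [4.1, 4.3, 3.9, 4.2, 4.0, 4.4]   # e.g. ms response times
mu, sigma = baseline(history)
print(out_of_norm(4.2, mu, sigma))  # False: within the norm
print(out_of_norm(9.7, mu, sigma))  # True: alert IT staff
```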

But the biggest push for SRM tools is toward heterogeneity and automation -- creating, allocating and managing large pools of storage in the data center. CA's BrightStor tool supports more than 100 storage arrays, SAN switches, tape libraries and applications; this kind of interoperability is important for centralizing management functions. BrightStor also supports 500 million files and centralizes backup functions across popular backup products such as ARCserve, Tivoli Storage Manager, Legato NetWorker and Veritas NetBackup.

Automated provisioning features also save considerable time for storage administrators, and emerging chargeback capabilities ensure that enterprise SAN users pay for only the storage capacity that they are utilizing -- a crucial element in tiered storage strategies.
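
Chargeback itself is simple arithmetic once utilization is measured per tier. A minimal Python sketch, with hypothetical tier names, rates and usage figures:

```python
# Sketch of tiered chargeback: each department pays only for the
# capacity it actually uses, priced by storage tier. Tier names,
# rates and usage are illustrative.

RATE_PER_GB_MONTH = {"tier1_fc": 0.90, "tier2_sata": 0.30}

def monthly_charge(usage_gb_by_tier: dict) -> float:
    """Bill a department for utilized capacity across tiers."""
    return sum(gb * RATE_PER_GB_MONTH[tier]
               for tier, gb in usage_gb_by_tier.items())

# A department using 200 GB of FC and 800 GB of SATA this month:
print(f"${monthly_charge({'tier1_fc': 200, 'tier2_sata': 800}):.2f}")  # $420.00
```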
