

Who should control the SAN?

There's a tug-of-war going on over the storage network. Network people want to manage it, and so does the storage staff. But who should control the SAN?

It has been a long time coming, but enterprise storage management teams are finally here. Nearly every large company now has a manager dedicated to the data storage infrastructure, and most back that manager with a team of people in various roles. The next question concerns the demarcation of those roles: exactly which functions should fall within the storage group's sphere of responsibility?

There are a few key areas where management responsibilities intersect. Host bus adapter (HBA) installation and configuration normally requires administrator access, so that task has often been handed off to the systems administrator. Overseeing volume managers is normally in the systems administrator's purview, while the installation of software agents for storage management products has emerged as a huge bone of contention between systems and storage people. Other areas of overlap include change management functions, service support (monitoring and escalation), asset management and cabling.

Cisco's monkey wrench
Cisco threw a monkey wrench into IT operations with the introduction of its MDS line of Fibre Channel (FC) switches in 2003. Although they supported the FC protocol, the switches shared many characteristics with the firm's Ethernet switch line, including Cisco's familiar Internetwork Operating System (IOS) command line interface. While the MDS line wasn't the floor-clearing event rivals initially feared, Cisco has racked up some major wins, especially in the largest businesses.

With huge Cisco SANs in major organizations, the inevitable question was whether the ownership and management of these devices should remain alongside storage devices or whether they should be merged with the network support groups in place to support Cisco's Ethernet and WAN hardware. Not surprisingly, the answer proved elusive.

While there were certainly economies of scale to be gained by leveraging the same personnel to manage these new switches, FC SANs require quite different skills. Anecdotal experience lends some support to Cisco's contention that the similarities outweigh the differences; an associate reports that his friend, trained on Cisco Ethernet gear, was far quicker at picking up MDS skills than storage-focused participants at a recent training session. And a number of companies have handed off the management of Cisco FC gear to network groups with great success.

The benefits boil down to better utilization of personnel, with the same engineers handling tasks like provisioning, troubleshooting and the monitoring of SAN, LAN and WAN equipment. In many cases, these groups also had more mature processes in place for things like change management and fault response, instantly improving SAN availability.

A few other benefits became apparent once the switch was made. The network designers architected SANs like their networks, running fiber cables to patch panels, reserving plenty of card slots for future expansion and designing in high levels of resiliency. A more subtle benefit: This change in thinking about the SAN will have organizations quickly reaping the benefits of iSCSI because network personnel will almost certainly manage Ethernet-based iSCSI SANs.

The case for going solo

A spectrum of choice

The responsibility for the management of storage connectivity assets varies by technology. The Ethernet network used by NAS filers clearly falls under the network group, while Fibre Channel SANs have typically resided alongside storage arrays. But new technologies are blurring the lines of ownership and causing controversies in many enterprises.

The handoff to the network group is by no means a done deal, however. Only Cisco's FC gear is a quick learn for networking folks, so the purchasing decision is sharply curtailed. And even with the Cisco gear, the similarities end with the management interface: FC fabrics are considerably different from Ethernet networks. Storage applications require extreme levels of availability and low latency, and it might be difficult to convince a seasoned network veteran that storage is a different traffic type.

Another issue is that many functions would still fall to the storage management group to perform. They'd have to perform capacity planning, allocate array storage and support the array elements, but an often-overlooked task is fabric zoning. Asking network administrators to regularly perform zoning tasks would be a significant diversion because Ethernet switch configurations normally remain quite a bit more static than FC fabrics. Luckily, the role-based access model used for the Cisco switches allows zoning tasks to be delegated to the storage group.
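To give a flavor of how that delegation works, here's a minimal sketch of a zoning-only role on an MDS switch. This is illustrative only: the role and user names are hypothetical, and the exact command syntax varies by SAN-OS release, so check the configuration guide for your version.

```
! Illustrative SAN-OS configuration -- role and user names are hypothetical.
! Define a role that may change only zoning-related features.
switch(config)# role name storage-zoning
switch(config-role)# rule 1 permit config feature zone
switch(config-role)# rule 2 permit config feature zoneset
switch(config-role)# exit
! Assign the role to a storage-team account.
switch(config)# username storage-admin role storage-zoning
```

An account restricted this way can create and activate zones without being able to touch routing, VSAN or interface configuration, which keeps the division of labor clean.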

But even with a technical solution to the zoning problem, more hands in the mix will likely mean slower provisioning, more complicated processes and difficulty in determining the total cost of ownership. Management costs, and even capital equipment costs, might be buried in two or more vertical organizations within IT. This financial concern mostly affects the large companies implementing large Cisco SANs, as they're far more likely to want comprehensive financial accounting.

In contrast, placing the SAN fabric in the storage group leads to a single, end-to-end organization for storage. SAN devices can be purchased, deployed, configured and managed as a single unit with storage arrays. Expertise is focused on storage, and a single operations team is responsible for all elements from the array to the host. This can help with provisioning speed and the time required to investigate and mitigate problems.

Make your choice
If you don't use Cisco FC hardware, there's no compelling case to hand off SAN management--and there may never be one. Even if you're an all-Cisco shop, the decision isn't the slam-dunk it might seem. Storage pundits are ambivalent, with Gartner's "not right now" stance phrased in the mildest of terms.

The choice comes down to time and effort. If your network group is raring to go, has mature processes for monitoring and deployment, and you've chosen an all-Cisco SAN, then leverage their abilities. But don't think you can just hand over management of the SAN lock, stock and barrel. You'll have to set up a provisioning process that details who will do zoning tasks, and you'll have to keep them in the loop with capacity planning, metrics and billing.

If you've selected McDATA or Brocade for your SAN hardware, there's no reason to hand management over to the network group. Getting network staff trained and ready to work with these platforms won't be any easier than hiring people into your own team to manage the devices.

No matter which situation you're in, the future is getting cloudy. iSCSI is rapidly moving into production, especially at sites with large bases of Microsoft Windows servers, and the use of NAS devices (often called filers) is solidifying. Both technologies use commodity Ethernet equipment, so handing network management responsibility to the LAN group is a foregone conclusion for most companies. And both rely on array-level authentication and other features rather than network-level zoning, so this area of confusion is eliminated.
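As an example of what array-level authentication looks like in practice, here's how CHAP credentials might be set for a target using the open-iscsi initiator on a Linux host. The target name, portal address and credentials are hypothetical, shown only to contrast this model with fabric zoning:

```
# Illustrative open-iscsi commands; target IQN, portal and credentials
# are hypothetical. Enable CHAP authentication for one target record.
iscsiadm -m node -T iqn.2005-04.com.example:array1 -p 10.0.0.50 \
    --op update -n node.session.auth.authmethod -v CHAP
iscsiadm -m node -T iqn.2005-04.com.example:array1 -p 10.0.0.50 \
    --op update -n node.session.auth.username -v storagehost1
```

Because access control lives in the initiator and the array rather than in the switches, the network group can run the Ethernet gear without ever touching storage access policy.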

As your SAN becomes more focused on commodity hardware (whether Cisco FC or plain Ethernet), you could find the decision made for you. The network group will eventually own SAN network management responsibilities. But by then, you might not recognize the SAN anymore.
