

Test before you taste

If you're putting a SAN together, you'll need to test interoperability. Surprisingly, that goes double for open systems products. Some companies are even doing it themselves - here's how

From switch and storage makers to software companies and end users, it seems everyone is building their own interoperability lab. Stocked with the latest storage networking hardware and software, these labs are being used by manufacturers to help advance interoperability between equipment, and by end users to verify that their storage area network (SAN) configurations will work reliably. Interoperability labs are rapidly becoming a requirement for any organization that's trying to put its own SAN together.

So why all the focus now on creating interoperability labs? Some clear reasons for the trend include the increasing complexity of SAN configurations, the continuing development of SAN standards and users' growing familiarity with building their own network configurations. As SANs have become accepted as the right way to build data center networks, there's now a greater mix of storage vendors, operating systems, switches, host bus adapters (HBAs) and other equipment that must work with each other - and as advertised.

The size of storage networks is also multiplying, leading to new, untested configurations. On top of increased complexity, there still are examples of interoperability issues between components from different vendors. Finally, users who have worked with storage network equipment have also found that they're now confident enough to mix and match their own networks, and are establishing labs to help them internally certify new applications and mixtures of vendor equipment. The net result has been a surge in interest in the interoperability and testing of storage networks.

The importance of testing
The benefits of interoperability testing are clear. Stephen Schaeffer, consortium manager of the University of New Hampshire's Fibre Channel (FC) and iSCSI interoperability labs, which provide interoperability testing services to vendors, says, "The goal of the lab is to help vendors attain 'plug it in, turn it on' interoperability. As a computer consumer, I want something that I can unpack, turn on and use without spending days trying to configure things to connect to other things." Interoperability tests are key to uncovering issues between different vendors, who often can't - or won't - do intensive interoperability testing between their devices.

Gary Wierman, a storage analyst at Boeing, Everett, WA, who maintains a test lab with about 2TB of storage, believes production is far too critical at Boeing to expose it to anything that hasn't been tested as stable. "We've found many problems in the lab we never would have anticipated without running our own tests," he says. He runs storage arrays with FC switches, testing against single hosts, with plans to test clustering environments soon.

"An interoperability lab is critical to delivering a service to a customer," says Tony Scotto, senior VP of product development at StorageNetworks, Waltham, MA, which has recently expanded its use of interoperability labs to test its new storage software. "When we were a storage service provider, we had to test the switches and storage to guarantee that the equipment would perform how a user would expect it to perform."

All of the major SAN vendors now operate their own interoperability labs. EMC, Hitachi, Compaq, and IBM all run major interoperability programs from the storage side; McData, Brocade, and others run switch interoperability labs, and even software vendors like Veritas have large investments in interoperability labs. "If you're selling storage, people need to always read back the data they wrote correctly," says Albert Cummings, who's responsible for Hitachi Data Systems' Santa Clara interoperability labs. "Customers have a very unforgiving attitude if data doesn't come back the same. It's the nature of the environment we're in that forces us to test and test and test." Vendors who expect to stay in business need to maintain interoperability testing labs, he explains, because "the alternative doesn't leave us business."

A good interoperability test is essential to assuring the performance of end-user configurations; vendors and integrators have found this is the only way they can certify a solution will work - by testing it rigorously in a lab. Most of all, interoperability labs help to ensure everything works as promised, before a large-scale rollout within an enterprise, and allows IT managers as well as vendors to make sure everything is up to snuff before committing lots of time and money into deployment of a solution.

New protocols
With the large number of vendors shipping equipment into the storage market today, there's a growing need to make sure all this disparate equipment works as required - without issues. As is the case with almost every new communications protocol, FC, iSCSI and other storage protocols have had their share of interoperability issues. Problems do occur, especially in early incarnations as protocols are being developed. Even today, with protocols like FC and its hardware now into second- and third-generation products, there are still configurations and combinations that haven't been tried, or that vendors haven't yet certified - and that requires interoperability testing. Either way, interoperability tests are a requirement for developing any new network equipment, even for more mature protocols like Ethernet.

For example, one of the early issues with FC was the complexity of the Fibre Channel-Arbitrated Loop (FC-AL) protocol. The loop initialization process requires the participation and compliance of every device in the loop to operate properly. Unfortunately, the complexity of the standard and the heavy dependence on everyone getting it right often resulted in what were termed LIP storms - in reference to the barrage of low-level LIP primitives which were the symptom of this incompatibility. This problem may have single-handedly started the belief that FC had interoperability problems, and is one of the major drivers that moved the industry to switched fabric, which has none of these issues. Even as the industry has moved beyond these early gaffes, there continue to be areas where interoperability testing is required to prevent similar problems.

In fact, testing requirements have increased, not diminished. "When everything was proprietary, testing was generally very well-defined," says HDS' Cummings. "However, with open systems, we now require lots and lots of testing - so much, in fact, that it's sometimes hard to know where to stop."

As with any standardization process, SAN standards such as FC have also undergone the process of moving from pre-standard implementations of protocols to a more consistent implementation across vendors. It's arguable that this process is still under way as vendors continue to standardize such things as switch zoning and security. Fortunately for FC users, the standards and equipment have matured significantly. "As the technology matures, I expect to see a shift away from finding standards-related issues to finding implementation-related ones," says UNH's Schaeffer.

However, even as the bumps in FC have been worked out, new protocols such as iSCSI and InfiniBand will see their own interoperability issues, driving an ever-increasing need for interoperability testing.

The keys to interoperability testing
Interoperability testing can be done on many levels. At its most basic is testing at the protocol level, where equipment is tested to see that it meets written standards. Protocol-level testing usually involves measuring SAN equipment against the written specifications for a protocol, and ensuring it meets timing and other requirements. The industry's SANmark tests work at this level, and individual labs and analyzer vendors have developed test suites that check equipment against the basic assertions of a standard. This sort of testing is done extensively by OEMs, and rarely required of users.

Next is basic hardware connectivity and qualification of equipment in a SAN. Equipment is hooked up to common SAN configurations and tested to make sure that other equipment (switches, HBAs and storage) can access - and is accessible from - the new equipment. The bulk of interoperability testing is conducted between vendors who want to ensure their components work reliably together, and by users who want to guarantee they're getting what they need.
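To picture what protocol-level assertion testing looks like, here's a minimal sketch in the style of the test suites described above: each captured frame is checked against rules the written standard imposes. The field names and the `check_frame` function are hypothetical (the 2112-byte maximum FC payload is real, but the rest is purely illustrative):

```python
# Hypothetical sketch of protocol-level assertion testing: a captured frame's
# header fields are checked against values the written standard requires.
# Field names here are illustrative, not taken verbatim from the FC spec.

def check_frame(frame: dict) -> list:
    """Return a list of assertion failures for one captured frame."""
    failures = []
    # Assertion: start-of-frame delimiter must be one the standard defines
    if frame.get("sof") not in {"SOFi3", "SOFn3"}:
        failures.append("invalid start-of-frame delimiter")
    # Assertion: payload must not exceed the maximum the standard allows
    if frame.get("payload_len", 0) > 2112:   # FC max data field is 2112 bytes
        failures.append("payload exceeds maximum frame size")
    # Assertion: reserved header bits must be zero
    if frame.get("reserved_bits", 0) != 0:
        failures.append("reserved header bits set")
    return failures

# A compliant frame produces no failures; a malformed one is flagged.
good = {"sof": "SOFi3", "payload_len": 2048, "reserved_bits": 0}
bad = {"sof": "SOFx9", "payload_len": 4096, "reserved_bits": 1}
print(check_frame(good))  # []
print(check_frame(bad))   # three failures
```

Real protocol analyzers apply hundreds of such assertions, including timing checks that software alone can't measure.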

Getting vendors to do your testing
One of the biggest issues with setting up an interoperability lab is the cost of equipment. Fortunately, many vendors are willing to work with customers to help them determine the best configuration of hardware and software for their application. Stocked with equipment, vendors are busy setting up precertified configurations, and often can help to make sure that their equipment will work in your configuration.

After extensive testing, Hitachi Data Systems provides its certified configurations to its salespeople, who are required to sell only solutions that have been certified and are on its support matrix, according to HDS' Cummings. "If our sales people want to sell solutions outside of our matrix, they come to us to make sure it will work," he says.

Although Veritas doesn't often specifically test configurations for customers, the company usually finds that there's just a small difference between its certified configurations and a customer's. "We may have a different level of firmware or HBA driver that we've tested, or perhaps we have found a problem with a level they're using - and we may just request a firmware or driver change to support that configuration," says Alan Orr, interoperability lab manager at Veritas. Regardless of the vendor, and even across vendor lines, it's clear that the interest is in supporting customers with their configurations.

"The biggest problem we've run into is firmware incompatibility," says Boeing's Wierman. Although he hasn't had as many issues with basic connectivity, Wierman is finding that version control is an area where he spends lots of time resolving problems with vendors. Says Wierman, "Compatibility and upgrades to the outside world are very important," because "we want to be able to upgrade one part of our system and continue to run our system on another. We're becoming more and more cognizant that version levels of firmware are very critical."
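The version-control discipline Wierman describes boils down to checking every planned upgrade against the vendor's certified support matrix before touching production. A minimal sketch of that check (component names and version strings below are made up for illustration):

```python
# A minimal sketch of a support-matrix check: compare a planned firmware
# combination against the pairs a vendor has certified together.
# All names and version strings here are hypothetical.

CERTIFIED_MATRIX = {
    # (switch firmware, HBA driver) pairs certified to run together
    ("switch-3.1.2", "hba-4.0.1"),
    ("switch-3.1.2", "hba-4.1.0"),
    ("switch-3.2.0", "hba-4.1.0"),
}

def upgrade_is_certified(switch_fw: str, hba_drv: str) -> bool:
    """True only if this exact combination appears on the support matrix."""
    return (switch_fw, hba_drv) in CERTIFIED_MATRIX

# Upgrading one component alone can take you off the matrix even though
# each version is individually supported - exactly the trap Wierman warns of.
print(upgrade_is_certified("switch-3.1.2", "hba-4.0.1"))  # True
print(upgrade_is_certified("switch-3.2.0", "hba-4.0.1"))  # False
```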

Performance and throughput measurement are also a major part of the work of interoperability labs. Equipment is tested to ensure it can provide the performance needed by a customer, tested against peak loads and heavy traffic. Tools such as Intel's Iometer are used to generate traffic loads that simulate a user's typical application, and throughput through different components (HBAs, switches, storage) is measured and checked. A key part of performance measurement is ensuring that equipment can handle the expected swing in required throughput for a customer's application.
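Iometer is the purpose-built tool for this kind of work; purely to illustrate the idea behind a throughput test, here is a crude sketch that writes a fixed sequential workload, forces it to the device, and reports MB/s. The block and file sizes are arbitrary assumptions:

```python
# Crude sketch of a sequential-write throughput test (Iometer does this
# properly, with mixed read/write patterns, queue depths and many workers).
import os
import tempfile
import time

def measure_write_throughput(path: str, total_mb: int = 16, block_kb: int = 64) -> float:
    """Sequentially write total_mb of data in block_kb chunks; return MB/s."""
    block = b"\x00" * (block_kb * 1024)
    blocks = (total_mb * 1024) // block_kb
    start = time.perf_counter()
    with open(path, "wb") as f:
        for _ in range(blocks):
            f.write(block)
        f.flush()
        os.fsync(f.fileno())   # force data to the device, not just the cache
    elapsed = time.perf_counter() - start
    return total_mb / elapsed

with tempfile.TemporaryDirectory() as d:
    rate = measure_write_throughput(os.path.join(d, "testfile"))
    print(f"sequential write: {rate:.1f} MB/s")
```

A real lab run would sweep block sizes and queue depths, and repeat the test under peak load to find the swing in throughput the text describes.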

At the software level, there's configuration testing of applications on top of hardware. Backup applications are run across a SAN to ensure that the software package behaves correctly, or failover software such as Microsoft's Cluster Server is tested to ensure that it works with the hardware configuration. Software application testing is becoming more and more important as SANs move beyond simple file sharing to critical database and business applications support. This testing is key for end users, who not only need the hardware to work, but must ensure their applications will run on the SAN.

Finally, negative testing is used to deliberately insert errors into the storage network, such as corrupted frames and spurious signals. Using special traffic generators, these tests generally check the limits of a manufacturer's error correction and data integrity circuits, and simulate worst-case situations for the equipment.
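Negative testing like this requires hardware traffic generators to put corrupted frames on the wire, but the principle can be shown in a toy sketch: flip a bit in a frame's payload and confirm the integrity check catches it. CRC-32 here merely stands in for the frame CRC a real link protocol carries:

```python
# Toy illustration of negative testing: corrupt a frame and verify the
# integrity check detects it. zlib's CRC-32 stands in for a real frame CRC.
import zlib

def make_frame(payload: bytes) -> bytes:
    """Append a CRC-32 trailer, as a link-level frame would carry."""
    return payload + zlib.crc32(payload).to_bytes(4, "big")

def frame_is_intact(frame: bytes) -> bool:
    payload, trailer = frame[:-4], frame[-4:]
    return zlib.crc32(payload) == int.from_bytes(trailer, "big")

frame = make_frame(b"SCSI read response data")
corrupted = bytearray(frame)
corrupted[3] ^= 0x40          # flip one bit, simulating line noise
print(frame_is_intact(frame))             # True
print(frame_is_intact(bytes(corrupted)))  # False
```

The point of worst-case testing is the second result: the equipment must detect and handle the error, not silently pass bad data along.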

An interesting industry effort to help this whole process is the Storage Networking Industry Association's (SNIA) Supported Solutions Forum, which is focused on certifying configurations. As part of the forum, says Veritas' Orr, vendors work together to create complete configurations and fully certify them. Tom Conroy, director of the SNIA Technology Center, says, "The Supported Solutions Forum enables a customer that purchases a defined solution to call one vendor; the vendor will follow the call to conclusion, even if it's not their own component." Members have exchanged support agreements so that customers can avoid the usual finger pointing that occurs when vendors try to dodge a problem. According to Orr, these supported solutions are driven by customer request. "Usually, a customer starts with a configuration they have installed, and asks us to support and certify that configuration," he says.

The typical interoperability lab
Interoperability labs range from small installations, with a few homogeneous switches, hosts and some storage, to extensive, heterogeneous, mixed networks. No matter the size of the interoperability lab, they all revolve around the same set of core equipment: a mixture of switches, hubs, HBAs and storage, as well as a mixture of systems with different operating systems and software applications.

Interoperability lab setups can be quite extensive. According to Veritas' Orr, "We have over $100 million of equipment to support our interoperability testing requirements." Veritas has something on the order of 9,000 sq. ft. of interoperability labs, and maintains an internal reservation system to allow different product development groups to reserve shared resources such as storage arrays. Hitachi Data Systems maintains labs in Santa Clara and San Diego, which total nearly 40,000 sq. ft. of raised-floor lab environment for its interoperability testing.

More typical of an end-user lab is Infinity I/O, which provides network storage training and certification, and maintains its multivendor SAN lab for students as well as for private groups for training, testing and proof of concept. The lab contains a wide variety of equipment. Lab manager Robert Bushey says, "The core lab equipment generally consists of six to eight FC switches, two FC hubs, six Unix hosts, six NT/W2K hosts, 12 Linux hosts [all hosts populated with various HBAs], two FC/SCSI bridges or routers, two tape libraries, virtualization hardware, QoS hardware, 14 JBODs and six FC analyzers/generators."

Interoperability labs are usually well-stocked with network analyzers as well. "We have test tools from a number of vendors, including Finisar, I-Tech and others as well as homegrown tools," UNH's Schaeffer says. Network analyzers allow those testing SAN components to analyze and debug problems or issues that occur, down to the wire level - either to fix an issue, as an OEM, or to help the user provide better information to a vendor.

Setting up your own lab
As SANs have become more commonplace, users are finding that it's beneficial to set up their own labs. These labs tend to be based on a primary vendor's configuration, but offer a secondary place to test new equipment, firmware upgrades and changes to a user's setup - without jeopardizing running applications.

Probably the biggest hurdle to setting up your own interoperability lab is finding the money to purchase the necessary equipment. As you can imagine, stocking a full-blown interoperability lab runs from expensive to exorbitant. Asked about his budget for a lab, Boeing's Wierman reports that it's "in the millions." Despite the cost, he says, "We've found problems we never would have anticipated; things that we'd never put into production until they were resolved." Boeing's labs are part of a long-term plan, he says. "We're looking right now at homogeneous SANs, and are branching out to heterogeneous data centers. We are trying to look five years ahead" - meaning his test labs will be expected to be in use for a long time.

For a minimal lab, it's possible to get an environment that reasonably duplicates the critical parts of your network infrastructure, so you can do some small-scale testing of configuration changes, firmware and software upgrades and qualify the performance and interoperability of new components.

In many cases, you'll find that vendors will be willing to let you evaluate equipment to make sure that it will work in your environment before you actually deploy; and many vendors will loan you equipment if you have a problem or configuration you need to test.

Towards the future
Interoperability labs are here to stay, and the need for interoperability testing is only going to increase. Companies developing equipment will continue to increase the size and scope of their labs, and those who are using storage networks will find that it only makes sense to establish and maintain at least a small lab to help test configuration changes and new equipment. The age of the interoperability lab is here, and will continue to help to drive storage networks forward.
