Next is basic hardware connectivity and qualification of equipment in a SAN. Equipment is hooked up to common SAN configurations and tested to make sure that the other equipment in the fabric (switches, HBAs and storage) can both access and be accessed by the new device. The bulk of interoperability testing is usually conducted by vendors who want to ensure their components work reliably together, and by users who want to guarantee they're getting what they need.
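To make that concrete, here's a minimal sketch - not drawn from any vendor's test plan - of the kind of connectivity check a Linux lab host might run after new equipment is cabled in. The sysfs paths assume a Fibre Channel HBA on a reasonably modern Linux kernel, and the WWPNs and LUN count are hypothetical placeholders for your own environment.

```python
#!/usr/bin/env python3
"""Minimal connectivity check for a Linux SAN host: confirm each FC HBA port
is online and that the expected number of LUNs is visible. The WWPN list and
LUN count are placeholders (assumptions), not values from the article."""

import glob
import os
import sys

# Placeholder expectations for this host -- replace with your own values.
EXPECTED_WWPNS = {"0x10000000c9000001", "0x10000000c9000002"}
EXPECTED_LUN_COUNT = 4

def fc_ports():
    """Yield (wwpn, state) for every FC HBA port the kernel exposes."""
    for host in glob.glob("/sys/class/fc_host/host*"):
        with open(os.path.join(host, "port_name")) as f:
            wwpn = f.read().strip()
        with open(os.path.join(host, "port_state")) as f:
            state = f.read().strip()
        yield wwpn, state

def visible_luns():
    """Count the SCSI devices the host currently sees."""
    return len(glob.glob("/sys/class/scsi_device/*"))

def main():
    ok = True
    seen = {}
    for wwpn, state in fc_ports():
        seen[wwpn] = state
        print(f"HBA port {wwpn}: {state}")
        if state != "Online":
            ok = False
    for wwpn in EXPECTED_WWPNS - set(seen):
        print(f"MISSING expected HBA port {wwpn}")
        ok = False
    luns = visible_luns()
    print(f"Visible SCSI devices: {luns} (expected at least {EXPECTED_LUN_COUNT})")
    if luns < EXPECTED_LUN_COUNT:
        ok = False
    sys.exit(0 if ok else 1)

if __name__ == "__main__":
    main()
```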

Getting vendors to do your testing
One of the biggest issues with setting up an interoperability lab is the cost of equipment. Fortunately, many vendors are willing to work with customers to help them determine the best configuration of hardware and software for their application. Because their labs are already stocked with equipment, vendors are constantly setting up precertified configurations, and they can often help make sure that their equipment will work in your configuration.

After extensive testing, Hitachi Data Systems provides its configurations to its sales people, who are required to only sell solutions that have been certified and are on its support matrix, according to Albert Cummings, who's responsible for HDS' Santa Clara interoperability labs. "If our sales people want to sell solutions outside of our matrix, they come to us to make sure it will work," he says.

Although Veritas doesn't often specifically test configurations for customers, the company usually finds that there's just a small difference between its certified configurations and a customer's. "We may have a different level of firmware or HBA driver that we've tested, or perhaps we have found a problem with a level they're using - and we may just request a firmware or driver change to support that configuration," says Alan Orr, interoperability lab manager at Veritas. Regardless of the vendor, and even across vendor lines, it's clear that the interest is in supporting customers with their configurations.
"The biggest problem we've run into is firmware incompatibility," says Boeing's Wierman. Although he hasn't had as many issues with basic connectivity, Wierman is finding that version control is an area where he spends lots of time resolving problems with vendors. Says Wierman, "Compatibility and upgrades to the outside world are very important," because "we want to be able to upgrade one part of our system and continue to run our system on another. We're becoming more and more cognizant that version levels of firmware are very critical."

Performance and throughput measurement are also a major part of the work of interoperability labs. Equipment is tested against peak loads and heavy traffic to ensure it can deliver the performance a customer needs. Tools such as Intel's Iometer are used to generate traffic loads that simulate a user's typical application, and throughput across the different components (HBAs, switches, storage) is measured and checked. A key part of performance measurement is ensuring that equipment can handle the swings in throughput that a customer's application is expected to demand.
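As a rough illustration of what a load generator automates, the following sketch writes and then reads a large test file on SAN-attached storage and reports throughput. The mount point, block size and file size are arbitrary placeholders; a real lab would use Iometer or a comparable tool with an access profile that mimics the production application.

```python
#!/usr/bin/env python3
"""Toy sequential-throughput probe. Not Iometer: just a simple write/read
pass over SAN-backed storage to show the shape of the measurement."""

import os
import time

TEST_PATH = "/mnt/san_lun0/throughput.tmp"   # hypothetical SAN-backed mount
BLOCK_SIZE = 1024 * 1024                     # 1 MiB sequential blocks
TOTAL_BYTES = 256 * 1024 * 1024              # 256 MiB per pass

def timed_write():
    block = os.urandom(BLOCK_SIZE)
    start = time.time()
    with open(TEST_PATH, "wb") as f:
        for _ in range(TOTAL_BYTES // BLOCK_SIZE):
            f.write(block)
        f.flush()
        os.fsync(f.fileno())                 # force the data out to the array
    return TOTAL_BYTES / (time.time() - start)

def timed_read():
    start = time.time()
    with open(TEST_PATH, "rb") as f:
        while f.read(BLOCK_SIZE):
            pass
    return TOTAL_BYTES / (time.time() - start)

if __name__ == "__main__":
    w = timed_write()
    r = timed_read()
    os.remove(TEST_PATH)
    print(f"sequential write: {w / 1e6:.1f} MB/s")
    print(f"sequential read:  {r / 1e6:.1f} MB/s (may be served from host cache)")
```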

At the software level, there's configuration testing of applications on top of the hardware. Backup applications are run across a SAN to ensure that the software package behaves correctly, or failover software such as Microsoft's Cluster Server is tested to ensure that it works with the hardware configuration. Software application testing is becoming more and more important as SANs move beyond simple file sharing to supporting critical database and business applications. This testing is key for end users, who not only need the hardware to work, but must also ensure their applications will run on the SAN.
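A simple, hypothetical example of this kind of application-level check - not any vendor's actual test plan - is a backup-and-restore cycle run against a SAN-mounted volume, with checksums used to confirm that the restored data matches the original. The paths below are placeholders.

```python
#!/usr/bin/env python3
"""Back up a directory to a SAN-mounted target, restore it, and verify that
the restored data matches byte for byte. Illustrative sketch only."""

import hashlib
import pathlib
import tarfile
import tempfile

SOURCE_DIR = pathlib.Path("/data/app")            # data the application writes
BACKUP_TARGET = pathlib.Path("/mnt/san_backup")   # SAN-attached backup volume

def checksum_tree(root: pathlib.Path) -> dict:
    """Map each file's relative path to its SHA-256 digest."""
    sums = {}
    for path in sorted(root.rglob("*")):
        if path.is_file():
            sums[str(path.relative_to(root))] = hashlib.sha256(path.read_bytes()).hexdigest()
    return sums

def backup_and_restore() -> bool:
    archive = BACKUP_TARGET / "app_backup.tar.gz"
    with tarfile.open(archive, "w:gz") as tar:
        tar.add(str(SOURCE_DIR), arcname="app")

    with tempfile.TemporaryDirectory() as restore_dir:
        with tarfile.open(archive, "r:gz") as tar:
            tar.extractall(restore_dir)
        restored = pathlib.Path(restore_dir) / "app"
        return checksum_tree(SOURCE_DIR) == checksum_tree(restored)

if __name__ == "__main__":
    print("backup/restore verified" if backup_and_restore() else "MISMATCH after restore")
```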

Finally, negative testing is used to deliberately insert errors into the storage network, such as corrupted frames and spurious signals. Using special traffic generators, these tests generally check the limits of a manufacturer's error correction and data integrity circuits, and simulate worst-case situations for the equipment.
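The following toy sketch only illustrates the principle behind negative testing - corrupt a frame in flight and confirm that the integrity check catches it - using a CRC-32 over an in-memory payload. Real labs do this at the Fibre Channel level with hardware traffic generators and analyzers.

```python
#!/usr/bin/env python3
"""Simulate a corrupted frame and confirm the receiver's CRC check detects it."""

import os
import random
import zlib

def make_frame(payload: bytes) -> bytes:
    """Append a CRC-32 trailer, roughly analogous to a frame's CRC field."""
    return payload + zlib.crc32(payload).to_bytes(4, "big")

def check_frame(frame: bytes) -> bool:
    """Recompute the CRC over the payload and compare it to the trailer."""
    payload, trailer = frame[:-4], frame[-4:]
    return zlib.crc32(payload).to_bytes(4, "big") == trailer

def flip_random_bit(frame: bytes) -> bytes:
    """Simulate line noise by flipping one bit somewhere in the frame."""
    data = bytearray(frame)
    pos = random.randrange(len(data) * 8)
    data[pos // 8] ^= 1 << (pos % 8)
    return bytes(data)

if __name__ == "__main__":
    good = make_frame(os.urandom(2048))
    bad = flip_random_bit(good)
    print("clean frame passes CRC:    ", check_frame(good))  # True
    print("corrupted frame passes CRC:", check_frame(bad))   # False: error detected
```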

An interesting industry effort to help this whole process is the Storage Networking Industry Association's (SNIA) Supported Solutions Forum, which is focused on certifying configurations. As part of the forum, says Veritas' Orr, vendors work together to create complete configurations and fully certify them. Tom Conroy, director of the SNIA Technology Center, says, "The Supported Solutions Forum enables a customer that purchases a defined solution to call one vendor; the vendor will follow the call to conclusion, even if it's not their own component." Members have exchanged support agreements so that customers can avoid the finger pointing that usually occurs when vendors try to dodge a problem. According to Orr, these supported solutions are driven by customer request. "Usually, a customer starts with a configuration they have installed, and asks us to support and certify that configuration," he says.

The typical interoperability lab
Interoperability labs range from small installations, with a few homogeneous switches, hosts and some storage, to extensive, heterogeneous, mixed networks. No matter their size, interoperability labs all revolve around the same core set of equipment: a mixture of switches, hubs, HBAs and storage, along with systems running a variety of operating systems and software applications.

Interoperability lab setups can be quite extensive. According to Veritas' Orr, "We have over $100 million dollars of equipment to support our interoperability testing requirements." Veritas has something on the order of 9,000 sq. ft. of interoperability labs, and maintains an internal reservation system that allows different product development groups to reserve shared resources such as storage arrays. Hitachi Data Systems maintains labs in Santa Clara and San Diego, which together total nearly 40,000 sq. ft. of raised-floor lab environment for its interoperability testing.

More typical of an end-user lab is Infinity I/O, which provides network storage training and certification, and maintains a multivendor SAN lab for students, as well as for private groups doing training, testing and proof of concept. The lab contains a wide variety of equipment. Lab manager Robert Bushey says, "The core lab equipment generally consists of six to eight FC switches, two FC hubs, six Unix hosts, six NT/W2K hosts, 12 Linux hosts [all hosts populated with various HBAs], two FC/SCSI bridges or routers, two tape libraries, virtualization hardware, QoS hardware, 14 JBODs and six FC analyzers/generators."

Interoperability labs are usually well-stocked with network analyzers as well. "We have test tools from a number of vendors, including Finisar, I-Tech and others, as well as homegrown tools," UNH's Schaeffer says. Network analyzers let those testing SAN components analyze and debug problems down to the wire level - either to fix an issue directly, in the case of an OEM, or to help a user provide better information to a vendor.

Setting up your own lab
As SANs have become more commonplace, users are finding that it's beneficial to set up their own labs. These labs tend to be based on a primary vendor's configuration, but offer a secondary place to test new equipment, firmware upgrades and changes to a user's setup - without jeopardizing running applications.

Probably the biggest hurdle to setting up your own interoperability lab is finding the money to purchase the necessary equipment. As you can imagine, stocking a full-blown interoperability lab runs from expensive to exorbitant. Asked about his budget for a lab, Boeing's Wierman reports that it's "in the millions." Despite the cost, he says, "We've found problems we never would have anticipated; things that we'd never put into production until they were resolved." Boeing's labs are part of a long-term plan, he says. "We're looking right now at homogeneous SANs, and are branching out to heterogeneous data centers. We are trying to look five years ahead" - which means his test labs will be in use for a long time to come.

Even a minimal lab can give you an environment that reasonably duplicates the critical parts of your network infrastructure, so you can do small-scale testing of configuration changes, firmware and software upgrades, and qualify the performance and interoperability of new components.

In many cases, you'll find that vendors are willing to let you evaluate equipment to make sure it will work in your environment before you actually deploy it, and many vendors will loan you equipment if you have a problem or configuration you need to test.

Towards the future
Interoperability labs are here to stay, and the need for interoperability testing is only going to increase. Companies developing equipment will continue to increase the size and scope of their labs, and those using storage networks will find that it only makes sense to establish and maintain at least a small lab to help test configuration changes and new equipment. The age of the interoperability lab is here, and it will continue to help drive storage networks forward.

This was first published in July 2002
