Brocade XPath Technology standard

Brocade has just submitted its XPath Technology to the T11.5 task group for adoption as the Fabric Application Interface Standard. Here's a look at how it could change your life.


How XPath works
With current storage management applications running on hosts or array subsystems, system and storage administrators must provision and manage storage at the many endpoints that represent the initiators and targets in the storage area network (SAN). Obviously, managing those entities from a common connect point would reduce the cost of administrative overhead and licensing.

The first enabler to network-based management is a recently proposed standard that's being managed by the T11.5 task group within the T11 technical committee. The Fabric Application Interface Standard (FAIS), which is still in its development phase, will attempt to deliver by mid-2004 an API framework for implementing storage applications on a SAN.

Brocade Communications Systems Inc. has jumped out in front of the standards process by submitting its XPath Technology to the T11.5 committee for adoption as an FAIS standard. This is technology that Brocade acquired with Rhapsody Networks in late 2002. Brocade wants to recruit development partners, and standardization would help because vendors don't want to write to multiple interfaces. Currently, partners include CommVault Systems, FalconStor Software Inc., Hewlett-Packard Co., StoreAge and Veritas Software Corp.

There are already a number of competing approaches to Brocade's XPath Technology, some of which take a different approach to allocating resources or processing the data itself. Ultimately, which approach to network intelligence you buy into may hinge on the strengths and weaknesses of competing architectures.

In that spirit, let's take a look at the XPath Technology (see "How XPath works"). The XPath API is supported by four independent components:

  • Partitioned processing
  • Storage processors
  • A multiprotocol fabric
  • A layered OS and API that was developed to provide third-party applications with a portal into the XPath's functions and subroutines

Partitioned processing is the concept of breaking down a complete unit of work into sections and assigning those sections to software and hardware processors designed to enhance the performance and scalability of each section. This split processing isolates the most performance-sensitive sections of the unit of work and distributes them to hardware-assisted data paths or storage processors, while assigning control path operations to more centralized software-based control processors. And although applying this separation of commands from data to enhance SAN performance is new, the underlying concept is not.
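To make the idea concrete, here is a minimal, hypothetical sketch of partitioned processing in Python. It is not Brocade's actual API: the operation names, the classification sets and the dispatch scheme are all illustrative assumptions, showing only how control-path work might be routed to a central software processor while data-path work fans out to per-port storage processors.

```python
# Hypothetical sketch: control-path operations go to a centralized
# software processor; performance-sensitive data-path operations are
# spread across hardware-assisted per-port storage processors.
# All names below are illustrative, not part of any real XPath API.

CONTROL_OPS = {"zone_update", "lun_map", "snapshot_create"}
DATA_OPS = {"read", "write", "mirror"}

def partition(unit_of_work):
    """Split a unit of work into control-path and data-path sections."""
    control = [op for op in unit_of_work if op["type"] in CONTROL_OPS]
    data = [op for op in unit_of_work if op["type"] in DATA_OPS]
    return control, data

def dispatch(unit_of_work, control_processor, data_processors):
    """Queue control ops centrally; assign each data op to the
    storage processor serving its ingress port."""
    control, data = partition(unit_of_work)
    for op in control:
        control_processor.append(op)
    for op in data:
        data_processors[op["port"] % len(data_processors)].append(op)
```

The point of the split is visible in the dispatch step: the slow, stateful control operations never compete for the cycles of the processors moving frames.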

Storage processors are computing units in the switch whose sole purpose is to throw processing cycles at the storage application's data flow. The computational units are made up of:

  • RISC processors outfitted with local memory and frame buffers
  • Multiprotocol I/O ports with auto-negotiated link speeds
  • An extensive set of software engines that are used to move data blocks through the storage processor

CPUs and memory are coupled with each I/O port on the switch to enable parallel processing of exchanges and sequences, thereby reducing the possibility of overrunning the port and increasing your ability to scale your systems over the long term. This port-based virtualization is a more scalable approach than some other solutions in this space that use general-purpose servers and are thus confined to the processing limits of the server's CPU, memory and bus.

The multiprotocol I/O ports enable the storage processor to accommodate both Fibre Channel (FC) and GigE transports, which suggests greater investment protection and ease in deploying new technologies and varying protocols. Server-based virtualization can do the same thing, but it must be implemented using multiple I/O cards on the server's bus.

The storage processor is also made up of software engines that make use of the close proximity of the multiple RISC chips and memory modules at every port to move data to and from these resources. Server-based solutions would require this same data to move from the server's host bus adapter (HBA) to the main memory of the server. The main impact is that the port-based architecture can boost performance because each port is equipped with the processing power of multiple CPUs and memory that does not have to be shared with other servers connected to the switch. In addition, the processing capability of the individual switch port is substantial, especially when compared to that of a virtualization server's HBA.

One such engine is the "deep-frame" classification engine that peeks into every frame and then assigns it to a specific software function without any additional overhead (i.e., data copies and context switches). The data gleaned from this interrogation can be passed up to a management application and seen by your administration staff as well.
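The classification step can be sketched in a few lines. This is an assumption-laden illustration, not the real engine: a real implementation matches header fields in hardware, while here a simple table keyed on made-up frame fields binds each frame to a handler in a single pass, with no intermediate copies.

```python
# Illustrative sketch of "deep-frame" classification: inspect fields
# in each frame once and bind it directly to a software function.
# Field names, handlers and the table itself are hypothetical.

def handle_read(frame):  return ("read", frame["oxid"])
def handle_write(frame): return ("write", frame["oxid"])
def handle_ctrl(frame):  return ("control", frame["oxid"])

# Table keyed on (routing control, SCSI opcode) -- stand-ins for the
# header fields a hardware engine would match.
CLASSIFIER = {
    ("data", 0x28): handle_read,   # SCSI READ(10)
    ("data", 0x2A): handle_write,  # SCSI WRITE(10)
    ("ctrl", None): handle_ctrl,   # default: treat as control traffic
}

def classify(frame):
    """Peek into the frame once and invoke the bound handler."""
    key = (frame["r_ctl"], frame.get("opcode"))
    handler = CLASSIFIER.get(key) or CLASSIFIER[("ctrl", None)]
    return handler(frame)
```

The same lookup that steers the frame can also emit the classification result upstream, which is how the interrogation data could reach a management application without extra passes over the frame.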

The XPath multiprotocol fabric is a protocol-neutral interconnect that enables data and control processors to exchange data without contention with the primary data flow. It also eliminates the need for separate hardware to bridge varying protocols into the SAN for long-distance disaster recovery solutions.


Who's in charge?

I envision a problem with storage applications being developed to execute on the SAN: Who will be responsible for their administration? At one time, people were asking, "Who should be responsible for managing SAN switching gear?" Some people think that because this equipment was in fact a network element that perhaps network operations should get the job. However, with the likelihood of storage applications being ported to the SAN, I think it makes more sense for the system administrative staff to take up the charge.

Only time will tell how Brocade's XPath Technology submission will fare under the T11.5 standards review. You can be sure that Cisco/Andiamo will have something to say about how much of XPath's inherent design is incorporated into the standard. For now, we should strike a match to all of the marketing hype and turn our attention to the proposed benefits of this technology and how it can help us drive our data to new distances, while at the same time driving down the costs of doing so.
