The first enabler of network-based management is a recently proposed standard being managed by the T11.5 task group within the T11 technical committee. The Fabric Application Interface Standard (FAIS), still in its development phase, aims to deliver by mid-2004 an API framework for implementing storage applications on a SAN.
Brocade Communications Systems Inc. has jumped out in front of the standards process by submitting its XPath Technology to the T11.5 committee for adoption as an FAIS standard. This is technology that Brocade acquired with Rhapsody Networks in late 2002. Brocade wants to recruit development partners, and standardization would help because vendors don't want to write to multiple interfaces. Current partners include CommVault Systems, FalconStor Software Inc., Hewlett-Packard Co., StoreAge and Veritas Software Corp.
A number of approaches already compete with Brocade's XPath Technology, some of which allocate resources or process the data itself differently. Ultimately, which approach to network intelligence you buy into may hinge on the strengths and weaknesses of the competing architectures.
In that spirit, let's take a look at the XPath Technology (see "How XPath works"). The XPath API is supported by four independent components:
- Partitioned processing
- Storage processors
- A multiprotocol fabric
- A layered OS and API developed to give third-party applications a portal into XPath's functions and subroutines
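The layered API is the component a third-party developer would actually touch. As a purely illustrative sketch (the class and method names below are hypothetical assumptions, not taken from the FAIS draft or Brocade's SDK), an application might register callbacks through the platform's portal, which then routes fabric events up to it:

```python
# Hypothetical sketch of a layered switch API. Names and structure are
# illustrative assumptions, not the actual XPath or FAIS interfaces.

class SwitchPlatform:
    """Lowest layer: owns ports and processors, exposes a registration portal."""

    def __init__(self):
        self._handlers = {}  # event type -> list of application callbacks

    def register(self, event_type, callback):
        # Third-party applications plug in here rather than touching hardware.
        self._handlers.setdefault(event_type, []).append(callback)

    def dispatch(self, event_type, payload):
        # The platform routes fabric events up to every registered application.
        return [cb(payload) for cb in self._handlers.get(event_type, [])]


# A virtualization application built on top of the portal:
platform = SwitchPlatform()
platform.register("io_request", lambda blk: f"remapped block {blk}")
results = platform.dispatch("io_request", 42)
print(results)  # -> ['remapped block 42']
```

The point of the layering is the same one the standardization effort makes: applications write to the portal, not to any one vendor's silicon.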
Storage processors are computing units in the switch whose sole purpose is to throw processing cycles at the storage application's data flow. The computational units are made up of:
- RISC processors outfitted with local memory and frame buffers
- Multiprotocol I/O ports with auto-negotiating link speeds
- An extensive set of software engines that are used to move data blocks through the storage processor
The multiprotocol I/O ports enable the storage processor to accommodate both Fibre Channel (FC) and GigE transports, which suggests greater investment protection and ease in deploying new technologies and varying protocols. Server-based virtualization can do the same thing, but it must be implemented using multiple I/O cards on the server's bus.
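The negotiation a multiprotocol port performs can be pictured as settling on a transport both ends speak, at the highest speed both support. This is a simplified sketch under stated assumptions: the protocol names (Fibre Channel, GigE) come from the article, but the negotiation logic and data layout are invented for illustration:

```python
# Illustrative sketch of multiprotocol link negotiation. The selection
# logic below is an assumption, not the actual FC/GigE handshake.

SUPPORTED = {"FC": [1, 2, 4], "GigE": [1]}  # our port: protocol -> speeds (Gb/s)

def negotiate(peer_protocols):
    """Pick a transport both ends speak, at the highest shared link speed."""
    for proto, speeds in peer_protocols.items():
        if proto in SUPPORTED:
            common = sorted(set(SUPPORTED[proto]) & set(speeds))
            if common:
                return proto, common[-1]
    return None  # no common transport: the link stays down

print(negotiate({"FC": [2, 4, 8]}))  # -> ('FC', 4)
print(negotiate({"iSCSI": [1]}))     # -> None
```

A server-based design would need a separate I/O card per transport to get the same flexibility, which is the investment-protection argument the article is making.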
The storage processor also includes software engines that exploit the close proximity of the multiple RISC chips and memory modules at every port to move data to and from these resources. A server-based solution would have to move this same data from the server's host bus adapter (HBA) to the server's main memory. The upshot is that the port-based architecture can boost performance, because each port has the processing power of multiple CPUs and memory that doesn't have to be shared with other servers connected to the switch. In addition, the processing capability of an individual switch port is substantial, especially compared to that of a virtualization server's HBA.
One such engine is the "deep-frame" classification engine that peeks into every frame and then assigns it to a specific software function without any additional overhead (i.e., data copies and context switches). The data gleaned from this interrogation can be passed up to a management application and seen by your administration staff as well.
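The classification idea can be sketched in a few lines: peek at each frame's header, hand the frame straight to the matching software function by reference (standing in for the zero-copy handoff), and record what was seen for the management application. The frame layout and handler names here are invented for illustration; they are not XPath's actual engine interface:

```python
# Hedged sketch of "deep-frame" classification and dispatch. Frame fields
# and handler names are hypothetical.

HANDLERS = {}
AUDIT_LOG = []  # classification data surfaced to a management application

def handles(frame_type):
    """Register a software function for one class of frame."""
    def wrap(fn):
        HANDLERS[frame_type] = fn
        return fn
    return wrap

@handles("read")
def serve_read(frame):
    return f"read LBA {frame['lba']}"

@handles("write")
def serve_write(frame):
    return f"write LBA {frame['lba']}"

def classify(frame):
    # Peek at the header and dispatch in place: the frame object itself is
    # passed along -- no copy, no context switch in this sketch.
    AUDIT_LOG.append((frame["type"], frame["lba"]))
    return HANDLERS[frame["type"]](frame)

print(classify({"type": "read", "lba": 100}))  # -> 'read LBA 100'
print(AUDIT_LOG)                               # -> [('read', 100)]
```

The audit log stands in for the data "passed up to a management application" that the article describes.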
The XPath multiprotocol fabric is a protocol-neutral interconnect that enables data and control processors to exchange data without contention with the primary data flow. It also eliminates the need for separate hardware to bridge varying protocols into the SAN for long-distance disaster recovery solutions.

Who's in charge?
I envision a problem with storage applications being developed to execute on the SAN: Who will be responsible for their administration? At one time, people were asking, "Who should be responsible for managing SAN switching gear?" Some argued that because this equipment is, in fact, a network element, network operations should get the job. However, with the likelihood of storage applications being ported to the SAN, I think it makes more sense for the system administration staff to take up the charge.
Only time will tell how Brocade's XPath Technology submission will fare under the T11.5 standards review. You can be sure that Cisco/Andiamo will have something to say about how much of XPath's inherent design is incorporated into the standard. For now, we should strike a match to all of the marketing hype and turn our attention to the proposed benefits of this technology and how it can help us drive our data to new distances while driving down the costs of doing so.