A switch is labeled "intelligent" when it can run applications that generally run on hosts or storage devices. These applications are built on a foundation layer of virtualization and include volume management, replication, mirroring, snapshots, logical unit number (LUN) masking, and backup and restore. But just because an application runs in the fabric doesn't necessarily make it better.
A typical enterprise may have storage and storage applications from a variety of vendors, all managed by an army of specialists. But consider how much simpler and less costly it would be to manage this diverse storage if it were available in a uniform fashion across all storage devices and hosts.
Different ways to embed intelligence
There are three broad platform categories for delivering storage applications from the storage area network (SAN) fabric: intelligent switches, general-purpose appliances and purpose-built appliances (PBA). The intelligent switches (or directors) share the common characteristic that there's processing power associated with each port, in addition to normal layer-2 switching functionality. This is generally provided by an additional ASIC or network processor at each port. In a director-class product, these intelligent ports are generally delivered on a blade with eight, 16 or 32 ports. In addition to the intelligent ports, the architecture generally calls for an additional blade where the application runs. This "application blade" may be as simple as a bladed version of a standard Intel server (processor, memory, cache and I/O) running Linux, or it may be a specialty processor designed to run specific applications efficiently.
An application works with the intelligent ports to direct I/O traffic to the appropriate storage system, host or to another switch. Another crucial activity that takes place at the port level is frame termination and regeneration, or frame cracking. This essentially means that the FC frame (a multiprotocol port could also handle iSCSI, FCIP or iFCP traffic) is cracked open to obtain relevant information about the content so it can be manipulated, reformatted if necessary and then pushed off to its destination. What manipulation occurs depends on the application and could be as simple as discarding a frame not authorized to be sent to the specified destination or automatically replicating a frame for data protection depending on defined policies. Policy information is generally held in the application blade. Terminating FC traffic, cracking open FC frames and performing virtualization table look-ups require lots of processing power, so most intelligent switches add an ASIC to each port.
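The port-level decision described above can be sketched in a few lines. This is a minimal, hypothetical model, not a real switch API: the `Frame` fields, the policy-table layout and the action names are all illustrative assumptions.

```python
# Hypothetical sketch of per-port "frame cracking" logic: the port opens a
# frame, checks LUN-masking and replication policy, and decides what to do.
# All names (Frame, PortPolicy, action strings) are illustrative.
from dataclasses import dataclass

@dataclass
class Frame:
    source_id: str   # initiator (host) port ID
    dest_id: str     # target (storage) port ID
    lun: int         # logical unit number addressed by the frame
    payload: bytes

class PortPolicy:
    """Policy table pushed down from the application blade to the port."""
    def __init__(self, lun_masks, replicate_luns):
        self.lun_masks = lun_masks            # {initiator_id: allowed LUNs}
        self.replicate_luns = replicate_luns  # LUNs whose frames are mirrored

    def handle(self, frame):
        allowed = self.lun_masks.get(frame.source_id, set())
        if frame.lun not in allowed:
            return ["discard"]        # LUN masking: drop unauthorized frame
        actions = ["forward"]
        if frame.lun in self.replicate_luns:
            actions.append("replicate")  # policy-driven data protection
        return actions

policy = PortPolicy(lun_masks={"host-a": {0, 1}}, replicate_luns={1})
print(policy.handle(Frame("host-a", "array-1", 1, b"")))  # ['forward', 'replicate']
print(policy.handle(Frame("host-b", "array-1", 0, b"")))  # ['discard']
```

In a real switch this logic runs in the port ASIC or network processor at wire speed; the Python is only meant to show the shape of the decision.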
The level of processing power added to each port determines how much work can be done at the port level and how much must be done in the application blade. This clearly has implications for performance and scalability. So while implementations of intelligent switches vary in this dimension, fundamentally they operate the same way. Latencies incurred by applications (wherever they are hosted) will show up as switch latencies when pushing application functionality into intelligent switches. Whereas a typical FC switch adds about five microseconds of latency, an intelligent switch will add about 25 microseconds of delay.
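To put the latency figures above in perspective, a quick back-of-the-envelope comparison helps. The 5 ms disk service time below is an assumed typical value for rotating storage, not a figure from the article:

```python
# Rough arithmetic on the switch latency figures quoted above.
STD_SWITCH_US = 5           # plain FC switch latency (microseconds)
INTELLIGENT_SWITCH_US = 25  # intelligent switch latency (microseconds)
DISK_SERVICE_US = 5_000     # assumed typical disk I/O service time

added = INTELLIGENT_SWITCH_US - STD_SWITCH_US
print(f"extra per-hop delay: {added} us "
      f"({added / DISK_SERVICE_US:.1%} of a typical disk I/O)")
# extra per-hop delay: 20 us (0.4% of a typical disk I/O)
```

In other words, the added switch latency is small relative to the mechanical latency of the disk itself, though it compounds with every hop and matters more for cached or solid-state back ends.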
Virtualization processing can reside outside of the switch, either in a standard Intel processor or in a PBA. While they don't perform switching functions, PBAs aren't conceptually different from an intelligent switch; there's a massive amount of processing power available to run applications and to perform virtualization. Compared with intelligent switches, PBAs:
- Can be deployed in existing SANs
- Cost less because they don't have intelligence on every port
- Run multiple applications in one device
In an intelligent switch, at least the first-generation variety, only one application can run on a blade. With PBAs, you buy only the number of intelligent ports you need for the size of the SAN; additional appliances can be added as needed. The logical data path is from the host initiator to the FC switch to the PBA, and then back to the switch for transmittal to the target destination. The appliance basically runs everything on the Intel server and leaves the switching environment as is.
Cisco, Maranti Networks and Maxxan Systems Inc. are shipping intelligent switches. Brocade, CNT and McData are planning on shipping theirs soon. In the general-appliance category, some of the pioneers of virtualization are DataCore Software Corp., FalconStor Software Inc., IBM Corp., Sanrad Ltd., Softek Storage Solutions Corp. and StoreAge Networking Technologies. All use an Intel server for all application processing. DataCore and Softek (based upon DataCore's source code) are the only ones that use Microsoft's NT operating system; the others use Linux.
The purpose-built category includes shipping products from Candera Inc. and Troika Networks. These typically come in a 16- or 32-port format. The primary difference between these two is that Candera delivers all foundation layer applications as an integrated suite, whereas Troika has designed its controller as a platform that can run a wide variety of third-party applications.
Recently, a number of applications originally designed for hosts (Veritas Volume Manager) or appliances (StoreAge, FalconStor and IBM SVC) have been ported to either intelligent switches (for example, FalconStor on Maxxan, Veritas VM and IBM SVC on Cisco) or purpose-built platforms (StoreAge on Troika). The replication functionality embodied in EMC Corp.'s Symmetrix Remote Data Facility and the virtualization functionality in Symmetrix will be ported to several, perhaps all, intelligent switches in the near future.
Applications that work best in a central location and those that require significant movement of data from one type of storage device to another should be moved to the fabric. The biggest advantage the fabric has over host- or array-based approaches is that it sees everything connected to it.
Those applications include mirroring, replication (synchronous, semisynchronous and asynchronous), snapshots, storage virtualization and volume management, including LUN masking. In addition, many backup and restore and archive applications will gain from being in the fabric. It's best not to provide network-attached storage (NAS) file services from the fabric because of their reliance on a local file system; NAS heads can be connected to the fabric that delivers virtualized storage to them. The movement of applications to the fabric can be a big step toward implementing information lifecycle management (ILM).
An intelligent switch from Cisco may not interoperate with one from Brocade, except in a rudimentary fashion. The Fabric Application Interface Standard (FAIS) is a developing standard that will make interoperability a reality, but don't expect compliant products for at least another year (see "The state of standards"). Whatever product you pick, it must be able to interoperate fully with your existing SAN(s). In that regard, appliances and PBAs have a big advantage over intelligent switches: They work with all popular layer-2 switches and also bring applications to heterogeneous islands of SANs.
In-band, out-of-band and SPAID
In a virtualized SAN fabric, there are three ways to deliver applications: in-band, out-of-band or split path architecture for intelligent devices (SPAID). To understand the advantages and disadvantages of each approach, you need to be familiar with how the metadata server, control path software and data path software operate in those three architectures:
Metadata server. This server maintains the configuration database for the storage services provided. For virtualization services, this database contains the entire mapping between virtual volumes and physical devices.
Control path software. This software provides the interface between the metadata server and the data path software. It also performs background I/O tasks for applications, such as copying data from a snapshot to a remote location (data replication), resynchronizing a broken/restored mirror (virtualization) and third-party copy functions (backup).
Data path software. This moves the data from servers to storage and vice versa. It also performs the actual translation from virtual to physical addresses using mapping tables passed to it from the metadata server.
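The core job of the data path software, translating a virtual address to a physical one with a mapping table handed down by the metadata server, can be sketched as follows. The table format (virtual volume to an ordered list of physical extents) and the extent size are assumptions for illustration:

```python
# Minimal sketch of the data path software's virtual-to-physical translation.
# The mapping-table layout is hypothetical, not a specific vendor's format.
EXTENT_BLOCKS = 1024  # blocks per physical extent (illustrative value)

# Mapping table from the metadata server:
# virtual volume -> ordered list of (physical_device, starting_lba)
mapping = {
    "vvol1": [("array1-lun3", 0), ("array2-lun0", 2048)],
}

def translate(virtual_volume, virtual_lba):
    """Translate a virtual block address to (physical device, physical LBA)."""
    extents = mapping[virtual_volume]
    index, offset = divmod(virtual_lba, EXTENT_BLOCKS)
    device, start = extents[index]
    return device, start + offset

print(translate("vvol1", 100))   # ('array1-lun3', 100)  - first extent
print(translate("vvol1", 1500))  # ('array2-lun0', 2524) - second extent
```

Every I/O performs a lookup like this, which is why the article stresses that the data path must be fast and, in intelligent switches, is pushed down to per-port hardware.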
In-band and out-of-band describe where the metadata server and the control path software reside in the network. It's important to understand that in all cases, the data path software resides in the data path.
In the case of in-band, the metadata management, the control path processing and the data path processing are all performed by the same computing elements. In other words, all three are "in the path." For out-of-band implementations, the metadata management and the control path processing are performed by a compute engine separate from the data path software. Given that the majority (more than 95%) of the transfers are data transfers, a fast and efficient data path results in excellent performance and better scalability.
In-band products have been typically represented by Intel server-based appliances such as IBM's SVC, DataCore and FalconStor where the appliance provides all the computing power. These products are relatively simple to deploy, but suffer from performance bottlenecks and scalability issues as workloads increase. This is why these appliances have not made serious inroads in enterprise environments.
Out-of-band solutions, represented by StoreAge's SAN management applications, run the metadata server and the control path software in an Intel server appliance connected to the FC SAN and deploy data path software as an agent on each of the application servers. This results in excellent transfer speed and scalability. The biggest negative for out-of-band solutions is the need to place agents in each host. These agents are operating system- and platform-specific, making them impractical for many enterprise environments.
An intelligent switch is typically characterized by specialized hardware and processing capability at the port level. This makes the data path processing highly distributed and therefore efficient and scalable. In concept, the agent code that resides in the host in an out-of-band solution, now can reside on or near these ports, eliminating the need for code on the hosts. The amount of processing power placed at the port level, and whether the metadata is placed in one location or another, determines the architectural differences of various intelligent switches. Depending on the implementation, the control path software can be installed on a blade inside the switch or externally on a separate piece of hardware.
In a nutshell, an intelligent switch can offer in-band simplicity and out-of-band scalability. It splits the data path from the control path and eliminates the requirement for host-level code. This is called SPAID, an architecture that will become the most popular way to deliver high-performance storage services from the fabric.
The SPAID architecture:
- Separates the control path from the data path.
- Possesses an independent metadata server.
- Leverages port-level processing capabilities of intelligent switches or purpose-built controllers.
- Allows for independent scaling of the control and data path processing.
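The SPAID split above can be modeled in miniature: one control path service owns the metadata and pushes mapping tables to many per-port data path engines, which then translate I/O with purely local lookups. The class names and table-push mechanism are hypothetical:

```python
# Illustrative sketch of the SPAID split: control path owns the metadata,
# data path engines hold copies and do the fast-path work independently.
class ControlPath:
    """Runs on an appliance or application blade; owns the metadata."""
    def __init__(self):
        self.mapping = {}
        self.engines = []

    def attach(self, engine):
        self.engines.append(engine)
        engine.tables = dict(self.mapping)  # push current tables to the port

    def update(self, vvol, extents):
        self.mapping[vvol] = extents
        for e in self.engines:              # propagate to every port engine
            e.tables[vvol] = extents

class DataPathEngine:
    """Runs at port level; serves I/O without consulting the control path."""
    def __init__(self):
        self.tables = {}

    def lookup(self, vvol):
        return self.tables[vvol]            # purely local, per-port lookup

ctrl = ControlPath()
ports = [DataPathEngine() for _ in range(4)]  # data path scales with ports
for p in ports:
    ctrl.attach(p)
ctrl.update("vvol1", [("array1-lun3", 0)])
print(ports[2].lookup("vvol1"))  # [('array1-lun3', 0)]
```

The point of the design is visible in the sketch: adding ports adds data path capacity without touching the control path, and the control path can be upgraded without sitting in the I/O stream.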
EMC is expected to deliver a software product code-named Storage Router. It will leverage the SPAID concept on multiple intelligent switch and director platforms, including those from Brocade, Cisco and McData. It's fundamentally a virtualization application at the foundation layer, with a variety of other applications built on top of it, most notably heterogeneous data migration.
In Cisco terminology, the application would run on a service module inside a Cisco MDS 9000, and the control path/metadata server functions would be provided by an appliance outside the switch, or as a blade inside the switch. Both can be scaled independently for exceptional performance.
Vendor implementations
Despite the hype created by intelligent-switch pioneers such as Rhapsody Networks (now part of Brocade) and Sanera Systems (now part of McData), there aren't many products available. Brocade's intelligent switch should ship soon, with McData's coming in nine or more months.
Maxxan is currently the clear intelligent switch leader, followed closely by Cisco. Maxxan has implemented significant computational performance at the port level by using an off-the-shelf Intel network processor rather than designing an ASIC.
Cisco based its MDS 9000 family on an in-house-developed ASIC that's implemented at every port. In addition, Cisco offers a series of service modules (application blades) where the application runs. For IBM's SAN Volume Controller (SVC), Cisco implemented a Cache Services Module. For the Veritas Volume Manager port (called Veritas Storage Foundation for Networks), it implemented an Advanced Services Module (ASM). It's noteworthy that each application requires its own unique service module. Cisco has also created an innovative software product called the MDS 9000 Data Tap Service that runs on an ASM inside the switch and allows a regular appliance-based application (e.g., FalconStor IPStor) to run significantly faster.
In the purpose-built category, Troika and Candera both have ASIC-based, high-performance platforms designed to run applications in the fabric. Not being switch vendors, they can work with all legacy switch vendors equally well and concentrate on application integration.
The movement of intelligence to the fabric is a given because the benefits are apparent. This is a great time to start planning for the movement of intelligence into your storage fabric.