Virtualization in the switch? Not so fast

Before you buy into the whole intelligent switch hype, consider that some intelligence is best placed elsewhere.


As virtualization spreads its tentacles to almost every part of the storage area network (SAN), a new generation of SAN switches is set to join the party and offer storage virtualization services as well. Here's how a switch with virtualization services plays in a multilayered virtualization scheme.

But before discussing switch virtualization applications, let's quickly review the basics of SAN switching in order to separate the new functions from the old. As you know, a SAN switch forwards storage I/O transmissions that arrive in one (ingress) port through a different (egress) port. The egress port is typically determined by the destination ID that's contained in the transmission's header information. Switches maintain tables indicating which ports can communicate with known destinations.
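To make that basic forwarding step concrete, here's a rough sketch in Python--with invented destination IDs and port numbers--of a switch resolving an egress port from the destination ID in a frame header. Real switches, of course, do this in silicon from their routing and name-server tables.

```python
# Minimal sketch of destination-based forwarding (hypothetical IDs and ports).
# A real switch resolves this in hardware from its routing tables.
FORWARDING_TABLE = {
    0x010200: 3,   # destination ID -> egress port
    0x010300: 7,
}

def forward(frame: dict) -> int:
    """Return the egress port for a frame, based only on its destination ID."""
    try:
        return FORWARDING_TABLE[frame["dest_id"]]
    except KeyError:
        raise ValueError("unknown destination; frame would be discarded or rerouted")

print(forward({"dest_id": 0x010200, "payload": b"..."}))  # -> 3
```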

[Figure: Virtualization operator -- the virtualization operator exports virtual devices to upstream I/O requestors.]

Switches can forward transmissions between similar and different types of networks. For instance, a switch can forward transmissions from one Fibre Channel (FC) link to another FC link on the same network. Another type of switch may be able to forward transmissions from an FC link to a Gigabit Ethernet (GbE) link. Switching between different types of networks is typically referred to as routing, requiring the ability to correlate different addressing schemes within a single larger address system. This has been one of the hallmarks of IP networks for many years.

Some network technologies--such as FC--provide other services through switches. These services are made available to network clients through reserved addresses and provide consistent information to all clients, helping maintain a reliable, secure network environment. FC switch services include such things as fabric login, the simple name service, time synchronization, state change notification, SNMP management and traffic segregation (zoning). Generally, these FC switch services are communications services that don't provide storing or filing functionality.

Wiring, storing and filing
To understand storage applications in switches, it helps to draw a clear distinction between the orthogonal functions of storage networking, which break down into three areas: wiring, storing and filing. Wiring is all of the software and hardware involved with low-level transport technologies, including bus and networking technologies. Wiring includes such things as switches, routers, routing algorithms and software, addressing schemes and QoS (quality of service) algorithms.

Storing is what many people refer to as block-level storage and includes device operations and control, as well as most storage virtualization technologies such as RAID and volume management. Storing defines bulk storage addressing such as disk partitions, extents and volume address spaces, but doesn't determine or have an awareness of the data objects or their placement within storage volumes. Redundancy can be provided by storing functions through mirroring and RAID.

Filing is the function that determines how data objects are stored. This includes the ability to create, read, write, update and delete specific data files or objects. Filing uses the underlying wiring and storing technology to perform its work. The most common filing technologies are file systems, database systems--over raw partitions--and backup software.

[Figure: Simple SAN configuration -- a complete way to draw the I/O path is to identify all the filing, storage and wiring components.]

The I/O path
The I/O path is another key concept that will help you better understand switch storage applications. The I/O path is the complete set of filing, storing and wiring entities existing between a software application and storage media where data is stored. The nature of the communications changes along the path, but the intent of the operation remains consistent.

For example, an application requests services from a filing entity using a file access interface. In turn, the filing entity generates a storing request acted on by one or more storing level functions. The storing function passes the request to a network function, which frames the requested information for transmission over the wiring (network) function. Usually, host bus adapters (HBAs)--also called initiators--have both storing and wiring functions implemented in hardware and software.

The wiring--or network--function transfers the information to an eventual target receiving node. There, the wiring information is removed and the data is passed up to a storing function. In the simplest cases, the storing function translates the request to a device where the data is read or written to storage media.
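To make the layering concrete, the following toy sketch (Python, with invented function names--it doesn't model any real driver stack) walks a single application write down the path from filing to storing to wiring:

```python
# Toy I/O path: an application write passes through filing, storing and wiring
# functions before reaching the target. All names here are illustrative only.

def filing_write(path: str, data: bytes) -> tuple[int, bytes]:
    """Filing: map a file operation onto a block (storing-level) request."""
    lba = hash(path) % 1_000_000          # stand-in for real allocation logic
    return lba, data

def storing_write(lba: int, data: bytes) -> dict:
    """Storing: build a block command (think SCSI write) for the device."""
    return {"op": "WRITE", "lba": lba, "len": len(data), "data": data}

def wiring_send(command: dict, dest_id: int) -> dict:
    """Wiring: frame the block command for transport across the SAN."""
    return {"dest_id": dest_id, "payload": command}

frame = wiring_send(storing_write(*filing_write("/db/log.001", b"record")), dest_id=0x010200)
print(frame["dest_id"], frame["payload"]["op"])
```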

The I/O path can be analyzed as the network connections that exist between initiators and targets. However, doing so leaves out important storage and filing components from the overall picture. A more complete way to draw the I/O path is to identify all the filing, storage and wiring components in each of the systems, subsystems and network devices (see "Simple SAN configuration").

There are a couple of things worth noticing in the "Simple SAN configuration" figure. First, there are two storing functions in the server system. One of them is a volume manager, and the other is a device driver for the HBA. Secondly, unlike data networks, the functional stacks for SAN communications aren't symmetrical.

As mentioned, the next generation of SAN switches will attempt to provide both storing and filing functions as part of a switch. We'll look at how these functions are likely to be integrated in SAN switches.

Downsides of switch virtualization
The more I/Os a switch has to process, the more latency will creep into the system. Queuing, terminating, processing and reinitiating I/Os will be much slower than simply forwarding an I/O through an egress port.
Error processing for I/O operations becomes much more complicated. If there's an error on any of the I/Os, the virtualization operator in the switch must become aware of it and attempt to retry the operation.
Switch-based virtualization is another instance of in-band virtualization, a concept that hasn't been too warmly received by the market yet.
It's not a proven fact that switch-based virtualization dramatically lowers the cost of storage.

The virtualization operator
A concept that can help you understand how virtualization works in switches is the virtualization operator. The virtualization operator is the logic that masks the details of storage I/O from the host--or some other upstream operator. Virtualization operators are storing-level functions located in the I/O path that manipulate the address spaces of the devices, subsystems and virtual devices that are downstream. From another perspective, the virtualization operator exports virtual devices to upstream I/O requestors (see "Virtualization operator").

Of course, there can be multiple virtualization operators in sequence along the I/O path. For example, a virtualization operator in a disk subsystem controller can provide one type of virtualization and a virtualization operator in host volume management software can provide the same or a different flavor of virtualization. See "Multilayered virtualization scheme," which shows a host volume manager and several downstream disk subsystems. One way to think about multilayered virtualization is that the I/O path fans out into multiple downstream paths after it is processed by the virtualization operator.
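One way to picture stacked operators is the rough sketch below, in which a hypothetical host-level mirror sits upstream of a subsystem-level mirror; the class and device names are invented for illustration only.

```python
# Sketch of multilayered virtualization: a host volume manager (upstream
# operator) mirrors across two downstream devices, one of which is itself
# a virtual device exported by a subsystem controller. Names are invented.

class PhysicalDisk:
    def __init__(self, name): self.name = name
    def write(self, lba, data): print(f"{self.name}: write lba={lba}")

class MirrorOperator:
    """A virtualization operator: exports one virtual device, fans out I/O."""
    def __init__(self, members): self.members = members
    def write(self, lba, data):
        for m in self.members:            # one upstream I/O -> N downstream I/Os
            m.write(lba, data)

subsystem_lun = MirrorOperator([PhysicalDisk("disk-A"), PhysicalDisk("disk-B")])
host_volume   = MirrorOperator([subsystem_lun, PhysicalDisk("disk-C")])
host_volume.write(42, b"data")            # fans out through both layers
```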

Storing functions in SAN switches
Now it's time to examine how virtual storage techniques can be implemented in SAN switches. In general, this involves incorporating a virtualization operator for certain I/O paths that pass through the switch.

The first order of business for virtualization in a SAN switch is finding a way to identify the target virtual devices created by the switch's virtualization operator. Virtual storage devices in SAN switches need to be configured by an administrator. The storage administrator must first select the form of virtualization for the virtual device, and then discover the downstream target storage IDs that will comprise it. The next step is to create the exported target ID of the new virtual device and assign the device to a zone or virtual network. Once the virtual device and its exported ID are created, the device can be accessed by any initiator in the SAN that has been granted access to it.
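Those administrative steps could be captured in a configuration record along the lines of the sketch below; the field names and values are purely illustrative and don't correspond to any actual switch interface.

```python
# Illustrative record of the configuration steps described above; none of
# these field names come from an actual switch CLI or API.
from dataclasses import dataclass, field

@dataclass
class VirtualDeviceConfig:
    form: str                              # 1. form of virtualization (e.g., "mirror")
    downstream_ids: list[int]              # 2. discovered downstream target IDs
    exported_id: int                       # 3. new exported target ID for the device
    zone: str                              # 4. zone/virtual network it is assigned to
    allowed_initiators: set[int] = field(default_factory=set)

cfg = VirtualDeviceConfig(
    form="mirror",
    downstream_ids=[0x010400, 0x010500],
    exported_id=0x0200EF,
    zone="zone_db_servers",
    allowed_initiators={0x010101},
)
print(cfg)
```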

The virtualization operator must have a processor to run on. There are many ways this could be implemented in SAN switches with a wide variety of choices for processors. For instance, the virtualization operator could run on a processor that the switch has for providing other services. It could also run on its own dedicated processor, which may be on a separate adapter or blade. Another alternative is to have a separate system work in tandem with the switch to provide the virtualization function. While this wouldn't necessarily be switch-based virtualization, there are possible implementations where the distinction would be splitting hairs.

Downstream storage IDs
The whole idea of virtual storage is based on making some number of storage entities or address spaces appear to be something different, or virtual. In the case of switch-based virtualization, the storage entities being virtualized are some type of SAN-resident storage. In the remainder of this article, I refer to this SAN-resident, virtualized storage as downstream storage IDs. "Downstream" in this case doesn't necessarily imply a physical or even a relative location in the network topology. Instead, downstream here refers to the relative location on the I/O path: Entities closer to the host are upstream and entities closer to storage are downstream.

A close analysis of the virtualization process begins by understanding what happens when an incoming I/O command from an initiator is addressed to the switch's exported target virtual device ID. The switch port identifies the egress port for this virtual storage ID as the virtualization operator, then queues the command for processing by it. This doesn't have to be an actual egress port, and depends entirely on the design of the switch and its method for integrating virtual storage functions.

[Figure: Multilayered virtualization scheme -- the I/O path fans out into multiple downstream paths.]

The virtualization operator then reads the entire command and processes it according to the form of virtualization being used for this virtual device. There's a subtle, but important point: The initial I/O command is terminated by the virtualization operator and reinitiated to downstream storage IDs. In other words, the initial I/O no longer exists, but has been reconstituted and retransmitted to a number of downstream IDs through some combination of egress ports in the switch. See "Reinitiating an I/O command," which illustrates the termination and reinitiation process.
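A minimal sketch of that termination and reinitiation, assuming a mirrored virtual device and invented field names, might look like this:

```python
# Sketch of termination/reinitiation for a mirrored virtual device.
# The incoming command is consumed ("terminated") and brand-new commands,
# with new exchange IDs, are issued downstream. Names are illustrative.
import itertools

_exchange_ids = itertools.count(1000)

def handle_incoming_write(cmd: dict, downstream_ids: list[int]) -> list[dict]:
    """Terminate the upstream command and reinitiate it to each member."""
    assert cmd["op"] == "WRITE"
    return [
        {
            "exchange_id": next(_exchange_ids),   # a new I/O, not a forwarded frame
            "dest_id": dest,
            "op": "WRITE",
            "lba": cmd["lba"],
            "data": cmd["data"],
        }
        for dest in downstream_ids
    ]

downstream = handle_incoming_write(
    {"op": "WRITE", "lba": 42, "data": b"x" * 512}, [0x010400, 0x010500]
)
print(len(downstream), "downstream I/Os issued")
```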

This terminating and reinitiating of I/Os isn't necessarily a bad thing, but it's important to understand the implications. First, this process adds latency to all I/Os addressed to virtual devices. Queuing, terminating, processing and reinitiating the downstream I/Os will be slower than simply forwarding an I/O through to an egress port based on the destination ID in the header.

Second--and perhaps more importantly--error processing for I/O operations becomes much more involved. If there's an error on any of the downstream I/Os, the initiator function in the virtualization operator has to become aware of it and either retry the operation or pass the error back to the upstream initiator for a retry. The virtualization operator then needs to determine if it will continue working with the problematic downstream ID. If so, it can try again; if not, it can operate in reduced mode according to the form of virtualization being used for the virtual device.
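The retry-then-degrade decision could look roughly like the sketch below; the retry count and the way the administrator is notified when the mirror drops to reduced mode are assumptions made for illustration.

```python
# Sketch of the retry/degrade decision for one downstream member; the retry
# count and the notion of a "reduced" mirror are illustrative assumptions.
def write_with_retry(member, lba, data, retries=2):
    for attempt in range(retries + 1):
        try:
            member.write(lba, data)
            return True
        except IOError:
            continue                        # retry the downstream I/O
    return False                            # give up on this member

def mirrored_write(members, lba, data):
    healthy = [m for m in members if write_with_retry(m, lba, data)]
    if len(healthy) < len(members):
        notify_admin = print                # stand-in for a real alerting path
        notify_admin("WARNING: mirror running in reduced mode; repair required")
    if not healthy:
        raise IOError("all downstream members failed; error returned upstream")
```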

When failures occur, it's essential that the virtualization operator in the switch notify the administrator of the failure so it can be rectified in a timely fashion. This is a high-priority problem because a subsequent network or device failure on any of the I/O paths for the virtual device results in a loss of data. The situation is no different than for any other virtualization product, such as volume managers and disk subsystem controllers, except that the notification, diagnosis, troubleshooting and repair of the problem is done through switch management interfaces instead of host or subsystem interfaces. Customers will need to verify that the timeliness of problem reporting, the severity levels assigned and the repair methods available through the switch are adequate for their operations. If best practices have been defined for storage I/O failure operations, then the switch should be expected to comply with them.

Initially, storage application-enabled switches aren't expected to have local storage connections for directly-attached storage devices, which implies that any storage functionality will involve virtual storage techniques. The most common forms of storage virtualization are mirroring, striping, concatenation and RAID.

Mirroring and striping
Mirroring is one of the simplest, yet most powerful virtualization techniques. The latency injected by the virtualization operator in mirroring is minimal, as there's no parsing of the command and no block address manipulation to do. Additionally, error recovery is simpler for mirroring than for other forms of storage virtualization.

Striping takes advantage of parallel operation on storage to provide faster performance. Incoming I/O operations from initiators would be terminated by the virtualization operator and a number of reinitiated I/O operations would be created, corresponding to the number of downstream storage IDs in the virtual device.

Striping requires that the incoming I/O operation be split up among the member storage destinations of the virtual device. While striping without parity redundancy is used solely for performance purposes, it's not clear that using striping in a switch would result in a net performance gain, after the latency of the virtualization operator is taken into account. Also, considering the lack of protection from failures, it's likely to be used only in cases where flexibility in creating striped arrays of different sizes is the highest priority.
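For illustration, here's the kind of address arithmetic a striping operator performs, mapping a virtual block address onto a member device and an address within that member; the stripe unit size and member count are made up.

```python
# Illustrative stripe-address math: map a virtual LBA onto a member device
# and an LBA within that member. Stripe unit size and member count are assumptions.
STRIPE_UNIT = 64          # blocks per stripe unit (illustrative)
MEMBERS = 4               # number of downstream storage IDs in the virtual device

def map_stripe(virtual_lba: int) -> tuple[int, int]:
    unit_index = virtual_lba // STRIPE_UNIT
    member = unit_index % MEMBERS                       # which downstream ID
    member_lba = (unit_index // MEMBERS) * STRIPE_UNIT + virtual_lba % STRIPE_UNIT
    return member, member_lba

print(map_stripe(0))      # (0, 0)
print(map_stripe(70))     # (1, 6)
print(map_stripe(300))    # (0, 108)
```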

[Figure: Reinitiating an I/O command -- the termination and reinitiation process.]

RAID
RAID in a switch uses parity or mirrored redundancy to provide data protection from downstream storage and link failures. As with striping, the incoming I/O would be terminated by the virtualization operator, and the number of subsequent downstream I/Os would be determined by the number of member storage IDs in the array.

Parity RAID adds a significant level of overhead to the virtualization operator. For example, the read-modify-write penalty of RAID-5 would place a much higher burden on a switch processor than mirrored redundancy such as that used in RAID-1 or RAID-0+1. Because RAID-1 was already discussed above as mirroring, RAID-0+1 is likely to be the RAID level of choice for switch virtual storage implementations to minimize the negative impact of unanticipated write and switch workloads.
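The sketch below illustrates why a small RAID-5 write is so much heavier than a mirrored write: two extra reads and a parity XOR precede the two writes. It's purely illustrative and assumes the parity block for the stripe lives on a single hypothetical parity disk.

```python
# Sketch of the RAID-5 read-modify-write penalty for a small (partial-stripe)
# write: two reads and an XOR precede the two writes. Purely illustrative.
def xor_blocks(a: bytes, b: bytes) -> bytes:
    return bytes(x ^ y for x, y in zip(a, b))

def raid5_small_write(data_disk, parity_disk, lba, new_data):
    old_data   = data_disk.read(lba)                    # extra read #1
    old_parity = parity_disk.read(lba)                  # extra read #2
    new_parity = xor_blocks(xor_blocks(old_parity, old_data), new_data)
    data_disk.write(lba, new_data)                      # write #1
    parity_disk.write(lba, new_parity)                  # write #2

# A mirrored (RAID-1 or 0+1) write, by contrast, is just two writes:
def raid1_write(primary, secondary, lba, new_data):
    primary.write(lba, new_data)
    secondary.write(lba, new_data)
```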

Concatenation
Concatenation merges two or more separate storage address spaces and presents them as a single, contiguous virtual storage address space. Like striping, it has no built-in redundancy to protect against downstream link and storage failures. Unlike striping, however, a failure in a concatenated virtual device doesn't necessarily result in the loss of all data, because data that's stored completely on a surviving downstream storage ID may still be accessible. However, there's no guarantee that data objects were written completely on any one device of a concatenated storage volume. The overhead of concatenation is relatively minor, involving simple algebraic processes for translating storage addresses.
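That "simple algebraic process" amounts to little more than subtracting the starting offset of whichever member the virtual address falls in, as in this illustrative sketch with made-up member sizes.

```python
# Illustrative address translation for concatenation: subtract the starting
# offset of whichever member the virtual address falls in. Sizes are made up.
MEMBER_SIZES = [1000, 2500, 1500]   # blocks per downstream storage ID

def map_concat(virtual_lba: int) -> tuple[int, int]:
    offset = 0
    for member, size in enumerate(MEMBER_SIZES):
        if virtual_lba < offset + size:
            return member, virtual_lba - offset    # (downstream ID index, member LBA)
        offset += size
    raise ValueError("address beyond end of concatenated volume")

print(map_concat(500))    # (0, 500)
print(map_concat(1200))   # (1, 200)
print(map_concat(4900))   # (2, 1400)
```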

Fierce competition
In the heyday of the technology bubble when everything seemed possible, storage switch vendors expected the market to gobble up virtualization switches like hotcakes. Times have certainly changed and the prospects of switch-based virtualization don't seem like such a sure thing anymore.

The issue for IT professionals is whether switch vendors will provide virtualization functions that add enough value to make them worth paying a premium for. That can only happen by making storage easier to manage and less expensive to own. Already, volume management software and disk subsystem controllers provide complete functionality for the most useful types of storage virtualization.

Beyond competition with volume management software and disk subsystems, there are new virtualization products and architectures to contend with. Switch-based virtualization is another instance of in-band virtualization, a concept that hasn't been too warmly received by the market yet. If the market decides that out-of-band virtualization is a better way to go as Gartner Inc. has suggested, that would leave the switch virtualization players holding a mostly empty bag.

This was first published in February 2003