"In a split-path virtualization architecture, 90+% of the requests pass through the switch at wire speed; only if something special like migrating of data needs to be performed [does] the control-path controller get involved," explains StorageIO Group's Schulz. The separation of the data path and control path, combined with the low latency of switching for translating and forwarding virtualized storage requests, makes fabric-based virtualization the best performing and most scalable virtualization architecture today.
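The split-path idea can be illustrated with a short sketch. This is purely hypothetical code (the class and method names are invented, not any vendor's actual firmware): the data path resolves most I/O with a simple mapping-table lookup and forwards it immediately, while anything it cannot resolve locally, such as an extent being migrated, faults to the control-path controller.

```python
# Illustrative sketch of split-path request handling (hypothetical names).
# Fast path: stateless translate-and-forward. Slow path: control-path
# controller handles exceptions such as extents under migration.

class ControlPathController:
    """Slow path: resolves exceptions the switch cannot handle itself."""
    def resolve(self, virtual_extent):
        # A real controller might wait for a migration to complete,
        # update the map, then return the new physical location.
        return ("array-B", virtual_extent)  # assumed relocation target

class DataPathSwitch:
    """Fast path: translate-and-forward using a mapping table."""
    def __init__(self, controller):
        self.controller = controller
        self.map = {}            # virtual extent -> (array, physical extent)
        self.migrating = set()   # extents currently being moved

    def handle_io(self, virtual_extent):
        if virtual_extent in self.map and virtual_extent not in self.migrating:
            return self.map[virtual_extent]       # wire-speed forward
        # Exception: fault to the control path, then cache the answer
        # in the mapping table so later I/O stays on the fast path.
        target = self.controller.resolve(virtual_extent)
        self.map[virtual_extent] = target
        self.migrating.discard(virtual_extent)
        return target

switch = DataPathSwitch(ControlPathController())
switch.map[0] = ("array-A", 0)
print(switch.handle_io(0))   # fast path: ('array-A', 0)
print(switch.handle_io(7))   # control-path fault: ('array-B', 7)
```

The point of the split is visible in the structure: the common case touches only a dictionary lookup, and the controller is consulted only for the rare exceptions.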
On the downside, switch-based virtualization has the highest level of vendor lock-in of all virtualization approaches. Because the switch serves as the platform that runs the virtualization software, changing switch vendors later becomes very difficult. Furthermore, as intelligent switches turn into multitasking platforms, running concurrent storage services from the switch vendor and third parties makes these switches more challenging to support.
As long as there are no problems, it's a great concept. But if the virtualization software or any of the third-party storage services misbehaves, troubleshooting may require a concerted effort from all of the involved parties. Along with the relatively high cost of intelligent switches, this increased complexity and more difficult technical support help explain the cautious adoption of fabric-based virtualization. "In general, storage managers like to keep things simple and tend to go with more self-contained, easier-to-manage solutions like LSI [Corp.'s] StoreAge or even IBM SVC," says Nelson Nahum, who was CTO at StoreAge before it was acquired by LSI.
Without question, the low latency of fabric-based virtualization is a big plus, but eliminating cache has a downside. Virtualization products with cache, like IBM's SVC and Hitachi's USP V, use it to boost the performance of back-end storage; as a result, they encourage the use of lower cost, lower performing storage tiers, with the cache masking the slower disks. The low latency of switch-based virtualization is great for accessing fast arrays, but the lack of cache becomes a disadvantage when accessing lower performance arrays. "In switch-based virtualization, back-end disk performance shows unmasked," says StorageIO's Schulz.
A second and more profound implication of the stateless nature of switch-based virtualization is that it complicates virtualization features that must keep state beyond the mapping tables. Features like remote replication and thin provisioning require memory to maintain that state. For instance, a 2TB thin-provisioned volume backed by 100GB of physical storage must track which 100GB are actually in use. Products like IBM's SVC and Hitachi's USP V keep this information in memory alongside the cache; switch-based virtualization products don't have the luxury of cache memory, so their only option is to maintain it on the SAN. "There's no complete solution for remote snapshots, remote mirroring and thin provisioning in switch-based virtualization products today because they're very difficult to implement without cache," says Fujitsu's DeCaires.
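The thin-provisioning example above can be sketched as an allocation map. This is a minimal, hypothetical illustration (the chunk size and class names are assumptions, not any product's actual design): physical chunks are allocated only on first write, and the map itself is exactly the state that must live somewhere, whether in controller memory or out on the SAN.

```python
# Minimal sketch of thin-provisioning state (illustrative, assumed design).
# A large virtual volume is carved into fixed-size chunks; a physical
# chunk is allocated only when a virtual chunk is first written.

CHUNK = 1 << 30  # 1 GiB chunks (assumed granularity)

class ThinVolume:
    def __init__(self, virtual_size):
        self.virtual_size = virtual_size
        self.alloc_map = {}        # virtual chunk index -> physical chunk
        self.next_physical = 0     # next free physical chunk

    def write(self, offset):
        chunk = offset // CHUNK
        if chunk not in self.alloc_map:          # allocate on first write
            self.alloc_map[chunk] = self.next_physical
            self.next_physical += 1
        return self.alloc_map[chunk]

    def physical_used(self):
        return len(self.alloc_map) * CHUNK

vol = ThinVolume(2 << 40)              # 2 TiB virtual volume
for off in range(0, 100 * CHUNK, CHUNK):
    vol.write(off)                     # touch the first 100 GiB
print(len(vol.alloc_map))              # -> 100 chunks allocated
```

The map here is small, but it must survive controller restarts and be consistent across paths, which is why maintaining it is straightforward when there is controller memory and cache to hold it, and much harder in a stateless switch.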
This was first published in September 2008