Today's data centers are beginning to be structured around the separation of storage into storage area networks (SANs), with servers consolidated around the compute function. In tomorrow's data centers, servers will follow the modular path that storage has already started down, as we discussed in the first part of this article last month (see "Towards the new data center," November 2002).
What's missing from this picture is how storage and computing power will be connected and managed. There's more at stake than just questions of bandwidth and protocols: Data center designers will have to decide to what extent their infrastructures will be layered in tiers organized around price/performance.
As for the networking aspect, Fibre Channel (FC), with its 2.12Gb/s bandwidth, has become the standard method for shuttling data between servers, switches and storage arrays within the enterprise SAN. But as server and storage densities increase, it's clear tomorrow's data centers will require a faster approach. "A high-speed interconnect is absolutely essential," says Fred Hanhauser, director of storage products marketing with Unisys.
From a distance
What's less clear is just what that interconnect is going to be. There are three main contenders: 10Gb/s FC, 10Gb/s Ethernet and InfiniBand.
The argument for FC is straightforward. It's a continuation of today's 2Gb/s technology and will have to undergo little change to reach the 10Gb/s level. The argument against it is also straightforward: FC is confined to storage networking and requires skills and equipment that can't be leveraged across the rest of the computing infrastructure. Consequently, FC carries at least a perceived cost premium, both in equipment and staffing.
The case for 10Gb/s Ethernet is exactly the opposite. IP storage based on Ethernet is new. But will users really be able to leverage their existing Ethernet infrastructure, or will they have to build a second Ethernet network for storage, especially for higher-performance applications like those in the data center? Such questions remain to be answered. But the promise of lower cost and of integration into the enterprise systems and network management infrastructure is attractive.
Secondary data layer
It's also worth considering setting up a secondary storage network using Gigabit Ethernet, iSCSI and ATA-attached disks to provide hot backup of your faster, mission-critical disks. Creating this secondary data layer not only facilitates more flexible backup strategies, but also allows data management policies that free up fast disk space by cascading little-used files onto the secondary array.
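A cascading policy of this kind can be sketched in a few lines. The sketch below assumes both arrays are mounted as ordinary filesystems; the paths and the 90-day idle threshold are hypothetical choices for illustration, not part of any vendor's product.

```python
import os
import shutil
import time

def cascade_cold_files(fast_dir, secondary_dir, max_idle_days=90):
    """Move files not accessed within max_idle_days from the fast,
    mission-critical array to the secondary ATA-backed array,
    freeing up fast disk space."""
    cutoff = time.time() - max_idle_days * 86400
    moved = []
    for root, _dirs, files in os.walk(fast_dir):
        for name in files:
            src = os.path.join(root, name)
            if os.stat(src).st_atime < cutoff:  # little-used file
                rel = os.path.relpath(src, fast_dir)
                dst = os.path.join(secondary_dir, rel)
                os.makedirs(os.path.dirname(dst) or secondary_dir,
                            exist_ok=True)
                shutil.move(src, dst)  # cascade onto secondary array
                moved.append(rel)
    return moved
```

In practice such a policy would run from a scheduler and leave a stub or symlink behind so applications can still find the migrated file, but the core decision -- access age drives placement tier -- is as simple as shown.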
Certainly, a case can be made that both FC and iSCSI over high-speed Ethernet should be part of the infrastructure. Users will have their choice of approaches to the networking impact of mixed protocols, as multiprotocol switches and gateways from vendors such as Cisco, Nishan, and Pirus--now part of Sun--allow for IP-based and FC storage traffic to be handled by the same switching fabric. As storage subsystems take on more connectivity options, a multiprotocol storage network will become a more practical alternative for users who want to leverage the relative strengths of each protocol.
IP for storage
While extensive use of IP for storage raises a lot of conventional eyebrows, it's at the heart of what Gary Johnson, an architectural consultant at Carlson Companies, Minneapolis, MN, is doing. Johnson maintains that the best approach to reining in the company's more than 10TB of Oracle data is to retain IP's topological independence rather than build out FC expertise, relying on protocols such as the Internet Fibre Channel Protocol (iFCP), which translates FC data from SAN equipment into native TCP/IP streams.
This approach lets Johnson sidestep the need to add FC experts and stick with what he knows--run the entire SAN backbone over existing Gigabit Ethernet links. Carlson implemented iFCP by using a Nishan 4300 IP Storage switch to translate between the company's IP network and its FC-connected HP XP512 storage array, accessed by 20 HP-UX servers and eight Solaris servers.
Retaining an FC interconnect ensures that Carlson can continue to build on its FC success in the data center. But outside of the data center, Johnson points out, FC is much harder to manage.
Because IP-based iFCP can be managed like any other IP stream, Johnson is able to use VLANs to create and destroy IP-based storage access pipes at will. IP-based data can be routed as necessary, and is translated back into FC using a matching Nishan switch at the remote office. Furthermore, IP data transfer can be monitored and managed using standard IP-based management environments. Johnson can also use conventional IP-based VLAN methods to logically associate specific storage volumes with remote workgroups as necessary.
"Because I now use IP in the center of my SAN, I can segregate storage and servers on separate VLANs," Johnson says. "Traditionally, you start to run into a problem with how many devices you have on the network. But by staying with IP, I get the same solution across the board for all solutions--if I start using Fibre Channel, I'm going to have distance limitations and other problems with how I do that."
VSANs--the analog in the FC world--should be available in 2003. Theoretically, they should provide a way to manage quality of service, congestion and security for large, complex fabrics. Farther down the road, the ability to run both Ethernet and FC over a common transport--with common management of bandwidth, flow control and quality of service--is exactly the promise of InfiniBand. Designed around bundles of point-to-point 2.5Gb/s links, InfiniBand was given a cautious green light in an IDC analysis last year and will soon begin appearing in 4x configuration--at 10Gb/s--in products from IBM and others, although it has suffered a number of setbacks, both perceptual and real. Still, InfiniBand will probably soon appear as a method for interconnecting high-speed servers or I/O components within the box. Expect some storage vendors to use it to connect parts of a disk subsystem within the box, while sticking to FC or Ethernet as the outbound protocol. But the vision of a single, unifying data center transport protocol remains just that.
"InfiniBand chewed off too big a piece by wanting to provide all the transport as well as the server architecture," says Tom Clark, director of technical marketing with switch maker Nishan Systems, San Jose, CA. He believes it will be early 2004 before InfiniBand gains enough relevance to merit inclusion in Nishan's products.
While InfiniBand has yet to dethrone standard Ethernet-based technologies, there are other options for linking together SAN components and server farms. In September, Intel licensed Lucent Technologies to bolster Gigabit Ethernet with remote direct memory access (RDMA) features--found in InfiniBand--that will facilitate even faster interconnects.
There are other alternatives. For example, StarGen, Marlborough, MA, is pushing its StarFabric technology, which offers up to 11Gb/s of bandwidth per chassis and aggregate switching capacities measured in terabits per second. StarGen is also working with the Intel-backed PCI Express effort to fold many of StarFabric's key characteristics into the PCI Express standard due in 2004.
Another question in the design of the new data center is where key aspects of intelligence will lie. In direct-attached storage (DAS), intelligence about storage lies mainly in the host--in volume management, file system and RAID software or hardware. In the current state of the art in storage networks, some intelligence exists in every layer--host, switch, subsystem and management appliances on the network--but no obvious reason to consolidate it in any one place has emerged.
The need to virtualize storage at several levels raises this question, though. Expect to see software such as Veritas Foundation Suite move out of hosts and into switches such as Cisco's. Integrating intelligent switches or virtualization engines with multivendor storage creates the need for a second layer of intelligence: functions such as snapshot, replication and point-in-time copy won't work correctly unless they exist out on the network. That's further complicated by the lack of standardization in these functions. Users will likely be the guinea pigs for the storage industry as these questions are sorted out.
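To make the point-in-time copy semantics concrete, here is a toy copy-on-write snapshot sketch. The `Volume` class and its block layout are illustrative assumptions only--no vendor implements snapshots this way in Python--but the core idea is the one network-resident virtualization engines must agree on: a snapshot costs nothing until a write occurs, at which point the old block is preserved.

```python
class Volume:
    """Toy block volume with copy-on-write point-in-time snapshots.
    Purely illustrative; real arrays do this in firmware or a
    virtualization engine, not application code."""

    def __init__(self, nblocks):
        self.blocks = [b""] * nblocks
        self.snapshots = []  # each snapshot: {block_no: pre-write data}

    def snapshot(self):
        """Create a point-in-time image; free until writes occur."""
        self.snapshots.append({})
        return len(self.snapshots) - 1  # snapshot id

    def write(self, block_no, data):
        # Copy-on-write: preserve the pre-write contents for every
        # snapshot that hasn't yet saved its own copy of this block.
        for snap in self.snapshots:
            if block_no not in snap:
                snap[block_no] = self.blocks[block_no]
        self.blocks[block_no] = data

    def read_snapshot(self, snap_id, block_no):
        # Blocks untouched since the snapshot read through to the
        # live volume; overwritten blocks come from the saved copies.
        return self.snapshots[snap_id].get(block_no,
                                           self.blocks[block_no])
```

The interoperability problem the article describes is visible even here: if a host, a switch and an array each keep their own `snapshots` table with no common standard, none of them can safely honor the others' point-in-time images.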
The same is true of file systems. Network-attached storage (NAS) file heads that address multiple SAN devices will be a key component of the new data center, but exactly what form this will take--integrated into the array (NetApp) or a standalone box (EMC Celerra, among others)--is impossible to predict at this point.
This was first published in December 2002