Feature

Your next data center: Think flexible


Today's data centers are beginning to be structured around the separation of storage into storage area networks (SANs), with servers consolidated around the compute function. In tomorrow's data centers, servers will follow the modular path that storage has already started down, as we discussed in the first part of this article last month (see "Towards the new data center," November 2002).

What's missing from this picture is how storage and computing power will be connected and managed. There's more at stake than just questions of bandwidth and protocols: Data center designers will have to decide to what extent their infrastructures will be layered in tiers organized around price/performance.

As for the networking aspect, Fibre Channel (FC), with its 2.12Gb/s bandwidth, has become the standard method for shuttling data between servers, switches and storage arrays within the enterprise SAN. But as server and storage densities increase, it's clear tomorrow's data centers will require a faster approach. "A high-speed interconnect is absolutely essential," says Fred Hanhauser, director of storage products marketing with Unisys.


Sidebar: From a distance
The push towards total storage and server virtualization is truly underway, but it's not enough to simply carry the trend to its logical conclusion--consolidation of all data management assets into a single location.

Here's where technical reality meets the demands of business. As Sept. 11 taught us, data centers still need to be architected for redundancy and easy recovery in a disaster. That means it's best to plan on spreading your resources across at least two or three data centers, depending on the size of your organization and your storage requirements.

Fortunately, the convergence of new technologies makes this process relatively seamless. Widely available metropolitan area network (MAN) connections offer fiber-based links at multigigabit speeds, minimizing the latency penalty that distance would otherwise impose. Deploying a MAN allows the creation of a backup data center that can either be configured as a hot standby, or put to more serious use as a full-time data center with excess capacity to pick up the slack.

Inexpensive ATA arrays may become the backup targets for the new data center. Such arrays can be replicated in every data center with relatively little expense. In the event of a disaster, disk backup can be used to quickly restore the other data centers. Tape could still be used for off-site archival storage.
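As a minimal sketch of what disk-based backup to such an array might look like in practice, consider the following hypothetical mirroring script. The mount points and the decision to copy only new or changed files are assumptions made for illustration, not a prescription from the article:

```python
import shutil
import time
from pathlib import Path

# Hypothetical mount points -- substitute your own primary volume
# and the inexpensive ATA array mounted as the backup target.
PRIMARY = Path("/mnt/primary_volume")
BACKUP = Path("/mnt/ata_backup_array")

def mirror_to_backup(src: Path, dst: Path) -> int:
    """Copy any file that is new or changed on the primary volume
    to the ATA backup array. Returns the number of files copied."""
    copied = 0
    for src_file in src.rglob("*"):
        if not src_file.is_file():
            continue
        dst_file = dst / src_file.relative_to(src)
        if (not dst_file.exists()
                or src_file.stat().st_mtime > dst_file.stat().st_mtime):
            dst_file.parent.mkdir(parents=True, exist_ok=True)
            shutil.copy2(src_file, dst_file)  # copy2 preserves timestamps
            copied += 1
    return copied

if __name__ == "__main__":
    start = time.time()
    n = mirror_to_backup(PRIMARY, BACKUP)
    print(f"Mirrored {n} files in {time.time() - start:.1f}s")
```

Because timestamps are preserved, later runs copy only what has changed since the last pass, which is what keeps replicating to every data center relatively cheap.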

Using IP as the root of your storage strategy pays off big when you've got large distances to cross: Third-party network providers can be held to SLAs and minimum performance guarantees that ensure your data transfer rates won't suffer no matter how far apart your data centers are. IP is also critical for management, because it allows the wealth of IP-based management tools to be applied to shape and logically allocate data center resources.
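The arithmetic behind holding a provider to a minimum rate is simple enough to sketch. The SLA floor and transfer figures below are hypothetical, chosen only to show the check:

```python
# Hypothetical SLA check: does the measured inter-data-center transfer
# rate meet the minimum the network provider committed to?
SLA_MINIMUM_MBPS = 800          # assumed contractual floor, in megabits/s

def transfer_rate_mbps(bytes_moved: int, seconds: float) -> float:
    """Convert a measured transfer (bytes over elapsed seconds) to Mb/s."""
    return (bytes_moved * 8) / (seconds * 1_000_000)

# Example: a 90GB replication window that took 15 minutes.
rate = transfer_rate_mbps(90 * 10**9, 15 * 60)
print(f"Measured rate: {rate:.0f} Mb/s")
if rate < SLA_MINIMUM_MBPS:
    print("Below the SLA floor -- flag it with the provider.")
```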


What's less clear is just what that interconnect is going to be. There are three main contenders: 10Gb/s FC, 10Gb/s Ethernet and InfiniBand.

The argument for FC is straightforward. It's a continuation of today's 2Gb/s technology and will have to undergo little change to reach the 10Gb/s level. The argument against it is also straightforward: FC is confined to storage networking and requires skills and equipment that can't be leveraged across the rest of the computing infrastructure. Consequently, FC carries at least a perceived cost premium, both in equipment and staffing.

The case for 10Gb/s Ethernet is exactly the opposite. IP storage based on Ethernet is new. But will users really be able to leverage their existing Ethernet infrastructure, or will they have to establish a second Ethernet network for storage, especially in higher-performance applications like those in the data center? Such questions remain to be answered. But the promise of lower cost and integration into the enterprise systems and network management infrastructure is attractive.

Secondary data layer
It's also worth considering setting up a secondary storage network using Gigabit Ethernet, iSCSI and ATA-attached disks to provide hot backup of your faster, mission-critical disks. Creating this secondary data layer not only facilitates more flexible backup strategies, but also allows the creation of data management policies that free up fast disk space by cascading little-used files onto the secondary array.
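A minimal sketch of such a cascading policy might look like the following; the mount points and the 90-day idle threshold are assumptions for illustration:

```python
import shutil
import time
from pathlib import Path

# Hypothetical mounts: fast FC-attached disk and the secondary
# iSCSI/ATA array reached over Gigabit Ethernet.
FAST_TIER = Path("/mnt/fc_primary")
SECONDARY_TIER = Path("/mnt/iscsi_secondary")
AGE_THRESHOLD_DAYS = 90         # assumed policy: untouched for 90 days

def cascade_idle_files() -> None:
    """Move files that haven't been accessed within the threshold from
    the fast tier to the secondary array, freeing up fast disk space."""
    cutoff = time.time() - AGE_THRESHOLD_DAYS * 86400
    for f in FAST_TIER.rglob("*"):
        if f.is_file() and f.stat().st_atime < cutoff:
            target = SECONDARY_TIER / f.relative_to(FAST_TIER)
            target.parent.mkdir(parents=True, exist_ok=True)
            shutil.move(str(f), str(target))
            print(f"Cascaded {f} -> {target}")

if __name__ == "__main__":
    cascade_idle_files()
```

Run nightly, a policy like this keeps the fast disks reserved for data that is actually being touched, while the secondary array absorbs everything else.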

Certainly, a case can be made that both FC and iSCSI over high-speed Ethernet should be part of the infrastructure. Users will have a choice of ways to handle the networking impact of mixed protocols, as multiprotocol switches and gateways from vendors such as Cisco, Nishan and Pirus--now part of Sun--allow IP-based and FC storage traffic to be handled by the same switching fabric. As storage subsystems take on more connectivity options, a multiprotocol storage network will become a more practical alternative for users who want to leverage the relative strengths of each protocol.

IP for storage
While extensive use of IP for storage raises a lot of conventional eyebrows, it's at the heart of what Gary Johnson, an architectural consultant at Carlson Companies, Minneapolis, MN, is doing. Johnson maintains that the best approach to reining in the company's more than 10TB of Oracle data is to retain IP's topological independence by relying on protocols such as the Internet Fibre Channel Protocol (iFCP), which translates FC data from SAN equipment into native TCP/IP streams.

This approach lets Johnson sidestep the need to add FC experts and stick with what he knows: running the entire SAN backbone over existing Gigabit Ethernet links. Carlson implemented iFCP using a Nishan 4300 IP Storage switch to translate between the company's IP network and its FC-connected HP XP512 storage array, which is accessed by 20 HP-UX servers and eight Solaris servers.
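Conceptually, the translation amounts to carrying FC frames inside an ordinary TCP/IP connection. The sketch below illustrates only that general idea--it does not reproduce the actual iFCP encapsulation header or Nishan's implementation, and the gateway address and port are placeholders:

```python
import socket
import struct

# Placeholder address for a remote iFCP-style gateway; treat both the
# host and the port number as assumptions for this sketch.
GATEWAY = ("198.51.100.10", 3420)

def _recv_exact(sock: socket.socket, n: int) -> bytes:
    """Read exactly n bytes from a blocking TCP socket."""
    buf = b""
    while len(buf) < n:
        chunk = sock.recv(n - len(buf))
        if not chunk:
            raise ConnectionError("stream closed mid-frame")
        buf += chunk
    return buf

def send_fc_frame(sock: socket.socket, fc_frame: bytes) -> None:
    """Length-prefix an FC frame and push it onto the TCP stream.
    (Real iFCP defines its own encapsulation; this shows only the
    general shape of 'FC payload carried inside TCP/IP'.)"""
    sock.sendall(struct.pack("!I", len(fc_frame)) + fc_frame)

def recv_fc_frame(sock: socket.socket) -> bytes:
    """Pull one length-prefixed frame back off the stream."""
    (length,) = struct.unpack("!I", _recv_exact(sock, 4))
    return _recv_exact(sock, length)
```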

Retaining an FC interconnect ensures that Carlson can continue to build on its FC success in the data center. But outside of the data center, Johnson points out, FC is much harder to manage.

Because IP-based iFCP can be managed like any other IP stream, Johnson is able to use VLANs to create and destroy IP-based storage access pipes at will. IP-based data can be routed as necessary, and is translated back into FC using a matching Nishan switch at the remote office. Furthermore, IP data transfer can be monitored and managed using standard IP-based management environments. Johnson can also use conventional IP-based VLAN methods to logically associate specific storage volumes with remote workgroups as necessary.
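As a rough illustration of that kind of IP-based monitoring, assuming a Linux host where the storage VLAN shows up as a tagged interface (the interface name is hypothetical), byte counters can be sampled and turned into throughput figures:

```python
import time

IFACE = "eth0.100"   # hypothetical tagged interface for the storage VLAN

def rx_tx_bytes(iface: str) -> tuple[int, int]:
    """Pull receive/transmit byte counters for one interface
    from /proc/net/dev (Linux)."""
    with open("/proc/net/dev") as f:
        for line in f:
            if line.strip().startswith(iface + ":"):
                fields = line.split(":", 1)[1].split()
                return int(fields[0]), int(fields[8])
    raise ValueError(f"interface {iface} not found")

# Sample twice, ten seconds apart, and report throughput in Mb/s.
rx1, tx1 = rx_tx_bytes(IFACE)
time.sleep(10)
rx2, tx2 = rx_tx_bytes(IFACE)
print(f"rx {(rx2 - rx1) * 8 / 10 / 1e6:.1f} Mb/s, "
      f"tx {(tx2 - tx1) * 8 / 10 / 1e6:.1f} Mb/s")
```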

"Because I now use IP in the center of my SAN, I can segregate storage and servers on separate VLANs," Johnson says. "Traditionally, you start to run into a problem with how many devices you have on the network. But by staying with IP, I get the same solution across the board for all solutions--if I start using Fibre Channel, I'm going to have distance limitations and other problems with how I do that."

VSANs--the analog in the FC world--should be available in 2003. Theoretically, they should provide a way to manage quality of service, congestion and security for large, complex fabrics.

Farther down the road, the ability to run both Ethernet and FC over a common transport--with common management of bandwidth, flow control and quality of service--is exactly the promise of InfiniBand. Designed around bundles of point-to-point 2.5Gb/s links, InfiniBand was given a cautious green light in an IDC analysis last year and will soon begin appearing in 4x configurations--at 10Gb/s--in products from IBM and others, although it has suffered a number of setbacks, both perceptual and real. Still, InfiniBand will probably soon appear as a method for interconnecting high-speed servers or I/O components within the box. Expect some storage vendors to use it to connect parts of a disk subsystem within the box, while sticking to FC or Ethernet as the outbound protocol. But the vision of a single, unifying data center transport protocol remains just that.

"InfiniBand chewed off too big a piece by wanting to provide all the transport as well as the server architecture," says Tom Clark, director of technical marketing with switch maker Nishan Systems, San Jose, CA. He believes it will be early 2004 before InfiniBand gains enough relevance to merit inclusion in Nishan's products.

While InfiniBand has yet to dethrone standard Ethernet-based technologies, there are other options for linking together SAN components and server farms. In September, Intel licensed Lucent Technologies to bolster Gigabit Ethernet with remote direct memory access (RDMA) features--found in InfiniBand--that will facilitate even faster interconnects.

There are other alternatives. For example, StarGen, Marlborough, MA, is pushing StarFabric technology that offers up to 11Gb/s of bandwidth per chassis and aggregate switching capacities measured in terabits per second. StarGen is working with another effort, the Intel-backed PCI Express, to combine many of its key characteristics into the PCI Express standard due in 2004.

Another question in the design of the new data center is where key aspects of intelligence will lie. In direct-attached storage (DAS), intelligence about storage lies mainly in the host--in volume management, file system and RAID software or hardware. In today's storage networks, some intelligence exists at every layer--host, switch, subsystem and management appliances on the network--but no obvious reason to consolidate it in any one place has emerged.

The need to virtualize storage at several levels raises this question, though. Expect to see software such as Veritas Foundation Suite move out of hosts and into switches such as Cisco's. Integrating intelligent switches or virtualization engines with multivendor storage creates the need for a second layer of intelligence. Functions such as snapshot, replication and point-in-time copy won't work correctly unless they exist out on the network. That's further complicated by the lack of standardization in these functions. Users will likely be the guinea pigs for the storage industry as these questions are sorted out.
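To make the terms concrete, a point-in-time copy freezes an image of a volume at a chosen moment. The toy copy-on-write sketch below illustrates the idea only; it is not tied to any vendor's implementation:

```python
class CopyOnWriteSnapshot:
    """Toy copy-on-write snapshot of a block 'volume' held in memory.
    Blocks are duplicated only when the live volume overwrites them,
    so the snapshot preserves the point-in-time image cheaply."""

    def __init__(self, volume: dict[int, bytes]):
        self.volume = volume                 # live volume: block number -> data
        self.frozen: dict[int, bytes] = {}   # original data for changed blocks

    def write(self, block: int, data: bytes) -> None:
        """Write through to the live volume, saving the old block first."""
        if block in self.volume and block not in self.frozen:
            self.frozen[block] = self.volume[block]
        self.volume[block] = data

    def read_snapshot(self, block: int) -> bytes:
        """Read the block as it looked when the snapshot was taken."""
        return self.frozen.get(block, self.volume.get(block, b""))

# Usage: snapshot a tiny volume, overwrite a block, and read both views.
vol = {0: b"alpha", 1: b"beta"}
snap = CopyOnWriteSnapshot(vol)
snap.write(0, b"ALPHA-NEW")
print(vol[0], snap.read_snapshot(0))   # b'ALPHA-NEW' b'alpha'
```

When functions like this live out on the network rather than in a single host, every attached server sees the same frozen image, which is the point of pushing the intelligence into the fabric.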

The same is true of file systems. Network-attached storage (NAS) file heads that address multiple SAN devices will be a key component of the new data center, but exactly what form this will take--integrated into the array (NetApp) or a standalone box (EMC Celerra, among others)--is impossible to predict at this point.

This was first published in December 2002
