This article can also be found in the Premium Editorial Download "Storage magazine: Who owns storage in your organization?"
TCP/IP wasn't designed to transport large volumes of stored data. The pervasive WAN protocol has no problem with small amounts of data that can be sliced and diced into tiny packets.
However, "TCP/IP doesn't handle congestion well. It does a lot of checking back and forth and resending, which just slows things way down," says Marc Staimer, founder, Dragon Slayer Consulting, Beaverton, OR.
As a result, when an organization tries to replicate large amounts of data between sites over, say, a T1 link (1.5 Mb/s) using TCP/IP, the performance can't match what it gets when backing up that same data over the LAN at night. And if the organization wants to do anything else with that T1 link, forget it--the link is nearly saturated.
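The bandwidth arithmetic behind that saturation is simple to check. Here's a minimal sketch; the 1 GB data set and 100 Mb/s LAN rate are illustrative assumptions, not figures from the article, and real-world throughput is lower still once TCP/IP headers, acknowledgments and retransmissions are factored in:

```python
# Rough best-case transfer-time comparison: T1 WAN link vs. Fast Ethernet LAN.

def transfer_minutes(size_mb: float, link_mbps: float) -> float:
    """Ideal (no-overhead) time in minutes to move size_mb megabytes."""
    megabits = size_mb * 8
    return megabits / link_mbps / 60

t1 = transfer_minutes(1000, 1.5)    # 1 GB over a 1.5 Mb/s T1
lan = transfer_minutes(1000, 100)   # the same 1 GB over a 100 Mb/s LAN

print(f"T1:  {t1:.0f} minutes")     # ~89 minutes -- the link is tied up
print(f"LAN: {lan:.1f} minutes")    # ~1.3 minutes
```

Even in this ideal case, the T1 is occupied for the better part of an hour and a half per gigabyte, which is why the link ends up saturated.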
Today storage managers can choose among a growing number of appliances that promise to speed up the movement of large volumes of data over TCP/IP links. These products--network accelerators of various types (see "Network accelerators")--allow organizations to transmit more data faster and use less bandwidth, which saves money and boosts performance. Some of the new appliances--dubbed remote network-attached storage (NAS) accelerators or WAN file servers--even promise to eliminate the need to maintain file storage at remote sites, allowing centralized NAS storage to serve remote workers as if it were local.
Finisar Corp., a manufacturer of optical telecommunications tools, figured it needed two--or even three--T1 links between the Network Appliance Inc. (NetApp) NAS box at its Sunnyvale, CA, headquarters and a similar NetApp device at its facility in Malaysia to handle file replication and other data traffic between the sites. Faced with a communications investment running well into the thousands of dollars a month, the company tested a network acceleration appliance from Peribit Networks, Santa Clara, CA. The company dropped a 100MB file on the NetApp box in California and, using NetApp's SnapMirror function, fired it off to the NetApp filer in Malaysia.
"It took four minutes and 21 seconds to get it there compared to over nine minutes if we just used the T1," says Jon Hudson, Finisar's storage area network (SAN) architect. At that rate, the Peribit appliance, which compresses files, would give Finisar "a payback in weeks or a few months," he concludes.
A large Midwest shoe retailer faced a similar problem. "We were looking at having to buy a DS3 [45Mb/s] pipe to replicate data between our two data centers for disaster recovery," says the retailer's IT manager. The company maintains an AS/400 at each site and continuously replicates database transactions between them. In the event of a failure at either site, its hundreds of stores could keep functioning and barely miss a beat.
Hoping to avoid the costly pipe, the shoe retailer turned to NetCelera Networks, a company that provides a WAN acceleration appliance. The appliance terminates the TCP/IP connection and substitutes its own protocol to eliminate the TCP/IP overhead. In addition, it applies compression at Layer 5 (the session layer) of the OSI network stack. "We got 10:1 compression from NetCelera, which saved us from having to buy the DS3," says the manager.
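The article doesn't detail NetCelera's compression scheme, but a 10:1 ratio is believable for repetitive replication traffic. A rough illustration, using the general-purpose zlib codec as a stand-in for the appliance's proprietary compressor (the sample transaction record is invented):

```python
import zlib

# Database replication traffic tends to repeat the same field names and
# similar values over and over; LZ-style compression exploits that.
record = b"store_id=0421;sku=BOOT-1138;qty=2;status=SHIPPED;"
payload = record * 2000          # ~100 KB of repetitive transaction data

compressed = zlib.compress(payload, level=9)
ratio = len(payload) / len(compressed)
print(f"{len(payload)} -> {len(compressed)} bytes ({ratio:.0f}:1)")
```

On this deliberately redundant sample, the ratio far exceeds 10:1; real replication streams compress less dramatically, but the mechanism is the same.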
NetCelera and Peribit are just two of the many vendors introducing products to boost the performance of TCP/IP for the transmission of large data sets over the WAN. Many of these products are basic network accelerators that compress traffic, effectively boosting the performance of data transmission over the network or allowing the organization to use a smaller, less-costly communications link to achieve the same performance as before. Others don't use conventional compression at all. Instead, they transparently replace TCP/IP for the portion of the link between the primary and target site with an efficient proprietary protocol optimized for large data sets.
Even vendors doing only compression are achieving sizeable performance gains, typically 300%, says Peter Firstbrook, senior research analyst at Meta Group. Compression at the application level ordinarily slows down the server; the new products avoid this by offloading the processing to an appliance. Other products examine the content of the data being transmitted and can achieve up to a tenfold performance gain with some content, he adds. For example, a product will look at a large data set, identify repeating patterns and assign a tiny token to represent a big chunk of data. The product then sends just the token--instead of the data--and reconstructs all of it on the other side from the token.
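That pattern-suppression idea can be sketched as a shared chunk dictionary on both ends of the link: a chunk both sides have already seen crosses the wire as a short token rather than as data. A toy version follows; the chunk size, token width and payload are illustrative, and real appliances use far more sophisticated pattern detection:

```python
import hashlib

# Toy pattern suppression: both ends keep a dictionary of chunks already
# seen, keyed by a short hash "token". A repeated chunk crosses the wire
# as its 8-byte token instead of the full data.

CHUNK = 64  # bytes per chunk; purely illustrative

def send(data: bytes, seen: dict) -> list:
    wire = []
    for i in range(0, len(data), CHUNK):
        chunk = data[i:i + CHUNK]
        token = hashlib.sha1(chunk).digest()[:8]
        if token in seen:
            wire.append(("tok", token))          # repeat: send tiny token
        else:
            seen[token] = chunk
            wire.append(("raw", token, chunk))   # first sight: send data
    return wire

def receive(wire: list, seen: dict) -> bytes:
    out = bytearray()
    for item in wire:
        if item[0] == "tok":
            out += seen[item[1]]                 # expand token locally
        else:
            _, token, chunk = item
            seen[token] = chunk
            out += chunk
    return bytes(out)

sender_dict, receiver_dict = {}, {}
payload = b"ABCDEFGH" * 800                      # highly repetitive data
wire = send(payload, sender_dict)
assert receive(wire, receiver_dict) == payload   # lossless round trip
raw_bytes = sum(len(w[2]) for w in wire if w[0] == "raw")
print(f"{len(payload)} payload bytes sent as {raw_bytes} raw bytes + tokens")
```

Here 6,400 bytes of repetitive payload travel as a single 64-byte chunk plus 99 short tokens, which is the kind of content-dependent gain Firstbrook describes.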
This was first published in May 2004