Column

Speed up cloud storage with WAN acceleration technology ... or Tachyons


But if you don't have any of those hypothetical particles that move faster than light, there might be another way to implement WAN acceleration technology in your environment.

Lately, I've been giving a lot of thought to a technically nontrivial hurdle that cloud storage and WAN-based data replication aficionados don't want us to talk about: Our WANs are subluminal (slower than the speed of light). Outside of the Syfy channel, we're hard-pressed to collect enough tachyons on which to piggyback our data signals so they'll move faster than the speed of light. Neither are there any traversable wormholes in the public interexchange carrier (IXC) network, nor a sufficient quantity of the spice mélange to enable the space-time distortion (even for Dune fans) that would let our data travel great distances without really moving. Basically, we're stuck with subluminal velocities in our data transfers. Latency and jitter are facts of life.

There are some vendors that don't want us to think about this too much. I recently pulled a paper I was writing for a client because they wanted to "soften" references to WAN limitations that might stymie the appeal of their cloud value proposition. I saw their edits as an attempt to dumb down the discussion of what are substantively important gating factors on cloud service viability: the vicissitudes of data transfer over long distances. Frankly, because clouds depend on WAN links to deliver access to services, I fail to see how they can possibly guarantee -- with a straight face, that is -- any sort of meaningful service levels.

Try as they might to deliver quality infrastructure, applications or capacity services in conformance with a service-level agreement, cloud- or network-based service providers are at the mercy of the miles and miles of copper and glass that connect customers to their facilities. And all of them are controlled by a gaggle of incumbent local exchange carriers (ILECs), competitive local exchange carriers (CLECs) and IXCs that provide connections between local access and transport areas (LATAs). That voice, video and data are able to move at all through this hodgepodge of turf captains is a miracle. Moving data through these component parts efficiently and reliably would be an even greater miracle.

The political and economic stresses and strains in the relationships between these carriers can impact transfer rates through their cobbled networks. Each day, one of my clients wonders whether their transfer of less than a hundred gigabytes of data between Sacramento and the Silicon Valley, on a path "owned" by nine carriers, will take a few seconds or the entire day. There's simply no way to predict the transfer rates.

Add to the bureaucratic nonsense the sobering reality of routing protocols that deliver not the shortest physical path between point A and point B, but the path with the fewest router hops. Moving data through a public WAN is like taking a multisegment airplane trip from Tampa to Dallas: the carrier gets you to your destination not by flying straight across the Gulf of Mexico, but by routing you through Boston, Charlotte and Detroit before landing in Dallas.
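To make the hop-counting point concrete, here's a toy illustration I put together in Python (the topology and mileages are made up for the example, not real carrier data): the same little network yields one path when you minimize hops and a different, shorter-in-miles path when you minimize distance.

# Toy illustration (made-up topology and rough mileages, my own example):
# the fewest-hops path and the fewest-miles path are often not the same.
import heapq

# graph[node] = list of (neighbor, approximate miles)
graph = {
    "Tampa":   [("Atlanta", 460), ("Boston", 1200)],
    "Atlanta": [("Tampa", 460), ("Houston", 790)],
    "Houston": [("Atlanta", 790), ("Dallas", 240)],
    "Boston":  [("Tampa", 1200), ("Dallas", 1770)],
    "Dallas":  [("Houston", 240), ("Boston", 1770)],
}

def shortest(src, dst, weight):
    """Dijkstra's algorithm with a pluggable edge cost (1 per hop, or miles)."""
    heap, seen = [(0, src, [src])], set()
    while heap:
        cost, node, path = heapq.heappop(heap)
        if node == dst:
            return cost, path
        if node in seen:
            continue
        seen.add(node)
        for nbr, miles in graph[node]:
            heapq.heappush(heap, (cost + weight(miles), nbr, path + [nbr]))

print(shortest("Tampa", "Dallas", weight=lambda m: 1))  # fewest hops
print(shortest("Tampa", "Dallas", weight=lambda m: m))  # fewest miles
# Fewest hops picks the roughly 2,970-mile detour through Boston (2 hops);
# fewest miles picks the 1,490-mile route through Atlanta and Houston (3 hops).

Real route selection is far more involved than this, of course, but the mismatch between hop-count metrics and geography is the point.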

Then toss in routing problems within the switch fabric itself. Queuing delays, buffering delays and buffer bloat, dropped packets, resend requests and other processing delays can gum up the works and slow transfers even further. Pass-through technologies are limited in their ability to streamline or expedite traffic through heterogeneous infrastructure where the proprietary quality-of-service protocols provided by one switch vendor are dropped as soon as traffic hits a switch from a different vendor.

Adding more bandwidth doesn't fix the problem, nor does compressing or deduplicating data payloads. Think of a traffic jam on your favorite highway: that little Fiat 500 ahead of you isn't moving any faster than the big Peterbilt 18-wheeler in the next lane, is it?
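To put a rough number on why more bandwidth doesn't help, here's a quick back-of-the-envelope calculation in Python (my own illustration with assumed figures, not anyone's benchmark): a single TCP stream can't push more than its window size per round trip, no matter how fat the circuit is.

# Back-of-the-envelope: one TCP stream tops out at window_size / round_trip_time,
# regardless of the raw bandwidth of the link. The figures below are assumptions.
def max_tcp_throughput_mbps(window_bytes: int, rtt_ms: float) -> float:
    """Upper bound on a single TCP stream's throughput, in megabits per second."""
    return (window_bytes * 8) / (rtt_ms / 1000) / 1_000_000

window = 64 * 1024   # a common default window, in bytes
rtt = 70.0           # a plausible long-haul round trip, in milliseconds
print(f"{max_tcp_throughput_mbps(window, rtt):.1f} Mbps ceiling")
# ~7.5 Mbps -- and it's the same ceiling on a 100 Mbps circuit or a 10 Gbps one,
# which is why buying more bandwidth doesn't cure latency.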

It's irritating when vendors obfuscate these matters. I almost brought my ire to bear on one vendor last month when I received a press release proclaiming that Bridgeworks Limited had beaten the whole latency thing. Despite my anger, I sat for a couple of Web meetings with Bridgeworks (and with folks from Agilesys, which partners with them in the U.S.) to learn more about their SANSlide product.

The first thing I discovered was that the "no more latency" claim was a bit of hyperbole introduced by PR flacks. SANSlide doesn't fold space-time; it just optimizes the way connections are used and how data is placed onto the link, using "patented artificial intelligence" to fill the pipe. Bridgeworks claims -- backed by many customer accounts and endorsements -- that it improves the performance of data transfers across WANs by as much as 50x without requiring "warm" data, compressed data or deduplicated data, and without the use of hard disk caching, UDP or modified TCP/IP protocols.

My takeaway thus far (I plan to test their appliance soon) is that Bridgeworks isn't moving data more quickly through networks; it's packing data more efficiently onto pipes and balancing the load across multiple virtual and physical interconnects.
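If my read is right, the arithmetic works roughly like this (my own sketch of the approach, not Bridgeworks' code): each latency-bound connection is stuck at its window-per-round-trip ceiling, but enough of them running in parallel can fill the link.

# Sketch of the "fill the pipe" idea (my assumption about the approach, not
# Bridgeworks' implementation): N latency-bound streams in parallel can
# approach the link's real capacity even though each one crawls on its own.
def aggregate_throughput_mbps(streams: int, window_bytes: int,
                              rtt_ms: float, link_mbps: float) -> float:
    """Aggregate throughput of N window-limited streams, capped by the link."""
    per_stream_mbps = (window_bytes * 8) / (rtt_ms / 1000) / 1_000_000
    return min(streams * per_stream_mbps, link_mbps)

for n in (1, 8, 32, 128):
    rate = aggregate_throughput_mbps(n, 64 * 1024, 70.0, 1000.0)
    print(f"{n:>3} streams -> {rate:7.1f} Mbps of a 1 Gbps link")
# One stream crawls at ~7.5 Mbps; 128 streams can nearly saturate the link --
# no compression, deduplication or protocol changes required.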

In short, SANSlide optimizes the network capacity you're paying for by packing more luggage and passengers onto the plane, so to speak, so the payload gets to where it's needed as efficiently as possible. But you're still at the mercy of the CLECs, ILECs and intra-LATA IXCs when it comes to traffic routing, so we haven't really beaten the subluminal limits that are cooling enthusiasm for clouds and WAN-based data replication.

An important caveat: Not all data transfers are latency sensitive. Depending on how you intend to use your data -- how synchronized replicated data must be with source data, for example -- a certain amount of latency may be tolerable. In cases where latency isn't a significant issue, SANSlide technology may be a great choice for WAN optimization. It's possible to use a pair of SANSlide appliances to "black box" the WAN in much the same way channel extenders spoofed connections between the mainframe backplane and remote peripheral devices 20 years ago. To the mainframe, the channel extender appeared to be the peripheral, only it buffered the commands and data, and placed them on the WAN link for delivery at top speed. At the remote location, the peripheral device talked to the extender as though it was communicating with the mainframe backplane. This is very similar to the operational model of Bridgeworks' SANSlide.
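For readers who never met a channel extender, here's a stripped-down toy model in Python of that spoofing pattern (my own sketch under my own assumptions, not SANSlide's code): the local box acknowledges the host immediately, buffers the payload, and a background worker drains the buffer over the slow WAN.

# Toy model of the channel-extender pattern described above (my illustration,
# not SANSlide): acknowledge the host locally, buffer the data, and let a
# background worker deliver it over the slow WAN at its own pace.
import queue
import threading
import time

wan_buffer: "queue.Queue[bytes]" = queue.Queue()

def local_write(payload: bytes) -> str:
    """Host-facing side: accept the data, buffer it, acknowledge immediately."""
    wan_buffer.put(payload)
    return "ACK"  # the host sees local latency, not WAN latency

def wan_sender() -> None:
    """WAN-facing side: drain the buffer over the (simulated) slow link."""
    while True:
        chunk = wan_buffer.get()
        time.sleep(0.07)  # pretend each chunk costs a 70 ms WAN round trip
        print(f"delivered {len(chunk)} bytes to the remote appliance")
        wan_buffer.task_done()

threading.Thread(target=wan_sender, daemon=True).start()
for block in (b"A" * 4096, b"B" * 4096, b"C" * 4096):
    print(local_write(block))  # three instant ACKs back to the "mainframe"
wan_buffer.join()  # actual delivery trails behind at WAN speed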

It isn't a wormhole or a Tesseract, but it looks like pretty interesting technology.

About the author: 
Jon William Toigo is a 30-year IT veteran, CEO and managing principal of Toigo Partners International, and chairman of the Data Management Institute.

This was first published in July 2013
