Remote offices: Keep them working well and keep them safe

Arun Taneja addresses the remote office revolution and explains how and why you should keep a close eye on these sites.


Pretty much since the birth of computers, remote offices have been a pain in the butt for IT. Before client/server computing became the standard, mainframes did all the crunching at the data center and presented the results on a terminal. If that terminal happened to be at HQ, you got a response in a reasonable amount of time. If it happened to be far away, you got what you got. Yes, all kinds of terminal server software was created to serve remote users better but, all in all, the performance stunk. At least a few of you out there remember 1,400 bits per second modems. (Yes, you read that right. There is no K in it. One could almost carry the bits to the other side faster on foot with sneakers on.)

Then we went to client/server computing and basically pushed additional computing load onto the client machine. Things got better, granted, but for remote users performance was still miserable. IT did what it needed to do to maintain some level of peace. They implemented application servers, file servers, NAS, print servers, Web servers and email servers locally in the remote offices, albeit in smaller configurations than in the data centers. Then, to make data sharable, they replicated what they needed to between offices. Data spread out and multiple copies existed, but at least the job got done and the users didn't revolt.

But because remote offices not only created valuable data of their own but also modified data from HQ and sent it back from time to time, that data needed to be protected locally. Lo and behold, we had to add a media server, backup software and perhaps a tape autoloader to each branch office. The backed-up data then needed to be sent somewhere for protection from disasters.

A number of schemes became popular over the years, but the most common included shipping tapes daily to HQ, or replicating backup data electronically to HQ or to a third-party vault. It is hard to believe, but that is the state of the art today. Most remote offices now look like mini data centers with one key ingredient missing: local IT expertise. Little surprise that the remote user is up in arms and feels neglected. Good or bad, that is how we have managed so far.

But not anymore! There are three reasons the status quo is unacceptable going forward. First, the sheer amount of data being created is putting enormous pressure on this methodology. Second, in a day and age when even design and testing work is outsourced to places like India and China, remote offices are no longer a shadow of HQ. They create large amounts of extremely valuable data that needs the same care and feeding as the most critical data generated at HQ. Third, and probably most critical, the regulatory bodies have basically said, "We don't care where you create or manipulate data; all data must be treated in such a way as to meet the regulation." That means you can't say you lost the Social Security numbers of 10,000 customers because the person in the branch office backed up the data that night, took it home to ship to HQ the next day, and lost his briefcase in the bar where the poor guy, after a hard day's work, stopped for just one beer. No, that is not acceptable anymore. Yes, someone can possibly go to jail for losing that data, but it may not be the person who lost it. It may be the CIO… or the CEO.

So what is IT to do? Fortunately, a few technologies described as wide area file services (WAFS) or wide area data services (WDS) have come to the rescue just in time. Basically, they all do one thing: They let you drastically reduce IT infrastructure in the remote location and consolidate all computing in the data center(s), yet still deliver LAN-like performance to users in the remote offices. There are three segments. One, WAFS products focus solely on eliminating file servers or NAS boxes in the branch offices. Two, WAN optimization products attack each layer in the OSI stack to optimize its performance. Techniques vary, but making TCP/IP more efficient by reducing its "chattiness" is a common trick. Some products add quality of service (QoS) by giving preferential treatment to data the company considers more important. Yet others increase the TCP "window size" so they can send more data in one gulp.
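To make the "chattiness" point concrete, here is a minimal back-of-the-envelope sketch in Python. The round-trip time, link speed and operation count are assumptions chosen purely for illustration, and the helper functions are hypothetical; no vendor's algorithm is being described.

```python
RTT_S = 0.060             # assumed 60 ms round trip over a typical WAN
LINK_BPS = 45_000_000     # assumed 45 Mbit/s (T3-class) pipe

def chatty_time(round_trips: int, payload_bytes: int) -> float:
    """Each small operation waits a full round trip before the next one starts."""
    serialization = payload_bytes * 8 / LINK_BPS
    return round_trips * RTT_S + serialization

def batched_time(payload_bytes: int) -> float:
    """All operations coalesced by the optimizer into a single round trip."""
    return 1 * RTT_S + payload_bytes * 8 / LINK_BPS

# Opening a 1 MB Office file over a chatty protocol can involve hundreds of tiny
# synchronous operations; assume 400 of them here.
payload = 1_000_000
print(f"chatty : {chatty_time(400, payload):5.1f} s")   # ~24 s, dominated by latency
print(f"batched: {batched_time(payload):5.2f} s")       # ~0.24 s on the same pipe
```

The pipe never changes in this sketch; only the number of round trips does, which is exactly where these products earn their keep.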

The third segment is application acceleration. In a way, WAFS products belong to this segment: They attack the application layer and make it more efficient for long-distance transmission. In the case of WAFS, that means the NFS and CIFS layers. Some products do this by caching data in a remote-office appliance; others do it by eliminating the chattiness of NFS and CIFS. Outside of WAFS, products attack the MAPI layer, for instance, and improve performance for centralized Exchange servers. Others attack the HTTP layer. Or FTP. Or SAP. Or PeopleSoft. Or whatever. The idea is to eliminate the application servers in the remote office, centralize them all at HQ and yet deliver LAN-like performance to remote users.
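The caching approach is easiest to picture with a toy example. The Python sketch below shows only the bare idea of a branch-office cache; the class, the stand-in fetch function and the time-to-live value are hypothetical, and everything a real WAFS appliance must handle (file locking, coherency, full CIFS/NFS semantics) is left out.

```python
import time
from typing import Callable, Dict, Tuple

class BranchFileCache:
    """Serve repeat reads from the local appliance; cross the WAN only on a miss."""

    def __init__(self, wan_fetch: Callable[[str], bytes], ttl_s: float = 300.0):
        self._wan_fetch = wan_fetch                    # round trip to the HQ file server
        self._cache: Dict[str, Tuple[float, bytes]] = {}
        self._ttl_s = ttl_s                            # how long a cached copy is trusted

    def read(self, path: str) -> bytes:
        entry = self._cache.get(path)
        if entry is not None and time.time() - entry[0] < self._ttl_s:
            return entry[1]                            # LAN-speed hit, no WAN round trip
        data = self._wan_fetch(path)                   # slow path: go to the data center
        self._cache[path] = (time.time(), data)
        return data

# Usage with a stand-in for the WAN call:
def fetch_from_hq(path: str) -> bytes:
    time.sleep(0.06)                                   # assumed 60 ms WAN round trip
    return b"contents of " + path.encode()

cache = BranchFileCache(fetch_from_hq)
cache.read("/projects/design.doc")                     # first read pays the WAN penalty
cache.read("/projects/design.doc")                     # repeat read is served locally
```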

The question you must be asking is, "Why couldn't I do this yesterday, and why now?" The answer lies in the difficulty of dealing with latency. Think of it as having to do with the speed of light. While 186,000 miles per second sounds faster than anything one can imagine, it is painfully slow in the context of computer science. Think of it this way: It takes 16 milliseconds (msec) for light to travel from NYC to San Francisco (3,000 miles, as the crow flies). As a packet flies over a typical WAN, it takes more like 50 to 60 msec, and a round trip doubles that. Without going too deep into the details, that means even if the WAN pipe were an OC-48 (2.5 gigabits per second), application throughput would still be limited by the speed of light. Quadruple the WAN bandwidth and the application's throughput is still limited by the speed of light. IT has always had a tendency to add bandwidth every time application performance declined. More often than not, because of this latency, adding bandwidth makes little or no difference.
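For those who want to check the arithmetic, here is a short sketch. The 64 KB window and 120 msec round trip are assumptions; the point is that the window-divided-by-RTT ceiling, not the pipe, sets the per-flow throughput, so a fatter pipe by itself changes nothing.

```python
SPEED_OF_LIGHT_MI_S = 186_000
distance_mi = 3_000                              # NYC to San Francisco, as the crow flies
print(f"one-way light time: {distance_mi / SPEED_OF_LIGHT_MI_S * 1000:.0f} ms")  # ~16 ms

def per_flow_throughput_bps(link_bps: float, window_bytes: int, rtt_s: float) -> float:
    """A single TCP flow can never exceed the pipe or one window per round trip."""
    return min(link_bps, window_bytes * 8 / rtt_s)

WINDOW = 64 * 1024                               # classic default TCP window
RTT = 0.120                                      # assumed 60 ms each way over a real WAN
OC48 = 2.5e9
for label, link_bps in [("OC-48", OC48), ("4 x OC-48", 4 * OC48)]:
    mbps = per_flow_throughput_bps(link_bps, WINDOW, RTT) / 1e6
    print(f"{label}: ~{mbps:.1f} Mbit/s per flow") # same answer either way: ~4.4 Mbit/s
```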

No, we have not found a way to increase the speed of light. What we have found is a way to cleverly reduce the number of round trips required to complete a task. We have also found ways to send pieces of data to the other side in advance of their being asked for, and other techniques work along similar lines. Taken together, these have created the WDS and WAFS categories, and the products are here today from several companies. While there are too many to mention here, a very partial list includes Availl, Cisco, DiskSites (now Expand), F5, Juniper, Orbital, Riverbed, Silver Peak and Tacit Networks (now Packeteer). You can write to me at arunt@tanejagroup.com for more details on these and others if you like.
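Both tricks, fewer trips and sending data ahead of the request, can be sketched in a few lines. The read-ahead client below is hypothetical: it assumes a fetch call that can return several blocks in one round trip and uses a crude sequential-access heuristic, which is far simpler than what shipping products actually do.

```python
from typing import Callable, Dict, List, Tuple

class ReadAheadClient:
    """Detect sequential reads and pull the next block in the same WAN round trip."""

    def __init__(self, wan_fetch: Callable[[str, List[int]], Dict[int, bytes]]):
        self._wan_fetch = wan_fetch                       # one call = one WAN round trip
        self._ready: Dict[Tuple[str, int], bytes] = {}    # blocks that arrived early
        self._last: Dict[str, int] = {}                   # last block read per file

    def read_block(self, path: str, block_no: int) -> bytes:
        if (path, block_no) in self._ready:
            return self._ready.pop((path, block_no))      # no round trip at all
        wanted = [block_no]
        if self._last.get(path) == block_no - 1:          # looks sequential: read ahead
            wanted.append(block_no + 1)
        blocks = self._wan_fetch(path, wanted)            # one round trip for both blocks
        self._last[path] = block_no
        for n, data in blocks.items():
            if n != block_no:
                self._ready[(path, n)] = data             # stash the speculative block
        return blocks[block_no]

# Usage with a stand-in fetch that would cost one round trip per call:
def fetch_blocks(path: str, block_nos: List[int]) -> Dict[int, bytes]:
    return {n: f"{path}:{n}".encode() for n in block_nos}

client = ReadAheadClient(fetch_blocks)
client.read_block("/data/model.bin", 0)   # WAN trip for block 0
client.read_block("/data/model.bin", 1)   # WAN trip for blocks 1 and 2 together
client.read_block("/data/model.bin", 2)   # served from the read-ahead buffer
```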

So my advice to IT: If you haven't started to evaluate these products, do so now before the inevitable happens… you lose a customer's sensitive data and the long arm of the law reaches over to you. At the speed of light.

About the author: Arun Taneja is the founder and consulting analyst for the Taneja Group. Taneja writes columns and answers questions about data management and related topics.

 
