Endpoint device management key to controlling corporate data

Storage expert Jon Toigo analyzes how security, bandwidth constraints and cloud services factor into endpoint device management strategies.


What exactly does endpoint device management mean to storage pros? To answer that question, let's define an endpoint device. The label can be applied to just about any device that connects to the edge of the corporate network, from small network-attached storage arrays comprising a few disk drives to end-user devices such as tablets, smartphones and laptops.

Looking only at the latter, many storage industry analysts predict that a fast-growing share of mobile workers -- upwards of 80% by some estimates -- will use endpoint devices within the next year or two. That shift will require businesses to quickly develop an IT strategy to support these devices and manage the data they use.

Developing this strategy gains urgency if endpoint devices contain corporate data and if end users currently select their own mechanisms to back up data, sync it with desktop PCs or cloud services such as Dropbox or Box.net, and archive it. Failing to create a strategy for managing endpoint device data could compromise data integrity, security, e-discovery, and regulatory and legal compliance, thereby increasing the risk of data loss, unauthorized disclosure and legal liability. In truth, some of this concern is premature; most endpoint devices are too limited, both as network clients and as storage devices, to do much more than passively read and write email.

Still, the situation can be high risk. Corporate IT exercises virtually no control over these devices or their data, and there is no ready way to search the contents of this distributed data set. Meanwhile, frustrated by the difficulties of secure virtual private networks, WANs and other troublesome aspects of transferring data to a secure corporate infrastructure, many users are turning to more accessible consumer offerings for online backup, file sharing, storage and other "public storage cloud" services. That complicates data management and data security, and places data at even higher risk. There is no easy fix for these challenges. The simplistic approach, a bring-your-own-device (BYOD) policy that limits or restricts the use of endpoint devices, does not take into account that certain employees are increasingly mobile and have a genuine need to access data on a portable platform.

A good way to decide how to handle the data management requirements of mobile computing is to identify how many users rely on mobile endpoints, which endpoint devices they use, what type of data is stored and how much data accumulates on an endpoint device in a work period (day, week or month). This information helps establish a range of options to support the mobility and endpoint device usage being sought.

The characteristics of the data -- particularly the volume created or consumed -- will help define the bandwidth requirements and the viability of any mobility and management offering.
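To put numbers behind that, here is a minimal sizing sketch in Python. The user count, per-user data volume and transfer window are hypothetical placeholders; substitute figures gathered from your own inventory of mobile users.

    # Rough sizing sketch: sustained bandwidth needed to move each mobile
    # user's daily data back to the corporate data center.
    # All inputs are hypothetical placeholders, not measurements.

    def required_mbps(users: int, mb_per_user_per_day: float,
                      window_hours: float) -> float:
        """Sustained throughput (Mbps) needed to drain one day's data
        within the given transfer window."""
        total_megabits = users * mb_per_user_per_day * 8  # MB -> megabits
        return total_megabits / (window_hours * 3600)

    # Example: 500 mobile users, 200 MB of new data each per day,
    # synced during an 8-hour overnight window.
    print(f"{required_mbps(500, 200, 8):.1f} Mbps sustained")  # ~27.8 Mbps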

For example, 4G cellular networks, supposedly equipped with the speeds and feeds required to support mobility requirements, aren't quite there yet. According to International Telecommunication Union standards, 4G is supposed to provide a 100 Mbps connection. However, today's quasi-4G networks (4G LTE, 4G WiMAX or 3G HSPA+) actually deliver speeds ranging between 10 Mbps and 12 Mbps, and only in select locations.

Analysts don't expect this situation to improve much through 2014, in part because the boom in wireless data traffic has created a spectrum deficit. Traffic growth per cell site has climbed from 100% per year in 2009 to 612% in 2012, and it is expected to exceed 1,250% by 2014. At the same time, a spectrum deficit is expected to begin this year and to reach a shortfall of 275 MHz relative to the spectrum needed to handle the traffic load.

Evaluating endpoint device response time and bandwidth delay

But that connection is just the on-ramp between the endpoint device (which tends to use cellular communications techniques) and the WAN across which most data travels to the data center. The WAN itself must also be evaluated and tested, because distance-induced latency and WAN jitter erode throughput. At a minimum, you need to test the bandwidth-delay product of the network interconnect between endpoint device WAN access points and the corporate environment: link capacity in bits per second multiplied by end-to-end delay in seconds equals the bandwidth-delay product in bits.
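As a quick illustration, the following Python sketch computes the bandwidth-delay product for a hypothetical 10 Mbps link with 80 milliseconds of end-to-end delay; both figures are examples, not measurements.

    # Bandwidth-delay product: link capacity (bits/s) x end-to-end delay (s).
    # It approximates how many bits are "in flight" on the path and, roughly,
    # how large a TCP window must be to keep the link full.

    def bdp_bits(link_bps: float, delay_seconds: float) -> float:
        return link_bps * delay_seconds

    link = 10_000_000   # 10 Mbps, typical of today's quasi-4G networks
    delay = 0.080       # 80 ms end-to-end delay (hypothetical)
    bdp = bdp_bits(link, delay)
    print(f"BDP: {bdp:,.0f} bits (~{bdp / 8 / 1024:.0f} KB in flight)")

If the sender's window is smaller than the bandwidth-delay product, the link can never be kept full, no matter what its nominal speed.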

If remote application access is sought, response time requirements need to be observed. IBM Corp. performed some seminal research into minimum acceptable response times and found that high-productivity applications could tolerate delays of less than 2 seconds, fully interactive applications could tolerate delays of 2 seconds to 12 seconds, and some batch programs could handle delays of up to 10 minutes. However, those studies do not reflect the latency requirements of today's applications: SQL Server will fail if a transaction response is not received within 100 milliseconds, and Exchange will experience problems with transaction latency greater than 50 milliseconds.
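The following sketch simply compares a measured latency against the tolerance figures cited above; the thresholds mirror the article's numbers, and the 120 ms measurement is a hypothetical example.

    # Check a measured latency against the tolerance figures cited above.
    THRESHOLDS_MS = {
        "SQL Server transaction": 100,    # fails beyond ~100 ms
        "Exchange transaction": 50,       # degrades beyond ~50 ms
        "high-productivity app": 2_000,   # tolerates < 2 s (IBM research)
        "interactive app": 12_000,        # tolerates 2 s to 12 s
    }

    def check(measured_ms: float) -> None:
        for workload, limit in THRESHOLDS_MS.items():
            status = "OK" if measured_ms <= limit else "AT RISK"
            print(f"{workload:<26} limit {limit:>6} ms -> {status}")

    check(measured_ms=120)  # e.g., a 120 ms WAN round trip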

The bottom line on latency and jitter is that a data transfer will take a different amount of time over WAN links of different speeds, but the nominal transfer rates of the various link services cannot be trusted. Nominally, it would take approximately 2.25 hours to transfer 10 TB of data across an OC-192 link, and more than a year to move the same amount over a T1 link. In actual operation, these transfers may take much longer due to the impact of routing protocols, network congestion, time of day, the number of network operators in the path and so on.
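Those figures are easy to verify: transfer time is simply data volume divided by link rate. The short Python sketch below reproduces them using nominal link speeds (OC-192 at roughly 9,953 Mbps, T1 at 1.544 Mbps).

    # Nominal transfer time = data volume / link rate. Real transfers run
    # slower due to protocol overhead, congestion and routing, as noted above.

    def transfer_hours(terabytes: float, link_mbps: float) -> float:
        bits = terabytes * 1e12 * 8            # decimal TB -> bits
        return bits / (link_mbps * 1e6) / 3600

    print(f"OC-192: {transfer_hours(10, 9953):.2f} hours")            # ~2.2 hours
    print(f"T1: {transfer_hours(10, 1.544) / (24 * 365):.1f} years")  # ~1.6 years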

It is worth noting that deduplication and compression do not alleviate transfer delay. In a traffic jam, an 18-wheeler moves just as slowly as a small, efficient hybrid.

The above conditions affect both the transfer of data and the remote operation of applications, whether they are hosted on corporate premises or in a "cloud." The problem with some clouds -- in addition to their known security deficits, unpredictable service-level agreements and potentially high cost -- is that many vendors have aligned their architectures with one of several server hypervisor software packages, limiting which devices and applications can be used with the service.

Despite these challenges, planners may still be able to find a way (whether privately or publicly hosted) to provide data access for mobile users, preferably in a manner that does not store data on the mobile endpoint device itself. Where data is centralized, it can be "managed" (replicated, protected, encrypted, compressed, frozen in the face of pending litigation, archived and so on) more efficiently via policy. It is worth examining private cloud software, particularly private storage cloud software or storage virtualization software, to identify the best way to host data so that protective services can be applied in the most expedient manner. E-discovery/indexing software can also classify your data so that appropriate data management policies can be created and applied, as sketched below.
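As a closing illustration, here is a hypothetical Python sketch of what policy-driven management of centralized data might look like: classify each file, then look up which protective services apply. The class names, policy map and toy classify() heuristic are all invented for illustration; a real deployment would rely on e-discovery/indexing software for classification.

    # Hypothetical policy map: data class -> protective services to apply.
    POLICIES = {
        "regulated": ["encrypt", "replicate", "archive", "legal hold"],
        "business":  ["encrypt", "replicate", "back up"],
        "general":   ["back up", "compress"],
    }

    def classify(path: str) -> str:
        """Toy stand-in for e-discovery/indexing classification."""
        if "finance" in path or "hr" in path:
            return "regulated"
        if path.endswith((".docx", ".xlsx", ".pptx")):
            return "business"
        return "general"

    for f in ["/share/finance/q1-results.xlsx", "/share/eng/notes.txt"]:
        cls = classify(f)
        print(f"{f}: class={cls}, services={POLICIES[cls]}")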

This was first published in March 2013
