Cost-effective business continuity

There are no cookie-cutter business continuity implementations. If disaster strikes, each organization has its own unique requirements to make sure its business can continue.

Outside the disaster horizon

A key requirement of business continuity is that there's sufficient distance between data storage locations so that a disaster that wipes out one site is unlikely to wipe out another site.

In fact, an organization should consider creating several types of business continuity solutions so it can respond to different types of risks. This article and a second one being published next month will look at some of the issues, technologies and products to keep your business afloat in case of a disaster.

Traditional disk mirroring offers a familiar method for business continuity that assumes low latency and high bandwidth connections. This can be done with dark fiber or dense wavelength division multiplexing (DWDM). DWDM is a technology that puts data from different sources together on an optical fiber, with each signal carried at the same time on its own separate light wavelength. Unfortunately, costs for those solutions may be higher than many budgets allow.

Alternatively, store-and-forward solutions in disk subsystems have been successful in providing both short- and long-distance business continuity. In all cases, synchronous acknowledgements should be favored over asynchronous ones until it's clear they will hurt host system and/or application performance.
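As a rough illustration of that trade-off, here is a minimal Python sketch of the two acknowledgement models; local_write, remote_write and the pending list are hypothetical stand-ins for what a disk subsystem actually does in firmware.

```python
pending = []  # writes queued for later transmission to the remote site

def local_write(block):
    """Hypothetical stand-in for a write to the local disk target."""

def remote_write(block):
    """Hypothetical stand-in for a write sent over the MAN/WAN link."""

def synchronous_write(block):
    # The host's write completes only after the remote copy is acknowledged,
    # so every application write pays the round-trip latency to remote storage.
    local_write(block)
    remote_write(block)   # waits for the remote acknowledgement
    return "ack"

def asynchronous_write(block):
    # The host's write completes as soon as the local copy lands; the remote
    # copy is forwarded later, so the most recent writes may be lost in a disaster.
    local_write(block)
    pending.append(block)  # transmitted to remote storage in the background
    return "ack"
```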

Local and remote storage
Spanning extended distances means that the assumptions about bandwidth and cost must be reevaluated. MAN and WAN networks used for business continuity are usually much slower and/or much more expensive than the buses and networks used for local storage. In addition, it's difficult to maintain low latency communications to remote storage. Depending on the distance involved and the application's latency requirements, there may be several compromises involved in building a business continuity solution.

In general, there are no hard rules for what distance qualifies as remote storage. One organization could consider remote storage to be five miles away, while another organization might consider it to be 100 miles away. In fact, it's highly likely that different applications and platforms could have different definitions for remote storage distances based on the relative importance of the data and bandwidth costs.

It's not necessary to dedicate a whole storage subsystem for remote storage. Selected resources--such as certain LUNs exported from a disk subsystem--can be used as remote storage. For example, an organization might have two data centers in two different geographies where some of the storage resources on disk subsystems in both locations would be used for remote storage by the other data center.
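To make the cross-replication idea concrete, here is a hedged sketch with invented site and LUN names; a real subsystem would express these pairings through its own management interface.

```python
# Hypothetical replication map: each data center exports a handful of LUNs as
# remote storage for the other site instead of dedicating a whole subsystem.
# Site names and LUN numbers are invented for illustration only.
replication_pairs = [
    ("datacenter-east:LUN12", "datacenter-west:LUN40"),
    ("datacenter-east:LUN13", "datacenter-west:LUN41"),
    ("datacenter-west:LUN07", "datacenter-east:LUN22"),
]

for source, target in replication_pairs:
    print(f"{source} replicates to {target}")
```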

Disadvantages of disk mirroring for disaster recovery
Overlapped reads are a drag on performance over long-distance links.
Disk mirroring needs to be implemented over fast, short links.
Both writes and reads are sent to the disk targets, rather than just the writes, consuming expensive bandwidth. For business continuity, the only I/Os of interest are writes because reads create no new data to copy to remote storage.

Disk mirroring
There are essentially two different techniques for creating redundant copies of data for business continuity purposes. The first is host-based disk mirroring and the second is subsystem-based store and forward. Disk mirroring is one of the most basic and common forms of data redundancy protection. The concept of disk mirroring is simple: For every storage I/O created in a host system, two identical I/Os are sent to different disk targets. If one of the disks fails, the system can continue to work with the disk that's still functioning. Host-based disk mirroring has been implemented in several different kinds of products, including operating system software, volume management software, device driver software for host adapters, hardware chips on host adapters and storage subsystem controllers.
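A minimal sketch of that concept, assuming a hypothetical Disk class; real mirroring is implemented in the OS, volume manager, host adapter or subsystem controller rather than in application code.

```python
class Disk:
    """Hypothetical disk target used only to illustrate the mirroring concept."""
    def __init__(self, name):
        self.name = name
        self.blocks = {}
        self.online = True

    def write(self, lba, data):
        self.blocks[lba] = data

def mirrored_write(primary, secondary, lba, data):
    # Every host write is issued to both disk targets; if one target fails,
    # the system keeps running against the surviving copy.
    for disk in (primary, secondary):
        if disk.online:
            disk.write(lba, data)

local = Disk("local")
remote = Disk("remote")
mirrored_write(local, remote, lba=100, data=b"payload")
```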

Disk mirroring can provide performance benefits by overlapping read I/Os across both local disk targets. For example, one group of read I/Os can be sent to one disk target while another group is sent to the other in a way that minimizes the total seek time, and hence latency, on both drives. The benefits of overlapped read I/Os depend on low latency, short-distance local storage connections. While overlapped reads could be deployed in a business continuity environment, it would be counterproductive to send any reads over a slow remote connection with much higher latency. Depending on the vendor and product, it may be possible to configure the disk mirroring product not to use overlapped reads.
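A toy sketch of that scheduling decision, not any vendor's algorithm: when the second mirror member sits across a slow link, the batch of reads stays local.

```python
def schedule_reads(read_lbas, remote_is_slow=True):
    """Split a batch of read I/Os across the two mirror members.

    A toy stand-in for a seek-minimizing scheduler; real products also weigh
    drive geometry and queue depth. With a slow remote member, all reads stay local.
    """
    if remote_is_slow:
        return {"local": list(read_lbas), "remote": []}
    # Alternate reads between the members so both drives seek in parallel.
    return {"local": list(read_lbas[0::2]), "remote": list(read_lbas[1::2])}

print(schedule_reads([100, 101, 102, 103], remote_is_slow=False))
print(schedule_reads([100, 101, 102, 103]))  # remote link too slow for overlapped reads
```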

Besides the possibility of overlapped reads impacting performance, there are two other primary problems with using disk mirroring for business continuity. The first is that disk mirroring has typically been implemented to work over fast, short links. When longer, slower links are used, host system performance will be adversely affected, and it's possible that disk timeouts could occur, resulting in the mirroring operation taking the remote disk target offline.

The other shortcoming of disk mirroring is that both reads and writes are sent to the disk targets, rather than just the writes. For many applications, the ratio of reads to writes is approximately three to one, which means disk mirroring consumes a fair amount of the available, expensive bandwidth. Not only that, but for the purposes of business continuity, the only I/Os of interest are writes because reads create no new data to copy to remote storage. The situation is upside down--reads dominate I/O activity and take most of the bandwidth, even though there's no requirement to transmit them over a long-distance connection.
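A quick back-of-the-envelope check of that ratio (a worked example, not a measurement):

```python
# With a 3:1 read-to-write ratio, only a quarter of the I/O stream is writes,
# yet those are the only I/Os that must reach remote storage for continuity.
reads, writes = 3, 1
total = reads + writes
print(f"writes: {writes / total:.0%} of I/O (must travel to remote storage)")
print(f"reads:  {reads / total:.0%} of I/O (create no new data to protect)")
```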

Advantages of store and forward for disaster recovery
Designed specifically for business continuity applications.
The host system generates only a single write I/O, so there's no problem with read I/Os taking the lion's share of the available MAN or WAN bandwidth.
The subsystem manages the transmission details, including acknowledgements and any error recovery (see the sketch below).
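A hedged sketch of the store-and-forward pattern this list describes; local_commit and send_to_remote are hypothetical placeholders, and real subsystems implement the queueing and retry logic in controller firmware.

```python
import queue
import threading
import time

forward_queue = queue.Queue()

def local_commit(block):
    """Hypothetical placeholder: commit the write inside the local subsystem."""

def send_to_remote(block):
    """Hypothetical placeholder: transmit the write over the MAN/WAN link."""

def host_write(block):
    # The host issues a single write; the subsystem stores it locally,
    # acknowledges immediately, and forwards a copy on its own schedule.
    local_commit(block)
    forward_queue.put(block)
    return "ack"   # the host never waits on the long-distance link

def forwarder():
    # The subsystem, not the host, owns transmission, acknowledgement and
    # error recovery for the remote copy.
    while True:
        block = forward_queue.get()
        for attempt in range(3):
            try:
                send_to_remote(block)
                break
            except OSError:
                time.sleep(2 ** attempt)   # simple back-off before retrying

threading.Thread(target=forwarder, daemon=True).start()
```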

You'd want to use disk mirroring for business continuity when the long-distance connection resembles a local connection--when it's fast and has low latency. One scenario is when the distance is short enough for dark fiber (private fiber optic cable) between local and remote storage. For example, people often cite Fibre Channel's (FC) supported distance of 10 km over single-mode fiber optic cable. Many FC users implement remote storage this way because the I/O performance problems discussed above are alleviated; access to the remote disk target is practically the same as to local disk targets.

In addition to using dark fiber, you can use DWDM optical networking technology to span the distance between local and remote storage. Business continuity traffic over DWDM is similar to dark fiber by virtue of being a high bandwidth, low latency connection. The main advantage of DWDM is its ability to span greater distances than dark fiber and use services provided by public network service providers.

In a nutshell, DWDM provides the underlying optical network transport for carrying virtually any kind of physical network, including FC, FICON and Gigabit Ethernet. Business continuity is one of the fastest growing applications that takes advantage of this incredible technology. (See "DWDM can connect distant Fibre Channel nets")

Another option for extending high bandwidth connectivity is to use special-purpose optical line drivers that support distances in excess of 30 km and can reach up to 100 km through the use of repeaters. This technology has ample bandwidth, but can be expensive to implement unless you happen to already own the right of way to string fiber cables over that distance. The latencies are higher due to the increased distances, but there are many applications that won't notice the impact of disk mirroring over this type of cable plant.
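For a sense of scale, here is a rough propagation-delay calculation for the distances mentioned above, assuming light travels through fiber at roughly 200,000 km/s; switching and protocol overhead would add to these figures.

```python
# Propagation delay alone, before any switching or protocol overhead.
KM_PER_MS = 200.0   # ~200,000 km/s in fiber, about two-thirds the speed of light

for distance_km in (10, 30, 100):
    round_trip_ms = 2 * distance_km / KM_PER_MS
    print(f"{distance_km:>3} km: ~{round_trip_ms:.1f} ms round trip")
```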

As high-speed optical networking capabilities are brought to the market along with improvements in switching latency, disk mirroring will become more viable for business continuity than it is today. Until then, most organizations will find it more advantageous to use a store-and-forward method.

This was first published in June 2003
