Satellite offices and remote workers are changing the look of companies of all sizes, and backup technology is changing to keep pace. Learn which strategy is best for your remote office, and whether remote copies and tape are necessary.
Due to the wide distribution of corporate data across sites, organizations with remote offices/branch offices (ROBOs) are often challenged by the demands associated with backup and recovery. Enterprise Strategy Group (ESG) recently surveyed more than 450 IT professionals regarding people, process and technology at ROBO locations (“2011 Remote Office and Branch Office Technology Trends,” June 2011) and found that 59% of firms with fewer than 10 employees at ROBOs function without any local IT staff, even though 71% indicated that on-site storage is leveraged at some point in the backup processes at these locations. Both disk and tape storage systems remain the go-to components of most ROBO data protection strategies, but newer wide-area/remote backup technologies are garnering more serious consideration as a primary means of data backup. Specifically, 26% of organizations currently back up data from these locations over the WAN directly to a centralized corporate site vs. a mere 7% employing this methodology back in 2007.
Those with more storage capacity at ROBOs cited improving backup and recovery processes as a top IT priority. For example, ROBOs with more than 25 TB of storage capacity ranked this as their No. 1 priority, those with 1 TB to 25 TB ranked it second, and ROBOs with less than 1 TB ranked it fourth. Data growth is a contributing factor: the top ROBO data storage challenges include keeping pace with overall data growth, the need to improve backup and recovery processes, and storage system costs.
ROBO data protection strategies
There are many options available when planning and configuring a data protection strategy for ROBOs. Choices will depend on the availability of on-site staff, the volume of data to protect, corporate policies regarding retention and privacy/security, available bandwidth and the capabilities of the backup infrastructure.
Centralized backup with no ROBO-based copy: With this option, data is backed up directly to an off-site corporate location, such as a corporate headquarters (HQ) data center, with no on-site copy. All backup data is centralized and under the direct control of the IT organization. This ensures the security of the backup copies, and the ability to enforce requirements for corporate or regulatory mandates. It also eliminates the need for local backup infrastructure and personnel. The downside is that the bandwidth required between sites to transfer daily backup streams could be costly and/or it could take considerable time to transmit backup data to/from the central site -- unless source deduplication is employed to reduce the volume of data transferred between sites. That’s probably why ESG research found this to be the top method for companies with 1 TB or less of data to protect.
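To make the bandwidth concern concrete, here is a rough back-of-the-envelope sketch. The figures (a 500 GB nightly backup stream, a 100 Mbps WAN link, a 10:1 source deduplication ratio) are hypothetical illustrations, not numbers from the ESG survey:

```python
# Rough estimate of WAN transfer time for a nightly backup window.
# All figures below are hypothetical, chosen only to illustrate scale.

def transfer_hours(data_gb: float, link_mbps: float, dedup_ratio: float = 1.0) -> float:
    """Hours needed to send data_gb over a link_mbps link,
    after deduplication reduces the stream by dedup_ratio."""
    effective_gb = data_gb / dedup_ratio
    bits = effective_gb * 8 * 1000**3          # decimal gigabytes to bits
    seconds = bits / (link_mbps * 1000**2)     # megabits/s to bits/s
    return seconds / 3600

# 500 GB nightly backup over a 100 Mbps WAN link
print(round(transfer_hours(500, 100), 1))      # full stream: 11.1 hours
print(round(transfer_hours(500, 100, 10), 1))  # with 10:1 source dedup: 1.1 hours
```

The untreated stream overruns a typical overnight backup window, while source deduplication brings the same protection job comfortably inside it, which is why the smaller the ROBO data set, the more practical direct-to-HQ backup becomes.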
Software as a Service (SaaS) with no ROBO-based copy: Data is backed up to a third-party service provider’s cloud storage directly over the WAN, with no on-site copy. Similar to a centralized backup strategy, this approach maintains only a remote copy of data for recovery. After the initial configuration via a Web-based application, data is automatically backed up over a WAN connection at scheduled intervals to the service provider. Because data is transmitted over the WAN and there’s no on-premises copy, the pros and cons of the SaaS model are similar to the HQ centralized approach; however, backup data custody is with a third party, so you have to be comfortable with everything that accompanies that strategy. The most important thing here is to make sure you understand your service-level agreements (SLAs) and that they work for you.
Local-only backup: Data is backed up to on-site storage with no off-site copy. This approach ensures a duplicate copy of data is made, but doesn’t provide contingencies for an outage at the site. In the event data can’t be recovered locally, or a catastrophe destroys both the local copies and the originals, the data may be permanently lost.
Local backup with an off-site copy via tape media: Data is initially backed up to on-site storage and a copy is sent off site via removable media (typically tape). This approach is the most traditional and still one of the more popular ways to ensure a two-site copy strategy. The on-site copy can be disk- or tape-based (D2D2T or D2T2T), with backup to disk providing a few benefits: speed, the ability to deduplicate data and ease of remote management. The copy to tape, however, requires local tape equipment, media, and a mechanism to transport copies to the central HQ or a third-party storage facility. It also typically requires a local operator, especially when tape device or media errors need troubleshooting. Despite the constant talk about eliminating tape and the adoption of disk in backup processes, ESG research respondents reported this approach as the most popular overall.
Local backup with an off-site copy sent over the WAN to HQ: Data is backed up to on-site storage and a copy is transmitted to a central corporate location over the WAN. With a disk-to-disk-to-disk (D2D2D) configuration, IT organizations can more easily manage backup operations from a remote location and reduce or eliminate ROBO-based staff. This method has gained in popularity over the last few years, mainly driven by lower disk costs, data deduplication and optimized replication between backup disk targets. The optimization introduced through deduplication delivers more efficient use of bandwidth and storage. The main downside is bulk recovery from the HQ copy: in the unlikely event a large recovery is required from the central site, it may be faster to ship a portable disk to the ROBO than to restore the data over the existing WAN link. This approach is more often adopted by organizations with higher volumes of data to protect.
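The bandwidth efficiency mentioned above comes from content-based deduplication: the backup target indexes data chunks by hash, and only chunks it hasn’t seen before need to cross the WAN. A minimal sketch of the idea follows; commercial products use variable-size chunking and far more elaborate indexing, so fixed 4 KB chunks and SHA-256 here are illustrative simplifications:

```python
import hashlib

def dedup_chunks(data: bytes, chunk_size: int = 4096):
    """Split a backup stream into fixed-size chunks and deduplicate them.
    Returns (store, recipe): 'store' maps hash -> unique chunk bytes
    (what would actually be replicated over the WAN); 'recipe' is the
    ordered list of chunk hashes needed to reconstruct the stream."""
    store = {}
    recipe = []
    for i in range(0, len(data), chunk_size):
        chunk = data[i:i + chunk_size]
        digest = hashlib.sha256(chunk).hexdigest()
        store.setdefault(digest, chunk)   # transmit only first occurrence
        recipe.append(digest)
    return store, recipe

# A highly repetitive backup stream: 100 copies of the same 4 KB block
stream = b"x" * 4096 * 100
store, recipe = dedup_chunks(stream)
print(len(recipe), len(store))   # prints: 100 1 -- 100 references, 1 unique chunk
```

Since nightly backups of the same systems change relatively little day to day, real-world streams behave much like this example, which is why deduplicated replication made WAN-based off-site copies practical.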
Local backup with an off-site copy sent over the WAN to the cloud: Data is initially backed up to on-site storage and a copy is then sent to a third-party cloud storage provider. The disk-to-disk-to-cloud (D2D2C) scenario uses local disk for most recoveries, while public cloud storage provides the repository for long-term data retention. Organizations get faster operational recovery from disk; however, rapid recovery from the cloud may prove challenging for larger data sets.
BIO: Lauren Whitehouse is a senior analyst focusing on backup and recovery software and replication solutions at Enterprise Strategy Group, Milford, Mass.