Controlling the storage sprawl

Consolidation, automation and cost control are key to managing storage across multiple data centers.

What you will learn from this tip: How some companies have evaluated their infrastructure and consolidated their data centers.


While today's technology wisdom frequently centers on maximizing existing resources and doing more with less, the reality is that many companies still find themselves managing a sprawling conglomeration of data centers scattered across the country, and perhaps even the globe.

"I would say that if you look at the Global 2000, the probability that each has more than one data center is almost 100%," says Arun Taneja, founder and consulting analyst of the Taneja Group, a Hopkinton, Mass.-based research company that specializes in storage technology.

A number of factors can cause companies to find themselves with multiple data centers -- mergers and acquisitions bring new centers into the fold, while decentralized management can allow different divisions within a company to build separate technology centers.

Related information

Data center strategies

The future of the data center

Tech Roundup: Disaster recovery tools

Then, too, regulatory compliance can cause companies to build a separate data center for disaster recovery (DR) purposes, says Tony Asaro, senior analyst at Enterprise Strategy Group. "Many of the financial firms in Manhattan are required by law to essentially implement DR environments," he says.

This translates into a challenge for storage managers, as they try to build a centralized management strategy that allows them to direct their assets as efficiently as possible. Any storage strategy should be part of an overall effort to bring data center sprawl under control, and the following tips can help companies effectively manage storage across multiple data centers.

Consolidate what you can

For the past several years, companies have sought to evaluate their infrastructure and consolidate data centers down to a more easily managed number, and there is no sign that the trend is abating. "Consolidation is happening on all levels as companies shut down data centers because of the high capital and support costs," Asaro says.

Data center consolidation is best done in stages, with the timeline dependent on the size of the project. Planning happens at the senior executive level, with chief information officers working with business managers to evaluate business processes and locations, and to create a consolidation plan that maximizes efficiency and minimizes performance issues.

A good start would be to consolidate satellite office data center operations into larger regional data centers. Many of these remote offices look like mini data centers -- except without the IT expertise necessary to run regular backups and maintenance tasks. Taneja suggests using wide-area file services (WAFS) to pull file servers out of remote locations and consolidate them at the nearest data center. "Then you just manage the NAS box at the data center," he says. "It makes them part of the bigger corporation and simplifies the heck out of the IT structure."
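
Commercial WAFS products layer caching, protocol acceleration and distributed file locking on top of this idea. Purely as an illustrative sketch of the consolidation pattern -- plain rsync standing in for a real WAFS product, with every hostname and path invented for the example -- a scheduled sweep of branch-office file servers into a central NAS might look like this:

```python
#!/usr/bin/env python3
"""Hypothetical sketch: sweep remote-office file shares into a central NAS.

Real WAFS products add caching, WAN acceleration and distributed locking;
this only illustrates the consolidation direction using plain rsync.
All hostnames and paths are invented for the example.
"""
import subprocess

# Invented branch-office shares and the central NAS path they roll up into.
REMOTE_SHARES = [
    "backup@branch-nyc:/srv/files/",
    "backup@branch-chicago:/srv/files/",
]
CENTRAL_NAS = "/mnt/central-nas/branches/"

def sync_share(share: str) -> None:
    host = share.split("@", 1)[1].split(":", 1)[0]
    # -a preserves permissions and timestamps; -z compresses over the WAN.
    subprocess.run(["rsync", "-az", "--delete", share, CENTRAL_NAS + host + "/"],
                   check=True)

if __name__ == "__main__":
    for share in REMOTE_SHARES:
        sync_share(share)
```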

The next step up the ladder is to boil down the number of data centers to the fewest possible, based on the outcome of the business requirements plan. It's important not to oversimplify, as tempting as the prospect might be. If your company is based in Detroit but runs a large and autonomous subsidiary in Germany, for example, a separate data center in Germany is completely reasonable.

Automate and centralize

Implementing an overall data center management plan means that storage managers can take the opportunity to save time and resources by automating and centralizing storage management processes whenever possible, says Richard Ackerman, senior consultant at GlassHouse Technologies Inc. in Framingham, Mass. "You can gain significant efficiencies through the standardization of tools, people and processes, and through the centralization of things like reporting, monitoring and management functionalities."

This generally includes the use of automated tools that allow storage administrators to monitor, provision and troubleshoot remotely.

"They have to get more automated and use technology to automatically manage remotely if necessary," says John Sing, senior consultant at IBM Systems Group's Business Continuance Strategy and Planning practice.

With storage continuing to grow at an exponential rate, automated tools are necessary to keep staffing requirements from spiraling out of control. "Look for tools that do things like automatically track data and storage patterns, or move data around to adhere to performance thresholds," Sing says, adding that storage virtualization tools are emerging as viable solutions for remote management as well.
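
None of the analysts name a specific tool here, but the threshold-driven behavior Sing describes can be sketched in a few lines. In the sketch below, the tier mount points, the 80% capacity threshold and the 30-day "cold" cutoff are all assumptions for illustration, not any product's defaults:

```python
#!/usr/bin/env python3
"""Illustrative sketch of threshold-driven storage tiering.

The tiers, thresholds and age cutoff are invented for the example;
commercial tools track far richer access patterns.
"""
import os
import shutil
import time

FAST_TIER = "/mnt/tier1"        # e.g., an FC/SSD pool (assumed mount point)
SLOW_TIER = "/mnt/tier2"        # e.g., a SATA pool (assumed mount point)
CAPACITY_THRESHOLD = 0.80       # demote data once tier 1 is 80% full
COLD_AGE_SECONDS = 30 * 86400   # "cold" = not accessed in 30 days

def tier_utilization(path: str) -> float:
    usage = shutil.disk_usage(path)
    return usage.used / usage.total

def demote_cold_files() -> None:
    """Move files not accessed recently from the fast tier to the slow tier."""
    now = time.time()
    for root, _dirs, files in os.walk(FAST_TIER):
        for name in files:
            src = os.path.join(root, name)
            if now - os.stat(src).st_atime > COLD_AGE_SECONDS:
                rel = os.path.relpath(src, FAST_TIER)
                dst = os.path.join(SLOW_TIER, rel)
                os.makedirs(os.path.dirname(dst), exist_ok=True)
                shutil.move(src, dst)

if __name__ == "__main__":
    if tier_utilization(FAST_TIER) > CAPACITY_THRESHOLD:
        demote_cold_files()
```

A commercial tool would track access patterns far more granularly and promote data back when it warms up; the skeleton above only shows the monitor-then-migrate loop that makes lights-out remote management possible.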

Sync connectivity with business and DR requirements

One of the big attractions of having multiple data centers lies in their inherent ability to act as DR sites for each other, which raises complex questions of how the DR plan should be built.

The knottiest problem by far lies in how to connect the sites, Asaro says. "WAN bandwidth is a recurring cost that is often the main issue that prevents customers from implementing a solution," he explains. "Customers have to balance their business performance requirements with the cost of the WAN."

Businesses must determine their tolerance for data loss, as that will also determine what sort of backup plan is put in place. The most immediate recovery comes through synchronous remote mirroring, which provides real-time mirroring of data but can only be done over short distances due to latency concerns. (The prevalence of DR sites in New Jersey for Manhattan-based financial services data centers is one illustration of this.)

"If the remote storage system is beyond practical distance limitations the application will time out or performance will become unacceptable," says Asaro. " Asynchronous remote mirroring does not require a write commit from the remote storage system and therefore is not impacted by latency. However, if a disaster occurs there might be some data loss."

Moreover, many companies simply don't do the math to realize the crushing amount of bandwidth necessary to replicate terabytes of storage remotely. On average, 1 terabyte (TB) of online transaction processing disk storage will generate 1 to 2 MBps of write data, Sing says.

"That means that if you would like to mirror 20 TBs of storage on a steadystate basis, you'll need 20 to 40 MBps on average," he says. "Storage level replication is a big and coming thing, but bandwidth is the delineator on what you can and can't do."

Telecom carriers can and do use technologies such as SONET and dense wavelength division multiplexing to help manage bandwidth, but it's still a major cost. "In a disk mirroring environment, it's not unusual over three or four years for 70% of total costs to be in ongoing telecom charges," Sing says.

This leaves storage executives in search of technology that can help bring bandwidth costs under control. "Data compression can reduce the amount of data that is replicated over the WAN," Asaro says. "Additionally, new data de-duplication software can reduce the amount of data being transferred by ratios as high as 20-to-1. This reduces the WAN bandwidth needed for remote mirroring and can protect more data in a shorter period of time."
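
Those reduction ratios translate directly into link savings. In the sketch below, the 20-to-1 figure is Asaro's upper bound for de-duplication; the 40 MBps baseline reuses Sing's 20 TB example, and the 2-to-1 compression ratio is an assumed, conservative figure:

```python
# How data reduction shrinks the WAN requirement for remote mirroring.
baseline_mbps = 40.0  # raw write traffic to mirror (Sing's 20 TB example)
for label, ratio in (("compression (assumed)", 2.0),
                     ("de-duplication (upper bound)", 20.0)):
    print(f"{ratio:>4.0f}:1 {label}: {baseline_mbps / ratio:.1f} MBps on the wire")
```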

All of this makes way for ongoing conversations with line-of-business executives about how much data loss they can tolerate, as well as the prioritization of business data. "This all ties back in the end to efforts around data classification and understanding business needs and requirements down to the application level," Ackerman says. For example, some applications may suffer performance erosion running in a synchronous mirroring environment. "There's a balance of how much data protection they want or need versus the impact on the performance of the database or application," Ackerman says.

Outsourcing possibilities

There's one last possibility for companies that want a second data center purely for backup purposes: outsource it. There are plenty of DR service providers only too happy to take the problem off your hands, Ackerman says. "You don't have to maintain the actual physical infrastructure, but you can bring up your data quickly in the event of a major outage," he says.

For more information:

Chart a course for consolidation


This was first published in July 2005