In this economy, alluring IT projects are all about cutting costs. Companies such as Whirlpool Corp. and MasterCard International have launched overhauls of their worldwide storage infrastructures to consolidate their global storage facilities and manage applications from as few central locations as possible. For these multinationals, it's money well spent. Jim Hall, vice president of engineering services at MasterCard, says the company is using some serious economies of scale to drive down technology costs.
Whirlpool and MasterCard are on the leading edge of a new phenomenon: As companies embark on managing storage across the global reach of the enterprise, their chief tactic is to do so through data center consolidation.
Storage used to be a "think globally, act locally" technology. In the not-so-distant past, multinational corporations had to store their data in isolated data centers scattered all over the world. The connectivity wasn't there, and the ability to manage storage as a single entity was sorely missing.
"There's been interest over the years in trying to manage storage globally," says Gary Johnson, the vice president of new technology enterprise solutions at CNT, a storage service provider based in Minneapolis, MN. "Now, it's starting to happen as part of data center consolidations."
Indeed, data center consolidation is driving interest in global storage management for many companies. "If companies are able to consolidate their storage, they can more effectively share both storage resources and management of those resources," says Nancy Marrone, a senior analyst at Enterprise Storage Group in Milford, MA. It's all part of the process of more tightly controlling their storage resources and reducing costs. To do that, Marrone adds, "companies must effectively manage on a global basis."
"At the end of the day, it's about productivity and cost," says Bob Passmore, a research director in Gartner's storage practice. The trend is to pull hardware and services into a centralized data center and deliver applications remotely.
Of course, managing storage on a global basis comes in many different flavors, and each installation faces its own implementation obstacles (see "Problems plaguing global storage consolidation," this page). Strategies are driven by a mix of business needs, technology requirements and budgetary constraints, which each company combines into the formula that best meets its needs. "There's nothing that's common to any one company that says why they are doing it or how," says Marrone. As the following profiles show, there are almost as many ways to manage storage globally as there are global companies.
Whirlpool: the consolidator
Whirlpool, the $11 billion manufacturer of household appliances, has been consolidating its data facilities into one massive data center since the late 1990s. Jim Haney, vice president of architecture at Whirlpool in Benton Harbor, MI, says the consolidation saved his company $8 million in its first year, along with $12 million in network costs.
"We've got a small AS/400 setup in Northern Italy and some small manufacturing applications that are run locally," says Haney. "There are also a few countries, such as Brazil and India, where we find it much easier to outsource the applications. But other than that, about 90% of all the applications that support Whirlpool are run globally out of Michigan." The company runs four zSeries mainframes, as well as 400 or 500 NT and Unix servers, to serve the needs of 60,000 employees in 170 countries. In other words, there's a lot of horsepower.
But while the data centers were centralized, the company couldn't consolidate its storage until about two years ago. "We knew we had to do something--our spending on storage was getting out of hand," says Haney, who attributes the spike to an SAP implementation, Web-related growth and the demands of a rapidly expanding business.
"We had a ton of storage, and all of it was DAS [direct-attached storage]--everything from the really expensive new stuff to devices six or seven generations old. And we had to just keep adding capacity because we couldn't manage it as a utility."
The company used Tivoli's virtual tape system, which uses disk to emulate tape. "We began backing up through the Net to VTS. It was much faster for backup, but it was really congesting our network," says Haney. Finally, Haney was bumping up against poor SAP I/O response time. Whirlpool runs SAP centrally, which means there is only one set of application and database servers for 5,000 or 6,000 North American users. Says Haney: "We needed bigger stuff for storage. With the disks we had sitting behind the zSeries, response time was noticeably affected. We'd still be running batch from the night while people were trying to get online."
Whirlpool built one central storage area network (SAN) based on IBM Shark technology and simultaneously put in an integrated global frame relay network, yanking out the hodgepodge of networks previously used. The upgrades let the company run its applications centrally, giving users more bandwidth for accessing and using data. At the same time, storage traffic was moved off the network and onto a SAN big enough to handle a little more than 40TB of data.
Haney is contemplating building a remote site that would mirror his central data center in real time. The big sticking point is connectivity. Haney is examining his options--including dark fiber--but is still daunted by the costs. "We will need a heck of a lot more connectivity than we do between business locations because we'll be moving vast truckloads of data in real time," he says.
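To see why connectivity costs loom so large, consider a back-of-envelope sizing of the mirror link. The figures below are illustrative assumptions (a 40TB SAN, a 5% daily change rate, an eight-hour replication window), not Whirlpool's actual numbers:

```python
# Back-of-envelope sizing for a real-time mirror link.
# All inputs are illustrative assumptions, not Whirlpool's figures.

def mirror_bandwidth_mbps(capacity_tb, daily_change_rate, window_hours=8):
    """Average link bandwidth (Mb/s) needed to replicate one day's
    changed data within the window, ignoring protocol overhead."""
    changed_bytes = capacity_tb * 1e12 * daily_change_rate
    seconds = window_hours * 3600
    return changed_bytes * 8 / seconds / 1e6

# 40TB of data, 5% changing daily, replicated over an 8-hour window
print(round(mirror_bandwidth_mbps(40, 0.05)))  # roughly 556 Mb/s
```

Even under these modest assumptions, average change traffic alone approaches the capacity of a 622 Mb/s OC-12 circuit, which is why options such as dark fiber enter the conversation despite their cost.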
While it's taken the company seven years to build its centralized infrastructure, Haney says that the cost savings have made the effort worthwhile. The company took $8 million out of its budget just by consolidating its four data centers, and another $12 million was saved with the network consolidation. "It was an easy decision," says Haney. "And oh, by the way, now we can buy the newest technology rather than having to wait."
MasterCard: the hybrid
MasterCard has been busy: Its 300TB data center in St. Louis is only a couple of years old. The credit card giant is moving and rebuilding its disaster recovery site, as well as relocating its Australian and European data centers, which hold another 150TB of storage. This activity is part of a global storage management strategy put together by MasterCard's Global IT operations. By standardizing policies, procedures and technologies across data centers, MasterCard will have taken the first step toward global storage management.
Since 1999, MasterCard has been adding 100TB of storage yearly, and Jim Hall, vice president of engineering services, realized the company needed to articulate a global storage policy. So the company built the St. Louis center, with its multiple SANs running EMC storage devices, and is now concentrating on disseminating the strategy companywide. The reasoning? "We get a much better deal through volume buying," says Hall. Second, by using standard technologies and implementing them uniformly, the company increases interoperability among its data centers. That's particularly important because all three global data centers will eventually back up to the single disaster recovery data center being built at an undisclosed site in New York.
It also means it's easier to implement storage management tools globally because the underlying technology is similar. Hall plans to use a storage resource management tool to do resource allocation between the primary site and the disaster recovery data center. He doesn't do that in Europe and Australia yet, but could if the decision is made. "As we're building out in Europe, the network architecture will be standardized so that if we need to do remote operations from St. Louis we could," he says.
The next step is to analyze the company's storage to see whether it's keeping the right data for the right amount of time. "We're looking at things like what we're storing, for how long, who has access, how many copies of it there are," says Hall. "We need to analyze what we've done and make sure that past practices are valid and what kind of improvements we need to make."
Evans & Sutherland: local works best
For employees at Evans & Sutherland (E&S), Salt Lake City, UT, being close to their data is more than just a wish--it's a business imperative. That's because they create customized, 3-D computer graphics for simulation, training, engineering and other applications throughout the world. E&S' complex computer graphics are used for military and commercial training, including systems for air, sea and land simulation, for example.
Each project generates a ton of data, says Derek Brawdy, senior systems administrator. Projects can range in size from 300GB to 3TB of data--small wonder that this $500 million company has almost 25TB of storage.
The sheer bulk of the applications under development means that Brawdy must eschew a centralized storage plan in favor of more localized storage. "We just cannot do some of these things over distance," he says. "The projects are too large to think about storing remotely."
So Brawdy runs three separate data centers--one in Salt Lake City, UT, one in Horsham, England, and one in Orlando, FL, where E&S does a lot of work. All three centers run HP-based SANs, but aren't interlinked as far as replication goes. "We don't have a reason to remotely replicate back and forth," says Brawdy. Instead, each data center backs up to a tape library. In fact, remote backup could hinder the work being done. Brawdy gives the example of a 3-D simulation graphic of a system used to train pilots for desert flying. "The data being used to stitch together the final product is in use 24 x 7 as we build the product, and it's a long and intensive process to back up remotely," he points out.
Brawdy likes the balance SANs give him--he can place storage close to the users who need it, while retaining the ability to administer the data remotely with relative ease.
Interestingly, his Salt Lake City SAN is decentralized. The E&S campus had a lot of pre-laid dark fiber and small raised-floor spaces scattered among its buildings, so Brawdy took advantage of the setup. He interconnected 16 SAN switches over the existing fiber, letting him distribute his storage infrastructure across the campus.