Carlson Companies, a hospitality giant based in Minneapolis, MN, needed more storage space. Earlier this year, architectural consultant Gary Johnson was ready to put it in place. Since he assumed it would be a straightforward request, Johnson wasn't expecting the response he got from his CIO.
"He said he wasn't ready to approve more storage until he understood what my global storage strategy was," Johnson says. That's no small question at a company with 180,000 employees spread over operations in 140 countries: Carlson's brands include Radisson, TGI Friday's, Regent International, Park Plaza hotels and Country Inns & Suites.
What followed was a major planning and implementation effort that's extended the reach and management of Carlson's centralized storage sites far into the field. It's a process being repeated daily in board rooms and data centers across the country. Economic pressures combined with the lingering uncertainty of Sept. 11 are forcing storage managers to sit down with CIOs, CEOs and business line managers to chart out a far more integrated course for their IT strategies.
Consolidation and flexibility have become the twin drivers for those new strategies. The key to both is the complete separation of computing, networking and storage into their own realms, and the use of virtualization to allocate resources dynamically among those realms.
Diskless servers, standalone intelligent disk and agnostic networks are the Holy Trinity of this emerging philosophy. Mike Feinberg, chief technical officer for network storage solutions at Hewlett-Packard, says storage is the key to this new vision. "If you have a dataless server and it breaks, you just replace it," he says. "If you have a storage environment that's unavailable, you have to restore it. That takes a lot of time. That's why storage is the center of the new data center."
In this article, we look at the servers that will connect to that storage. Next month, we'll conclude with a look at the storage networking and management issues.
Diskless servers rule the roost
"One application, one server" and "Need storage? Buy another server" can no longer be the cornerstones of capacity planning. Instead, a kind of just-in-time processing is evolving around partitionable and blade servers.
In the large Unix server world, HP, IBM and Sun, which offer massive-scale computing via their respective SuperDome, eServer p690 (Regatta) and Sun Fire 15K partitionable servers, have attempted to offer more efficiency by delivering multiprocessor boxes that allow you to assign compute power to processes as needed. Some further sweeten the pot by delivering fully provisioned servers and charging customers only for what they use.
Although he sees promise in the blade architecture, John Reynders, vice president of informatics with bioinformatics giant Celera Genomics, of Rockville, MD, is sticking with a partitionable server architecture as the basis for the company's current data center consolidation project, for now at least.
Celera, which for years has run an array of clustered Compaq AlphaServer ES40s providing an aggregate 75TB of server-attached storage, will build its new data center around a dozen IBM Regatta servers attached to a SAN incorporating around 150TB of EMC Symmetrix-based storage.
The need to centralize storage and dynamically allocate the company's resources was a driving force behind the shift, according to Reynders. "We've been with servers and storage pretty tightly coupled, but we've discovered that's really constrained our flexibility," he says.
"To move storage between projects, we had to physically move it between servers. With virtualized storage, whatever the project demands are going forward, we can decouple the components and get that flexibility. Rather than having one architecture for computing and one for [file, print and database] servers, we have a single architecture we can flow back and forth between our server and computer needs."
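The decoupling Reynders describes can be modeled simply: a virtualized pool tracks capacity allocations, so shifting storage between projects is a bookkeeping change rather than a physical move of disks between servers. This is a conceptual sketch, not any vendor's API; the `StoragePool` class, project names and capacities are invented for illustration.

```python
# Conceptual sketch of a virtualized storage pool: capacity is allocated
# to projects logically, without binding it to a particular server.

class StoragePool:
    def __init__(self, capacity_tb):
        self.capacity_tb = capacity_tb
        self.allocations = {}  # project name -> TB allocated

    def free_tb(self):
        return self.capacity_tb - sum(self.allocations.values())

    def allocate(self, project, tb):
        if tb > self.free_tb():
            raise ValueError("insufficient free capacity")
        self.allocations[project] = self.allocations.get(project, 0) + tb

    def release(self, project, tb):
        current = self.allocations.get(project, 0)
        self.allocations[project] = max(0, current - tb)

# Reassigning capacity between projects requires no physical moves.
pool = StoragePool(capacity_tb=150)
pool.allocate("genome-assembly", 90)
pool.release("genome-assembly", 40)
pool.allocate("annotation", 60)
print(pool.free_tb())  # 40
```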
Blade servers take the idea of partitionable servers to its logical conclusion.
"We believe the combination of processing area network [PAN] and SAN is the data center of the future," Susan Davis, vice president of marketing and management for blade start-up Egenera of Marlborough, MA, explains. "By separating the hardware definition from its logical identity, you don't need to overprovision; you can use it when you need it and allocate it to something else when you don't. No longer are infrastructure people slaves to their systems. The systems become these spongeable pools of resources that adapt on the fly."
If your database is showing signs of strain, for example, blades can be retasked from supporting your Web server to supporting the database. Egenera's servers virtualize network settings by storing system-specific parameters including MAC, IP and HBA-related settings in software. This approach provides transparency between blades, allowing failover blades to instantly assume the identity of another without applications even noticing, and hopefully improving the generally low utilization rate of servers.
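The failover behavior described above can be sketched in a few lines. This is a hedged illustration of the general idea, not Egenera's actual software: a blade's network identity (MAC, IP, HBA world-wide name) lives in a software record, so a spare blade can assume a failed blade's identity; all class and field names here are invented.

```python
# Illustrative sketch: blade identity stored in software and reassigned
# on failover, so the replacement blade looks identical to the network and SAN.

from dataclasses import dataclass

@dataclass
class LogicalIdentity:
    mac: str
    ip: str
    hba_wwn: str  # Fibre Channel world-wide name used for SAN access

class BladePool:
    def __init__(self, blades):
        self.assignments = {b: None for b in blades}  # blade -> identity

    def assign(self, blade, identity):
        self.assignments[blade] = identity

    def failover(self, failed_blade, spare_blade):
        # The spare assumes the failed blade's full identity; clients
        # and the SAN continue to see the same MAC/IP/WWN as before.
        identity = self.assignments[failed_blade]
        self.assignments[failed_blade] = None
        self.assignments[spare_blade] = identity
        return identity

pool = BladePool(["blade1", "blade2"])
pool.assign("blade1",
            LogicalIdentity("00:0c:29:aa:bb:cc", "10.0.0.5",
                            "50:06:01:60:10:60:08:22"))
moved = pool.failover("blade1", "blade2")
print(moved.ip)  # 10.0.0.5
```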
The high density and better cost predictability of blade servers have made them extremely attractive to operators of data centers such as co-location hosting provider Citadel Data Center, in Des Plaines, IL, a division of The Systems House, Inc.
"Any increase in density increases yield in my data center space," says Vice President Jeff Whittemore. "Say we have a customer with 20 servers and they don't all take a lot of storage. I can sell them a blade server and lower their overall costs, and I'm getting the benefits of centralized storage management. The less space customers take up in my facility, the less I charge them." With current rates running at hundreds of dollars per square foot, even small savings can add up quickly.
Until blade servers support the major Unixes (Sun will have a Solaris blade server soon), both blade servers and partitionable servers may coexist in many new data centers. But whether you go the blade or partitionable route, you'll likely be faced with the need for a logical layer that connects servers to storage. Such capabilities have long been available on proprietary platforms (Digital Equipment had VMS volume shadowing and repartitioning features years ago), but virtualization is a much younger concept in the open systems world.
It's caught on quickly, however, and ever-improving SAN management tools are rapidly closing the gap. To avoid the technological lock-in of past storage regimes, future data centers, whatever platform they're based on, will use virtualization freely and extensively to provide seamless transparency of storage and server resources.