Feature

Storage grid pushes the envelope


The project was dubbed Kilo-Client because the grid would have to support a minimum of 1,000 clients. For it to be a useful engineering tool, Kilo-Client had to meet several requirements. First, it had to accommodate rapid provisioning with minimal intervention across different OSes (Windows, Linux, Unix) and configurations. "With more than 1,000 nodes in the Kilo-Client, every minute it takes to provision an operating system counts," says Brown.

Management and configuration also needed to be simple. To generate load and stress on NetApp storage controllers, host configurations would vary depending on the types of tests run. "Having to manage host OS images, patches and applications independently would be unwieldy," notes Brown.

Finally, the Kilo-Client had to be scalable and flexible. Although the initial configuration was for 1,000 clients, the testing grid had to scale far beyond this number. "In just over half a year, the Kilo-Client has grown 50% to 1,500 nodes," says Brown.

The rapid provisioning challenge
Because of the large number of clients and the requirement for rapid configuration, client provisioning was a pivotal aspect of the Kilo-Client project. The project team initially planned to leverage off-the-shelf disk provisioning tools, and evaluated products from Altiris (now part of Symantec Corp.), IBM's xCAT freeware and Symantec Ghost. Unfortunately, each of these tools took many minutes to re-image a single client. "It would have taken many hours to re-image a thousand clients," says Ferguson.
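A back-of-the-envelope calculation shows why minutes per client are fatal at this scale. The sketch below is purely illustrative; the five-minutes-per-image figure and the number of parallel imaging streams are assumptions, not numbers from the article:

```python
# Illustrative arithmetic only; per-client imaging time is assumed.
minutes_per_client = 5        # assumed time for a traditional imaging tool
clients = 1000

serial_hours = clients * minutes_per_client / 60
print(f"Serial re-imaging: {serial_hours:.0f} hours")        # ~83 hours

# Even imaging many clients in parallel leaves a long wait:
parallel_streams = 20         # assumed concurrent imaging streams
print(f"With {parallel_streams} parallel streams: "
      f"{serial_hours / parallel_streams:.1f} hours")        # ~4.2 hours
```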

These tools would also have added management complexity: boot image servers would be needed to handle the bandwidth of imaging hundreds of nodes simultaneously. Moreover, because each image would have to be stored in its entirety, keeping many images that varied only slightly would have multiplied the disk space required.
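The disk-space penalty is easy to quantify. Here is a minimal sketch comparing full copies against a shared base image plus per-node differences; the 10 GB base image and 200 MB of per-node variation are assumed figures for illustration, not from the article:

```python
# Illustrative comparison of full-image storage vs. a shared base
# plus per-node deltas. All sizes are assumptions.
nodes = 1000
base_image_gb = 10.0    # assumed size of a full OS image
delta_gb = 0.2          # assumed per-node variation if deltas were possible

full_copies_gb = nodes * base_image_gb                  # every image stored whole
base_plus_deltas_gb = base_image_gb + nodes * delta_gb  # one base, small deltas

print(f"Full copies:   {full_copies_gb:,.0f} GB")       # 10,000 GB
print(f"Base + deltas: {base_plus_deltas_gb:,.0f} GB")  # 210 GB
```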

The project team realized that keeping a full, local operating system image on each server blade wasn't an option; to rapidly repurpose clients, SAN boot would be required. That's when the project team steered into unknown waters. SAN boot is a proven technology, but prior to the Kilo-Client, the largest SAN boot deployment had been a 250-node Fibre Channel (FC) cluster. Not only would the Kilo-Client have to SAN boot more than 1,000 nodes but, to up the ante, it would have to do so over iSCSI.
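In broad strokes, SAN-booting a grid like this means giving every node its own boot LUN and mapping that LUN to the node's iSCSI initiator. The sketch below is a hypothetical illustration of that bookkeeping only; the naming scheme, IQN format and volume layout are invented for the example and are not NetApp's actual tooling:

```python
# Hypothetical sketch of the boot-LUN bookkeeping behind large-scale
# iSCSI SAN boot: one boot LUN per node, mapped to that node's initiator.
# All names, IQNs and paths are invented for illustration.

def initiator_iqn(node_id: int) -> str:
    """Build a per-node iSCSI initiator name (format assumed)."""
    return f"iqn.2007-10.example.lab:kilo-client-{node_id:04d}"

def boot_lun_path(node_id: int, os_image: str) -> str:
    """Path of the node's boot LUN, derived from a golden OS image (layout assumed)."""
    return f"/vol/boot/{os_image}/node{node_id:04d}.lun"

def build_boot_map(node_count: int, os_image: str) -> dict[str, str]:
    """Map every initiator to its own boot LUN so each node SAN-boots independently."""
    return {initiator_iqn(n): boot_lun_path(n, os_image)
            for n in range(1, node_count + 1)}

if __name__ == "__main__":
    boot_map = build_boot_map(1000, "linux-rhel4")
    sample = initiator_iqn(42)
    print(sample, "->", boot_map[sample])
```

Repurposing a node then reduces to pointing its initiator at a LUN built from a different golden image, rather than re-imaging local disk.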

This was first published in October 2007
