Feature

Storage grid pushes the envelope


The client blade systems are built primarily around the IBM BladeCenter chassis with Intel-powered HS20 blades. Each blade comes with IP connectivity for standard network access and blade management. To connect a blade to the back-end SAN boot network and the front-end storage network, a two-port FC or two-port 1Gb IP option card must be added; depending on which card is installed, the blade is hardwired as either an FC or an IP node. The project team had to make a judgment call on the number of IP and FC nodes to use. "[As] determined by engineering test requirements, the Kilo-Client grid today consists of 300 FC nodes and 1,200 iSCSI nodes," says Ferguson.
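
To make that split concrete, here is a minimal Python sketch (illustrative only, not NetApp's tooling) of how the installed option card fixes each blade's role:

# Illustrative model of the Kilo-Client node inventory: the option card
# installed in each HS20 blade determines whether it is an FC or iSCSI node.
from dataclasses import dataclass

@dataclass
class BladeNode:
    node_id: int
    option_card: str  # "FC" (two-port Fibre Channel) or "IP" (two-port 1Gb Ethernet)

    @property
    def protocol(self) -> str:
        # The card hardwires the node's role: FC cards make FC nodes,
        # IP cards make iSCSI nodes.
        return "FC" if self.option_card == "FC" else "iSCSI"

# Split cited in the article: 300 FC nodes and 1,200 iSCSI nodes.
inventory = [BladeNode(i, "FC") for i in range(300)] + \
            [BladeNode(300 + i, "IP") for i in range(1200)]

fc_nodes = sum(1 for n in inventory if n.protocol == "FC")
iscsi_nodes = sum(1 for n in inventory if n.protocol == "iSCSI")
print(f"{fc_nodes} FC nodes, {iscsi_nodes} iSCSI nodes")  # 300 FC, 1200 iSCSI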

The boot LUNs and "Golden LUN" OS images are stored on NetApp FAS980 filers to which the blades connect through the back-end network via Cisco Systems Inc. Catalyst 7609 or Brocade FC switches, depending on the node type. In a similar fashion, each node connects over the front-end network to the storage systems under test via Cisco Catalyst 7609 and Brocade switches.
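
A rough sketch of the resulting per-node connectivity follows; the switch and target names come from the description above, while the structure itself is an assumption for illustration:

# Illustrative sketch: each node has a back-end path to the FAS980 filers
# holding its boot LUN and a front-end path to the storage system under test.
# Switch choice follows the node's protocol, as described in the article.
def paths_for(node_protocol: str) -> dict:
    switch = "Brocade FC switch" if node_protocol == "FC" else "Cisco Catalyst 7609"
    return {
        "back_end":  {"switch": switch, "target": "NetApp FAS980 boot/Golden LUNs"},
        "front_end": {"switch": switch, "target": "storage system under test"},
    }

print(paths_for("iSCSI"))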

Managing connectivity for more than 1,000 nodes and ensuring sufficient bandwidth were major challenges for the Kilo-Client team. They decided to group nodes into blocks of 224 server blades called "modules" or "hives."

"Two hundred and twenty-four nodes per 'module' isn't a random number," says David Klem, a Kilo-Client engineer. "It is the number of client blades we could connect through a single Cisco

Requires Free Membership to View

Catalyst 7609 switch." To support the bandwidth requirements of 1,200 iSCSI clients, the project team had to architect a high-performance IP network. "We use quite a bit of 10Gig," says Klem. "We give each node a 1Gig port and then have eight 1Gig ports per chassis linked to the 7609 Kilo-Client core. From there, we connect via multiple 10Gig connections to our storage grid to which the storage systems under test are connected," says Klem, describing the front-end IP network.
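
Those numbers lend themselves to a quick back-of-the-envelope check. The sketch below assumes 14 blades per BladeCenter chassis, a typical figure that the article does not state:

# Back-of-the-envelope check of the front-end IP fan-in described above.
# BLADES_PER_CHASSIS = 14 is an assumption (typical IBM BladeCenter capacity).
NODES_PER_MODULE = 224          # blades behind one Catalyst 7609
BLADES_PER_CHASSIS = 14         # assumed chassis capacity
UPLINKS_PER_CHASSIS_GB = 8      # eight 1Gig ports per chassis to the 7609 core
PORT_SPEED_GB = 1               # each node gets a 1Gig port

chassis_per_module = NODES_PER_MODULE // BLADES_PER_CHASSIS           # 16 chassis
offered_per_chassis_gb = BLADES_PER_CHASSIS * PORT_SPEED_GB           # 14 Gb/s offered
oversubscription = offered_per_chassis_gb / UPLINKS_PER_CHASSIS_GB    # 1.75:1 at the uplink
module_uplink_gb = chassis_per_module * UPLINKS_PER_CHASSIS_GB        # 128 Gb/s into the 7609

print(chassis_per_module, oversubscription, module_uplink_gb)
# From the 7609, multiple 10Gig links carry this traffic on to the storage grid.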

The efficiency of the Kilo-Client is linked directly to the Golden LUN OS images; having hundreds of images readily available makes the solution hum. Consequently, keeping the images protected was another key aspect of the solution. The project team used NetApp NearStore systems to back up and archive boot LUNs. "The brunt of the work is in getting the suitable OS images in place," says Brown. "As images differ in configuration settings, patches and possibly applications required for stress testing, keeping golden images safe was imperative."
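
As an illustration of that workflow, the hypothetical sketch below picks a golden image by OS, patch level and test application, then clones it to a node's boot LUN; the image paths and the clone_lun() helper are made up for the example and do not reflect NetApp's actual tooling:

# Hypothetical sketch of golden-image provisioning: select an image variant
# by configuration, then clone it to a node's boot LUN.
GOLDEN_IMAGES = {
    ("linux", "patch-set-3", "oracle"): "/vol/golden/linux_p3_oracle.lun",
    ("windows", "sp2", "exchange"):     "/vol/golden/win_sp2_exchange.lun",
}

def clone_lun(source: str, destination: str) -> None:
    # Placeholder for a filer-side clone of the golden LUN to the boot LUN.
    print(f"clone {source} -> {destination}")

def provision(node_id: int, os: str, patches: str, app: str) -> None:
    image = GOLDEN_IMAGES[(os, patches, app)]
    clone_lun(image, f"/vol/boot/node{node_id:04d}.lun")

provision(17, "linux", "patch-set-3", "oracle")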

This was first published in October 2007
