Storage grid pushes the envelope

What started out as a test-bed project for Network Appliance has become a good example of how to architect enterprise storage systems. The vendor's Kilo-Client project showcases how SAN booting and thinly provisioned snapshots can be used in a storage grid for rapid provisioning, simplified storage management and huge disk space savings.


When a prominent storage vendor built a 1,000-node storage test grid, it learned plenty of lessons, many of which can be applied to architecting enterprise storage systems.


IT projects can be divided into two camps: out-of-the-box implementations that involve very little customization or risk; and highly customized jobs in which unique business requirements force a nonstandard approach, or the technology (or how it's applied) is so novel that there's limited precedent and experience.

Most storage projects fall into the former category. A prime example of the latter is the Network Appliance (NetApp) Inc. Kilo-Client project, where the objectives and requirements forced the project team to build a unique solution. The project uses NetApp storage components, but the storage test grid could easily be replicated using other storage vendors' gear. And the technologies applied, such as SAN booting of servers for simplified management and rapid recovery, could benefit any storage network.

The task
David Brown, engineering support manager at NetApp's office in Research Triangle Park, NC, and Gregg Ferguson, Kilo-Client manager, NetApp engineering support, were asked to build a large QA testing grid. The grid had to be constructed for easy, rapid provisioning of a large number of clients with various configurations to a variable number of NetApp FAS storage controllers for rigorous load and stress testing.

The clients also had to connect via Fibre Channel (FC), iSCSI or file-system protocols to the storage controllers, and the solution had to group multiple clients into a single test bed. The number and size of these test beds could vary, ranging from a single test bed consisting of all clients to tens of independent test beds running in parallel. To add to the challenge, the solution needed to be rapidly changed and restored to any of the available or new test-bed configurations.

The project was dubbed Kilo-Client because the grid would have to support a minimum of 1,000 clients. For it to be a useful engineering tool, Kilo-Client had to meet the following requirements: It had to accommodate rapid provisioning with minimum intervention using different OSes (Windows, Linux, Unix) and configurations. "With more than 1,000 nodes in the Kilo-Client, every minute it takes to provision an operating system counts," says Brown. Management and configuration of the solution also needed to be simple. To generate load and stress on NetApp storage controllers, host configurations would vary, depending on the types of tests run. "Having to manage host OS images, patches and applications independently would be unwieldy," notes Brown.

Finally, the Kilo-Client had to be scalable and flexible. Although the initial configuration was for 1,000 clients, the testing grid had to scale far beyond this number. "In just over half a year, the Kilo-Client has grown 50% to 1,500 nodes," says Brown.

The rapid provisioning challenge
Because of the large number of clients and the requirement for rapid configuration, client provisioning was a pivotal aspect of the Kilo-Client project. The project team initially planned to leverage off-the-shelf disk-provisioning tools and evaluated products from Altiris (now part of Symantec Corp.), IBM's xCAT freeware and Symantec Ghost. Unfortunately, it took each of these tools many minutes to re-image a single client. "It would have taken many hours to re-image a thousand clients," says Ferguson.

These tools would also have added management complexity, requiring boot image servers sized to handle the bandwidth of imaging hundreds of nodes simultaneously. Moreover, storing images that vary only slightly would have inflated disk space consumption because each image would have to be stored in its entirety.

The project team realized that having a full, local operating system image on each server blade wasn't an option and that, to rapidly repurpose clients, SAN boot would be required. That's when the project team steered into unknown waters. SAN boot is a proven technology; however, prior to the Kilo-Client, the largest number of nodes using SAN boot was a 250-node FC cluster. Not only would the Kilo-Client have to support SAN boot for more than 1,000 nodes; to up the ante, more than 1,000 of those nodes would boot via iSCSI.

To manage the various boot images, the project team came up with the concept of "Golden LUNs"--OS images built to specific engineering requirements--and used NetApp LUN clone technology to present writable LUN copies to each Kilo-Client node. Each node has to be configured only once to point to an assigned boot LUN, identified by an assigned world wide port name (WWPN) for FC boot LUNs or an iSCSI Qualified Name (IQN) for iSCSI boot LUNs; NetApp's Data ONTAP OS then maps and unmaps the various boot LUNs for a specific node.
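
Conceptually, the per-node bookkeeping is straightforward, as the rough Python sketch below illustrates. The Node and BootLun classes and the clone/map helpers are hypothetical stand-ins for the underlying storage operations, not NetApp tooling or the Data ONTAP API.

```python
# Minimal sketch of the boot-LUN bookkeeping described above. All names
# (Node, BootLun, clone_golden_lun, map_boot_lun) are hypothetical
# illustrations, not NetApp's actual tooling or the Data ONTAP API.
from dataclasses import dataclass


@dataclass
class Node:
    name: str
    protocol: str   # "fc" or "iscsi"
    initiator: str  # WWPN for FC nodes, IQN for iSCSI nodes


@dataclass
class BootLun:
    path: str       # e.g. "/vol/boot/kc-0042" (illustrative path)
    parent: str     # the Golden LUN this clone was taken from


def clone_golden_lun(golden_path: str, node: Node) -> BootLun:
    """Create a writable, space-efficient clone of a Golden LUN.

    A fresh clone consumes no extra space; only the blocks the node
    later writes diverge from the parent image.
    """
    clone_path = f"{golden_path}.clone.{node.name}"
    # ...call out to the storage controller here...
    return BootLun(path=clone_path, parent=golden_path)


def map_boot_lun(lun: BootLun, node: Node) -> None:
    """Map the clone to the node's initiator (WWPN or IQN).

    Each blade is configured once to boot from the SAN; repurposing it
    later is just an unmap of the old clone and a map of a new one.
    """
    # ...unmap any existing boot LUN, then map `lun` to node.initiator...
    print(f"{node.name}: boot LUN {lun.path} -> {node.initiator}")


# Example: point an iSCSI node at a clone of a Linux Golden LUN.
node = Node("kc-0042", "iscsi", "iqn.1994-05.com.example:kc-0042")
map_boot_lun(clone_golden_lun("/vol/golden/linux", node), node)
```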

The project team selected LUN clone technology for its efficiency. LUN clones only have to store the differences between the original image and the cloned image, resulting in huge disk space savings. In other words, a fresh LUN clone requires no additional disk space; only as a clone changes do its differences from the original image need to be stored. Because boot images differ only slightly (mostly configuration differences), they're an ideal fit for operating system boot image provisioning.

The difference between full-disk OS provisioning and using LUN clones is staggering. "It takes about two and a half hours and a little over 200GB to provision 1,500 clients using LUN clones. With full-disk OS provisioning, the same task would take almost a full 24-hour day and over 14TB of disk space to achieve the exact same result," says Brown.
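
Back-of-the-envelope arithmetic on the figures Brown quotes (rounding, and taking 1TB as 1,024GB) shows what that means per client:

```python
# Rough arithmetic on the figures quoted above: 1,500 clients provisioned
# from ~200GB in 2.5 hours with LUN clones, vs. ~14TB and ~24 hours with
# full-disk OS provisioning. Illustrative, rounded numbers only.
clients = 1500
clone_space_gb, clone_hours = 200, 2.5
full_space_gb, full_hours = 14 * 1024, 24

print(f"Per-client space, LUN clones: {clone_space_gb / clients * 1024:.0f} MB")  # ~137 MB
print(f"Per-client space, full disk:  {full_space_gb / clients:.1f} GB")          # ~9.6 GB
print(f"Disk space saving:            ~{full_space_gb / clone_space_gb:.0f}x")    # ~72x
print(f"Provisioning speed-up:        ~{full_hours / clone_hours:.0f}x")          # ~10x
```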

The architecture
The Kilo-Client architecture has three primary networks: one for SAN boot, one to connect to the NetApp storage systems under test and a management network for the blade clients (see "The Kilo-Client architecture" PDF). "The Kilo-Client comprises 1,500 server blades, 7,000 FC and IP ports, and over 94 storage controllers with over half a petabyte of disk space we assign dynamically," says Ferguson.




The client blade systems are built primarily around the IBM BladeCenter chassis with Intel-powered HS20 blades. Each blade comes with IP connectivity for standard network access and blade management. To connect to the back-end SAN boot network and the front-end storage network, each blade gets a two-port FC or two-port 1Gb IP option card. Depending on which option is installed, each blade is hardwired as an FC or IP node. The project team had to make a judgment call on the number of IP and FC nodes to use. "[As] determined by engineering test requirements, the Kilo-Client grid today consists of 300 FC nodes and 1,200 iSCSI nodes," says Ferguson.

The boot LUNs and "Golden LUN" OS images are stored on NetApp FAS980 filers to which the blades connect through the back-end network via Cisco Systems Inc. Catalyst 7609 or Brocade FC switches, depending on the node type. In a similar fashion, each node connects over the front-end network to the storage systems under test via Cisco Catalyst 7609 and Brocade switches.

Managing connectivity for more than 1,000 nodes and ensuring sufficient bandwidth were big challenges for the Kilo-Client team. They decided to group nodes into units of 224 server blades called "modules" or "hives."

"Two hundred and twenty-four nodes per 'module' isn't a random number," says David Klem, a Kilo-Client engineer. "It is the number of client blades we could connect through a single Cisco Catalyst 7609 switch." To support the bandwidth requirements of 1,200 iSCSI clients, the project team had to architect a high-performance IP network. "We use quite a bit of 10Gig," says Klem. "We give each node a 1Gig port and then have eight 1Gig ports per chassis linked to the 7609 Kilo-Client core. From there, we connect via multiple 10Gig connections to our storage grid to which the storage systems under test are connected," says Klem, describing the front-end IP network.

The efficiency of the Kilo-Client is linked directly to the Golden LUN OS images--having hundreds of images readily available makes the solution hum. Consequently, keeping the images protected was another key aspect of the solution. The project team used NetApp NearStore systems to back up and archive boot LUNs. "The brunt of the work is in getting the suitable OS images in place," says Brown. "As images differ in configuration settings, patches and possibly applications required for stress testing, keeping golden images safe was imperative."

Using the Kilo-Client storage grid
When a test engineer requests time on the Kilo-Client grid, the engineering support team assigns the engineer a single Kilo-Client node booted from a LUN clone of the Golden LUN. The engineer then installs the applications and tools required for testing on the SAN-booted OS.

If the customized SAN-booted OS needs to run on more than one node, the customized boot LUN is cloned again using LUN clone technology. The LUN clone is then split from the original LUN, unmapped from any blades to preserve the original LUN, and optionally archived to a NetApp NearStore system for future use. If the customized boot LUN is broadly useful, it can be promoted to a new Golden LUN. To use the customized LUN on additional Kilo-Client nodes, a snapshot copy of the FlexVol volume containing the new OS SAN boot LUN is created; writable LUN clones are then created for as many blades as the engineer wants for testing.
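
That repurposing loop can be sketched roughly as follows. Every function below is a stub standing in for the corresponding storage operation; none of the operation names are actual Data ONTAP commands.

```python
# Sketch of the clone -> customize -> split -> fan-out workflow described
# above. storage_op() is a stand-in for a call to the storage controller;
# the operation names are illustrative, not actual Data ONTAP commands.

def storage_op(op: str, *args: str) -> None:
    """Log the step that would be issued to the storage controller."""
    print(f"{op}: {', '.join(args)}")


def prepare_custom_image(golden_lun: str, node: str) -> str:
    """Give one engineer a writable clone of a Golden LUN to customize."""
    custom_lun = f"{golden_lun}.custom"
    storage_op("clone", golden_lun, custom_lun)
    storage_op("map", custom_lun, node)  # node SAN boots and installs test tools
    return custom_lun


def fan_out(custom_lun: str, volume: str, blades: list[str]) -> None:
    """Reuse a customized boot LUN across many Kilo-Client nodes."""
    storage_op("split-clone", custom_lun)             # detach from the Golden LUN
    storage_op("unmap", custom_lun)                   # preserve the customized image
    storage_op("archive-to-nearstore", custom_lun)    # optional archival copy
    snapshot = f"{volume}:custom_snap"
    storage_op("snapshot-flexvol", volume, snapshot)  # snapshot the FlexVol holding the LUN
    for blade in blades:
        clone = f"{custom_lun}.{blade}"
        storage_op("clone-from-snapshot", snapshot, clone)
        storage_op("map", clone, blade)               # each blade gets its own writable clone


# Example: fan a customized image out to three test blades.
lun = prepare_custom_image("/vol/boot/golden_linux", "kc-0042")
fan_out(lun, "/vol/boot", ["kc-0100", "kc-0101", "kc-0102"])
```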

By using LUN clones, the thinly provisioned boot LUNs allow each bootable LUN clone to be unique and writable, and yet share blocks from the original LUN. "It isn't uncommon that hundreds of LUN clones get created from a single original LUN," says Klem.

Advantages of a SAN boot architecture

Rapid client provisioning and repurposing. By separating a server from its boot medium, the server can be assigned to a different boot volume simply by remapping it from one boot LUN to another. Boot LUNs are identified by an assigned world wide port name (WWPN) for Fibre Channel and an iSCSI qualified name (IQN) for iSCSI.

Scalable and flexible. SAN booting scales horizontally by simply adding servers or server blades and boot images on the back-end storage, and servers can easily be moved from one boot LUN to another by changing the boot LUN assignment.

RAID protection for boot images. Because the boot LUNs reside on enterprise storage, they inherently have the RAID protection of the array, eliminating the need for a pair of mirrored disks in each server.

Boot images can be spread across multiple spindles. Performance can be adjusted up or down, depending on the number of spindles assigned to the boot images. Local boot disks are typically limited to a pair of mirrored drives.

Inherent backups. Because the boot LUNs reside on enterprise storage, they're automatically backed up as part of the array backup, eliminating the need to back up each individual server.

Lessons learned
The Kilo-Client ventured into new territory and encountered surprises that required adjustments. One of the biggest surprises was how the storage grid is used. Initially, it was envisioned that many nodes performing the same task would be a primary use of the Kilo-Client. Instead, "most of our engineers are using a single node, such as a Linux server, testing 50 different storage controllers," says Ferguson.

Assigning an adequate number of disk drives to boot all 224 blades in a "hive" took a while to get right. "Initially, we started with 30 drives per 'hive,' which worked great for booting; but once the applications started to run, spindles were 100% busy, causing excessive latency," explains Brown. The problem was remedied by adjusting the number of spindles per "hive" from 30 to 42, which lowered drive utilization to 70%.

The team was in for another surprise when engineering started VMware tests with 224 servers in a "hive" running five virtual servers per node. Disk utilization went through the roof and the number of disks had to be increased from 42 to 96, requiring four additional disk shelves.
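
The rough arithmetic behind the spindle-count fix bears this out: if 30 drives were saturated, the offered load was roughly 30 drives' worth, and spreading it across 42 drives predicts about 71% utilization, in line with the 70% Brown reports.

```python
# Worked version of the spindle-count arithmetic above; illustrative only.
offered_load_drives = 30 * 1.00           # 30 spindles at 100% busy
print(f"{offered_load_drives / 42:.0%}")  # ~71% utilization with 42 spindles per hive
```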

Another lesson learned relates to FC SAN booting. "If someone wants to run tests over Fibre Channel, it's not optimal that these servers are SAN booted via Fibre Channel," says Ferguson. "It would be better to either boot these servers via NFS or iSCSI, and let [the] FC ports be available for FC testing."

The Kilo-Client project showcases how SAN booting and thinly provisioned snapshots can be used in a storage grid for rapid provisioning, simplified storage management and huge disk space savings. These tasks are central to any storage environment and certainly don't depend on NetApp products to implement. As SANs take on more and more ports, it becomes increasingly important to find new ways to reduce management interventions, test new products, and move and protect data.

This was first published in October 2007
