Feature

Virtual SANs put to the test


Australia's biggest SAN
The impetus behind the SAN migration was to provide a centralized, consistent repository for all kinds of administrative, research and everyday data. Backup would be easier because Deakin's terabytes of data would be backed up on the SAN, rather than being constantly distributed around the university network. And thanks to logical partitioning of the SAN, it would be possible to accommodate the widely varying requirements of the university's many communities of interest.

Last November, Warren's department began evaluating the technology that would make up its SAN. It quickly settled on an IBM TotalStorage Enterprise Storage Server (Shark) 800 Turbo, which offers a capacity of 55.9TB, but was installed with just 30TB to start. Also to be connected were two SuperDLT-based Quantum tape libraries--a 500-cartridge P7000 at the Waterfront facility, and a 250-cartridge P3000 unit in Burwood.

SAN switching for the future
Deakin University's new storage area network (SAN)-based storage infrastructure will support anticipated growth well into the future. Matched with the Cisco MDS switches, Deakin has flexibility that will let it manage growth more efficiently. Here's what the new infrastructure does for Deakin:
The use of two VSANs allows Deakin to isolate its production and development environments (a toy illustration follows this sidebar). In the past, testing applications in the development environment required copying massive amounts of information to create a test data set.
In the future, higher-granularity VSANs will let Deakin carefully manage the availability of SAN LUNs to particular ports on the Cisco switches, allowing tighter access control and better backup strategies in its large, distributed computing environment.
The new SAN environment made migrating from direct-attached storage (DAS) onto the Shark array a breeze. Because most of Deakin's Sun servers already had dual host bus adapters (HBAs), it was able to set up the secondary HBA to mirror data from the DAS to the SAN, and then break the mirror by disconnecting the HBA from the DAS. This redundant approach allowed the mass data migration to occur all but unnoticed.
The Cisco MDS switches' support for iSCSI will allow Deakin to link its many Linux servers--which are used for tasks including running virtual Windows 2000 and XP sessions to provide Windows services to the university--directly to one or more SAN LUNs, without requiring a separate Fibre Channel HBA. This reduces cost and facilitates better server-to-SAN communication.
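
For readers unfamiliar with VSANs, the isolation idea can be modeled very roughly in a few lines of code. The sketch below is purely illustrative: the VSAN numbers, port names and LUN labels are hypothetical, and real configuration happens on the MDS switch itself, not in Python.

```
# Toy model of VSAN-scoped visibility: a host port can only reach a LUN if
# both sit in the same VSAN. Purely illustrative; all names are hypothetical.

VSAN_MEMBERSHIP = {
    "fc1/1": 10,   # production Sun server HBA
    "fc1/2": 10,   # production Shark array port
    "fc2/1": 20,   # development server HBA
    "fc2/2": 20,   # development Shark array port
}

LUN_PORTS = {
    "prod_lun_0": "fc1/2",
    "dev_lun_0": "fc2/2",
}

def can_see(host_port: str, lun: str) -> bool:
    """A host port reaches a LUN only when both belong to the same VSAN."""
    return VSAN_MEMBERSHIP[host_port] == VSAN_MEMBERSHIP[LUN_PORTS[lun]]

if __name__ == "__main__":
    assert can_see("fc1/1", "prod_lun_0")        # production stays inside VSAN 10
    assert not can_see("fc2/1", "prod_lun_0")    # development traffic is isolated
```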

Such a dramatic increase in storage capacity doesn't come cheap, but Deakin's IT team was able to justify the expenditure to senior managers by pointing out a simple fact: At the rate its storage demand was growing before the SAN upgrade, Deakin's DAS costs would have exceeded the price of a completely new SAN within two years, and the SAN would provide far more storage.
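
The break-even reasoning is easy to sketch. The figures below are hypothetical placeholders rather than Deakin's actual costs; the point is simply that a rapidly growing DAS spend overtakes a fixed SAN price within a couple of years.

```
# Hypothetical break-even sketch: cumulative DAS spend vs. a one-off SAN price.
# All figures are illustrative placeholders, not Deakin's real numbers.

san_price = 1_000_000          # up-front cost of the new SAN (hypothetical)
das_cost_per_tb = 20_000       # incremental DAS cost per terabyte (hypothetical)
tb_added_per_quarter = 8       # storage growth rate before the upgrade (hypothetical)

cumulative_das = 0.0
quarter = 0
while cumulative_das < san_price:
    quarter += 1
    cumulative_das += tb_added_per_quarter * das_cost_per_tb

print(f"DAS spend overtakes the SAN price after {quarter} quarters "
      f"(~{quarter / 4:.1f} years)")
```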

It was also clear that the environment would benefit from having many of Deakin's servers directly attached to the SAN. That meant linking up the Sun systems--as well as approximately 50 Red Hat Linux servers used for load balancing and other sundry infrastructure tasks--to the storage network. That was an expensive proposition using conventional Fibre Channel (FC) connectivity, but that problem was solved when Deakin began investigating options involving IP-based iSCSI.

Because it already had a long-standing relationship with Cisco Systems for its other networking equipment, Deakin preferred to source appropriate iSCSI technology from Cisco, rather than having to establish a new relationship with an FC specialist such as McData or Brocade. Working with Cisco, the Deakin team explained its need for direct server-to-SAN connectivity and had nearly settled on the Cisco SN5428 before it learned of the Cisco MDS 9000, a new family of multilayer directors and fabric switches that combine iSCSI with support for Fibre Channel over IP (FCIP).

"We've been using low-cost Intel-based Linux servers for load balancing," says Warren. "iSCSI was quite attractive to us because we wanted large amounts of storage on these boxes, but we didn't really want to be paying the price of Fibre Channel HBAs [host bus adapters]. So we were looking at a SAN front-ended with some storage network routers. Since we were looking for both an FC switch and [iSCSI] storage network router, we got very interested in the MDS line."

By January, Deakin was putting its new SAN equipment through its paces. Within five weeks, the basic SAN was up and running, using a pair of Cisco MDS 9509 multilayer directors to mediate among its large server environment, its IP WAN, and its FC-attached IBM Shark storage and Quantum tape silos.

Deakin's experiments with iSCSI confirmed that the protocol will play an important role in reducing the cost of server-to-SAN connectivity. Deakin used several Linux servers to run VMware, which manages virtual Windows 2000 and XP sessions that host services such as Microsoft Active Directory. This approach makes it easy to back up the Windows 2000 virtual machine, because doing so only requires copying the relevant VMware .dsk file.
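
Backing up such a virtual machine amounts to a file copy. The minimal sketch below assumes the guest has been shut down or suspended first so the disk file is quiescent; the paths are hypothetical.

```
# Minimal sketch of backing up a VMware virtual machine by copying its disk
# file. Assumes the guest has been shut down or suspended so the file is
# quiescent; all paths are hypothetical.

import shutil
from datetime import date
from pathlib import Path

VM_DISK = Path("/vmware/win2k-ad/win2k-ad.dsk")    # hypothetical guest disk file
BACKUP_DIR = Path("/san/backups/vmware")           # hypothetical SAN-backed target

def backup_vm_disk(disk: Path, target_dir: Path) -> Path:
    """Copy the virtual disk to a dated backup file and return its path."""
    target_dir.mkdir(parents=True, exist_ok=True)
    dest = target_dir / f"{disk.stem}-{date.today():%Y%m%d}{disk.suffix}"
    shutil.copy2(disk, dest)                       # preserves timestamps
    return dest

if __name__ == "__main__":
    print("Backed up to", backup_vm_disk(VM_DISK, BACKUP_DIR))
```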

By using iSCSI to attach those Linux servers directly to one or more LUNs on the SAN, Deakin can retain connectivity from its dozens of Linux servers without having to purchase costly HBAs for each machine, as it has done for its high-end Sun servers.
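
On a current Linux host, the same idea of attaching a SAN LUN over plain Ethernet with no FC HBA looks roughly like the sketch below, which shells out to the open-iscsi iscsiadm tool (a later toolchain than the one available in 2003). The portal address and target name are hypothetical.

```
# Rough sketch of attaching an iSCSI LUN on a Linux host using the open-iscsi
# iscsiadm tool (a later toolchain than the one Deakin used in 2003).
# The portal address and target IQN below are hypothetical.

import subprocess

PORTAL = "10.0.0.10:3260"                             # hypothetical iSCSI portal
TARGET = "iqn.2003-07.edu.example:shark.lun0"         # hypothetical target IQN

def run(cmd: list[str]) -> str:
    """Run a command and return its stdout, raising on failure."""
    return subprocess.run(cmd, check=True, capture_output=True, text=True).stdout

# 1. Discover the targets advertised by the portal.
print(run(["iscsiadm", "-m", "discovery", "-t", "sendtargets", "-p", PORTAL]))

# 2. Log in to the target; the LUN then appears as an ordinary block device
#    (e.g. /dev/sdX) with no Fibre Channel HBA in the host.
run(["iscsiadm", "-m", "node", "-T", TARGET, "-p", PORTAL, "--login"])
```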

Ultimately, extension of SAN-based services via iSCSI will enable capabilities such as remote booting, flash-copy backups, and continual mirroring of terabytes of data. Diskless machines will run completely over iSCSI, as will file, Web, database, DNS, DHCP and many other types of servers.

"The reason we're using it is cost," says Andrew White, system programmer at Deakin. "It will save us thousands and thousands of dollars [and it works even though] we are a very distributed university. It's amazing how little throughput iSCSI really needs to work."

Better still, this flexibility doesn't come with a performance penalty: In tests, iSCSI running over a Gigabit Ethernet connection actually performed slightly faster than a direct FC connection through a QLogic 2200F HBA into the SAN. Deakin is also exploring the use of FCIP to seamlessly interlink its FC SAN fabrics between campuses over its IP network.
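
A simple way to make that kind of comparison is a sequential-write timing test. The sketch below is a generic illustration with hypothetical mount points, not Deakin's actual benchmark.

```
# Crude sequential-write throughput test, the kind of comparison that could be
# run against an iSCSI-attached LUN and an FC-attached LUN. The mount points
# are hypothetical and this is not Deakin's actual benchmark.

import os
import time

def write_throughput(path: str, total_mb: int = 512, chunk_mb: int = 4) -> float:
    """Write total_mb of data to path and return throughput in MB/s."""
    chunk = os.urandom(chunk_mb * 1024 * 1024)
    start = time.perf_counter()
    with open(path, "wb") as f:
        for _ in range(total_mb // chunk_mb):
            f.write(chunk)
        f.flush()
        os.fsync(f.fileno())          # make sure the data actually hits the device
    os.remove(path)
    return total_mb / (time.perf_counter() - start)

for label, mount in [("iSCSI LUN", "/mnt/iscsi"), ("FC LUN", "/mnt/fc")]:
    mbps = write_throughput(os.path.join(mount, "bench.tmp"))
    print(f"{label}: {mbps:.1f} MB/s")
```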

The SAN began serving files in early February, and throughout that month, Deakin's IT staff began migrating data from the individual DAS drives onto the Shark array.

Because most of Deakin's Sun servers already had dual HBAs installed, migration of the data from the DAS to the SAN environment has been simple. Deakin uses the secondary HBA to mirror data from the DAS to the SAN, and then breaks that mirror by disconnecting the HBA from the DAS.

In this way, the team has already migrated more than three-quarters of the data into the new environment, where it's being partitioned using the VSANs. "About 90% of our migration has been done online with hardly anyone noticing," says Warren.
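
The mirror-and-break sequence can be sketched on a Linux host with mdadm, as below. Deakin's Sun servers used Solaris volume management rather than these exact commands, and the device names here are hypothetical.

```
# Illustrative mirror-and-break migration using Linux mdadm, run via subprocess.
# Deakin's Sun servers used Solaris volume management rather than these exact
# commands, and all device names here are hypothetical. In practice the DAS
# disk would already need to be under volume management before mirroring.

import subprocess

DAS_DISK = "/dev/sdb1"     # hypothetical existing direct-attached disk
SAN_LUN = "/dev/sdc1"      # hypothetical SAN LUN seen through the second HBA
MIRROR = "/dev/md0"

def run(cmd: list[str]) -> None:
    print("+", " ".join(cmd))
    subprocess.run(cmd, check=True)

# 1. Build a mirror whose only member is the existing DAS disk.
run(["mdadm", "--create", MIRROR, "--level=1", "--raid-devices=2",
     DAS_DISK, "missing"])

# 2. Add the SAN LUN as the second half; mdadm resynchronizes the data onto it.
run(["mdadm", "--add", MIRROR, SAN_LUN])

# (wait for /proc/mdstat to show the resync is complete before continuing)

# 3. Break the mirror by failing and removing the DAS side; the SAN LUN now
#    carries the live copy of the data.
run(["mdadm", "--fail", MIRROR, DAS_DISK])
run(["mdadm", "--remove", MIRROR, DAS_DISK])
```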

During the data migration, backups are still being run over the network in the conventional way. But once all the DAS data has been migrated, the team will turn its attention to implementing SAN-based backup.

This was first published in July 2003
