This article can also be found in the Premium Editorial Download "Storage magazine: Evaluating the benefits of IP SANs."
Australia's biggest SAN
The impetus behind the SAN migration was to provide a centralized,
consistent repository for all kinds of administrative, research and everyday data. Backup would become simpler because Deakin's terabytes of data would be consolidated on the SAN, rather than scattered across the university network. And thanks to logical partitioning of the SAN, it would be possible to accommodate the widely varying requirements of the university's many communities of interest.
Last November, Warren's department began evaluating the technology that would make up its SAN. It quickly settled on an IBM TotalStorage Enterprise Storage Server (Shark) 800 Turbo, which offers a capacity of 55.9TB, but was installed with just 30TB to start. Also to be connected were two SuperDLT-based Quantum tape libraries--a 500-cartridge P7000 at the Waterfront facility, and a 250-cartridge P3000 unit in Burwood.
SAN switching for the future
Such a dramatic increase in storage capacity doesn't come cheap, but Deakin's IT team was able to justify the expenditure to senior managers by pointing out a simple fact: At the rate its storage demand was growing before the SAN upgrade, Deakin's DAS costs would have exceeded the price of a completely new SAN within two years, and the SAN would provide far more storage.
It was also clear that the environment would benefit from having many of Deakin's servers directly attached to the SAN. That meant linking up the Sun systems--as well as approximately 50 Red Hat Linux servers used for load balancing and other sundry infrastructure tasks--to the storage network. That was an expensive proposition using conventional Fibre Channel (FC) connectivity, but that problem was solved when Deakin began investigating options involving IP-based iSCSI.
Because it already had a long-standing relationship with Cisco Systems for its other networking equipment, Deakin preferred to source appropriate iSCSI technology from Cisco, rather than having to establish a new relationship with an FC specialist such as McData or Brocade. Working with Cisco, the Deakin team explained its need for direct server-to-SAN connectivity and had nearly settled on the Cisco SN5428 before it learned of the Cisco MDS 9000, a new family of multilayer directors and fabric switches that combine iSCSI with support for Fibre Channel over IP (FCIP).
"We've been using low-cost Intel-based Linux servers for load balancing," says Warren. "iSCSI was quite attractive to us because we wanted large amounts of storage on these boxes, but we didn't really want to be paying the price of Fibre Channel HBAs [host bus adapters]. So we were looking at a SAN front-ended with some storage network routers. Since we were looking for both an FC switch and [iSCSI] storage network router, we got very interested in the MDS line."
By January, Deakin was putting its new SAN equipment through its paces. Within five weeks, the basic SAN was up and running, using a pair of Cisco MDS 9509 multilayer directors to mediate between its large server environment, IP WAN and FC-attached IBM Shark storage and Quantum tape silos.
Deakin's experiments with iSCSI confirmed that the protocol will play an important role in reducing the cost of server-to-SAN connectivity. Deakin used several Linux servers to run VMware, which manages virtual Windows 2000 and XP sessions that host services such as Microsoft Active Directory. This approach makes it easy to back up the Windows 2000 virtual machine, because doing so only requires copying the relevant VMware .dsk file.
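The backup approach described above can be sketched as a small shell routine. The paths, the VM name and the helper function are hypothetical illustrations, not Deakin's actual procedure; note that a .dsk file copied while the guest is running may be inconsistent, so the guest should be powered off or suspended first.

```shell
#!/bin/sh
# Sketch: back up a VMware guest by copying its virtual disk (.dsk)
# file. Paths and names are hypothetical. Power off or suspend the
# guest first so the disk file is in a consistent state.
backup_vm_disk() {
    src="$1"    # path to the guest's .dsk file
    dest="$2"   # backup destination directory
    cp "$src" "$dest/$(basename "$src").$(date +%Y%m%d)"
}
```

A hypothetical invocation might look like `backup_vm_disk /vmware/win2k-ad/win2k-ad.dsk /backup/vmware`, leaving a date-stamped copy of the virtual disk in the backup directory.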
By using iSCSI to attach those Linux servers directly to one or more LUNs on the SAN, Deakin can retain connectivity from its dozens of Linux servers without having to purchase costly HBAs for each machine, as it has done for its high-end Sun servers.
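On a Linux host, attaching to a SAN LUN over iSCSI looks roughly like the command fragment below. This is a sketch using the open-iscsi tooling found on current distributions, not the 2003-era initiator Deakin would have used, and the portal address and target IQN are hypothetical.

```shell
# Sketch: attach a Linux server to a SAN LUN over iSCSI, using the
# open-iscsi tools. The portal address and target IQN below are
# hypothetical placeholders.

# Discover the targets exported by the iSCSI gateway
iscsiadm -m discovery -t sendtargets -p 10.0.0.10:3260

# Log in to a discovered target; the LUN then appears to the host
# as an ordinary SCSI block device (e.g., /dev/sdb)
iscsiadm -m node -T iqn.2003-01.edu.example:storage.lun0 \
         -p 10.0.0.10:3260 --login
```

Because the LUN surfaces as a standard block device, it can be partitioned, formatted and mounted like local disk -- which is what lets a server use SAN storage without a Fibre Channel HBA.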
Ultimately, extension of SAN-based services via iSCSI will enable capabilities such as remote booting, flash-copy backups, and continual mirroring of terabytes of data. Diskless machines will run completely over iSCSI, as will file, Web, database, DNS, DHCP and many other types of servers.
"The reason we're using it is cost," says Andrew White, system programmer at Deakin. "It will save us thousands and thousands of dollars [and it works even though] we are a very distributed university. It's amazing how little throughput iSCSI really needs to work."
Better still, this flexibility came without a performance penalty: In Deakin's tests, iSCSI--running over a Gigabit Ethernet connection--actually performed slightly faster than a direct FC connection (via a QLogic 2200F HBA attached directly to the SAN). Deakin is also exploring the use of FCIP to seamlessly interlink its FC SAN fabrics between campuses over its IP network.
The SAN began serving files in early February, and throughout that month, Deakin's IT staff began migrating data from the individual DAS drives onto the Shark array.
Because most of Deakin's Sun servers already had dual HBAs installed, migration of the data from the DAS to the SAN environment has been simple. Deakin uses the secondary HBA to mirror data from the DAS to the SAN, and then breaks that mirror by disconnecting the HBA from the DAS.
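Since the servers in question are Suns, the mirror-and-break migration can be sketched with Solaris Volume Manager commands. The device names are hypothetical, and the article doesn't give Deakin's exact procedure -- it describes breaking the mirror by disconnecting the HBA from the DAS, for which the logical equivalent is a detach.

```shell
# Sketch: migrate data from DAS to a SAN LUN by mirroring, then
# breaking the mirror. Device names are hypothetical.

# Build a submirror on the existing DAS slice, and a one-way
# mirror on top of it
metainit d11 1 1 c1t0d0s0
metainit d10 -m d11

# Build a submirror on the SAN LUN (reached through the second
# HBA) and attach it; SVM resyncs the data onto the SAN
metainit d12 1 1 c3t0d0s0
metattach d10 d12

# Once the resync completes, break the mirror by detaching the
# DAS side; the data now lives entirely on the SAN
metadetach d10 d11
metaclear d11
```

The appeal of this method is that the filesystem stays mounted and in use throughout -- only the underlying copies change -- which matches the team's claim that most of the migration happened online.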
In this way, the team has already migrated more than three-quarters of the data into the new environment, where it's being partitioned using VSANs (virtual SANs). "About 90% of our migration has been done online with hardly anyone noticing," says Warren.
During the data migration, backups are still being run over the network in the conventional way. But once all the DAS data has been migrated, the team will turn its attention to implementing SAN-based backup.
This was first published in July 2003