Feature

Virtual SANs put to the test


Virtual storage area networks (VSANs) are the latest tool available to storage administrators who need better ways of managing the equipment attached to increasingly massive storage arrays. At Australia's Deakin University, VSAN technology has become a major component of an ongoing effort to retire direct-attached storage (DAS) and centralize the school's immense data holdings on a single SAN.

Deakin's largest campus is located at Burwood, in the eastern suburbs of Melbourne. Melbourne is a city of 3.5 million on Australia's southeast coast; it sits on the north shore of 25-mile-wide Port Phillip Bay. Approximately 11,000 students attend the Burwood campus and a smaller site in nearby Toorak. Other students attend smaller satellite campuses that are separated from the main campus by up to 200 miles; more than 12,500 other students are pursuing Deakin degrees via the Web. All told, Deakin served more than 30,000 students last year.

Replicating over water
Supporting such a large student body is no small task, particularly when those students are distributed over many campuses hundreds of miles apart. Historically, meeting their needs has required a significant investment in DAS: Some 35TB of data sat on 53 Sun Microsystems Sun Fire 12K, E6500, E4000 and E3500 servers scattered among the university's campuses.


Lessons learned
Strength in numbers doesn't apply to direct-attached storage (DAS). At high volumes, in fact, it becomes downright difficult to manage.
Standardization requires a lot of storage. Deakin's rollout of a standard operating environment to thousands of desktop PCs, combined with standard e-learning storage services for 4,500 courses, quickly multiplied its storage needs.
A virtual storage area network (VSAN) works just like a virtual local area network (VLAN), but for storage. Think of it this way, and it isn't hard to see how VSANs can serve the right data to the right applications without copying it all over the place (see the configuration sketch after this list).
Vendors love early adopters. In Deakin's case, that meant IBM and Cisco Systems were willing to go all out to make sure the VSAN worked as it should. As with any project, pick a partner that knows its stuff.
Don't forget your fiber. Installing a high-density switched SAN environment means you're going to have lots of fiber running around your data center. Realizing that its typical rack had a dozen fiber optic cables running into it, Deakin installed a patch panel so that fiber could be installed neatly and kept that way.
Physical protection counts, too. The massive storage equipment forced Deakin to upgrade the power feed, air conditioning and even the flooring in order to support the electricity consumption and weight of the new equipment.
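
To make the VLAN analogy concrete, here is a minimal sketch of how a VSAN is carved out on a Cisco MDS-class director, the family of Cisco switch that implements VSANs. The VSAN number, name and port below are illustrative placeholders, not Deakin's actual configuration:

    switch# configure terminal
    switch(config)# vsan database
    switch(config-vsan-db)# vsan 10 name ELEARNING
    switch(config-vsan-db)# vsan 10 interface fc1/1
    switch(config-vsan-db)# exit
    switch(config)# exit
    switch# show vsan membership

Once a Fibre Channel port is assigned to VSAN 10, the devices behind it get their own fabric services, such as zoning and name resolution, isolated from every other VSAN on the same physical switch.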

For three years, the primary file server had been an eight-CPU Sun E3500 with 8GB of RAM and 10TB of attached storage in 20 Sun A5xxx series disk arrays, typically supporting 1,000 to 1,200 concurrent administrative, staff and student users. The E3500 was located in the large data center at the university's Waterfront campus in Geelong, while 12 more Sun A5xxx arrays provided an additional 6.5TB of disk space at the four other campuses.

Deakin's IT team faced increasing pressure to accommodate rapid growth in demand for storage space, particularly since a recent desktop standardization project put standard operating environments on more than 4,000 Deakin desktops. Although users can store data on their local hard drives, Deakin uses Novadigm's Radia desktop management tool to ensure files are also stored on the centralized storage. With an average of more than 100MB of data per networked PC, that project alone increased storage requirements by 4TB.
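
Radia's internal mechanics are its own; purely as a schematic illustration of the idea the article describes, the Python sketch below mirrors a PC's user files to a central store whenever the local copy is newer. All names and paths are hypothetical:

    # Schematic sketch only -- this is NOT Novadigm Radia's mechanism or API,
    # just an illustration of mirroring desktop files to central storage.
    import shutil
    from pathlib import Path

    def mirror_to_central(local_dir: Path, central_root: Path, pc_name: str) -> int:
        """Copy any file that is newer than its central copy; return bytes moved."""
        moved = 0
        for src in local_dir.rglob("*"):
            if not src.is_file():
                continue
            dest = central_root / pc_name / src.relative_to(local_dir)
            if not dest.exists() or src.stat().st_mtime > dest.stat().st_mtime:
                dest.parent.mkdir(parents=True, exist_ok=True)
                shutil.copy2(src, dest)  # preserves timestamps for the next comparison
                moved += src.stat().st_size
        return moved

    # Hypothetical usage: mirror one desktop's documents to the central share.
    # mirror_to_central(Path("C:/Users/staff/Documents"), Path("//san/desktops"), "pc-0421")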

Data storage needs have also skyrocketed in recent years with the rapid uptake of online learning--which requires storage of multimedia course materials for students in more than 4,500 courses. To store these materials, Deakin relies on a 1.5TB Oracle database supporting Callista, a Deakin-developed student management system recently purchased by Oracle for worldwide distribution. Callista runs in a logically partitioned domain on Deakin's Sun Fire 12K.

Historically, Deakin delivered these applications to remote campuses from clusters of servers in its Waterfront and Burwood data centers. Servers at each site were backed up to tape, and the backed-up data was then replicated to DAS at the other site, so Waterfront's backups lived at Burwood and vice versa.

Approximately 5,000 of Deakin's students are spread across campuses at Waurn Ponds and Waterfront, suburbs of the satellite city of Geelong. Because WAN bandwidth is expensive outside of Australia's largest cities, Deakin links the Geelong sites to Burwood with a private microwave network that pushes 300 Mb/s across the waters of Port Phillip Bay.

That network is the lifeblood of the university, shuttling mountains of application data, voice over IP (VoIP) and backup traffic between sites. With around 50 Mb/s of the microwave bandwidth typically left for backup after other applications have taken their share, data was being replicated between Waterfront and Burwood for up to 23 hours a day.
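
As a rough sanity check (the arithmetic and the assumption of full utilization are ours, not Deakin's), here is what a 50 Mb/s backup share can actually move in a 23-hour window:

    # Back-of-the-envelope check on the replication window. Assumption (ours):
    # the 50 Mb/s backup share is fully utilized for the entire 23-hour window.
    backup_mbps = 50          # megabits per second available for backup traffic
    window_hours = 23         # replication window per day

    bytes_per_second = backup_mbps * 1_000_000 / 8
    gb_per_day = bytes_per_second * window_hours * 3600 / 1e9
    print(f"~{gb_per_day:.0f} GB per day")   # prints ~518 GB

In other words, even a nearly round-the-clock window moves only about half a terabyte a day, a sliver of Deakin's 35TB estate, which is why the replication schedule left so little headroom.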

Although this approach worked, it imposed major inconveniences on Deakin's community. For example, application data became unavailable whenever host servers were brought down for regularly scheduled maintenance. It also perpetuated the management and consistency problems intrinsic to DAS environments.

"Our strategy up until this point has been DAS storage," says Craig Warren, Deakin's desktop and services manager. "We had all the standard problems associated with DAS: Storage was managed in silos, it wasn't easy to provide short-term storage needs, and even politics was an issue. We were managing it as islands, so we'd clean up one file server and have to move on to the next one, and the next."

As the volume of data the school was managing grew, Deakin investigated hierarchical storage management (HSM) solutions that it hoped could move old and little-used data onto tape. But the solutions were "a bit wanting," says Warren, because they weren't particularly efficient at delivering small files to users quickly. Another way of getting data off the system, in which students select the files they want through a Web page and receive them on CD before the files are deleted from the server, was expected to be only a moderate success, nowhere near enough to counter the growth in demand for storage.

It soon became clear that the best way for the university to expand its storage strategy was to consolidate the data from its distributed servers onto a single, scalable storage area network (SAN).

This was first published in July 2003
