To do cutting-edge academic research effectively, scientists and engineers need cutting-edge computing technology. Unfortunately, the most advanced technology often comes with a price tag that would break any academic budget. Faced with this conundrum, University of Texas at Arlington Professor Dennis S. Marynick and IT director Li-Chi Wang Fan began a research project of their own. Their mission: find a storage technology that would cut the cost of system upgrades.
As is appropriate for a university located in the technology corridor of Dallas/Fort Worth, UTA provides high performance computing capability to its faculty and students. Keeping this capability up-to-date and user-friendly is the job of UTA's Office of Information Technology's Academic Computing Services (ACS) department. Heading up that effort are Fan, ACS director, and Marynick, Jenkins Garrett Professor of Chemistry and chair of UTA's High Performance Computing Committee.
It's a tough job, says Marynick, because most of the faculty and students who use the facilities run computing-intensive programs rather than simple spreadsheets or word processors. They're doing cutting-edge research in quantum chemistry, computational fluid dynamics, molecular modeling, solid-state physics, finite element analysis and mathematical modeling.
In the past, says Fan, UTA took the same approach to serving users' heavy data- and numbers-crunching needs as many universities do. They threw a big box at it. UTA's old research computing facility used a high-end 16-processor Origin 2000 server from Mountain View, Calif.-based Silicon Graphics with 154G bytes of direct-attached storage.
Tying storage to servers created a need for constant investment in new storage systems. Processors typically become obsolete every 18 to 24 months. Storage systems have a much longer life span, but a direct-attached architecture ties their fate to the server's: whenever the processors needed to be upgraded, the storage system had to go, too, even though it had a lot of life left in it.
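The economics behind that complaint can be sketched with some simple arithmetic. The figures below are purely illustrative assumptions, not UTA's actual budget numbers; the point is only that decoupling storage from the server refresh cycle removes a recurring expense.

```python
# Hypothetical cost comparison of coupled vs. decoupled upgrade cycles.
# All dollar amounts are illustrative assumptions, not UTA figures.
SERVER_COST = 300_000    # assumed cost of one compute-server refresh
STORAGE_COST = 200_000   # assumed cost of one storage system
UPGRADE_CYCLES = 3       # servers refreshed every ~18-24 months

# Direct-attached: the storage is replaced along with the server each cycle.
das_total = UPGRADE_CYCLES * (SERVER_COST + STORAGE_COST)

# Networked storage: bought once, it survives successive server refreshes.
nas_total = UPGRADE_CYCLES * SERVER_COST + STORAGE_COST

print(f"direct-attached: ${das_total:,}")   # -> direct-attached: $1,500,000
print(f"networked:       ${nas_total:,}")   # -> networked:       $1,100,000
```

Under these assumed numbers, the savings grow with every additional server refresh the storage outlives.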
Examining the situation, it was obvious to Fan and Marynick that constantly buying and trashing storage systems was not cost-effective. "Moving to a more distributed approach would save money over the long term by reducing replacement expenses," says Marynick. Figuring out which distributed approach to take, however, "took a considerable effort."
The team spent several months evaluating both storage area network (SAN) and network attached storage (NAS) technologies. NAS came out on top. "NAS standards are fully developed, while those for SAN are just evolving," says Marynick. "Stable standards and deployment via proven technology were important to our decision because the primary reason for implementing centralized storage was to eliminate the need to replace it so frequently."
Having decided on the NAS approach, Fan and Marynick evaluated a number of NAS products. The field was eventually narrowed to two RAID-based NAS devices: a RAID 4 NAS from Sunnyvale, Calif.-based Network Appliance Inc. (NetApp) and NetServer, a RAID 5 NAS from Santa Clara, Calif.-based Auspex Systems Inc.
"Both devices offered good price-performance ratios, but the NetApp device had some limitations," says Marynick. The NetApp product worked well for read transactions but did not match the write transaction rate of the Auspex RAID 5 device.
The RAID 5 option offered better I/O performance because it doesn't have the dedicated parity drive bottleneck of RAID 4. In the Auspex architecture, each I/O node contains a dual-processor Intel motherboard whose processors handle distinct, logically separate functions. A network processor handles network protocols and manages the associated caches, while a file and storage processor manages the file systems and associated storage hardware. In addition, the NAS automatically balances loads across the file servers, and mounts can be moved dynamically between servers.
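The parity bottleneck Marynick describes comes down to where each stripe's parity block lives. A minimal sketch, assuming a hypothetical four-disk array, shows the difference: RAID 4 pins every stripe's parity to one dedicated disk, so every write in the array must touch that disk, while RAID 5 rotates parity across all the disks.

```python
# Sketch of parity placement in RAID 4 vs. RAID 5.
# NUM_DISKS is an assumption for illustration, not the Auspex configuration.
NUM_DISKS = 4

def parity_disk_raid4(stripe: int) -> int:
    """RAID 4: parity for every stripe sits on the same dedicated disk,
    so that disk is written on every stripe update (the bottleneck)."""
    return NUM_DISKS - 1

def parity_disk_raid5(stripe: int) -> int:
    """RAID 5 (left-symmetric-style rotation): parity shifts one disk
    per stripe, spreading the parity-write load across all disks."""
    return (NUM_DISKS - 1 - stripe) % NUM_DISKS

for stripe in range(8):
    print(f"stripe {stripe}: RAID4 parity on disk {parity_disk_raid4(stripe)}, "
          f"RAID5 parity on disk {parity_disk_raid5(stripe)}")
```

Because RAID 5 parity writes are distributed, no single drive saturates under a write-heavy load, which is consistent with the better write transaction rate the UTA team observed.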
Fan and Marynick selected an Auspex NetServer with one network processor that provides two 1Gbit/sec connections and has fourteen 36G byte drives. The NetServer resides in a new research computing system that will go live in January. The heterogeneous environment consists of the NetServer, 34-processor Intel servers running the Red Hat Linux operating system and 32-processor Compaq Alpha servers running Compaq Tru64. The servers and storage are connected by a dedicated high-speed local area network.
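In a mixed Linux/Tru64 environment like this, compute servers would typically reach the NAS over NFS. The configuration fragment below is a generic sketch of what such a mount might look like on one of the Red Hat Linux nodes; the host name and export path are placeholders, not UTA's actual configuration.

```shell
# Hypothetical /etc/fstab entry on a Red Hat Linux compute server.
# "netserver" and /export/research are placeholder names for illustration.
#
#   netserver:/export/research  /research  nfs  rw,hard,intr  0 0

# Or mounted by hand (same placeholder names), as root:
mount -t nfs netserver:/export/research /research
```

Because every server, regardless of operating system, mounts the same exports over the LAN, any individual server can be replaced without touching the storage, which is exactly the decoupling the team was after.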
"Our new architecture will provide substantial improvements in performance while reducing our replacement expenses," says Fan. Now she can upgrade processors and/or add different operating systems without replacing the storage device.
Being able to build a "creative" system has been very satisfying, Fan says. "Traditionally, universities spend a million dollars and get a big, old box," she says. "If we had to do it again, I don't think we would do anything differently."
For additional information about Auspex, visit its Web site.
To learn more about University of Texas at Arlington, visit its Web site.