Because Linux servers with direct-attached Fibre Channel (FC) disks couldn’t keep pace with the rapid growth of file data storage required by UCLA’s Institute for Digital Research and Education (IDRE) compute cluster users, the institute turned to high-performance network-attached storage (NAS) four years ago.
IDRE’s systems store a broad range of data, from home directories to millions of smallish genomics files, to the huge files of physicists and other scientists doing computational work. UCLA has three BlueArc Corp. Titan 3200 servers from Hitachi Data Systems (HDS) with a total capacity of approximately 500 TB. It also stores 240 TB on a Panasas ActiveStor 11 (PAS 11) and another 160 TB on more expensive PAS 12 systems.
Scott Friedman, chief technologist at UCLA’s IDRE, said he plans to add another 540 TB to the PAS 11, which will increase the total capacity of the Panasas system to 940 TB.
IDRE began its ongoing journey to scale-out NAS with BlueArc Titans for its compute cluster. Over time, it has made the powerful data storage systems available to other campus users, including the athletic department with its extensive library of game video footage.
“What they’re really good at is lots of IOPS, lots of small operations, lots and lots of clients,” Friedman said of the Titans, which are now sold by HDS following its acquisition of BlueArc last September.
HDS claims the Titan 3200 can deliver 200,000 IOPS per node, scale to eight nodes per cluster and manage 16 PB of data under a single namespace. But each node is currently limited to two 10 Gigabit Ethernet (10 GbE) ports.
File data storage requirements cause bandwidth issues
Unfortunately, the bandwidth wasn’t always adequate for some of UCLA’s research scientists. With its three Titan 3200 nodes, IDRE gets good performance only when the heaviest users aren’t choking the links to the heads. Even just one researcher can overload the connection to the Titans, according to Friedman.
“It’s a network limitation. We aggregated two 10 Gigabit links to each head, and they're often saturated,” he said. “That’s the limit of bandwidth that you can feed into one of the heads, and it’s just not cost effective for us to keep adding those heads because it doesn’t scale the way we need it to scale.
“If I’m opening and then writing to a huge file, and I’m constrained on the network, it doesn’t matter how many IOPS they can do," he continued. "We can’t take advantage of it. It’s not that there’s anything wrong with BlueArc. It means it’s just not the right tool for the job for that user.”
Fred Oh, a senior product marketing manager for the Hitachi NAS product line, agreed that network bandwidth is the limiting factor, whether to the file-sharing server on the front end or to the storage array on the back end. He said HDS intends to support 40 GbE connections in its next-generation architecture, due later this year.
“We’ve been waiting for a while now for the network bandwidth to catch up to our NAS head horsepower,” Oh said.
Friedman, at UCLA’s IDRE, hoped Parallel NFS (pNFS) would address the problem. The long-awaited protocol promised a performance boost by providing parallel, rather than serial, client access to the file data and metadata distributed across multiple clustered storage devices.
Prior to the HDS acquisition, BlueArc announced plans to make pNFS available in the first half of this year. But HDS' Oh said the specification still needs work. He declined to give a timetable for its public release, saying only that HDS will bring pNFS to market when it’s “ready and reliable.”
Panasas products offer scaling
Last spring, UCLA’s IDRE decided it couldn’t wait any longer for BlueArc’s implementation of the pNFS standard (part of NFS version 4.1). It turned to Panasas Inc., which helped develop pNFS but has yet to support the standard in its own products. Panasas does, however, offer DirectFlow, its proprietary precursor to pNFS.
In April 2011, UCLA deployed a PAS 12 with four 40 TB disk shelves, and last October it added a less expensive PAS 11 with four 60 TB disk shelves. All servers in UCLA’s Linux-based compute cluster run NFS and Panasas’ DirectFlow clients, which IDRE installs via an automated provisioning mechanism.
“What we really needed was the scalability, and I don’t care whether it’s NFS or what the protocol is,” Friedman said. He added that, in the end, it’s more important to have a product suited to horizontal scaling, “where, as our needs grow, the scalability of the bandwidth can increase with it. And that works a lot better in the Panasas case for us than BlueArc.”
Each of UCLA’s PAS shelves has a 10 GbE link. The PAS 12s, which IDRE uses only as scratch space, offer write throughput of 1.6 GBps, and the PAS 11s can write data at 950 MBps.
When IDRE added a second shelf to its PAS 11 and used the DirectFlow clients, the system rebalanced and supplied 1.8 GBps of write bandwidth, spreading the load over all 20 blades instead of only the 10 blades in a single shelf, Friedman said.
Tests have also shown performance scaling almost perfectly across the four shelves IDRE has tried so far, according to Friedman. IDRE has no plans to spread Panasas’ object-based parallel file system over more than five or six disk shelves.
“As long as we keep our infrastructure up to support each new shelf that we add, we get almost linear scaling in the performance,” he said. “We’ve seen the scalability match the advertising, so far.”
Friedman expressed hope that once the users causing the problems on the BlueArc Titans move over to the Panasas NAS systems, those remaining on the BlueArcs will have a better experience.
“To us, the systems serve two different purposes,” Friedman said. “We have a really wide array of use cases here, and there's no one vendor that can cater to all the needs that we have.”