Red Hat’s Gluster software has been rebranded as Red Hat Storage following the $136 million acquisition. During Red Hat’s earnings call last Wednesday, CEO Jim Whitehurst described Red Hat Storage as a competitor to EMC Isilon’s scale-out NAS and said he expects it to be popular in high-performance computing (HPC). “It’s hard to come up with a direct competitor because there’s not another software-only storage solution like that,” he said of Red Hat Storage. He added that Red Hat plans to beef up engineering and sales around the storage software.
Before Red Hat acquired it, Gluster was a standalone company that sold its product as GlusterFS. Gluster’s pre-acquisition customers fit the profile Whitehurst described: HPC shops looking for software scale-out NAS to run on commodity hardware.
The Cornell University Institute for Biotechnology and Life Science Technologies uses Red Hat’s Storage Software Appliance to manage the large volumes of data generated by research projects such as DNA sequencing. The institute runs the software appliance on file servers and DataDirect Networks arrays that the Cornell Center for Advanced Computing (CAC) already had in-house.
James VanEe, the institute’s IT director, said he needed storage that would let him keep up with the rapidly expanding data his team produces. The institute’s capacity grew by 150 TB over 18 months before slowing, probably temporarily, over the last six months.
VanEe said he was looking for a method of implementing a global namespace so his researchers could access data across nodes. His previous storage filers could only handle 16TB per node.
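That global namespace is what a GlusterFS distributed volume provides: directories (bricks) on several servers are aggregated into a single mount point, so clients see one pool of capacity rather than a separate 16 TB file system per node. A minimal sketch of how that is set up with the standard `gluster` CLI; the server names, paths, and volume name below are hypothetical, not from the article:

```shell
# Aggregate bricks from three servers into one distributed volume.
# Clients mounting the volume see a single global namespace that
# spans all nodes. (Server names and paths are illustrative.)
gluster volume create research-data \
    server1:/export/brick1 \
    server2:/export/brick1 \
    server3:/export/brick1

gluster volume start research-data

# A client mounts the volume once and sees the aggregate capacity:
mount -t glusterfs server1:/research-data /mnt/research
```

These commands require a running GlusterFS trusted pool, so they are shown here as an administrative sketch rather than a copy-paste recipe.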
“The core mission of our group is to generate data,” he said. “I was suffering from the same challenges that anybody in this field was experiencing, which was how do we manage this volume of data?”
He considered using a NetApp FAS array that his group had for other data, and he looked at EMC Isilon before testing Gluster’s free open source product. He said CAC systems consultant Steven Lee ran a test on existing servers and quickly pronounced GlusterFS ready for prime time. The institute upgraded to the supported version, running it on file servers attached to CAC’s DataDirect Networks S2A storage platform.
“It required a minimal hardware investment because we could use what we already had,” VanEe said. “A couple of days after Steven set it up, he came back to me and said, ‘This looks easy, I’m ready to roll it out.’”
VanEe said his old storage system could fit about 10 experiments on it, “then you’d have to provision another file system, and another. A large global namespace and scalability were important to me.”
VanEe said his group has since added disk to the DataDirect system to keep up with the growth rate of 50 TB every six months.
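Growth like that is typically absorbed in GlusterFS by adding bricks to the existing volume and then rebalancing files across the enlarged namespace, with no new mount points for users. A sketch using hypothetical volume and server names:

```shell
# Expand an existing volume with a brick from a new server,
# then redistribute existing files across the added capacity.
# (Volume and server names are illustrative.)
gluster volume add-brick research-data server4:/export/brick1
gluster volume rebalance research-data start

# Check progress of the redistribution:
gluster volume rebalance research-data status
```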
“That’s leveled off because we’re archiving or deleting older data,” he said. “But we have grants in for new instrumentation and if they’re funded, we have to be ready to scale.”
VanEe said the type of number-crunching his group does would place it under the “big data” umbrella, so performance is also important to him. “Along with DNA sequencing, other research groups are customers of mine and this file system is exported to them,” he said. “So they might have a large memory multicore server to compute without a lot of local storage. They’re working on data sets that might be terabytes in size. Performance is quite good. The fact we can scale out the system and increase performance is also nice.”
VanEe said his decision to go with Red Hat and Gluster instead of a major storage array was made easier because his group was looking for scalability and performance with little regard for traditional storage management needs such as data protection. “When we prioritized what was most important, DR was pretty low on the list,” he said. “If we had deployed Isilon, we probably would’ve built it out with a replication site and snapshotting features. We can do snapshotting with Red Hat Storage, but I haven’t implemented that yet. I’m using it for bulk storage running on robust hardware, but not set up with protection that would be needed for other types of data.”
He said he was “shocked” when Red Hat acquired Gluster, and was concerned about possible pricing changes but he thinks the Red Hat model will work well with Gluster’s storage. “I’m a fan if Red Hat puts more resources in,” he said. “I’m happy with the support we’ve received and I wouldn’t expect that to get worse. I like the Red Hat model as far as supporting free and open software while adding enterprise support.”
Red Hat receives praise from Pattern Energy Group
San Francisco-based Pattern Energy Group LP was another Gluster customer before the acquisition, and now uses the Red Hat Storage Software Appliance for its weather prediction data.
Charles Ringley, manager of atmospheric modeling at Pattern Energy, said his company first installed GlusterFS on 16 Hewlett-Packard blade servers in 2007 and increased that to 32 blades the following year. Pattern uses InfiniBand connectivity for low-latency, high-bandwidth performance.
Ringley said Pattern was among Gluster’s first commercial customers.
“We use it specifically for our weather resource and modeling,” Ringley said. “It’s an I/O intensive application with a lot of reading and writing. We do a lot of number crunching and it runs 24 hours a day, so we needed high reliability and fast performance. We couldn’t find an existing infrastructure to deal with it. Since we were starting fresh, we decided to take a cutting edge technology and run with it.”