The University of Utah Scientific Computing and Imaging Institute is taking a chance on storage startup Qumulo to replace its legacy NAS for primary storage.
Citing the startup's advantages in cost, linear scaling and built-in data analytics, the research facility is phasing out its EMC Isilon NAS in favor of Qumulo QC24 hybrid storage appliances running Qumulo Core data-aware scale-out NAS software.
Qumulo founders Aaron Passey, Neal Fachan and Peter Godman were among the early Isilon engineering team, and earned dozens of patents for helping to create Isilon's scale-out architecture. EMC acquired Isilon for $2.25 billion in 2010.
The QC24 is a 1U commodity appliance with 24 TB of hard disk drive capacity fronted by 1.6 TB of solid-state drives. Qumulo Core is available as a software-only product or bundled with Qumulo-branded commodity appliances. The Scientific Computing and Imaging (SCI) Institute is running a four-node Qumulo cluster, with plans to expand.
Qumulo Core supports NFS and SMB file storage, and includes REST management APIs for object storage. Assistant director of IT Nick Rathke said the SCI Institute is planning to add "one or two" Qumulo nodes every few months to fully manage 500 TB of primary storage.
"When you're dealing with that many terabytes, just finding your data is a big challenge," Rathke added. "In the past, storage was just a big black box for us, with data in and data out. The economics of storage is forcing us to do more proactive management."
Qumulo Core brings analytics for storage management
The SCI Institute is one of eight permanent research institutions at the University of Utah in Salt Lake City. A team of faculty and undergraduate researchers uses federal grants to conduct experiments in imaging analysis, scientific computing and visualization -- primarily for the life sciences and medical field.
The work involves parallel processing of massive data sets with high I/O demand. Rathke said Qumulo storage has reduced image processing from months to days, and provides real-time analysis on storage consumption and usage patterns.
The University of Utah SCI Institute's Qumulo implementation at a glance
- Provides built-in analytics to cut image-processing time
- Streamlines movement of large working sets
- Offers predictable storage costs
- Scales performance and capacity linearly
- Supports a phased-in approach to 500 TB of primary storage
Qumulo Core software is distributed as a single software license that runs on top of Linux hybrid flash storage. Qumulo adds storage features and functionality in rolling updates at no charge. The SCI Institute doesn't have to license components separately, which helps keep storage costs predictable.
"Isilon was outstanding for us, but Qumulo's all-you-can-eat licensing model really fits our budgeting needs," Rathke said. "With the way budget funding has changed, we need to understand what our storage is and how it's being used -- much more so than we did three to five years ago. We want to maintain the same scale-out that we [have] with Isilon. I can buy Qumulo storage in smaller increments and at a better price point."
Qumulo deployed in beta, then production
Rathke said Qumulo provides many of the same advantages as Isilon, such as the ability to write data to multiple nodes for failover and load balancing.
Although Isilon provided the Institute with scalability and high performance, Rathke said he was unable to linearly scale capacity and performance at the same time. Adding a new Isilon box required additional compute power and network bandwidth to keep pace.
Qumulo Core software recognizes when nodes join the cluster and automatically adds them nondisruptively. Qumulo storage analytics help the SCI Institute reclaim capacity by identifying workloads that could be archived or removed.
"We had some faculty members who asked why we didn't just delete old projects to free up storage capacity," Rathke said. "But those are two different issues: data being old and data being important. Which old data do we delete? Qumulo gives us near real-time analytics, so a researcher knows how much storage is being consumed by project X and how much project X has grown over the last year. Those kinds of cost justifications are becoming more and more important as grant money gets tight."
Finding new uses for Isilon NAS
Due to the nature of its work, SCI does not virtualize workloads or run storage in the cloud. The institute was an early beta tester of Qumulo storage, running a four-node system initially to streamline sharing of dense workloads.
"We were really having some challenges moving data around for one particular project that was 40 TB. I told researchers, 'Hey, we have an opportunity to get this experimental system from Qumulo.' They said, 'Go for it,' and the Qumulo has worked superbly for them," Rathke said.
The SCI Institute is considering launching a disaster recovery site at a university-owned data center in downtown Salt Lake City. For now, data on Isilon appliances is backed up to a Commvault disk-based appliance and offloaded to an Oracle StorageTek SL500 tape library. Backup of research data varies by individual project.
Licenses for the Isilon NAS appliances remain in force for two more years, at which point Rathke plans to repurpose the hardware for special uses. "We may end up doing our sync or some other backup mechanism between our Qumulo system and our Isilon system," he said. "The Isilon may become our peer [machine] for data replication. We'll find some use for it."