Confirming the theory of relativity requires brilliant minds and dense storage.
The Laser Interferometer Gravitational-wave Observatory (LIGO) needed petabyte-scale storage systems to support the complex laser detectors used to confirm a key prediction of Albert Einstein's general theory of relativity. The LIGO project, which succeeded in its mission in early 2016, installed high-capacity and hybrid flash arrays from Nexsan Inc. to store 6.4 PB of data spanning 1.7 billion files of raw information.
The LIGO observatories, built and operated by Caltech and MIT, were designed to directly observe gravitational waves of cosmic origin. The project's goal was to test a prediction of the general theory of relativity that Einstein first published in 1916.
The LIGO project uses twin laser detectors located in Livingston, La., and Hanford, Wash., for the large-scale physics experiment. In February 2016, the project announced the detectors had picked up gravitational waves produced when two black holes merged into one massive black hole. The detection confirmed that gravitational waves ripple through space and time, just as Einstein's theory predicted.
The research project built a central data archive to store massive volumes of data comprising billions of files of raw instrument data and analytics processing information.
LIGO needed to replace a 15-year-old storage infrastructure that consisted of a dozen Sun Microsystems servers running Oracle Hierarchical Storage Manager (HSM). The back end was supported by StorageTek tape, with a front end of disk for caching.
"We had been sitting with this (old storage) for a while," said Stuart Anderson, senior staff scientist at Caltech. "It was old and it was running 200GB disk drives and it really needed to be updated. It was getting too old to support (our requirements). It had reached the end-of-life stage."
Anderson's group started with one Nexsan SATABeast, a block storage system built for high-capacity, high-density workloads. SATABeast arrays are most commonly used for backup, archiving and digital video surveillance.
The research organization also deployed about six SATABoy systems and eventually added 20 Nexsan E60vt models, part of Nexsan's E-Series hybrid arrays that mix hard disk drives and solid-state drives (SSDs). The hybrid arrays helped the LIGO project meet the high-performance demands of its Unix-based servers.
"We needed a new disk layer so we had to pick a new product," Anderson said. "We installed the SATABeast and integrated into the infrastructure and we tried to break it, but it just worked. Then we got SATABoy systems and eventually moved to the B Series E48 systems."
Anderson said the LIGO project will continue to use this Nexsan system for research that explores colliding black holes and neutron stars.
"This was the most optimal solution that we found in terms of cost per performance, reliability and support," he said. "The solution met all our criteria."