BOSTON -- Future-proofing data with open source technology was the common theme among the presenters during the closing keynote at the Linux Vault storage conference last week.
NetApp senior vice president Brian Pawlowski, who spoke to attendees about data preservation, said the biggest challenges for storage professionals include determining which data is important enough to keep and ensuring that, years down the road, they will still be able to access the physical media and file formats on which that data is stored.
To cope with this, Pawlowski explained that in addition to backing up and creating redundant copies, it's important to continually migrate important data forward to newer technologies. He said open standards give developers control of the technology that allows access to the data.
"Open source solutions have critical architectural advantages because while closed-source vendors are either tied to the standards they invested in years ago or have to invest in building new standards, open source solutions don't have that issue at all," said Sage Weil, Ceph principle architect at Red Hat.
Jorge Campello, the global director of system solutions at HGST, previewed shingled magnetic recording (SMR), a hard disk drive technology that aids efficient long-term data preservation. SMR uses a design similar to shingles on a roof -- when data is written to a hard disk it creates a magnetic track that overlaps the previous one to provide a higher disk drive density. Seagate and HGST have SMR drives on the market.
Campello explained host-managed SMR drives give an advantage to the open source technology community. Because they rely on the operating system to determine what part of the drive to write to, it's easier to integrate with open source technology.
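The contract Campello described can be illustrated with a toy model. In a host-managed SMR drive, the disk is divided into zones of overlapping tracks; because writing a track disturbs the shingled track beside it, each zone accepts only sequential appends at a write pointer, and in-place updates require resetting the whole zone. The following sketch is a simplified, hypothetical Python model of that host-side bookkeeping, not a real drive interface; all class and method names are invented for illustration.

```python
class HostManagedSMR:
    """Toy model of a host-managed SMR drive with fixed-size zones.

    The host (operating system or filesystem) tracks a write pointer per
    zone and decides which zone each write lands in -- the drive itself
    does no remapping, which is what makes host integration an OS problem.
    """

    def __init__(self, num_zones=4, zone_size=8):
        self.zone_size = zone_size
        # One write pointer per zone: the next writable block offset.
        self.write_pointer = [0] * num_zones
        self.zones = [[None] * zone_size for _ in range(num_zones)]

    def write(self, zone, data):
        """Append data blocks at the zone's write pointer (sequential only)."""
        wp = self.write_pointer[zone]
        if wp + len(data) > self.zone_size:
            raise IOError("zone full: host must pick another zone")
        for i, block in enumerate(data):
            self.zones[zone][wp + i] = block
        self.write_pointer[zone] = wp + len(data)
        return wp  # starting block of this write

    def reset_zone(self, zone):
        """Overwriting in place is not allowed; the host resets the whole zone."""
        self.zones[zone] = [None] * self.zone_size
        self.write_pointer[zone] = 0


drive = HostManagedSMR()
drive.write(0, ["a", "b"])  # lands at blocks 0-1 of zone 0
drive.write(0, ["c"])       # appended at block 2, never overwriting a or b
drive.reset_zone(0)         # updating old data means rewriting the zone
```

Because the sequential-write rule and the zone layout are visible to the host, an open source filesystem can schedule writes around them directly, which is the integration advantage Campello pointed to.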
Facebook future-proofs storage by developing upstream
Chris Mason, a software engineer at Facebook, closed the Vault conference with a presentation on how the social media platform strives to spur innovation through the Linux kernel. With 1.3 billion users, Facebook has a large, fast-growing amount of data it needs to store. It does that by using the btrfs, XFS and Gluster file systems. But the real innovation, according to Mason, stems from an "upstream-first methodology," meaning developers propose changes to the upstream kernel community first, then use that feedback to decide how a feature should be implemented before deploying it internally.
"Facebook desperately needs not just scalability, not just availability, but flexibility,” Mason said. “We need to be able to change around our infrastructure to add new services and accommodate new things. And when you bring something in through open source that's used by all these people and all these communities, you're pretty much forced to make it better," he added.
HGST, Seagate launch helium drives