
HGST Active Archive System available at lower capacity

HGST makes its object-storage archive system available with lower capacity points for organizations that don't need to start with multi-petabyte installations.

Western Digital Corp. this week shipped a new model of its HGST Active Archive System with lower capacity options in hopes of appealing to more IT organizations.

The initial version of the HGST Active Archive System, released last April, had a minimum raw capacity of 4.7 PB per rack, or 3 PB usable after erasure coding. The new SA1000 model starts at 672 TB raw (425 TB usable after erasure coding), although it uses the same 42U rack as the original SA7000.

"We wouldn't have done this if there wasn't customer demand," said Scott Cleland, senior director of product marketing at HGST. "We're going after a larger part of the market. We've been kind of cornered into very high-end, high-enterprise, high-capacity deals. There are plenty of them, but we need to build a business around this, and to do that, you need more of the volume space."

The HGST Active Archive System is based on object storage technology that Western Digital acquired on March 3, 2015 from Amplidata. Western Digital Capital had been an investor in Amplidata prior to the deal, and Western Digital's HGST subsidiary had partnered with Amplidata to jointly develop the Active Archive Platform.

Part of object storage's value is that it can scale into the petabytes, but most projects start at lower capacities. Cleland said the majority of Amplidata's customer wins in life sciences stored less than a petabyte of data. "It's hard to go back in there with 4.7 PB and say, 'Here's your next generation,'" he said.

The SA1000 can hold 4.7 PB raw but is not fully loaded at purchase. Users can buy upgrade kits in increments of 672 TB. They can also scale out by adding racks, up to the maximum supported raw capacity of 28 PB (about 14 PB usable).
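The arithmetic behind those figures is consistent: seven 672 TB upgrade kits come to 4,704 TB, the quoted 4.7 PB full rack, and the usable-to-raw ratio lands near 63% in both the entry and full configurations. A back-of-envelope sketch in Python, with the constants derived from the article's figures rather than from HGST spec sheets:

    # Back-of-envelope capacity math using figures from the article, not
    # HGST spec sheets. Assumes 8 TB drives and treats the SA1000's
    # 425/672 usable-to-raw ratio (~63%) as the erasure coding overhead.
    DRIVE_TB = 8
    INCREMENT_TB = 672                 # one upgrade kit
    RACK_RAW_TB = 7 * INCREMENT_TB     # 4,704 TB, the quoted 4.7 PB rack
    USABLE_RATIO = 425 / 672           # ~0.63

    def usable_tb(raw_tb):
        """Estimate usable capacity after erasure coding overhead."""
        return raw_tb * USABLE_RATIO

    print(f"Drives per upgrade kit: {INCREMENT_TB // DRIVE_TB}")        # 84
    print(f"Full rack raw:          {RACK_RAW_TB} TB (~4.7 PB)")
    print(f"Entry usable:           {usable_tb(INCREMENT_TB):.0f} TB")  # 425
    print(f"Full rack usable:       {usable_tb(RACK_RAW_TB)/1000:.1f} PB")  # ~3.0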

"We've tested to this number, but there's nothing stopping anyone from clustering more sites together," Cleland said. "Theoretically, the software can support unlimited capacity expansion."

Both the SA7000 and the new SA1000 use enclosures capable of holding 98 of HGST's dense, low-power Ultrastar HelioSeal hard disk drives (HDDs). The 7,200 RPM, 12 Gbps SAS helium drives currently store 8 TB, but plans call for even denser 10 TB drives in the future, according to Cleland.

Feature-wise, the SA7000 and SA1000 are identical. Both models now integrate software features such as data-at-rest encryption and 3GEO erasure coding. In the past, SA7000 customers had access to those capabilities at no extra charge, but they had to download the software and install it, Cleland said.

The 3GEO technology enables the system to store the fragmented, erasure-coded data across multiple geographies to improve resilience and protect against disasters. Customers also have the option to store the data across nodes at a single location. Those preferring replication would need to purchase third-party software.
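HGST has not published 3GEO's internal parameters, but the general pattern it describes, spreading erasure-coded fragments evenly across three sites so that losing any one site still leaves enough fragments to rebuild an object, can be sketched as follows. The 12+6 coding width here is purely illustrative:

    import itertools

    # Illustrative 3GEO-style placement: k data + m parity fragments spread
    # across three sites. Any k fragments can rebuild the object. The 12+6
    # widths are hypothetical, not HGST's actual erasure coding parameters.
    K_DATA, M_PARITY = 12, 6
    SITES = ["site-a", "site-b", "site-c"]

    def place_fragments(object_id):
        """Round-robin the k+m fragments across the three sites."""
        layout = {site: [] for site in SITES}
        for i, site in zip(range(K_DATA + M_PARITY), itertools.cycle(SITES)):
            layout[site].append(f"{object_id}.frag{i:02d}")
        return layout

    layout = place_fragments("obj-42")
    print({site: len(frags) for site, frags in layout.items()})

    # Losing any one site leaves 12 fragments -- exactly k -- so the object
    # remains readable, which is the disaster protection 3GEO advertises.
    survivors = sum(len(frags) for site, frags in layout.items()
                    if site != "site-a")
    assert survivors >= K_DATA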

Cleland said best practices call for sufficient bandwidth to overcome the performance hit of erasure coding. The system has six 10-Gigabit Ethernet links to ingest data. "Even with a small performance hit, what you get in return is data durability of greater than 15 nines and data center resiliency," Cleland said.

HGST claims the object-based Active Archive System delivers file throughput of up to 3.5 GB per second, depending on the configuration. The product primarily targets backup and archival use cases in industries such as cloud services, media and entertainment, healthcare and life sciences. The system is not designed for transactional database workloads that require fast writes, Cleland noted.
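As a quick sanity check, six 10-Gigabit Ethernet links give an aggregate line rate of about 7.5 GB/s, so the claimed 3.5 GB/s of file throughput sits well under the network ceiling. A rough estimate that ignores protocol overhead:

    # Rough sanity check of the quoted throughput against network line rate.
    # Ignores Ethernet/TCP framing overhead, so real headroom is somewhat smaller.
    links = 6
    gbps_per_link = 10
    line_rate_gbs = links * gbps_per_link / 8   # bits to bytes: 7.5 GB/s
    claimed_gbs = 3.5

    print(f"Aggregate line rate: {line_rate_gbs:.1f} GB/s")
    print(f"Claimed throughput:  {claimed_gbs} GB/s "
          f"({claimed_gbs / line_rate_gbs:.0%} of line rate)")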

Amplidata sold object storage software that can run on commodity hardware. Cleland said some customers, such as cloud service providers, often do the installation work themselves. But, he said, Western Digital is trying to educate all customers on the advantages of an object storage product that tightly integrates software and hardware.

Cleland said the HGST Active Archive System uses off-the-shelf Intel processors but is designed to use enclosures equipped with helium drives.

"One of the reasons we've got the helium in there is so we can stack the platters differently. You don't have to worry about the heads wobbling around so much, so you can really pack the bits in there," said Erik Ottem, director of product marketing at HGST.

Ottem said the system software runs on redundant controller nodes and tracks where the sharded data is stored. A separate software process continually monitors the data to ensure integrity, he said.

The HGST Active Archive System supports the Amazon S3 API natively. Customers requiring NFS or SMB/CIFS file support must purchase a third-party NAS gateway. Cleland said Avere is currently the only certified gateway partner, but the company is working to qualify others.
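Because the system exposes S3 natively, any standard S3 client should be able to read and write objects by pointing at the array's endpoint. A minimal sketch using boto3; the endpoint URL, credentials and bucket name are placeholders, not values documented by HGST:

    import boto3

    # Minimal S3 client pointed at the archive system's endpoint. The
    # endpoint URL, credentials and bucket name are placeholders.
    s3 = boto3.client(
        "s3",
        endpoint_url="https://archive.example.internal",
        aws_access_key_id="ACCESS_KEY",
        aws_secret_access_key="SECRET_KEY",
    )

    # Write and read back an object using the standard S3 API.
    s3.put_object(Bucket="backups", Key="2016/db-dump.tar.gz",
                  Body=open("db-dump.tar.gz", "rb"))
    obj = s3.get_object(Bucket="backups", Key="2016/db-dump.tar.gz")
    print(obj["ContentLength"], "bytes retrieved")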

List pricing for the SA7000 system, at 4.7 PB raw capacity, remains unchanged at $850,000, Cleland said. He declined to disclose pricing for the SA1000 with 672 TB raw capacity, other than to confirm it would cost more than one-seventh of the SA7000's price, or about $121,429.

"A fully populated system is more cost-effective than buying it modular," Cleland said. "It's like buying a bag of candy or a single candy bar. You're going to pay more piecemeal than you are for the total."

Expanding the reach of object storage platforms

Main competitors for the HGST Active Archive System include Scality, OpenStack Swift, IBM's Cleversafe, EMC, NetApp and HPE, according to Cleland.

"While data continues to grow and there are a lot more four petabyte environments than there used to be, that limits you to only these massive type of environments," said Scott Sinclair, senior analyst at Enterprise Strategy Group in Milford, Mass. "It's helpful to have a smaller starting point not only to just increase the number of folks you can talk to, but it also makes deployments a little easier for evaluation."

He said the shift from custom APIs to S3 APIs has made object storage easier to deploy and integrate into IT environments. Now that more applications support S3, IT organizations are considering object stores for different types of data, he said.

"It's made object a more viable platform even at lower capacity points," Sinclair said.

Sinclair said other object storage vendors have not tended to set entry capacity points as high as Western Digital previously did. But he also noted that object storage is "meant for big environments" -- not for organizations with 100 TB or less of data.

"It makes sense to have your data on a platform that can scale with you versus something that you know in three years you're going to have to migrate off of," Sinclair said.

Next Steps

When object storage technology makes the most sense

Comparing object and file storage in the cloud

Data archive options that can save organizations the most money
