According to Jon Toigo, a helium drive doesn't advance the art of hard disk design; it just makes it possible to stuff more old technology into a new package.
Like many data storage industry watchers, I had to repress a bored yawn late last year when Western Digital's Hitachi Global Storage Technologies (HGST) Division announced a helium-filled disk drive with a 6 TB capacity. For one thing, Seagate had had comparable technology for many years but chose not to build a product around it, preferring to pursue other technologies to drive capacity growth. But that wasn't even the primary reason for my unenthusiastic response to the news.
Simply put, filling a drive with helium does nothing to advance the underlying recording technology. Rather, engineers exploit the gas's lower density -- and hence lower friction -- relative to air to build a drive that reverses the design trend drive makers have been pursuing for some time.
For the past decade or two, the industry has tried to reduce the number of platters, motors, actuator arms and read/write heads in a given drive unit. Fewer components mean less to break and usually lower manufacturing costs, but there were other reasons for the trend. The mantra among disk drive design engineers was that capacity improvements should come from genuinely new technologies, such as giant magnetoresistive (GMR) heads, perpendicular magnetic recording, shingled media or, in the near future, bit-patterned media, possibly augmented by heat-assisted magnetic recording (HAMR) or acoustically assisted magnetic recording.
So simply adding a couple of platters inside the drive -- each bringing its own actuator arms and read/write heads -- which is what HGST has done with its helium drive, isn't really a step toward meaningful drive capacity improvement. Rather, it's a step sideways.
What's next on this path: nine platters instead of seven in a hermetically sealed drive case, or 14 platters in a double-height 3.5-inch drive? How long before the vaunted power savings of a helium drive are offset by all the extra mechanics that must be packed into the unit?
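The arithmetic behind that objection is worth making explicit: if per-platter capacity stays flat, total capacity grows only linearly with platter count. A back-of-the-envelope sketch, using the 6 TB, seven-platter figures from the announcement (the per-platter number is derived, not an HGST spec):

```python
def capacity_tb(platters, tb_per_platter):
    """Total drive capacity when growth comes only from stacking
    more platters, with no gain in per-platter (areal) density."""
    return platters * tb_per_platter

# Derived from the announced 6 TB / 7-platter helium drive
per_platter = 6.0 / 7  # ~0.86 TB per platter

for n in (7, 9, 14):
    print(f"{n} platters -> {capacity_tb(n, per_platter):.1f} TB")
# 7 platters -> 6.0 TB
# 9 platters -> 7.7 TB
# 14 platters -> 12.0 TB
```

Even the hypothetical double-height 14-platter case only doubles capacity -- a far cry from the order-of-magnitude gains that areal-density advances like bit-patterned media or HAMR promise.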
Frankly, if we keep improving storage capacity this way, pretty soon we'll see considerably less innovation in the technology of disk-based data storage itself. To some of us, that's a letdown -- an acknowledgement that the money guys have lost faith in real, vital change driven by technological improvement and refinements in manufacturing processes.
Two years ago, Toshiba and IBM demonstrated a 40 TB capacity 2.5-inch hard disk leveraging a new sputter-coating method that used lithographic techniques to create "mesas and valleys" on disk platters in a predictable way. They showed how this bit-patterned media could store significantly more bits on the same amount of turf without meaningfully increasing signal loss. Within the year, Seagate demonstrated HAMR and achieved something like 60 TB on a 3.5-inch spindle. These represented the kind of sea-change advances that seem to occur "just in time" -- when the viability of disk-based storage seems threatened by superparamagnetism, for example.
That we haven't moved toward implementing these technologies yet seems to reflect an unwillingness within the industry to spend the money required to retool its manufacturing lines. Analysts say the storage industry is off its high-water mark of $31 billion of a few years ago, hovering today somewhere around $29 billion -- a dip of roughly 6%. Given that "recession," are the industry bean counters questioning the return on investment of technical innovation? Have they lost faith in the time-honored and well-documented trend that technology-driven capacity improvements (though not always speed improvements) in disk drives usually reward innovators many times over? If so, why are they losing their religion?
Could it be the advent of flash memory storage devices? I recently heard an otherwise intelligent person remark that flash disk is "skimming the cream" off the hard disk market. Such statements are a product of the overwhelming hype around flash over the last couple of years, and they aren't borne out by any respectable research I've seen. In fact, take a look at the valuation of (and shareholder litigation directed at) companies like Violin Memory and Fusion-io. We seem to be turning a corner and moving away from the shiny new thing as we speak. So if the disk industry isn't pursuing real innovation in its products for fear of flash, it needs to stop reading the pay-per-view prognostications of the industry analysts.
That's my two cents. When it comes to disk capacity, I don't want helium, I want HAMR.
About the author:
Jon William Toigo is a 30-year IT veteran, CEO and managing principal of Toigo Partners International, and chairman of the Data Management Institute.