
Hitachi storage technologies underpin AI and cloud plans

Hitachi Vantara, the vendor previously known as Hitachi Data Systems, is making several pivots. But has it strayed too far from its core storage technology?

As an indication of its future direction, Hitachi Vantara devoted much of its recent user conference to explaining where its storage fits with AI and the internet of things.

While company executives painted the big picture at Hitachi Next in September, details on the Hitachi storage strategy received little attention. The specifics will start coming out within the next several months, said Nathan Moffitt, Hitachi senior director of AI operations software and infrastructure systems.

"Over the long term, there will be a couple pivots we're going to make," Moffitt said. "One is to have a storage operating system with the agility to cover a broader range of use cases at the core, the edge and the cloud. Another pivot is to coalesce and collate data sources to help you make better decisions."

Hitachi Vantara is a subsidiary of Tokyo-based Hitachi Ltd. The company was launched in 2017, combining the Hitachi Data Systems (HDS) storage unit, Hitachi Insight Group and Pentaho IoT data analytics.

But some wonder if the vendor already regrets its decision to move away from the HDS brand. Hitachi's cloud message is hazy, and it appears to be playing catch-up to competitors with integrated storage for AI, nonvolatile memory express (NVMe) flash and multi-cloud data services.

"I think Hitachi made such a wide swing away from the storage message that they realized they may have gone too far. Now they're trying to get the pendulum to swing back the other way," said Mark Peters, an analyst at Enterprise Strategy Group. "From everything they've said, it sounds like they're trying to go back to being Hitachi Data Systems, which had more of a focus on the storage."

Hitachi AI tools

The name change signaled Hitachi's intent to sell storage and other technologies to a limited number of customers, chiefly through high-dollar contracts with longer sales cycles.

The most recent Gartner Magic Quadrant report lists Hitachi Vantara's Virtual Storage Platform (VSP) as a leading all-flash array. But by design, Hitachi is selling less of its flagship storage directly to mainstream data centers. The vendor said it has mostly ceded hardware refresh cycles to competitors, preferring to target high-end industrial sectors.

The strategy enables Hitachi storage to be combined with other business products in the Hitachi portfolio. Hitachi Ltd. posted $19.9 billion in consolidated revenue in the quarter that ended in June.

The move to develop storage for analyzing machine-generated data fits the overall Hitachi portfolio, which includes companies spanning the automotive, construction, defense, electronics, energy and medical industries.

"Hitachi storage is [aiming] at a higher level, which allows other parts of the Hitachi portfolio to be brought into a solution. That helps them create sales opportunities at the line-of-business level, rather than just the IT level," said Eric Burgener, an analyst at IDC.

An example is Hitachi Transport System (HTS), which sells trains, tracks and other physical infrastructure. HTS reportedly is bidding on a multibillion-dollar contract with Finland's national railway system that would include storage and AI automation technologies.

Hitachi's storage innovations now center on data mobility across local and cloud tiers. It's an approach other vendors have taken, most notably NetApp with its Data Fabric technology.
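As a rough illustration of what policy-driven mobility between a local tier and a cloud tier can look like, the sketch below demotes files that have gone cold to an S3-compatible object store. This is a generic pattern, not Hitachi's or NetApp's actual API; the directory, bucket name and 30-day threshold are assumptions for the example.

```python
# Illustrative only: generic age-based tiering, not Hitachi's data mobility API.
# Assumes an S3-compatible object store reachable through boto3.
import os
import time

import boto3

COLD_AGE_SECONDS = 30 * 24 * 3600  # demote files untouched for 30 days

s3 = boto3.client("s3")  # credentials come from the environment

def demote_cold_files(local_dir: str, bucket: str) -> None:
    """Copy cold files to the cloud tier, then free the local tier."""
    now = time.time()
    for name in os.listdir(local_dir):
        path = os.path.join(local_dir, name)
        if not os.path.isfile(path):
            continue
        if now - os.path.getatime(path) > COLD_AGE_SECONDS:
            s3.upload_file(path, bucket, name)  # object key = file name
            os.remove(path)

demote_cold_files("/mnt/primary/archive", "cold-tier-bucket")
```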

The "intelligent data pipeline" encompasses primary Hitachi VSP storage arrays, Hitachi Content Platform object storage, Hitachi Enterprise Cloud and multiple hybrid clouds, said Iri Trashanski, a senior vice president of infrastructure and edge products at Hitachi.

"We've seen data centers move from consolidated architecture to one that is more distributed. We added capabilities at the edge. We call it an intelligent data pipeline. We help you ingest, cleanse, enrich and monetize your data," Trashanski said.

An update in June to the flagship Hitachi Storage Virtualization Operating System (SVOS) added support for software-defined storage on commodity servers. SVOS supported other vendors' hardware before, but only via the VSP controller.

Previously, Hitachi customized the code base for each model of VSP hardware.

"It's not a full virtual machine, just an encapsulation of the code that can be dropped onto any system," Moffitt said.

NVMe for hyper-converged expected in 2019

Hitachi in September launched an all-flash Unified Compute Platform hyper-converged infrastructure with integrated NVMe flash. NVMe-based VSP arrays are also on Hitachi's storage roadmap.

Even though most storage vendors already sell NVMe products, Hitachi said it will wait until 2019 to introduce a generation of NVMe flash-based VSP arrays. In the meantime, Hitachi will help customers identify the best use cases for NVMe flash, said Mark Adams, a Hitachi product marketing and business management director.

"NVMe is not a plug-and-play technology. There are a lot of things that need to be considered. We want to help customers make sure they deploy NVMe for the right use cases," Adams said.

Hitachi's custom flash hardware helps its SAS-based VSP arrays deliver performance close to, or on par with, NVMe flash, Adams added. "We don't have any customers complaining that our SAS-based VSP arrays are too slow -- none," he said.

Join the conversation


What do you think of Hitachi Vantara's decision to adapt its storage for AI and IoT workloads?
The recent article that came out today citing the three Vs of big data (volume, velocity, variety) should be enlightening to everyone. Hitachi recognizes that traditional methods are inadequate for today's three Vs of data. Like others, it is trying to solve the velocity problem while failing to realize the volumes are only going to get bigger. And everyone is failing to recognize that data is distributed sparsely; we keep trying to figure out how to take sparse data and make it dense, further exacerbating the velocity problem. At least Hitachi is starting to realize current architectures are not going to cut it, even with all the band-aid solutions being created. A new architecture where the compute is moved while the data remains stationary is required. Hitachi has great skills; I would keep an eye on them. AI and IoT will drive tons more data as we seek information from it. So right on.
The three Vs have been floated for years now. I'm not sure any vendor is close to figuring them out. As you note, data sets are getting larger and larger, but capacity is relatively finite. It's the compute that sits idle.
Exactly. There is one company that has figured it out. It is in stealth now but plans to unveil by early next year. The key was figuring out how to move a stateful computational element to the data in parallel, in a cache-less manner, which they have done.
Data is now the large object, like a building or a mountain. In real life we don't move the building to get its elevator repaired; we send the small thing, the repairman, to the building. Computer architectures have to change. Hitachi knows it, so I support their storage moves and focus. They will be great partners to this new architecture in the future. Compute is the small thing; data is the large element! You are spot on: the problem is the same three Vs, only getting worse. What's been missing until the near future is the solution!
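To make the compute-to-data idea in this thread concrete, here is a toy sketch in which a small function travels to the nodes that hold the shards and only tiny partial results move back; it is purely illustrative, not any vendor's implementation.

```python
# Toy illustration: ship the computation to the data, not the data to
# the computation. Each shard stays inside its node.
from typing import Callable

class DataNode:
    """Pretend storage node; its shard never leaves this object."""
    def __init__(self, shard: list[float]) -> None:
        self._shard = shard

    def run(self, fn: Callable[[list[float]], float]) -> float:
        return fn(self._shard)  # the function travels; the data stays put

nodes = [DataNode([1.0, 2.0]), DataNode([3.0, 4.0]), DataNode([5.0])]

# The "repairman" we send to each building: a local partial sum.
partials = [node.run(sum) for node in nodes]
print(sum(partials) / 5)  # global mean assembled from tiny results
```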
Could you send me a private message (email) about this stealth company? Off the record and under embargo, of course. But I'd like to hear more about its technology.

There are some interesting hardware deployments emerging, such as FPGA-based composable hardware and NVMe flash over TCP/IP. Although still limited in scope, storage class memory modules could be valuable once the finite capacity limit is overcome (if ever). Too soon to tell which, if any, of these techs rise to widespread use. My view is software-defined storage is approaching its natural limit, and hardware-accelerated storage will emerge downstream to fill the gap for AI, multicloud and high-density edge.
I don't have your email. I am easy to find on LinkedIn.
gkranz@techtarget.com
