Hitachi Data Systems' Yoshida talks Sun/Oracle, USP refresh and storage virtualization

Hitachi Data Systems' Hu Yoshida talks about what's next for HDS and its storage virtualization products, including the next version of its USP enterprise array.

Hitachi Data Systems (HDS) said last week that its nine-year OEM agreement with Sun Microsystems Inc., newly acquired by Oracle Corp., will expire March 31. We caught up with Hu Yoshida, chief technical officer at Hitachi Data Systems, to get his take on the end of the road with Sun, what's next for HDS, and the challenges facing data storage managers as the global economy looks headed for recovery in 2010.

HDS recently announced the end of the OEM deal with Sun for the USP storage array now that Oracle owns Sun. Was this expected? What's next for HDS — finding another partner or making up those sales by going direct?

Yoshida: Oracle and Hitachi are essentially in different businesses. Oracle is more focused on databases and information, and we're more about the infrastructure. It's a separate business with different priorities -- the relationship was inherited when Oracle bought Sun, and the contract will expire at the end of this month. But there are plenty of areas [where] we will continue to collaborate in terms of application integration and optimization. We're still in talks with Oracle. Oracle has always been a good partner for us. In fact, Hitachi is the main distributor of Oracle in Japan. We have a long relationship with Oracle and will continue that. During this period, for our equipment sold through Sun, we will continue to work with Oracle to support it.

HDS doesn't consider Exadata competitive? HDS and IBM are companies Oracle CEO Larry Ellison mentioned as competitors after buying Sun.

Yoshida: Exadata is more specifically focused around being a database engine. That's not a general-purpose storage product, so it's something we could also collaborate on in the future. It's more specifically targeted toward their expertise, which is databases.

Isn't that part of your expertise in SAN as well?

Yoshida: There's plenty of room in the storage business for different types of solutions. We're providing more general-purpose types of solutions that could hit more of the general market, so we don't see direct competition between us. There are certainly overlaps, but not direct competition.

So what can we expect HDS to focus on with the next big USP refresh?

Yoshida: Well, I'll give you a hint. We've announced our dynamic provisioning capability. When we implemented dynamic provisioning, we created the concept of a page — you get a pool of RAID groups and divide them up into pages, and do thin provisioning based on the page dynamically because it's all pre-formatted. This page concept is game-changing for us. Now we will manage storage based upon pages rather than volumes and files. That, I think, will be a focus on future products.

Sun, and by extension Oracle, are focused on commodity hardware. EMC has also started putting Intel chips into Symmetrix V-Max, while any number of startups build systems based on commodity server hardware. Is all this commoditization a threat to the HDS business model?
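As a rough illustration of the page concept Yoshida describes, here is a minimal, hypothetical sketch, not HDS code: the class names, the 42 MB page size and the API are illustrative assumptions. It shows a pool of pre-formatted pages carved from RAID groups, with a page bound to a virtual volume only when that region is first written:

```python
PAGE_SIZE = 42 * 1024 * 1024  # illustrative page size in bytes (an assumption)

class ThinPool:
    """A pool of pre-formatted pages carved from RAID groups."""
    def __init__(self, num_pages):
        self.free_pages = list(range(num_pages))

    def allocate(self):
        if not self.free_pages:
            raise RuntimeError("pool exhausted")
        return self.free_pages.pop()

class ThinVolume:
    """A virtual volume that binds pool pages lazily, on first write."""
    def __init__(self, pool):
        self.pool = pool
        self.page_map = {}  # virtual page index -> pool page index

    def write(self, offset, length):
        # Back only the virtual pages this write actually touches.
        first = offset // PAGE_SIZE
        last = (offset + length - 1) // PAGE_SIZE
        for vpage in range(first, last + 1):
            if vpage not in self.page_map:
                self.page_map[vpage] = self.pool.allocate()

    def consumed_bytes(self):
        # Physical capacity consumed, regardless of the volume's virtual size.
        return len(self.page_map) * PAGE_SIZE
```

A large virtual volume written in only one spot consumes a single physical page, and because the unit of management is the page rather than the whole volume or file, the same mapping supports tiering, migration and reclamation at page granularity.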

Yoshida: The commodity approach, I think, doesn't give you the ability to meet special challenges — it's all off the shelf, so the functions you provide are going to be off the shelf. You don't have functions that can address some real or new problems. If you look at the server world, [physical] servers are scaling up — they're now becoming like mainframes. On top of that, to fill up those types of servers, you have virtualization, so instead of one platform you have maybe 10, 20, very soon even 100 [applications] running in a server. On top of that, connectivity is scaling up with 8 Gig Fibre Channel and FCoE [Fibre Channel over Ethernet]. Storage systems are going to have to scale up to support that. Yet what I see everybody in storage doing is scaling out with two controller nodes and commodity chips. Commoditization can lead you off into the weeds sometimes. Yes, it gives you low cost, but only because you're looking at the capital expenditure, not at the operational costs.

Oracle/Sun and other large storage vendors have been trying to build end-to-end IT infrastructure stacks recently — HDS is the only large storage company at this point without that kind of "stack" offering. What's your take on that?

Yoshida: Our approach has always been to be open and collaborative, leveraging our core competence with the competence of other vendors. And so bundling together with other vendors is not something that we have looked for. We're watching this space and its acceptance in the market. It seems to be a trend, but we're just looking at the market acceptance. This is typical of Hitachi; Hitachi is not the first to market with a lot of this. Storage virtualization first came out with a lot of network vendors. We chose not to do that — we introduced our product rather late, but when we did, we did it with controller-based virtualization, which I think was the right approach. We weren't first with thin provisioning either, but the version we introduced a couple of years ago, with the paging concept, is I think the right solution versus the chunklet approach others have used. We're going to look at the market to understand what customers really need.

What are the biggest challenges facing storage managers in 2010?

Yoshida: One of the technical challenges is simply that there is a lot of technology. There's virtualization, which I believe we're over the hump on; most people are implementing it. There's dynamic provisioning, which is moving forward as well. There are a lot of products coming out that people are reviewing. I think people are going to move toward 8 Gigabit Fibre Channel this year, and then they're going to be looking at FCoE. There's a lot of technology on the plate this year. One of the challenges is: how do they get around to doing it? Even though they can see the benefits, they can't get to it, mainly because they've cut back their staff and people are so busy doing grunt work that they can't make the change, get the education, understand the new technology and implement it.

What kind of increase in virtualization have you seen?

Yoshida: I see it just in deployments. We've had virtualization out there for some time, and about half our customers were just using it as storage, not really using the virtualization. Maybe they'd use it for the migration on the front end, but then not really use it to attach external storage. In the last year, as I go around to customers, from my own personal observation every customer I visit today is doing virtualization of external storage. Thin provisioning, which we announced two years ago, is also ramping up very dramatically. As a result, we had a record quarter at the end of last year in terms of our finances. We increased 7% year on year at Hitachi Data Systems. We're also seeing high double-digit growth in our midrange, triple-digit growth in the HNAS and HCAP products that front-end our virtualization platform, and software and services are also growing in the double digits.

Our enterprise storage has declined, but that is a positive thing, because our approach is to have the enterprise array as the virtualization engine with a majority of the capacity being tier 2 and tier 3 storage behind it. So the net effect is that even though our enterprise revenues may have declined, our total revenue has increased and we had a record quarter. It's more about the mix than trying to say we're an enterprise company. We're more than that now. We're more of a solutions company and a storage solutions company.

In the past, when we've talked about external storage behind the USP controller, it's either been Hitachi's own storage or Hewlett-Packard's EVA. Are you seeing more heterogeneous arrays being virtualized behind the controller now?

Yoshida: Yes. A lot of times we come in and can enhance other people's arrays. A real draw has been the dynamic provisioning. We can externalize [an EMC Corp.] Clariion or LSI [array] with their fat volumes, and when we virtualize them, we can then move them into a thin-provisioned pool. As we move them into the pool, we can look at the pages we create; if they're zero pages, we can reclaim the space. So, oftentimes, we can reclaim some 40% to 50% of the space just by doing that, and that has become very compelling in the last year.

Do you have any numbers or percentages about customers virtualizing external storage?

Yoshida: Fifty percent of our controllers are virtualization enabled, and of those enabled, about 25% virtualize third-party storage.
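The zero-page reclamation Yoshida describes, where a fat volume is virtualized into a thin-provisioned pool and its all-zero pages are released, can be sketched in miniature. This is a hypothetical illustration with toy page sizes and contents; the function name and data are assumptions, not HDS internals:

```python
def reclaim_zero_pages(pages):
    """Keep only pages containing real data; count all-zero pages freed.

    Returns (kept, reclaimed): a map of page index -> page data for pages
    worth keeping, and the number of pages returned to the pool.
    """
    kept, reclaimed = {}, 0
    for index, page in enumerate(pages):
        if any(byte != 0 for byte in page):
            kept[index] = page
        else:
            reclaimed += 1
    return kept, reclaimed

# A "fat" volume where half the allocated pages were never written:
fat_volume = [bytes([1, 2, 3, 4]), bytes(4), bytes(4), bytes([5, 6, 7, 8])]
kept, reclaimed = reclaim_zero_pages(fat_volume)
print(reclaimed / len(fat_volume))  # prints 0.5: half the space reclaimed
```

A 50% result in this toy example matches the upper end of the 40% to 50% savings Yoshida cites for migrating fat Clariion or LSI volumes into a thin pool.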
