Hubert Yoshida, vice president and chief technology officer (CTO) of Hitachi Data Systems Inc. (HDS), talked with...
SearchStorage.com about Hewlett-Packard Co.'s (HP) acquisition of AppIQ Inc. and the future of storage management, virtualization, the TagmaStore, SATA and hardware vs. software.
HDS rebrands AppIQ's storage management software, an implementation of the Storage Networking Industry Association's Storage Management Initiative Specification (SMI-S) that the company has "enhanced," as Hitachi System Storage Manager. How do you think HP's acquisition of AppIQ might affect the SMI-S standard and your relationship with the company?
Hubert Yoshida: I don't think it will have any effect. As our software VP, Jack Domme, indicated, we're very happy HP acquired them. HP is a partner of ours and we see no impact on our future use of that software.
AppIQ is a key supporter of the SMI-S standard. Even though it has access to vendor-specific APIs [application programming interfaces], it always wraps them in SMI-S wrappers so users have a standard interface. We like what they do, and we think they will continue to do that with HP.
What's the future of storage management? Will there ever be a truly heterogeneous storage resource management (SRM) tool? Or is that a pipe dream?
Yoshida: I think we'll get closer to that as standards develop. The storage environment is always being made more complex as we add things to it. First SANs, now we're adding virtualization to that, and iSCSI. It's a very dynamic, changing environment. Keeping up with it is a challenge, but I think as we start to build up more of these standards, we can get to that single environment. That's always our goal.
What kind of timeframe do you envision?
Yoshida: Hard to say. Hopefully in the next five years.
Will the future be all about software or differentiation in hardware?
Yoshida: It's hard to say. It's both, I think. There are still innovations going on in hardware. If we have the right software, from the vendor's perspective, we'll be able to mask a lot of that. But there's not a way to mask fundamental changes -- like virtualization and our strategy of changing the controller rather than the appliance -- those things can't be masked. But usability can still be helped by software.
HDS launched the TagmaStore Universal Storage Platform (USP) storage array a year ago. How many customers have deployed it?
Explain the architecture of the TagmaStore.
Yoshida: We have separated the control units from the data in the back. It's a new approach to storage with the NSC [network storage controller]: you don't need a SAN to be able to move data. We've separated the disks and the RAID controllers from the things that have value, the things that do migration and replication. It gives customers better choices as to how we price and configure storage.
We did this because we ran up against the innovator's dilemma, which is what happens when somebody else introduces another technology in your area, without all the functionality, but it's cheaper. Suddenly the innovators have two choices: one, become a commodity themselves, with a cost structure that means they'll go out of business; or two, ignore the competition and go out of business.
For us, there was a third choice, which was to change the playing field. The introduction of low-cost SATA drives was disruptive to high-end arrays. What we did was separate that commodity piece from the high-function piece, which was USP. We changed the playing field by doing that separation -- it allows us to be competitive on the commodity side, but on the high-value function side, we can also compete. We can attach the commodity behind the high-value function piece and provision all the others. We can provide the high-function piece and commoditize anything behind it.
How are the TagmaStore's virtualization capabilities being used today, and how might that change going forward?
Yoshida: Currently, about a third to almost 40% of our boxes go out with the volume manager virtualization software. I think most of it is used for consolidation and for migration between different tiers of storage and between different vendors' boxes. In the future, I think we're going to see much more tiering between systems. We announced additional software two or three months ago to let us do a better job of managing and applying policies to that -- the Tiered Storage Manager, which will automatically migrate data between tiers. But even before we get to tiered storage, there's a lot of value in just doing migration and consolidation, so people can see what data they have. We think that with external storage in the future, the bulk could be modular, which is a lot less costly. The majority of storage in the enterprise today is in Tier-1 devices, and we think you could turn that ratio around.
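The policy-driven migration Yoshida describes can be sketched roughly as follows. This is an illustrative toy, not HDS's Tiered Storage Manager: the age-based policy, tier numbers and volume fields are all assumptions chosen to show the idea of automatically demoting stale data to a cheaper tier.

```python
from dataclasses import dataclass

@dataclass
class Volume:
    name: str
    tier: int                # 1 = high-end Fibre Channel, 3 = SATA (assumed tiers)
    days_since_access: int

def apply_policy(volumes, age_threshold_days=90):
    """Demote Tier-1 volumes untouched past the threshold to Tier-3.

    Returns the names of the volumes that were moved.
    """
    moved = []
    for v in volumes:
        if v.tier == 1 and v.days_since_access > age_threshold_days:
            v.tier = 3
            moved.append(v.name)
    return moved

vols = [Volume("erp_db", 1, 2), Volume("old_logs", 1, 400)]
print(apply_policy(vols))    # only the stale volume is demoted: ['old_logs']
```

The point of the sketch is the division of labor Yoshida mentions: the user defines the policy (here, an access-age threshold), and the management layer does the mechanical work of finding and moving qualifying data.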
How many customers have bought the smaller, less expensive version of the box -- the NSC-55?
Yoshida: That's the modular 19-inch rack with 200-volt single-phase power -- you don't need a data center for it. I don't have that number -- we're entering our quiet period because of quarterly reports at the end of the month. There is significant interest in it. Like anything new, there's going to be a ramp-up period, but I think we're on plan with that product.
How is the development of the NAS blade going for the TagmaStore? What's the timeline for that?
Yoshida: It's available now. We are shipping product to customers. We will have a NAS blade on the midrange NSC-55 product at the beginning of next year.
EMC [Corp.] just announced it will stop selling its Windows NAS boxes -- the NetWin 110 and 200, and will resell third-party boxes instead. What do you think about that?
Yoshida: That's interesting. It may be a matter of margins for them. We see great demand in the Windows NAS market -- Windows is very usable and people are used to it. I think the demand is there, and they may not have the margins. But I can't speculate for EMC.
What does the storage utility model mean to you? Is it an infrastructure description or a financial one? What do you think of the idea of outsourced storage?
Yoshida: Primarily, the major challenge is that someone has to carry the hardware depreciation cost. That's the financial problem of utility storage. One of the things that killed the early storage service providers was that every user wanted their own box -- they wanted to be guaranteed no one else could see their data or impact their performance. Five years ago, the capacity wasn't there. In our systems today, the capacity and the technology to provide storage on demand are there. We have added logical partitioning, so in one frame you can provide multiple storage domains, and users sharing the same physical access point and the same array still can't see each other's data, because each partition has a hardware-based address space. With that logical partitioning, there's no way one cache partition can talk to another.
So there's safe multitenancy and equality of service. Technology-wise, we have those capabilities, and so we can move storage to the right price point that an application requires. But the biggest burden is financially how do you bear the cost of equipment till your revenue stream catches up? There are people who are successful in that business, but they also add to it a lot of services, which provide a revenue stream.
We have worked with some vendors but it's not a core part of our business. We do support other people doing that business, and Hitachi has a very strong services arm.
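The isolation property behind the logical partitioning Yoshida describes can be sketched conceptually. This is not HDS's implementation -- in the USP the check is done in hardware against each partition's address space -- but a minimal model showing why disjoint per-tenant address ranges mean one tenant can never reach another's data:

```python
# Conceptual sketch, assuming a simple flat address space carved into
# disjoint per-tenant ranges; class and method names are illustrative.

class Partition:
    def __init__(self, name: str, base: int, size: int):
        self.name, self.base, self.size = name, base, size

    def contains(self, addr: int) -> bool:
        return self.base <= addr < self.base + self.size

class PartitionedArray:
    def __init__(self):
        self.partitions: list[Partition] = []
        self.next_base = 0

    def create_partition(self, name: str, size: int) -> Partition:
        p = Partition(name, self.next_base, size)
        self.next_base += size           # ranges are allocated disjointly
        self.partitions.append(p)
        return p

    def read(self, p: Partition, addr: int) -> None:
        # Stand-in for the hardware address-space check: an access
        # outside the owning partition's range is simply rejected.
        if not p.contains(addr):
            raise PermissionError(f"{p.name} cannot access address {addr}")

array = PartitionedArray()
tenant_a = array.create_partition("tenant_a", 1000)
tenant_b = array.create_partition("tenant_b", 1000)
array.read(tenant_a, 500)       # within tenant_a's range: allowed
try:
    array.read(tenant_a, 1500)  # inside tenant_b's range: rejected
except PermissionError as e:
    print(e)
```

Because the ranges never overlap and every access is validated against the owner's range, isolation holds even though both tenants share the same physical frame and access point -- which is the multitenancy guarantee the interview attributes to hardware-based address spaces.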
Users are keen to exploit existing investments before committing to more spending. What is Hitachi's product strategy in relation to this?
Yoshida: We've talked to so many of these enterprises, and they say storage growth has become irrational. They're spending less on hardware than on the management of that hardware. It takes $3 now to manage $1 of hardware. And the demand for the growth of storage is compounding -- it's not a linear growth. Things are going crazy for them.
There are four main problems as we've seen them. One is, they're buying too much. We have customers who just buy every quarter because it takes them too long to provision storage. Second, they're paying too much, because most of what they're buying is Tier-1 storage. Third, they're not utilizing it in the best way for their data centers. There are several reasons they can't utilize it correctly and they end up leaving data there and using a new box for a new application.
The fourth problem is that they're running out of data center -- not floor space, but power -- especially if storage is put into a room full of blade servers that draw a lot of power. The 200-volt single-phase power, like we have in the NSC-55, means a lot to someone who's concerned about power in the data center.
With the ability to tier and move dynamically between tiers, better management between tiers, we can answer all four of their pain points. Also, our management tools are simplified and they allow users to start to automate storage management. If you can define the policies, we can implement and start to automate management and movement of data.
Speaking of tiered storage, SATA has been pretty well established as a lower tier technology. What's Hitachi's take on this?
Yoshida: We'd say SATA belongs in Tier-3. We'll continue to see more SATA drives. It does have its exposures -- its mean time between failures [MTBF] is a lot lower than Fibre Channel drives', so you have to add extra protection. In our systems, we use RAID-6, so you have two parity drives to protect against two points of failure. The other thing is that when you have failures, you have to rebuild RAID stripes. By having two parity drives, you can do that rebuild 60% faster than if you only had one. That's a big deal because SATA drives have very large capacities -- it can take several hours to repair a SATA drive that's, say, 400 GB. Having two parity drives allows you to rebuild much faster. Those are things you have to add in with SATA or you're going to suffer more outages. SATA is a good lower cost technology. It has its challenges, but if you configure for them, then it's fine.
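The trade-off Yoshida describes can be put in rough numbers. The 400 GB drive size and the "60% faster" rebuild figure come from the interview; the group width and the baseline rebuild rate are illustrative assumptions, not HDS figures, so treat the results as a back-of-the-envelope sketch only.

```python
# Back-of-the-envelope comparison of single-parity (RAID-5 style) vs
# dual-parity (RAID-6 style) groups of large SATA drives.

DRIVE_GB = 400                  # drive size cited in the interview
GROUP_DRIVES = 8                # assumed RAID group width
RAID5_REBUILD_MBPS = 20.0       # assumed baseline rebuild throughput

def usable_capacity_gb(drives: int, parity_drives: int) -> int:
    """Capacity left after reserving drives' worth of space for parity."""
    return (drives - parity_drives) * DRIVE_GB

def rebuild_hours(rate_mbps: float) -> float:
    """Hours to rebuild one failed drive at the given throughput."""
    return (DRIVE_GB * 1024) / rate_mbps / 3600

raid5_hours = rebuild_hours(RAID5_REBUILD_MBPS)
# "60% faster" from the interview, modeled as 1.6x the rebuild rate.
raid6_hours = rebuild_hours(RAID5_REBUILD_MBPS * 1.6)

print(f"Single parity: {usable_capacity_gb(GROUP_DRIVES, 1)} GB usable, "
      f"rebuild ~{raid5_hours:.1f} h, survives 1 drive failure")
print(f"Dual parity:   {usable_capacity_gb(GROUP_DRIVES, 2)} GB usable, "
      f"rebuild ~{raid6_hours:.1f} h, survives 2 drive failures")
```

The sketch makes the argument concrete: dual parity costs one extra drive of capacity per group, but it both tolerates a second failure and shortens the multi-hour window during which a large SATA group is exposed to exactly that second failure.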
How do you win deals against the competition?
Yoshida: Several things: our storage alone is the most scalable storage available, in both the NSC and the larger USP, so we can do a great deal of consolidation. We can connect tens of thousands of hosts to one array and connect a massive amount of storage behind it. I think we also win with the modularity of external storage attached, and with the ability to virtualize other people's storage, so you don't have to have all your storage in one frame and you get consolidation with common management. In things like tiered storage, we can do that dynamically because we have a very large cache. Other appliances are lacking in connectivity and cache, and don't have the ability to do the tiering that we have. Today people are still trying to consolidate because it does reduce cost. We do consolidation without requiring you to dump everything into one big massive frame.
In what situations do you lose against the competition?
Yoshida: It's hard to displace something that's locked into functions -- like, for instance, if a user is doing remote copy with the [EMC] SRDF product. All the procedures that surround that are the most difficult thing to break through. With virtualization, we can offer an easier path to migration, but if a customer is locked into procedures like that, it's often difficult.
Recently, Dell [Inc.] overtook HDS to become the fourth largest supplier of storage worldwide according to IDC. Dell grew 27%, while Hitachi's sales sank 1.1%. Why do you think this was? What is Hitachi's strategy to climb out of that hole?
Yoshida: I think there are different product cycles. There's an acceleration period, and these things go up and down. I think you'll see a different picture very shortly. We're just doing the same things we've always done, sticking to our core competency, which is building very functional, reliable, highly scalable arrays. You'll see us moving ahead.