This month's cover story examines Hitachi Data Systems' (HDS) attempt to set a new standard for high-end storage.
The state of the art in big, bad arrays, according to HDS, is not only historically high capacity and screaming performance, but also a new layer of control and virtualization. As you read the story, you'll see that these arrays have grown beyond managing the drives in their own frame to serving as a putative hub for a very large amount of storage behind them in other frames, even frames from other vendors.
Exciting, but is it a good idea?
We help you sort out the immediate benefits to this approach in 'HDS reinvents high-end arrays'. TagmaStore is impressive, and I applaud HDS for trying to bring some order to storage. But I wonder about the long-term implications.
HDS will not be alone in array-side virtualization. NetApp and IBM are already within striking distance. Within a year, most major array players will want to make their box your network-based controller. Switch makers do, too.
HDS has it right about doing volume management close to the disk. But if you buy into virtualization, you'll have to sort out overlapping services at the array and switch levels. The specter of virtualizing the same storage twice arises, along with the question of whether switches and arrays need to pass each other metadata about what they're virtualizing. That's nowhere on the radar now.
When each major array player has a stake in the game, we can expect turf wars. Will this devolve into another silly exercise in account control and FUD (think WideSky)?
Pundit John Webster recently told Storage's Trends editor Alex Barrett in a related context that "if you don't like the proprietariness of a storage device, you can always move your proprietariness up into the network." John's quip also applies to putting uber-controllers in front of storage.
HDS seems sincere in wanting to manage heterogeneous storage, but it won't be able to deliver that all at once. Will it ever overcome the inherent difficulties of doing so, including the lack of cooperation from other vendors?
In the '90s, IT people endured an arcane debate about global X.500 directories, which have some parallels to a virtual storage catalog. Despite their benefits, many large corporations still don't use directories (witness the slow uptake of Active Directory) because they're difficult and time-consuming to deploy.
Global directories can, to one degree or another, be swapped for a competitive offering. If you choose Vendor A's virtual storage controller, do you have the same freedom, or do you face a ground-up migration to a new box, which is bound to be prohibitively painful for many shops? To my knowledge, nothing is going on in SNIA or elsewhere to work toward a standard for the way virtual storage controllers store metadata. Without that, it seems to me, you could wind up walking the high wire without a net.