Storage Holy Grail: End bifurcated storage infrastructure management

Storage virtualization and other storage "uber-controllers" are a step toward better storage infrastructure management, but it's still not an integrated process.

In an earlier column, I talked about the evolution of storage infrastructure management and the various ways storage services were being aggregated to simplify their selective application to specific data assets and workloads.

The mainstream approach to managing storage infrastructure has been far from elegant or economical. First, vendors have been evolving their arrays' controller boards into full-fledged general-purpose servers, often running a recognizable OS as well as storage-specific applications ranging from proprietary file systems and RAID software to more exotic thin-provisioning or deduplication algorithms. The result is a "storage-as-appliance" model that has the advantage of creating sleek, self-contained islands of storage, each managed individually using its own element management software, but with the downside of making storage more difficult to manage as it scales.

From the perspective of workload and data, appliance storage was designed more for direct attachment to certain data than for sharing across multiple workloads. An Oracle database needed its own dedicated storage rig, as did Microsoft Exchange and so on. This one-application-one-appliance model worked well until data outgrew rig capacity. Fielding another rig required hiring another administrator to configure, optimize, manage and troubleshoot the new island.

Under those circumstances, data storage infrastructure management -- managing a fabric of such storage appliances -- was (and is) difficult to automate; hence, it's labor-intensive and costly from both a Capex (cost of specialized gear) and Opex (labor cost) perspective.

An alternative was to virtualize the hardware layer, turning off all the on-box software and placing those services on a software or hardware uber-controller that operated across all spindles in the infrastructure. I noted last time that several software-based uber-controllers are available in the form of storage virtualization software packages, or "storage hypervisors" to use the more recently coined term. I also highlighted an uber-controller appliance from Tributary Systems called (appropriately enough) the Tributary Storage Director.

Between them, software-based storage hypervisors and hardware-based storage service management appliances usurp the on-board value-add software of the storage array and surface the functions as services that can be mapped to policies and applied selectively to data. The storage virtualization software approach delivers a centralized way to do this, becoming a "service-enhanced volume delivery engine" or maybe a "storage router" in terms of its function to place data onto spindles where desired services can be most efficiently applied.
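
To make the "storage router" idea concrete, here's a minimal sketch in Python of how policies might drive volume placement. The data classes, service names and pools below are hypothetical illustrations of the concept, not any vendor's actual schema:

```python
# A toy policy engine: data classes map to required services, and a
# "storage router" places each volume on a pool that can deliver them.
# Every name here is a hypothetical illustration, not a vendor schema.

from dataclasses import dataclass, field

@dataclass
class StoragePool:
    name: str
    services: set = field(default_factory=set)  # services this pool can apply

# Policies: the services each class of data must receive.
POLICIES = {
    "oltp-database": {"thin-provisioning", "synchronous-replication"},
    "mail-archive":  {"deduplication", "snapshot"},
}

def place_volume(data_class: str, pools: list) -> StoragePool:
    """Route a new volume to the first pool offering every required service."""
    required = POLICIES[data_class]
    for pool in pools:
        if required <= pool.services:  # set subset: the pool covers the policy
            return pool
    raise LookupError(f"no pool satisfies the policy for {data_class!r}")

pools = [
    StoragePool("tier1-flash", {"thin-provisioning", "synchronous-replication"}),
    StoragePool("tier3-sata", {"deduplication", "snapshot"}),
]
print(place_volume("oltp-database", pools).name)  # -> tier1-flash
```

The point is the indirection: policies describe the services data requires, and the engine decides which spindles deliver them.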

The Tributary Systems approach is more federated. While the company's Storage Director can be clustered (to provide more ports for attaching more client systems and storage arrays), you can set up multiple Storage Directors around your infrastructure as required by storage I/O traffic to facilitate policy-based assignment of storage services to selected data. Managing storage in this configuration requires polling each Storage Director.

The good thing about the uber-controllers is that they abstract value-add functionality away from hardware, thereby reducing both the cost of the hardware and the lock-in to a particular vendor's rig. Moreover, they enable the macro-level management of "storage as a service," delivering the means to manage capacity, performance and data protection holistically. Marketecture terms like "private storage clouds" or "software-defined storage" are veiled references to these management architectures.

While this new application-facing, service-oriented management approach has been a long time in coming, it's not all that's needed. Below the layer of services, capacity and performance is the hardware layer where cabling gets fouled, HBAs die, disk drives fail and solid-state drives burn out. Some folks who have aggregated storage rigs with storage hypervisors or other uber-controllers also believe that RAID needs to be done on hardware, so RAID configuration and management at the box or drive-tray level is not covered by service-level management.

That leaves us with a bifurcated management challenge in storage: aggregated services need to be managed and provisioned at the uber-controller, but someone also needs to use conventional storage resource management (SRM) software tools -- leveraging connections to discrete devices via proprietary APIs, SNMP MIBs and SMI-S providers -- in a desperate effort to see what is happening to I/O in real time and to oversee the condition of the infrastructure plumbing. If you're keeping count, that's two management targets with no unified mechanism for collecting and presenting information so that management can advance toward greater automation.
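
Here's what that bifurcation looks like in code: a hedged Python sketch of the two collection paths an administrator is left to maintain. The controller URL is a hypothetical placeholder, and the device path is deliberately stubbed out because it hinges on each vendor's MIB or SMI-S provider:

```python
# Two pollers, two data models, no common schema -- the bifurcation in
# miniature. The REST URL is a hypothetical placeholder; the device
# path is stubbed because it depends on each vendor's MIB or provider.

import json
from urllib.request import urlopen

def poll_service_layer(controller: str) -> dict:
    """Query a (hypothetical) uber-controller REST endpoint for service state."""
    with urlopen(f"https://{controller}/api/services") as resp:
        return json.load(resp)

def poll_device_layer(array: str) -> dict:
    """Poll a discrete array's hardware health.
    In practice this means walking a vendor MIB over SNMP or querying an
    SMI-S provider -- an entirely separate tool chain and data model."""
    raise NotImplementedError("requires the vendor's MIB or SMI-S provider")
```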

It would be nice if every vendor followed through on implementing open REST protocols for managing their arrays, something they sounded enthusiastic about a few years ago. In 2009, IBM announced it was embracing REST with its Project Zero … which is exactly what it has done with the protocol thus far. EMC, Hewlett-Packard, Microsoft and all the knee-nippers and ankle-biters of the storage world also rolled out roadmaps emphasizing REST, but only a few (notably X-IO) have delivered. X-IO shows how SRM and storage service management can be combined using the Web's standard protocols.
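
To show why this matters, here's a hypothetical sketch of what unified collection could look like if arrays exposed a common REST namespace. None of the endpoints shown reflect any vendor's actual API:

```python
# If arrays shared one REST namespace, a single collector could cover
# services and plumbing alike. Endpoints are hypothetical; no vendor's
# actual API is implied.

import json
from urllib.request import urlopen

def collect(array: str) -> dict:
    """Pull the service view and the hardware view from one REST tree."""
    report = {}
    for view in ("services", "hardware"):
        with urlopen(f"https://{array}/api/v1/{view}") as resp:
            report[view] = json.load(resp)
    return report

# One loop over every array replaces a shelf of per-vendor SRM plug-ins:
# inventory = {a: collect(a) for a in ("array1.example.com", "array2.example.com")}
```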

The bottom line is that without an open-standards-based approach to unifying service and plumbing management, it will be significantly more difficult and costly to manage burgeoning storage infrastructures. Cloud storage and software-defined storage marketecture are distractions, and unified storage management (across all vendors' gear) is far more important than "unified storage."

About the author:
Jon William Toigo is a 30-year IT veteran, CEO and managing principal of Toigo Partners International, and chairman of the Data Management Institute.

This was first published in May 2013
