Not-so-open systems make managing data storage tough

With few standards and storage array vendors not inclined to give up their proprietary ways, managing data storage has become tougher than it should be.


Last month I talked about the present situation in storage infrastructure management using a term derived from ancient Greek: anarchy. Anarchy is a convenient catchall for the challenges that have existed since firms began abandoning mainframe computing, with its centralized approach to managing storage and data assets, for a more decentralized model.

While unseating IBM as the reigning IT monarch may have sounded like a revolutionary idea, the rhetoric of the "open systems" movement never really panned out. Those who bought into the idealistic Rousseau-ian world view (hierarchical order corrupts) forgot to read their French history and Hugo and Dickens novels. Revolutions tend to eat their own and generally produce a lot of unintended consequences.

Following the open systems revolution, things quickly got pretty oligarchic, or downright anarchical, as IBM rivals swarmed into the void left by a retreating Big Blue. Taking a page (and paraphrasing) from Thomas Hobbes' Leviathan, the self-interest of each storage array vendor led it to seek "a stick big enough to raise over the heads of all the others," making life in the process "nasty, brutish, and short" for many tech startups.

Of course, as my bureaucratic friends like to remind me, there were a few alliances along the way, some of which led to de jure standards such as SCSI, Fibre Channel (FC), iSCSI and the Storage Management Initiative Specification (SMI-S).

But even those successes, examined more closely, illustrate the foibles of standards-making by vendor committees. Vendors did work together from time to time to set ground rules, but usually only when consumers made it clear they preferred open standards -- which insulated them from vendors' quick entry into and exit from the market -- over proprietary, cobbled-together approaches that exposed them to lock-in.

Mostly, those standards had to do with signaling, handshaking and plumbing -- not any sort of agreed-upon management paradigm for the ever-expanding storage infrastructure. Instead, each vendor sought a scheme of "element management" that, coincidentally, let the vendor host other "value-add" services directly on its array and charge significantly more for what was increasingly becoming a collection of commodity components.

Storage management took the form of running reports to discern trends, obtaining current status information, and perhaps doing some configuration and maintenance. The approach was acceptable at first, especially to server admins who only needed to deal with a single direct-attached storage rig. It became a significantly more challenging modus operandi when storage devices proliferated and were interconnected into FC or iSCSI fabrics.

Note that I don't call these SANs because they weren't. A storage area network, by its earliest definition, was supposed to have been a "true" network, described like other networks using the OSI layer cake model in which one functional layer provides common management for all interconnected devices.

But FC provided no such management layer, so storage boxes ended up with two sets of connections: serial SCSI interconnects (FC, iSCSI and SAS) for I/O, and separate IP connections. The latter were needed to expose each box's onboard monitoring and configuration controls "out of band" -- entirely separate from storage I/O traffic -- to storage admins.
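To make that split concrete, here is a minimal sketch of what out-of-band element management looks like from an admin's point of view: storage I/O rides the FC or iSCSI fabric, while monitoring traffic goes to each array's separate management IP address over the ordinary LAN. The addresses, the endpoint path and the JSON fields below are hypothetical placeholders, not any particular vendor's API -- each vendor exposed its own scheme, which is precisely the problem described above.

```python
import json
import urllib.request

# Hypothetical out-of-band management addresses -- one per array, reachable
# over the LAN, entirely separate from the FC/iSCSI data path.
ARRAY_MGMT_ADDRESSES = ["192.168.10.21", "192.168.10.22", "192.168.10.23"]


def poll_array_health(mgmt_ip: str) -> dict:
    """Query a single array's out-of-band management interface.

    The '/api/status' path and the returned fields are placeholders; every
    vendor exposed its own element-management interface.
    """
    url = f"http://{mgmt_ip}/api/status"
    with urllib.request.urlopen(url, timeout=5) as resp:
        return json.load(resp)


def report(addresses: list) -> None:
    # One-off polling, box by box: the "element management" model.
    for ip in addresses:
        try:
            status = poll_array_health(ip)
            print(f"{ip}: state={status.get('state')}, "
                  f"free_capacity_gb={status.get('free_capacity_gb')}")
        except OSError as err:
            print(f"{ip}: unreachable ({err})")


if __name__ == "__main__":
    report(ARRAY_MGMT_ADDRESSES)
```

Note what the sketch does not do: nothing in it spans arrays. Each box is polled, configured and reported on individually, which is exactly the manageability gap the rest of this column is about.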

This bifurcated design reflected the willingness of the industry to cooperate with SAN standards only up to the point where it made financial sense to do so. SANs provided more connection points for serial SCSI-compatible storage rigs, which was good for vendors. Rudimentary SANs also provided a way for former mainframe channel extension vendors to sell a new family of products (SAN "switches") at enormous profit.

Vendor technology evangelists said SANs would change everything by creating pools of storage that would enable a more elegant and simplified management approach. But they stopped short of delivering on that promise. Agreeing to a common interconnect was one thing; enabling common management was quite another.

SNIA's SMI-S started out as an earnest effort to create a grand management scheme, but it was watered down, proved difficult for vendors to implement and received less-than-enthusiastic promotion.

Digital Equipment Corp., and later Compaq, articulated a real SAN strategy in 1997 with common management built in as a feature. It was called the Enterprise Network Storage Architecture (ENSA), but it was never implemented. Once Hewlett-Packard got hold of Compaq, ENSA "disappeared." In the words of a former ENSA developer:

"If we had fulfilled the ENSA vision and placed all value-add functionality on a switch or some other device where the functions could be shared across all spindles in a managed way, our bosses worried that the Asian developers would swoop into our market selling rigs with element management and lots of value-add software on their array controllers. They would eat our lunch."

That may also explain why the box-makers worked so hard to scare customers away from the likes of DataCore Software, FalconStor Software and other early pioneers of storage virtualization. Those companies recognized that the SAN management vision hadn't been fulfilled and they set out to do something about it by creating a software-based "super controller" that would sit over the physical storage fabric and provide more efficient sharing of services across all systems. By doing that, they revealed the hardware guys' secret: despite the logo on the bezel plate, everybody was just selling a box of Seagate hard drives. Moreover, storage virtualization advocates noted that managing an infrastructure of heterogeneous boxes on a one-off basis was a lot more difficult and expensive than managing them as a centralized resource with on-demand service provisioning. A lot of money was spent to squelch the upstarts.
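As an illustration (and only an illustration) of the "super controller" idea, here is a minimal sketch of what the abstraction amounts to: heterogeneous boxes presented as a single pool with on-demand provisioning, so no consumer of storage needs to care whose logo is on the bezel. The class names, array names and placement policy are invented for the example and do not correspond to DataCore's or FalconStor's actual products.

```python
from dataclasses import dataclass, field


@dataclass
class BackendArray:
    """A physical array as the virtualization layer sees it: just capacity."""
    name: str
    free_gb: int


@dataclass
class VirtualPool:
    """Toy 'super controller': aggregates heterogeneous arrays into one pool
    and provisions volumes on demand, wherever space happens to be."""
    backends: list = field(default_factory=list)

    def total_free_gb(self) -> int:
        return sum(b.free_gb for b in self.backends)

    def provision(self, volume_name: str, size_gb: int) -> str:
        # Place the volume on whichever backend has the most free space;
        # the consumer never learns which vendor's box it landed on.
        target = max(self.backends, key=lambda b: b.free_gb)
        if target.free_gb < size_gb:
            raise RuntimeError("pool exhausted")
        target.free_gb -= size_gb
        return f"{volume_name} -> {target.name} ({size_gb} GB)"


pool = VirtualPool([BackendArray("vendor-a-array", 500),
                    BackendArray("vendor-b-array", 800)])
print(pool.provision("finance-vol01", 200))
print("free capacity:", pool.total_free_gb(), "GB")
```

The design choice worth noticing is that capacity, placement and (in real products) services such as replication or snapshots live above the individual arrays, shared across all of them, rather than being bought again and again as "value-add" software on each controller.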

At the same time, companies like Tributary Systems were creating interesting niche management products, leveraging existing infrastructure to non-disruptively insert "engines of service management" into the data path. Tributary's Storage Director is one such appliance: it performs the role of a virtual tape library, but it also brokers other data protection services for the data companies stand up in a disk-to-disk-to-tape architecture.

Next month, we'll look at how these innovations in storage management are becoming even more relevant in contemporary storage infrastructure.

About the author 
Jon William Toigo is a 30-year IT veteran, CEO and managing principal of Toigo Partners International, and chairman of the Data Management Institute.

This was first published in April 2013
