Today, storage automation software can help you do more with limited resources. But most vendors concede that the really cool stuff -- such as giving storage the smarts to detect and fix problems, retrieve different types of information and protect itself from unauthorized access -- is still a work in progress, and it will be quite a while before automation makes good on its full promise.
The discipline will evolve over the next two to five years into a more solid corporate citizen. It may take that long for the industry to duke it out over the various storage management and automation standards still in progress.
In the meantime, be prepared for this technology to set off the hype-meter, much like storage area networks (SANs) and virtualization before it. Different vendors define automation according to whatever product they've got at the moment, which makes an in-depth discussion on the topic the same as "trying to get your arms around Jell-O," says Dianne McAdam, an analyst at Illuminata in Nashua, NH.
Much of what's out there currently works only with SANs or with network-attached storage (NAS) devices and thus presumes that a shop has most or all of its storage already networked. Only a few storage-automation setups deal with direct-attached storage (DAS).
So what exactly is automation software? In its purest form, storage automation is the process of turning manual tasks into things the system manages by itself, with little or no human intervention required. This can include jobs such as storage provisioning, backup, restore and other tasks managed by an application or by the user. The goal is to translate business needs - such as an Oracle database that needs to be accessible round-the-clock - into a series of specific storage-related actions that the system can deal with on its own.
According to Marco Coulter, a divisional vice president at Computer Associates International, Islandia, NY, automation software today can capture the steps needed to perform a task (the workflow) and access the required tools from a central point, allowing centralized automation. It can also integrate those tools with best-practice workflows to guide less-skilled staff through a task, and it can automate tasks outright so that staff can refocus on business objectives.
Policies can be put in place to mandate that each desktop user is allowed only 200MB of storage, for example, and if anyone tries to sneak by that limit, the storage administrator must be alerted. Or maybe the offending user is sent an automatic e-mail saying they can't do that and what options exist for the overflow.
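A quota policy like this boils down to a simple rule check. Here's a minimal sketch of the idea; the function and helper names are hypothetical, not any vendor's actual API:

```python
# Hypothetical sketch of a per-user quota policy. Names, messages and
# the notification mechanism are illustrative only.

QUOTA_MB = 200  # per-desktop storage limit set by policy

def enforce_quota(user, usage_mb, alerts):
    """Check one user's usage against the quota and record any actions."""
    if usage_mb <= QUOTA_MB:
        return
    # Alert the storage administrator about the violation...
    alerts.append(("admin", f"{user} exceeded quota: {usage_mb}MB"))
    # ...and tell the user what options exist for the overflow.
    alerts.append((user, "Over your 200MB limit; archive old files or request more space"))

alerts = []
enforce_quota("alice", 180, alerts)  # under quota: no action taken
enforce_quota("bob", 250, alerts)    # over quota: admin and user both notified
```

In a real product, the notifications would go out as console alerts or e-mail rather than list entries, but the policy-evaluation logic is the same shape.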
Likewise, automation software can be set up to morph an existing storage setup into a hierarchical storage management (HSM) system. Any data not accessed in, say, 30 days can be moved to tape and the administrator notified.
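The HSM rule can be sketched the same way: scan last-access times and migrate anything stale. This is an illustrative sketch, assuming a hypothetical file inventory and migration log:

```python
# Hypothetical HSM policy sketch: move files untouched for 30 days to
# a tape tier and note each move for the administrator.
import time

STALE_DAYS = 30

def apply_hsm_policy(files, now, migrated, log):
    """files maps filename -> last-access time in epoch seconds."""
    cutoff = now - STALE_DAYS * 86400
    for name, last_access in files.items():
        if last_access < cutoff:
            migrated.append(name)                   # stage to the tape tier
            log.append(f"migrated {name} to tape")  # notify the admin

now = time.time()
files = {"q3_report.doc": now - 45 * 86400,  # 45 days idle -> migrate
         "budget.xls":    now - 2 * 86400}   # 2 days idle  -> stays on disk
migrated, log = [], []
apply_hsm_policy(files, now, migrated, log)
```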
Most automation software works by discovering the various storage devices attached to the SAN or NAS. As devices are added or changed, the automation software detects this and keeps itself up to date. Once the devices are accounted for, the software tries to determine usage information, such as which applications rely on each storage device, how often each is accessed and how full each one is.
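The discovery-and-usage step amounts to maintaining an inventory and deriving metrics from it. A minimal sketch, with hypothetical device records:

```python
# Sketch of the discovery/usage step: the automation layer polls the
# fabric, keeps its device inventory current, and derives utilization.
# All device data here is hypothetical.

def refresh_inventory(inventory, discovered):
    """Replace the inventory with the latest discovery results,
    adding new devices and dropping ones that have vanished."""
    inventory.clear()
    inventory.update(discovered)

def utilization(device):
    return device["used_gb"] / device["capacity_gb"]

inventory = {}
refresh_inventory(inventory, {
    "array-1": {"capacity_gb": 500, "used_gb": 450, "apps": ["oracle"]},
    "nas-2":   {"capacity_gb": 200, "used_gb": 40,  "apps": ["fileshare"]},
})
# Flag devices that are more than 80% full for the policy engine.
nearly_full = [name for name, d in inventory.items() if utilization(d) > 0.8]
```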
Then the specific corporate policies are put in place, such as notifying an administrator before assigning more space to an application. Once those policies are in place, the software can really start to do its work. CreekPath Systems Inc., Longmont, CO, claims volume creation on the array level can be as automated as the customer wants it to be, says Paula Dallabetta, director of marketing.
"On an EMC box, we can go in at the .BIN file to create additional volumes, zone it and then close it at the volume management level," she says. Of course, if administrators don't want all that done behind the scenes, they can specify at which points in the process they want to be involved or asked before the system continues onto the next step.
One of the oldest automation players is InterSAN, Scotts Valley, CA, which provides SAN management software called Pathline. Pathline's three-tier architecture includes specialized agents that talk to different storage subsystems, a core software platform that runs on Solaris and handles management and integration tasks, and a database that tracks it all.
Most shops assign storage in a multistep process filled with grunt work, says Karen Dutch, vice president of marketing. Although SANs offer flexibility in meeting storage needs, the downside is that multiple, redundant paths could fulfill any given storage request. An administrator therefore needs to check different subsystems to see what's currently free and where it's located. Then the administrator needs to match the specific application's needs - e.g., a billing system that needs high availability or redundancy - to what's available in the various SAN subsystems. Most shops keep giant spreadsheets to track storage availability, which must be updated manually any time a change is made.
Finally, the administrator needs to use element tools to do the technical provisioning steps, including LUN masking. All these steps must be done in the correct order.
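The manual sequence described above - survey free capacity, match it to the application's requirements, then run the element tools in strict order - is exactly what one-click provisioning scripts. A minimal sketch, with hypothetical step and subsystem names (not InterSAN's actual API):

```python
# Hypothetical sketch of the ordered provisioning workflow. Step names,
# subsystem records and the matching rule are illustrative.

def provision(request, subsystems, audit):
    """Pick a subsystem that satisfies the request, then run the
    element-level steps in the required order."""
    # 1. Survey what's free and where (replacing the giant spreadsheet).
    candidates = [s for s in subsystems
                  if s["free_gb"] >= request["size_gb"]
                  and s["redundancy"] >= request["redundancy"]]
    if not candidates:
        raise RuntimeError("no subsystem meets the request")
    target = candidates[0]
    # 2. Element steps, strictly in order.
    for step in ("create_volume", "zone", "mask_lun", "mount_on_host"):
        audit.append((step, target["name"]))
    target["free_gb"] -= request["size_gb"]
    return target["name"]

audit = []
subsystems = [{"name": "sym-01", "free_gb": 100, "redundancy": 1},
              {"name": "sym-02", "free_gb": 500, "redundancy": 2}]
chosen = provision({"size_gb": 200, "redundancy": 2}, subsystems, audit)
```

The audit trail is the piece that lets an administrator specify checkpoints where the system should pause and ask before continuing.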
Dutch claims that Pathline turns this into a one-click process that can be done in less than a minute, vs. approximately an hour for an expert using traditional methods to provision a single LUN on a single server. Not only does this save time, but the provisioning can be handled by a business user.
All told, automation software can mimic whatever policies and procedures you've already got in place, or it can help you redefine and quantify policies that have perhaps been too squishy for too long.
The biggest problem facing automation vendors is that their software needs to work with all the various interfaces used by storage vendors. "As long as an array vendor opens its functions through an API, we can automate the process," says CreekPath's Dallabetta. "But if the hardware doesn't have an open API or uses a command-line interface, then we can't manage it." For example, she says, IBM's Shark system does "not have any public APIs, although they're in the process of providing this."
ProvisionSoft, a recent entrant into the storage automation market, gets around this problem by layering on top of the management software that comes with big-name storage arrays. In the initial release, its DynamicIT works on top of EMC's Enterprise Command Center (ECC) and HP/Compaq's StorageWorks, with more integrations in the works. That way, mundane tasks such as zoning and LUN masking are left to the hardware-specific packages, leaving DynamicIT to make automated provisioning decisions based on metrics such as the current server and storage environment, policies, past usage and SLAs.
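This layering is essentially an adapter pattern: the automation layer makes the decision, then delegates the vendor-specific mechanics to whichever management package owns the target array. A sketch with hypothetical class and method names:

```python
# Sketch of layering automation on vendor management packages.
# Class names and return strings are illustrative, not real APIs.

class ArrayManager:
    """Interface that each vendor-specific management package satisfies."""
    def provision(self, size_gb):
        raise NotImplementedError

class ECCAdapter(ArrayManager):          # wraps EMC's management software
    def provision(self, size_gb):
        return f"ECC provisioned {size_gb}GB (zoning/masking handled by ECC)"

class StorageWorksAdapter(ArrayManager): # wraps HP/Compaq's package
    def provision(self, size_gb):
        return f"StorageWorks provisioned {size_gb}GB"

def automate(adapter, size_gb):
    # Policy/SLA decision-making would happen here; the vendor
    # layer then performs the low-level steps.
    return adapter.provision(size_gb)

msg = automate(ECCAdapter(), 50)
```

Supporting a new array family then means writing one more adapter rather than reimplementing zoning and masking logic.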
Dan Tanner, an analyst at the Aberdeen Group, Boston, MA, likens the situation to the central nervous system in a human body. There are things that we don't control, such as breathing on a regular basis, but that we could intervene with if we choose - such as holding our breath when we walk through an area with dense gasoline fumes. Likewise, the vision with storage automation is to have policies or other classes of things that the user can and should intervene with - from a business perspective - and then things below that are automatic and are never touched by human hands.
Chuck Hollis, vice president of markets and products at EMC Corp., defines three types of automation. The first is the automation of repetitive tasks, such as scheduling a backup and verifying it's been done. The second is integrating related tasks into an end-to-end solution, such as provisioning storage and configuring the host bus adapter, file system and related storage pieces to carry out the task.
The third type of automation is something Hollis calls "coaching with expertise," which is basically a system that helps users, for instance, find 100GB of storage for a new application in a system that's already maxed out.
EMC's automation plans
"If you take a look at automation today, you'd be sorely disappointed," Hollis admits. Currently, EMC can automate things such as performance service levels: if an application requires a certain level of service and isn't getting it, the administrator can be notified and corrective actions suggested. EMC software can also automatically fix capacity problems. If an NT file system is supposed to stay under 90% utilization and is heading over that limit, the system can automatically add more storage.
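The capacity behavior Hollis describes is a threshold-triggered grow loop. A minimal sketch, with illustrative numbers and names:

```python
# Sketch of threshold-based capacity automation: when a file system
# crosses its utilization ceiling, grow it until it is back under.
# Volume names and growth increment are hypothetical.

CEILING = 0.90  # keep the file system under 90% utilized

def autoexpand(fs, grow_gb, actions):
    while fs["used_gb"] / fs["capacity_gb"] > CEILING:
        fs["capacity_gb"] += grow_gb
        actions.append(f"added {grow_gb}GB to {fs['name']}")

fs = {"name": "ntfs-vol1", "capacity_gb": 100, "used_gb": 95}
actions = []
autoexpand(fs, 20, actions)  # 95/100 > 0.9, so grow once: 95/120 is under the ceiling
```

In practice the same check would be wrapped with the notify-first policy mentioned earlier, so an administrator can approve the expansion before it happens.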
But these features only work across EMC's own arrays, Hollis says. The company's AutoIS initiative, announced in October 2001, is about taking these tasks and automating them across Hitachi, Compaq, NetApp, and other storage hardware. EMC's StorageScope already does this to some degree by integrating storage management information from different vendors and then presenting it in multiple views. Likewise, the firm's Resource Availability software monitors host-based operating systems, databases and other pieces and reports on the status and usage of related storage resources.
The next releases of both these products - as well as Workload Analyzer, which tracks performance metrics - will have "more intelligence" about other vendors' hardware, Hollis says.
Longer term, over the next two years, Hollis envisions real-time performance analysis, with suggested corrective actions, across different vendors' arrays, switches, servers and applications. "We're staring that right in the face," he says. Another key initiative is replication management across different vendors' gear, including remote replication for things such as disaster recovery, application testing and version control.
IBM's building bricks
IBM is getting ready to deliver StorageTank next year - the company's first serious foray into policy-based storage automation. Beyond that are initiatives such as eLiza, which focuses on self-healing servers, storage and software. The goal is to make storage better able to detect and fix problems, retrieve different types of information and protect itself from unauthorized access.
"The idea is to provide more of a mainframe mindset in a distributed storage world," says Brian Truskowski, chief technology officer at IBM's Storage group. "Over time, we'll see more things like having more instances of an LPAR [logical partition], where something else can take over in case of failure." These are features long available on mainframes that are now moving into distributed storage architectures.
Longer term, IBM is investing in a concept called intelligent storage bricks, Truskowski says. "They have knowledge of other bricks or components in the environment," including gear made by vendors other than IBM. They can also plug into today's storage architectures, he says. If one brick becomes overloaded or unable to function, another can take over. "If you buy storage in the form of intelligent bricks, there's a certain autonomic intelligence that eases the management of the environment."
Still, these bricks are a ways off - IBM is looking at this project as a research activity, not quite yet as a product. In the meantime, the company is investing in something it calls domain wizards - software that can help manage and automate storing an Oracle database on the Shark. "This won't help with the entire environment, but it can help optimize something specific," Truskowski says.
Are customers ready?
Most users aren't prepared for a fully automated storage setup, either because their policies are murky or because they don't really have a handle on how storage could work in a perfect universe. Others may never want everything automated, says Illuminata's McAdam. "A lot of customers don't want to relinquish that kind of control. It's a leap of faith they'll have to take."
Gary Fox, vice president and director of enterprise data storage at Wachovia/First Union Bank in Charlotte, NC, agrees. "We would like to see that kind of automation, but it's got to be a rules-based system that we have some governance over," he says. "We have to be able to set parameters - not just let it go out and grab more space for something." The reason, he says, is that most organizations "don't police themselves very well," and someone has to be able to make judgments about how things are done. Indeed, Fox's group exists mainly to allocate storage space to the bank's internal customers.
For some users, McAdam compares the situation to when software installation became automated. As an "old systems programmer," she didn't know whether she could trust if the automation would really work. But over time it proved itself, and she realized that it was "great" because it helped her get home at night.
She envisions this type of push-back among storage administrators, too, until companies realize that automation ultimately helps reduce risk. "A lot of outages in the data center have to do with human error," and automation reduces that hazard, she says.
However, according to Stephan Elliott, director of storage management at the Hurwitz Group in Framingham, MA, before the storage department can even think of deploying policy management software throughout the organization, it needs to get its ducks in a row and make sure there's buy-in on the political, cultural and legal issues surrounding policy management. "The human element to policy management is the most difficult part - the technology is much easier to deploy," Elliott says.
CA's Coulter sees the day when storage policy-management decisions - such as adding more storage or when to move data to long-term archiving - will be made within the company's business units instead of within the IT or storage department. "Why not put the power into the hands of the people who are paying for it?" Coulter says. He concedes that systems administrators will still be responsible for installing the storage gear and seeing that things hum along smoothly, but Coulter says the people who work in the business units have a much better idea of their storage needs than the people in IT.
But back to the present: EMC's Hollis says only a few of his customers are currently investing in heavy-duty storage infrastructure software, including automation. These are shops mostly in the financial services, telecommunications and large-scale manufacturing industries that are automating as much as they can elsewhere in their IT architectures, too. He sees storage automation becoming more of a mainstream force in about two years.
Automation can open up a rat's nest of questions about storage policies and procedures. Some key concerns are:
- What's the right set of policies?
- If the automation changes the IT workload, how should the IT group be restructured as a result?
- What services, training and education are needed to help use automation tools to their maximum potential?
"Automation requires a top-down storage approach," says Bill North, director of storage software research at International Data Corp., Mountain View, CA. Customers that automate only pieces of the complex puzzle are at risk. Without knowing what the application requirements or storage requirements are, one can't be sure about the quality of the resulting automation.
As North says, "There's a tremendous amount of work that needs to be done before we can get there. We're in the 'walk before you can run' stage."
About the author: Johanna Ambrosio is a freelance writer in Marlborough, MA. She can be reached at
This was first published in December 2002