Learn from mainframe storage

Open systems dudes--it's all been done before on the mainframe as far as systematic storage management goes. Check it out.

This article also appears in the Premium Editorial Download of Storage magazine: "Are your data storage costs too high?"

Transitioning from mainframes to a SAN
When transitioning from the direct-attached world into networked storage, much of the general skill set remains the same. Still, there are differences.

Roger Cable, a senior systems programmer at Allegheny Energy Inc. in Hagerstown, MD, currently supports MVS and some Unix. If he were to switch to a SAN setup, he says, "What I'd need to do is learn more about different operating systems and how they function, booting from the SAN," and other things that are SAN-specific. The company also runs Windows and Linux, so he would need to learn about those environments.

Also, don't neglect to train on tools associated with the different environments. It's human nature to learn only what you need for the specific operating system you work with the most, and SANs require broader-based knowledge.

There are also some mindset tweaks to look out for. "A major change for most mainframe people is the idea of managing remotely," says Mike Karp, senior analyst with Enterprise Management Associates in Boulder, CO. "Mainframers are used to being able to go into the computer room to fix something that's gone wrong." Web-based SAN management tools might help mainframers make this leap into the remote world.

Another big change is the whole idea of different platforms grabbing whatever storage they can see. Staffers will have to be trained carefully about the concepts of LUN masking and zoning. "When you plug an NT server into a SAN, if NT can see the volume, it will try to grab it. That ruins it for Solaris," says Dianne McAdam, a storage analyst at Illuminata Inc. in Nashua, NH. "So people have to understand they're in a shared world--they have to be careful to section off what a server can see with what LUN. We didn't have these problems in the mainframe world."
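The masking discipline McAdam describes can be sketched abstractly: a shared array records which host is allowed to see which LUN and hides everything else. A minimal model in Python (the host and LUN names are invented for illustration, and real arrays enforce this in firmware, not application code):

```python
# Toy model of LUN masking: a shared array exposes a LUN only to
# hosts explicitly granted access, so an NT box can't "grab" a
# volume that belongs to Solaris. All names are hypothetical.
class SharedArray:
    def __init__(self):
        self.masking = {}  # lun -> set of hosts allowed to see it

    def mask(self, lun, host):
        """Grant a host visibility to a LUN."""
        self.masking.setdefault(lun, set()).add(host)

    def visible_luns(self, host):
        """Return only the LUNs this host has been granted."""
        return {lun for lun, hosts in self.masking.items() if host in hosts}

array = SharedArray()
array.mask("lun0", "solaris-db")   # Solaris gets the database volume
array.mask("lun1", "nt-web")       # NT gets its own volume

print(array.visible_luns("nt-web"))   # the NT server sees only lun1
```

Zoning at the Fibre Channel switch adds a second fence of the same shape: a host that isn't zoned to a port never sees the array at all.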

Chuck Hollis, vice president of markets and products at EMC, recommends staff rotations as a way of teaching mainframers what they need to know about networked storage, and to let networked storage people learn some of the mainframe disciplines that still apply. "I've seen a lot of shops do this successfully," he says. "Backup is still backup--the tools might change, but the process is the same."

If you're working in networked storage, you could probably learn a thing or two from your peers in Mainframe Land. Unless you happen to be one of those former mainframers--then you can definitely teach those neophytes a thing or two.

Specific technologies used in the mainframe and distributed worlds differ, of course, but it turns out that many policies and best practices apply to both realms. Disciplines such as change management, volume utilization, costing, planning, service level agreements and others translate well from Big Iron to networked storage.

"The reality is that much of what's being done in storage area networks (SANs) came from the mainframe world," says Mike Kahn, chairman of the Clipper Group, an analyst firm in Wellesley, MA. For instance, IBM's mainframe-based ESCON was one of the earliest SANs. "The other things that mainframes brought into the equation were structure and discipline--what people ran away from in open systems. But when you're talking about storage, you need well-laid-out methodologies for adding users, handling backup/recovery, security and other things," he says.

These kinds of policies are especially critical these days, given the trend toward server consolidation on the networked or open side of the house, and economic pressures that are pushing companies to make the most of what they already have. Throwing more storage arrays at a problem just doesn't work the way it used to, from either a cost-justification or implementation perspective.

Roger Cable, a senior systems programmer at Allegheny Energy Inc. in Hagerstown, MD, says that his company has transferred both general knowledge and specific practices from mainframes to networked storage. What helps is that Allegheny is using EMC's Symmetrix both on the mainframe and the open systems side. The two worlds aren't connected yet, but Cable says that the goal is to link the mainframe arrays into the SAN at some point.

For now, one of the "main strategies we've taken off our mainframe" has to do with backup and business continuance, Cable says. First, the basic disaster recovery strategy used for the mainframe was tweaked and then used for the open systems side of the house. Second, some of the specific technology translates, too. Allegheny uses EMC's TimeFinder software to split off business continuance volumes (BCVs) on both sides of the storage house. BCVs are local mirror images of production volumes, and they can be used for backup or for testing changes to applications or databases, among other things.
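The BCV workflow Cable describes follows a simple lifecycle: while established, the mirror tracks production; once split, it becomes a frozen point-in-time copy that backup or test jobs can read without touching production. A toy model of that concept in Python (this illustrates the idea only, not EMC TimeFinder's actual interface):

```python
# Toy model of a business continuance volume (BCV): it mirrors a
# production volume until split off, after which it holds a frozen
# point-in-time copy. Conceptual sketch only -- not TimeFinder's API.
class Volume:
    def __init__(self, blocks):
        self.blocks = list(blocks)

class BCV:
    def __init__(self, production):
        self.production = production
        self._frozen = None  # point-in-time copy, set by split_off()

    def split_off(self):
        """Freeze a copy; later production writes no longer propagate."""
        self._frozen = list(self.production.blocks)

    @property
    def blocks(self):
        # While established (not yet split), the BCV mirrors production.
        return self._frozen if self._frozen is not None else self.production.blocks

prod = Volume(["a", "b"])
bcv = BCV(prod)
bcv.split_off()              # freeze the copy for backup or testing
prod.blocks.append("c")      # production keeps changing...
print(bcv.blocks)            # ...but the split copy stays at ['a', 'b']
```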

Also Allegheny's change management procedures are "pretty much the same" on both sets of platforms, Cable says.

Moving to a SAN
The Weather Channel, in Atlanta, GA, has moved from managing separate islands of direct-attached storage--on NT, Linux, Unix and Windows 2000 platforms--into a centralized SAN environment. Although the company has no mainframe, it does have an aging VAX minicomputer that the Weather Channel's staffers are trying to "get out the door," according to Vicki Hamilton, vice president of shared services in the company's IT operations group. The company is rewriting the VAX's applications to help that happen as soon as possible.

Begun in January 2001, the Hitachi-based SAN implementation is moving right along. "'Done' is a relative term," Hamilton says. "We're constantly making additions and changes."

She says the centralization has helped tremendously. "When you have distributed systems and servers, you have a lot of duplicated data." She says the company has "freed up a lot of storage," and that application response time has also improved. Although her group is just starting to gather statistics about the improvements, she says she's hearing positive feedback from her business users.

One constant from the old environment to the new is the set of policies that govern backup and restore, even though the tool has changed. "The policies didn't change--we still do rotational backups the same, then cumulative and entire systems backups," Hamilton says. "We still rotate our tapes every four months."

On the tool side, the Weather Channel now uses Veritas' NetBackup, and the process has become "a lot easier," she says.

Testing processes
Dianne McAdam, a storage analyst at Illuminata Inc. in Nashua, NH, earned her mainframe chops as a systems programmer in a bank's corporate data center years ago. On a mainframe, storage utilization typically runs around 80% or 85%, she says, vs. 30% to 50% in the Windows environment. "We can bring some of the mainframe discipline to open systems and Windows," she says. "They're getting pressure to use what they've got more efficiently."
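The utilization gap McAdam cites is easy to quantify: at 30% utilization, holding the same amount of actual data takes roughly three times the raw capacity it takes at 85%. A back-of-the-envelope calculation (the 10 TB figure is invented; the utilization rates are her estimates):

```python
# Raw capacity needed to hold 10 TB of real data at the utilization
# rates McAdam cites. Illustrative arithmetic only.
data_tb = 10.0

def raw_needed(data, utilization):
    """Raw capacity required to store `data` at a given utilization rate."""
    return data / utilization

mainframe = raw_needed(data_tb, 0.85)  # mainframe discipline: ~11.8 TB
windows = raw_needed(data_tb, 0.30)    # typical Windows shop: ~33.3 TB

print(f"mainframe: {mainframe:.1f} TB, windows: {windows:.1f} TB")
```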

Also, she says that not everyone on the networked side of storage understands as deeply as most mainframers do the importance of actually testing whatever backup procedures they have. "I see a lot of people who don't check to see that the backup has completed successfully--20% to 30% of them don't." People need to check how long it takes to restore a key database, for instance, to make sure their end users will have computing service resumed as soon as possible in case of an outage.
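McAdam's two checks boil down to simple automation: confirm each backup job's completion status, and time a trial restore of a key dataset. A hedged sketch in Python (the log format and job names are hypothetical; a real shop would parse its backup product's own logs):

```python
# Sketch of the checks McAdam recommends: flag backup jobs that did
# not complete, and time a trial restore. Log format is invented.
import time

def failed_jobs(log_lines):
    """Return job names whose status field is not SUCCESS."""
    failures = []
    for line in log_lines:
        job, status = line.rsplit(" ", 1)
        if status != "SUCCESS":
            failures.append(job)
    return failures

def timed_restore(restore_fn):
    """Run a trial restore and return elapsed seconds."""
    start = time.monotonic()
    restore_fn()
    return time.monotonic() - start

log = ["payroll-db SUCCESS", "mail-store FAILED", "web-content SUCCESS"]
print(failed_jobs(log))  # the failures nobody checked for
```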

Brad Stamas, chairman of the Storage Networking Industry Association and StorageTek's director of storage domain management, says many of the issues remain the same, even if the platforms change. "Procedurally, you still have to decide what to back up, and how often and how you're going to do it. These are not software-based problems--they're more about analysis and assessment."

Mike Karp, senior analyst with Enterprise Management Associates in Boulder, CO, points to something else that translates from Big Iron to networked storage. "Inexpensive automation of HSM [Hierarchical Storage Management] is key for distributed systems," he says. The central theory behind hierarchical storage management is "you take the less deserving data, the stuff that hasn't been touched since 1998, and you move it onto slower disks or archived tape or optical."

Of course, HSM has to change to accommodate distributed systems--and in fact already has, Karp says. "The implementation changes not just because of the distributed environment, but because the technology itself keeps improving. We can offload data to hard disks now, and that was never an option in the mainframe world." Now there are different gradations of nearline, offline and online storage, and "we're now able to manage by particular sets of data or applications," he adds. But even though it's not your grandmother's HSM, the fundamental concepts are still the same. "The issue all falls back to data management."
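At its core, the HSM policy Karp describes is a single rule: data untouched past some cutoff migrates to a slower, cheaper tier. A minimal illustration in Python (the 180-day threshold, tier names and file names are all invented):

```python
# Minimal HSM placement rule in the spirit Karp describes: anything
# not accessed since a cutoff moves to a cheaper tier. The threshold
# and tier names are hypothetical.
from datetime import datetime, timedelta

def pick_tier(last_access, now, cold_after=timedelta(days=180)):
    """Return 'online' for recently used data, 'nearline' otherwise."""
    return "online" if now - last_access < cold_after else "nearline"

now = datetime(2002, 12, 1)
files = {
    "gl-ledger.dat": datetime(2002, 11, 20),  # active -> stays online
    "q3-1998-report": datetime(1998, 10, 5),  # "untouched since 1998"
}
placement = {name: pick_tier(ts, now) for name, ts in files.items()}
print(placement)
```

A production HSM adds what Karp notes the newer products do: multiple gradations of tier, and per-application or per-dataset policies rather than one global cutoff.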

Overall, Karp admits that he really doesn't like mainframes. "I've been a rigorous advocate of the death of the mainframe for 15 years and have been proven wrong every damn time. But in storage, we've found a lot of value from the mainframe world," he says.

Some of the mainframe tools become even more important in the open systems and networked storage arenas. "When you're consolidating five or six applications, if nobody's doing capacity planning, then how those applications come together--and their additive needs--is a surprise," says Don McNicoll, a senior director at Hitachi Data Systems.

Disciplines learned from mainframe storage

- Policy-based storage management
- Volume utilization
- Backup/restore
- Security
- Change management
- Costing
- Planning

He points out that it's not a matter of distributed systems ignoring these important issues, it's just that there hasn't been that huge a need. With each system--and its attached storage--being managed as an independent entity, there's been no burning desire to know what's happening across all of the systems as a whole. It's only when individual storage and servers are consolidated that cross-systems storage management becomes a more urgent matter.

To address this, "some of our customers have given all their storage management back to the IT group, typically the mainframers," McNicoll says. EMC has made a living, in part, out of taking mainframe storage concepts and adapting them for the networked storage realm, says Chuck Hollis, vice president of markets and products at EMC. PowerPath, one of EMC's software packages, handles intelligent storage-path management for Unix, something the mainframe does without needing third-party software. Similarly, EMC sells StorageScope software to help Unix administrators figure out how much storage they're using and what they've got left. "The mainframe does this for free," Hollis says.

"See a pattern here? It's clear where a lot of the good ideas came from," Hollis says, even though he says that his background is primarily Unix.

Vive la difference
Mainframes don't have a monopoly on best practices, of course, and nobody's pretending that life in the distributed world is as easy as it was when Big Iron ruled the roost. Just about everything becomes more difficult and complex with networked storage. Each different platform adds its own quirk to the mix (see "Transitioning from mainframes to a SAN").

For example, as Illuminata's McAdam points out, different operating systems handle backup differently. "So if you have a major disaster, Windows may come back as of 4 a.m. and AIX may be back as of midnight. There's no consistency of data."

EMC's Hollis points out another difference: "File sharing, data sharing, collaborating around information--the whole concept of networked storage came from open systems. Mainframers want to lock data behind 32 passwords. Philosophically, they're very different."

Chris Saul, an enterprise consultant for IBM's Storage Systems Group, says that "in many organizations, the biggest challenge is one of politics." The old model was to have separate administrators for each operating system and its attached storage--a Unix administrator, a mainframe administrator and so on.

With a shared storage setup, that organization no longer works. Instead, there need to be experts organized along functional lines--people who can tune storage for multiple applications, or who know about backup and recovery and can cope with different operating systems. This way, the IT operations group can handle centralized capacity planning or volume utilization.

As Hollis says, at the end of the day, "they're all servers, and business users don't really care where IT decides to put an application--the expectations are the same. Religion and cultural bias matter only in IT."

IBM's Saul admits that the discipline and standards from the mainframe storage world can sound pretty dull. "But when you're delivering around-the-clock IT service, you really want boring. When the service is not there, you wish it was boring again."

Web Bonus:
Online resources from SearchStorage.com: "Look to the mainframe and learn," by Mark Lewis.

This was first published in December 2002
