
Transitioning from mainframes to a SAN
When transitioning from the direct-attached world into networked storage, much of the general skill set remains the same. Still, there are differences.

Roger Cable, a senior systems programmer at Allegheny Energy Inc. in Hagerstown, MD, currently supports MVS and some Unix. If he were to switch to a SAN setup, he says, "What I'd need to do is learn more about different operating systems and how they function, booting from the SAN," and other things that are SAN-specific. The company also runs Windows and Linux, so he would need to learn about those environments.

Also, don't neglect to train on tools associated with the different environments. It's human nature to learn only what you need for the specific operating system you work with the most, and SANs require broader-based knowledge.

There are also some mindset tweaks to look out for. "A major change for most mainframe people is the idea of managing remotely," says Mike Karp, senior analyst with Enterprise Management Associates in Boulder, CO. "Mainframers are used to being able to go into the computer room to fix something that's gone wrong." Web-based SAN management tools might help mainframers make this leap into the remote world.

Another big change is the whole idea of different platforms grabbing whatever storage they can see. Staffers will have to be trained carefully about the concepts of LUN masking and zoning. "When you plug an NT server into a SAN, if NT can see the volume, it will try to grab it. That ruins it for Solaris," says Dianne McAdam, a storage analyst at Illuminata Inc. in Nashua, NH. "So people have to understand they're in a shared world--they have to be careful to section off what a server can see with what LUN. We didn't have these problems in the mainframe world."
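To make the shared-world idea concrete, here's a minimal sketch of LUN masking in Python. The masking table and WWPN values are purely illustrative--no vendor's actual API looks like this--but the principle is the same: a host can only grab what the mask presents to it.

```python
# Conceptual sketch of LUN masking: each host, identified by its HBA's
# WWPN, is granted an explicit set of LUNs; everything else stays
# invisible. The table and WWPNs below are illustrative only.

masking_table = {
    "10:00:00:00:c9:aa:bb:01": {0, 1, 2},  # the Solaris host's LUNs
    "10:00:00:00:c9:aa:bb:02": {3, 4},     # the NT host's LUNs
}

def visible_luns(wwpn):
    """Return only the LUNs this initiator is allowed to see."""
    return masking_table.get(wwpn, set())

def can_access(wwpn, lun):
    return lun in visible_luns(wwpn)

# NT can't "grab" the Solaris volume, because the mask never presents it:
assert not can_access("10:00:00:00:c9:aa:bb:02", 0)
assert can_access("10:00:00:00:c9:aa:bb:01", 0)
```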

Chuck Hollis, vice president of markets and products at EMC, recommends staff rotations as a way to teach mainframers what they need to know about networked storage, and to let networked storage people learn some of the mainframe disciplines that still apply. "I've seen a lot of shops do this successfully," he says. "Backup is still backup--the tools might change, but the process is the same."

If you're working in networked storage, you could probably learn a thing or two from your peers in Mainframe Land. Unless you happen to be one of those former mainframers--then you can definitely teach those neophytes a thing or two.

Specific technologies used in the mainframe and distributed worlds differ, of course, but it turns out that many policies and best practices apply to both realms. Disciplines such as change management, volume utilization, costing, planning, service level agreements and others translate well from Big Iron to networked storage.

"The reality is that much of what's being done in storage area networks (SANs) came from the mainframe world," says Mike Kahn, chairman of the Clipper Group, an analyst firm in Wellesley, MA. For instance, IBM's mainframe-based ESCON was one of the earliest SANs. "The other things that mainframes brought into the equation were structure and discipline--what people ran away from in open systems. But when you're talking about storage, you need well-laid-out methodologies for adding users, handling backup/recovery, security and other things," he says.

These kinds of policies are especially critical these days, given the trend toward server consolidation on the networked or open side of the house, and economic pressures that are pushing companies to make the most of what they already have. Throwing more storage arrays at a problem just doesn't work the way it used to, from either a cost-justification or implementation perspective.

Roger Cable, a senior systems programmer at Allegheny Energy Inc. in Hagerstown, MD, says that his company has transferred both general knowledge and specific practices from mainframes to networked storage. What helps is that Allegheny is using EMC's Symmetrix both on the mainframe and the open systems side. The two worlds aren't connected yet, but Cable says that the goal is to link the mainframe arrays into the SAN at some point.

For now, one of the "main strategies we've taken off our mainframe" has to do with backup and business continuance, Cable says. First, the basic disaster recovery strategy used for the mainframe was tweaked and then used for the open systems side of the house. Second, some of the specific technology translates, too. Allegheny uses EMC's TimeFinder software to split off business continuance volumes (BCVs) on both sides of the storage house. BCVs are local mirror images of production volumes, and they can be used for backup or for testing changes to applications or databases, among other things.
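Here's a rough sketch, in Python, of the establish/split cycle behind a BCV. The Volume class and function names are hypothetical stand-ins rather than TimeFinder's actual interface; what matters is the sequence--synchronize a local mirror, split it to freeze a point-in-time image, then run backups or tests against the frozen copy while production carries on.

```python
# Hypothetical sketch of a BCV cycle. The Volume class and these
# functions are illustrative stand-ins, not TimeFinder's interface;
# the establish/split sequence is the point.

class Volume:
    def __init__(self, name, blocks=None):
        self.name = name
        self.blocks = dict(blocks or {})

def establish(production, bcv):
    """Synchronize the BCV so it becomes an exact local mirror."""
    bcv.blocks = dict(production.blocks)

def split(bcv):
    """Detach the BCV, freezing a point-in-time image while
    production I/O continues against the primary volume."""
    return Volume(bcv.name + "@split", bcv.blocks)

prod = Volume("prod_db", {0: "jan", 1: "feb"})
bcv = Volume("bcv_db")

establish(prod, bcv)        # mirror synchronizes
frozen = split(bcv)         # point-in-time image is frozen

prod.blocks[1] = "mar"      # production keeps changing...
assert frozen.blocks[1] == "feb"   # ...but the split image is stable,
                                   # ready for backup or test work
```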

Also, Allegheny's change management procedures are "pretty much the same" on both sets of platforms, Cable says.

Moving to a SAN
The Weather Channel, in Atlanta, GA, has moved from managing separate islands of direct-attached storage--on NT, Linux, Unix and Windows 2000 platforms--into a centralized SAN environment. Although the company has no mainframe, it does have an aging VAX minicomputer that the Weather Channel's staffers are trying to "get out the door," according to Vicki Hamilton, vice president of shared services in the company's IT operations group. The company is rewriting the VAX's applications to help that happen as soon as possible.

Begun in January 2001, the Hitachi-based SAN implementation is moving right along. "'Done' is a relative term," Hamilton says. "We're constantly making additions and changes."

She says the centralization has helped tremendously. "When you have distributed systems and servers, you have a lot of duplicated data." She says the company has "freed up a lot of storage," and that application response time has also improved. Although her group is just starting to gather statistics about the improvements, she says she's hearing positive feedback from her business users.

One constant from the old environment to the new is the set of policies that govern backup and restore, even though the tool has changed. "The policies didn't change--we still do rotational backups the same, then cumulative and entire systems backups," Hamilton says. "We still rotate our tapes every four months."

On the tool side, the Weather Channel now uses Veritas' NetBackup, and the process has become "a lot easier," she says.
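As a sketch of what such a rotation policy looks like when written down, here's a small Python example. The weekly-full/daily-cumulative cadence is an assumption--the article doesn't spell out exact schedules--but the four-month tape rotation comes straight from Hamilton's description.

```python
# Sketch of a rotation policy like the one Hamilton describes. The
# weekly-full / daily-cumulative cadence is an assumption; the
# four-month tape rotation is from her description.

from datetime import date, timedelta

TAPE_ROTATION = timedelta(days=120)  # "rotate our tapes every four months"

def backup_type(day):
    # Entire-system backup on Sundays; cumulative backups (everything
    # changed since the last full) on the other days.
    return "full" if day.weekday() == 6 else "cumulative"

def tape_reusable(first_written, today):
    """A tape returns to the scratch pool once its data has aged out."""
    return today - first_written >= TAPE_ROTATION

print(backup_type(date(2002, 12, 2)))                      # cumulative
print(tape_reusable(date(2002, 7, 1), date(2002, 12, 2)))  # True
```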

Testing processes
Dianne McAdam, a storage analyst at Illuminata Inc. in Nashua, NH, earned her mainframe chops as a systems programmer in a bank's corporate data center years ago. On a mainframe, storage utilization typically runs around 80% or 85%, she says, vs. 30% to 50% in the Windows environment. "We can bring some of the mainframe discipline to open systems and Windows," she says. "They're getting pressure to use what they've got more efficiently."
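A quick back-of-the-envelope calculation shows what those utilization figures mean for the same purchased capacity (the 10TB figure here is just illustrative):

```python
# Back-of-the-envelope view of McAdam's figures: the same purchased
# capacity holds very different amounts of real data at mainframe vs.
# typical Windows utilization rates.

purchased_tb = 10.0  # illustrative capacity

for label, utilization in [("mainframe", 0.85), ("Windows", 0.40)]:
    used = purchased_tb * utilization
    idle = purchased_tb - used
    print(f"{label:10s} {used:4.1f} TB used, {idle:4.1f} TB idle")

# mainframe   8.5 TB used,  1.5 TB idle
# Windows     4.0 TB used,  6.0 TB idle
```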

Also, she says that not everyone on the networked side of storage understands, as most mainframers do, the importance of actually testing whatever backup procedures they have. "I see a lot of people who don't check to see that the backup has completed successfully--20% to 30% of them don't." People need to check how long it takes to restore a key database, for instance, to make sure their end users will have computing service resumed as soon as possible in case of an outage.
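Both checks are easy to automate in spirit. In this Python sketch, the job status and restore function are hypothetical stand-ins for whatever your backup tool actually reports; the point is that verification and a timed trial restore belong in the routine, not just the backup itself.

```python
# Sketch of the two checks McAdam says too many shops skip. The
# status string and restore function are hypothetical stand-ins for
# whatever your backup tool actually reports.

import time

def verify_backup(job_status):
    # The check 20% to 30% of shops never make at all.
    if job_status != "completed":
        raise RuntimeError("backup did not complete: " + job_status)

def timed_restore(restore_fn):
    """Run a trial restore and report how long users would wait."""
    start = time.monotonic()
    restore_fn()
    return time.monotonic() - start

def trial_restore():   # stand-in for restoring a key database
    time.sleep(0.1)

verify_backup("completed")
print("trial restore took %.1f s" % timed_restore(trial_restore))
```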

Brad Stamas, chairman of the Storage Networking Industry Association and StorageTek's director of storage domain management, says many of the issues remain the same, even if the platforms change. "Procedurally, you still have to decide what to back up, and how often and how you're going to do it. These are not software-based problems--they're more about analysis and assessment."

Mike Karp, senior analyst with Enterprise Management Associates in Boulder, CO, points to something else that translates from Big Iron to networked storage. "Inexpensive automation of HSM [Hierarchical Storage Management] is key for distributed systems," he says. The central theory behind hierarchical storage management is "you take the less deserving data, the stuff that hasn't been touched since 1998, and you move it onto slower disks or archived tape or optical."
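A minimal sketch of that policy in Python might look like the following. The mount points and cutoff date are illustrative, and a production HSM would also leave a stub file behind so that a later access triggers a recall from the slow tier; this only shows the policy's core: find cold data, demote it.

```python
# Minimal sketch of the HSM policy Karp describes: find files that
# haven't been touched since the cutoff and demote them to a cheaper,
# slower tier. Mount points and cutoff are illustrative; a real HSM
# would also leave a stub behind so access triggers a recall.

import os
import shutil
from datetime import datetime

FAST_TIER = "/storage/fast"      # illustrative mount points
SLOW_TIER = "/storage/archive"
CUTOFF = datetime(1998, 12, 31).timestamp()  # "hasn't been touched since 1998"

def migrate_cold_files(src_root=FAST_TIER, dst_root=SLOW_TIER):
    for dirpath, _dirs, files in os.walk(src_root):
        for name in files:
            path = os.path.join(dirpath, name)
            if os.stat(path).st_atime < CUTOFF:  # last-access time
                rel = os.path.relpath(path, src_root)
                dst = os.path.join(dst_root, rel)
                os.makedirs(os.path.dirname(dst), exist_ok=True)
                shutil.move(path, dst)           # demote to the slow tier

# migrate_cold_files()  # point at real mount points with care
```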

This was first published in December 2002
