In Sir Arthur Conan Doyle's The Lost World, explorers find themselves on a hidden plateau deep in South America that looks much as it did during the Jurassic Period: dinosaurs, giant plants, steaming climate. The only thing missing is a mainframe computer.
All kidding aside, and despite all the snide dinosaur comments, the mainframe continues to be a mainstay of many enterprise computing infrastructures. Originally by design--but increasingly by default--it also continues to be a separate world, as unapproachable as Doyle's. When we set out to do this month's cover story, we wanted to respond to the frequent requests we get from you to "do something on mainframes." Integrating the mainframe with open-systems storage was one area where you expressed strong interest and wanted much more information.
In many shops, separate storage area networks (SANs) are being built for separate computing platforms. It's easy to see why you might be squeamish about having your typical cranky Windows applications banging on the same arrays that house data from your most bulletproof systems. But we've heard from an awful lot of people who won't even mix data from Unix and mainframe servers that sit right next to each other in the data center. Why? "Just in case" seems to be the most common response.
But our cover story reveals that the situation is even worse than that. Even if you are slowly getting used to the idea that mainframes and some Unix and Windows servers are grown-up enough to share the same SAN, you'd have no way to manage them in common. And apparently, that will still be true in the near future, when reasonable fault isolation mechanisms for SANs are likely to become more available. When storage vendors talk about storage management standards and interoperability, the mainframe is generally not included.
It's all the more a shame because the mainframe is not a lost world when it comes to the concepts needed for good storage and data center management. Indeed, it's now widely recognized as a model--albeit a conceptual one--for the future of storage.
Many of the notions of process management, data management and just plain best practices are worth extending beyond the mainframe. To be sure, open systems--with their more interactive workloads and looser coupling between hardware and software--have requirements of their own that the mainframe doesn't address. On the server side, we've had "tiered computing" for more than a decade.
Now, on the eve of moving to a "tiered storage" model, we find ourselves suddenly unable to bring top-tier data--regardless of platform--under common management. At best, you'll wind up with two or more sets of tools to accommodate both your mainframes and open systems. For some people, that's no big deal, because there will still be two groups of people doing the managing. For others, having one group use two sets of tools is acceptable. Still, it's a lost opportunity to cut the cost of data center management and make progress toward the unified enterprise infrastructure that was violently disrupted (for the better, obviously) by the microprocessor.