
Follow-up to DASD/mainframe storage expert Q&A

One reader didn't agree with a mainframe answer from expert Marc Farley. Read what he had to say about DASD and the mainframe.

Commentary: One of our SearchStorage readers was surfing around our Ask the Experts library of information and came across a response to a question on mainframe storage from SearchStorage SAN Expert and Author Marc Farley. He didn't completely agree with some of Marc's remarks, and thought a few others deserved more elaboration. Here we provide his rebuttal and follow-up so the rest of our readers can judge its merits for themselves.

The full question posed of the expert was:

What is the difference between mainframe DASD and distributed DASD? Do the leading platforms (IBM Shark, EMC Symmetrix) provide a solution for more traditional, non-networked storage needs? SAN seems to be a term synonymous with DASD these days, but please delineate ... Thanks, Jason

Specifically, here is where I disagree with Marc's answer:

  1. Switched ESCON and FICON are storage area networks. Switched Fibre Channel is a storage area network. Except for the upper-level protocol, all the lower-level protocols and physical plumbing are the same. In fact, I would contend that ESCON and FICON are the only fully realized storage area network environments today.
  2. I am not aware of a mainframe-only disk system these days. All the major mainframe vendors build to the open market first and then add the ESCON and FICON protocols, usually on different adapters simply so the vendor can embed the microcode in silicon for better performance. This is true of the IBM Shark, HDS Freedom and EMC Symmetrix units. (It was also true of the Amdahl Platinum, but I am not as familiar with Amdahl's current offerings, so I can't say for sure.)
  3. Using a Shark as an example, the physical disks are the same regardless of the type of data stored. Disks designated for open systems are formatted in a fixed-block structure and then logically attached to the open systems adapters. Disks designated for OS/390 mainframes are formatted in count-key-data (CKD) format and then logically attached to the S/390 adapters (ESCON or FICON).

    Then the operating system, be it NetWare, Windows, z/OS, z/VM or VSE (to name some examples), imposes its own logical format on top of the perceived physical format. Ironically, in the case of IBM's UNIX System Services (USS), z/OS recreates (emulates) the fixed-block architecture of the open world inside an OS/390-formatted CKD linear VSAM file. Another way of saying item 3, to use the latest mangled buzzwords: the major vendors provide both storage virtualization (masking physical characteristics) and SAN functionality (LUN masking, etc.) internal to their disk subsystems, and then let the customer add their own external flavor of the day as well. The current flexible offerings provide anything you want; you just have to figure out how to use the "features." It's no wonder people get confused.

  4. Marc is correct that CKD and serial SCSI are quite different. CKD refers to the command set used in the OS/390-compatible mainframe world. But SCSI has two meanings, one being the physical connection and the other being the command set. I realize that Marc meant SCSI in the command-set sense, but I'm not sure that was clear to a casual reader. Bottom line: when talking about mainframe-class disk subsystems, the only distinction I can see for categorizing the disk is 'What processor(s) did you connect to the disk?'

    When talking about SANs, I would say the commonly understood definition covers both (1) the physical plumbing that creates the storage network (fibre, switches, adapters, GBICs, ports, etc.); and (2) the specific operating-system-level I/O drivers required to read and write data to the disk subsystem connected via that plumbing.
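The distinction in item 3 — the same physical disks serving either world, with only the logical format differing — can be sketched in a few lines. This is an illustrative model only; the sizes and names below are hypothetical stand-ins, not actual Shark microcode. Fixed-block architecture addresses uniform blocks by a single logical block address, while CKD locates variable-length records by cylinder, head and record within a track:

```python
# Illustrative sketch: two addressing schemes over the same raw disk.
# All constants here are hypothetical, chosen only to show the contrast.

FB_BLOCK_SIZE = 512  # fixed-block architecture: every block the same size

def fb_offset(lba: int) -> int:
    """Open-systems style: a logical block address maps directly to a byte offset."""
    return lba * FB_BLOCK_SIZE

# CKD geometry: records live on tracks, addressed by (cylinder, head).
TRACKS_PER_CYL = 15
TRACK_SIZE = 56664  # a 3390-style track capacity, used here for illustration

def ckd_track_offset(cyl: int, head: int) -> int:
    """Mainframe style: locate the track; records are then found on it
    by their count and key fields, and may vary in length."""
    return (cyl * TRACKS_PER_CYL + head) * TRACK_SIZE

print(fb_offset(100))          # 51200
print(ckd_track_offset(2, 3))  # 1869912
```

Neither function knows anything about the underlying platters — which is the point: the vendor attaches the same physical disk to an open-systems adapter or an S/390 adapter and simply imposes one format or the other on top.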

About the author: The author (who wishes to remain nameless to avoid the red tape of pre-approvals within his organization!) is an administrator based in the Midwest with 32 years of experience. He has specialized in resource management and storage for the past 24 years.
