Good afternoon, Christopher. First, I am not a systems programmer or DASD guru, merely an IT project manager looking for help with performance and compatibility concerns about using the IBM Shark in a multi-environment scenario: hosting NT applications on a Shark that currently hosts only mainframe applications (ERP/CIS/DW). The NT applications would be e-mail (MS Exchange), databases (MS SQL Server and Oracle) and file/print services. In its current configuration, the Shark has demonstrated excellent performance over our previous DASD ("just smokin'"). On the open systems side, we are using Compaq's StorageWorks and have seen great performance there too. Both DASD arrays have been outstanding with regard to availability and reliability.
As we (hopefully) move forward toward a SAN environment (Brocade switch fabric), we are starting to look at storage solutions. We have an investment in both the Shark and StorageWorks arrays and could either expand the Shark (1.68 TB, 8 GB cache, currently connected to an S/390 R46 via ESCON) or buy additional StorageWorks capacity. While slightly less expensive, the Compaq solution would ensure the mainframe and open systems do not impact each other, and that separation is a major concern we are having trouble working out. IBM and Compaq are reselling each other's products and have signed a non-disclosure agreement, so comparisons are not forthcoming. All I get is "mine is faster and better" from both vendors, though some sources would say the Shark is far faster. The DASD reference sites I have called report no known problems or issues, though very few have an NT presence on the Shark; most are OS/390 with UNIX/AIX, or UNIX/AIX with NT. One concern is the cache-hostile nature of NT. I have seen statements indicating that care must be taken when combining OS/390 and NT because of cache flushing, as mainframe data access is primarily sequential R/W while NT access is primarily random R/W. Have you seen or heard of any issues? As we are seeing such great mainframe response on our applications, I would hate to see any degradation of our mainframe applications due to my decision. Any other issues or problems you may have heard about? I really appreciate your comments.
Interesting situation. Let's discuss the options. The Shark uses clustered RS/6000s inside the frame to provide great reliability. Cache is mirrored over an internal path for redundancy. Mainframe performance can be optimized greatly by using PAV (Parallel Access Volumes) internally in the array. The Shark is a great performer on the mainframe and OK on open systems.
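To see why PAV matters, here is a toy illustration (not IBM's implementation): without PAV, I/Os to a volume queue behind each other on its single device address, while PAV aliases let several I/Os to the same volume proceed in parallel.

```python
# Conceptual sketch only: models a volume's device address(es) as
# parallel "paths" and greedily schedules I/Os onto the least-busy one.

def elapsed_time(io_service_times, parallel_paths):
    """Return when the last path finishes after greedily assigning
    each I/O to the currently least-busy path."""
    paths = [0.0] * parallel_paths
    for t in io_service_times:
        idx = paths.index(min(paths))
        paths[idx] += t
    return max(paths)

ios = [2.0] * 8  # eight I/Os of 2 ms each to the same volume
print(elapsed_time(ios, 1))  # single base address: 16 ms, all queued
print(elapsed_time(ios, 4))  # base + three aliases: 4 ms
```

The numbers are idealized, but they show the effect the question is really about: queuing delay on a busy volume, not raw device speed.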
The Compaq solution uses a dual controller with mirrored cache as well. Performance is very fast on open systems due to the internal architecture of using six separate I/O paths to the drives, thereby giving you 240 MB/sec of access per RAID group. The Compaq box also uses a cache algorithm that is dynamic in nature, "learning" about the type of I/O to each RAID set and tuning cache "on the fly" for that type of access. In other words, an application doing high sequential throughput (video server) will be treated differently in cache than an application doing high random I/O (database).
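The actual Compaq algorithm is proprietary, but the "learning" idea can be sketched as follows: watch the recent block addresses on a RAID set, classify the stream as sequential or random, and choose a read-ahead depth accordingly. Everything here (class name, window size, depth values) is a hypothetical illustration, not StorageWorks code.

```python
from collections import deque

class AdaptiveCachePolicy:
    """Toy policy: deep prefetch for sequential streams, none for random I/O."""

    def __init__(self, window=8):
        # remember only the most recent block addresses
        self.recent = deque(maxlen=window)

    def observe(self, block):
        self.recent.append(block)

    def is_sequential(self):
        if len(self.recent) < 2:
            return False
        blocks = list(self.recent)
        return all(b - a == 1 for a, b in zip(blocks, blocks[1:]))

    def prefetch_depth(self):
        # read ahead aggressively only when the stream looks sequential
        return 64 if self.is_sequential() else 0

video = AdaptiveCachePolicy()
for blk in range(100, 110):          # video-server-like sequential reads
    video.observe(blk)
print(video.prefetch_depth())        # 64

database = AdaptiveCachePolicy()
for blk in [7, 912, 44, 30571, 5]:   # database-like random reads
    database.observe(blk)
print(database.prefetch_depth())     # 0
```

This also illustrates the questioner's worry: a cache tuned per RAID set can keep random NT I/O from polluting read-ahead state built for sequential mainframe work.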
OK, so now we know the architectures; let's look at the feasibility of one solution over the other.
First of all, the StorageWorks arrays do not support mainframe ESCON connections, nor do they support FICON; the Shark does. The Shark array also uses technology (PAV) that is not available in StorageWorks. The reason the two companies came together to resell each other's solutions in the first place is that IBM did not have a viable open systems storage solution and StorageWorks did not have mainframe connectivity. Together they have a compelling story.
In my travels, I too have seen the Shark used mostly for mainframe access. Using open systems and mainframe in the same array without an internal switched point-to-point architecture could increase contention for cache and bus bandwidth.
Compaq developed the StorageWorks arrays from the ground up for high performance in open systems or clustered environments (the initial versions were used for VAX/VMS clusters). I find it to be a better performer in those environments. IBM can sell the Shark at a very competitive price, but I think the StorageWorks solution would be better suited to the NT environment.
The good news is that you're doing the right thing. Don't share mainframe DASD with NT and UNIX unless the solution provides point-to-point, non-blocking connectivity to both platforms. I would keep using StorageWorks for NT, and the Shark for the mainframe. In a SAN, you can connect and manage both platforms from a single console with open management platforms like TSM or Veritas, and drill down into the arrays themselves with StorageWorks Command Console.
You may want to try out Compaq's Virtual Replicator (VR) software for the file and print servers. A VR server could provide block-based I/O over Ethernet (storage over IP) for file/print, thus decreasing your per-port costs for those servers or clients where performance is not crucial. (No switch ports or HBAs needed!) VR also provides pooling and snapshots in software to make backup easier without impacting production. The VR drives look like physically attached drives to the clients and can provide access speeds close to those of direct-connect disks if your Ethernet environment is robust enough.
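The snapshot feature works on the general copy-on-write principle; this sketch shows that principle in miniature, not Compaq's actual code. The snapshot preserves a block's old contents only when production overwrites it, so a backup reads a frozen image while writes continue.

```python
class Volume:
    """Toy volume with one copy-on-write snapshot."""

    def __init__(self, nblocks):
        self.blocks = ["" for _ in range(nblocks)]
        self.snap = None  # maps block index -> contents preserved at snapshot time

    def create_snapshot(self):
        # a snapshot starts empty; it costs nothing until blocks change
        self.snap = {}

    def write(self, i, data):
        if self.snap is not None and i not in self.snap:
            self.snap[i] = self.blocks[i]  # preserve old data before overwrite
        self.blocks[i] = data

    def read_snapshot(self, i):
        # preserved copy if the block changed, otherwise the live block
        return self.snap.get(i, self.blocks[i])

vol = Volume(4)
vol.write(0, "payroll-v1")
vol.create_snapshot()
vol.write(0, "payroll-v2")       # production keeps running
print(vol.read_snapshot(0))      # payroll-v1, frozen for backup
print(vol.blocks[0])             # payroll-v2, the live data
```

This is why such snapshots "make backup easier and not impact production": only changed blocks consume extra space, and the backup window no longer requires quiescing the application.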