- Jon Toigo, Toigo Partners International
At Jazz at Lincoln Center's Frederick P. Rose Hall on a brisk winter day in New York earlier this year, I had the pleasure of attending the launch event for the latest IBM z Systems mainframe, the z13.
I know, I know. For many of you, the first reaction is to stifle a yawn, turn the page, and read the next column or article in this e-zine. Mainframes aren't on your radar; they're just too complex, too expensive to own and operate, or just too old school.
On the other hand, from what I could see, the event was packed, brimming with IT professionals from financial services, manufacturing, retail, healthcare and even some of those cutting-edge Internet outfits. Some attendees, probably most, were current mainframe users, but there were plenty of talking points aimed at both newbies and former mainframers. In fact, IBM did its best to frame its value case for the z13 against the backdrop of "mobility," not big iron.
An agile mainframe
Sure, there were a couple of gear-head presentations covering hardware speeds and feeds: the IBM z13 processor unit packs 3.99 billion transistors fabricated on 22 nm CMOS silicon, delivers six, seven or eight cores running at 5 GHz, and is purchased in single-chip modules. But the lion's share of the agenda consisted of discussions by IBMers and customers about the "agility" of the platform and the "simplicity" with which resources could be assigned and reassigned to workloads -- making it a great "cloud in a box" offering. Even the most jaded attendees found their interest piqued by a discussion of how the z13 could blend transactional processing with in-line business analytics, paving the way for the big data crowd to begin delivering on its promises, including intelligent interpolation of multiple data variables to enable near real-time interactions with consumers.
With the IBM z13 in the mix, the ubiquitous mobile device (smartphone, phablet and so on) could be leveraged to support in-store sales: more effectively targeting customers with ads and coupons for face creams, toothpastes and boneless pork chops as soon as they walk into the store. IBM was bragging about new software for making the connections among sales, inventory management, customer loyalty cards and buying histories, and other components of smart marketing, and about its newly minted relationship with Apple, whose popular gear and apps provided the client side of the equation. This is not a topic one might expect to hear at a mainframe event.
IBM z13 uses KVM to spawn VMs
With the z13, IBM also embraced KVM, the increasingly popular open source hypervisor, and bragged that you could stand up 8,000 virtual machines inside its mainframe at a fraction of the cost per machine of an x86 Tinkertoy implementation using VMware or Microsoft hypervisors. For those already running clustered x86 boxes and Hadoop, Big Blue provided a means to connect all that infrastructure to the mainframe, easy peasy. That way, you could keep all your massively parallel clusters and MapR while consolidating your transactional work (and many production servers) into the blue box.
I had to admit that I found myself wanting one of these mainframes again. With the cabinet doors opened, it was evident that everything about the kit was modular and familiar. Processor unit (PU) chips feature 64 MB of L3 cache shared by all cores, while separate storage controller chips add a generous 960 MB of L4 cache per drawer and handle communications between drawers of PUs. Everything else looked pretty familiar from the x86 server world. Specialty cards (motherboards, actually) provided connections to PCI buses, FICON and whatever other I/O interconnects one might need. Despite the somewhat exotic internal plumbing, with zLinux as an operating system, it dawned on me that most contemporary server admins would not find the environment or the platform that difficult to grasp.
What was missing from the presentations I saw was any discussion of storage outside of the memory components. IBM has a lot of stories to tell about storage architecture, from Tier-1 arrays with onboard hardware controllers and bloated software functionality, to high-performance JBODs fronted by a hardware-based virtualization/compression uber-controller -- the SAN Volume Controller -- to other fabric- and network-based solutions leveraging FICON and Ethernet. But there was zero discussion of these elements in the launch-day presentations. I suspect IBM is saving its storage story for the Edge conference in Las Vegas this spring.
Unfortunately, storage is where the proverbial rubber meets the road in all IT architectures these days. Virtualizing workloads with hypervisor computing, spreading workloads over massively clustered compute platforms and divvying up processing activities per a MapReduce scheme are interesting forks in the once-monolithic computing architecture that had organized the IT universe into one-app/one-CPU technology stacks for so many years. But moving to any of these architectures creates major disruptions in how we do storage.
With virtualization, we get the I/O blender problem, which quickly reduces flash and disk storage to jumbles of random I/O rubble. The preferred solution of the VMwares and Microsofts of the world is to deploy proprietary storage stacks that work only with workloads and data from that single hypervisor stack.
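The blender effect is easy to demonstrate: each guest issues perfectly sequential writes, but the hypervisor multiplexes every guest onto one queue, so the device underneath sees a near-random address stream. Here's a minimal sketch of that dynamic (the VM counts and block ranges are purely illustrative, not anything from the z13 launch):

```python
# Illustrative sketch of the hypervisor "I/O blender" effect:
# several VMs each issue strictly sequential block writes, but the
# hypervisor interleaves them onto one queue, so the disk sees a
# mostly non-sequential address stream. Numbers are made up.

import random

def sequential_stream(start_lba, length):
    """One VM's workload: strictly ascending logical block addresses."""
    return list(range(start_lba, start_lba + length))

def blend(streams, seed=42):
    """Hypervisor view: interleave requests from all VMs in arrival order."""
    rng = random.Random(seed)
    queues = [list(s) for s in streams]
    blended = []
    while any(queues):
        q = rng.choice([q for q in queues if q])  # next arrival, any VM
        blended.append(q.pop(0))
    return blended

def sequential_fraction(lbas):
    """Fraction of requests whose LBA immediately follows the previous one."""
    hits = sum(1 for a, b in zip(lbas, lbas[1:]) if b == a + 1)
    return hits / (len(lbas) - 1)

# Four VMs, each writing 1,000 consecutive blocks in its own region.
vms = [sequential_stream(i * 1_000_000, 1000) for i in range(4)]

for vm in vms:
    print(f"per-VM stream: {sequential_fraction(vm):.0%} sequential")
print(f"blended stream: {sequential_fraction(blend(vms)):.0%} sequential")
```

Each guest's stream is 100% sequential on its own; once blended, only about a quarter of the requests still land on the next consecutive block, which is why spindles optimized for streaming throughput collapse under virtualized workloads.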
With Hadoop, we introduce a huge data mirroring and synchronization challenge, even as we assume everyone has a limitless budget to simply toss any failed node and replace it with another. One wonders whether an architecture developed for supercomputing behind particle colliders is really optimized for general business workloads or contemporary budgetary realities.
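That mirroring challenge has a concrete price tag: stock HDFS copes with node failure by storing every block three times (the default dfs.replication setting), so the raw capacity bill is simple multiplication. A back-of-the-envelope sketch, using illustrative cluster sizes rather than anything presented at the launch:

```python
# Back-of-the-envelope cost of HDFS-style triple replication.
# dfs.replication defaults to 3 in stock Hadoop; the cluster sizes
# below are illustrative, not from the z13 event.

def raw_capacity_needed(usable_tb, replication=3):
    """Raw disk you must buy to hold `usable_tb` of unique data."""
    return usable_tb * replication

def usable_capacity(raw_tb, replication=3):
    """Unique data a cluster with `raw_tb` of raw disk can actually hold."""
    return raw_tb / replication

print(raw_capacity_needed(100))  # 100 TB of data -> 300 TB of disk to buy
print(usable_capacity(960))      # 960 TB of raw disk -> 320 TB usable
```

Two-thirds of every disk purchase goes to copies, before you count the network traffic to keep those copies synchronized -- which is the budgetary reality the "just replace the node" architecture tends to gloss over.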
Some observers might argue that IBM did a great job introducing the z13 by focusing on the efficiencies of processor caching. There may even be some truth to the idea that data in a mobile/big data world hasn't got time to be committed to spinning rust or flashy RAM -- it's constantly in motion, so conventional storage ideas do not apply. I doubt that legal eagles or auditors would agree, however. In truth, what the IBM z13 does is return our thinking to data processing, thanks to its attention to combining transaction processing with business analytics. In the end, it's this focus -- data processing, not information technology -- where the critical innovations are required. The underlying hardware platform cannot be ignored, however: it must be rock-solid, easy to manage and well-balanced in terms of performance, capacity and cost.
I am eagerly awaiting IBM Edge to get the rest of the z13 story.
Jon William Toigo is a 30-year IT veteran, CEO and managing principal of Toigo Partners International, and chairman of the Data Management Institute.