- Jon Toigo, Toigo Partners International
Early in my career, someone smart explained that there's a constant game of leapfrog in computer architecture that shapes what you can do with IT at any given moment. The three players in this game -- compute, network and storage -- are the basic elements of the von Neumann machine.
At different times, each player temporarily gains an advantage in capacity or performance. So, for a time, the storage device is faster than the CPU or the bus, only to be overtaken by one of the other two elements, which in turn is overtaken by the third. Thankfully, tricks bright IT folk learn along the way, such as caching and spoofing, can restore balance among these mismatched components. To look smart, you need to know how to use these stratagems to maintain the look and feel of a well-run enterprise computing system -- what we now call a good user experience.
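To make the caching trick concrete, here's a minimal sketch in Python. The slow_device_read function is a hypothetical stand-in for whichever component is currently lagging; the point is that a small pool of fast memory absorbs repeat requests so the slow device's latency stops showing up in the user experience:

```python
# A minimal sketch of the caching trick. slow_device_read() is a
# hypothetical stand-in for the slower component (e.g., a disk).
from collections import OrderedDict

CACHE_SIZE = 4  # kept tiny for the example

cache = OrderedDict()  # block -> data, in least-recently-used order

def slow_device_read(block: int) -> str:
    """Stand-in for a read from the slower component."""
    return f"data-for-block-{block}"

def cached_read(block: int) -> str:
    if block in cache:              # hit: serve at memory speed
        cache.move_to_end(block)    # mark as recently used
        return cache[block]
    data = slow_device_read(block)  # miss: pay the slow-device cost
    cache[block] = data
    if len(cache) > CACHE_SIZE:     # evict the least-recently-used block
        cache.popitem(last=False)
    return data

# Repeated reads of a hot block now hit fast memory, not the device.
print(cached_read(7))  # miss -> slow path
print(cached_read(7))  # hit  -> fast path
```

Spoofing plays the same game in the other direction: acknowledge the request at the fast component's speed and finish the slow work behind the scenes.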
The most challenging days in IT come when a major change is about to occur -- when a significant mismatch in the performance of its component elements distorts an enterprise computing system's balance and the tricks no longer suffice to deliver an acceptable user experience. Thankfully, that has happened very rarely in my 30-plus-year career.
It occurs so infrequently because hardware advances are hardly ever revolutionary, despite the insistence of marketing folks that a Big Bang in enterprise computing systems is poised to happen with the introduction of every new product. In fact, separating marketing from reality is usually a bigger challenge than dealing with real systemic change, something to keep in mind every time vendors and pundits sound alarms that yet another revolution is just around the corner.
Case in point: Virtual computing
Not long ago, vendors touted server virtualization as a kind of Big Bang. Instead of the traditional "one application, one server" model of client-server enterprise computing systems, virtualization was going to usher in an era of server consolidation via a "one server, many applications" paradigm.
They told us the disruption caused by this model would be profound: Multicore CPUs would be used as multi-tenant hosting environments, with hypervisors providing the administrative tools required to derive maximum performance and agile resource sharing from commodity systems. The consolidation of apps as virtual machines (VMs) would aggregate network traffic, necessitating fewer but bigger pipes. And shared storage would be deconstructed, with its physical components locally attached to each multi-tenant server box rather than shared among multiple single-app servers -- eliminating a cadre of storage specialists from IT staffing plans in the process.
Things didn't really turn out that way, of course.
True, multicore CPUs were used for multi-tenant VM hosting, but high-availability requirements and other considerations necessitated more server hardware, not less. Implementing multi-tenancy on top of a sequential I/O processing path tended to slow virtual machines down considerably, so fewer VMs could be hosted per server. The golden dream of massive consolidation was rarely, if ever, achieved.
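The arithmetic behind that shortfall is easy to sketch. The numbers below are illustrative assumptions, not measurements from any real hypervisor:

```python
# Back-of-envelope sketch of why consolidation targets shrank.
# Both figures are illustrative assumptions, not measured values.
path_iops = 100_000    # assumed capacity of one serialized I/O path
per_vm_need = 5_000    # assumed IOPS each VM needs to feel responsive

# With every tenant funneled through one sequential path, capacity
# splits roughly N ways, so the host tops out well short of the brochure.
max_vms = path_iops // per_vm_need
print(f"Sustainable VMs per host: {max_vms}")  # -> 20, not hundreds
```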
As for networks, at gigabit-per-second (or slower) speeds, network interface ports had to be added -- seven to 18 per server -- to accommodate so many apps running on the same chassis. Consolidating that much network traffic onto fewer servers demanded a tenfold improvement in traffic-handling capacity, forcing companies to adopt 10 Gigabit Ethernet just to get minimally satisfactory performance from their networks.
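The network side is the same kind of back-of-envelope math, again with assumed rather than measured figures:

```python
# Illustrative sketch: consolidating the traffic of many 1 GbE
# servers onto one host. Numbers are assumptions, not benchmarks.
apps_consolidated = 12   # single-app servers folded into one box
per_app_gbps = 0.8       # assumed average demand per app, Gbit/s

aggregate_gbps = apps_consolidated * per_app_gbps
print(f"Aggregate demand: {aggregate_gbps:.1f} Gbit/s")  # ~9.6 Gbit/s

# One 1 GbE port can't carry that; you either bundle roughly ten
# ports per server or move to 10 GbE -- the choice IT shops faced.
```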
Hypervisor vendors demonized storage, blaming it for slow application performance even though the slowdowns were rarely storage's fault. Simplistic sloganeering about evil SANs and hype around lightning-fast flash replaced intelligent storage engineering in vendor promotional materials, resulting in many silly and expensive storage renovations.
At the end of the day, server virtualization proved to be Jurassic infrastructure before it was even fully deployed. Unfortunately, firms are now poised to do it all again under the flag of another much-hyped Big Bang: "in-memory" everything.
What will the in-memory era require?
From where I'm sitting, the in-memory Big Bang will demand lots of dynamic RAM and perhaps next-generation nonvolatile electronic memories taking advantage of the emerging Non-Volatile Memory Host Controller Interface Specification, better known as NVM Express (NVMe).
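What makes in-memory such a different animal is the latency gap between tiers. The figures below are rough, commonly cited orders of magnitude, not vendor benchmarks:

```python
# Rough, order-of-magnitude access latencies (commonly cited ranges,
# not vendor benchmarks) showing why "in-memory" changes the math.
latencies_ns = {
    "DRAM": 100,             # ~100 nanoseconds
    "NVMe flash": 100_000,   # ~100 microseconds
    "SAS disk": 5_000_000,   # ~5 milliseconds
}
for tier, ns in latencies_ns.items():
    print(f"{tier:>10}: {ns:>9,} ns  ({ns / 100:,.0f}x DRAM)")
```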
IBM is already placing 40-odd terabytes of system memory into its latest z Systems mainframe in anticipation of SAP and Oracle going all-in on in-memory online transaction processing. Most of us, however, will be using Intel Skylake processors, specialty network interface cards from Mellanox or -- perhaps -- software innovations like Parallel Server from DataCore to glue together a bunch of x86 servers like Lego blocks into a poor man's hyper-converged shared memory fabric, probably running over 100 GbE.
That's a lot to absorb all at once. It's also exhausting to contemplate coming so close after the last Big Bang.
Other impending Big Bangs
The truth is that you probably won't have time to pause after 10 GbE switches, NICs and motherboards have been deployed before yet another Big Bang appears on the horizon.
For example, 1.5 million Storage Performance Council SPC-1 IOPS may soon become routine through parallel I/O processing. DataCore is already demonstrating 5 million IOPS on commodity server and storage gear -- and at a fraction of the cost of big dedicated arrays. StarWind Software and others, meanwhile, are refining the software-defined storage stack to optimize shared memory performance over 100 GbE.
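The idea behind parallel I/O is easy to demonstrate. The sketch below is a generic Python illustration -- not how DataCore or StarWind actually build their stacks -- with simulated_io standing in for a real device request:

```python
# Generic sketch of the parallel I/O idea: fan requests out across
# worker threads instead of one serialized queue. simulated_io() is
# a hypothetical stand-in for a real device request.
import time
from concurrent.futures import ThreadPoolExecutor

def simulated_io(request_id: int) -> int:
    time.sleep(0.01)  # pretend each request takes 10 ms
    return request_id

requests = range(64)

start = time.perf_counter()
for r in requests:    # serial path: one request at a time
    simulated_io(r)
serial = time.perf_counter() - start

start = time.perf_counter()
with ThreadPoolExecutor(max_workers=16) as pool:  # parallel path
    list(pool.map(simulated_io, requests))
parallel = time.perf_counter() - start

print(f"serial: {serial:.2f}s  parallel: {parallel:.2f}s")
```

With 16 workers and 10 ms per simulated request, the parallel path finishes roughly an order of magnitude sooner than the serial one; scale that leverage up and headline IOPS numbers stop looking mysterious.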
The starting pistol has gone off: Let the race to be the next Big Bang in enterprise computing systems begin!
About the author:
Jon William Toigo is a 30-year IT veteran, CEO and managing principal of Toigo Partners International, and chairman of the Data Management Institute.