- Jon Toigo, Toigo Partners International
From time to time, in presentations by tech vendors, one hears reference to a "tick-tock." Tick-tock is jargon describing a perceived pattern in the events that occur over a designated time frame. In recognizing such a pattern, the tick-tock narrative provides an orderly perspective on the seemingly great disorder of technological advancement, while at the same time providing a framework for predicting the future. Both make us feel like the future is less scary.
As we'll discuss in this article, multicore processors could very well be the next tick-tock for storage.
From computer chip maker Intel's perspective, the "tick" part of this sequence involves the creation of a new microprocessor technology or the improvement of an existing one. The "tock" part is the commercialization of that technology innovation -- leading to its adoption/absorption across the consumer base.
Tick -- Intel doubles the number of transistors in a microcircuit and finds new and better ways to improve the clock speed of the processor. Tock -- denser and faster chips are mass produced and passed into the market where they become embedded on servers, PCs, tablets, smartphones and so on. The result is an overall improvement of the performance and capacity of computing infrastructure -- not to mention the aggregation of a huge profit for the innovating vendor. Tick-tock.
Moore's law is cited to describe the pace of the tick in terms of increasing number of transistors in a processor. In 1965, Intel co-founder Gordon E. Moore predicted that the number of transistors in an integrated circuit would double every two years -- a hypothesis that held true until about 2012.
A slightly different version of the same thesis, introduced in 1975 by Intel executive David House, held that the doubling of transistors would coincide with the doubling of CPU speed about every 18 months -- resulting in chips that were not only more dense, but also twice as fast with each generation.
House's hypothesis proved a bit less durable than Moore's, however. In 2005, processor speed improvements began to stall, owing to heat issues and other factors. The result was a movement by leading chip manufacturing companies away from single-core processor designs toward "multicore" processors. So, the tick of transistor doubling continued even though the doubling of clock speeds remained fairly static.
Multicore: the new tick-tock
Multicore processors have been the basis of the new tick-tock for some time now. Year after year, we are presented with CPUs offering double the number of processor cores on the same die, even though chip speeds have increased only modestly, if at all. Each physical core on the chip is an individual CPU, and each core carries forward the practice of multithreading, or "time-slicing" (that is, splitting the use of a single resource among multiple workloads quickly enough that they appear to run concurrently), that was the hallmark of single-core, or uniprocessor, systems.
Threading makes a single core appear as two or more logical cores whose processes are scheduled to execute so quickly that it simulates concurrent, or parallel, processing. Multithreading, plus steadily increasing clock speeds, made for a powerful tick-tock in favor of single-core designs that lasted until clock speed improvement limits were hit.
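The time-slicing idea above can be sketched in a few lines: several threads are handed to the scheduler, which interleaves their execution so that all of them appear to make progress at once, even on a single core. This is a minimal illustration, not anything specific to a particular chip; the worker function and its names are mine.

```python
import threading

results = []
lock = threading.Lock()

def worker(name, n):
    # Each thread is time-sliced by the scheduler: execution is
    # interleaved so the workloads appear to run concurrently.
    total = sum(range(n))
    with lock:
        results.append((name, total))

threads = [threading.Thread(target=worker, args=(f"t{i}", 1000))
           for i in range(4)]
for t in threads:
    t.start()
for t in threads:
    t.join()

# All four workloads complete; the scheduler decided the interleaving.
```

On a uniprocessor, only one of these threads ever executes at a given instant; the illusion of concurrency is entirely the scheduler's doing, which is exactly the hallmark the column describes.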
Since then, we have seen multiple multithreading single-core CPUs embedded on a single physical chip, so that the number of logical cores (CPU threads) doubles the stated number of physical cores. Buy a quad-core processor and you have eight logical cores; an eight-core processor gives you 16.
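That arithmetic can be captured in a trivial helper. The two-threads-per-core figure matches the simultaneous multithreading (SMT) designs the column describes; the function and parameter names here are assumptions of mine, not any vendor's API.

```python
def logical_cores(physical_cores: int, threads_per_core: int = 2) -> int:
    """With simultaneous multithreading, each physical core presents
    multiple logical cores (hardware threads) to the operating system."""
    return physical_cores * threads_per_core

# A quad-core SMT chip exposes 8 logical cores; an eight-core chip, 16.
quad = logical_cores(4)
octo = logical_cores(8)
```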
Chip engineering nuanced, yet important
Why is this nuance about chip engineering important in a storage column? Simple: If we could allocate logical cores to perform specific tasks or roles (like I/O processing, for example) with the same alacrity as we can assign parts of a disk array to storing and retrieving the data for a certain application, we could do some serious optimization of the entire computing system. If we could assign a certain number of logical cores to processing the I/O of a given workload, we would have the potential to accelerate application performance by many orders of magnitude. That, in turn, could enable a sea change in application or virtual machine density on a given server.
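A rough version of this kind of allocation is already possible at the process level on Linux via `os.sched_setaffinity`, which restricts a process to a chosen set of logical cores -- say, the ones you want dedicated to I/O handling. This is a minimal sketch, not DataCore's technique; the helper name `pin_to_cores` is mine, and on platforms without the call it simply falls back to a no-op.

```python
import os

def pin_to_cores(cores):
    """Restrict the current process to the given set of logical cores,
    e.g. to dedicate them to I/O processing. Returns the effective set."""
    if hasattr(os, "sched_setaffinity"):      # available on Linux
        os.sched_setaffinity(0, cores)        # pid 0 = current process
        return os.sched_getaffinity(0)
    return set(cores)                         # no-op fallback elsewhere

# Pin this process to logical core 0 only.
effective = pin_to_cores({0})
```

Doing this per workload, adaptively, across all the logical cores on a server is the harder engineering problem the rest of the column turns to.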
So, why haven't we done it yet? For one thing, most of the smarts for engineering multiprocessor environments have left the playing field.
Back in the olden days (the 1970s through the early 1990s), every innovative tech company was hard at work developing multiprocessor systems. Big names like IBM and Unisys, as well as smaller names like Encore, had teams working to figure out everything from operating system supports to motherboard and interconnect designs that would enable multiple microprocessors to be implemented on the same system and share the workload presented to them in an intelligent and efficient manner. Their research was mothballed, however, when single-core processors were introduced into the market and the PC and server (a bigger PC) became the dominant system strategy.
Microsoft benefitted from simple CPUs and each evolution of the Windows OS capitalized on steady improvements in clock speed and multithreading technology. In fact, the tick-tock of chip improvement and Windows adoption became, for many years, a metaphor for computing technology advancement itself.
Another way to think about it, however, is that the goals of computer science were temporarily abandoned. The industry didn't seem terribly interested in improving the efficiency of systems, only in capitalizing on brute-force improvements in chip speeds and time-slicing to make stuff run bigger and faster, in order to keep consumers buying each generation of chip and OS. VMware is merely the latest to use time-slicing and, only minimally, multicore chip architecture (it is beginning to leverage certain chip capabilities for multi-tenancy, dedicating physical cores to virtual machines).
Unleash the power of multicore, multithreading chips
To really unlock the potential power of multicore processors and multithreading chips, we would need to get back to multiprocessing, parallel computing design.
DataCore Software is the first to revisit these concepts, which co-founder and Chairman of the Board Ziya Aral helped to pioneer in the 1980s. The company has found a way to take a user-designated portion of the logical cores available on a server and to allocate them specifically for storage I/O handling.
The technique they are using is becoming increasingly granular and will eventually enable very specific processor resource allocation to the I/O processing of discrete workloads. Best of all, once set, it is adaptive and self-tunes the number of cores being used to handle I/O workloads.
DataCore's Storage Performance Council SPC-1 benchmark numbers are telling: the company has blown the socks off the hardware guys in terms of storage performance while driving the cost per I/O well below the current low-cost leader -- using any and all off-the-shelf interconnects and storage devices.
We are about to enter a whole new era with a completely new tick-tock for storage -- and perhaps for the full server-network-storage stack -- based on multiprocessor architecture and engineering applied to multicore processor-driven systems.
Everything old is new again.