One common question in virtually any discussion of parallel computing or parallel I/O is whether deployed mission-critical applications can take advantage of the technology. From the standpoint of business applications, the answer is generally no: most applications deployed today rely on sequential processing -- instruction after instruction, executed in series.
But parallel computing-ready applications do exist. Parallel databases and, to a certain extent, parallel file systems have been used in the rarefied world of high-performance computing for quite some time. However, these applications tend to be confined to the narrow domains of government research labs and the computer science departments of technical colleges and universities.
Signs that parallel techs are gaining interest
There are prospects that parallel technologies may find more widespread application in the business world, especially given the increased interest in big data analytics. For now, though, the use of parallel applications is simply not that widespread.
Moreover, the holy grail of parallel computing -- a simple, automatic way to convert programs designed for sequential execution so they can exploit parallel computing architectures instead -- has yet to appear. Most applications, including the most demanding hypervisor-hosted workloads, are sequentially architected software that issues instructions as a series of requests on a shared resource, called a logical core, which at best simulates concurrent or parallel execution.
Those facts should not be viewed as limiting the applicability of parallel I/O to business computing. In fact, parallel I/O does not specifically target the acceleration of business applications, operating systems or virtual machines. Rather, it focuses on handling I/O requests concurrently from multiple hosted application software processes.
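As a minimal sketch -- not any vendor's actual implementation -- the pattern parallel I/O targets can be modeled with a thread pool: I/O requests from several hosted applications overlap in flight instead of queuing one behind another. The function and variable names here are hypothetical stand-ins.

```python
# Illustrative sketch: servicing I/O requests from several hosted
# "applications" concurrently instead of strictly one after another.
from concurrent.futures import ThreadPoolExecutor
import time

def io_request(app_id, duration=0.1):
    """Simulate one blocking I/O request from a hosted application."""
    time.sleep(duration)  # stand-in for a disk or network wait
    return f"app-{app_id}: done"

requests = range(8)

# Serial handling: each request waits for the previous one to finish.
start = time.perf_counter()
serial = [io_request(i) for i in requests]
serial_time = time.perf_counter() - start

# Concurrent handling: requests from different applications overlap,
# as a parallel I/O layer would allow.
start = time.perf_counter()
with ThreadPoolExecutor(max_workers=8) as pool:
    concurrent = list(pool.map(io_request, requests))
concurrent_time = time.perf_counter() - start

print(f"serial: {serial_time:.2f}s, concurrent: {concurrent_time:.2f}s")
```

Because the simulated requests spend their time waiting rather than computing, the concurrent version finishes in roughly the time of a single request.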
From multiprocessor to multicore
With the multiprocessor environments of the 1980s and 1990s, each processor handled an application. In some designs, a processor or group of processors was dedicated to doing nothing but processing, in parallel, all the I/O generated by the discrete application-processing CPUs. Lately, that design has been transferred to multicore processors.
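The dedicated-I/O-processor design described above can be sketched in miniature: application "CPUs" hand their I/O off to one worker set aside to do nothing but service it. This is a hypothetical illustration of the pattern, not a reconstruction of any 1980s system; all names are invented.

```python
# Hypothetical sketch of the dedicated-I/O-processor design: application
# "CPUs" (threads) push I/O work onto a queue, and one dedicated worker
# drains it, mirroring a processor set aside to do nothing but I/O.
import queue
import threading

io_queue = queue.Queue()
results = []

def io_processor():
    """Stand-in for the processor dedicated to handling all I/O
    generated by the application-processing CPUs."""
    while True:
        item = io_queue.get()
        if item is None:        # sentinel: shut down
            break
        results.append(f"wrote block {item}")
        io_queue.task_done()

def application_cpu(cpu_id):
    """Stand-in for an application processor that generates I/O."""
    for block in range(3):
        io_queue.put(f"{cpu_id}-{block}")

io_thread = threading.Thread(target=io_processor)
io_thread.start()

app_threads = [threading.Thread(target=application_cpu, args=(i,))
               for i in range(2)]
for t in app_threads:
    t.start()
for t in app_threads:
    t.join()

io_queue.join()      # wait until every queued I/O has been handled
io_queue.put(None)   # stop the dedicated I/O worker
io_thread.join()

print(len(results))  # 2 application CPUs x 3 blocks each
```

The queue is the analogue of the channel between application processors and the I/O processor: compute threads never touch storage directly.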
In a multicore processor, there are two or more actual processing units, called cores. Each core functions as a separate CPU, reading and executing program instructions, usually with its own caches that fetch instructions independently of the other physical cores on the chip. On top of the physical multicore architecture, a contemporary chip typically runs a set of processes -- software abstractions, known as Hyper-Threading on Intel chips -- that creates logical cores. Logical cores are also processing units, each capable of executing its own thread in parallel with other logical cores.
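As a rough sketch of how logical cores appear to software, Python's standard library reports the logical core count the operating system exposes; on a Hyper-Threading chip this is typically twice the physical core count, though the exact ratio depends on the CPU.

```python
# Minimal sketch: the OS exposes logical cores (physical cores plus any
# hardware threads, such as Intel Hyper-Threading) as schedulable CPUs.
import os

logical = os.cpu_count()  # logical cores visible to the scheduler
print(f"logical cores visible to this system: {logical}")

# Each logical core can run its own thread; whether two threads truly
# execute in parallel depends on the underlying physical core layout.
```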
Related Q&A from Jon Toigo