One response to this question might be one of a real estate agent's old favorites: “Location, location, location!”...
Cache memory is usually part of the central processing unit, or of a complex that includes the CPU and an adjacent chipset, and it holds the data and instructions an executing program accesses most frequently -- usually copies of RAM-based memory locations. In the classic von Neumann computer, RAM was the “chalkboard” where processors did the math of a program. Placing this data store closer to the processor itself -- so data requests and responses didn’t have to traverse the motherboard bus -- reduced the wait time, or latency, associated with processing and delivered faster chip performance.
RAM, by contrast, tends to comprise some permanent memory embedded on the motherboard plus memory modules the consumer can install into dedicated slots or attachment locations. These memories are accessed via the mainboard bus -- channels, or conduits, etched into the motherboard that interconnect different devices and chipsets. CPU cache memory operates 10 to 100 times faster than RAM, requiring only a few nanoseconds to respond to a CPU request. RAM, in turn, is much speedier in its response time than magnetic media, which delivers I/O at rates measured in milliseconds.
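The payoff of putting a fast cache in front of a slower store can be quantified with the standard average memory access time (AMAT) formula. The sketch below is illustrative only: the hit rates and latency figures are assumptions chosen to fall within the ranges discussed above (cache latency in nanoseconds, disk latency in milliseconds), not measurements of any particular system.

```python
def amat(hit_rate, hit_latency, miss_latency):
    """Average access time given a cache hit rate and the latencies of
    the cache versus the slower backing store (same time units)."""
    return hit_rate * hit_latency + (1.0 - hit_rate) * miss_latency

# Assumed figures: CPU cache ~1 ns in front of RAM ~100 ns, 95% hit rate.
print(amat(0.95, 1.0, 100.0))            # average access in nanoseconds

# Assumed figures: RAM ~100 ns in front of a ~10 ms (10,000,000 ns) disk.
print(amat(0.99, 100.0, 10_000_000.0))   # even rare misses dominate
```

Even with a 99% hit rate, the millisecond-scale disk misses dominate the average, which is why adding faster tiers between the CPU and magnetic media pays off so dramatically.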
It should be noted that somewhat slower flash memory is now being used to provide an additional cache at the magnetic media level -- on disk controllers -- in an effort to improve the latency characteristics of disk, especially as disks become more capacious and access demands increase. Considerable ink has been spilled suggesting that flash -- or solid-state disks -- will at some point displace magnetic disks altogether as a production storage medium.
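The principle a flash cache on a disk controller applies in hardware can be sketched in software as a small least-recently-used (LRU) read cache: keep recently accessed blocks in the faster medium and fall back to the slower one on a miss. This is a minimal illustration, not a model of any vendor's controller; the block numbers and the backing "disk" dictionary are assumptions for the example.

```python
from collections import OrderedDict

class LRUCache:
    """Minimal LRU read cache in front of a slower backing store."""

    def __init__(self, capacity):
        self.capacity = capacity
        self.store = OrderedDict()   # insertion order tracks recency

    def read(self, block, backing_read):
        if block in self.store:
            self.store.move_to_end(block)   # mark most recently used
            return self.store[block]        # fast path: cache hit
        data = backing_read(block)          # slow path: go to "disk"
        self.store[block] = data
        if len(self.store) > self.capacity:
            self.store.popitem(last=False)  # evict least recently used
        return data

# Illustrative backing store standing in for magnetic disk blocks.
disk = {n: f"block-{n}" for n in range(10)}
cache = LRUCache(capacity=3)
for n in [0, 1, 2, 0, 3]:    # re-reading block 0 keeps it hot; 3 evicts 1
    cache.read(n, disk.__getitem__)
print(list(cache.store))     # → [2, 0, 3]
```

The same recency-based eviction idea underlies most caching tiers, whether implemented in controller firmware or in an operating system's page cache.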