

What is the difference between RAM and cache memory?

Cache memory and random access memory both place data closer to the processor to reduce latency in response times. Learn why cache memory can be the faster option.

One response to the difference between RAM and cache memory question might be one of a real estate agent's old favorites: "Location, location, location!"

Cache is usually part of the central processing unit, or part of a complex that includes the CPU and an adjacent chipset. It holds the data and instructions that an executing program accesses most frequently, usually copied from RAM-based memory locations.

In the classic von Neumann computer, RAM was the "chalkboard" where processors did the math of a program. Placing this data store closer to the processor, so data requests and responses didn't have to traverse the motherboard bus, reduced the wait time, or latency, associated with processing and delivered faster chip performance.

RAM, by contrast, tends to include memory permanently soldered onto the motherboard as well as memory modules that the consumer can install into dedicated slots or attachment locations. These memories are accessed via the mainboard bus -- channels or conduits etched into the motherboard that interconnect different devices and chipsets.

Cache defined

The term cache generally refers to hardware or software used to temporarily store recently and frequently accessed data. It provides a faster way to access data, but tends to be more expensive than other types of memory and storage in a computer, including hard drives and SSDs.
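The software side of that definition can be sketched with Python's functools.lru_cache, which keeps recently computed results in memory so repeated requests skip the slow path. The function name and the call counter here are illustrative, a stand-in for any expensive lookup:

```python
from functools import lru_cache

CALLS = {"count": 0}  # tracks how often the slow path actually runs

@lru_cache(maxsize=128)          # keep up to 128 recent results in memory
def slow_lookup(key: int) -> int:
    """Stand-in for an expensive operation, e.g. a disk read or database query."""
    CALLS["count"] += 1
    return key * key

slow_lookup(7)   # miss: computed and cached
slow_lookup(7)   # hit: served from the cache; the slow path is skipped
print(slow_lookup.cache_info())  # built-in hit/miss statistics
```

The second call never reaches the function body, which is exactly the trade caching makes: spend a little fast memory to avoid repeating slow work.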


The cache provides a small amount of faster memory that's local to cache clients, such as the CPU, applications, web browsers and OSes, and is rapidly accessible. Cache memory is usually volatile, meaning its data does not persist if power is lost.

All types of cache memory are used to reduce data access times and latency while also improving I/O. Because almost all application workloads depend on I/O operations, caching improves application performance. By speeding up data access and I/O, the cache also improves computer performance.

The relationship between cache memory and RAM

Cache memory includes extremely fast L1 cache, which is built directly into a computer's CPU cores. L2 cache is typically also on the processor die, dedicated to each core, while the larger L3 cache is usually shared among all cores. Because L2 and L3 are larger and sit farther from the execution units, they are somewhat slower than L1.
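Why the hierarchy matters shows up in access patterns. The toy simulation below models a tiny direct-mapped cache (4 lines of 8 words each -- real caches are set-associative and far larger, so the numbers are purely illustrative) and counts misses for a sequential walk versus a walk that jumps a full cache line on every step:

```python
LINE_WORDS = 8      # words per cache line (illustrative)
NUM_LINES = 4       # lines in this toy direct-mapped cache

def count_misses(addresses):
    """Simulate a tiny direct-mapped cache and count the misses."""
    lines = [None] * NUM_LINES          # tag currently held by each line
    misses = 0
    for addr in addresses:
        block = addr // LINE_WORDS      # which memory block this address is in
        index = block % NUM_LINES       # which cache line that block maps to
        if lines[index] != block:       # tag mismatch: miss, fill the line
            lines[index] = block
            misses += 1
    return misses

N = 64
sequential = list(range(N))                               # walk memory in order
strided = [s + i * LINE_WORDS for s in range(LINE_WORDS)
           for i in range(N // LINE_WORDS)]               # jump a line each step

print(count_misses(sequential), count_misses(strided))    # prints: 8 64
```

The sequential walk misses once per line and then gets seven hits for free; the strided walk misses on every access because each line is evicted before it's reused. That locality effect is what the L1/L2/L3 hierarchy is built to exploit.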


Dynamic RAM, or DRAM, on the other hand, is the main memory in a computer where the OS, applications and data in use are temporarily kept to enable the CPU to access them quickly. RAM is installed in slots on the motherboard and is accessed by the CPU over the memory bus.

Speed and cost

When it comes to speed, there's a good bit of difference between RAM and cache memory. Because it's built into the CPU or sits adjacent to the processor, CPU cache memory operates 10 to 100 times faster than RAM, requiring only a few nanoseconds to respond to a CPU request. When the CPU accesses RAM across the computer's motherboard, the system bus limits its speed. RAM data access is, however, faster than access to read-only memory and mechanical storage devices, such as hard disks and tape. Magnetic media delivers I/O at rates measured in milliseconds.
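The scale of those gaps is easier to see side by side. The figures below are order-of-magnitude illustrations consistent with the ranges above (a few nanoseconds for cache, roughly 100x that for RAM, milliseconds for disk), not measured benchmarks:

```python
# Illustrative access latencies -- order-of-magnitude figures, not benchmarks
latency_ns = {
    "L1 cache": 1,            # a few nanoseconds; 1 ns used here
    "RAM": 100,               # roughly 10 to 100 times slower than cache
    "Hard disk": 10_000_000,  # mechanical seek: measured in milliseconds
}

base = latency_ns["L1 cache"]
for device, ns in latency_ns.items():
    print(f"{device}: {ns:,} ns ({ns // base:,}x L1)")
```

At these ratios, a single disk access costs as much time as millions of cache hits, which is why every tier of the memory hierarchy tries to avoid falling through to the tier below it.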


Cache memory's greater speed comes at a price. Another key difference between RAM and cache memory is that cache memory is more expensive than RAM.

Slower flash memory is also being used to provide an additional cache at the magnetic media level -- on disk controllers -- in an effort to improve the latency characteristics of disk, especially as disks expand in capacity and data access demands increase. Considerable ink has been spilled suggesting that flash -- or SSDs -- will at some point displace magnetic hard disks altogether as a production storage medium.


Cache technology is used to make computer operations more efficient. With cache memory, the CPU gets faster access to frequently used instructions for the operation of programs and to frequently accessed data. Because it's built directly into the CPU, cache memory provides the fastest possible access to memory locations, supporting faster CPU performance. Adjacent chips that hold L2 and L3 memory usually have a direct pathway to the CPU to optimize performance.

A key difference between RAM and cache memory is that RAM is used as the place to keep the OS, applications and data that are in use. RAM provides the CPU with quick access to those programs and data.

But RAM has its limits. Once a computer's RAM fills up, the system must fall back on virtual memory to compensate for the shortage of physical memory. The OS creates virtual memory by temporarily transferring inactive data from RAM to disk storage, combining active memory in RAM and inactive memory on hard drives to form contiguous addresses that hold an application and its data.
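That swap-out of inactive data can be sketched with a toy model. The class name ToySwap and the two-frame limit are hypothetical; a real OS manages paging with page tables and hardware support, but the least-recently-used eviction idea is the same:

```python
from collections import OrderedDict

class ToySwap:
    """Minimal model of paging: a few RAM frames backed by 'disk' (illustrative)."""
    def __init__(self, frames: int):
        self.frames = frames
        self.ram = OrderedDict()   # page -> data, ordered by recency of use
        self.disk = {}             # pages that have been swapped out

    def touch(self, page, data=None):
        if page in self.ram:                   # already resident: refresh recency
            self.ram.move_to_end(page)
        else:
            if len(self.ram) >= self.frames:   # RAM full: evict least-recent page
                victim, vdata = self.ram.popitem(last=False)
                self.disk[victim] = vdata      # inactive page goes to disk
            self.ram[page] = self.disk.pop(page, data)  # swap in, or load fresh

swap = ToySwap(frames=2)
swap.touch("A", 1)
swap.touch("B", 2)
swap.touch("C", 3)          # RAM full, so the least-recent page A moves to disk
print(sorted(swap.ram), sorted(swap.disk))   # prints: ['B', 'C'] ['A']
```

Touching "A" again would swap it back into RAM and push out whichever page is now least recently used -- the thrashing pattern that makes a machine crawl once physical memory is exhausted.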
