cache memory

Contributor(s): Jon William Toigo

Cache memory, also called CPU memory, is high-speed random access memory (RAM) that a computer microprocessor can access more quickly than it can access regular RAM. This memory is typically integrated directly into the CPU chip or placed on a separate chip that has its own bus interconnect with the CPU.

The basic purpose of cache memory is to store program instructions that are frequently re-referenced by software during operation. Fast access to these instructions increases the overall speed of the software program.

As the microprocessor processes data, it looks first in the cache memory; if it finds the instructions there (from a previous reading of data), it does not have to do a more time-consuming reading of data from larger memory or other data storage devices.
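This check-the-cache-first behavior can be sketched in a few lines of Python. This is a conceptual illustration, not real hardware; the names `cache` and `main_memory` are stand-ins for the fast and slow storage tiers:

```python
# Hypothetical sketch of the lookup order described above: check the
# small, fast cache first, and fall back to large, slow main memory
# only on a miss; the fetched value is then kept for next time.

def read(address, cache, main_memory):
    if address in cache:              # cache hit: fast path
        return cache[address]
    value = main_memory[address]      # cache miss: slow path
    cache[address] = value            # keep a copy for future reads
    return value

main_memory = {0x10: "instruction A", 0x20: "instruction B"}
cache = {}
read(0x10, cache, main_memory)       # miss: fetched from main memory
read(0x10, cache, main_memory)       # hit: served from the cache
```

The second `read` never touches `main_memory`, which is exactly the time savings the paragraph above describes.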

Most programs use very few resources once they have been open and running for a time, mainly because frequently re-referenced instructions tend to be cached. This explains why, in benchmarks, computers with slower processors but larger caches often outperform computers with faster processors but less cache.

Multi-tier or multilevel caching has become popular in server and desktop architectures, with the different levels providing greater efficiency through managed tiering. Simply put, the less frequently certain data or instructions are accessed, the lower the cache level to which they are written.

Cache memory levels explained

Cache memory is fast and expensive. Traditionally, it is categorized as "levels" that describe its closeness and accessibility to the microprocessor:

  • Level 1 (L1) cache is extremely fast but relatively small, and is usually embedded in the processor chip (CPU).
  • Level 2 (L2) cache is often more capacious than L1; it may be located on the CPU or on a separate chip or coprocessor with a high-speed alternative system bus interconnecting the cache to the CPU, so as not to be slowed by traffic on the main system bus.
  • Level 3 (L3) cache is typically specialized memory that works to improve the performance of L1 and L2. It can be significantly slower than L1 or L2, but is typically around twice the speed of RAM. In multicore processors, each core may have its own dedicated L1 and L2 cache but share a common L3 cache. When an instruction is referenced in the L3 cache, it is typically promoted to a higher cache level.
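The level structure above can be sketched as a simple software model. This is an illustrative simulation only, assuming each level is a lookup table and that a hit in a lower level promotes the entry into the levels above it, as described for L3:

```python
# Illustrative model (not real hardware) of a multilevel cache lookup:
# search L1, then L2, then L3; on a hit in a lower level, copy the
# entry into the faster levels above it ("promotion").

def lookup(address, levels):
    """levels[0] is L1, levels[1] is L2, levels[2] is L3."""
    for i, level in enumerate(levels):
        if address in level:
            value = level[address]
            for higher in levels[:i]:   # promote into faster levels
                higher[address] = value
            return value, i + 1         # which level served the hit
    return None, None                   # miss in every level

l1, l2, l3 = {}, {}, {0xAB: "cache line"}
value, hit_level = lookup(0xAB, [l1, l2, l3])   # first hit comes from L3
# after promotion, the same address now hits in L1
```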

Memory cache configurations

Caching configurations continue to evolve, but memory cache traditionally works under three different configurations:

  • Direct mapping, in which each block is mapped to exactly one cache location. Conceptually, this is like rows in a table with three columns: the data block or cache line that contains the actual data fetched and stored, a tag that contains all or part of the address of the fetched data, and a flag bit that indicates whether the entry contains valid data.
  • Fully associative mapping is similar to direct mapping in structure, but allows a block to be mapped to any cache location rather than to a pre-specified cache location (as is the case with direct mapping).
  • Set associative mapping can be viewed as a compromise between direct mapping and fully associative mapping in which each block is mapped to a subset of cache locations. It is sometimes called N-way set associative mapping, which provides for a location in main memory to be cached to any of "N" locations in the L1 cache.
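The three configurations differ in how a memory address is split into a tag, a set index and a byte offset. The sketch below assumes made-up sizes (64-byte lines, 256 lines, 4 ways); with `WAYS = 1` the same arithmetic reduces to direct mapping, and with `WAYS = NUM_LINES` it becomes fully associative:

```python
# Sketch of address splitting under N-way set associative mapping.
# The sizes are illustrative examples, not any particular CPU's.

LINE_SIZE = 64                  # bytes per cache line
NUM_LINES = 256                 # total cache lines
WAYS = 4                        # the "N" in N-way set associative
NUM_SETS = NUM_LINES // WAYS    # each set holds WAYS candidate lines

def split_address(address):
    offset = address % LINE_SIZE                    # byte within the line
    set_index = (address // LINE_SIZE) % NUM_SETS   # which set to search
    tag = address // (LINE_SIZE * NUM_SETS)         # identifies the block
    return tag, set_index, offset

tag, set_index, offset = split_address(0x12345)
# the block may live in any of the WAYS slots of set `set_index`;
# the stored tag is compared against `tag` to detect a hit
```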

Specialized caches

In addition to instruction and data caches, there are other caches designed to provide specialized functions in a system. By some definitions, the L3 cache is a specialized cache because of its shared design. Other definitions separate instruction caching from data caching, referring to each as a specialized cache.

Other specialized memory caches include the translation lookaside buffer (TLB), which records recent virtual-address-to-physical-address translations.
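The TLB's role can be shown with a minimal sketch. This is a software analogy, assuming 4 KB pages and a hypothetical `page_table` lookup standing in for the much slower page-table walk:

```python
# Minimal sketch of a translation lookaside buffer: a small cache of
# recent virtual-page -> physical-frame translations, consulted before
# doing the slow page-table walk. `page_table` is a stand-in here.

PAGE_SIZE = 4096

def translate(virtual_address, tlb, page_table):
    page = virtual_address // PAGE_SIZE
    offset = virtual_address % PAGE_SIZE
    if page not in tlb:                 # TLB miss: do the slow walk
        tlb[page] = page_table[page]
    return tlb[page] * PAGE_SIZE + offset

page_table = {0: 7, 1: 3}               # virtual page -> physical frame
tlb = {}
physical = translate(4100, tlb, page_table)  # page 1 -> frame 3
```

Repeated accesses to the same page are then translated from the TLB without touching the page table at all.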

Still other caches are not, technically speaking, memory caches at all. Disk caches, for example, may leverage RAM or flash memory to provide much the same kind of data caching as memory caches do with CPU instructions. If data is frequently accessed from disk, it is cached into DRAM or flash-based silicon storage technology for faster access and response.


Specialized caches also exist for such applications as Web browsers, databases, network address binding and client-side Network File System protocol support. These types of caches might be distributed across multiple networked hosts to provide greater scalability or performance to an application that uses them.

Increasing cache size

L1, L2 and L3 caches have been implemented in the past using a combination of processor and motherboard components. Recently, the trend has been toward consolidating all three levels of memory caching on the CPU itself. For this reason, the primary means for increasing cache size has begun to shift from the acquisition of a specific motherboard with different chipsets and bus architectures to buying the right CPU with the right amount of integrated L1, L2 and L3 cache.

Contrary to popular belief, implementing flash or greater amounts of DRAM on a system does not increase cache memory. This can be confusing since the term memory caching (hard disk buffering) is often used interchangeably with cache memory. The former, using DRAM or flash to buffer disk reads, is intended to improve storage I/O by caching data that is frequently referenced in a buffer ahead of slower performing magnetic disk or tape. Cache memory, by contrast, provides read buffering for the CPU.


This was last updated in December 2014






File Extensions and File Formats

Powered by: