Technically speaking, cache memory refers to memory that is integral to the CPU, where it provides nanosecond-speed access to frequently referenced instructions or data. The only way to increase cache memory of this kind is to upgrade your CPU and cache chip complex. The problem is that this might require a rip-and-replace of an existing computer, since few motherboards support in-place upgrades to next-generation processors. There are a few exceptions, however: Some older motherboards provided a vacant slot for cache chips, enabling an upgrade from lower-capacity L2 or L3 cache to higher capacity.
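On a Linux server, you can see how much L1, L2 and L3 cache the current processor actually provides before deciding whether a CPU upgrade is worthwhile. A quick sketch, assuming the standard lscpu and getconf utilities are present:

```shell
# List the cache sizes the kernel reports for the installed CPU
lscpu | grep -i cache

# getconf exposes the same figures programmatically, in bytes
# (some virtualized environments report 0 here)
getconf LEVEL2_CACHE_SIZE
getconf LEVEL3_CACHE_SIZE
```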
Some refer to RAM as cache memory, and it may indeed play a caching role, both for certain data used by a program -- serving an L3 or L4 role -- and as a read or write cache intended to optimize retrieval or storage times for underlying solid-state or magnetic storage (flash, disk or tape). To a certain extent, RAM capacity can be increased by installing additional memory modules; check with your motherboard manufacturer to determine its limits on RAM expansion.
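Before buying modules, it helps to know what is already installed and how many DIMM slots remain free. On Linux, a rough sketch (dmidecode requires root, and its output varies by motherboard vendor):

```shell
# Total RAM as the operating system sees it
free -h

# Slot-by-slot DIMM inventory from the SMBIOS tables; empty slots
# show "No Module Installed"
sudo dmidecode -t memory | grep -E 'Size|Locator'
```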
Caching also typically describes techniques for masking magnetic storage latency -- the delay in servicing random data accesses -- often using solid-state memories such as flash. Physical limits may constrain how much flash you can add to a server chassis -- for example, the number of PCI slots for flash expansion cards or the number of ports for connecting solid-state disks -- but bus extensions enable the creation of high-speed fabrics (Fibre Channel, SAS, etc.) that can increase the number of memory-based devices that can be fielded.
It should be noted that more memory does not equal faster performance in many cases. Some misinformation about flash devices, for example, has found its way into the trade press, suggesting that adding flash to a server hosting virtual machines (VMs) will speed up the performance of those VMs. This is not always the case: VM performance problems tend to be a function of poorly executed programs -- either the hypervisor or the guest application -- as demonstrated by high CPU cycles visible in any test tool, rather than by the slow I/O processing that I/O caching might address. In many cases, there are no logjams in I/O processing at all, as evidenced by extremely low or nonexistent queue depths in test results (queue depth is equivalent to the length of a line of cars awaiting service at the drive-thru window of a fast food restaurant). Bottom line: In those cases, you can increase cache memory, but it will do nothing to speed up the app.
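You can check for an I/O logjam yourself before spending money on cache. On Linux, the kernel publishes a per-device count of I/Os currently in flight in /proc/diskstats; a value that stays near zero suggests storage is not the bottleneck. A minimal sketch:

```shell
# Field 12 of /proc/diskstats is "I/Os currently in progress" --
# effectively the instantaneous queue depth per block device
awk '{ printf "%-12s in-flight I/Os: %s\n", $3, $12 }' /proc/diskstats

# For a rolling view, the sysstat package's iostat reports the same
# queue in the aqu-sz (or, in older versions, avgqu-sz) column:
#   iostat -x 1 3
```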
That said, a high-performance processor with an insufficient cache size will usually underperform. So, adding cache can sometimes deliver better overall balance and performance.