Cache Memory in Computers
Cache memory is a small, fast semiconductor memory that lets a computer retrieve frequently used data efficiently. It is generally much costlier per byte than main memory or disk storage. Cache memory acts as a buffer between the CPU and RAM, and its contents are far more readily available to the processor than data in main memory, which is built from slower DRAM.
Sometimes it is also known as CPU memory because it is integrated directly into the CPU or placed on a chip directly connected to the CPU. This places the cache memory significantly closer to the processor, so the processor can access it quickly.
It is smaller than main memory and has less storage space, but it is very fast: cache memory is typically somewhere between 10 and 100 times faster than main memory.
Cache memory and cache are two different terms. A cache is, in general, temporary storage implemented in either hardware or software, whereas cache memory is a specific hardware component that permits the computer to maintain caches at different levels.
Types of Cache Memory
Cache memory is divided into levels that describe its closeness and accessibility to the processor. It is generally divided into the following three levels:
Level 1: This level is also known as the primary cache. It is quite small, but it is extremely fast. It is generally integrated into the processor chip itself as the CPU cache.
Level 2: This level is also known as the secondary cache. The Level 2 cache is bigger than Level 1. It may be integrated on a separate chip or coprocessor and has its own high-speed bus connecting it directly to the CPU, so it is not slowed by traffic on the main system bus.
Level 3: This is a specialized memory developed to enhance the performance of Level 1 and Level 2. Level 1 and Level 2 can be significantly faster than Level 3, but Level 3 is still generally about twice the speed of DRAM. In a multi-core processor, each core has its own individual Level 1 and Level 2 caches, while the Level 3 cache is typically shared among the cores.
Before reading or writing a location in main memory, the processor first checks for a corresponding entry in the cache:
- If the processor finds the memory location in the cache, a cache hit occurs, and the data is read from the cache.
- If the processor does not find the memory location in the cache, a cache miss occurs. On every cache miss, the cache allocates a new entry and copies the data from main memory; the request is then fulfilled from the contents of the cache.
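The read path described above can be sketched in a few lines of Python. This is a toy model, not a real hardware interface; `main_memory`, `cache`, and `read` are illustrative names.

```python
# Toy model of the lookup described above: check the cache first; on a
# miss, copy the data from main memory into the cache, then serve the
# request from the cache contents.

main_memory = {addr: addr * 10 for addr in range(8)}  # toy backing store
cache = {}                                            # address -> data
hits = misses = 0

def read(addr):
    global hits, misses
    if addr in cache:            # cache hit: data served from the cache
        hits += 1
    else:                        # cache miss: allocate a new entry and
        misses += 1              # copy the data from main memory
        cache[addr] = main_memory[addr]
    return cache[addr]           # request fulfilled from the cache

for a in [1, 2, 1, 3, 2]:        # 1 and 2 repeat -> 2 hits, 3 misses
    read(a)
```

Repeated addresses hit in the cache, which is exactly what the hit ratio below measures.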
Hit ratio: The hit ratio is the quantity used to measure the performance of the cache memory.
Hit ratio = hits / (hits + misses) = number of hits / total accesses
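The formula above translates directly into code; `hit_ratio` is an illustrative helper name.

```python
# Hit ratio = hits / (hits + misses), as defined above.
def hit_ratio(hits, misses):
    return hits / (hits + misses)

# Example: 80 hits out of 100 total accesses gives a hit ratio of 0.8
ratio = hit_ratio(80, 20)
```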
The performance of the cache can be enhanced by the following:
- Using a larger block size.
- Using higher associativity.
- Reducing the miss rate.
- Reducing the miss penalty.
- Reducing the time to hit in the cache.
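Three of the factors listed above combine in the standard average memory access time formula, AMAT = hit time + miss rate × miss penalty (this formula is a common textbook summary, not stated in the text itself; the numbers below are illustrative).

```python
# AMAT = hit_time + miss_rate * miss_penalty (times in nanoseconds).
def amat(hit_time_ns, miss_rate, miss_penalty_ns):
    return hit_time_ns + miss_rate * miss_penalty_ns

# Lowering the miss rate (or the miss penalty) directly lowers AMAT:
fast = amat(1.0, 0.05, 100.0)   # 1 + 0.05 * 100 = 6.0 ns
slow = amat(1.0, 0.20, 100.0)   # 1 + 0.20 * 100 = 21.0 ns
```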
The following cache memory mapping techniques are available:
- Direct mapping
- Associative mapping
- Set-associative mapping
1) Direct Mapping: This is the simplest mapping technique. Each block of main memory is mapped to exactly one cache line. If that line is already occupied when a new block needs to be loaded, the old block is flushed out. The memory address is divided into two parts: an index field, which selects the cache line, and a tag field, which is stored in the cache to identify the block currently held in that line. The performance of this technique depends directly on the hit ratio: if the hit ratio is good, the technique performs well.
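A minimal sketch of direct mapping, assuming a tiny 4-line cache: the index is the block address modulo the number of lines, and the stored tag tells us which block currently occupies the line. All names and sizes are illustrative.

```python
# Direct mapping sketch: each block can live in exactly one line.
NUM_LINES = 4
lines = [None] * NUM_LINES       # each entry: (tag, block data) or None

def direct_lookup(block_addr):
    index = block_addr % NUM_LINES       # index field picks the line
    tag = block_addr // NUM_LINES        # tag field identifies the block
    entry = lines[index]
    if entry is not None and entry[0] == tag:
        return "hit"
    lines[index] = (tag, f"block {block_addr}")  # old block is replaced
    return "miss"

# Blocks 0 and 4 both map to line 0, so they keep evicting each other:
results = [direct_lookup(b) for b in [0, 4, 0]]   # miss, miss, miss
```

This conflict between blocks 0 and 4 is exactly the weakness that associative mapping removes.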
2) Associative Mapping: In the associative mapping technique, associative memory is used to store both the content and the address of each memory word. Any block can occupy any line of the cache: the word bits identify which word in the block is needed, and all the remaining bits form the tag. Because any word can be placed anywhere in the cache memory, it is the fastest and most flexible mapping form.
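A minimal sketch of fully associative mapping, using a dictionary to stand in for the parallel tag comparison of associative memory. The capacity and eviction choice are illustrative.

```python
# Fully associative sketch: a block may occupy any of the cache's lines,
# so lookup compares the tag against every occupied line at once.
CAPACITY = 4
store = {}   # tag -> block data; any block can go anywhere

def assoc_lookup(block_addr):
    if block_addr in store:            # tag matched some line: hit
        return "hit"
    if len(store) >= CAPACITY:         # cache full: evict one block
        store.pop(next(iter(store)))   # oldest-inserted choice, illustrative
    store[block_addr] = f"block {block_addr}"
    return "miss"

# Unlike direct mapping, blocks 0 and 4 do not conflict here:
r = [assoc_lookup(b) for b in [0, 4, 0, 4]]   # miss, miss, hit, hit
```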
3) Set-Associative Mapping: The set-associative mapping technique can be viewed as a compromise between the previous two cache memory mapping techniques. It maps each block of main memory to a subset of the cache's locations, called a set. With "N" lines per set, it is known as "N-way set-associative mapping," and a block may be placed in any of the N lines of its set. The best of the direct mapping technique and the associative mapping technique are combined to form the set-associative cache mapping technique.
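The compromise can be sketched as follows, assuming a toy 2-way cache with 2 sets: the set is chosen directly from the address (like direct mapping), but within the set a block may occupy either way (like associative mapping). All names and sizes are illustrative.

```python
# N-way set-associative sketch: index picks the set, and the block may
# occupy any of the WAYS lines inside that set.
NUM_SETS = 2
WAYS = 2
sets = [[] for _ in range(NUM_SETS)]   # each set holds up to WAYS tags

def setassoc_lookup(block_addr):
    s = sets[block_addr % NUM_SETS]    # index field picks the set
    tag = block_addr // NUM_SETS       # tag identifies the block in the set
    if tag in s:
        return "hit"
    if len(s) >= WAYS:                 # set full: evict the oldest way
        s.pop(0)
    s.append(tag)
    return "miss"

# Blocks 0 and 2 share set 0 but coexist in its two ways:
r = [setassoc_lookup(b) for b in [0, 2, 0, 2]]   # miss, miss, hit, hit
```

With WAYS = 1 this degenerates to direct mapping, and with a single set holding every line it becomes fully associative.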