Cache Replacement Policy in Operating System
A cache replacement policy is a mechanism used in computer systems, particularly in cache memory, to determine which cache entry to evict when a new entry must be loaded into a full cache. The policy aims to maximize the cache's effectiveness by keeping the most frequently accessed or most valuable data in the cache, thereby minimizing cache misses and optimizing overall performance.
Types of Cache Replacement Policies
Multiple cache replacement policies are commonly implemented, each with its own pros and cons. Among the popular cache replacement policies are:
1. Least Recently Used (LRU)
Under this policy, the entry that has not been accessed for the longest time is replaced. It rests on the premise that data accessed recently is likely to be accessed again soon. Implementing LRU requires tracking the recency of every entry, for example with timestamps or an ordered list, and this bookkeeping must be kept accurate on every access; the overhead grows with cache size, which is an important consideration for larger caches.
Example:
Suppose we have an empty cache with four slots and the access sequence A, B, C, D, B, A, E, F. Applying the LRU policy after each access gives the following cache states:
- Access A: A is added to the cache because it is initially empty: [A]
- Access B: B is added to the cache: [A, B]
- Access C: C is added to the cache: [A, B, C]
- Access D: D is added to the cache: [A, B, C, D]
- Access B: Since B is already present, it is moved to the most recently used position: [A, C, D, B]
- Access A: Since A is already present, it is moved to the most recently used position: [C, D, B, A]
- Access E: The cache is now full, so the least recently used entry must be replaced. Since C is the least recently used entry, it is evicted and E takes its place: [D, B, A, E]
- Access F: The cache is full again, so D, now the least recently used entry, is evicted and replaced by F: [B, A, E, F]
In summary, whenever the cache is full, the LRU policy evicts the entry that has gone unused the longest. Recently used entries stay toward the most recently used end of the cache, while stale entries drift toward eviction. This keeps frequently used items in the cache, increasing cache hits and improving speed.
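The walkthrough above can be reproduced with a minimal Python sketch using `collections.OrderedDict`, whose insertion order doubles as a recency order (the class name `LRUCache` is purely illustrative, not from any particular library):

```python
from collections import OrderedDict

class LRUCache:
    """Least Recently Used cache: evicts the entry untouched the longest."""
    def __init__(self, capacity):
        self.capacity = capacity
        self.entries = OrderedDict()  # leftmost = least recently used

    def access(self, key):
        if key in self.entries:
            self.entries.move_to_end(key)  # mark as most recently used
        else:
            if len(self.entries) >= self.capacity:
                self.entries.popitem(last=False)  # evict least recently used
            self.entries[key] = True

cache = LRUCache(4)
for item in "ABCDBAEF":
    cache.access(item)
print(list(cache.entries))  # → ['B', 'A', 'E', 'F']
```

The final state matches the example: B, A, E, and F survive, ordered from least to most recently used.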
2. First-In-First-Out (FIFO)
Under the FIFO policy, the cache entry that has been in the cache the longest is replaced. The first item to enter the cache is the first to be replaced, following a straightforward queue structure, regardless of how often or how recently it has been used.
Example:
Consider the same cache with a 4-entry capacity and the access sequence A, B, C, D, B, A, E, F. Under the FIFO policy the cache state evolves as follows:
- Access A: A is added to the cache because it is initially empty: [A]
- Access B: B is added to the cache: [A, B]
- Access C: C is added to the cache: [A, B, C]
- Access D: D is added to the cache: [A, B, C, D]
- Access B: B is already in the cache, so this is a hit; FIFO does not reorder entries on a hit: [A, B, C, D]
- Access A: A is also already in the cache, so nothing changes: [A, B, C, D]
- Access E: The cache is full, so the entry that arrived first, A, is evicted and E is added: [B, C, D, E]
- Access F: The cache is full again, so the oldest remaining entry, B, is evicted and F is added: [C, D, E, F]
In conclusion, the FIFO policy uses a straightforward queue-like structure. Once the cache is full, the entry that was stored earliest is overwritten first. It disregards access frequency and usage patterns, so items that were accessed recently may still be evicted, while older items remain in the cache until newer entries push them out.
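A minimal Python sketch of FIFO can pair a `deque` (arrival order) with a `set` (fast membership tests); note that a hit leaves the queue untouched (the class name `FIFOCache` is illustrative):

```python
from collections import deque

class FIFOCache:
    """First-In-First-Out cache: evicts the entry that arrived earliest."""
    def __init__(self, capacity):
        self.capacity = capacity
        self.queue = deque()   # arrival order, oldest at the left
        self.entries = set()

    def access(self, key):
        if key in self.entries:
            return  # hit: FIFO ignores reuse, order is unchanged
        if len(self.queue) >= self.capacity:
            oldest = self.queue.popleft()   # evict the first-in entry
            self.entries.discard(oldest)
        self.queue.append(key)
        self.entries.add(key)

cache = FIFOCache(4)
for item in "ABCDBAEF":
    cache.access(item)
print(list(cache.queue))  # → ['C', 'D', 'E', 'F']
```

Compare this with LRU on the same sequence: because FIFO ignores the hits on B and A, those entries are evicted first once E and F arrive.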
3. Random Replacement
This policy randomly chooses a cache entry to replace, without considering usage patterns. It is easy to implement, but it may not capture temporal locality well, which can lead to poor cache utilisation.
Example:
Using the same cache and access sequence as before, one possible sequence of cache states under the random replacement policy is:
- Access A: A is added to the cache because it is initially empty: [A]
- Access B: B is added to the cache: [A, B]
- Access C: C is added to the cache: [A, B, C]
- Access D: The cache is updated with D: [A, B, C, D]
- Access B: Since B already exists in the cache, no replacement is required: [A, B, C, D]
- Access A: Since A is already cached, no replacement is needed: [A, B, C, D]
- Access E: A random entry must be replaced since the cache is full. In this instance, D happens to be picked and is replaced by E: [A, B, C, E]
- Access F: The cache is once more full, so another random entry is replaced. This time, E is picked and replaced by F: [A, B, C, F]
In conclusion, the random replacement policy chooses its eviction victim at random, without considering usage or access frequency. When space is needed, it simply picks an arbitrary entry from the cache for eviction. In contrast to policies that take access patterns into account, it is simple to implement but may not capture temporal locality, which can lead to poor cache utilisation.
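A minimal Python sketch of random replacement only needs a set and a random number generator; the generator is seeded here so repeated runs behave the same, but which entry gets evicted is still arbitrary (the class name `RandomCache` is illustrative):

```python
import random

class RandomCache:
    """Random replacement cache: evicts an arbitrary entry when full."""
    def __init__(self, capacity, seed=None):
        self.capacity = capacity
        self.entries = set()
        self.rng = random.Random(seed)  # seeded for reproducible runs

    def access(self, key):
        if key in self.entries:
            return  # hit: nothing to update
        if len(self.entries) >= self.capacity:
            victim = self.rng.choice(sorted(self.entries))  # any entry may go
            self.entries.discard(victim)
        self.entries.add(key)

cache = RandomCache(4, seed=42)
for item in "ABCDBAEF":
    cache.access(item)
print(sorted(cache.entries))  # four survivors; the exact set depends on the random picks
```

Unlike LRU or LFU, there is no per-entry metadata to maintain, which is exactly why this policy is cheap but blind to access patterns.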
4. Least Frequently Used (LFU)
Under the LFU policy, the cache entry that has been accessed the fewest times is replaced. The assumption is that items with a low access frequency are unlikely to be accessed again soon. LFU keeps an access count for each cache entry and, when eviction is needed, removes the entry with the lowest count.
Example:
Consider the same cache and access sequence under LFU, with access counts shown in parentheses:
- Access A: A gets added to the cache with a count of 1 because the cache is initially empty. [A (1)]
- Access B: B is given a count of 1 and added to the cache: [A (1), B (1)]
- Access C: C is given a count of 1 and added to the cache: [A (1), B (1), C (1)]
- Access D: D is added to the cache with a count of 1: [A (1), B (1), C (1), D (1)]
- Access B: B is already in the cache, so its count increases to 2: [A (1), C (1), D (1), B (2)]
- Access A: A is already in the cache, so its count increases to 2: [C (1), D (1), B (2), A (2)]
- Access E: The cache is full, so an entry must be replaced. LFU evicts the entry with the lowest count; C and D are tied at a count of 1, and the tie is typically broken by evicting the least recently used of the two, which is C. E enters with a count of 1: [D (1), B (2), A (2), E (1)]
- Access F: The cache is full again. D and E are tied at a count of 1, and D is the least recently used of the two, so D is evicted and F enters with a count of 1: [B (2), A (2), E (1), F (1)]
In conclusion, when the cache is full, the LFU policy removes the entry with the lowest access count, on the assumption that items accessed infrequently are unlikely to be needed again. Each access to an item increments its count. This strategy keeps frequently accessed items in the cache while evicting rarely used ones.
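A minimal Python sketch of LFU keeps a count per key plus a logical timestamp so that count ties can be broken by recency, as in the walkthrough above (the class name `LFUCache` is illustrative):

```python
import itertools

class LFUCache:
    """Least Frequently Used cache; count ties are broken by recency (LRU)."""
    def __init__(self, capacity):
        self.capacity = capacity
        self.counts = {}                 # key -> access count
        self.last_used = {}              # key -> logical timestamp
        self.clock = itertools.count()   # monotonically increasing ticks

    def access(self, key):
        if key not in self.counts and len(self.counts) >= self.capacity:
            # Evict the lowest-count entry; among ties, the least recently used.
            victim = min(self.counts,
                         key=lambda k: (self.counts[k], self.last_used[k]))
            del self.counts[victim]
            del self.last_used[victim]
        self.counts[key] = self.counts.get(key, 0) + 1
        self.last_used[key] = next(self.clock)

cache = LFUCache(4)
for item in "ABCDBAEF":
    cache.access(item)
print(cache.counts)  # → {'A': 2, 'B': 2, 'E': 1, 'F': 1}
```

The surviving counts match the example: the twice-used A and B are retained, while the once-used C and D are evicted in favour of E and F.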
5. Most Recently Used (MRU)
This policy evicts the entry that was accessed most recently. It is based on the idea that an item that has just been used is unlikely to be needed again soon, which can hold for workloads such as one-pass sequential scans.
Example:
Under MRU, the cache state for the same cache and access sequence would be:
- Access A: The cache is initially empty, so A is added: [A]
- Access B: B is added to the cache: [A, B]
- Access C: C is added to the cache: [A, B, C]
- Access D: D is added to the cache: [A, B, C, D]
- Access B: B was already in the cache, so it is moved to the end, reflecting that it was most recently used: [A, C, D, B]
- Access A: A was already in the cache, so it is moved to the end as the most recently used entry: [C, D, B, A]
- Access E: The cache is full, so an entry must be replaced. Under the MRU policy, the entry that was used most recently, A, is evicted and E takes its place: [C, D, B, E]
- Access F: The cache is full again. E is now the most recently used entry, so it is evicted and replaced by F: [C, D, B, F]
In conclusion, when the cache is full, the MRU policy removes the entry that was used most recently, on the assumption that the item just used is the least likely to be needed again soon. This makes MRU a poor fit for workloads with strong temporal locality, but it can work well for cyclic or scanning access patterns in which the most recent item will not be revisited for a long time.
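MRU is a one-line change from the LRU sketch: the same `OrderedDict` recency order is kept, but eviction pops from the most recently used end instead of the least (the class name `MRUCache` is illustrative):

```python
from collections import OrderedDict

class MRUCache:
    """Most Recently Used cache: evicts the entry touched most recently."""
    def __init__(self, capacity):
        self.capacity = capacity
        self.entries = OrderedDict()  # rightmost = most recently used

    def access(self, key):
        if key in self.entries:
            self.entries.move_to_end(key)  # hit: becomes most recently used
            return
        if len(self.entries) >= self.capacity:
            self.entries.popitem(last=True)  # evict most recently used
        self.entries[key] = True

cache = MRUCache(4)
for item in "ABCDBAEF":
    cache.access(item)
print(list(cache.entries))  # → ['C', 'D', 'B', 'F']
```

Note how the outcome differs from LRU on the same sequence: the recently promoted A is sacrificed, while the older C and D survive.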
6. Adaptive Replacement Cache (ARC)
The ARC policy dynamically adjusts how cache space is split between frequently used items and recently accessed items, combining the benefits of the LRU and LFU policies. It adapts to changes in access patterns over time.
Example:
The ARC policy maintains two lists to track recently and frequently accessed items: a recency (LRU-like) list for items seen once recently, and a frequency (LFU-like) list for items seen more than once. The basic goal of ARC is to balance keeping recently accessed items (to capture temporal locality) with retaining frequently accessed items (to increase hit rates).
Here is a high-level description of how ARC works:
- The LRU and LFU lists, as well as the cache, are empty at first.
- The ARC policy monitors access requests and determines whether the requested item is already in the cache:
- It counts as a hit if the item is in the cache. In order to reflect the item's recent and frequent usage, the policy modifies the relevant counters and moves the item inside the LRU and LFU lists.
- A miss is when the object is not found in the cache. Based on the cache's present condition and its hit/miss history, the policy decides which item to remove.
- The LRU and LFU lists' cache size allocation is dynamically changed for the eviction decision in ARC:
- The policy removes an item from the LFU list if it is determined that the LFU list is more advantageous based on the hit and miss rates.
- An item is removed from the LRU list if the policy determines that the LRU list is more advantageous.
- The following considerations influence the decision to evict:
- "Ghost" entries are the keys of items that were recently evicted from the cache; the keys are remembered for a while even though the data itself is gone. A hit on a ghost entry tells ARC that it evicted the wrong kind of item, which helps it avoid over-evicting items that were dropped for low frequency but soon reappear.
- The eviction decision balances promoting recently accessed items (to capture temporal locality) against retaining frequently accessed items (to maximise hit rates).
The dynamic adjustment of cache space allocation, driven by the interaction between the recency and frequency lists and their hit and miss rates, is what gives ARC its complexity. Maximising cache performance requires continuously updating counters, maintaining appropriate sizes for the lists, and making adaptive eviction choices.
Although a simple example cannot convey all of ARC's intricacies, understanding its basic principles and the trade-offs it seeks to balance helps in appreciating its behaviour and advantages in more complex scenarios.
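For readers who want to see the moving parts, the following Python sketch follows the standard ARC formulation: `t1` and `t2` hold cached keys seen once and more than once recently, `b1` and `b2` are the corresponding ghost lists, and `p` is the adaptive target size of `t1`. It is a simplified illustration for teaching, not a production implementation, and all names are our own:

```python
from collections import OrderedDict

class ARCCache:
    """Sketch of Adaptive Replacement Cache with ghost lists and adaptation."""
    def __init__(self, capacity):
        self.c = capacity
        self.p = 0  # adaptive target size of t1 (the recency side)
        self.t1, self.t2 = OrderedDict(), OrderedDict()  # cached keys
        self.b1, self.b2 = OrderedDict(), OrderedDict()  # ghost (evicted) keys

    def _replace(self, hit_in_b2):
        # Demote from t1 if it exceeds its target p, otherwise from t2.
        if self.t1 and (len(self.t1) > self.p or
                        (hit_in_b2 and len(self.t1) == self.p)):
            old, _ = self.t1.popitem(last=False)
            self.b1[old] = True   # remember the key as a ghost
        else:
            old, _ = self.t2.popitem(last=False)
            self.b2[old] = True

    def access(self, key):
        if key in self.t1 or key in self.t2:          # cache hit
            self.t1.pop(key, None)
            self.t2.pop(key, None)
            self.t2[key] = True                       # promote to frequency list
            return True
        if key in self.b1:                            # ghost hit: grow recency side
            self.p = min(self.c, self.p + max(len(self.b2) // len(self.b1), 1))
            self._replace(False)
            del self.b1[key]
            self.t2[key] = True
            return False
        if key in self.b2:                            # ghost hit: grow frequency side
            self.p = max(0, self.p - max(len(self.b1) // len(self.b2), 1))
            self._replace(True)
            del self.b2[key]
            self.t2[key] = True
            return False
        # Complete miss: keep directory sizes bounded, then insert into t1.
        if len(self.t1) + len(self.b1) == self.c:
            if len(self.t1) < self.c:
                self.b1.popitem(last=False)
                self._replace(False)
            else:
                self.t1.popitem(last=False)
        elif len(self.t1) + len(self.t2) + len(self.b1) + len(self.b2) >= self.c:
            if len(self.t1) + len(self.t2) + len(self.b1) + len(self.b2) >= 2 * self.c:
                self.b2.popitem(last=False)
            self._replace(False)
        self.t1[key] = True
        return False

cache = ARCCache(4)
for item in "ABCDBAEF":
    cache.access(item)
print(sorted(list(cache.t1) + list(cache.t2)))  # → ['A', 'B', 'E', 'F']
```

On this short sequence ARC behaves like LRU (B and A were promoted to `t2` by their repeat accesses, so the once-used C and D are demoted to the ghost list `b1`); its adaptive behaviour only shows up on longer traces where ghost hits shift the balance `p` between the two lists.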
Conclusion
In conclusion, cache replacement policies play a crucial role in selecting which items to remove from the cache as it fills up. Different policies, including LRU, FIFO, Random, LFU, MRU, and ARC, use different eviction decision-making processes, each trading implementation complexity against how well it matches real access patterns.