
Cache Replacement Policy in Operating System

Cache replacement policy is a mechanism used in computer systems, particularly in cache memory, to determine which cache entry to evict or replace when a new entry needs to be loaded into a full cache. The cache replacement policy aims to maximize the cache's effectiveness by keeping the most frequently accessed or most valuable data in the cache, while minimizing cache misses and optimizing overall performance.

Types of Cache Replacement Policies

Several cache replacement policies are commonly used, each with its own advantages and disadvantages. The most popular ones are described below.

1. Least Recently Used (LRU)

Under this policy, the entry that has not been accessed for the longest time is replaced. LRU rests on the premise that data accessed recently is likely to be accessed again soon, so the entry whose last access lies furthest in the past is the best candidate for eviction. Implementing LRU requires the cache to track the recency of every entry, for example by keeping entries ordered by last access and updating that order on every hit; if this bookkeeping is not maintained correctly the policy degrades, and the overhead grows with larger caches.

Example:

Suppose we have an empty cache with four locations and the access sequence A, B, C, D, B, A, E, F. The LRU policy tells us which entry to evict whenever the cache is full. After each access, the cache state evolves as follows:

  • Access A: A is added to the cache because it is initially empty: [A]
  • Access B: B is added to the cache after access: [A, B]
  • Access C: C is added to the cache: [A, B, C]
  • Access D: D is added to the cache: [A, B, C, D]
  • Access B: B is already present, so it is moved to the most recently used position: [A, C, D, B]
  • Access A: A is already present, so it is moved to the most recently used position: [C, D, B, A]
  • Access E: The cache is now full, so the least recently used entry must be evicted. C is the least recently used entry, so E replaces it: [D, B, A, E]
  • Access F: The cache is full again, so the least recently used entry, D, is replaced with F: [B, A, E, F]

In conclusion, whenever the cache is full, the least recently used entry is evicted under the LRU policy. The most recently used entries stay at the back of the list, while the least recently used entries drift to the front and are evicted first. This keeps frequently used items in the cache, increasing cache hits and improving performance.
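To make this concrete, below is a minimal sketch of an LRU cache in Python; the class name, the capacity of 4 and the access() method are illustrative choices rather than a standard API. Run against the access sequence above, it ends in the same final state.

```
from collections import OrderedDict

class LRUCache:
    """Minimal LRU cache: evicts the least recently used key when full."""

    def __init__(self, capacity):
        self.capacity = capacity
        self.data = OrderedDict()              # keys ordered from least to most recently used

    def access(self, key):
        if key in self.data:
            self.data.move_to_end(key)         # hit: mark the key as most recently used
        else:
            if len(self.data) >= self.capacity:
                self.data.popitem(last=False)  # evict the least recently used key
            self.data[key] = True              # insert the new key as most recently used
        return list(self.data)

cache = LRUCache(4)
for item in "ABCDBAEF":
    print(item, "->", cache.access(item))      # final state: ['B', 'A', 'E', 'F']
```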

2. First-In-First-Out (FIFO)

Under the FIFO policy, the entry that has been in the cache the longest is removed first, following a simple queue structure: the first item to enter the cache is the first one to be replaced, regardless of how often or how recently it has been used.

Example:

Consider the same 4-entry cache and the access sequence A, B, C, D, B, A, E, F. Under the FIFO policy the cache state evolves as follows:

  • Access A: A is added to the cache because it is initially empty: [A]
  • Access B: B is added to the cache after access: [A, B]
  • Access C: C is added to the cache: [A, B, C]
  • Access D: D is added to the cache: [A, B, C, D]
  • Access B: B is already in the cache, so this is a hit and nothing is replaced: [A, B, C, D]
  • Access A: A is also already in the cache, so again no replacement is needed: [A, B, C, D]
  • Access E: The cache is full, so an entry must be replaced. Under FIFO, the element that has been in the cache the longest is evicted; A was the first to enter, so E takes its place: [B, C, D, E]
  • Access F: The cache is full again, so the oldest remaining entry, B, is replaced with F: [C, D, E, F]

In conclusion, the FIFO policy uses a straightforward queue-like structure: whenever the cache is full and a miss occurs, the entry that entered the cache earliest is overwritten. FIFO disregards access frequency and usage patterns, so recently accessed items can be evicted while older items remain in the cache until newer entries push them out.
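A FIFO cache can be sketched with an ordinary queue. The Python snippet below (the names and interface are again illustrative) reproduces the walkthrough above: hits leave the queue untouched, and misses on a full cache evict the oldest key.

```
from collections import deque

class FIFOCache:
    """Minimal FIFO cache: on a miss, evicts the key that entered the cache first."""

    def __init__(self, capacity):
        self.capacity = capacity
        self.queue = deque()                     # keys in insertion order, oldest on the left
        self.keys = set()

    def access(self, key):
        if key not in self.keys:                 # a hit changes nothing under FIFO
            if len(self.queue) >= self.capacity:
                evicted = self.queue.popleft()   # evict the oldest entry
                self.keys.remove(evicted)
            self.queue.append(key)
            self.keys.add(key)
        return list(self.queue)

cache = FIFOCache(4)
for item in "ABCDBAEF":
    print(item, "->", cache.access(item))        # final state: ['C', 'D', 'E', 'F']
```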

3. Random Replacement

This policy randomly chooses a cache entry to replace, without considering its usage pattern. It is easy to implement, but it may not capture temporal locality well, which can lead to poor cache utilisation.

Example:

Using the same cache and access sequence as before, the cache state under the random replacement policy might look as follows:

  • Access A: A is added to the cache because it is initially empty: [A]
  • Access B: B is added to the cache after access: [A, B]
  • Access C: C is added to the cache: [A, B, C]
  • Access D: D is added to the cache: [A, B, C, D]
  • Access B: B is already in the cache, so no replacement is required: [A, B, C, D]
  • Access A: A is already in the cache, so no replacement is required: [A, B, C, D]
  • Access E: The cache is full, so a randomly chosen entry must be replaced. In this instance D happens to be picked and is replaced with E: [A, B, C, E]
  • Access F: The cache is full again, so another random entry is replaced. This time E happens to be picked and is replaced with F: [A, B, C, F]

In conclusion, the random replacement strategy picks a cache entry to evict at random, without considering usage or access frequency. It is simple to implement, but because it ignores access patterns it may not capture temporal locality well, which can lead to poor cache utilisation.
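A random-replacement cache is little more than a list and a random number generator. The sketch below uses illustrative names, and the seed argument exists only to make a run reproducible; because eviction is random, the final contents vary from run to run.

```
import random

class RandomCache:
    """Minimal random-replacement cache: on a miss, evicts a randomly chosen key."""

    def __init__(self, capacity, seed=None):
        self.capacity = capacity
        self.keys = []
        self.rng = random.Random(seed)           # a fixed seed makes a run reproducible

    def access(self, key):
        if key not in self.keys:
            if len(self.keys) >= self.capacity:
                victim = self.rng.randrange(len(self.keys))  # pick a victim at random
                self.keys.pop(victim)
            self.keys.append(key)
        return list(self.keys)

cache = RandomCache(4, seed=0)
for item in "ABCDBAEF":
    print(item, "->", cache.access(item))        # final contents depend on the random choices
```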

4. Least Frequently Used (LFU)

Under the LFU policy, the cache entry that has been accessed the fewest times is replaced. The assumption is that items with a low access frequency are unlikely to be accessed again soon. LFU keeps a counter for each cache entry, increments it on every access, and evicts the entry with the lowest count when space is needed.

Example:

Consider a similar cache and access pattern for LFU:

  • Access A: A gets added to the cache with a count of 1 because the cache is initially empty. [A (1)]
  • Access B: B is given a count of 1 and added to the cache: [A (1), B (1)]
  • Access C: C is given a count of 1 and added to the cache: [A (1), B (1), C (1)]
  • Access D: D is added to the cache with a count of 1: [A (1), B (1), C (1), D (1)]
  • Access B: B is already in the cache, so its count is increased to 2: [A (1), C (1), D (1), B (2)]
  • Access A: A is already in the cache, so its count is increased to 2: [C (1), D (1), B (2), A (2)]
  • Access E: The cache is full, so an entry must be replaced. Under LFU the entry with the lowest count is evicted; C and D are tied with a count of 1, and assuming ties are broken in favour of the older entry, C is replaced by E, which starts with a count of 1: [D (1), B (2), A (2), E (1)]
  • Access F: The cache is full again. D and E now share the lowest count of 1; the older of the two, D, is replaced by F, which starts with a count of 1: [B (2), A (2), E (1), F (1)]

In conclusion, when the cache is full, the LFU policy removes the cache entry with the lowest access count, on the assumption that items with a low access frequency are less likely to be accessed in the future. The count of an item is incremented each time it is accessed, so this strategy keeps frequently accessed items in the cache while evicting rarely accessed ones.
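The sketch below shows one possible LFU implementation in Python. The names are illustrative, and breaking ties on the lowest count by evicting the oldest entry is an assumption made for this example rather than part of the LFU definition.

```
class LFUCache:
    """Minimal LFU cache: evicts the key with the smallest access count.
    Ties are broken by evicting the key that entered the cache earliest."""

    def __init__(self, capacity):
        self.capacity = capacity
        self.counts = {}                     # key -> access count
        self.order = []                      # keys in insertion order, used for tie-breaking

    def access(self, key):
        if key in self.counts:
            self.counts[key] += 1            # hit: just bump the counter
        else:
            if len(self.counts) >= self.capacity:
                # victim = lowest count; among ties, the earliest inserted key
                victim = min(self.order, key=lambda k: self.counts[k])
                del self.counts[victim]
                self.order.remove(victim)
            self.counts[key] = 1
            self.order.append(key)
        return dict(self.counts)

cache = LFUCache(4)
for item in "ABCDBAEF":
    print(item, "->", cache.access(item))    # final state: {'A': 2, 'B': 2, 'E': 1, 'F': 1}
```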

5. Most Recently Used (MRU)

This policy evicts the entry that was accessed most recently. It is based on the idea that an item which has just been used is unlikely to be needed again soon (as in sequential scans over data that is read only once), so older entries are worth keeping instead.

Example:

Cache state with MRU for the same cache and access sequence would be:

  • Access A: Because there is nothing in the cache at first, A is added to the cache: [A]
  • Access B: The cache is updated with B: [A, B]
  • Access C: C is added to the cache: [A, B, C]
  • Access D: D is added to the cache: [A, B, C, D]
  • Access B: B is already in the cache, so it becomes the most recently used entry: [A, C, D, B]
  • Access A: A is already in the cache, so it becomes the most recently used entry: [C, D, B, A]
  • Access E: The cache is full, so an entry must be replaced. Under the MRU policy, the entry used most recently, A, is evicted and E takes its place: [C, D, B, E]
  • Access F: The cache is full again. E is now the most recently used entry, so it is replaced with F: [C, D, B, F]

In conclusion, when the cache is full, the MRU policy removes the entry that was used most recently, on the assumption that the item just used is the least likely to be needed again soon. By always evicting from the most recently used end, MRU favours keeping older entries, which suits access patterns such as one-pass sequential scans but performs poorly when recently used items are in fact reused.
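A minimal MRU sketch in Python (names and interface are illustrative) looks as follows; on a miss with a full cache it simply evicts the item at the most recently used end of the list.

```
class MRUCache:
    """Minimal MRU cache: on a miss, evicts the most recently used key."""

    def __init__(self, capacity):
        self.capacity = capacity
        self.keys = []                       # ordered from least to most recently used

    def access(self, key):
        if key in self.keys:
            self.keys.remove(key)            # hit: the key will be re-appended as MRU
        elif len(self.keys) >= self.capacity:
            self.keys.pop()                  # miss on a full cache: evict the MRU key
        self.keys.append(key)
        return list(self.keys)

cache = MRUCache(4)
for item in "ABCDBAEF":
    print(item, "->", cache.access(item))    # final state: ['C', 'D', 'B', 'F']
```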

6. Adaptive Replacement Cache (ARC)

The ARC policy dynamically adjusts how cache space is split between recently accessed items and frequently accessed items, combining the benefits of the LRU and LFU policies. It adapts to changes in access patterns over time.

Example:

The ARC policy maintains two lists to track recently and frequently accessed objects: one managed by recency (LRU-like) and one favouring items that have been accessed repeatedly (LFU-like). The basic goal of ARC is to balance keeping recently accessed items against keeping regularly accessed items, in order to capture temporal locality and increase hit rates.

Here is a high-level description of how ARC works:

  1. The LRU and LFU lists, as well as the cache, are empty at first.
  2. The ARC policy monitors access requests and determines whether the requested item is already in the cache:
    1. It counts as a hit if the item is in the cache. In order to reflect the item's recent and frequent usage, the policy modifies the relevant counters and moves the item inside the LRU and LFU lists.
    2. A miss is when the object is not found in the cache. Based on the cache's present condition and its hit/miss history, the policy decides which item to remove.
  3. The LRU and LFU lists' cache size allocation is dynamically changed for the eviction decision in ARC:
    1. The policy removes an item from the LFU list if it is determined that the LFU list is more advantageous based on the hit and miss rates.
    2. An item is removed from the LRU list if the policy determines that the LRU list is more advantageous.
  4. The following considerations influence the decision to evict:
    1. "Ghost" entries record the keys of items that were recently evicted from the cache, without keeping their data. A hit on a ghost entry signals that the corresponding list was sized too small, which helps avoid repeatedly evicting items that were dropped for low frequency but are in fact requested again.
    2. The eviction decision balances promoting recently accessed items (to capture temporal locality) against keeping regularly accessed items (to maximise hit rates).

What gives ARC its complexity is the dynamic adjustment of cache space between the two lists based on observed hit and miss rates. Maximising cache performance requires continuously monitoring and updating counters, maintaining the proper sizes of the lists, and making adaptive eviction decisions.

Even though a simple example cannot capture all of its details, understanding ARC's basic principles and the trade-offs it balances helps explain its behaviour and its advantages in more complicated situations.
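A faithful ARC implementation is beyond the scope of this article, but the highly simplified, ARC-inspired sketch below illustrates the two-list idea: a "recent" segment, a "frequent" segment, a bounded ghost history of keys evicted from the recent side, and a target size that grows when a ghost key is requested again. All names, the initial target and the adaptation rule are assumptions made for illustration; this is not the published ARC algorithm.

```
from collections import OrderedDict

class SimpleAdaptiveCache:
    """A highly simplified, ARC-inspired cache (not the full published ARC algorithm)."""

    def __init__(self, capacity):
        self.capacity = capacity
        self.target = capacity // 2          # desired size of the 'recent' side
        self.recent = OrderedDict()          # items seen once recently (LRU order)
        self.frequent = OrderedDict()        # items seen more than once (LRU order)
        self.ghosts = OrderedDict()          # keys recently evicted from 'recent'

    def _evict(self):
        # Shrink whichever side currently exceeds its share of the cache.
        if len(self.recent) > self.target:
            old, _ = self.recent.popitem(last=False)
            self.ghosts[old] = True          # remember the evicted key, but not its data
            if len(self.ghosts) > self.capacity:
                self.ghosts.popitem(last=False)
        elif self.frequent:
            self.frequent.popitem(last=False)
        else:
            self.recent.popitem(last=False)

    def access(self, key):
        if key in self.recent:               # second access: promote to 'frequent'
            del self.recent[key]
            self.frequent[key] = True
            return "hit"
        if key in self.frequent:             # repeated access: refresh its recency
            self.frequent.move_to_end(key)
            return "hit"
        if key in self.ghosts:               # recently evicted key returned: grow 'recent'
            del self.ghosts[key]
            self.target = min(self.capacity, self.target + 1)
        if len(self.recent) + len(self.frequent) >= self.capacity:
            self._evict()
        self.recent[key] = True              # a miss always lands in 'recent'
        return "miss"

cache = SimpleAdaptiveCache(4)
for item in "ABCDBAEF":
    print(item, "->", cache.access(item))
```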

Conclusion

In conclusion, cache replacement policies determine which entries are removed from the cache as it fills up. Different policies, including LRU, FIFO, Random, LFU, MRU, and ARC, make this eviction decision in different ways, trading implementation simplicity against how well they exploit access patterns.