CPU Scheduling Criteria in Operating Systems
CPU scheduling is the process by which the operating system manages the allocation of CPU time to different processes or threads. When multiple processes are competing for the CPU, the operating system must decide which process to execute next, and for how long.
- CPU scheduling is a critical component of modern operating systems because it allows multiple processes to share a single CPU, maximizing the utilization of computing resources. By switching between processes quickly and efficiently, CPU scheduling enables the operating system to provide the illusion of parallelism, allowing multiple programs to run seemingly simultaneously.
- The choice of scheduling algorithm depends on various factors such as the type of workload, system requirements, and performance goals.
Following are the commonly used criteria for CPU scheduling in an operating system:
- CPU Utilization: The primary objective of a CPU scheduler is to maximize the CPU utilization. The CPU should be busy executing processes as much as possible to achieve maximum efficiency.
- Throughput: The throughput is defined as the number of processes completed per unit of time. The CPU scheduler must ensure that the maximum number of processes is completed in the shortest possible time.
- Turnaround Time: The turnaround time is the time taken from the submission of a process to its completion. The CPU scheduler must ensure that the turnaround time for each process is minimized.
- Waiting Time: The waiting time is the time that a process spends waiting in the ready queue before it is assigned to the CPU. The CPU scheduler must ensure that the waiting time for each process is minimized.
- Response Time: The response time is the time taken from the submission of a request until the first response is produced. The CPU scheduler must ensure that the response time for each process is minimized, especially for interactive processes that require quick responses.
- Fairness: The CPU scheduler should provide fair allocation of CPU time to all processes. No process should be starved for CPU time, and all processes should get a fair share of the CPU time.
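The criteria above can be computed directly once a schedule is known. The following is a minimal sketch in Python; the process names, arrival times, burst times, and completion times are the example values used later in this article (FCFS order P1, P2, P3), not measurements from a real system.

```python
# Turnaround time = completion time - arrival time
# Waiting time    = turnaround time - burst time
def metrics(arrival, burst, completion):
    """Return per-process (turnaround, waiting) dictionaries."""
    turnaround = {p: completion[p] - arrival[p] for p in arrival}
    waiting = {p: turnaround[p] - burst[p] for p in arrival}
    return turnaround, waiting

arrival    = {"P1": 0, "P2": 1, "P3": 2}
burst      = {"P1": 10, "P2": 4, "P3": 6}
completion = {"P1": 10, "P2": 14, "P3": 20}  # FCFS: P1, then P2, then P3

tat, wt = metrics(arrival, burst, completion)
print(tat)  # turnaround time per process
print(wt)   # waiting time per process
```

Averaging these per-process values gives the average turnaround and waiting times that scheduling algorithms are typically compared on.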
Here's an example of how CPU scheduling works in an operating system:
Suppose there are three processes P1, P2, and P3, and they all request CPU time at the same time. Each process has a different CPU burst time, which is the time required to execute the process on the CPU.
Process P1 has a CPU burst time of 5 milliseconds, process P2 has a burst time of 7 milliseconds, and process P3 has a burst time of 3 milliseconds.
The CPU scheduling algorithm decides which process to execute first and how long to allocate the CPU time to each process.
For example, if we use the Shortest Job First (SJF) scheduling algorithm, the operating system will select the process with the shortest burst time first. In this case, the operating system will select P3 first because it has the shortest burst time of 3 milliseconds. After P3 finishes executing, the operating system will select the next shortest job, which is P1. Finally, P2 will be executed.
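Because all three processes arrive at the same time, non-preemptive SJF reduces to sorting by burst time. This is a small sketch of that selection using the burst times from the example above:

```python
# Burst times (milliseconds) from the example: P1=5, P2=7, P3=3
bursts = {"P1": 5, "P2": 7, "P3": 3}

# Non-preemptive SJF with simultaneous arrival: run shortest burst first
order = sorted(bursts, key=bursts.get)
print(order)  # ['P3', 'P1', 'P2']

# Each process waits for the total burst time of the jobs run before it
waiting, elapsed = {}, 0
for p in order:
    waiting[p] = elapsed
    elapsed += bursts[p]
print(waiting)  # {'P3': 0, 'P1': 3, 'P2': 8}
```

Note that this ordering minimizes the average waiting time, which is why SJF is provably optimal on that metric when burst times are known in advance.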
The actual order of execution may vary depending on the scheduling algorithm and the characteristics of the workload. The objective of CPU scheduling is to allocate CPU time efficiently while minimizing turnaround time, waiting time, and other performance metrics.
Here's an example of CPU scheduling using the First-Come-First-Serve (FCFS) algorithm:
Suppose we have three processes, P1, P2, and P3, that need to be executed on a single CPU, and their arrival times and burst times are as follows:
| Process | Arrival Time | Burst Time |
| ------- | ------------ | ---------- |
| P1      | 0            | 10         |
| P2      | 1            | 4          |
| P3      | 2            | 6          |
Using the FCFS algorithm, the operating system will execute the processes in the order in which they arrive, starting with P1.
The Gantt chart for the execution of the processes would look like this:
| Process    | P1 | P2 | P3 |
| ---------- | -- | -- | -- |
| Start Time | 0  | 10 | 14 |
| End Time   | 10 | 14 | 20 |
In this example, P1 arrives first, so it is executed first. P1 has a burst time of 10, so it runs for 10 units of time until it completes at time 10. P2 arrives at time 1 but must wait in the ready queue until P1 finishes; it then runs for 4 units, completing at time 14. Finally, P3, which arrived at time 2, runs for 6 units and completes at time 20.
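The FCFS run above can be sketched as a short simulation. This is a minimal, assumption-laden example: it uses the arrival and burst times from the table, sorts by arrival time, and lets the CPU idle if no process has arrived yet (which does not happen with this particular workload).

```python
# (name, arrival time, burst time) from the FCFS example table
procs = [("P1", 0, 10), ("P2", 1, 4), ("P3", 2, 6)]

clock, results = 0, {}
for name, arrival, burst in sorted(procs, key=lambda p: p[1]):
    clock = max(clock, arrival)   # CPU idles until the process arrives, if needed
    clock += burst                # non-preemptive: run to completion
    results[name] = {
        "completion": clock,
        "turnaround": clock - arrival,          # completion - arrival
        "waiting": clock - arrival - burst,     # turnaround - burst
    }

print(results)
```

Running this reproduces the Gantt chart: completion times of 10, 14, and 20, with P2 waiting 9 units and P3 waiting 12 units.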
Note that this is just one example of CPU scheduling; different scheduling algorithms would produce different execution orders and completion times for the same set of processes.