Scheduling is the process of sharing computing resources such as memory, processor time, and bandwidth among the processes, threads, data flows, and applications that need them. It balances the load on the system, distributes resources appropriately, and assigns priorities according to a set of rules.
Through scheduling, a computer system can serve all requests while providing an acceptable quality of service.
In a system, scheduling is carried out by a component aptly called the scheduler, which is primarily concerned with three things:
1. Latency: – Latency is the turnaround time: the time to complete a job, measured from the moment the job is submitted to the moment it finishes, including any waiting time.
2. Response Time: – Response time is the time from when a request is submitted until the system produces its first response.
3. Throughput: – Throughput is the number of tasks completed per unit of time.
What is CPU Scheduling in OS
CPU scheduling is the method that decides which process gets to use the CPU. It keeps the CPU busy while another process is on hold or in a waiting state, for example because it lacks a resource such as I/O. The purpose of CPU scheduling is to improve the efficiency of the system and make it fast and fair.
Whenever the CPU becomes idle, the operating system (OS) chooses a process from the ready queue for execution. The choice is made by the short-term scheduler: it selects one of the processes in memory that are ready to execute and allocates the CPU to it.
A CPU-scheduling decision takes place in the following cases:
- When a process switches from the running state to the waiting state. Example: – a process requests a resource that is held by another process, so it leaves the running state and enters the waiting state.
- When a process switches from the running state to the ready state. Example: an interrupt occurs.
- When a process switches from the waiting state to the ready state. Example: the completion of I/O.
- When a process terminates.
In cases 1 and 4, there is no choice in terms of scheduling: a new process must be selected for execution. In cases 2 and 3, however, there is a choice.
If scheduling takes place only under cases 1 and 4, the scheduling scheme is known as non-preemptive scheduling; otherwise, it is known as preemptive scheduling.
Types of Scheduling in OS
There are two types of Scheduling:
- Preemptive Scheduling
- Non-Preemptive Scheduling
Preemptive scheduling is a type of scheduling in which scheduling decisions are made when a process switches from the running state to the ready state or from the waiting state to the ready state. In preemptive scheduling, the CPU is allocated to a process for a limited time and then taken back; if the process still has CPU burst time remaining, it is put back into the ready queue, where it remains until it gets its next turn to execute.
Preemptive scheduling is often priority-based: the process with the highest priority is always the one that runs and uses resources such as the CPU.
Algorithms used in preemptive scheduling include Priority Scheduling, Shortest Remaining Time First (SRTF), and Round Robin (RR).
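Round Robin is the easiest of these to see in action. A minimal sketch of it (assuming all processes arrive at time 0, with hypothetical process names and burst times chosen for illustration):

```python
from collections import deque

def round_robin(burst_times, quantum):
    """Simulate Round Robin scheduling; returns each process's completion time.

    burst_times: dict mapping a process name to its CPU burst length.
    All processes are assumed to be in the ready queue at time 0.
    """
    remaining = dict(burst_times)
    queue = deque(burst_times)              # ready queue, FIFO order
    clock = 0
    completion = {}
    while queue:
        pid = queue.popleft()
        run = min(quantum, remaining[pid])  # run for at most one time slice
        clock += run
        remaining[pid] -= run
        if remaining[pid] == 0:
            completion[pid] = clock         # process finished
        else:
            queue.append(pid)               # preempted: back of the ready queue
    return completion

print(round_robin({"P1": 5, "P2": 3, "P3": 1}, quantum=2))
# → {'P3': 5, 'P2': 8, 'P1': 9}
```

Note how a process whose burst exceeds the quantum is preempted and re-queued, which is exactly the running-to-ready transition described above.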
Non-preemptive scheduling makes scheduling decisions only when a process terminates or switches from the running state to the waiting state. Once the CPU is allocated to a process, the process holds it until it terminates or enters the waiting state. A running process cannot be interrupted in the middle of its execution; instead, the system waits until its CPU burst is complete and only then assigns the CPU to another process.
Non-preemptive scheduling is the only option on some hardware platforms, because it does not require the timer hardware that preemptive scheduling needs.
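First-Come, First-Served (FCFS) is the simplest non-preemptive algorithm and illustrates the "hold the CPU until the burst completes" rule. A minimal sketch, assuming all processes arrive at time 0 (the burst values are the classic 24/3/3 illustration, chosen only as an example):

```python
def fcfs(bursts):
    """Non-preemptive FCFS: each process keeps the CPU for its whole burst.

    bursts: list of (name, burst) pairs in arrival order; all arrive at 0.
    Returns {name: (waiting_time, completion_time)}.
    """
    clock = 0
    result = {}
    for name, burst in bursts:
        waiting = clock      # time spent waiting in the ready queue
        clock += burst       # run to completion, no preemption possible
        result[name] = (waiting, clock)
    return result

print(fcfs([("P1", 24), ("P2", 3), ("P3", 3)]))
# → {'P1': (0, 24), 'P2': (24, 27), 'P3': (27, 30)}
```

The short processes P2 and P3 must wait behind P1's long burst, which is the starvation risk the comparison table below attributes to non-preemptive scheduling.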
Difference between Preemptive and Non-Preemptive Scheduling
| Preemptive Scheduling | Non-Preemptive Scheduling |
| --- | --- |
| The CPU is allocated to a process for a limited time. | The CPU is allocated to a process until it finishes its execution. |
| An interrupt may take the CPU away in the middle of a process's execution. | An interrupt cannot take the CPU away until the process completes its execution. |
| Preemptive scheduling is flexible. | Non-preemptive scheduling is rigid. |
| There is an overhead of scheduling processes (context switching). | There is no overhead of switching away from a running process. |
| Preemptive scheduling carries a cost, e.g. for keeping shared data consistent. | Non-preemptive scheduling does not carry this cost. |
| A low-priority process can starve when high-priority processes keep arriving in the ready queue. | A process with a short burst time can starve while a process with a long burst time holds the CPU. |
CPU Scheduling: Dispatcher
The dispatcher is the module in the CPU scheduling function that gives control of the CPU to the process selected by the short-term scheduler. This involves:
- Context switching
- Switching to user mode.
- Jumping to the proper location in the user program to restart that program from where it left off.
The dispatcher must be as fast as possible, as it is invoked during every process switch. The time the dispatcher takes to stop one process and start another running is called the Dispatch Latency.
The following figure shows the Dispatch latency.
CPU Scheduling: Scheduling Criteria
There are various criteria for CPU Scheduling:
- CPU Utilization
- Throughput
- Load Average
- Turnaround Time
- Waiting Time
- Response Time
Throughput: – Throughput is the number of processes that complete their execution per unit of time. Depending on the processes involved, this can range from ten processes per second to one process per hour.
CPU Utilization: – The operating system must keep the CPU as busy as possible for better output. Conceptually, CPU utilization can range from 0 to 100 percent; in a real system it should range from about 40 percent (for a lightly loaded system) to about 90 percent (for a heavily loaded system).
Load Average: – Load average is the average number of processes present in the ready queue and waiting for the CPU.
Turnaround Time: – Turnaround time is the total amount of time a process takes from its arrival to its completion. In other words, it is the total time needed to execute a specific process.
Waiting Time: – Waiting time is the cumulative amount of time a process has spent waiting in the ready queue for allocation of the CPU.
Response Time: – Response time is the difference between a process's arrival time and the time at which it first gets the CPU.
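These criteria can be computed mechanically from a finished schedule. A minimal sketch, assuming each process is recorded as (arrival, burst, first time on the CPU, completion time) — the sample numbers reuse the FCFS run of P1/P2/P3 with bursts 24, 3, and 3, all arriving at time 0:

```python
def criteria(schedule):
    """Compute averaged scheduling criteria from a finished schedule.

    schedule: {name: (arrival, burst, first_cpu, completion)}.
    turnaround = completion - arrival
    waiting    = turnaround - burst
    response   = first_cpu - arrival
    throughput = processes completed / total elapsed time
    """
    n = len(schedule)
    turnaround = [c - a for a, b, f, c in schedule.values()]
    waiting = [c - a - b for a, b, f, c in schedule.values()]
    response = [f - a for a, b, f, c in schedule.values()]
    throughput = n / max(c for a, b, f, c in schedule.values())
    return {
        "avg_turnaround": sum(turnaround) / n,
        "avg_waiting": sum(waiting) / n,
        "avg_response": sum(response) / n,
        "throughput": throughput,
    }

print(criteria({"P1": (0, 24, 0, 24),
                "P2": (0, 3, 24, 27),
                "P3": (0, 3, 27, 30)}))
# → {'avg_turnaround': 27.0, 'avg_waiting': 17.0, 'avg_response': 17.0, 'throughput': 0.1}
```

For a non-preemptive schedule like this one, waiting time and response time coincide, since a process runs straight through once it first gets the CPU; under preemptive scheduling the two generally differ.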