BCS401 Operating System Semester IV · AY 2025-26 onward
Unit 3 · CPU Scheduling, Threads & Deadlocks

Lecture 1: Scheduling concepts & performance criteria

Learning outcomes

  • Describe common scheduling concepts and performance metrics used to evaluate scheduling algorithms.

Prerequisites

  • Process concept and process states
  • Ready queue and CPU allocation basics
  • Basic idea of multiprogramming
  • Difference between CPU burst and I/O burst

Lecture Notes

Scheduling Concepts & Performance Criteria

Introduction

In a multiprogramming operating system, many processes may be present in memory at the same time, but the CPU can execute only one process at a time on a single-core system. Therefore, the operating system must decide which ready process should get the CPU next. This decision-making is called CPU scheduling.

CPU scheduling is one of the most important functions of an operating system because it directly affects system performance. A poor scheduling policy can make the system feel slow and unfair, while a good scheduling policy can improve processor utilization, response time, and overall efficiency.

Key Idea

CPU scheduling is not merely about handing the processor to some process. It is about doing so in a way that keeps the system efficient, responsive, and fair.

Basic scheduling idea

A process typically moves through several states during its lifetime. For scheduling, the most important state is the ready state. The scheduler selects one process from the ready queue and allocates the CPU to it.
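The selection step above can be sketched in a few lines. This is a deliberately minimal model, with hypothetical process names and a plain FIFO ready queue; real schedulers use richer policies, but the core loop of "pick one ready process, give it the CPU" looks like this:

```python
from collections import deque

# Hypothetical ready queue holding three process names.
ready_queue = deque(["P1", "P2", "P3"])

run_order = []
while ready_queue:
    proc = ready_queue.popleft()  # scheduler selects one ready process
    run_order.append(proc)        # dispatcher hands it the CPU

print(run_order)  # ['P1', 'P2', 'P3']
```

With a FIFO queue the processes simply run in arrival order; later algorithms differ mainly in how this selection step is made.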

Schedulers in an operating system

An operating system may use different schedulers for different purposes. The long-term scheduler decides which jobs are admitted into the system. The short-term scheduler, also called the CPU scheduler, selects one process from the ready queue and allocates the CPU to it. The medium-term scheduler may temporarily remove processes from memory and later bring them back to manage memory pressure.

Among these, the short-term scheduler is the most directly related to CPU scheduling. It is invoked very frequently, so it must be fast and efficient.

Dispatcher

Once the short-term scheduler has chosen a process, the dispatcher gives control of the CPU to that process. This involves:

  • performing a context switch,
  • switching the CPU to user mode if required, and
  • jumping to the correct instruction so the selected process can continue execution.

The time taken by the dispatcher to stop one process and start another is called dispatch latency. Lower dispatch latency is preferred because this switching time is pure overhead: no user process makes progress while it elapses.

CPU burst and I/O burst cycle

Most processes do not use the CPU continuously. Instead, they alternate between periods of CPU activity and periods of waiting for input/output. These are called CPU bursts and I/O bursts.

CPU-bound processes usually have long CPU bursts, while interactive or I/O-bound processes tend to have shorter CPU bursts and frequent I/O waits. A scheduling policy should ideally handle both kinds of processes well.

When does CPU scheduling take place?

Scheduling decisions are commonly needed when:

  • a running process finishes its CPU burst and moves to the waiting state,
  • a waiting process completes I/O and enters the ready state,
  • a process terminates, or
  • a running process is preempted by the operating system.

These situations form the practical basis of scheduling in an operating system.

Preemptive and non-preemptive scheduling

In non-preemptive scheduling, once a process gets the CPU, it keeps it until it either completes its CPU burst or voluntarily enters a waiting state. This approach is simpler and has lower overhead, but a long process can delay all other waiting processes.

In preemptive scheduling, the operating system can take the CPU away from a running process. This may happen when a higher-priority process arrives or when a time quantum expires. Preemptive scheduling improves responsiveness, especially in interactive systems, but introduces additional context-switch overhead.

Non-Preemptive

  • Simple to implement
  • Less context switching
  • Suitable for batch systems
  • Poor response for short urgent jobs

Preemptive

  • Better responsiveness
  • Useful in time-sharing systems
  • Can support priorities effectively
  • Higher switching overhead
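The contrast between the two approaches can be made concrete with a small simulation. The sketch below uses a hypothetical two-job workload (both arriving at time 0): a long CPU-bound job and a short "urgent" job queued behind it. Under non-preemptive FCFS the short job must wait for the long one; under preemptive Round Robin with a small time quantum it finishes much earlier:

```python
from collections import deque

# Hypothetical workload: (name, burst) pairs, both arriving at time 0.
jobs = [("P_long", 10), ("P_short", 2)]

def fcfs_completion(jobs):
    """Non-preemptive FCFS: each job runs to completion in queue order."""
    t, done = 0, {}
    for name, burst in jobs:
        t += burst
        done[name] = t
    return done

def round_robin_completion(jobs, quantum):
    """Preemptive Round Robin: each job runs at most `quantum` per turn."""
    t, done = 0, {}
    queue = deque(jobs)
    while queue:
        name, remaining = queue.popleft()
        run = min(quantum, remaining)
        t += run
        if remaining > run:
            queue.append((name, remaining - run))  # preempted, back of queue
        else:
            done[name] = t
    return done

print(fcfs_completion(jobs))            # P_short completes at t = 12
print(round_robin_completion(jobs, 2))  # P_short completes at t = 4
```

Note the trade-off: Round Robin improves the short job's completion time from 12 to 4, but the long job now finishes at 12 instead of 10, and the simulation ignores the context-switch cost that each preemption would add in a real system.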

Performance criteria for scheduling algorithms

A scheduling algorithm is judged by certain measurable performance criteria. These criteria help us compare different scheduling methods and decide which one is more suitable for a given environment.

Criterion        | Meaning                                                          | Desired Goal
-----------------|------------------------------------------------------------------|-----------------
CPU Utilization  | Percentage of time the CPU is busy doing useful work             | Maximize
Throughput       | Number of processes completed per unit time                      | Maximize
Turnaround Time  | Total time from process arrival to completion                    | Minimize
Waiting Time     | Total time a process spends waiting in the ready queue           | Minimize
Response Time    | Time from arrival to the first response or first CPU allocation  | Minimize
Fairness         | Each process should get a reasonable chance to execute           | Avoid starvation

Important formulas

The following formulas are frequently used in scheduling numericals:

Turnaround Time (TAT) = Completion Time − Arrival Time

Waiting Time (WT) = Turnaround Time − CPU Burst Time

Response Time (RT) = First CPU Allocation Time − Arrival Time
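The three formulas translate directly into code. The sketch below defines one helper per formula and applies them to a hypothetical process (arrival 0, burst 6, completion 10, first CPU allocation at 3); the numbers here are illustrative only:

```python
def turnaround_time(completion, arrival):
    # TAT = Completion Time - Arrival Time
    return completion - arrival

def waiting_time(tat, burst):
    # WT = Turnaround Time - CPU Burst Time
    return tat - burst

def response_time(first_alloc, arrival):
    # RT = First CPU Allocation Time - Arrival Time
    return first_alloc - arrival

# Hypothetical process: AT = 0, BT = 6, CT = 10, first CPU at t = 3.
tat = turnaround_time(10, 0)  # 10 ms in the system
wt = waiting_time(tat, 6)     # 4 ms spent waiting
rt = response_time(3, 0)      # 3 ms until first CPU service
```

Note that waiting time is derived from turnaround time rather than measured directly: whatever part of a process's time in the system is not CPU execution must have been spent waiting.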

Meaning of the criteria

CPU utilization tells us how effectively the CPU is being used. If the CPU is idle too often, the system is wasting processing power.

Throughput tells us how many jobs are completed in a certain amount of time. A higher throughput generally indicates better productive use of the system.

Turnaround time is especially important in batch systems, where users are concerned about how long a submitted job takes to finish.

Waiting time shows how long processes remain in the ready queue without getting CPU service. Excessive waiting time is a sign of poor scheduling efficiency.

Response time is very important in interactive and time-sharing systems because users expect fast initial feedback.

Fairness ensures that no process is postponed indefinitely. If a process waits forever while others continue to get service, the system suffers from starvation.

Quick Concept Check

Q1. Why is response time more important in interactive systems than in batch systems?

Q2. Can one scheduling algorithm optimize all criteria at the same time? Why or why not?

Q3. Why does a scheduler need to be fast, especially the short-term scheduler?

Conclusion

CPU scheduling is the mechanism by which the operating system decides which process should run next. To understand scheduling properly, one must understand the roles of the scheduler and dispatcher, the CPU–I/O burst cycle, the difference between preemptive and non-preemptive scheduling, and the major performance criteria used to evaluate scheduling algorithms.

These concepts form the foundation for the study of specific scheduling algorithms such as FCFS, SJF, Priority Scheduling, and Round Robin.

Worked Example

Worked Example: Calculating Basic Scheduling Metrics

Consider a process P1 with:

  • Arrival Time (AT) = 2 ms
  • CPU Burst Time (BT) = 5 ms
  • Completion Time (CT) = 11 ms
  • First CPU Allocation Time = 4 ms

Step 1: Turnaround Time

TAT = CT − AT = 11 − 2 = 9 ms

Step 2: Waiting Time

WT = TAT − BT = 9 − 5 = 4 ms

Step 3: Response Time

RT = First CPU Allocation Time − AT = 4 − 2 = 2 ms

Interpretation

  • The process spent 9 ms in the system from arrival to completion.
  • Out of this, 5 ms was actual CPU execution.
  • The remaining 4 ms was waiting time.
  • The process received its first CPU service 2 ms after its arrival.
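The same calculation extends naturally to a whole schedule. The sketch below assumes a hypothetical three-process FCFS workload (names, arrival times, and bursts invented for illustration) and computes per-process metrics plus the average waiting time, a common exam-style exercise:

```python
# Hypothetical FCFS workload: (name, arrival, burst), sorted by arrival.
procs = [("P1", 0, 4), ("P2", 1, 3), ("P3", 2, 1)]

t = 0
rows = []
for name, at, bt in procs:
    start = max(t, at)  # CPU may sit idle until the process arrives
    ct = start + bt     # completion time
    tat = ct - at       # TAT = CT - AT
    wt = tat - bt       # WT = TAT - BT
    rt = start - at     # RT = first allocation - AT (under FCFS, the start time)
    rows.append((name, ct, tat, wt, rt))
    t = ct              # next process starts when this one completes

avg_wt = sum(r[3] for r in rows) / len(rows)
print(rows)    # per-process (name, CT, TAT, WT, RT)
print(avg_wt)  # average waiting time: (0 + 3 + 5) / 3
```

Under FCFS, response time equals waiting time for each process, because a process's first CPU allocation is also its only one; preemptive algorithms break that equality.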

One-Page Summary

One-Page Summary: Scheduling Concepts & Performance Criteria

CPU scheduling is the operating system activity of selecting one process from the ready queue and allocating the CPU to it. In a multiprogramming system, several processes may be in memory at once, but only one process can use the CPU at a time on a single-core processor. Therefore, scheduling is necessary to keep the CPU busy and the system efficient.

The short-term scheduler is the CPU scheduler. It chooses one ready process for execution. The dispatcher then gives control of the CPU to that process. The time taken to do this is called dispatch latency. A process usually alternates between CPU bursts and I/O bursts, so scheduling decisions arise repeatedly as processes move between ready, running, and waiting states.

Scheduling may be non-preemptive or preemptive. In non-preemptive scheduling, a running process keeps the CPU until it completes or blocks. In preemptive scheduling, the operating system may interrupt a running process and allocate the CPU to another process. Preemptive scheduling improves responsiveness but increases context-switch overhead.

The main performance criteria used to evaluate scheduling algorithms are: CPU utilization, throughput, turnaround time, waiting time, response time, and fairness. CPU utilization and throughput should be maximized, while turnaround time, waiting time, and response time should be minimized. Fairness means that every process should eventually receive CPU service.

The basic formulas are: TAT = CT − AT, WT = TAT − BT, and RT = First CPU Allocation Time − AT.

These concepts are the foundation for understanding all scheduling algorithms. Before studying FCFS, SJF, Priority Scheduling, or Round Robin, one must first understand why scheduling is needed, what the scheduler and dispatcher do, and how system performance is measured.