BCS401 Operating System Semester IV · AY 2025-26 onward
Unit 2 · Concurrent Processes

Lecture 3: Mutual exclusion & critical section problem

Learning outcomes

  • Define mutual exclusion and explain why it is required in concurrent systems.
  • Identify a critical section in a process and describe the standard critical-section structure.
  • State and explain the three correctness requirements: mutual exclusion, progress, and bounded waiting.
  • Recognize typical shared resources and classify common critical-section mistakes (deadlock/starvation risk).

Prerequisites

  • Principle of concurrency and race-condition basics (Lecture 2).
  • Process/thread concept and the meaning of context switching.
  • Basic understanding of “shared data” (shared variable/buffer/file/device).

Lecture Notes

Main content

1) Mutual Exclusion (ME)

Mutual exclusion means that if one process (or thread) is executing in its critical section (where shared data is accessed/updated), then no other process is allowed to execute its critical section for the same shared resource at the same time. This prevents race conditions and keeps shared data consistent.

Shared Resources Requiring ME

  • Shared variables and counters
  • Shared data structures (queues, stacks, linked lists)
  • Shared buffers (for example, producer–consumer)
  • Files (shared file pointer, append position)
  • Device controllers (printer, disk, network interface)
  • Kernel data structures (process table, ready queue, memory maps)

Consequences if ME is not enforced

  • Lost update: one write overwrites another due to an unsafe interleaving.
  • Inconsistent read: a process reads data while another is in the middle of updating it.
  • Data corruption: invariants break (for example, wrong linked-list pointers).
  • Non-deterministic results: output changes with timing/scheduling.
Fig 3.1: Mutual exclusion: only one process in the critical section at a time. While P0 executes in its critical section accessing the shared resource (buffer/counter/file), P1 is blocked and waits until P0 exits the critical section and releases permission.
Mutual exclusion prevents conflicting access to shared resources and avoids race-condition failures.

2) Critical Section (CS) Structure

A critical section is the part of a program where shared data (or a shared resource) is accessed. Since the CPU can switch between processes at almost any moment, we wrap the critical section with an entry and exit protocol so that only one process can enter at a time.

2.1 Standard Critical-Section Structure

do {
  // Entry section      (request permission)
  // Critical section   (access/update shared data)
  // Exit section       (release permission)
  // Remainder section  (all other code)
} while (true);
Fig 3.2: Entry–Critical–Exit–Remainder structure. Entry section (request permission) → Critical section (shared-data update) → Exit section (release permission) → Remainder section (no shared data accessed).
Synchronization logic is placed in the Entry and Exit sections to protect the Critical Section.

3) The Critical Section Problem

The critical section problem is to design a protocol that processes follow so that: (i) only one process at a time can execute in its critical section (for a shared resource), and (ii) the system remains fair and does not postpone a requesting process indefinitely.

3.1 Correctness Requirements

  • Mutual Exclusion. If process Pi is executing in its critical section, then no other process may execute in its critical section for the same shared resource. If violated: race conditions and data corruption.
  • Progress. If no process is in the critical section and some processes wish to enter, the choice of which process enters next can involve only processes that are not in their remainder sections, and this choice cannot be postponed indefinitely. If violated: unnecessary delay / indefinite postponement.
  • Bounded Waiting. After a process requests entry to its critical section, there must be a bound on the number of times other processes may enter their critical sections before the request is granted. If violated: starvation (the requesting process may wait forever).
Exam tip: A solution is correct only if it satisfies all three conditions: mutual exclusion, progress, and bounded waiting.

4) How mutual exclusion is achieved (overview)

Different approaches exist to solve the critical-section problem. At this stage, remember the categories. (Implementations follow in upcoming lectures.)

Software Protocols

  • Dekker’s algorithm (two processes)
  • Peterson’s solution (two processes)

Hardware Support

  • Atomic instructions (Test-and-Set, Compare-and-Swap)
  • Interrupt disabling (limited use case; feasible only on single-processor systems)

OS Primitives

  • Mutex locks
  • Semaphores
  • Monitors and condition variables
Best practice: Keep critical sections short—only protect the shared-data update. Avoid long computations or I/O inside the critical section.

Reference basis: Operating System Concepts (10th ed.), Critical-Section Problem (Concurrency Control).

Worked Example

Example: Two processes updating a shared bank balance

Shared variable: balance. Two processes run concurrently: Deposit adds 100, and Withdraw subtracts 50. Without mutual exclusion, the final value can be incorrect due to lost update.

Shared data and operations

// shared
int balance = 1000;

// Deposit process
balance = balance + 100;

// Withdraw process
balance = balance - 50;

What can go wrong

  • Both processes can read the same old value (1000).
  • Deposit computes 1100, Withdraw computes 950.
  • If Withdraw writes last, final becomes 950 (deposit lost).
Fix concept: Put the balance update inside a critical section and enforce mutual exclusion using a lock/semaphore.
lock(mutex);
balance = balance + 100;   // or -50
unlock(mutex);

One-Page Summary

  • Mutual exclusion: only one process/thread at a time can access a shared resource in a conflicting way.
  • Critical section: the code segment where shared data/resource is accessed (must be protected).
  • Critical-section structure: Entry → Critical Section → Exit → Remainder.
  • Critical section problem: design a protocol that guarantees safe access with fairness and responsiveness.
  • Correctness requirements:
    • Mutual Exclusion (safety)
    • Progress (no unnecessary delay)
    • Bounded Waiting (no starvation)
  • Approaches: software protocols (Peterson/Dekker), hardware atomic operations, OS primitives (mutex/semaphores/monitors).
  • Best practice: keep the critical section short and protect only shared updates.