BCS401 Operating System Semester IV · AY 2025-26 onward
Unit 3 · CPU Scheduling, Threads & Deadlocks

Lecture 6: Thread management & models

Learning outcomes

  • Summarise various models for thread management and mapping user threads to kernel threads.

Prerequisites

  • Process concept and the process vs thread difference
  • Basic idea of concurrency and CPU execution
  • Kernel mode and user mode
  • Blocking system calls and multicore system basics

Lecture Notes


Thread Management and Models

Learning Focus

Understand how threads are managed, how user threads are mapped to kernel threads, and why different mapping models lead to different performance, blocking behaviour, and parallelism.


1) Why Thread Management Matters

A process may contain multiple threads of execution. Each thread represents an independent sequence of instructions, while sharing the process code, data, and other resources. Thread management is important because the operating system must decide how threads are created, scheduled, supported by libraries, and mapped to the kernel.

The main design question is: How should user-level threads be mapped to kernel threads? The answer leads to different thread models, each with its own strengths and limitations.

2) User Threads and Kernel Threads

User-Level Threads

These are threads managed in user space by a thread library. The kernel may not know about each individual user thread.

  • Fast creation and management
  • Efficient switching in user space
  • Limited by the mapping model used underneath

Kernel-Level Threads

These are threads managed directly by the operating system kernel, which creates, schedules, and destroys them as independent schedulable entities.

  • Can be scheduled by the OS independently
  • Provide better concurrency: a blocking call stalls only the calling thread
  • More overhead in creation and management
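On mainstream operating systems, a thread created through a standard library is backed by its own kernel thread, and the kernel assigns it an OS-level thread id. A small sketch in Python (CPython 3.8+, where each `threading.Thread` wraps a kernel thread; the barrier simply keeps both threads alive at the same moment):

```python
import threading

native_ids = []
barrier = threading.Barrier(2)  # force both threads to be alive at once

def record_id():
    barrier.wait()                                 # rendezvous with the other thread
    native_ids.append(threading.get_native_id())   # id assigned by the kernel

threads = [threading.Thread(target=record_id) for _ in range(2)]
for t in threads:
    t.start()
for t in threads:
    t.join()

# Two simultaneously live threads always have two distinct kernel-level ids.
print(len(set(native_ids)), "distinct kernel thread ids")
```

The distinct native ids show that the kernel, not just the library, knows about each thread.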

3) Thread Libraries

A thread library provides the programmer with an API for creating and managing threads. Thread libraries may be implemented in two main ways:

  1. User-level library: all code and data structures remain in user space, so library calls do not require system calls.
  2. Kernel-level library: the operating system directly supports the library, so thread operations often involve system calls.

Common thread libraries include POSIX Pthreads, Windows threads, and Java threads. In practice, Java threads are often implemented using the host operating system’s thread support.
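Whatever the underlying implementation, these libraries expose the same basic create/join pattern. A minimal sketch of that pattern using Python's `threading` module (the `worker` function and its squaring task are purely illustrative):

```python
import threading

results = []
lock = threading.Lock()  # shared data needs synchronisation

def worker(n):
    # Each thread computes independently, then publishes its result.
    square = n * n
    with lock:
        results.append(square)

# Create and start the threads (analogous to pthread_create).
threads = [threading.Thread(target=worker, args=(i,)) for i in range(4)]
for t in threads:
    t.start()

# Wait for all of them to finish (analogous to pthread_join).
for t in threads:
    t.join()

print(sorted(results))  # completion order varies; the values do not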

4) Thread Mapping Models

The mapping model describes the relation between user threads and kernel threads. This mapping strongly affects performance, blocking, and the ability to use multiple CPU cores.

4.1 Many-to-One Model

In the many-to-one model, many user-level threads are mapped to one kernel thread.

Figure 1: Many-to-One Model
(Diagram: user threads U1, U2, and U3 in user space all map to a single kernel thread K1.)

Advantages:

  • Thread management in user space is efficient.
  • Creation and switching of user threads are fast.

Disadvantages:

  • If one thread makes a blocking system call, the entire process blocks.
  • Only one thread can access the kernel at a time.
  • Multiple threads cannot run in parallel on multicore systems.

This model is historically important, but it is rarely used now because it does not exploit modern multicore processors well.

4.2 One-to-One Model

In the one-to-one model, each user thread is mapped to one kernel thread.

Figure 2: One-to-One Model
(Diagram: user threads U1, U2, and U3 map to kernel threads K1, K2, and K3 respectively.)

Advantages:

  • Better concurrency than many-to-one.
  • If one thread blocks, another can still run.
  • Multiple threads can execute in parallel on multiprocessors.

Disadvantages:

  • Creating a user thread requires creating a corresponding kernel thread.
  • A large number of kernel threads may increase overhead and degrade performance.

Modern operating systems such as Linux and the Windows family commonly use this model.
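Because each user thread has its own kernel thread under this model, a blocking call made by one thread does not stall the others. A small illustration in Python (whose threads are one-to-one on Linux and Windows), with `Event.wait()` standing in for a blocking system call:

```python
import threading

results = []
blocker = threading.Event()

def blocking_worker():
    # Simulates a blocking system call: this kernel thread sleeps in the kernel.
    blocker.wait()                 # blocks until signalled
    results.append("T2 resumed")

def independent_worker():
    # Runs while the other thread is blocked, because each thread
    # is backed by its own kernel thread.
    results.append("T1 ran while T2 was blocked")
    blocker.set()                  # unblock the other thread

t2 = threading.Thread(target=blocking_worker)
t1 = threading.Thread(target=independent_worker)
t2.start(); t1.start()
t1.join(); t2.join()
print(results)
```

Under a strict many-to-one mapping, the `wait()` call would have blocked the single kernel thread and the second worker could never have run.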

4.3 Many-to-Many Model

In the many-to-many model, many user threads are multiplexed to a smaller or equal number of kernel threads.

Figure 3: Many-to-Many Model
(Diagram: user threads U1, U2, U3, and U4 are multiplexed onto kernel threads K1 and K2.)

Advantages:

  • Developers can create many user threads.
  • The system can still achieve parallelism on multiprocessors.
  • If one thread blocks, another kernel thread can be scheduled.
  • Better balance between flexibility and kernel overhead.

Disadvantages:

  • More complex to implement.
  • Harder to manage efficiently in practice.

This model is conceptually attractive, but many modern operating systems have moved away from it because implementation complexity is high and multicore processors have become common.
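The core idea of multiplexing many units of work onto a fixed, smaller set of kernel threads survives today in thread pools. A rough analogy in Python, where a pool of two worker threads plays the role of the kernel threads (`max_workers=2` and the task count are arbitrary choices for illustration):

```python
from concurrent.futures import ThreadPoolExecutor
import threading

def task(i):
    # Report which worker (i.e. which "kernel thread") ran this task.
    return (i, threading.current_thread().name)

# Eight tasks multiplexed onto just two worker threads.
with ThreadPoolExecutor(max_workers=2, thread_name_prefix="K") as pool:
    placements = list(pool.map(task, range(8)))

workers_used = {name for _, name in placements}
print(f"{len(placements)} tasks ran on {len(workers_used)} worker thread(s)")
```

All eight tasks complete even though at most two can execute at once, which is exactly the trade-off the many-to-many model makes.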

4.4 Two-Level Model

The two-level model is a variation of the many-to-many model. It still multiplexes many user threads to a smaller or equal number of kernel threads, but it also allows a particular user thread to be bound to a specific kernel thread.

Why this is useful:

  • It keeps the flexibility of many-to-many mapping.
  • It also allows selected threads to get dedicated kernel support when needed.

5) Comparing the Models

Model         Mapping                                Blocking Effect                             Parallelism  Main Issue
Many-to-One   Many user threads → 1 kernel thread    One blocking call may block entire process  No           Cannot exploit multicore systems well
One-to-One    1 user thread → 1 kernel thread        Other threads may continue if one blocks    Yes          Higher kernel-thread overhead
Many-to-Many  Many user threads → fewer/equal        Better than many-to-one                     Yes          Complex implementation
Two-Level     Many-to-many + optional binding        Better flexibility                          Yes          Also complex

6) Practical Trend

Although the many-to-many model appears flexible, most modern operating systems now prefer the one-to-one model. The reason is simple: with multicore systems now standard, operating systems benefit from direct kernel scheduling of threads, even though the model may create more kernel-thread overhead.

7) Key Exam Points

  • User threads are managed by thread libraries in user space.
  • Kernel threads are known to and scheduled by the operating system.
  • The mapping model determines blocking behaviour, concurrency, and scalability.
  • Many-to-one is efficient but weak for blocking and parallelism.
  • One-to-one supports better concurrency and is widely used in modern OSs.
  • Many-to-many and two-level models are more flexible but harder to implement.

Worked Example

Worked Example: Comparing Thread Models for a Web Server

Suppose a web server process creates three user threads:

  • T1: reads a request from a client
  • T2: accesses a file from disk
  • T3: prepares and sends the response

Assume that T2 performs a blocking system call while reading from disk. Let us compare what happens under different thread models.

  1. Many-to-One Model: all three user threads are mapped to only one kernel thread. If T2 makes a blocking system call, the single kernel thread blocks. As a result, the whole process is blocked, and T1 and T3 cannot continue.
  2. One-to-One Model: each user thread has its own kernel thread. If T2 blocks, the kernel can still schedule the kernel threads of T1 or T3. Thus, other threads can continue execution.
  3. Many-to-Many Model: the user threads are multiplexed over multiple kernel threads. If one mapped kernel thread blocks, another kernel thread may still run the remaining user threads. Therefore, concurrency is preserved better than in many-to-one.
  4. Two-Level Model: similar to many-to-many, but a critical user thread may be bound to a kernel thread. This gives extra flexibility for important or performance-sensitive threads.
Figure: Effect of a Blocking Call in Different Models
(Diagram: in many-to-one, T2's blocking call blocks the single kernel thread, stalling T1 and T3; in one-to-one, K2 blocks with T2 while K1 and K3 keep running T1 and T3; in many-to-many, if one mapped kernel thread blocks, the other kernel thread can still run the remaining user threads.)

Conclusion: The many-to-one model is weakest when blocking system calls occur, because the whole process may stall. The one-to-one and many-to-many models support much better concurrency. On modern multicore systems, this is one major reason why the one-to-one model is widely adopted.
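The web-server scenario can be simulated by varying the number of kernel threads available to the same three tasks. A sketch in Python using a thread pool, where `workers=1` stands in for many-to-one and `workers=3` for one-to-one (the task bodies and the 0.2 s observation window are illustrative assumptions):

```python
from concurrent.futures import ThreadPoolExecutor
import threading
import time

def run_server(workers):
    """Run T1, T2 (blocking), and T3 on a pool with `workers` threads and
    report which tasks finished while T2 was still blocked."""
    gate = threading.Event()
    done = []

    def t1(): done.append("T1")                 # read client request
    def t2(): gate.wait(); done.append("T2")    # simulated blocking disk read
    def t3(): done.append("T3")                 # prepare and send response

    with ThreadPoolExecutor(max_workers=workers) as pool:
        pool.submit(t2)            # T2 grabs a worker first and blocks
        pool.submit(t1)
        pool.submit(t3)
        time.sleep(0.2)            # give T1/T3 a chance to run
        snapshot = sorted(done)    # what completed while T2 was blocked
        gate.set()                 # let T2 finish so the pool can shut down
    return snapshot

print("many-to-one:", run_server(1))   # single kernel thread: nothing else ran
print("one-to-one :", run_server(3))   # T1 and T3 finished while T2 blocked
```

With one worker, the snapshot is empty: T2's blocking call stalled the whole "process". With three workers, T1 and T3 both completed while T2 was blocked.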

One-Page Summary

Thread management in an operating system deals with how threads are created, controlled, and mapped for execution. A thread library provides the API for thread creation and management. Thread libraries may exist entirely in user space or may be supported directly by the operating system kernel.

Threads can be viewed as user-level threads or kernel-level threads. User threads are managed by a thread library in user space, while kernel threads are managed and scheduled directly by the operating system. The relation between these two gives rise to different thread mapping models.

In the many-to-one model, many user threads are mapped to one kernel thread. This is efficient in user space, but if one thread blocks, the whole process may block. Also, it does not allow true parallel execution on multicore systems.

In the one-to-one model, each user thread is mapped to one kernel thread. This gives better concurrency and allows multiple threads to run in parallel on multiprocessors. However, creating many kernel threads increases overhead.

In the many-to-many model, many user threads are multiplexed to a smaller or equal number of kernel threads. This gives both flexibility and parallelism, but the model is more difficult to implement.

The two-level model is a variation of many-to-many mapping. It still supports multiplexing, but it also allows a particular user thread to be bound to a specific kernel thread.

The practical trend in modern operating systems is toward the one-to-one model. This is because multicore processors are now standard, and operating systems benefit from direct kernel scheduling of threads despite the added overhead.

  • Many-to-One: efficient but poor for blocking and parallelism
  • One-to-One: better concurrency, widely used today
  • Many-to-Many: flexible but complex
  • Two-Level: many-to-many with optional thread binding

In short, thread models are not just naming conventions. They directly affect responsiveness, scalability, and the ability of a system to use multiple CPU cores effectively.