BCS401 Operating System Semester IV · AY 2025-26 onward
Unit 3 · CPU Scheduling, Threads & Deadlocks

Lecture 5: Threads: concepts & types

Learning outcomes

  • Explain the concept of threads, their benefits and the difference between user and kernel threads.

Prerequisites

  • Process concept
  • Process address space basics
  • CPU scheduling basics
  • Basic idea of multicore execution

Lecture Notes

Main content

Threads: Concepts & Types

Introduction

A modern process is not always limited to a single flow of execution. In many operating systems, a process can contain multiple execution paths that run within the same address space. These execution paths are called threads.

A thread is often called the smallest unit of CPU execution. While a process provides the resource container, the thread provides the execution path inside that container. This makes threads lighter than processes and often more efficient for concurrent work inside the same application.

Key Idea

A process owns resources such as memory and open files. A thread is the execution unit that uses those resources.

What is a thread?

A thread is a basic unit of CPU utilization. Each thread has its own:

  • thread ID,
  • program counter,
  • register set, and
  • stack.

However, threads belonging to the same process share:

  • code segment,
  • data segment,
  • heap, and
  • other process resources such as open files.

Why threads are used

Threads are used because many applications need more than one activity to proceed at the same time. For example, a web browser may display the user interface in one thread, fetch network data in another, and render media in another.

By dividing work into threads, an application can become more responsive and can better utilize modern multicore processors.

Benefits of threads

Responsiveness

Threads improve responsiveness because one part of an application can continue running even when another part is blocked or performing a long task. For example, a user interface can remain active while a background thread performs file loading or network communication.

Resource sharing

Threads of the same process share memory and resources naturally. This makes communication between them easier and faster than communication between separate processes.

Economy

Creating and switching between threads is usually less expensive than creating and switching between processes. This is because threads share the same process resources instead of duplicating them.

Scalability and parallelism

On multicore systems, multiple threads of the same process can execute in parallel. This can improve performance and throughput, especially for applications that divide work into independent subtasks.

In short

Threads can make programs more responsive, more efficient, and more suitable for parallel execution.

Process versus thread

A process is a heavyweight execution environment because it has its own address space and system resources. A thread is lightweight because it runs within an existing process and shares most of that process’s resources.

Aspect           | Process                          | Thread
Definition       | Independent program in execution | Execution path within a process
Address Space    | Separate for each process        | Shared within the same process
Resource Sharing | Limited and explicit             | Natural within same process
Creation Cost    | Higher                           | Lower
Context Switch   | More expensive                   | Usually less expensive

User threads and kernel threads

Threads can be understood from two different implementation viewpoints: user threads and kernel threads.

User threads

User threads are managed by a user-level thread library rather than directly by the operating system kernel. Thread creation, scheduling, and management are performed in user space.

Advantages of user threads:

  • faster to create and manage,
  • thread operations do not always require kernel intervention,
  • lower overhead in many cases.

Limitations of user threads:

  • if one user thread makes a blocking system call, the entire process may block,
  • true parallel execution on multiple cores may not be achieved depending on the mapping model,
  • the kernel may not be aware of individual user threads.

Kernel threads

Kernel threads are threads that are directly supported and managed by the operating system kernel. The kernel knows about each thread and can schedule them independently.

Advantages of kernel threads:

  • one thread can block without blocking all threads of the process,
  • threads can be scheduled independently by the operating system,
  • better support for parallel execution on multicore processors.

Limitations of kernel threads:

  • thread creation and management usually involve more overhead than pure user-level threads,
  • kernel support is required.

Relationship with threading models

The distinction between user and kernel threads is also reflected in threading models. In some systems, many user threads are mapped to one kernel thread (many-to-one). In others, each user thread corresponds to a separate kernel thread (one-to-one). Some systems use a more flexible many-to-many mapping.

The basic idea, however, remains simple: user threads are managed mainly in user space, while kernel threads are directly visible to and managed by the operating system.

Conclusion

Threads provide a way to perform multiple activities within the same process. They improve responsiveness, resource sharing, economy, and parallelism. Compared with processes, threads are lighter and more efficient for related concurrent tasks.

The main implementation distinction is between user threads and kernel threads. User threads are faster and lighter to manage, while kernel threads provide stronger operating-system support, better blocking behavior, and better multicore execution.

Worked Example

Worked Example: Threads in a Web Browser

Consider a web browser application running as one process. The browser may create multiple threads to perform different tasks:

  • Thread 1: handles the user interface,
  • Thread 2: downloads web page data from the network,
  • Thread 3: renders images or plays media content.

What do these threads share?

Since all three threads belong to the same process, they share:

  • the same program code,
  • the same data region,
  • the same heap, and
  • other process resources such as open files and network connections.

What is separate for each thread?

Each thread still has its own:

  • program counter,
  • register set, and
  • stack.

Why is this useful?

If the network download thread is busy waiting for data, the user interface thread can still remain active. This keeps the browser responsive to the user. On a multicore system, different threads may even run in parallel, improving performance further.

User thread and kernel thread viewpoint

If these threads are implemented purely as user threads, they may be fast to create, but a blocking system call may affect the whole process. If they are implemented as kernel threads, the operating system can schedule them independently, allowing better parallelism and blocking behavior.

Conclusion

This example shows why threads are widely used in interactive applications: they allow multiple related tasks to proceed efficiently within the same process.

One-Page Summary

One-Page Summary: Threads, Concepts & Types

A thread is the smallest unit of CPU execution. A process may contain one or more threads. Threads of the same process share the code segment, data segment, heap, and other process resources, but each thread has its own program counter, register set, and stack.

Threads are useful because they improve responsiveness, make resource sharing easier, reduce overhead compared with full processes, and support parallel execution on multicore systems. For this reason, modern applications often use multiple threads for tasks such as user interaction, background processing, communication, and computation.

Threads are lighter than processes because they do not require a separate address space. A process is a heavyweight resource container, while a thread is a lightweight execution path within that container.

Two important implementation types are user threads and kernel threads. User threads are managed in user space by a thread library and are generally faster to create and manage. Kernel threads are known directly to the operating system and can be scheduled independently by the kernel.

User threads have lower overhead, but if one blocks in certain situations, the whole process may be affected. Kernel threads provide better operating-system support, better blocking behavior, and better use of multicore hardware, though they usually involve more management overhead.

Thus, threads provide efficient internal concurrency within a process, and the distinction between user and kernel threads explains how that concurrency is implemented and managed.