Learning outcomes
- Explain the concept of threads, their benefits and the difference between user and kernel threads.
A modern process is not always limited to a single flow of execution. In many operating systems, a process can contain multiple execution paths that run within the same address space. These execution paths are called threads.
A thread is often called the smallest unit of CPU execution. While a process provides the resource container, the thread provides the execution path inside that container. This makes threads lighter than processes and often more efficient for concurrent work inside the same application.
A process owns resources such as memory and open files. A thread is the execution unit that uses those resources.
A thread is a basic unit of CPU utilization. Each thread has its own:
- Program counter
- Register set
- Stack
However, threads belonging to the same process share:
- Code segment
- Data segment and heap
- Open files and other operating-system resources
Threads are used because many applications need more than one activity to proceed at the same time. For example, a web browser may display the user interface in one thread, fetch network data in another, and render media in another.
By dividing work into threads, an application can become more responsive and can better utilize modern multicore processors.
Threads improve responsiveness because one part of an application can continue running even when another part is blocked or performing a long task. For example, a user interface can remain active while a background thread performs file loading or network communication.
Threads of the same process share memory and resources naturally. This makes communication between them easier and faster than communication between separate processes.
Creating and switching between threads is usually less expensive than creating and switching between processes. This is because threads share the same process resources instead of duplicating them.
On multicore systems, multiple threads of the same process can execute in parallel. This can improve performance and throughput, especially for applications that divide work into independent subtasks.
Threads can make programs more responsive, more efficient, and more suitable for parallel execution.
A process is a heavyweight execution environment because it has its own address space and system resources. A thread is lightweight because it runs within an existing process and shares most of that process’s resources.
| Aspect | Process | Thread |
|---|---|---|
| Definition | Independent program in execution | Execution path within a process |
| Address Space | Separate for each process | Shared within the same process |
| Resource Sharing | Limited and explicit | Natural within same process |
| Creation Cost | Higher | Lower |
| Context Switch | More expensive | Usually less expensive |
Threads can be understood from two different implementation viewpoints: user threads and kernel threads.
User threads are managed by a user-level thread library rather than directly by the operating system kernel. Thread creation, scheduling, and management are performed in user space.
Advantages of user threads:
- Fast to create and manage, since no kernel intervention is required
- Switching between them does not require a switch to kernel mode
- The thread library can be used even on systems whose kernel does not support threads
Limitations of user threads:
- If one thread makes a blocking system call, the entire process may block
- Because the kernel sees the process as a single unit, user threads of one process cannot run in parallel on multiple cores
Kernel threads are threads that are directly supported and managed by the operating system kernel. The kernel knows about each thread and can schedule them independently.
Advantages of kernel threads:
- The kernel can schedule each thread independently
- If one thread blocks, the kernel can run another thread of the same process
- Threads of the same process can execute in parallel on multiple cores
Limitations of kernel threads:
- Creation and management require kernel involvement, so they are slower than user threads
- Thread operations such as switching carry kernel-mode overhead
The distinction between user and kernel threads is also reflected in threading models. In some systems, many user threads may be mapped to one kernel thread. In others, each user thread may correspond to a separate kernel thread. Some systems use more flexible many-to-many mappings.
The basic idea, however, remains simple: user threads are managed mainly in user space, while kernel threads are directly visible to and managed by the operating system.
Threads provide a way to perform multiple activities within the same process. They improve responsiveness, resource sharing, economy, and parallelism. Compared with processes, threads are lighter and more efficient for related concurrent tasks.
The main implementation distinction is between user threads and kernel threads. User threads are faster and lighter to manage, while kernel threads provide stronger operating-system support, better blocking behavior, and better multicore execution.
Consider a web browser application running as one process. The browser may create multiple threads to perform different tasks:
- A user-interface thread that displays pages and responds to user input
- A network thread that downloads data from web servers
- A media thread that renders images, audio, or video
Since all three threads belong to the same process, they share:
- The same address space (code, data, and heap)
- Open files and other process resources
Each thread still has its own:
- Program counter
- Register set
- Stack
If the network download thread is busy waiting for data, the user interface thread can still remain active. This keeps the browser responsive to the user. On a multicore system, different threads may even run in parallel, improving performance further.
If these threads are implemented purely as user threads, they may be fast to create, but a blocking system call may affect the whole process. If they are implemented as kernel threads, the operating system can schedule them independently, allowing better parallelism and blocking behavior.
This example shows why threads are widely used in interactive applications: they allow multiple related tasks to proceed efficiently within the same process.
Conclusion
A thread is the smallest unit of CPU execution. A process may contain one or more threads. Threads of the same process share the code segment, data segment, heap, and other process resources, but each thread has its own program counter, register set, and stack.
Threads are useful because they improve responsiveness, make resource sharing easier, reduce overhead compared with full processes, and support parallel execution on multicore systems. For this reason, modern applications often use multiple threads for tasks such as user interaction, background processing, communication, and computation.
Threads are lighter than processes because they do not require a separate address space. A process is a heavyweight resource container, while a thread is a lightweight execution path within that container.
Two important implementation types are user threads and kernel threads. User threads are managed in user space by a thread library and are generally faster to create and manage. Kernel threads are known directly to the operating system and can be scheduled independently by the kernel.
User threads have lower overhead, but if one blocks in certain situations, the whole process may be affected. Kernel threads provide better operating-system support, better blocking behavior, and better use of multicore hardware, though they usually involve more management overhead.
Thus, threads provide efficient internal concurrency within a process, and the distinction between user and kernel threads explains how that concurrency is implemented and managed.