
States of a Process in Operating Systems

Last Updated : 22 Jan, 2025

In an operating system, a process is a program that is being executed. During its execution, a process goes through different states. Understanding these states helps us see how the operating system manages processes, ensuring that the computer runs efficiently. Please refer to Process in Operating System for more details about processes.

Each process goes through several stages throughout its life cycle. In this article, we discuss the different states of a process in detail.

Process Lifecycle

When you run a program (which becomes a process), it goes through different phases before it completes. These phases, or states, can vary depending on the operating system, but the most common process lifecycle includes two, five, or seven states. Here’s a simple explanation of these states:

The Two-State Model

The simplest way to think about a process's lifecycle is with just two states:

  1. Running: This means the process is actively using the CPU to do its work.
  2. Not Running: This means the process is not currently using the CPU. It could be waiting for something, like user input or data, or it might just be paused.
Two-State Process Model

When a new process is created, it starts in the not running state and waits in a queue. A system program called the dispatcher decides when it gets the CPU.

Here’s what happens step by step:

  1. Not Running State: When the process is first created, it is not using the CPU.
  2. Dispatcher Role: The dispatcher checks if the CPU is free (available for use).
  3. Moving to Running State: If the CPU is free, the dispatcher lets the process use the CPU, and it moves into the running state.
  4. CPU Scheduler Role: When the CPU is available, the CPU scheduler decides which process gets to run next. It picks the process based on a set of rules called the scheduling scheme, which varies from one operating system to another.
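
The sketch below (Python) illustrates this flow with a toy dispatcher; the queue, process names, and functions are illustrative assumptions for the example, not part of any real operating system's API.

```python
from collections import deque

# Minimal two-state sketch: a process is either RUNNING or NOT RUNNING.
# The "dispatcher" hands the single CPU to the process at the head of the queue.
not_running = deque(["P1", "P2", "P3"])   # newly created processes start here
running = None                            # only one process can hold the CPU

def dispatch():
    """Give the CPU to the next waiting process, if the CPU is free."""
    global running
    if running is None and not_running:
        running = not_running.popleft()
        print(f"Dispatcher: {running} moves NOT RUNNING -> RUNNING")

def pause_current():
    """The running process gives up the CPU (e.g., it waits for input)."""
    global running
    if running is not None:
        print(f"{running} moves RUNNING -> NOT RUNNING")
        not_running.append(running)
        running = None

dispatch()        # P1 gets the CPU
pause_current()   # P1 waits and goes back to the queue
dispatch()        # P2 gets the CPU
```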

The Five-State Model

The five-state process lifecycle is an expanded version of the two-state model. The two-state model works well when all processes in the not running state are ready to run. However, in some operating systems, a process may not be able to run because it is waiting for something, like input or data from an external device. To handle this situation better, the not running state is split into two separate states (ready and blocked), and explicit new and exit states are added:

Five-State Process Model

Here’s a simple explanation of the five-state process model:

  • New: This state represents a newly created process that hasn’t started running yet. It has not been loaded into the main memory, but its process control block (PCB) has been created, which holds important information about the process.
  • Ready: A process in this state is ready to run as soon as the CPU becomes available. It is waiting for the operating system to give it a chance to execute.
  • Running: This state means the process is currently being executed by the CPU. Since we’re assuming there is only one CPU, at any time, only one process can be in this state.
  • Blocked/Waiting: This state means the process cannot continue executing right now. It is waiting for some event to happen, like the completion of an input/output operation (for example, reading data from a disk).
  • Exit/Terminate: A process in this state has finished its execution or has been stopped by the user for some reason. At this point, it is released by the operating system and removed from memory.
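
As a rough illustration, these five states and their allowed transitions can be modelled as a small state machine; the enum, the transition table, and the single-CPU assumption below are simplifications made for this example.

```python
from enum import Enum, auto

class State(Enum):
    NEW = auto()
    READY = auto()
    RUNNING = auto()
    BLOCKED = auto()
    TERMINATED = auto()

# Transitions allowed in the five-state model (assuming a single CPU).
VALID_TRANSITIONS = {
    State.NEW: {State.READY},                      # admitted into main memory
    State.READY: {State.RUNNING},                  # dispatched by the scheduler
    State.RUNNING: {State.READY,                   # preempted (time slice over)
                    State.BLOCKED,                 # waits for I/O or an event
                    State.TERMINATED},             # finished or killed
    State.BLOCKED: {State.READY},                  # awaited event completed
    State.TERMINATED: set(),                       # no way back
}

def move(current: State, target: State) -> State:
    if target not in VALID_TRANSITIONS[current]:
        raise ValueError(f"Illegal transition: {current.name} -> {target.name}")
    print(f"{current.name} -> {target.name}")
    return target

s = State.NEW
for nxt in (State.READY, State.RUNNING, State.BLOCKED,
            State.READY, State.RUNNING, State.TERMINATED):
    s = move(s, nxt)
```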

The Seven-State Model

The seven-state model extends the five-state model with two suspend states, used when processes are swapped out of main memory. The states of a process are as follows:

  • New State: The process is about to be created but has not yet been created. It exists as a program in secondary memory that will be picked up by the OS to create the process.
  • Ready State: After creation, the process enters the ready state, i.e., it is loaded into main memory. The process is ready to run and is waiting to get CPU time for its execution. Processes that are ready for execution by the CPU are maintained in a queue called the ready queue.
  • Run State: The process is chosen from the ready queue by the OS for execution, and its instructions are executed by one of the available processors.
  • Blocked or Wait State: Whenever the process requests I/O, needs input from the user, or needs access to a critical region (whose lock is already held), it enters the blocked or wait state. The process continues to wait in main memory and does not require the CPU. Once the awaited event, such as an I/O operation, completes, the process goes back to the ready state.
  • Terminated or Completed State: The process is killed and its PCB is deleted. The resources allocated to the process are released or deallocated.
  • Suspend Ready: A process that was initially in the ready state but was swapped out of main memory (refer to the Virtual Memory topic) and placed onto external storage by the scheduler is said to be in the suspend ready state. The process transitions back to the ready state whenever it is brought into main memory again.
  • Suspend Wait or Suspend Blocked: Similar to suspend ready, but this applies to a process that was blocked on an I/O operation when a shortage of main memory caused it to be moved to secondary memory. When the awaited event completes, it may move to the suspend ready state.
  • CPU and I/O Bound Processes: If a process is intensive in terms of CPU operations, it is called a CPU-bound process. Similarly, if a process is intensive in terms of I/O operations, it is called an I/O-bound process.
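
Below is a minimal sketch of how the two suspend states extend the model, assuming a single memory-pressure trigger for swapping; the state names and the reasons attached to each transition are illustrative, not any real kernel's behaviour.

```python
from enum import Enum, auto

class State(Enum):
    NEW = auto()
    READY = auto()
    RUNNING = auto()
    BLOCKED = auto()
    SUSPEND_READY = auto()     # swapped out while ready
    SUSPEND_BLOCKED = auto()   # swapped out while waiting on I/O
    TERMINATED = auto()

# Extra transitions the medium-term scheduler introduces by swapping
# processes between main memory and secondary storage.
SWAP_TRANSITIONS = {
    (State.READY, State.SUSPEND_READY): "swap out: memory pressure",
    (State.BLOCKED, State.SUSPEND_BLOCKED): "swap out: memory pressure",
    (State.SUSPEND_BLOCKED, State.SUSPEND_READY): "awaited I/O completed while on disk",
    (State.SUSPEND_READY, State.READY): "swap in: memory available again",
}

for (src, dst), reason in SWAP_TRANSITIONS.items():
    print(f"{src.name:16} -> {dst.name:16} ({reason})")
```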

How Does a Process Move From One State to Another?

A process can move between different states in an operating system based on its execution status and resource availability. Here are some examples of how a process can move between different states:

  • New to Ready: When a process is created, it is in a new state. It moves to the ready state when the operating system has allocated resources to it and it is ready to be executed.
  • Ready to Running: When the CPU becomes available, the operating system selects a process from the ready queue, according to its scheduling algorithm, and moves it to the running state.
  • Running to Blocked: When a process needs to wait for an event to occur (I/O operation or system call), it moves to the blocked state. For example, if a process needs to wait for user input, it moves to the blocked state until the user provides the input.
  • Running to Ready: When a running process is preempted by the operating system, it moves to the ready state. For example, if a higher-priority process becomes ready, the operating system may preempt the running process and move it to the ready state.
  • Blocked to Ready: When the event a blocked process was waiting for occurs, the process moves to the ready state. For example, if a process was waiting for user input and the input is provided, it moves to the ready state.
  • Running to Terminated: When a process completes its execution or is terminated by the operating system, it moves to the terminated state.
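
One way to picture these transitions is as a table keyed by (current state, event); the event names below are illustrative assumptions chosen to match the transitions described above, not standard OS terminology.

```python
# Event-driven view of the transitions: (current state, event) -> next state.
TRANSITIONS = {
    ("new", "admit"): "ready",
    ("ready", "dispatch"): "running",
    ("running", "io_request"): "blocked",
    ("running", "preempt"): "ready",
    ("blocked", "io_complete"): "ready",
    ("running", "exit"): "terminated",
}

def next_state(state: str, event: str) -> str:
    # Events that make no sense in the current state are simply ignored.
    return TRANSITIONS.get((state, event), state)

state = "new"
for event in ("admit", "dispatch", "io_request", "io_complete", "dispatch", "exit"):
    state = next_state(state, event)
    print(f"after '{event}': {state}")
```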

Types of Schedulers

  • Long-Term Scheduler: Decides how many processes should stay in the ready state. This determines the degree of multiprogramming. Once a decision is taken, it lasts for a long time, which also means the scheduler runs infrequently; hence it is called the long-term scheduler.
  • Short-Term Scheduler: Decides which process is to be executed next and then calls the dispatcher. The dispatcher is the software that moves a process from ready to running and vice versa, i.e., it performs the context switch. It runs frequently. The short-term scheduler is also called the CPU scheduler.
  • Medium-Term Scheduler: Takes the suspension decision. The medium-term scheduler is used for swapping, which is moving a process from main memory to secondary memory and vice versa. Swapping is done to reduce the degree of multiprogramming.
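
The toy sketch below shows roughly how the three schedulers divide the work; the queues, the MAX_DEGREE limit, and the simple FIFO pick policy are assumptions made for illustration only.

```python
from collections import deque

MAX_DEGREE = 3            # long-term scheduler: cap on admitted processes

job_pool = deque(["P1", "P2", "P3", "P4", "P5"])   # new processes on disk
ready_queue = deque()
suspended = deque()

def long_term_admit():
    """Admit new processes until the degree of multiprogramming is reached."""
    while job_pool and len(ready_queue) < MAX_DEGREE:
        ready_queue.append(job_pool.popleft())

def short_term_pick():
    """Pick the next process to run (simple FIFO policy here)."""
    return ready_queue.popleft() if ready_queue else None

def medium_term_swap_out():
    """Under memory pressure, swap one ready process out to disk."""
    if ready_queue:
        suspended.append(ready_queue.pop())

long_term_admit()
print("ready:", list(ready_queue))        # ['P1', 'P2', 'P3']
print("run:", short_term_pick())          # P1
medium_term_swap_out()
print("suspended:", list(suspended))      # ['P3']
```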

Multiprogramming 

With multiprogramming, many processes can be ready to run at the same time. There are two types of multiprogramming:

  • Preemption: A process is forcefully removed from the CPU. Preemption is also called time sharing or multitasking.
  • Non-Preemption: A process is not removed until it completes its execution. Once the CPU is given to a process, it cannot be taken back forcibly; the process must release the CPU on its own.
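
Here is a toy comparison of the two styles, using made-up CPU burst lengths and a round-robin time quantum to stand in for preemptive scheduling; this is a sketch of the idea, not a real scheduler.

```python
bursts = {"P1": 5, "P2": 2, "P3": 3}   # made-up CPU burst lengths (time units)

def non_preemptive(bursts):
    order = []
    for name, burst in bursts.items():        # each process runs to completion
        order.extend([name] * burst)
    return order

def preemptive_round_robin(bursts, quantum=2):
    order, remaining = [], dict(bursts)
    while remaining:
        for name in list(remaining):
            run = min(quantum, remaining[name])
            order.extend([name] * run)         # preempted once the quantum expires
            remaining[name] -= run
            if remaining[name] == 0:
                del remaining[name]
    return order

print("non-preemptive:", non_preemptive(bursts))
print("preemptive RR :", preemptive_round_robin(bursts))
```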

Degree of Multiprogramming 

The maximum number of processes that can reside in the ready state decides the degree of multiprogramming, e.g., if the degree of multiprogramming = 100, at most 100 processes can reside in the ready state.

Operations on the Process

  • Creation: Once created, the process enters the ready queue (in main memory) and is prepared for execution.
  • Scheduling: The operating system picks one process to begin executing from among the processes currently in the ready queue. Choosing the next process to run is called scheduling.
  • Execution: The processor begins running the process as soon as it is scheduled. During execution, a process may become blocked or wait, at which point the processor switches to executing other processes.
  • Killing or Deletion: The OS terminates the process once its purpose has been fulfilled, and the process's context (PCB) is removed.
  • Blocking: When a process is waiting for an event or resource, it is blocked. The operating system will place it in a blocked state, and it will not be able to execute until the event or resource becomes available.
  • Resumption: When the event or resource that caused a process to block becomes available, the process is removed from the blocked state and added back to the ready queue.
  • Context Switching: When the operating system switches from executing one process to another, it must save the current process's context and load the context of the next process to execute. This is known as context switching.
  • Inter-Process Communication: Processes may need to communicate with each other to share data or coordinate actions. The operating system provides mechanisms for inter-process communication, such as shared memory, message passing, and synchronization primitives.
  • Process Synchronization: Multiple processes may need to access a shared resource or critical section of code simultaneously. The operating system provides synchronization mechanisms to ensure that only one process can access the resource or critical section at a time.
  • Process States: Processes may be in one of several states, including ready, running, waiting, and terminated. The operating system manages the process states and transitions between them.
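
The context-switch step can be sketched as saving the outgoing process's context into its PCB and loading the incoming one's; the PCB fields and the CPU dictionary below are simplified assumptions for the example, not a real kernel's layout.

```python
from dataclasses import dataclass, field

@dataclass
class PCB:
    pid: int
    state: str = "ready"
    program_counter: int = 0
    registers: dict = field(default_factory=dict)

def context_switch(current: PCB, nxt: PCB, cpu: dict) -> dict:
    # 1. Save the CPU context of the process being switched out.
    current.program_counter = cpu["pc"]
    current.registers = dict(cpu["regs"])
    current.state = "ready"
    # 2. Load the saved context of the process being switched in.
    nxt.state = "running"
    return {"pc": nxt.program_counter, "regs": dict(nxt.registers)}

p1 = PCB(pid=1, state="running", program_counter=120, registers={"ax": 7})
p2 = PCB(pid=2, program_counter=40, registers={"ax": 0})
cpu = {"pc": 125, "regs": {"ax": 9}}     # CPU state has advanced while running P1

cpu = context_switch(p1, p2, cpu)
print(p1)    # P1's PCB now holds pc=125 and its saved register values
print(cpu)   # the CPU now resumes P2 from pc=40
```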

Features of The Process State

  • A process can move from the running state to the waiting state if it needs to wait for a resource to become available.
  • A process can move from the waiting state to the ready state when the resource it was waiting for becomes available.
  • A process can move from the ready state to the running state when it is selected by the operating system for execution.
  • The scheduling algorithm used by the operating system determines which process is selected to execute from the ready state.
  • The operating system may also move a process from the running state to the ready state to allow other processes to execute.
  • A process can move from the running state to the terminated state when it completes its execution.
  • A process can move from the waiting state directly to the terminated state if it is aborted or killed by the operating system or another process.
  • A process can go through the ready, running, and waiting states any number of times in its lifecycle, but the new and terminated states occur only once.
  • The process state includes information about the program counter, CPU registers, memory allocation, and other resources used by the process.
  • The operating system maintains a process control block (PCB) for each process, which contains information about the process state, priority, scheduling information, and other process-related data.
  • The process state diagram is used to represent the transitions between different states of a process and is an essential concept in process management in operating systems.

Conclusion

In conclusion, understanding the states of a process in an operating system is essential for comprehending how the system efficiently manages multiple processes. These states—new, ready, running, waiting, and terminated—represent different stages in a process's life cycle. By transitioning through these states, the operating system ensures that processes are executed smoothly, resources are allocated effectively, and the overall performance of the computer is optimized. This knowledge helps us appreciate the complexity and efficiency behind the scenes of modern computing.

