2. 2
Outline
• Scheduling Concepts
• Performance Criteria
• Process Concept and Process States
• Process Transition Diagram
• Schedulers
• Process Control Block (PCB)
• Process address space
• Process identification information
• Threads and their management
• Scheduling Algorithms
• Multiprocessor Scheduling.
3. 3
What is Scheduling?
• Scheduling decides the order in which processes/work are executed so that the work finishes on time.
• CPU Scheduling is a process that allows one process to use the CPU while another process is delayed (kept waiting) due to the unavailability of some resource such as I/O, thus making full use of the CPU.
• The purpose of CPU Scheduling is to make the system
more efficient, faster, and fairer.
4. 4
Why do we need Scheduling?
• In multiprogramming, if the long-term scheduler picks more I/O-bound processes, then most of the time the CPU remains idle.
• The task of the operating system is to optimize the utilization of resources.
• If most of the running processes change their state from running to waiting, there may always be a possibility of deadlock in the system.
• To reduce this overhead, the OS needs to schedule the jobs to get optimal utilization of the CPU and to avoid the possibility of deadlock.
5. 5
CPU Scheduling Criteria
1.CPU Utilization: The main objective of any CPU scheduling algorithm is to keep the CPU as busy as
possible.
2. Throughput- A measure of the work done by the CPU is the number of processes being executed and
completed per unit of time.
3. Turnaround Time-Turnaround time is a criterion used in CPU scheduling that measures the time it takes
for a task or process to complete from the moment it is submitted to the system until it is fully processed and
ready for output.
Turn Around Time = Completion Time - Arrival Time
4. Waiting Time- Waiting time is a criterion used in CPU scheduling that measures the amount of time a task
or process waits in the ready queue before it is processed by the CPU.
Waiting Time = Turnaround Time - Burst Time.
5. Response Time- Response time is a criterion used in CPU scheduling that measures the time it takes for the
system to respond to a user's request or input.
Response Time = CPU Allocation Time (when the CPU was allocated for the first time) - Arrival Time
6. Completion Time: The completion time is the time when the process stops executing, which means that the
process has completed its burst time and is completely executed.
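The criteria above can be sketched in code. The following is a minimal illustration (not from the slides) that computes them for a hypothetical FCFS schedule; the process names, arrival times, and burst times are made-up example values.

```python
# A sketch that computes completion, turnaround, waiting, and response
# time for a hypothetical FCFS schedule; all data are example values.

def fcfs_metrics(procs):
    """procs: list of (name, arrival, burst), sorted by arrival time."""
    t = 0
    metrics = {}
    for name, arrival, burst in procs:
        start = max(t, arrival)            # CPU allocated for the first time
        completion = start + burst         # Completion Time
        turnaround = completion - arrival  # TAT = Completion - Arrival
        waiting = turnaround - burst       # WT  = TAT - Burst
        response = start - arrival         # RT  = First allocation - Arrival
        metrics[name] = (completion, turnaround, waiting, response)
        t = completion
    return metrics

m = fcfs_metrics([("P1", 0, 5), ("P2", 1, 3), ("P3", 2, 4)])
# m["P2"] == (8, 7, 4, 4): completes at 8, TAT 7, waits 4, responds after 4
```

Under FCFS the response time equals the waiting time, since each process runs in one uninterrupted burst; under preemptive algorithms the two differ.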
6. 6
CPU Scheduling - Non Pre-emptive
Non Pre-emptive:
• Under non-preemptive scheduling, once the CPU has been allocated to a process, the process keeps the CPU until it releases the CPU either by terminating or by switching to the waiting state.
7. 7
CPU Scheduling - Pre-emptive
Pre-emptive:
• Once a process has been given the CPU, the CPU can be taken away from it when a higher-priority process arrives or its time slice expires.
9. 9
Importance of CPU Scheduling Criteria
• Efficient resource utilization − By maximizing CPU utilization and throughput, CPU
scheduling ensures that the processor is being used to its full potential. This leads to
increased productivity and efficient use of system resources.
• Fairness − CPU scheduling algorithms that prioritize waiting time and response time
help ensure that all processes have a fair chance to access the CPU. This is important in
multi-user environments where multiple users are competing for the same resources.
• Responsiveness − CPU scheduling algorithms that prioritize response time ensure that
processes that require immediate attention (such as user input or real-time systems) are
executed quickly, improving the overall responsiveness of the system.
• Predictability − CPU scheduling algorithms that prioritize turnaround time provide a
predictable execution time for processes, which is important for meeting deadlines and
ensuring that critical tasks are completed on time.
10. 10
Purpose of Scheduling
• Maximum CPU utilization
• Fair allocation of CPU
• Maximum throughput
• Minimum turnaround time
• Minimum waiting time
• Minimum response time
11. 11
Types of Scheduling Algorithms
• The following algorithms can be used to schedule jobs.
1. First Come First Serve
• It is the simplest algorithm to implement. The process with the earliest arrival time gets the CPU first: the earlier the arrival time, the sooner the process gets the CPU. It is a non-preemptive type of scheduling.
2. Round Robin
• In the Round Robin scheduling algorithm, the OS defines a time quantum (slice). All the processes are executed in a cyclic way: each process gets the CPU for a small amount of time (the time quantum) and then goes back to the ready queue to wait for its next turn. It is a preemptive type of scheduling.
3. Shortest Job First
• The job with the shortest burst time gets the CPU first: the smaller the burst time, the sooner the process gets the CPU. It is a non-preemptive type of scheduling.
12. 12
Types of Scheduling Algorithms
4. Shortest remaining time first
•It is the preemptive form of SJF. In this algorithm, the OS schedules the Job according to
the remaining time of the execution.
5. Priority based scheduling
•In this algorithm, the priority will be assigned to each of the processes. The higher the
priority, the sooner will the process get the CPU. If the priority of the two processes is same
then they will be scheduled according to their arrival time.
6.Multi level Queue Scheduling: A multi-level queue scheduling algorithm partitions the
ready queue into several separate queues. The processes are permanently assigned to one
queue, generally based on some property of the process, such as memory size, process
priority, or process type. Each queue has its own scheduling algorithm.
13. 13
FCFS Scheduling Algorithm
In FCFS Scheduling
The process which arrives first in the ready queue is assigned the CPU first.
In case of a tie, the process with the smaller process ID is executed first.
It is always non-preemptive in nature.
Jobs are executed on a first come, first served basis.
Easy to understand and implement.
Its implementation is based on a FIFO queue.
Poor in performance, as the average waiting time is high.
14. 14
FCFS Scheduling Algorithm
Advantages-
It is simple and easy to understand.
It can be easily implemented using queue data structure.
It does not lead to starvation.
Disadvantages-
It does not consider the priority or burst time of the processes.
It suffers from the convoy effect, i.e. processes with smaller burst times get stuck waiting behind processes with higher burst times that arrived earlier.
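The convoy effect can be seen with a small sketch (example values, not from the slides): the same three jobs, all arriving at time 0, give very different average waiting times depending on whether the long job happens to arrive first.

```python
# A sketch of the convoy effect under FCFS: reordering the same jobs
# changes the average waiting time dramatically. All arrive at time 0.

def avg_waiting_fcfs(bursts):
    """Average waiting time when jobs run in the given FCFS order."""
    t, total_wait = 0, 0
    for burst in bursts:
        total_wait += t        # each job waits for all jobs before it
        t += burst
    return total_wait / len(bursts)

long_first = avg_waiting_fcfs([24, 3, 3])   # (0 + 24 + 27) / 3 = 17.0
short_first = avg_waiting_fcfs([3, 3, 24])  # (0 + 3 + 6) / 3 = 3.0
```

The two short jobs form a "convoy" behind the long one, which is exactly what SJF-style algorithms are designed to avoid.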
16. 16
Round Robin Scheduling Algorithm
CPU is assigned to the process on the basis of FCFS for a fixed amount of time.
This fixed amount of time is called as time quantum or time slice.
After the time quantum expires, the running process is preempted and sent to the ready queue.
Then, the processor is assigned to the next arrived process.
It is always preemptive in nature.
Advantages-
It gives the best performance in terms of average response time.
It is best suited for time sharing system, client server architecture and interactive system.
Disadvantages-
It leads to starvation for processes with larger burst time as they have to repeat the cycle many times.
Its performance heavily depends on time quantum.
Priorities can not be set for the processes.
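The cycle described above can be sketched with a queue (hypothetical process names and burst times, all arriving at time 0): each process runs for at most one time quantum, then is preempted and sent to the back of the ready queue.

```python
from collections import deque

# A sketch of Round Robin scheduling: run each process for at most one
# quantum, requeueing it until its burst time is used up. Example data.

def round_robin(bursts, quantum):
    """bursts: {name: burst time}. Returns each process's completion time."""
    ready = deque(bursts.items())
    t, completion = 0, {}
    while ready:
        name, remaining = ready.popleft()
        run = min(quantum, remaining)
        t += run
        if remaining > run:
            ready.append((name, remaining - run))  # preempted: back of queue
        else:
            completion[name] = t                   # finished within quantum
    return completion

done = round_robin({"P1": 5, "P2": 3, "P3": 2}, quantum=2)
# P3 finishes at time 6, P2 at 9, and P1 (the longest job) last at 10
```

Note how the longest job finishes last, which is the starvation-like behavior the disadvantage above describes.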
19. 19
Shortest Job first(SJF)
• Process which have the shortest burst time are scheduled first.
• If two processes have the same burst time, then FCFS is used to break the tie.
• This is a non-preemptive scheduling algorithm.
• Best approach to minimize waiting time.
• Easy to implement in Batch systems where required CPU time is known in advance.
• Impossible to implement in interactive systems where required CPU time is not known.
• The processer should know in advance how much time process will take.
• Pre-emptive mode of Shortest Job First is called as Shortest Remaining Time First
(SRTF).
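The selection rule above can be sketched as follows (hypothetical process data): among the processes that have already arrived, run the one with the shortest burst time to completion; FCFS (earlier arrival) breaks ties.

```python
# A sketch of non-preemptive SJF: pick the shortest ready job, run it
# to completion, repeat. Arrival/burst times are made-up examples.

def sjf_order(procs):
    """procs: list of (name, arrival, burst). Returns the execution order."""
    pending = list(procs)
    t, order = 0, []
    while pending:
        ready = [p for p in pending if p[1] <= t]
        if not ready:                        # CPU idle until next arrival
            t = min(p[1] for p in pending)
            continue
        nxt = min(ready, key=lambda p: (p[2], p[1]))  # burst, then arrival
        pending.remove(nxt)
        order.append(nxt[0])
        t += nxt[2]                          # runs to completion (no preemption)
    return order

order = sjf_order([("P1", 0, 7), ("P2", 2, 4), ("P3", 4, 1), ("P4", 5, 4)])
# → ["P1", "P3", "P2", "P4"]: after P1, the shortest ready job (P3) runs
```

The preemptive variant (SRTF) differs only in that it re-evaluates this choice on every arrival, using remaining time instead of total burst time.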
20. 20
SJF Scheduling
• Advantages-
• SJF (and its preemptive form SRTF) is optimal and guarantees the minimum average waiting time.
• It provides a standard for other algorithms, since no other algorithm performs better than it.
• Disadvantages-
• It can not be implemented practically since burst time of the processes can not be known
in advance.
• It leads to starvation for processes with larger burst time.
• Priorities can not be set for the processes.
• Processes with larger burst time have poor response time.
24. 24
Process Management in OS
• The operating system is responsible for the following activities in
connection with Process Management:
• Scheduling processes and threads on the CPUs.
• Creating and deleting both user and system processes.
• Suspending and resuming processes.
• Providing mechanisms for process synchronization.
• Providing mechanisms for process communication.
25. 25
Priority Scheduling
• Out of all the available processes, CPU is assigned to the process having the highest priority. In case of a
tie, it is broken by FCFS Scheduling. Priority Scheduling can be used in both preemptive and non-
preemptive mode.
• The waiting time for the process having the highest priority will always be zero in preemptive mode.
• The waiting time for the process having the highest priority may not be zero in non-preemptive mode.
• Priority scheduling in preemptive and non-preemptive mode behaves exactly the same under the following conditions-
• The arrival time of all the processes is the same
• All the processes become available at the same time
• Advantages-
• It considers the priority of the processes and allows the important processes to run first.
• Priority scheduling in pre-emptive mode is best suited for real time operating system.
• Disadvantages-
• Processes with lesser priority may starve for CPU.
• The response time and waiting time of a process cannot be predicted in advance.
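The selection rule itself is simple and can be sketched in a few lines (example data; here a LOWER number means HIGHER priority, which is one common convention): among the ready processes, pick the highest priority, with FCFS breaking ties.

```python
# A sketch of the priority-scheduling selection rule only: highest
# priority wins, earlier arrival breaks ties. Data are example values.

def pick_next(ready):
    """ready: list of (name, arrival, priority). Returns the chosen name."""
    return min(ready, key=lambda p: (p[2], p[1]))[0]

nxt = pick_next([("P1", 0, 3), ("P2", 1, 1), ("P3", 2, 1)])
# → "P2": P2 and P3 tie on priority 1, and P2 arrived earlier (FCFS)
```

In preemptive mode this rule is re-applied on every arrival; in non-preemptive mode only when the CPU becomes free.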
28. 28
Preemptive Priority Scheduling
Now,
Average Turn Around time = (15 + 11 + 1 + 5 + 6) / 5 = 38 / 5 = 7.6 unit
Average waiting time = (11 + 8 + 0 + 0 + 4) / 5 = 23 / 5 = 4.6 unit
30. 30
Multilevel Queue Scheduling
• Let us consider an example of a multilevel queue-scheduling algorithm with five
queues:
• 1. System Processes
• 2. Interactive Processes
• 3. Interactive Editing Processes
• 4. Batch Processes
• 5. Student Processes
• Each queue has absolute priority over lower-priority queues.
• No process in the batch queue, for example, could run unless the queues for system
processes, interactive processes, and interactive editing processes were all empty.
• If an interactive editing process entered the ready queue while a batch process was running, the batch process would be preempted.
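The dispatch rule described above can be sketched using the five queues from the example: queues are scanned from highest to lowest priority, and a process is taken only from the first non-empty queue. Process names are made up for illustration.

```python
# A sketch of multilevel queue dispatch: higher queues have absolute
# priority over lower ones. Queue order follows the five-queue example.

QUEUE_ORDER = ["system", "interactive", "interactive editing", "batch", "student"]

def dispatch(queues):
    """queues: {queue name: list of processes}. Returns (queue, process)."""
    for name in QUEUE_ORDER:
        if queues.get(name):               # absolute priority over lower queues
            return name, queues[name].pop(0)
    return None                            # nothing is ready anywhere

q = {"system": [], "interactive": [], "interactive editing": ["edit1"],
     "batch": ["job1"], "student": ["hw1"]}
# "job1" in the batch queue cannot run while "edit1" waits in a higher queue
```

A second call to `dispatch(q)` would then pick `"job1"` from the batch queue, since all higher queues are empty.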
35. 35
Concept of Process
A process can be defined in various ways:
A program in execution.
An instance of a program running on a computer.
The entity that can be assigned to and executed on a processor.
36. 36
Concept of Process
A program is a passive entity stored on disk (an executable file); a process is active.
A program becomes a process when the executable file is loaded into memory.
Some processes run in background mode (e.g. checking email).
37. 37
Two State Process Model
Process operations (two-state process model):
Process creation
Process termination
38. 38
Process operations- process creation
System initialization:
• When the OS is booted, several system processes are created. They provide system services. They do not need user interaction; they just execute in the background.
Execution of a process-creation system call:
• A running process can issue a system call to create a new process, e.g. in Unix the system call named “fork” is used to create a new process.
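The fork call mentioned above can be demonstrated with a POSIX-only sketch (Python exposes the system call as `os.fork`); the exit status 7 is an arbitrary example value.

```python
import os

# A POSIX-only sketch of process creation via fork: the call returns
# twice, 0 in the child and the child's PID in the parent. The parent
# then waits for the child and reads its exit status.

def create_child():
    pid = os.fork()
    if pid == 0:                       # child: a brand-new process starts here
        os._exit(7)                    # child terminates immediately
    _, status = os.waitpid(pid, 0)     # parent: wait for the child to finish
    return os.WEXITSTATUS(status)

status = create_child()
# status == 7: the parent observed the child's exit code
```

On Windows there is no fork; processes are created with a different primitive (CreateProcess), which is why this sketch is POSIX-only.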
39. 39
Process operations- process creation
A user request to create a new process:
• A user can start a new process by typing a command or double-clicking on an application icon.
Initiation of a batch job:
• This applies only to batch systems, where users submit batch jobs to the system. When enough resources are free to execute the next job, a new job is selected from the batch queue.
40. 40
Process operations- process termination
Normal exit (voluntary):
• The process terminates when it has done its job, i.e. when all the instructions of the program have been executed, or when the user selects an option to terminate it.
Error exit (voluntary):
• The process terminates itself when some error occurs, for example a divide-by-zero error.
41. 41
Process operations- process termination
Fatal error (involuntary):
• Fatal errors are generated by a user mistake in the executing program, such as a “file not found” error.
Killed by another process (involuntary):
• Some other process requests the OS to kill the process. In UNIX this can be done by the “kill” system call, while in Windows it is “TerminateProcess”.
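The "killed by another process" case can be demonstrated with a POSIX-only sketch: the parent forks a child that would sleep for a long time, then terminates it with the kill system call (`os.kill`) and checks that a signal, not a normal exit, ended it. The sleep durations are arbitrary example values.

```python
import os
import signal
import time

# A POSIX-only sketch of involuntary termination: one process (the
# parent) kills another (its child) with the kill system call.

def kill_child():
    pid = os.fork()
    if pid == 0:                           # child: pretends to do long work
        time.sleep(60)
        os._exit(0)
    time.sleep(0.1)                        # let the child start sleeping
    os.kill(pid, signal.SIGTERM)           # another process requests the kill
    _, status = os.waitpid(pid, 0)
    return os.WIFSIGNALED(status) and os.WTERMSIG(status) == signal.SIGTERM

killed = kill_child()
# killed is True: the child died from SIGTERM, not a normal exit
```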
42. 42
Process Relationship
One to one:
• When a single execution of a sequential program is in progress.
• The program consists of a main program and a set of functions; during execution, control flows between the main program and the functions.
• The OS is not aware of the existence of the functions, so the program consists of a single process.
Many to one:
• Many simultaneous executions of a program are in progress.
• The program informs the OS about the parts that are to be executed concurrently, and the OS considers each part a process.
• So one program may have many processes.
• These processes are known as concurrent processes.
43. 43
Components of a Process
Stack: Temporary data like function parameters, return addresses, and local variables are stored in the process stack.
Heap: This is the memory that is dynamically allocated to the process during its execution.
Text: This comprises the program code, as well as the current activity reflected by the value of the program counter and the contents of the processor’s registers.
Data: The global and static variables are included in this section.
44. 44
Process States
• new: The process is being created.
• running: Instructions are being executed.
• waiting: The process is waiting for some event to occur.
• ready: The process is waiting to be assigned to a processor.
• terminated: The process has finished execution.
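The legal moves between these five states can be sketched as a lookup table; a real OS enforces these transitions in its scheduler code.

```python
# A sketch of the process-state transition diagram as a lookup table.

TRANSITIONS = {
    "new": {"ready"},                    # admitted by the long-term scheduler
    "ready": {"running"},                # dispatched by the short-term scheduler
    "running": {"ready", "waiting", "terminated"},  # timeout / wait / exit
    "waiting": {"ready"},                # the awaited event occurred
    "terminated": set(),                 # no way out
}

def can_move(src, dst):
    return dst in TRANSITIONS.get(src, set())

# e.g. a waiting process must go back to ready first; it can never be
# dispatched straight to running: can_move("waiting", "running") is False
```

Note the asymmetry: only a running process can enter waiting, and a waiting process always returns to ready rather than directly to running.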
46. 46
Attributes of a Process
The attributes of a process are used by the operating system to create the process control block (PCB) for it. This is also called the context of the process. The attributes stored in the PCB are described below.
47. 47
Attributes of a Process
1.Process ID: When a process is created, a unique id is assigned to the process which is
used for unique identification of the process in the system.
2. Program counter: The program counter stores the address of the next instruction to be executed, i.e. the instruction at which the process was suspended. The CPU uses this address when the execution of the process is resumed.
3. Process State: The process, from its creation to its completion, goes through various states: new, ready, running, waiting, and terminated.
48. 48
Attributes of a Process
4. Priority: Every process has its own priority. The process with the highest priority among
the processes gets the CPU first. This is also stored on the process control block.
5. General Purpose Registers: Every process has its own set of registers which are used to
hold the data which is generated during the execution of the process.
6. List of open files: During the Execution, Every process uses some files which need to be
present in the main memory. OS also maintains a list of open files in the PCB.
7. List of open devices: The OS also maintains a list of all open devices used during the execution of the process.
49. 49
Schedulers
The process manager’s activity is process scheduling, which involves removing the
running process from the CPU and selecting another process based on a specific
strategy.
The scheduler’s purpose is to implement the virtual machine so that each process
appears to be running on its own computer to the user.
50. 50
Schedulers
1. Long term scheduler
•The job scheduler is another name for Long-Term scheduler.
• It selects processes from the pool (or the secondary memory) and then maintains them in
the primary memory’s ready queue.
•The Multiprogramming degree is mostly controlled by the Long-Term Scheduler.
•The goal of the Long-Term scheduler is to select the best mix of IO and CPU bound
processes from the pool of jobs.
•If the job scheduler selects more I/O-bound processes, the jobs may spend most of their time waiting for I/O, the CPU will be idle for the majority of the time, and multiprogramming will be reduced as a result. Hence, the Long-Term scheduler’s job is crucial and could have a long-term impact on the system.
51. 51
Schedulers
Short term scheduler:
•CPU scheduler is another name for Short-Term scheduler. It chooses one job from the
ready queue and then sends it to the CPU for processing.
•To determine which work will be dispatched for execution, a scheduling method is
utilized.
•The Short-Term scheduler’s task can be essential in the sense that if it chooses a job with a
long CPU burst time, all subsequent jobs will have to wait in a ready queue for a long
period.
52. 52
Schedulers
Medium term scheduler:
•The switched-out processes are handled by the Medium-Term scheduler.
•If the running state processes require some IO time to complete, the state must be changed
from running to waiting.
•It stops the process from executing in order to make space for other processes.
•Swapped out processes are examples of this, and the operation is known as swapping.
•The Medium-Term scheduler here is in charge of stopping and starting processes.
•It reduces the degree of multiprogramming. To have a good mix of processes in the ready queue, swapping is required.
53. 53
Process Queues
1. Job Queue: Initially, all processes are stored in the job queue, which is maintained in secondary memory. The long-term scheduler (job scheduler) picks some of the jobs and puts them in primary memory.
2. Ready Queue: The ready queue is maintained in primary memory. The short-term scheduler picks a job from the ready queue and dispatches it to the CPU for execution.
3. Waiting Queue: When a process needs some I/O operation in order to complete its execution, the OS changes the state of the process from running to waiting. The context (PCB) associated with the process is stored in the waiting queue and will be used by the processor when the process finishes its I/O.
54. 54
Process Control Block (PCB)
• A Process Control Block is a data structure maintained by
the Operating System for every process.
55. 55
Process Control Block (PCB)
Process State: The current state of the process, i.e. whether it is ready, running, waiting, etc.
Process privileges: This is required to allow/disallow access to system resources.
Process ID: Unique identification for each process in the operating system.
Pointer: A pointer to the parent process.
Program Counter: A pointer to the address of the next instruction to be executed for this process.
CPU registers: The various CPU registers whose contents need to be saved when the process leaves the running state.
CPU Scheduling Information: Process priority and other scheduling information required to schedule the process.
Memory management information: This includes information on the page table, memory limits, and segment table, depending on the memory scheme used by the operating system.
Accounting information: This includes the amount of CPU time used for process execution, time limits, execution ID, etc.
I/O status information: This includes the list of I/O devices allocated to the process.
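The PCB fields listed above can be sketched as a data structure. The field names and defaults below are illustrative only, not any real kernel's PCB layout.

```python
from dataclasses import dataclass, field

# A sketch of a Process Control Block as a dataclass; every field
# mirrors one of the PCB attributes described on the slide.

@dataclass
class PCB:
    pid: int                                        # Process ID
    state: str = "new"                              # process state
    program_counter: int = 0                        # next instruction address
    priority: int = 0                               # scheduling information
    registers: dict = field(default_factory=dict)   # saved CPU registers
    open_files: list = field(default_factory=list)  # I/O status information
    memory_limits: tuple = (0, 0)                   # memory-management info
    cpu_time_used: int = 0                          # accounting information

pcb = PCB(pid=42)
# a freshly created process starts in the "new" state with empty tables
```

Using `default_factory` for the mutable fields ensures each PCB gets its own register set and file list rather than sharing one.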
56. 56
Process Address Space
An address space may denote a range of
physical or virtual addresses which can be
accessed by a processor, while a process
address space is the set of logical addresses
that a process references in its code.
For example, with 32-bit addresses the user
addresses may range from 0 to 0x7fffffff,
that is, 2^31 possible addresses.
The OS also has the additional job of mapping
the logical addresses to the actual physical
addresses.
57. 57
Process Address Space
Components of a Process Address Space
•The total amount of memory a system
can allocate to a process depends on several factors.
•The overall space may include sections such as
stack space, the program size, memory-mapped
files, shared libraries, as well as memory
allocated from the heap.
•Memory-allocation policies and the address spaces
used by the various operating systems are
complicated, and they may also differ from one
operating system to another.
58. 58
Threads
Definition :-
“A thread is the smallest unit of execution.”
▪ Figure : A process with two threads
of execution on a single processor
machine.
▪ A thread is a light-weight process.
60. 60
Thread in Operating System
A thread is a single sequence stream with in a process.
Threads are also called lightweight processes as they possess some of the
properties of processes.
Each thread belongs to exactly one process.
In an operating system that supports multithreading, a process can consist of
many threads.
A thread refers to an execution unit in the process that has its own program
counter, stack, as well as a set of registers.
61. 61
Thread in Operating System
A thread is a single sequential flow of
execution of tasks of a process so it is also
known as thread of execution or thread of
control.
Each thread of the same process makes use of
a separate program counter and a stack of
activation records and control blocks.
Components of Threads
1.Stack space
2.Register set
3.Program counter
62. 62
Advantages of Threading
Responsiveness: A multithreaded application increases
responsiveness to the user.
Resource Sharing: Resources like code and data are shared
between threads, thus allowing a multithreaded application to
have several threads of activity within the same address space.
Increased concurrency: Threads may run in parallel on
different processors, increasing concurrency on a multiprocessor
machine.
Lesser cost: It costs less to create and context-switch threads
than processes.
Lesser context-switch time: Threads take lesser context-switch
time than processes.
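The resource-sharing advantage above can be demonstrated with a small sketch: two threads of the same process update one shared counter, guarded by a lock. The iteration count is an arbitrary example value.

```python
import threading

# A sketch of resource sharing between threads: both threads see the
# same global counter because they share one address space.

counter = 0
lock = threading.Lock()

def work(n):
    global counter
    for _ in range(n):
        with lock:               # threads share the same address space,
            counter += 1         # so the update must be synchronized

threads = [threading.Thread(target=work, args=(10_000,)) for _ in range(2)]
for t in threads:
    t.start()
for t in threads:
    t.join()
# counter == 20_000: no updates were lost thanks to the lock
```

Two separate processes could not share `counter` this way; they would need explicit inter-process communication, which is precisely the "lesser cost" point above.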
63. 63
Types of Threads
▪ There are two types of threads to be managed in a modern system:
User threads and kernel threads.
▪ User threads are supported above the kernel, without kernel
support. These are the threads that application programmers would
put into their programs.
▪ Kernel threads are supported within the kernel of the OS itself. All
modern OS’s support kernel level threads, allowing the kernel to
perform multiple simultaneous tasks and/or to service multiple kernel
system calls simultaneously.
64. 64
Types of Thread
In the operating system, there are two types of threads:
Kernel-level threads.
User-level threads.
65. 65
User Level Thread
User Level thread (ULT) :
A user-level library provides support for creating, managing, and scheduling threads.
The kernel is not aware of the existence of threads.
Thread creation and scheduling are done in user space.
User-level threads are fast to create and manage.
1. Advantages of ULT –
1. Simple representation, since a thread has only a program counter, register set, and stack space.
2. Simple to create, since no kernel intervention is needed.
3. Thread switching is fast, since no OS calls need to be made.
2. Limitations of ULT –
1. Little or no coordination between the threads and the kernel.
2. If one thread causes a page fault, the entire process blocks.
66. 66
Kernel Level Thread
Kernel-Level Thread (KLT): Supported directly by the operating system
kernel.
Creation, scheduling, and management are done by the kernel in kernel space.
They are slower to create and manage.
On a multiprocessor, the kernel can schedule threads on different processors.
Advantages of KLT –
Since the kernel has full knowledge about the threads in the system, the scheduler
may decide to give more time to processes having a large number of threads.
Good for applications that frequently block.
Limitations of KLT
Slow and inefficient.
Each thread requires a thread control block, so there is an overhead.
69. 69
Multiprocessing vs Multithreading
▪ Multiprocessing is the use of two or more central
processing units (CPUs) within a single computer system.
▪ Multi-threading is a widespread programming and
execution model that allows multiple threads to exist
within the context of a single process.
70. 70
Multithreading Models
Some operating systems provide a combined user-level and kernel-level thread
facility.
Solaris is a good example of this combined approach.
In a combined system, multiple threads within the same application can run in parallel
on multiple processors and a blocking system call need not block the entire process.
There are three multithreading models:
Many-to-many relationship.
Many-to-one relationship.
One-to-one relationship.
71. 71
Many to Many Model
• In this model, many user-level threads are multiplexed onto a smaller or
equal number of kernel threads.
• The number of kernel threads may be specific to either a particular application or a
particular machine.
• In this model, developers can create as many user threads as necessary, and the
corresponding kernel threads can run in parallel on a multiprocessor.
72. 72
Many to One Model
The many-to-one model maps many user-level threads to one kernel-level thread.
Thread management is done in user space by the thread library.
When a thread makes a blocking system call, the entire process is blocked. Only one thread can access
the kernel at a time, so multiple threads are unable to run in parallel on multiprocessors.
If user-level thread libraries are implemented on an operating system whose kernel does not support
threads, the many-to-one model is used.
73. 73
One to One Model
• There is one-to-one relationship of user-level thread to the kernel-level thread.
• This model provides more concurrency than the many-to-one model.
• It also allows another thread to run when a thread makes a blocking system call.
• It supports multiple threads executing in parallel on multiprocessors.