UNIT III
 Non Contiguous memory allocation
 Paging
 Segmentation
 Virtual Memory
 Demand paging
 Page fault
 Page replacement algorithms
FIFO
LRU
Optimal
 Thrashing
 Page fault frequency
Unit III
Memory management
 Address binding
 Logical and physical address
space
 Dynamic loading and linking
 Contiguous memory allocation
 static and dynamic
partitioned memory
 Fragmentation
 Swapping
 Relocation
 Compaction
 Protection
 Whenever the programs are to be executed, they should be present in the
main memory.
 In multiprogramming, many programs are present in the main memory, but
the capacity of memory is limited.
 All of the programs cannot be present in memory at the same time.
 Hence, memory management is performed so that all of the programs get
space in memory and are executed from time to time.
 Various memory management schemes exist. The selection of a particular
scheme depends on many factors, especially the hardware design of the system.
Memory management
 Main memory is the temporary read/write memory of a computer. It is a set of
contiguous locations.
 Each location has a specific address. Each address is a binary address. For
the convenience of the programmer and user, they are represented in
hexadecimal numbers.
 The programs, which are a set of instructions, are stored in secondary
memory. When they need to be executed, they are loaded to main memory
from the secondary memory.
 In main memory, instructions are stored in various locations according to
space availability.
Main Memory
 A group of memory locations is called an address space.
 Address space can be of two types:
 physical address space
 logical address space
Address Space
 The address of any location in physical memory is
called the physical address.
 Physical memory means the main memory or Random
Access Memory (RAM). In main memory every location
has an address.
 Whenever any data or information is read from it, its
address is given. That address is called the physical
address.
 A group of many physical addresses is called the
physical address space.
Physical Address Space
 The physical address is provided by the
hardware.
 It is a binary number made with a
combination of 0 and 1.
 It refers to a particular cell or location of
primary memory.
 Such addresses have fixed limits, normally ranging from zero up to
some maximum value.
 Not all physical addresses belong to the
system’s main memory.
Physical Address Space (contd.)
 The address of any location of virtual memory is called the logical address.
 It is also known as the virtual address.
 The group of many logical addresses is called the logical address space or
virtual address space.
Logical Address Space
 The logical address is generated by the CPU; the operating system manages its mapping onto physical memory.
 Logical address space may not be contiguous in memory.
 It might also be present in the form of segments.
 Sometimes logical addresses might be of the same value as a physical
address.
 The logical address is mapped to get the physical address.
 This mapping is done using address translation.
 The physical address space and logical address space are independent of
each other.
Logical Address Space (contd.)
Memory-Management Unit (MMU)
 Hardware device that maps virtual to physical address.
 In MMU scheme, the value in the relocation register is added
to every address generated by a user process at the time it is
sent to memory.
 The user program deals with logical addresses; it never sees
the real physical addresses.
Address Binding
• The CPU generates a logical address
when a process wants to address a
location.
• The value of the relocation register is
added to the logical address.
• The new value which is produced as
output is the value of a physical address.
• This physical address will point to a
location in main memory.
• This location will be used as a pointer to
the main memory, where the operation is
to be performed.
• Mapping of the virtual address into a
physical address is also known as
address translation or address binding.
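As a concrete illustration of this mapping, here is a minimal Python sketch of relocation-register translation. The register values are made-up examples, not the interface of any particular MMU.

```python
# Minimal sketch of relocation-register address translation (illustrative values).

RELOCATION_REGISTER = 14000   # smallest physical address of the process's partition (assumed)
LIMIT_REGISTER = 3000         # size of the process's logical address space (assumed)

def translate(logical_address: int) -> int:
    """Map a CPU-generated logical address to a physical address."""
    if not (0 <= logical_address < LIMIT_REGISTER):
        raise MemoryError("trap: logical address outside the process's address space")
    return RELOCATION_REGISTER + logical_address

print(translate(346))    # 14346: physical address sent to memory
print(translate(2999))   # 16999: last legal address for this process
```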
Binding of Instructions and Data to Memory
 Compile time: If memory location known a priori,
absolute code can be generated; must recompile code if
starting location changes.
 Load time: Must generate relocatable code if memory
location is not known at compile time.
 Execution time: Binding delayed until run time if the
process can be moved during its execution from one
memory segment to another. Need hardware support for
address maps (e.g., base and limit registers).
Address binding of instructions and data to memory addresses can
happen at three different stages:
Dynamic Loading
 Routine is not loaded until it is called
 Better memory-space utilization; unused routine is never
loaded.
 Useful when large amounts of code are needed to handle
infrequently occurring cases.
 No special support from the operating system is required; dynamic loading is
implemented through program design.
Dynamic Linking
 Linking postponed until execution time.
 Small piece of code, stub, used to locate the appropriate memory-resident library
routine.
 Stub replaces itself with the address of the routine, and executes the routine.
 Operating system support is needed to check whether the routine is in the process’s memory address space.
Overlays
 Keep in memory only those instructions and data that are needed at any given
time.
 Needed when process is larger than amount of memory allocated to it.
 Implemented by user, no special support needed from operating system,
programming design of overlay structure is complex
Swapping
 A process can be swapped temporarily out of memory to a backing store, and then
brought back into memory for continued execution.
 Backing store – fast disk large enough to accommodate copies of all memory
images for all users; must provide direct access to these memory images.
 Roll out, roll in – swapping variant used for priority-based scheduling algorithms;
lower-priority process is swapped out so higher-priority process can be loaded and
executed.
 Major part of swap time is transfer time; total transfer time is directly proportional to
the amount of memory swapped.
 Modified versions of swapping are found on many systems, i.e., UNIX and
Microsoft Windows.
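To see why transfer time dominates, a rough back-of-the-envelope estimate helps. All figures below (process size, transfer rate, latency) are assumed example values, not measurements of a real system.

```python
# Rough estimate of swap time for a single process (all figures are assumptions).

process_size_kb = 1024                       # 1 MB process image (assumed)
transfer_rate_kb_per_ms = 5 * 1024 / 1000    # assume a 5 MB/s backing store
latency_ms = 8                               # assumed average seek + rotational latency

transfer_ms = process_size_kb / transfer_rate_kb_per_ms
one_way_ms = transfer_ms + latency_ms
swap_ms = 2 * one_way_ms                     # swap the old process out, then the new one in

print(f"transfer: {transfer_ms:.0f} ms, one way: {one_way_ms:.0f} ms, total swap: {swap_ms:.0f} ms")
```

Doubling the process size doubles the transfer term, which is why total swap time is roughly proportional to the amount of memory swapped.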
Schematic View of Swapping
Contiguous Allocation
 Main memory is usually divided into two partitions:
 Resident operating system, usually held in low memory with interrupt vector.
 User processes then held in high memory.
 Single-partition allocation
 Relocation-register scheme used to protect user processes from each other, and from
changing operating-system code and data.
 Relocation register contains value of smallest physical address; limit register contains
range of logical addresses – each logical address must be less than the limit register.
Contiguous Allocation (Cont.)
 Multiple-partition allocation
 Hole – block of available memory; holes of various size are
scattered throughout memory.
 When a process arrives, it is allocated memory from a hole large
enough to accommodate it.
 Operating system maintains information about:
a) allocated partitions b) free partitions (hole)
[Figure: successive memory snapshots for multiple-partition allocation. The OS and processes 5 and 2 stay resident while process 8 is swapped out; process 9 and process 10 are then loaded into the freed holes.]
Dynamic Storage-Allocation Problem
 First-fit: Allocate the first hole that is big enough.
 Best-fit: Allocate the smallest hole that is big enough; must
search entire list, unless ordered by size. Produces the
smallest leftover hole.
 Worst-fit: Allocate the largest hole; must also search the entire
list. Produces the largest leftover hole.
How to satisfy a request of size n from a list of free holes.
First-fit and best-fit better than worst-fit in terms of speed and
storage utilization.
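The three placement policies can be compared with a short simulation. The hole list and request size below are made-up; each function returns the index of the chosen hole, or None when no hole is large enough.

```python
# First-fit, best-fit and worst-fit over a list of free-hole sizes (illustrative only).

def first_fit(holes, request):
    for i, size in enumerate(holes):
        if size >= request:
            return i
    return None

def best_fit(holes, request):
    candidates = [(size, i) for i, size in enumerate(holes) if size >= request]
    return min(candidates)[1] if candidates else None   # smallest hole that fits

def worst_fit(holes, request):
    candidates = [(size, i) for i, size in enumerate(holes) if size >= request]
    return max(candidates)[1] if candidates else None   # largest hole overall

holes = [100, 500, 200, 300, 600]   # free hole sizes in KB (assumed)
for policy in (first_fit, best_fit, worst_fit):
    print(policy.__name__, "chooses hole index", policy(holes, 212))
# first_fit -> 1 (500 KB), best_fit -> 3 (300 KB), worst_fit -> 4 (600 KB)
```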
Fragmentation
 External fragmentation – total memory space exists to satisfy a request, but it is
not contiguous.
 Internal fragmentation – allocated memory may be slightly larger than requested
memory; this size difference is memory internal to a partition, but not being used.
 Reduce external fragmentation by compaction
 Shuffle memory contents to place all free memory together in one large block.
 Compaction is possible only if relocation is dynamic, and is done at execution time.
 I/O problem
 Latch job in memory while it is involved in I/O.
 Do I/O only into OS buffers.
External Fragmentation
[Figure: memory blocks holding P1, P2, P3 and P4, with free blocks scattered between them.]
If P5 requires 5 blocks of storage, it will
not be allocated memory blocks as 5
contiguous blocks are not available.
Internal Fragmentation
Suppose the block size is 512KB
If P1 requires 200KB, then the rest of the
memory within the block remains unused
[Figure: P1 occupies 200 KB of a 512 KB block; the remaining memory within the block stays unused. P2 and P3 occupy other blocks.]
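A quick calculation gives the internal fragmentation in the example above (512 KB block, 200 KB request):

```python
# Internal fragmentation with fixed-size blocks (figures taken from the example above).

block_kb = 512
request_kb = 200

wasted_kb = block_kb - request_kb          # memory inside the block that P1 cannot use
print(f"internal fragmentation for P1: {wasted_kb} KB")   # 312 KB
```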
 Relocating the programs to new memory areas
 Used to eliminate external fragmentation by compaction
 OS must be updated with the new location address
Relocation
Relocation
[Figure: memory before and after relocation. P4 is relocated to a new area so that the space it previously occupied becomes free.]
 Compaction is bringing free space together into one place in order that free
memory is available in a contiguous manner.
 When two adjacent holes appear then they can be merged together to form a
single big hole.
 This larger hole can then be used by a process with a large memory
requirement.
 Another method is to relocate the processes, thus creating free space in a
contiguous manner.
 Compaction is performed only occasionally, as free space is also created as soon as processes
terminate and leave the memory.
Compaction
Compaction
[Figure: memory before and after compaction. P1, P2, P3 and P4 are moved together so that the free space forms one contiguous block.]
Memory Protection
 Memory protection implemented by associating protection bit with each frame.
 Valid-invalid bit attached to each entry in the page table:
 “valid” indicates that the associated page is in the process’ logical address space, and is
thus a legal page.
 “invalid” indicates that the page is not in the process’ logical address space.
 Memory is allocated in a non-contiguous way
 It can be classified as:
 Paging
 Segmentation
Non Contiguous memory allocation
 In the paging technique of memory management, the physical memory is divided
into fixed-sized blocks.
 Paging is a method of non-contiguous memory allocation.
 In paging, the logical address space is divided into fixed-sized blocks known as
pages.
 The physical address space is also divided into fixed sized blocks known as
frames.
 Every page must be mapped to a frame.
Paging
 Divides logical memory into blocks of the same size called pages.
 The size of a frame is usually a power of 2.
 The frame size usually ranges between 512 bytes and 8,192 bytes.
 Management technique has to keep track of all free frames.
 If a program needs n pages of memory, then n free frames are found in the
memory and the program is loaded there.
 Pages are scattered throughout the memory.
 A page table is maintained to translate logical addresses into physical
addresses.
 Paging suffers from internal fragmentation.
 Paging does not suffer from external fragmentation.
Paging (contd.)
Paging (contd.)
The address generated by the CPU is divided into two parts: page number
and page offset.
• Page number (p) – used as an index into a page table that contains the base
address of each page in physical memory.
• Page offset (d) – combined with the base address to define the physical
memory address that is sent to the memory unit.
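A small sketch of this translation, assuming a 1 KB page size and a made-up page table; because the page size is a power of 2, the division below would be a shift and mask in hardware.

```python
# Logical-to-physical translation with paging (illustrative page table and sizes).

PAGE_SIZE = 1024                          # 1 KB pages (assumed, a power of 2)
page_table = {0: 5, 1: 6, 2: 1, 3: 2}     # page number -> frame number (assumed)

def translate(logical_address: int) -> int:
    p, d = divmod(logical_address, PAGE_SIZE)   # page number and page offset
    frame = page_table[p]                        # raises KeyError for an invalid page
    return frame * PAGE_SIZE + d                 # base address of the frame + offset

print(translate(2100))   # page 2, offset 52 -> frame 1 -> physical address 1076
```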
Segmentation
 Memory-management scheme that supports user view of memory.
 A program is a collection of segments. A segment is a logical unit such as:
main program,
procedure,
function,
local variables, global variables,
common block,
stack,
symbol table, arrays
Logical View of Segmentation
[Figure: segments 1–4 of the user space mapped onto non-contiguous areas of physical memory.]
Segmentation Architecture
 Logical address consists of a two-tuple:
<segment-number, offset>.
 Segment table – maps two-dimensional logical addresses into one-dimensional
physical addresses; each table entry has:
 base – contains the starting physical address where the segments reside in
memory.
 limit – specifies the length of the segment.
 Segment-table base register (STBR) points to the segment table’s location
in memory.
 Segment-table length register (STLR) indicates number of segments used
by a program;
segment number s is legal if s < STLR.
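The limit check and base addition can be sketched as follows; the segment table values are illustrative examples only.

```python
# Logical <segment, offset> to physical address with a base/limit segment table (example values).

segment_table = [
    (1400, 1000),   # segment 0: (base, limit), assumed values
    (6300, 400),    # segment 1
    (4300, 1100),   # segment 2
]

def translate(segment: int, offset: int) -> int:
    if segment >= len(segment_table):            # STLR check: segment number must be legal
        raise MemoryError("trap: invalid segment number")
    base, limit = segment_table[segment]
    if offset >= limit:                          # offset must fall inside the segment
        raise MemoryError("trap: offset beyond segment limit")
    return base + offset

print(translate(2, 53))    # 4353
print(translate(1, 399))   # 6699, the last legal byte of segment 1
```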
Segmentation Architecture (Cont.)
 Relocation
 dynamic
 by segment table
 Sharing
 shared segments
 same segment number
 Allocation
 first fit/best fit
 external fragmentation
Segmentation Architecture (Cont.)
 Protection. With each entry in segment table associate:
 validation bit = 0 ⇒ illegal segment
 read/write/execute privileges
 Protection bits associated with segments; code sharing occurs at segment
level.
 Since segments vary in length, memory allocation is a dynamic storage-
allocation problem.
Sharing of segments
Virtual Memory
 Background
 Demand Paging
 Performance of Demand Paging
 Page Replacement
 Page-Replacement Algorithms
 Allocation of Frames
 Thrashing
 Other Considerations
 Demand Segmentation
Background
 Virtual memory – separation of user logical memory from
physical memory.
 Only part of the program needs to be in memory for execution.
 Logical address space can therefore be much larger than physical
address space.
 Need to allow pages to be swapped in and out.
 Virtual memory can be implemented via:
 Demand paging
 Demand segmentation
Demand Paging
 Bring a page into memory only when it is needed.
 Less I/O needed
 Less memory needed
 Faster response
 More users
 Page is needed ⇒ reference to it
 invalid reference ⇒ abort
 not-in-memory ⇒ bring to memory
Valid-Invalid Bit
 With each page table entry a valid–invalid bit is
associated
(1 ⇒ in-memory, 0 ⇒ not-in-memory)
 Initially valid–invalid bit is set to 0 on all entries.
 During address translation, if valid–invalid bit in
page table entry is 0 ⇒ page fault.
[Figure: a page table whose first four entries hold frame numbers with valid-invalid bit 1; the remaining entries are marked 0 (invalid).]
Page Fault
 If there is ever a reference to a page, the first reference will trap to the
OS ⇒ page fault
 OS looks at another table to decide:
 Invalid reference ⇒ abort.
 Just not in memory.
 Get empty frame.
 Swap page into frame.
 Reset tables, validation bit = 1.
 Restart instruction
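These steps can be condensed into a small demand-paging sketch. The frame count and the load_from_backing_store helper are hypothetical placeholders, not real OS interfaces.

```python
# Sketch of demand paging with a valid-invalid bit per page-table entry (hypothetical helpers).

NUM_FRAMES = 4
free_frames = list(range(NUM_FRAMES))
page_table = {}          # page number -> (frame, valid_bit)

def load_from_backing_store(page):
    """Stand-in for the disk read that brings the page into memory."""
    pass

def access(page):
    entry = page_table.get(page)
    if entry and entry[1] == 1:
        return entry[0]                      # valid: return the frame number
    # valid bit is 0 -> page fault: trap to the OS
    if not free_frames:
        raise MemoryError("no free frame: page replacement needed")
    frame = free_frames.pop(0)               # 1. get an empty frame
    load_from_backing_store(page)            # 2. swap the page into the frame
    page_table[page] = (frame, 1)            # 3. reset the table, valid bit = 1
    return frame                             # 4. restart the instruction

print(access(7), access(0), access(7))       # two faults, then a hit on page 7
```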
What happens if there is no free frame?
 Page replacement – find some page in memory, but not really in use, swap
it out.
 algorithm
 performance – want an algorithm which will result in minimum number of page
faults.
 Same page may be brought into memory several times.
Performance of Demand Paging
 Page Fault Rate p, where 0 ≤ p ≤ 1.0
 if p = 0 no page faults
 if p = 1, every reference is a fault
 Effective Access Time (EAT)
EAT = (1 – p) × memory access time
      + p × (page fault overhead + swap page out + swap page in + restart overhead)
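Plugging sample numbers into this formula shows how sensitive EAT is to the page-fault rate. The memory access time and fault service time below are assumptions chosen for illustration.

```python
# Effective access time for demand paging (all timing figures are assumptions).

memory_access_ns = 200            # assumed main-memory access time
fault_service_ns = 8_000_000      # assumed total page-fault service time (8 ms)

def eat(p):
    """EAT = (1 - p) * memory access + p * page-fault service time."""
    return (1 - p) * memory_access_ns + p * fault_service_ns

for p in (0.0, 0.001, 0.0001):
    print(f"p = {p}: EAT = {eat(p):,.1f} ns")
# Even one fault per 1,000 accesses raises effective access from 200 ns to about 8,200 ns.
```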
Page Replacement Algorithms
 In an operating system that uses paging for memory management, a page
replacement algorithm is needed to decide which page should be replaced
when a new page comes in.
 Page Fault – A page fault happens when a running program accesses a
memory page that is mapped into the virtual address space, but not
loaded in physical memory.
 Since actual physical memory is much smaller than virtual memory, page
faults happen. In case of page fault, Operating System might have to
replace one of the existing pages with the newly needed page.
 Different page replacement algorithms suggest different ways to decide
which page to replace. The target for all algorithms is to reduce the
number of page faults.
First-In-First-Out (FIFO) Algorithm
 This is the simplest page replacement algorithm.
 In this algorithm, the operating system keeps track of all pages in
memory in a queue, with the oldest page at the front of the queue.
 When a page needs to be replaced, the page at the front of the queue is
selected for removal.
First-In-First-Out (FIFO) Algorithm
Consider page reference string 1, 3, 0, 3, 5, 6, 3 with 3 page frames.
Number of page faults = 6
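A short simulation confirms the 6 faults for this string; the same function also reproduces the frame counts used in the Belady's anomaly discussion below.

```python
# FIFO page replacement: count page faults for a reference string.
from collections import deque

def fifo_page_faults(references, num_frames):
    frames = deque()          # oldest page at the left
    faults = 0
    for page in references:
        if page not in frames:
            faults += 1
            if len(frames) == num_frames:
                frames.popleft()              # evict the oldest page
            frames.append(page)
    return faults

print(fifo_page_faults([1, 3, 0, 3, 5, 6, 3], 3))                     # 6
print(fifo_page_faults([1, 2, 3, 4, 1, 2, 5, 1, 2, 3, 4, 5], 3))      # 9
print(fifo_page_faults([1, 2, 3, 4, 1, 2, 5, 1, 2, 3, 4, 5], 4))      # 10 (Belady's anomaly)
```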
First-In-First-Out (FIFO) Algorithm
Consider page reference string 1, 2, 3, 3, 0, 3, 2, 4, 5, 6, 3 with 4 frames. Implement
FIFO Page Replacement Algorithm and find the number of page faults.
Belady’s Anomaly
 Belady’s anomaly proves that it is possible to have more page
faults when increasing the number of page frames while using
the First in First Out (FIFO) page replacement algorithm.
 Consider reference string 3, 2, 1, 0, 3, 2, 4, 3, 2, 1, 0, 4 and 3
frames. If FIFO is implemented how many page faults occur?
 If we increase frames to 4, how many page faults occur?
Belady’s Anomaly
 Demonstrate Belady’s anomaly for the reference string:
1,2,3,4,1,2,5,1,2,3,4,5
No. of frames    No. of page faults
2                12
3                9
4                10
5                5
6                5
7                5
Optimal Algorithm
In this algorithm, the page that will not be used for the longest duration of time
in the future is replaced.
Example: Consider the page reference string 7, 0, 1, 2, 0, 3, 0, 4, 2, 3, 0, 3, 2, 3 with 4
page frames.
Number of page faults = 6
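A sketch of the optimal (farthest-future-use) policy: on a fault, it evicts the resident page whose next use is farthest away, or a page that is never used again. Run on the example above, it reproduces the 6 faults.

```python
# Optimal (farthest-future-use) page replacement.

def optimal_page_faults(references, num_frames):
    frames = []
    faults = 0
    for i, page in enumerate(references):
        if page in frames:
            continue
        faults += 1
        if len(frames) < num_frames:
            frames.append(page)
            continue
        future = references[i + 1:]
        # Evict the page not needed for the longest time (never-used pages first).
        victim = max(frames, key=lambda q: future.index(q) if q in future else float("inf"))
        frames[frames.index(victim)] = page
    return faults

refs = [7, 0, 1, 2, 0, 3, 0, 4, 2, 3, 0, 3, 2, 3]
print(optimal_page_faults(refs, 4))   # 6
```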
Optimal Algorithm
Consider page reference string 1, 2, 3, 4 , 0, 1, 3, 2, 4, 5, 6, 3 with 4 frames. Implement
Optimal Page Replacement Algorithm and find the number of page faults.
Optimal Algorithm
Consider page reference string 1, 2, 3, 4 , 0, 5, 6, 3, 2, 1, 0, 1, 2 with 4 frames.
Implement Optimal Page Replacement Algorithm and find the number of page faults.
Optimal page replacement is ideal but not possible in practice, as the operating
system cannot know future requests. It is used as a benchmark against which other
replacement algorithms can be analyzed.
Least Recently Used (LRU) Algorithm
 In this algorithm, the page that has been least recently used is replaced.
 Example: Consider the page reference string 7, 0, 1, 2, 0, 3, 0, 4, 2, 3, 0, 3, 2, 3 with 4
page frames.
Number of page faults = 6
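LRU can be modeled with an ordered structure in which every hit moves the page to the most-recently-used end. This sketch uses Python's OrderedDict and reproduces the 6 faults of the example.

```python
# LRU page replacement using an OrderedDict as a recency queue.
from collections import OrderedDict

def lru_page_faults(references, num_frames):
    frames = OrderedDict()       # keys ordered from least to most recently used
    faults = 0
    for page in references:
        if page in frames:
            frames.move_to_end(page)          # hit: mark as most recently used
            continue
        faults += 1
        if len(frames) == num_frames:
            frames.popitem(last=False)        # evict the least recently used page
        frames[page] = True
    return faults

refs = [7, 0, 1, 2, 0, 3, 0, 4, 2, 3, 0, 3, 2, 3]
print(lru_page_faults(refs, 4))   # 6
```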
Least Recently Used (LRU) Algorithm
Consider page reference string 1, 2, 3, 4 , 0, 1, 3, 2, 4, 5, 6, 3 with 4 frames. Implement
LRU Page Replacement Algorithm and find the number of page faults.
Counting Algorithms
 Keep a counter of the number of references that have been made to each page.
 Least Frequently Used (LFU) Algorithm: replaces page with smallest count.
 Most Frequently Used (MFU) Algorithm: based on the argument that the page with
the smallest count was probably just brought in and has yet to be used.
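A counting policy keeps a reference counter per page; LFU evicts the page with the smallest count and MFU the one with the largest. Tie-breaking is not specified on the slide, so this sketch breaks ties by arrival order; answers to the exercises below can vary with the tie-breaking rule.

```python
# LFU / MFU page replacement with per-page reference counters (ties broken by arrival order).

def counting_page_faults(references, num_frames, most_frequent=False):
    frames = []                       # resident pages in arrival order
    counts = {}
    faults = 0
    for page in references:
        if page in frames:
            counts[page] += 1
            continue
        faults += 1
        if len(frames) == num_frames:
            pick = max if most_frequent else min
            victim = pick(frames, key=lambda q: counts[q])   # MFU or LFU victim
            frames.remove(victim)
            del counts[victim]
        frames.append(page)
        counts[page] = 1
    return faults

refs = [1, 1, 2, 3, 3, 3, 4, 2, 1, 2, 2, 2, 1, 4, 2, 1]   # LFU exercise string
print("LFU faults:", counting_page_faults(refs, 3))
print("MFU faults:", counting_page_faults(refs, 3, most_frequent=True))
# Note: the MFU exercise on the next slide uses a slightly different reference string.
```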
Counting Algorithms
 Implement Least Frequently Used (LFU) Algorithm on reference string 1, 1, 2, 3, 3,
3, 4, 2, 1, 2, 2, 2, 1, 4, 2, 1 with 3 frames. How many page faults occur?
 Implement Most Frequently Used (MFU) Algorithm on reference string 1, 1, 2, 3, 3,
3, 4, 2, 1, 2, 2, 2, 1, 5, 2, 1 with 3 frames. How many page faults occur?
Practice Question
 Consider the following page reference string:
1,2,3,4,2,1,5,6,2,1,2,3,7,6,3,2,1,2,3,6
How many page faults would occur using LRU, FIFO and Optimal Page replacement
algorithms using 1,2,3,4,5,6 and 7 frames?
LRU
No. of frames    Page faults    Page fault rate
1                20             (20/20) × 100 = 100%
2                18
3                15
4                10
5                8
6                7
7                7

FIFO
No. of frames    Page faults
1                20
2                18
3                16
4                14
5                10
6                10
7                7

Optimal
No. of frames    Page faults
1                20
2                15
3                11
4                8
5                7
6                7
7                7
Thrashing
 If a process does not have “enough” pages, the page-fault rate is very
high. This leads to:
 low CPU utilization.
 operating system thinks that it needs to increase the degree of
multiprogramming.
 another process added to the system.
 Thrashing ⇒ a process is busy swapping pages in and out.
Thrashing Diagram
 Why does paging work?
Locality model
 Process migrates from one locality to another.
 Localities may overlap.
 Why does thrashing occur?
 size of locality > total memory size
Demand Segmentation
 Used when insufficient hardware to implement demand paging.
 OS allocates memory in segments, which it keeps track of through
segment descriptors
 Segment descriptor contains a valid bit to indicate whether the segment is
currently in memory.
 If segment is in main memory, access continues
 If not in memory, segment fault.