Chapter 1: Introduction
• What Operating Systems Do
• Computer-System Organization
• Computer-System Architecture
• Operating-System Structure
• Operating-System Operations
• Process Management
• Memory Management
• Storage Management
• Protection and Security
• Distributed Systems
• Special-Purpose Systems
• Computing Environments
• Open-Source Operating Systems
Objectives
• To provide a grand tour of the major operating-system components
• To provide coverage of basic computer system organization
What is an Operating System?
• A program that acts as an intermediary between a user of a
computer and the computer hardware
• Operating system goals:
• Execute user programs and make solving user problems easier
• Make the computer system convenient to use
• Use the computer hardware in an efficient manner
Computer System Structure
• Computer system can be divided into four components:
• Hardware – provides basic computing resources
• CPU, memory, I/O devices
• Operating system
• Controls and coordinates use of hardware among various applications and users
• Application programs – define the ways in which the system resources
are used to solve the computing problems of the users
• Word processors, compilers, web browsers, database systems, video games
• Users
• People, machines, other computers
Four Components of a Computer System
What Operating Systems Do
• Depends on the point of view
• Users want convenience, ease of use
• Don’t care about resource utilization
• But a shared computer such as a mainframe or minicomputer must keep all users happy
• Users of dedicated systems such as workstations have dedicated resources but frequently use shared resources from servers
• Handheld computers are resource poor, optimized for usability and
battery life
• Some computers have little or no user interface, such as embedded
computers in devices and automobiles
Operating System Definition
• OS is a resource allocator
• Manages all resources
• Decides between conflicting requests for efficient and fair resource use
• OS is a control program
• Controls execution of programs to prevent errors and improper use of the
computer
Operating System Definition (Cont.)
• No universally accepted definition
• “Everything a vendor ships when you order an operating system” is a good approximation
• But varies wildly
• “The one program running at all times on the computer” is the
kernel. Everything else is either a system program (ships with the
operating system) or an application program.
Computer Startup
• bootstrap program is loaded at power-up or reboot
• Typically stored in ROM or EPROM, generally known as firmware
• Initializes all aspects of system
• Loads operating system kernel and starts execution
Computer System Organization
• Computer-system operation
• One or more CPUs, device controllers connect through common bus
providing access to shared memory
• Concurrent execution of CPUs and devices competing for memory cycles
Computer-System Operation
• I/O devices and the CPU can execute concurrently
• Each device controller is in charge of a particular device type
• Each device controller has a local buffer
• CPU moves data from/to main memory to/from local buffers
• I/O is from the device to local buffer of controller
• Device controller informs CPU that it has finished its operation by
causing an interrupt
Common Functions of Interrupts
• Interrupt transfers control to the interrupt service routine
generally, through the interrupt vector, which contains the
addresses of all the service routines
• Interrupt architecture must save the address of the interrupted
instruction
• Incoming interrupts are disabled while another interrupt is being
processed to prevent a lost interrupt
• A trap is a software-generated interrupt caused either by an error
or a user request
• An operating system is interrupt driven
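To make the interrupt vector concrete, here is a minimal C sketch (a user-space simulation with made-up handler names, not real kernel code): the vector is an array of service-routine addresses indexed by interrupt number, and dispatching an interrupt means calling through that array.

```c
#include <stdio.h>

#define NUM_INTERRUPTS 4

typedef void (*isr_t)(void);                 /* interrupt service routine */

static void timer_isr(void)    { puts("timer interrupt serviced"); }
static void keyboard_isr(void) { puts("keyboard interrupt serviced"); }
static void disk_isr(void)     { puts("disk interrupt serviced"); }
static void unknown_isr(void)  { puts("spurious interrupt ignored"); }

/* The interrupt vector: one service-routine address per interrupt number. */
static isr_t interrupt_vector[NUM_INTERRUPTS] = {
    timer_isr, keyboard_isr, disk_isr, unknown_isr
};

/* Simulated dispatch: real hardware would also save the address of the
   interrupted instruction before transferring control through the vector. */
static void dispatch(int irq)
{
    if (irq >= 0 && irq < NUM_INTERRUPTS)
        interrupt_vector[irq]();
}

int main(void)
{
    dispatch(0);   /* timer */
    dispatch(2);   /* disk  */
    return 0;
}
```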
Classes of Interrupts
Program Flow of Control Without Interrupts
Program Flow of Control With Interrupts, Short I/O Wait
Program Flow of Control With Interrupts; Long I/O Wait
Interrupt Handler
• Program to service a particular I/O device
• Generally part of the operating system
Interrupts
• An interrupt suspends the normal sequence of execution
Interrupt Cycle
• Processor checks for interrupts after each instruction
• If no interrupt is pending, fetch the next instruction of the current program
• If an interrupt is pending, suspend execution of the current program and execute the interrupt-handler routine
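A hedged sketch of the interrupt cycle above: a fetch-execute loop that checks a pending-interrupt flag after each instruction. This is a toy user-space simulation with assumed names, not a processor implementation.

```c
#include <stdbool.h>
#include <stdio.h>

static bool interrupt_pending = false;

static void execute_instruction(int pc)
{
    printf("executing instruction %d\n", pc);
    if (pc == 2)                     /* pretend a device raises an interrupt here */
        interrupt_pending = true;
}

static void interrupt_handler(void)
{
    puts("  -> suspending program, running interrupt handler");
    interrupt_pending = false;       /* acknowledge and clear the interrupt */
}

int main(void)
{
    for (int pc = 0; pc < 5; pc++) { /* simplified instruction cycle */
        execute_instruction(pc);     /* fetch + execute current instruction */
        if (interrupt_pending)       /* interrupt stage: check for pending interrupts */
            interrupt_handler();     /* then resume the interrupted program */
    }
    return 0;
}
```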
Timing Diagram Based on Short I/O Wait
Simple Interrupt Processing
Changes in Memory and Registers for an Interrupt
Multiple Interrupts
• Two approaches when an interrupt arrives while another is being processed:
• Disable interrupts while an interrupt is being processed (interrupts are then handled strictly in sequence)
• Define priorities for interrupts (a higher-priority interrupt may preempt a lower-priority handler)
Interrupt Handling
• The operating system preserves the state of the CPU by storing
registers and the program counter
• Determines which type of interrupt has occurred:
• polling
• vectored interrupt system
• Separate segments of code determine what action should be taken
for each type of interrupt
Interrupt Timeline
I/O Structure
• Synchronous I/O: after I/O starts, control returns to the user program only upon I/O completion
• Wait instruction idles the CPU until the next interrupt
• Wait loop (contention for memory access)
• At most one I/O request is outstanding at a time, no simultaneous I/O processing
• Asynchronous I/O: after I/O starts, control returns to the user program without waiting for I/O completion
• System call – request to the operating system to allow the user to wait for I/O completion
• Device-status table contains entry for each I/O device indicating its type,
address, and state
• Operating system indexes into I/O device table to determine device status and to
modify table entry to include interrupt
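One plausible way to model the device-status table is a small array of records, one per device; the field names and state values below are illustrative assumptions rather than any particular OS's layout.

```c
#include <stdio.h>

enum dev_state { DEV_IDLE, DEV_BUSY };

struct device_entry {
    const char    *type;      /* e.g. "disk", "printer"                        */
    unsigned       address;   /* device address                                */
    enum dev_state state;     /* idle or busy                                  */
    int            pending;   /* outstanding I/O requests queued on the device */
};

static struct device_entry device_table[] = {
    { "disk unit 0", 0x1F0, DEV_BUSY, 2 },
    { "printer",     0x378, DEV_IDLE, 0 },
    { "terminal",    0x3F8, DEV_BUSY, 1 },
};

int main(void)
{
    /* The OS indexes into the table to find a device's status. */
    for (unsigned i = 0; i < sizeof device_table / sizeof device_table[0]; i++)
        printf("%-12s addr=0x%03X state=%s pending=%d\n",
               device_table[i].type, device_table[i].address,
               device_table[i].state == DEV_BUSY ? "busy" : "idle",
               device_table[i].pending);
    return 0;
}
```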
Direct Memory Access Structure
• Used for high-speed I/O devices able to transmit information at
close to memory speeds
• Device controller transfers blocks of data from buffer storage
directly to main memory without CPU intervention
• Only one interrupt is generated per block, rather than the one
interrupt per byte
Storage Structure
• Main memory – the only large storage medium that the CPU can access directly
• Random access
• Typically volatile
• Secondary storage – extension of main memory that provides large
nonvolatile storage capacity
• Magnetic disks – rigid metal or glass platters covered with magnetic
recording material
• Disk surface is logically divided into tracks, which are subdivided into
sectors
• The disk controller determines the logical interaction between the device
and the computer
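Under an idealized geometry (a fixed number of sectors per track and tracks per surface; real drives are more complicated), a logical block number maps to a track and sector with simple integer arithmetic. The constants below are assumptions for illustration only.

```c
#include <stdio.h>

#define SECTORS_PER_TRACK  63u    /* assumed, for illustration only */
#define TRACKS_PER_SURFACE 1024u  /* assumed */

int main(void)
{
    unsigned block   = 100000;                      /* logical block number            */
    unsigned track   = block / SECTORS_PER_TRACK;   /* which track it falls on         */
    unsigned sector  = block % SECTORS_PER_TRACK;   /* offset within that track        */
    unsigned surface = track / TRACKS_PER_SURFACE;  /* which platter surface           */
    track %= TRACKS_PER_SURFACE;

    printf("block %u -> surface %u, track %u, sector %u\n",
           block, surface, track, sector);
    return 0;
}
```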
Storage Hierarchy
• Storage systems organized in hierarchy
• Speed
• Cost
• Volatility
• Caching – copying information into faster storage system; main
memory can be viewed as a cache for secondary storage
Storage-Device Hierarchy
Caching
• Important principle, performed at many levels in a computer (in
hardware, operating system, software)
• Information in use copied from slower to faster storage
temporarily
• Faster storage (cache) checked first to determine if information
is there
• If it is, information used directly from the cache (fast)
• If not, data copied to cache and used there
• Cache smaller than storage being cached
• Cache management important design problem
• Cache size and replacement policy
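The check-the-cache-first logic can be sketched as a tiny direct-mapped cache in front of a slower backing array; the sizes and the replacement rule (overwrite the slot the index maps to) are illustrative choices, not a description of a real memory hierarchy.

```c
#include <stdbool.h>
#include <stdio.h>

#define BACKING_SIZE 64
#define CACHE_SLOTS   8              /* cache is smaller than what it caches */

static int  backing[BACKING_SIZE];   /* slower storage                       */
static int  cache_value[CACHE_SLOTS];
static int  cache_tag[CACHE_SLOTS];  /* which backing index the slot holds   */
static bool cache_valid[CACHE_SLOTS];

static int read_value(int index)
{
    int slot = index % CACHE_SLOTS;                    /* direct-mapped placement      */
    if (cache_valid[slot] && cache_tag[slot] == index) {
        printf("hit  %d\n", index);                    /* fast path: use cached copy   */
        return cache_value[slot];
    }
    printf("miss %d\n", index);                        /* slow path: copy into cache   */
    cache_value[slot] = backing[index];                /* replacement: overwrite slot  */
    cache_tag[slot]   = index;
    cache_valid[slot] = true;
    return cache_value[slot];
}

int main(void)
{
    for (int i = 0; i < BACKING_SIZE; i++) backing[i] = i * 10;
    read_value(3);  read_value(3);     /* second access hits                            */
    read_value(11); read_value(3);     /* 11 maps to the same slot as 3, so 3 misses again */
    return 0;
}
```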
Computer-System Architecture
• Most systems use a single general-purpose processor (PDAs through
mainframes)
• Most systems have special-purpose processors as well
• Multiprocessor systems growing in use and importance
• Also known as parallel systems, tightly-coupled systems
• Advantages include:
1. Increased throughput
2. Economy of scale
3. Increased reliability – graceful degradation or fault tolerance
• Two types:
1. Asymmetric Multiprocessing
2. Symmetric Multiprocessing
How a Modern Computer Works
A von Neumann architecture
Systems with Multiple CPUs
• Collection of independent CPUs (or computers) that appears to the
users/applications as a single system
• Technology trends
• Powerful, yet cheap, microprocessors
• Advances in communications
• Physical limits on computing power of a single CPU
• Examples
• Network of workstations
• Servers with multiple processors
• Network of computers of a company
• Microcontrollers inside a car
Advantages
• Data sharing: allows many users to share a common database
• Resource sharing: expensive devices such as a color printer
• Parallelism and speed-up: multiprocessor system can have more
computing power than a mainframe
• Better price/performance ratio than mainframes
• Reliability: Fault-tolerance can be provided against crashes of individual
machines
• Flexibility: spread the workload over available machines
• Modular expandability: Computing power can be added in small
increments (upgrading CPUs like memory)
Design Issues
• Transparency: How to achieve a single-system image
• How to hide distribution of memory from applications?
• How to maintain consistency of data?
• Performance
• How to exploit parallelism?
• How to reduce communication delays?
• Scalability: As more components (say, processors) are added,
performance should not degrade
• Centralized schemes (e.g. broadcast messages) don’t work
• Security
Classification
• Multiprocessors
• Multiple CPUs with shared memory
• Memory access delays about 10 – 50 nsec
• Multicomputers
• Multiple computers, each with own CPU and memory, connected by a high-
speed interconnect
• Tightly coupled with delays in microseconds
• Distributed Systems
• Loosely coupled systems connected over a Local Area Network (LAN), or even long-haul networks such as the Internet
• Delays can be seconds, and unpredictable
Multiprocessors
Multiprocessor Systems
• Multiple CPUs with a shared memory
• From an application’s perspective, the difference from a single-processor system need not be visible
• Virtual memory where pages may reside in memories associated with other CPUs
• Applications can exploit parallelism for speed-up
• Topics to cover
1. Multiprocessor architectures (Section 8.1.1)
2. Cache coherence
3. OS organization (Section 8.1.2)
4. Synchronization (Section 8.1.3)
5. Scheduling (Section 8.1.4)
Multiprocessor Systems
• Continuous need for faster computers leads to three models:
• shared-memory multiprocessor
• message-passing multicomputer
• wide-area distributed system
Multiprocessors
• Definition: A computer system in which two or more CPUs share full access to a common RAM
Multiprocessor Hardware (1)
Bus-based multiprocessors
Multiprocessor Architecture
• UMA (Uniform Memory Access)
• Time to access each memory word is the same
• Bus-based UMA, or switched UMA (CPUs connected to memory modules through switches)
• NUMA (Non-uniform memory access)
• Memory distributed (partitioned among processors)
• Different access times for local and remote accesses
Bus-based UMA
• All CPUs and memory modules connected over a shared bus
• To reduce traffic, each CPU also has a cache
• Key design issue: how to maintain coherency of data that appears in
multiple places?
• Each CPU can also have a local memory module that is not shared with the others
• Compilers can be designed to exploit the memory structure
• Typically, such an architecture can support 16 or 32 CPUs as a common
bus is a bottleneck (memory access not parallelized)
Switched UMA
• Goal: To reduce traffic on bus, provide multiple connections between
CPUs and memory units so that many accesses can be concurrent
• Crossbar Switch: Grid with horizontal lines from CPUs and vertical lines
from memory modules
• Crossbar at (i,j) can connect i-th CPU with j-th memory module
• As long as different processors are accessing different modules, all
requests can be in parallel
• Non-blocking: waiting caused only by contention for memory, but not for
bus
• Disadvantage: Too many connections (quadratic)
• Many other networks: omega, counting, …
Crossbar Switch
Cache Coherence
• Many processors can have locally cached copies of the same object
• Level of granularity can be an object or a block of 64 bytes
• We want to maximize concurrency
• If many processors just want to read, then each one can have a local copy, and
reads won’t generate any bus traffic
• We want to ensure coherence
• If a processor writes a value, then all subsequent reads by other processors
should return the latest value
• Coherence refers to a logically consistent global ordering of reads and
writes of multiple processors
• Modern multiprocessors support intricate schemes
Consistency and replication
• Need to replicate (cache) data to improve performance
• Key questions: how updates are propagated between cached replicas, and how to keep the replicas consistent (much more complicated than on a single sequential processor)
• When a processor changes the value of its copy of a variable (see the sketch below),
• the other copies are invalidated (invalidate protocol), or
• the other copies are updated (update protocol).
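A write-invalidate protocol can be sketched with a valid bit per cached copy: a write updates the writer's copy and marks every other copy invalid, so a later read by another processor refetches the latest value. This is a toy model of the idea only; it omits bus snooping and the states of real protocols such as MESI.

```c
#include <stdbool.h>
#include <stdio.h>

#define NUM_CPUS 4

static int  memory_value = 42;          /* the shared variable in main memory */
static int  cached[NUM_CPUS];           /* each CPU's cached copy             */
static bool valid[NUM_CPUS];

static int cpu_read(int cpu)
{
    if (!valid[cpu]) {                  /* stale or missing copy: refetch     */
        cached[cpu] = memory_value;
        valid[cpu]  = true;
    }
    return cached[cpu];
}

static void cpu_write(int cpu, int v)
{
    memory_value = v;                   /* write through to memory            */
    cached[cpu]  = v;
    valid[cpu]   = true;
    for (int other = 0; other < NUM_CPUS; other++)
        if (other != cpu)
            valid[other] = false;       /* invalidate protocol: kill other copies */
}

int main(void)
{
    printf("CPU1 reads %d\n", cpu_read(1));
    cpu_write(0, 99);                          /* CPU0 writes; CPU1's copy is invalidated */
    printf("CPU1 reads %d\n", cpu_read(1));    /* refetches and sees 99                   */
    return 0;
}
```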
Symmetric Multiprocessing Architecture
A Dual-Core Design
Multiprocessor OS
• How should OS software be organized?
• OS should handle allocation of processes to processors. Challenge due to
shared data structures such as process tables and ready queues
• OS should handle disk I/O for the system as a whole
• Two standard architectures
• Master-slave
• Symmetric multiprocessors (SMP)
Master-Slave Organization
• Master CPU runs kernel, all others run user processes
• Only one copy of all OS data structures
• All system calls handled by master CPU
• Problem: Master CPU can be a bottleneck
Symmetric Multiprocessing (SMP)
• Only one kernel space, but OS can run on any CPU
• Whenever a user process makes a system call, the same CPU runs OS to
process it
• Key issue: Multiple system calls can run in parallel on different CPUs
• Need locks on all OS data structures to ensure mutual exclusion for critical
updates
• Design issue: OS routines should be independent enough that the granularity of locking gives good performance
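The need for locks on shared OS data structures can be illustrated with a ready queue protected by a mutex. This is a user-space pthreads sketch of the locking pattern, not actual SMP kernel code, and the queue layout is an assumption.

```c
#include <pthread.h>
#include <stdio.h>

#define MAX_READY 16

static int ready_queue[MAX_READY];      /* shared OS data structure        */
static int ready_count = 0;
static pthread_mutex_t ready_lock = PTHREAD_MUTEX_INITIALIZER;

static void add_to_ready_queue(int pid)
{
    pthread_mutex_lock(&ready_lock);    /* mutual exclusion for the update */
    if (ready_count < MAX_READY)
        ready_queue[ready_count++] = pid;
    pthread_mutex_unlock(&ready_lock);
}

static void *cpu_thread(void *arg)
{
    int base = *(int *)arg;
    for (int i = 0; i < 4; i++)         /* each "CPU" makes system calls that   */
        add_to_ready_queue(base + i);   /* update the shared queue in parallel  */
    return NULL;
}

int main(void)
{
    pthread_t a, b;
    int base_a = 100, base_b = 200;
    pthread_create(&a, NULL, cpu_thread, &base_a);
    pthread_create(&b, NULL, cpu_thread, &base_b);
    pthread_join(a, NULL);
    pthread_join(b, NULL);
    printf("%d processes in the ready queue\n", ready_count);   /* always 8 */
    return 0;
}
```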
Multiprocessor OS Types (1)
Each CPU has its own operating system
Multiprocessor OS Types (2)
Master-Slave multiprocessors
Multiprocessor OS Types (3)
• Symmetric Multiprocessors
• SMP multiprocessor model
Clustered Systems
• Like multiprocessor systems, but multiple systems working together
• Usually sharing storage via a storage-area network (SAN)
• Provides a high-availability service which survives failures
• Asymmetric clustering has one machine in hot-standby mode
• Symmetric clustering has multiple nodes running applications, monitoring each other
• Some clusters are for high-performance computing (HPC)
• Applications must be written to use parallelization
Clustered Systems
Operating System Structure
• Multiprogramming needed for efficiency
• Single user cannot keep CPU and I/O devices busy at all times
• Multiprogramming organizes jobs (code and data) so CPU always has one to execute
• A subset of total jobs in system is kept in memory
• One job selected and run via job scheduling
• When it has to wait (for I/O for example), OS switches to another job
• Timesharing (multitasking) is logical extension in which CPU switches jobs so
frequently that users can interact with each job while it is running, creating interactive
computing
• Response time should be < 1 second
• Each user has at least one program executing in memory, called a process
• If several jobs are ready to run at the same time, the OS must choose among them: CPU scheduling
• If processes don’t fit in memory, swapping moves them in and out to run
• Virtual memory allows execution of processes not completely in memory
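Time sharing, in which the CPU switches among in-memory jobs after a short quantum, can be sketched as a round-robin loop; the job lengths and quantum below are made up for the simulation, and no real scheduler is this simple.

```c
#include <stdio.h>

#define NUM_JOBS 3
#define QUANTUM  2          /* time units each job runs before the OS switches */

int main(void)
{
    int remaining[NUM_JOBS] = { 5, 3, 4 };   /* assumed remaining CPU time per job */
    int left = NUM_JOBS;

    while (left > 0) {
        for (int j = 0; j < NUM_JOBS; j++) {
            if (remaining[j] == 0)
                continue;                     /* job already finished */
            int run = remaining[j] < QUANTUM ? remaining[j] : QUANTUM;
            remaining[j] -= run;
            printf("job %d runs %d unit(s), %d left\n", j, run, remaining[j]);
            if (remaining[j] == 0) {
                printf("job %d done\n", j);
                left--;
            }
        }
    }
    return 0;
}
```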
Memory Layout for Multiprogrammed System
Operating-System Operations
• Interrupt driven by hardware
• Software error or request creates exception or trap
• Division by zero, request for operating system service
• Other process problems include infinite loop, processes modifying
each other or the operating system
• Dual-mode operation allows OS to protect itself and other system
components
• User mode and kernel mode
• Mode bit provided by hardware
• Provides ability to distinguish when system is running user code or kernel code
• Some instructions designated as privileged, only executable in kernel mode
• System call changes mode to kernel, return from call resets it to user
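Dual-mode operation can be modeled with a single mode bit: a privileged operation is refused in user mode, a system call switches to kernel mode for the duration of the service routine, and the return resets the bit. The sketch below is a user-space model for illustration only; real mode switching is enforced by hardware.

```c
#include <stdio.h>

enum mode { USER_MODE, KERNEL_MODE };
static enum mode mode_bit = USER_MODE;        /* hardware-provided mode bit (modeled) */

static void privileged_io(void)
{
    if (mode_bit != KERNEL_MODE) {            /* privileged instruction check          */
        puts("trap: privileged instruction in user mode");
        return;
    }
    puts("performing I/O on behalf of the user");
}

static void system_call(void)
{
    mode_bit = KERNEL_MODE;                   /* system call switches to kernel mode   */
    privileged_io();
    mode_bit = USER_MODE;                     /* return from call resets it to user    */
}

int main(void)
{
    privileged_io();                          /* refused: still in user mode           */
    system_call();                            /* allowed: runs in kernel mode          */
    return 0;
}
```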
Transition from User to Kernel Mode
• Timer to prevent infinite loop / process hogging resources
• Set interrupt after specific period
• Operating system decrements counter
• When the counter reaches zero, generate an interrupt
• Set up before scheduling process to regain control or terminate program
that exceeds allotted time
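The timer mechanism reduces to a counter set before the process is dispatched and decremented on every clock tick; when it reaches zero, an interrupt returns control to the operating system. The loop below simulates the ticks, and the initial counter value is arbitrary.

```c
#include <stdio.h>

int main(void)
{
    int counter = 5;                  /* set by the OS before dispatching the process */

    for (int tick = 1; ; tick++) {    /* each iteration models one clock tick         */
        printf("tick %d: user process runs, counter=%d\n", tick, counter);
        if (--counter == 0) {
            puts("timer interrupt: OS regains control");
            break;                    /* OS may reschedule or terminate the process   */
        }
    }
    return 0;
}
```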
Process Management
• A process is a program in execution. It is a unit of work within the
system. Program is a passive entity, process is an active entity.
• Process needs resources to accomplish its task
• CPU, memory, I/O, files
• Initialization data
• Process termination requires reclaim of any reusable resources
• Single-threaded process has one program counter specifying
location of next instruction to execute
• Process executes instructions sequentially, one at a time, until
completion
• Multi-threaded process has one program counter per thread
• Typically system has many processes, some user, some operating
system running concurrently on one or more CPUs
• Concurrency by multiplexing the CPUs among the processes / threads
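Inside the OS, each process is typically represented by a per-process record, often called a process control block; the fields in the sketch below are a generic assumed subset, not any specific kernel's structure.

```c
#include <stdio.h>

enum proc_state { READY, RUNNING, WAITING };

struct pcb {                          /* process control block (illustrative fields) */
    int             pid;
    enum proc_state state;
    unsigned long   program_counter;  /* location of next instruction to execute */
    unsigned long   registers[8];     /* saved CPU context                       */
    int             open_files[4];    /* resources held by the process           */
};

int main(void)
{
    struct pcb p = { .pid = 42, .state = READY, .program_counter = 0x400080 };
    printf("process %d, state=%d, next instruction at 0x%lx\n",
           p.pid, p.state, p.program_counter);
    return 0;
}
```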
Process Management Activities
• Creating and deleting both user and system processes
• Suspending and resuming processes
• Providing mechanisms for process synchronization
• Providing mechanisms for process communication
• Providing mechanisms for deadlock handling
The operating system is responsible for the following activities in
connection with process management:
Memory Management
• All data in memory before and after processing
• All instructions in memory in order to execute
• Memory management determines what is in memory when
• Optimizing CPU utilization and computer response to users
• Memory management activities
• Keeping track of which parts of memory are currently being used and by
whom
• Deciding which processes (or parts thereof) and data to move into and out
of memory
• Allocating and deallocating memory space as needed
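One common way to keep track of which parts of memory are in use is a bitmap over fixed-size frames: allocation scans for a free frame and marks it used, deallocation clears the mark. The frame count here is an assumption chosen to keep the sketch small.

```c
#include <stdbool.h>
#include <stdio.h>

#define NUM_FRAMES 16                         /* assumed: 16 frames of physical memory */

static bool frame_used[NUM_FRAMES];           /* the "bitmap": true = allocated */

static int alloc_frame(void)
{
    for (int f = 0; f < NUM_FRAMES; f++)
        if (!frame_used[f]) {                 /* first free frame found */
            frame_used[f] = true;
            return f;
        }
    return -1;                                /* out of memory */
}

static void free_frame(int f)
{
    if (f >= 0 && f < NUM_FRAMES)
        frame_used[f] = false;                /* deallocate: mark frame free again */
}

int main(void)
{
    int a = alloc_frame();
    int b = alloc_frame();
    printf("allocated frames %d and %d\n", a, b);
    free_frame(a);
    printf("after freeing, next allocation gets frame %d\n", alloc_frame());
    return 0;
}
```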
Storage Management
• OS provides uniform, logical view of information storage
• Abstracts physical properties to logical storage unit - file
• Each medium is controlled by a device (e.g., disk drive, tape drive)
• Varying properties include access speed, capacity, data-transfer rate, access method
(sequential or random)
• File-System management
• Files usually organized into directories
• Access control on most systems to determine who can access what
• OS activities include
• Creating and deleting files and directories
• Primitives to manipulate files and directories
• Mapping files onto secondary storage
• Backup files onto stable (non-volatile) storage media
Mass-Storage Management
• Usually disks used to store data that does not fit in main memory
or data that must be kept for a “long” period of time
• Proper management is of central importance
• Entire speed of computer operation hinges on disk subsystem and
its algorithms
• OS activities
• Free-space management
• Storage allocation
• Disk scheduling
• Some storage need not be fast
• Tertiary storage includes optical storage, magnetic tape
• Still must be managed – by OS or applications
• Varies between WORM (write-once, read-many-times) and RW (read-
write)
Performance of Various Levels of Storage
• Movement between levels of storage hierarchy can be explicit or
implicit
Migration of Integer A from Disk to Register
• Multitasking environments must be careful to use most recent value,
no matter where it is stored in the storage hierarchy
• Multiprocessor environment must provide cache coherency in
hardware such that all CPUs have the most recent value in their
cache
• Distributed environment situation even more complex
• Several copies of a datum can exist
• Various solutions covered in Chapter 17
I/O Subsystem
• One purpose of OS is to hide peculiarities of hardware devices from
the user
• I/O subsystem responsible for
• Memory management of I/O including buffering (storing data temporarily
while it is being transferred), caching (storing parts of data in faster storage
for performance), spooling (the overlapping of output of one job with input
of other jobs)
• General device-driver interface
• Drivers for specific hardware devices
Protection and Security
• Protection – any mechanism for controlling access of processes or
users to resources defined by the OS
• Security – defense of the system against internal and external
attacks
• Huge range, including denial-of-service, worms, viruses, identity theft,
theft of service
• Systems generally first distinguish among users, to determine who
can do what
• User identities (user IDs, security IDs) include name and associated
number, one per user
• User ID then associated with all files, processes of that user to determine
access control
• Group identifier (group ID) allows set of users to be defined and controls
managed, then also associated with each process, file
• Privilege escalation allows user to change to effective ID with more rights
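The user-ID and group-ID machinery can be sketched as an access check against a file's owner, group, and permission bits, in the spirit of UNIX-style protection; the mode-bit layout below is a simplification for illustration, not a specification.

```c
#include <stdbool.h>
#include <stdio.h>

struct file_meta {
    int      owner_uid;
    int      owner_gid;
    unsigned mode;        /* bits: 0400 owner-read, 0200 owner-write,
                                   0040 group-read, 0020 group-write,
                                   0004 other-read, 0002 other-write */
};

static bool may_read(const struct file_meta *f, int uid, int gid)
{
    if (uid == f->owner_uid) return f->mode & 0400;   /* owner permissions apply  */
    if (gid == f->owner_gid) return f->mode & 0040;   /* then group permissions   */
    return f->mode & 0004;                            /* otherwise "other" bits   */
}

int main(void)
{
    struct file_meta payroll = { .owner_uid = 1000, .owner_gid = 50, .mode = 0640 };

    printf("owner (uid 1000) read: %s\n", may_read(&payroll, 1000, 50) ? "allowed" : "denied");
    printf("group member (gid 50) read: %s\n", may_read(&payroll, 2000, 50) ? "allowed" : "denied");
    printf("other user read: %s\n", may_read(&payroll, 3000, 99) ? "allowed" : "denied");
    return 0;
}
```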
Distributed Computing
• Collection of separate, possibly heterogeneous, systems networked
together
• Network is a communications path
• Local Area Network (LAN)
• Wide Area Network (WAN)
• Metropolitan Area Network (MAN)
• Network Operating System provides features between systems across
network
• Communication scheme allows systems to exchange messages
• Illusion of a single system
Special-Purpose Systems
• Real-time embedded systems most prevalent form of computers
• Vary considerably; special-purpose, limited-purpose OS, real-time OS
• Multimedia systems
• Streams of data must be delivered according to time restrictions
• Handheld systems
• PDAs, smart phones, limited CPU, memory, power
• Reduced feature set OS, limited I/O
Computing Environments
• Traditional computer
• Blurring over time
• Office environment
• PCs connected to a network, terminals attached to mainframe or
minicomputers providing batch and timesharing
• Now portals allowing networked and remote systems access to
same resources
• Home networks
• Used to be single system, then modems
• Now firewalled, networked
Computing Environments (Cont.)
• Client-Server Computing
• Dumb terminals supplanted by smart PCs
• Many systems now servers, responding to requests generated by clients
• Compute-server provides an interface to client to request services (e.g., database)
• File-server provides interface for clients to store and retrieve files
Peer-to-Peer Computing
• Another model of distributed system
• P2P does not distinguish clients and servers
• Instead all nodes are considered peers
• May each act as client, server or both
• Node must join P2P network
• Registers its service with a central lookup service on the network, or
• Broadcasts a request for service and responds to requests for service via a discovery protocol
• Examples include Napster and Gnutella
Web-Based Computing
• Web has become ubiquitous
• PCs most prevalent devices
• More devices becoming networked to allow web access
• New category of devices to manage web traffic among similar
servers: load balancers
• Client-side operating systems like Windows 95 have evolved into systems such as Linux and Windows XP, which can act as both clients and servers
Open-Source Operating Systems
• Operating systems made available in source-code format rather
than just binary closed-source
• Counter to the copy protection and Digital Rights Management
(DRM) movement
• Started by the Free Software Foundation (FSF), which promotes the “copyleft” GNU General Public License (GPL)
• Examples include GNU/Linux and BSD UNIX (including core of Mac
OS X), and many more
End of Chapter 1