MASS STORAGE STRUCTURE
MASS-STORAGE SYSTEMS
 Overview of Mass Storage Structure
 Disk Structure
 Disk Scheduling
 Disk Management
 Swap-Space Management
 RAID Structure
 Stable-Storage Implementation
OBJECTIVES
 Describe the physical structure of secondary and
tertiary storage devices and the resulting effects
on the uses of the devices
 Explain the performance characteristics of mass-
storage devices
 Discuss operating-system services provided for
mass storage, including RAID and HSM
OVERVIEW OF MASS STORAGE STRUCTURE
 Magnetic disks provide bulk of secondary storage of modern
computers
 Drives rotate at 60 to 200 times per second
 Transfer rate is rate at which data flow between drive and
computer
 Positioning time (random-access time) is time to move disk
arm to desired cylinder (seek time) plus time for desired sector
to rotate under the disk head (rotational latency); a worked
example follows this list
 Head crash results from disk head making contact with the
disk surface
 That’s bad
 Disks can be removable
 Drive attached to computer via I/O bus
 Busses vary, including EIDE, ATA, SATA, USB, Fibre
Channel, SCSI
 Host controller in computer uses bus to talk to disk
controller built into drive or storage array
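
As a worked example of rotational latency (on average, half a rotation), consider a hypothetical drive spinning at 7200 RPM, which falls within the 60 to 200 rotations-per-second range quoted above:

    # Average rotational latency = time for half a rotation.
    # 7200 RPM is a hypothetical example, not a figure from the slides.
    rpm = 7200
    rotations_per_second = rpm / 60                  # 120
    avg_latency_ms = 0.5 / rotations_per_second * 1000
    print(f"average rotational latency: {avg_latency_ms:.2f} ms")  # ~4.17 ms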
 Magnetic tape
 Was early secondary-storage medium
 Relatively permanent and holds large quantities of data
 Access time slow
 Random access ~1000 times slower than disk
 Mainly used for backup, storage of infrequently-used
data, transfer medium between systems
 Kept in spool and wound or rewound past read-write head
 Once data under head, transfer rates comparable to disk
 20-200GB typical storage
 Common technologies are 4mm, 8mm, 19mm, LTO-2 and
SDLT
DISK STRUCTURE
 Disk drives are addressed as large 1-
dimensional arrays of logical blocks, where
the logical block is the smallest unit of
transfer
 The 1-dimensional array of logical blocks is
mapped into the sectors of the disk
sequentially
 Sector 0 is the first sector of the first track on
the outermost cylinder
 Mapping proceeds in order through that
track, then the rest of the tracks in that
cylinder, and then through the rest of the
cylinders from outermost to innermost
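
A minimal sketch of this mapping in Python, assuming a simplified drive with a constant number of sectors per track and tracks per cylinder (real drives use zoned recording, so the geometry constants below are illustrative only):

    # Hypothetical geometry; real drives vary sectors per track by zone.
    SECTORS_PER_TRACK = 63
    TRACKS_PER_CYLINDER = 16   # i.e., number of heads/surfaces

    def block_to_chs(block):
        """Map a logical block number to (cylinder, track, sector),
        filling each track, then each cylinder, outermost to innermost."""
        sectors_per_cylinder = SECTORS_PER_TRACK * TRACKS_PER_CYLINDER
        cylinder, within = divmod(block, sectors_per_cylinder)
        track, sector = divmod(within, SECTORS_PER_TRACK)
        return cylinder, track, sector

    print(block_to_chs(0))      # (0, 0, 0): first sector, outermost cylinder
    print(block_to_chs(1008))   # (1, 0, 0): first sector of next cylinder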
DISK SCHEDULING
 The operating system is responsible for using hardware
efficiently — for the disk drives, this means achieving fast
access time and high disk bandwidth
 Access time has two major components
 Seek time is the time for the disk arm to move the
heads to the cylinder containing the desired sector
 Rotational latency is the additional time waiting
for the disk to rotate the desired sector to the disk
head
 Minimize seek time
 Seek time ∝ seek distance (roughly proportional)
 Disk bandwidth is the total number of bytes
transferred, divided by the total time between the first
request for service and the completion of the last
transfer
DISK SCHEDULING
 Several algorithms exist to schedule the servicing
of disk I/O requests
 We illustrate them with a request queue on a disk of 200
cylinders (0-199):
98, 183, 37, 122, 14, 124, 65, 67
Head pointer at cylinder 53
FCFS
Illustration shows total head movement of 640 cylinders
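
A minimal FCFS sketch in Python; the queue and head position are the example values above, and the printed total matches the 640 cylinders quoted on the slide:

    def fcfs(queue, head):
        """Service requests strictly in arrival order."""
        total = 0
        for cylinder in queue:
            total += abs(cylinder - head)
            head = cylinder
        return total

    print(fcfs([98, 183, 37, 122, 14, 124, 65, 67], 53))  # -> 640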
SSTF
 Selects the request with the minimum seek
time from the current head position
 SSTF scheduling is a form of SJF
scheduling; may cause starvation of some
requests
 Illustration shows total head movement of
236 cylinders
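
A minimal SSTF sketch, greedily choosing the closest pending request; it reproduces the 236-cylinder total above:

    def sstf(queue, head):
        """Repeatedly service the pending request nearest the head."""
        pending, total = list(queue), 0
        while pending:
            nearest = min(pending, key=lambda c: abs(c - head))
            total += abs(nearest - head)
            head = nearest
            pending.remove(nearest)
        return total

    print(sstf([98, 183, 37, 122, 14, 124, 65, 67], 53))  # -> 236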
SCAN
 The disk arm starts at one end of the disk,
and moves toward the other end, servicing
requests until it gets to the other end of the
disk, where the head movement is reversed
and servicing continues.
 The SCAN algorithm is sometimes called the
elevator algorithm
 Illustration shows total head movement of 236 cylinders
(53 down to cylinder 0, then up to 183); the often-quoted
208 is the total if the arm reverses at the last request
rather than the disk edge, i.e. the LOOK variant
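
A minimal SCAN sketch, assuming the head is initially moving toward cylinder 0 on the 0-199 disk of the running example:

    def scan(queue, head):
        """Sweep to cylinder 0, then reverse and sweep up through the
        highest pending request, servicing everything passed over."""
        total = head                        # head -> 0 covers all lower requests
        above = [c for c in queue if c > head]
        if above:
            total += max(above)             # 0 -> highest covers the upper ones
        return total

    print(scan([98, 183, 37, 122, 14, 124, 65, 67], 53))  # -> 236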
C-SCAN
 Provides a more uniform wait time than SCAN
 The head moves from one end of the disk to
the other, servicing requests as it goes
 When it reaches the other end, however, it
immediately returns to the beginning of the
disk, without servicing any requests on the
return trip
 Treats the cylinders as a circular list that
wraps around from the last cylinder to the first
one
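
A minimal C-SCAN sketch, assuming the head moves toward the high end of a 0-199 disk; conventions differ on whether the non-servicing return trip counts as head movement, so it is reported separately:

    def c_scan(queue, head, disk_max=199):
        """Sweep up to the disk edge, jump back to 0, then sweep up
        to the last remaining request."""
        below = [c for c in queue if c < head]   # serviced after the wrap
        servicing = disk_max - head              # up to the high edge
        if below:
            servicing += max(below)              # 0 up to the last request
        return_trip = disk_max                   # jump from edge back to 0
        return servicing, return_trip

    print(c_scan([98, 183, 37, 122, 14, 124, 65, 67], 53))  # -> (183, 199)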
C-LOOK
 Version of C-SCAN
 Arm only goes as far as the last request in each
direction, then reverses direction immediately,
without first going all the way to the end of the disk
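
A minimal C-LOOK sketch under the same assumptions as the C-SCAN sketch above (initially moving toward higher cylinders), again reporting the non-servicing jump separately:

    def c_look(queue, head):
        """Sweep up to the last request, jump back to the lowest
        pending request, then sweep up to the last remaining one."""
        above = sorted(c for c in queue if c >= head)
        below = sorted(c for c in queue if c < head)
        servicing = (above[-1] - head) if above else 0
        jump = 0
        if below:
            servicing += below[-1] - below[0]
            if above:
                jump = above[-1] - below[0]      # reversal, no servicing
        return servicing, jump

    print(c_look([98, 183, 37, 122, 14, 124, 65, 67], 53))  # -> (153, 169)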
SELECTING A DISK-SCHEDULING ALGORITHM
 SSTF is common and has a natural appeal
 SCAN and C-SCAN perform better for systems that
place a heavy load on the disk
 Performance depends on the number and types of
requests
 Requests for disk service can be influenced by the file-
allocation method
 The disk-scheduling algorithm should be written as a
separate module of the operating system, allowing it to
be replaced with a different algorithm if necessary
 Either SSTF or LOOK is a reasonable choice for the
default algorithm
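
Running the sketches from the previous slides side by side on the example queue makes the comparison concrete:

    queue, head = [98, 183, 37, 122, 14, 124, 65, 67], 53
    print("FCFS:  ", fcfs(queue, head))      # 640 cylinders
    print("SSTF:  ", sstf(queue, head))      # 236
    print("SCAN:  ", scan(queue, head))      # 236
    print("C-SCAN:", c_scan(queue, head))    # (183, 199): servicing, return trip
    print("C-LOOK:", c_look(queue, head))    # (153, 169)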
DISK MANAGEMENT
 Low-level formatting, or physical formatting — Dividing a
disk into sectors that the disk controller can read and write
 To use a disk to hold files, the operating system still needs to
record its own data structures on the disk
 Partition the disk into one or more groups of cylinders
 Logical formatting or “making a file system”
 To increase efficiency most file systems group blocks into
clusters
 Disk I/O done in blocks
 File I/O done in clusters
 Boot block initializes system
 A tiny bootstrap loader is stored in ROM
 The loader brings the full bootstrap program in from
the boot block
 Methods such as sector sparing used to handle bad blocks
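
A minimal sketch of the sector-sparing idea: the controller keeps a remap table from bad sectors to spares and consults it on every access. The names and spare-pool layout here are hypothetical, not a real controller interface:

    # Hypothetical pool of spare sectors reserved at the end of the disk.
    spare_pool = iter(range(1_000_000, 1_000_100))
    remap = {}                      # bad logical sector -> spare sector

    def mark_bad(sector):
        """Retire a bad sector by assigning it a spare."""
        remap[sector] = next(spare_pool)

    def resolve(sector):
        """Translate a logical sector, transparently to the OS."""
        return remap.get(sector, sector)

    mark_bad(1234)
    print(resolve(1234))   # -> 1000000 (redirected to a spare)
    print(resolve(1235))   # -> 1235    (unaffected)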
BOOTING FROM A DISK IN WINDOWS 2000
SWAP-SPACE MANAGEMENT
 Swap-space — Virtual memory uses disk space as an
extension of main memory
 Swap-space can be carved out of the normal file
system, or, more commonly, it can be in a separate disk
partition
 Swap-space management
 4.3BSD allocates swap space when process starts;
holds text segment (the program) and data segment
 Kernel uses swap maps to track swap-space use (a
first-fit sketch follows this list)
 Solaris 2 allocates swap space only when a page is
forced out of physical memory, not when the virtual
memory page is first created
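
A minimal sketch of a swap map kept as a list of free runs, in the spirit of the 4.3BSD swap maps mentioned above; the layout and sizes are illustrative only:

    free_runs = [(0, 1024)]   # (start block, length); hypothetical swap area

    def swap_alloc(nblocks):
        """First-fit allocation from the free-run list."""
        for i, (start, length) in enumerate(free_runs):
            if length >= nblocks:
                if length == nblocks:
                    free_runs.pop(i)
                else:
                    free_runs[i] = (start + nblocks, length - nblocks)
                return start
        raise MemoryError("swap space exhausted")

    print(swap_alloc(64))   # -> 0; free_runs is now [(64, 960)]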
DATA STRUCTURES FOR SWAPPING ON LINUX SYSTEMS
RAID STRUCTURE
 RAID – multiple disk drives provide
reliability via redundancy
 Increases the mean time to failure
 Frequently combined with NVRAM to
improve write performance
 RAID is arranged into six different levels
RAID
 Several improvements in disk-use techniques involve the use of
multiple disks working cooperatively
 Disk striping uses a group of disks as one storage unit
 RAID schemes improve performance and improve the reliability
of the storage system by storing redundant data
 Mirroring or shadowing (RAID 1) keeps duplicate of each
disk
 Striped mirrors (RAID 1+0) or mirrored stripes (RAID 0+1)
provide high performance and high reliability
 Block-interleaved parity (RAID 4, 5, 6) uses much less
redundancy; a parity sketch follows this list
 RAID within a storage array can still fail if the array fails, so
automatic replication of the data between arrays is common
 Frequently, a small number of hot-spare disks are left
unallocated, automatically replacing a failed disk and having
data rebuilt onto them
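
A minimal sketch of the block-interleaved parity idea behind RAID 4/5: the parity block is the bytewise XOR of the data blocks, so any single lost block can be rebuilt from the survivors plus parity:

    from functools import reduce

    def parity(blocks):
        """Bytewise XOR across equal-sized blocks."""
        return bytes(reduce(lambda a, b: a ^ b, col) for col in zip(*blocks))

    data = [b"\x01\x02", b"\x10\x20", b"\x0f\x0f"]   # three data blocks
    p = parity(data)

    # Lose block 1, then rebuild it from the other blocks plus parity.
    rebuilt = parity([data[0], data[2], p])
    assert rebuilt == data[1]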
RAID LEVELS
RAID (0 + 1) AND (1 + 0)
EXTENSIONS
 RAID alone does not prevent or detect data corruption
or other errors, just disk failures
 Solaris ZFS adds checksums of all data and metadata (see
the sketch after this list)
 Checksums kept with pointer to object, to detect if
object is the right one and whether it changed
 Can detect and correct data and metadata corruption
 ZFS also dispenses with separate volumes and partitions
 Disks are allocated in pools
 Filesystems within a pool share that pool, using and
releasing space much as processes use “malloc” and “free”
memory allocate / release calls
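
A minimal sketch of the keep-the-checksum-with-the-pointer idea; the structures here are illustrative, not the real ZFS on-disk format:

    import zlib

    storage = {}                    # stand-in block store: address -> bytes

    def write_block(addr, data):
        storage[addr] = data
        # The checksum lives in the *pointer*, not with the block itself.
        return {"addr": addr, "checksum": zlib.crc32(data)}

    def read_block(ptr):
        data = storage[ptr["addr"]]
        if zlib.crc32(data) != ptr["checksum"]:
            raise IOError("pointer-side checksum caught a wrong or corrupt block")
        return data

    ptr = write_block(7, b"metadata")
    storage[7] = b"metadatA"        # simulate silent corruption
    try:
        read_block(ptr)
    except IOError as e:
        print("detected:", e)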
ZFS CHECKSUMS ALL METADATA AND DATA
TRADITIONAL AND POOLED STORAGE
STABLE-STORAGE IMPLEMENTATION
 Write-ahead log scheme requires stable
storage
 To implement stable storage:
 Replicate information on more than one
nonvolatile storage device with
independent failure modes
 Update information in a controlled
manner to ensure that we can recover the
stable data after any failure during data
transfer or recovery
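
A minimal sketch of a stable write across two replicas with independent failure modes: the copies are updated strictly one after the other, so at any instant at least one copy is intact. Real implementations also read each write back to verify, and recovery repairs whichever copy disagrees. The paths are hypothetical:

    import os

    def stable_write(path_a, path_b, data):
        """Ordered two-copy write; never update both replicas at once."""
        for path in (path_a, path_b):
            with open(path, "wb") as f:
                f.write(data)
                f.flush()
                os.fsync(f.fileno())   # force this copy to the medium first

    stable_write("/tmp/log.a", "/tmp/log.b", b"commit record")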