CPU Virtualization and Scheduling
Hwanju Kim
1
CPU VIRTUALIZATION
2
De-privileging OS
• De-privileging OS
• x86 protection ring (before HW-assisted virtualization)
• Ring 0 – VMM
• Ring 1 – Guest OS
• Ring 3 – Application
[Figure: x86 protection rings. Native: OS in ring 0, applications in ring 3. Virtualized: VMM in ring 0, guest OS in ring 1, applications in ring 3.]
3/35
De-privileging OS
• Trap-and-emulate
• “Trap and emulate (virtualize)” the privileged and sensitive instructions issued by the de-privileged guest OS
[Figure: the same ring diagram; privileged/sensitive instructions from the de-privileged guest OS trap into the ring-0 VMM, which emulates them]
4/35
Sensitive Instructions
• Class of instructions
• Normal instructions
• Not trapped by privilege layer
• Privileged instructions
• Automatically trapped by privilege layer
• Sensitive instructions
• Must be emulated (virtualized) for fidelity and safety
• e.g., Processor mode changes, HW accesses, …
• “Virtualizable architecture”
• Sensitive instructions ⊆ Privileged instructions (decided by the architecture)
• Trap-and-emulate every sensitive instruction (decided by the VMM)
5/35
Virtualization-Unfriendly x86
• x86 was not virtualizable before 2005
• “Not all sensitive instructions are privileged”
• Sensitive but unprivileged instructions never trap, so the VMM cannot emulate them
• e.g., SGDT, SLDT, SIDT, … (see the sketch after this slide)
• Running unmodified OSes with pure trap-and-emulate is impossible!
• Full-virtualization by VMware in 1999
• Binary translation
• + No OS source modification (Windows is possible!)
• - Performance overhead
• Para-virtualization by Xen in 2003
• Hypercall
• + Near-native performance
• - OS modification
6/35
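A minimal sketch of the virtualization hole named above, assuming an x86-64 Linux user process on a CPU/OS that does not enable UMIP: SIDT is sensitive (it exposes where the real IDT lives) yet unprivileged, so it completes in ring 3 without ever trapping to a ring-0 VMM.

    /* Minimal sketch (assumption: x86-64 Linux, CR4.UMIP not enabled).
     * SIDT is sensitive, it reveals the interrupt descriptor table location,
     * yet it is not privileged, so it runs in ring 3 without trapping and a
     * classic trap-and-emulate VMM never gets control. */
    #include <stdint.h>
    #include <stdio.h>

    struct __attribute__((packed)) idtr {
        uint16_t limit;
        uint64_t base;
    };

    int main(void)
    {
        struct idtr idtr;
        __asm__ volatile("sidt %0" : "=m"(idtr));   /* no trap, no VMM involvement */
        printf("IDT base = 0x%llx, limit = %u\n",
               (unsigned long long)idtr.base, idtr.limit);
        return 0;
    }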
Hypercall vs. Binary Translation
• Source-level vs. Binary-level modification
OS source code (para-virtualization):
    …
    val = store_idt()  →  val = emulate_store_idt()  (hypercall into the VMM)
    …
OS binary (full virtualization):
    …
    mov val, idtr  →  call emulate_store_idt  (binary translation by the VMM)
    …
VMM:
    emulate_store_idt(val) {
        return virtual_idtr
    }
Hypercall: a method to optimize performance (e.g., batching traps)
Binary translation: optimized by caching translated instructions
7/35
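A toy C model of the example above, using only illustrative names (virtual_idtr and emulate_store_idt come from the slide's pseudocode, not a real VMM API): both paths funnel the guest's IDTR read into the same VMM emulation routine and differ only in how the redirection is installed.

    /* Toy model (illustrative names only). A para-virtualized guest is patched
     * at source level to invoke the VMM directly; a fully virtualized guest
     * keeps "mov val, idtr" and the VMM's binary translator rewrites it into
     * "call emulate_store_idt" inside its translation cache. */
    #include <stdint.h>
    #include <stdio.h>

    static uint64_t virtual_idtr = 0xfffff000;   /* per-VM shadow of the IDTR */

    /* VMM-side emulation handler, reached via hypercall or translated code. */
    static uint64_t emulate_store_idt(void)
    {
        return virtual_idtr;                     /* never expose the host IDTR */
    }

    int main(void)
    {
        /* Para-virtualization: "val = store_idt()" was rewritten to: */
        uint64_t val = emulate_store_idt();
        printf("guest observes IDTR base = 0x%llx\n", (unsigned long long)val);
        return 0;
    }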
Interrupt Virtualization
• Interrupt redirection
• Interrupts and exceptions are delivered to ring0
• Interrupt redirection is handled by VMM or
privileged VM
[Figure: interrupts or exceptions are delivered to the IDT of the VMM in ring 0, which redirects them to the IDT of the guest OS in the currently running VM]
8/35
HW-Assisted Virtualization
• x86 finally became virtualizable in 2005-2006
• “SW trends drive HW evolution”
• Intel VT and AMD-SVM
[Figure: Intel VT. The VMM or host OS and its host apps run in VMX root mode, the guest OS and guest apps run in VMX non-root mode, each side with its own ring 0 to ring 3. VMEntry switches the CPU into the guest and VMExit traps back to the VMM. The VMCS holds the guest state (loaded at VMEntry), the host state (loaded at VMExit), and control data that specifies what events to trap and reports why a trap occurred.]
9/35
HW-Assisted Virtualization
• Advantages
• No binary translation
• No OS modification
• Simplifying VMM
• KVM was born and included in Linux mainline in 2007
• VMware, Xen, etc. adopted HW-assisted virtualization
• Several lightweight VMMs were implemented
• lguest, tiny VMM, …
• Contributed to the wide adoption of virtualization
• Disadvantages
• More expensive trap (VMEXIT)
• Makes sophisticated and clever SW techniques obsolete
10/35
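A minimal sketch of how this HW support surfaces to software through Linux/KVM (the ioctl names and the kvm_run layout are the real KVM API; guest memory and register setup are elided, so as written the loop falls into the default case): the kernel programs the VMCS, performs VMEntry on KVM_RUN, and reports the reason for each VMEXIT back in the shared kvm_run area.

    /* Sketch of the KVM_RUN / VMEXIT handling loop (error checks and guest
     * setup omitted for brevity). */
    #include <fcntl.h>
    #include <linux/kvm.h>
    #include <stdio.h>
    #include <sys/ioctl.h>
    #include <sys/mman.h>

    int main(void)
    {
        int kvm  = open("/dev/kvm", O_RDWR);
        int vm   = ioctl(kvm, KVM_CREATE_VM, 0);
        int vcpu = ioctl(vm, KVM_CREATE_VCPU, 0);    /* one VMCS context per vCPU */

        /* Shared area where KVM reports why the last VMEXIT happened. */
        int size = ioctl(kvm, KVM_GET_VCPU_MMAP_SIZE, 0);
        struct kvm_run *run = mmap(NULL, size, PROT_READ | PROT_WRITE,
                                   MAP_SHARED, vcpu, 0);

        /* ... KVM_SET_USER_MEMORY_REGION, KVM_SET_SREGS/KVM_SET_REGS ... */

        for (;;) {
            ioctl(vcpu, KVM_RUN, 0);          /* VMEntry: switch to non-root mode */
            switch (run->exit_reason) {       /* VMExit: control data tells why   */
            case KVM_EXIT_HLT:                /* guest executed HLT: stop         */
                return 0;
            case KVM_EXIT_IO:                 /* guest port I/O: emulate it here  */
                break;
            default:
                fprintf(stderr, "unhandled exit reason %d\n",
                        (int)run->exit_reason);
                return 1;
            }
        }
    }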
Technical Issues
• Expensive VMEXIT cost
• Saves and restores the whole machine state
• HW: continuously reducing the VMEXIT latency
• SW: eliminating unnecessary VMEXITs and reducing the time spent handling each VMEXIT
Software Techniques for Avoiding Hardware Virtualization Exits [USENIX’12]
11/35
Nested-Virtualization-Unfriendly x86
• Multi-level architecture support
• IBM System z architecture
• Single-level architecture support
• Intel VMX and AMD SVM
[Figure: nested virtualization stack, a guest OS on a guest hypervisor on a bare-metal hypervisor, shown for both the multi-level and the single-level case]
What’s next? 12/35
ARM CPU Virtualization
• Para-virtualization
• ARM was also not virtualizable before HW virtualization support
• Xen on ARM by Samsung
• KVM for ARM [OLS’10]
• Replacing a sensitive instruction with an encoded SWI
• Taking advantage of RISC
• Script-based patching
• OKL4 microvisor
Sensitive instruction encoding types
Most ARM-based VMMs have turned to ARM HW virtualization support for efficiency
13/35
ARM CPU Virtualization
• Hyp mode
• Cortex-A15
• Similar to VMX root mode
14/35
Summary
• Incredibly rapid SW and HW evolution driven by IT industry needs
• Less than 10 years from VMware and Xen’s SW
technologies to HW-assisted virtualization
• Academia is tightly coupled with industry
• Research groups and companies are willing to share their state-of-the-art technologies at top conferences
• Even mobile environments are ready for virtualization
• ARM HW virtualization boosts this trend
15/35
CPU SCHEDULING
16
CPU Scheduling
• Hierarchical scheduling
[Figure: hierarchical scheduling, the guest OS schedules tasks onto virtual CPUs while the VMM schedules virtual CPUs onto physical CPUs]
17/35
CPU Scheduling
• The common role of CPU schedulers
• Allocating “a fraction of CPU time” to “a SW entity”
• Thread and virtual CPU are SW schedulable entities
• Linux CFS (Completely Fair Scheduler) is used for
both thread scheduling and KVM scheduling
• Xen has adopted popular schedulers from the OS domain
• BVT (Borrowed-Virtual-Time) [SOSP’99]
• SEDF (Simple Earliest Deadline First)
• EDF is for real-time scheduling
• Credit – Proportional share scheduler for SMP
• Default scheduler
18/35
Priority vs. Proportional-Share
• Priority-based scheduling
• Scheduling based on the notion of “relative priority”
• Fairness based on starvation avoidance
• Suitable for dedicated environments
• Desktop and mobile environments
• Linux schedulers before CFS, the Windows scheduler, and many mobile OS schedulers
19/35
Priority vs. Proportional-Share
• Proportional-share scheduling
• Scheduling based on the notion of “relative shares”
• Fairness based on shares
• Suitable for shared environments
• Shared workstations
• Pay-per-use clouds
• Virtual desktop infrastructure
• Linux CFS, Xen Credit, VMware
Lottery Scheduling: Flexible Proportional-Share Resource Scheduling [OSDI’94]
Proportional-share scheduling fits virtualized environments where independent VMs are co-located
20/35
Proportional-Share Scheduling
• Also called weighted fair scheduling
• “Weight”
• Relative shares
• “Shares”
• = Total shares × (Weight / Total weight)
• “Virtual time”
• ∝ Real time × (1 / Weight)
• Making equal progress of virtual time
• Pick the earliest virtual time at every scheduling decision time (see the sketch after this slide)
Borrowed-Virtual-Time (BVT) scheduling:
supporting latency-sensitive threads in
a general-purpose scheduler [SOSP’99]
[Figure: virtual time vs. real time (in mcu) for two threads with weights gcc : bigsim = 2 : 1; both make equal progress in virtual time]
21/35
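A minimal sketch of the pick-the-earliest-virtual-time rule (illustrative names, not the BVT or CFS implementation): each entity's virtual time advances by its runtime divided by its weight, so CPU time converges to the 2 : 1 ratio of the slide's gcc : bigsim example.

    /* Virtual-time based proportional sharing, illustrative only. */
    #include <stdio.h>

    struct sched_entity {
        const char *name;
        double weight;    /* relative share */
        double vtime;     /* virtual time: sum of runtime / weight */
    };

    /* Pick the runnable entity with the earliest virtual time. */
    static struct sched_entity *pick_next(struct sched_entity *e, int n)
    {
        struct sched_entity *best = &e[0];
        for (int i = 1; i < n; i++)
            if (e[i].vtime < best->vtime)
                best = &e[i];
        return best;
    }

    int main(void)
    {
        /* gcc : bigsim = 2 : 1, as in the BVT example on this slide */
        struct sched_entity e[] = { {"gcc", 2.0, 0.0}, {"bigsim", 1.0, 0.0} };
        double quantum = 10.0;                    /* ms of real time per dispatch */
        double cpu[2] = {0, 0};

        for (int tick = 0; tick < 9; tick++) {
            struct sched_entity *cur = pick_next(e, 2);
            cur->vtime += quantum / cur->weight;  /* equal progress in virtual time */
            cpu[cur - e] += quantum;
        }
        printf("gcc got %.0f ms, bigsim got %.0f ms (2:1)\n", cpu[0], cpu[1]);
        return 0;
    }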
Proportional-Share Scheduling
• Proportional-share scheduler for SMP VMs
• Common scheduler for commodity VMMs
• Employed by KVM, Xen, VMware, etc.
• VM’s shares (S) =
Total shares x (weight / total weight)
• VCPU’s shares = S / # of active VCPUs
• Active vCPU: Non-idle vCPU
e.g., a 4-VCPU VM with S = 1024:
• Single-threaded workload: one active vCPU, VCPU0 gets 1024 shares
• Multi-threaded (or multi-programmed) workload: four active vCPUs, VCPU0 to VCPU3 get 256 shares each
Symmetric vCPUs: existing schedulers view active vCPUs as containers with identical power
22/35
Challenges on VMM Scheduler
• Challenges due to the primary principles of
VMM, compared to OS scheduling research
[Figure: two VMs, each running an OS scheduler that places tasks on vCPUs, while the VMM scheduler places vCPUs on pCPUs]
1. Semantic gap (from OS independence): two independent scheduling layers; each VM is virtualized as a black box and believes it runs on a dedicated machine
2. Scarce information (from a small TCB): difficulty in extracting workload characteristics
• Observable at the VMM: I/O operations, privileged instructions
• Hidden inside the VM: process and thread information, inter-process communications, I/O operations and semantics, system calls, etc.
3. Inter-VM fairness (from performance isolation): favoring a VM must not compromise inter-VM fairness
The underlying tension: lightweightness (no cross-layer optimization) vs. efficiency (an intelligent VMM)
23/35
Research on VMM Scheduling
• Classification of VMM scheduling research
VMM scheduling
• Explicit specification
• Administrative specification: VSched [SC’05], SoftRT [VEE’10], RT [RTCSA’10], BVT and sEDF of Xen
• Guest OS cooperation: SVD [JRWRTC’07], PaS [ICPADS’09], GAPS [EuroPar’08]
• Workload-based identification: CaS [VEE’07], Boost [VEE’08], TAVS [VEE’09], Cache [ANCS’08], IO [HPDC’10], DBCS [ASPLOS’13]
24/35
CPU SCHEDULING
Task-aware Virtual Machine Scheduling for
I/O Performance
25
Problem of VM Scheduling
• Task-agnostic scheduling
[Figure: the VMM run queue, sorted by CPU fairness, holds vCPUs hosting mixed, CPU-bound, and I/O-bound tasks; an I/O event arrives for a vCPU near the tail of the queue]
Guest’s I/O-bound task: “That event is mine and I’m waiting for it.”
VMM: “Your vCPU has low priority now! I don’t even know this event is for your I/O-bound task! Sorry not to schedule you immediately…”
26/35
Task-aware VM Scheduling [VEE’09]
• Goals
• Tracking I/O-boundness with task granularity
• Improving the response time of I/O-bound tasks
• Keeping inter-VM fairness
• Challenges
[Figure: two VMs on one pCPU under the VMM, each hosting a mixed task, a CPU-bound task, and an I/O-bound task; an I/O event arrives]
1. I/O-bound task identification
2. I/O event correlation
3. Partial boosting
27/35
Task-aware VM Scheduling
1. I/O-bound Task Identification
• Observable information at the VMM
• I/O events
• Task switching events [Jones et al., USENIX’06]
• CPU time quantum of each task
• Inference based on common OS techniques
• General OS techniques (Linux, Windows, FreeBSD,
…) to infer and handle I/O-bound tasks
• 1. Small CPU time quantum (main)
• 2. Preemptive scheduling in response to I/O events
(supportive)
[Figure: example on Intel x86, the VMM measures each task’s time quantum between consecutive CR3 updates and observes the I/O events issued in between]
28/35
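A minimal sketch of this inference (all names are illustrative, not the paper's implementation): the VMM identifies guest tasks by their CR3 value and marks a task I/O-bound when its CPU quantum between CR3 switches stays short.

    /* Illustrative-only sketch: classify guest tasks by the quantum observed
     * between trapped CR3 updates. */
    #include <stdbool.h>
    #include <stdint.h>
    #include <stdio.h>

    #define SHORT_QUANTUM_US 1000      /* "small CPU time quantum" threshold */

    struct guest_task {
        uint64_t cr3;                  /* page-table base identifies the task */
        uint64_t last_switch_us;       /* when its CR3 was last loaded */
        bool     io_bound;             /* current inference */
    };

    /* Called when the VMM traps a CR3 update (guest task switch). */
    static void on_cr3_update(struct guest_task *prev, struct guest_task *next,
                              uint64_t now_us)
    {
        uint64_t quantum = now_us - prev->last_switch_us;
        prev->io_bound = (quantum < SHORT_QUANTUM_US);   /* main hint */
        next->last_switch_us = now_us;
    }

    int main(void)
    {
        struct guest_task shell = {0x1000, 0, false};
        struct guest_task loop  = {0x2000, 0, false};
        on_cr3_update(&shell, &loop, 300);      /* shell ran 300 us */
        on_cr3_update(&loop, &shell, 30300);    /* loop ran 30 ms   */
        printf("shell io_bound=%d, loop io_bound=%d\n",
               shell.io_bound, loop.io_bound);
        return 0;
    }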
Task-aware VM Scheduling
2. I/O Event Correlation: Block I/O
• Request-response correlation
• Window-based correlation
• Correlation for delayed read events by guest OS
• e.g., block I/O scheduler
• Overhead per VCPU = window size × 4 bytes (task ID); see the sketch after this slide
[Figure: tasks T1 to T4 issue reads from user space; the guest kernel (e.g., its block I/O scheduler) may delay the actual read request, so the VMM keeps an inspection window of recently requesting tasks and checks whether any I/O-bound task appears in the window]
29/35
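A minimal sketch of window-based correlation (illustrative names and window size, not the paper's code): remember the last few task IDs that issued block reads, and on completion treat the event as I/O-bound work if any of them is inferred to be I/O-bound.

    #include <stdbool.h>
    #include <stdint.h>
    #include <stdio.h>

    #define WINDOW 8      /* overhead per VCPU: WINDOW x 4 bytes of task IDs */

    static uint32_t window[WINDOW];   /* task IDs of the most recent readers */
    static int head;

    /* Record the requesting task when a block read is issued by the guest. */
    static void record_read_request(uint32_t task_id)
    {
        window[head] = task_id;
        head = (head + 1) % WINDOW;
    }

    /* On read completion (possibly delayed by the guest's I/O scheduler),
     * credit the event to I/O-bound work if any recent requester is I/O-bound. */
    static bool correlate_completion(bool (*is_io_bound)(uint32_t task_id))
    {
        for (int i = 0; i < WINDOW; i++)
            if (is_io_bound(window[i]))
                return true;
        return false;
    }

    static bool demo_is_io_bound(uint32_t task_id) { return task_id == 42; }

    int main(void)
    {
        record_read_request(7);       /* a CPU-bound task read something */
        record_read_request(42);      /* an I/O-bound task issued a read */
        printf("boost on completion? %d\n",
               correlate_completion(demo_is_io_bound));
        return 0;
    }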
Task-aware VM Scheduling
2. I/O Event Correlation: Network I/O
• History-based prediction
• Asynchronous packet reception
• Monitoring “the firstly woken task” in response to
an incoming packet
• N-bit saturating counter for each destination port number
[Figure: a 2-bit saturating counter per destination port in the portmap, with states 00 non-I/O-bound, 01 weak I/O-bound, 10 I/O-bound, 11 strong I/O-bound. The counter is incremented if the firstly woken task is I/O-bound and decremented otherwise; if the portmap counter’s MSB is set, the packet is treated as destined for I/O-bound tasks.]
Overhead per VM = N x 8KB
30/35
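A minimal sketch of the 2-bit saturating counters (illustrative names; a real portmap would pack the counters more tightly than one byte per port): training moves a port's counter toward 11 when the first task woken by a packet to that port is I/O-bound, and a set MSB predicts that later packets to the port belong to I/O-bound work.

    #include <stdbool.h>
    #include <stdint.h>
    #include <stdio.h>

    /* 64 Ki destination ports x 2 bits = 16 KB of state (the slide's figure
     * is N x 8 KB for N-bit counters); stored one byte per port for clarity. */
    static uint8_t portmap[65536];

    static void train(uint16_t port, bool woken_task_is_io_bound)
    {
        if (woken_task_is_io_bound) {
            if (portmap[port] < 3) portmap[port]++;   /* saturate at 11 */
        } else {
            if (portmap[port] > 0) portmap[port]--;   /* saturate at 00 */
        }
    }

    static bool packet_is_for_io_bound(uint16_t port)
    {
        return portmap[port] & 0x2;                   /* MSB of the 2-bit counter */
    }

    int main(void)
    {
        train(80, true);
        train(80, true);                              /* 00 -> 01 -> 10 */
        printf("port 80 for I/O-bound tasks? %d\n", packet_is_for_io_bound(80));
        return 0;
    }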
Task-aware VM Scheduling
3. Partial Boosting
• Priority boosting with task-level granularity
• Borrowing a future time slice to promptly handle an incoming I/O event, as long as fairness is kept
• Partial boosting lasts only while the I/O-bound tasks run
[Figure: the run queue sorted by CPU fairness holds VM1 and VM2 (CPU-bound tasks) toward the head and VM3 (with an I/O-bound task) at the tail. If an incoming I/O event is destined for VM3 and is inferred to be handled by its I/O-bound task, partial boosting is initiated for VM3’s VCPU.]
31/35
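A minimal sketch of the boost decision and its fairness accounting (illustrative names and thresholds, not the actual scheduler): the vCPU is boosted only when the event is inferred to target an I/O-bound task, and the borrowed time is charged back so that inter-VM fairness is preserved.

    /* Illustrative-only sketch of partial boosting. */
    #include <stdbool.h>
    #include <stdio.h>

    struct vcpu {
        bool runnable;
        long credit;               /* proportional-share budget */
        bool boosted;
    };

    /* Preempt for the target vCPU only if the I/O event maps to an I/O-bound
     * task and the VM can still borrow from its future time slice. */
    static bool should_partially_boost(const struct vcpu *target,
                                       bool event_for_io_bound_task)
    {
        return event_for_io_bound_task && target->runnable && target->credit > 0;
    }

    static void end_partial_boost(struct vcpu *v, long borrowed)
    {
        v->boosted = false;
        v->credit -= borrowed;     /* pay back the borrowed slice: fairness kept */
    }

    int main(void)
    {
        struct vcpu vm3 = { .runnable = true, .credit = 50, .boosted = false };
        if (should_partially_boost(&vm3, true)) {
            vm3.boosted = true;    /* run VM3's I/O-bound task ahead of its turn */
            end_partial_boost(&vm3, 2);
        }
        printf("remaining credit after boost: %ld\n", vm3.credit);
        return 0;
    }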
Task-aware VM Scheduling
- Evaluation
• Real workloads on Ubuntu Linux and Windows XP guests, mixing I/O-bound and CPU-bound tasks
• Setup: 1 VM runs an I/O-bound and a CPU-bound task; 5 VMs run CPU-bound tasks
• Result: 12-50% I/O performance improvement while keeping inter-VM fairness
32/35
How About Multiprocessor VMs?
• Virtual Asymmetric Multiprocessor [ApSys’12]
• Dynamically varying vCPU performance based on
hosted workloads
[Figure: Virtual SMP (vSMP) vs. Virtual AMP (vAMP) on four pCPUs.
vSMP: a VM’s vCPUs are time-shared with identical shares, so interactive and background workloads are equally contended regardless of user interactions.
vAMP (the proposal): vCPUs hosting interactive work become fast vCPUs and the rest become slow vCPUs; the size of a vCPU is the amount of its CPU shares.]
33/35
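A minimal sketch of the vAMP idea under assumed ratios (the 3:1 fast/slow split and all names are illustrative, not the paper's policy): instead of dividing a VM's shares equally across active vCPUs, give most of them to the vCPUs currently hosting interactive tasks.

    #include <stdio.h>

    #define NR_VCPU 4

    /* Split vm_shares so that vCPUs hosting interactive work become "fast"
     * and the rest become "slow"; falls back to symmetric vSMP if nothing
     * interactive is running. */
    static void assign_vamp_shares(int vm_shares, const int interactive[NR_VCPU],
                                   int shares[NR_VCPU])
    {
        int n_fast = 0;
        for (int i = 0; i < NR_VCPU; i++)
            n_fast += interactive[i];

        for (int i = 0; i < NR_VCPU; i++) {
            if (n_fast == 0)
                shares[i] = vm_shares / NR_VCPU;                  /* symmetric   */
            else if (interactive[i])
                shares[i] = (vm_shares * 3 / 4) / n_fast;         /* fast vCPUs  */
            else
                shares[i] = (vm_shares / 4) / (NR_VCPU - n_fast); /* slow vCPUs  */
        }
    }

    int main(void)
    {
        int interactive[NR_VCPU] = {1, 0, 0, 0}, shares[NR_VCPU];
        assign_vamp_shares(1024, interactive, shares);
        for (int i = 0; i < NR_VCPU; i++)
            printf("vCPU%d: %d shares\n", i, shares[i]);
        return 0;
    }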
Other Issues on CPU Sharing
• CPU cache interference issues
• Most CPU schedulers are conscious only of CPU time
• But the shared last-level cache (LLC) can also significantly affect performance
Q-Clouds: Managing Performance Interference Effects for QoS-Aware Clouds [EuroSys’10]
34/35
Summary
• CPU scheduling for VMs
• OS and VMM share their scheduling mechanisms
and policies
• Proportional-share scheduling fits well in VM-based shared environments, providing inter-VM fairness
• But the semantic gap weakens the efficiency of CPU scheduling
• Knowledge about OS and workload characteristics
gives an opportunity to improve VMM scheduling
• Other resources such as LLC should also be
considered
35/35