1
Storage Policy Based
Management
Cormac Hogan - @CormacJHogan
Blog – cormachogan.com
Chief Technologist - Storage & Availability
Polska VMUG
2017
2
Data in the news!
3
4
How do you manage all of that data?
How do you keep it safe?
How can you choose data services, such as
replication and encryption, on a per-application,
per-VM, or per-virtual-disk basis?
Storage Policy Based Management
Agenda
• Introduction
– vSphere APIs for Storage Awareness (VASA)
– Storage Policy Based Management (SPBM)
• SPBM and vSAN
• SPBM and Virtual Volumes (VVols)
• SPBM and VAIO (IO Filters)
– Host-based data services, from 3rd parties as well as from VMware
• SPBM integration with other VMware products
– with vRealize Automation / vRealize Orchestrator
– with VMware Horizon View
• Q&A
5
6
Introduction to
vSphere APIs for Storage Awareness
(VASA)
VASA – vSphere APIs for Storage Awareness
• VASA – vSphere APIs for Storage Awareness – gives
vSphere insight into data services, either on storage
systems or on hosts.
• VASA providers publish storage capabilities to
vSphere.
• With Virtual Volumes, VASA is also used to initiate
certain operations on the array from vSphere
– e.g. Create VVol, Delete VVol, Take a Snapshot
7
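The capabilities that VASA providers publish surface in vCenter as the building blocks of storage policies, and they can also be queried programmatically through vCenter's SPBM (PBM) endpoint. Below is a minimal, illustrative pyVmomi sketch (hostname and credentials are placeholders) that connects to the /pbm/sdk endpoint and lists the requirement-type VM storage policies; it follows the session-cookie pattern used in the pyvmomi community samples, so treat it as a starting point rather than a definitive implementation.

```python
import ssl
from pyVim.connect import SmartConnect, Disconnect
from pyVmomi import pbm, VmomiSupport
from pyVmomi.SoapAdapter import SoapStubAdapter

VCENTER = "vcenter.example.com"        # placeholder
USER = "administrator@vsphere.local"   # placeholder
PWD = "changeme"                       # placeholder

ctx = ssl._create_unverified_context()  # lab only; validate certificates in production
si = SmartConnect(host=VCENTER, user=USER, pwd=PWD, sslContext=ctx)

# Re-use the vCenter session cookie when talking to the SPBM endpoint (/pbm/sdk)
VmomiSupport.GetRequestContext()["vcSessionCookie"] = si._stub.cookie.split('"')[1]

pbm_stub = SoapStubAdapter(host=VCENTER, path="/pbm/sdk",
                           version="pbm.version.version1",
                           poolSize=0, sslContext=ctx)
pbm_si = pbm.ServiceInstance("ServiceInstance", pbm_stub)
profile_mgr = pbm_si.RetrieveContent().profileManager

# List all requirement-type (VM) storage policies known to SPBM
profile_ids = profile_mgr.PbmQueryProfile(
    resourceType=pbm.profile.ResourceType(resourceType="STORAGE"),
    profileCategory="REQUIREMENT")
for profile in profile_mgr.PbmRetrieveContent(profileIds=profile_ids):
    print(profile.name)

Disconnect(si)
```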
8
Introduction
to
Storage Policy Based Management
The Storage Policy Based Management (SPBM) Paradigm
• SPBM is the foundation of
VMware's Software Defined
Storage vision
• Common framework to allow
storage and host related
capabilities to be consumed
via policies.
• Applies data services (e.g.
replication, encryption,
performance) on a per VM, or
even per VMDK level, through
policies
9
Creating Policies via Rules and Rule Sets
• Rule
– A Rule references a combination of a metadata tag and a related value, indicating the quality or
quantity of the capability that is desired.
– These two items act as a key and a value that, when referenced together through a Rule,
become a condition that must be met for compliance.
• E.g. Place VM on datastore where Encryption = True
• Rule Sets
– A Rule Set comprises one or more Rules.
– Multiple “Rule Sets” can be leveraged to allow a single storage policy to define alternative
selection parameters, even from several storage providers.
• E.g. Place VM on vSAN datastore where Deduplication = On OR VVol datastore where Deduplication = On.
10
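To make the Rule and Rule Set semantics concrete, here is a purely conceptual Python sketch (not a VMware API, and the capability names are made up): all rules within a rule set must be satisfied by a datastore's advertised capabilities, while the policy as a whole is satisfied if any one of its rule sets matches.

```python
# Illustrative model only - capability names and datastores are made up.
def rule_set_matches(rule_set, capabilities):
    """All rules (key/value conditions) in a rule set must be met (AND semantics)."""
    return all(capabilities.get(key) == value for key, value in rule_set.items())

def policy_matches(rule_sets, capabilities):
    """A policy is satisfied if ANY of its rule sets matches (OR semantics)."""
    return any(rule_set_matches(rs, capabilities) for rs in rule_sets)

# Policy: deduplication required, on either a vSAN or a VVol datastore.
policy = [
    {"type": "vSAN", "deduplication": True},
    {"type": "VVol", "deduplication": True},
]

datastores = {
    "vsanDatastore": {"type": "vSAN", "deduplication": True},
    "vvolDatastore": {"type": "VVol", "deduplication": False},
    "vmfsDatastore": {"type": "VMFS"},
}

for name, caps in datastores.items():
    print(name, "compatible" if policy_matches(policy, caps) else "incompatible")
```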
11
12
[Diagram: SPBM policy rules draw on host-based data services (VAIO) and on datastore capabilities from vSAN, VVols and VMFS]
13
SPBM and vSAN
VMware vSAN
• Scale-out storage architecture built into the hypervisor
• Aggregates locally attached storage from each ESXi
host in a cluster
• Dynamic capacity and performance scalability
• Flash optimized storage solution
• Fully integrated with vSphere:
• vCenter, vMotion, Storage vMotion, DRS, HA, FT, …
• VM-centric data operations through SPBM (policies)
14
[Diagram: esxi-01, esxi-02 and esxi-03 in a vSAN and HA/DRS cluster, connected over a 10GbE vSAN network and presenting a single vSAN shared datastore]
15
[Diagram: the vSAN VASA Provider]
Storage policy rules available in vSAN 6.6.1
• Primary level of Failures To Tolerate (Primary FTT for cross-site stretched cluster protection)
• Secondary level of Failures To Tolerate (Secondary FTT for local stretched cluster protection)
• Failure Tolerance Method (Mirroring [Raid1:default] or Erasure Coding [Raid5/Raid6])
• IOPS limit for object
• Disable object checksum
• Force provisioning
• Number of disk stripes per object
• Flash read cache reservation (%)
• Object space reservation (%)
• Affinity (when PFTT=0 in stretched clusters)
16
Defining a policy for vSAN
• Policies define levels of
protection and performance
• Applied at a per VM level, or
per VMDK level
• vSAN currently provides 10
unique storage capabilities to
vCenter Server
17
[Screenshot callout: “What if” APIs preview the result of applying the policy]
Assign it to a new or existing VM, or vmdk
• When the policy is selected, vSAN
uses it to place/distribute the
VM/VMDK to guarantee availability
and performance
• Policies can be changed on-the-fly
– In some cases, 2X space may be
temporarily required to change it
– May also introduce rebuild/resync
traffic, so the advice is to treat an
on-the-fly policy change as a
maintenance task
18
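For completeness, here is a hedged pyVmomi sketch of what assigning a policy to an existing VM looks like through the API; it assumes the si connection and profile_mgr object from the earlier SPBM sketch, and the VM and policy names are placeholders. The same operation is normally performed from the vSphere Client by editing the VM's storage policies.

```python
from pyVmomi import vim, pbm

VM_NAME = "app-vm-01"          # placeholder
POLICY_NAME = "Gold-RAID1"     # placeholder

# Find the VM by name (assumes 'si' from the earlier connection sketch).
content = si.RetrieveContent()
view = content.viewManager.CreateContainerView(
    content.rootFolder, [vim.VirtualMachine], True)
vm = next(v for v in view.view if v.name == VM_NAME)
view.DestroyView()

# Find the SPBM policy by name (assumes 'profile_mgr' from earlier).
profile_ids = profile_mgr.PbmQueryProfile(
    resourceType=pbm.profile.ResourceType(resourceType="STORAGE"),
    profileCategory="REQUIREMENT")
profile = next(p for p in profile_mgr.PbmRetrieveContent(profileIds=profile_ids)
               if p.name == POLICY_NAME)
profile_spec = vim.vm.DefinedProfileSpec(profileId=profile.profileId.uniqueId)

# Apply the policy to the VM home namespace and to every virtual disk.
disk_changes = [
    vim.vm.device.VirtualDeviceSpec(
        operation=vim.vm.device.VirtualDeviceSpec.Operation.edit,
        device=dev, profile=[profile_spec])
    for dev in vm.config.hardware.device
    if isinstance(dev, vim.vm.device.VirtualDisk)
]
spec = vim.vm.ConfigSpec(vmProfile=[profile_spec], deviceChange=disk_changes)
task = vm.ReconfigVM_Task(spec=spec)
```

Changing the policy of a running VM this way is exactly the on-the-fly change described above, so the same caveats about temporary space and resync traffic apply.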
Policy Setting - Number of Failures to Tolerate (FTT)
• “FTT” defines the number of
failures a VM/VMDK can tolerate.
• For RAID-1, “n” failures tolerated
means “n+1” copies of the object
are created and “2n+1” hosts
contributing storage are required!
[Diagram: FTT=1 with RAID-1 – two vmdk replicas (each serving ~50% of I/O) and a witness component placed across esxi-01 to esxi-04]
19
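The arithmetic behind the FTT rule is worth spelling out: with RAID-1 mirroring, tolerating n failures requires n+1 full replicas (plus witness components) spread across at least 2n+1 hosts contributing storage. A small illustrative helper:

```python
def raid1_requirements(ftt):
    """Host and capacity requirements for RAID-1 mirroring at a given FTT."""
    replicas = ftt + 1          # full copies of the object
    witnesses = ftt             # tie-breaker components, in the simple case
    min_hosts = 2 * ftt + 1     # hosts contributing storage
    capacity_factor = replicas  # raw capacity consumed per GB of VMDK
    return replicas, witnesses, min_hosts, capacity_factor

for ftt in (1, 2, 3):
    r, w, h, c = raid1_requirements(ftt)
    print(f"FTT={ftt}: {r} replicas, {w} witness(es), "
          f"{h} hosts minimum, {c}x raw capacity")
# e.g. FTT=1: 2 replicas, 1 witness(es), 3 hosts minimum, 2x raw capacity
```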
Policy Setting - Number of Disk Stripes Per Object
• Defines the minimum number of
capacity devices across which
each replica of a storage object
is distributed.
• Higher values may result in
better performance. Stripe width
can improve performance of
write destaging and of fetching
uncached reads
• Higher values may put more
constraints on flexibility of
meeting storage compliance
policies
• Primarily used to achieve the
highest performance, even at the
expense of flexibility
[Diagram: FTT=1 with stripe width=2 – a RAID-1 mirror of two RAID-0 stripes (stripe-1a/1b and stripe-2a/2b) plus a witness, distributed across esxi-01 to esxi-04]
20
Policy Setting – Fault Tolerance Method (FTM) - RAID-5
• Available in all-flash configurations only
• Example: FTT = 1 with FTM = RAID-5
– 3+1 (4 host minimum, 1 host can fail
without data loss)
– 5 hosts would tolerate 1 host failure
or maintenance mode state, and still
maintain redundancy
– 1.33x instead of 2x overhead.
– 30% savings (20GB disk consumes
40GB with RAID-1, now consumes
~27GB with RAID-5)
[Diagram: RAID-5 layout – data and parity components striped across four ESXi hosts, with parity rotating from host to host]
21
Policy Setting - Fault Tolerance Method (FTM) - RAID-6
• Available in all-flash configurations only
• Example: FTT = 2 with FTM = RAID-6
– 4+2 (6-host minimum; 2 hosts can fail
without data loss)
– 7 hosts would tolerate 1 host failure
or maintenance mode state, and still
maintain redundancy
– 1.5x instead of 3x overhead.
– 50% savings. (20GB disk consumes
60GB with RAID-1, now consumes
~30GB with RAID-6)
[Diagram: RAID-6 layout – data and double-parity components striped across six ESXi hosts, with parity rotating from host to host]
22
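The savings quoted on the last two slides follow directly from the overhead factors: RAID-1 consumes FTT+1 times the VMDK size, RAID-5 (3+1) consumes 4/3x, and RAID-6 (4+2) consumes 1.5x. A short illustrative calculation reproduces the 20GB examples:

```python
# Raw capacity consumed per GB of usable VMDK capacity.
OVERHEAD = {
    ("RAID-1", 1): 2.0,       # 2 mirror copies
    ("RAID-5", 1): 4.0 / 3,   # 3 data + 1 parity
    ("RAID-1", 2): 3.0,       # 3 mirror copies
    ("RAID-6", 2): 1.5,       # 4 data + 2 parity
}

vmdk_gb = 20
for (ftm, ftt), factor in OVERHEAD.items():
    print(f"{vmdk_gb} GB VMDK, FTT={ftt}, {ftm}: ~{vmdk_gb * factor:.1f} GB raw")
# 20 GB VMDK, FTT=1, RAID-1: ~40.0 GB raw
# 20 GB VMDK, FTT=1, RAID-5: ~26.7 GB raw
# 20 GB VMDK, FTT=2, RAID-1: ~60.0 GB raw
# 20 GB VMDK, FTT=2, RAID-6: ~30.0 GB raw
```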
Sky’s the limit for expansion on an agile cloud
• Europe’s leading media brand
• 22 million subscribers
• Pay TV, on-demand Internet streaming, broadband mobile
• Always looking for new markets and new revenue streams
• Challenge: Bring new services online, cost-effectively, without
impacting existing services. Avoid creating expensive silos per
service.
• vSAN enabled Sky to scale out its video service on time and on
budget, delivering a fast, cost-effective and reliable platform for
video transport.
23
24
SPBM and Virtual Volumes
Why VVols?
25
Typical SAN
• Lots of paths to manage
• Lots of devices to manage
• Risk of hitting path/device limits
• IO Blender effect
VVols are 1st-class citizens on the storage array
26
Data services on array are consumed
on a per VM/VMDK basis via SPBM
• Fewer paths/devices to manage
• Array appears as a Volume
• More scalable than LUNs
• 1:1 relationship between VM and storage
[Diagram: Protocol Endpoint (PE)]
• No Filesystem
• ESXi manages array through
VASA APIs.
• Arrays are logically partitioned
into containers, called Storage
Containers.
• NO LUNS
• VM files, called Virtual Volumes,
stored natively on the Storage
Containers.
• IO from ESXi to the array is
addressed through an access
point called a Protocol Endpoint
• Data Services (snapshot, etc.)
are offloaded to the array
• Managed through SPBM.
27
High Level Architecture Overview
[Diagram: vSphere with Storage Policy-Based Mgmt. sits above Virtual Volumes; a Storage Policy (Capacity, Availability, Performance, Data Protection, Security) is matched against capabilities published by the array's VASA Provider (Snapshot, Replication, Deduplication, Encryption), and I/O flows through Protocol Endpoints (PE)]
28
VASA Provider (VP)
[Diagram: Virtual Volumes and the array-side VASA Provider]
Characteristics
• Software component developed by storage array vendors
• Provides “storage awareness” of the array’s data services
• The VASA Provider can be implemented within the array’s management firmware, in the array controller, or as a virtual appliance
• Responsible for creating and deleting Virtual Volumes (VMs, clones, snapshots)
Protocol Endpoints (PE)
[Diagram: Virtual Volumes, VASA Provider and PE]
What are Protocol Endpoints?
• Access points that enable communication between ESXi hosts and storage array systems
• SCSI T10 secondary addressing scheme used to access a VVol (PE + VVol offset)
Why Protocol Endpoints?
• Separate the access points from the storage itself
• Allow for fewer access points (compared to the LUN approach)
29
Protocol Endpoints (PE)
[Diagram: Virtual Volumes, VASA Provider and PE over iSCSI/NFS]
Scope of Protocol Endpoints
• Compatible with all SAN and NAS protocols: iSCSI, NFS, FC, FCoE
• Existing multi-path policies and NFS topology requirements can be applied to the PE
• NFS v3 and v4.1 supported
30
Storage Container (SC)
[Diagram: Virtual Volumes]
What are storage containers?
• Logical storage constructs for grouping virtual volumes
• Set up by the Storage Administrator
• Capacity is based on physical storage
• Logically partition or isolate VMs with diverse storage needs and requirements
• Minimum of one storage container per array; maximum depends on the array
• A single Storage Container can be simultaneously accessed via multiple Protocol Endpoints
• It is NOT a LUN
32
33
VVol walk-thru
with
Nimble Storage
[Now part of HPE]
34
Nimble Storage [now HPE]
Populate vCenter info on the Storage Array
Add Nimble info directly into vSphere
35
Full visibility into VM
• Home
• Swap
• VMDK
Storage Container
• Create a folder
• Set management type to VMware Virtual Volumes
• Set a capacity limit
36
Nimble Storage - VASA Provider
(automatically populated from array)
37
Protocol Endpoint automatically discovered!
Nimble Storage VVol Policy Setup – granular data services per-VM
38
Nimble Storage VVol Policy Setup
39
Some VVol Adoption figures from HPE – 3PAR
40
41
SPBM and vSphere APIs for I/O Filters
(VAIO)
42
[Diagram: VAIO filter framework – I/O from the Guest OS passes from the VMM/VMX through Filter 1, Filter 2, Filter 3 … Filter n in the Filter Framework before reaching the virtual disk]
43
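Conceptually, each I/O issued by the guest passes through the virtual disk's attached filters in filter-class order (for example, a replication filter runs before a cache filter) before it reaches the VMDK. The sketch below is only a conceptual Python model of that chain; the real VAIO framework is a partner SDK inside ESXi, not a Python API.

```python
# Conceptual illustration of an ordered I/O filter chain - not the VAIO SDK.
class ReplicationFilter:
    def process(self, io):
        io["replicated"] = True   # e.g. ship the write to a remote site
        return io

class CacheFilter:
    def process(self, io):
        io["cached"] = True       # e.g. stage the block in local flash
        return io

# Filters are invoked in filter-class order: replication before cache.
filter_chain = [ReplicationFilter(), CacheFilter()]

def issue_io(io, chain):
    """Run an I/O request through every filter before it hits the virtual disk."""
    for f in chain:
        io = f.process(io)
    return io  # finally forwarded to the VMDK

print(issue_io({"op": "write", "lba": 42, "data": b"..."}, filter_chain))
```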
VAIO
Data Services
Provided by 3rd parties
I/O Filters from 3rd parties – Cache Acceleration and Replication
44
45
VAIO
Data Services
provided by VMware in
vSphere 6.5
46
2 new features introduced with vSphere 6.5
- Encryption
- Storage I/O Control v2
Implementation is done via I/O Filters
Introduced in vSphere 6.5 - Storage I/O Control v2
• VM Storage Policies in vSphere 6.5 have a new option called “Common Rules”.
• These are used for configuring data services provided by hosts, such as Storage I/O Control
and Encryption. It is the same mechanism used for VAIO/IO Filters.
47
Now managed via policy and not set on a per-VM basis – reduced operational overhead (QoS)
Introduced in vSphere 6.5 - vSphere VM Encryption
• A new VM encryption mechanism.
• Implemented in the hypervisor,
making vSphere VM encryption
agnostic to the Guest OS.
• This is not just data-at-rest encryption;
it also encrypts data in-flight.
• vSphere VM Encryption in vSphere
6.5 is policy driven.
• Requires an external Key
Management Server - KMS (not
provided by VMware)
48
3rd Party and vSphere IO Filters can co-exist
49
There are 3 I/O Filters on these hosts:
- VM Encryption
- Storage I/O Control
- Cache Accelerator from Infinio
Case Study from Infinio – VAIO Cache Acceleration
• The University of Georgia Center for Continuing Education and Hotel
– Conference center located in Athens, Georgia, USA
• Using DELL Compellent All Flash Array
• Pilot on vSphere Cluster running over 50 VMs
– file and print services
– digital signage applications
– back office applications like SQL and QuickBooks
50
“Response times were fast – as low as
170 microseconds – which is
even faster than our all-flash array!”
51
SPBM and vRealize Automation / vRealize Orchestrator
vRealize Automation 7.3 + vRealize Orchestrator 6.5 and SPBM
• vRealize Automation (vRA) 7.3 enables SPBM through vRealize Orchestrator (vRO)
– vRA itself does not know about SPBM, so it relies on vRO
– SPBM policies must be preconfigured
– SPBM policies can be changed on-the-fly (day #2 operation)
• Leverages the latest vCenter Server (6.5) plug-in shipped with vRO out-of-the-box
• All SPBM policies are accessible through the API in vRO/vRA
52
53
54
SPBM and VMware Horizon View
Horizon View 7.2 and SPBM (with vSAN)
Policy (as appears in vCenter) | Description | Stripes | FTT | %RCR | %OSR
VM_HOME_<guid> | VM home directory | 1 | 1 | 0 | 0
REPLICA_DISK_<guid> | Linked Clone Replica Disk, Instant Clone Replica Disk | 1 | 1 | 10 | 0
PERSISTENT_DISK_<guid> | Linked Clone Persistent Disk | 1 | 1 | 0 | 100
OS_DISK_FLOATING_<guid> | Floating Linked Clone OS and disposable disks, floating Instant Clone OS and disposable disks | 1 | 1 | 0 | 0
OS_DISK_<guid> | Dedicated Linked Clone OS and disposable disks | 1 | 1 | 0 | 0
FULL_CLONE_DISK_FLOATING_<guid> | Floating Full Clone Virtual Disk | 1 | 0 | 0 | 0
FULL_CLONE_DISK_<guid> | Dedicated Full Clone Virtual Disk | 1 | 1 | 0 | 0
55
• Policies are automatically created when Horizon View is deployed on vSAN datastores
56
Summary
• The amount of data in the world is exploding!
• Data is critical to your organization, and in many
cases, how you innovate with this data keeps
you ahead of your competitors.
• Managing that data, keeping it safe, and providing
the appropriate data services at the granularity
of an application can be complex.
• Storage Policy Based Management, a
fundamental building block of VMware’s Software
Defined Storage, achieves this.
• SPBM is integrated with all vSphere storage
technologies, from vSAN to VVols to VAIO.
• With SPBM, data services (e.g. deduplication,
encryption, replication, RAID level) can be
assigned to your data on a per VM or per VMDK
basis.
57
58
Thank You
Q&A


Editor's Notes

  • #3: Data, and most especially what you do with it to offer new/better experiences for your customers, is going to be the key differentiator between you and your competition
  • #4: Self-driving cars – Other projections state that they will generate 1GB of data per second. Equifax – personal data from 143 million US citizens. Cost CxOs their jobs. Hurricane Irma in the US, – Are you prepared for Disaster Recovery? Now what if you put these 2 together? What if someone hacked a self-driving car?
  • #8: VVOLS KB - https://kb.vmware.com/kb/2113013 Storage providers inform vCenter Server about specific storage devices, and present characteristics of the devices and datastores (as storage capabilities).
  • #10: Storage Policy-Based Management (SPBM) is the foundation of the VMware SDS Control Plane and enables vSphere administrators to over come upfront storage provisioning challenges, such as capacity planning, differentiated service levels and managing capacity headroom, whether using vSAN or Virtual Volumes (VVols) on external storage arrays. SPBM provides a single unified control plane across a broad range of data services and storage solutions. The framework helps to align storage with application demands of your virtual machines. SPBM is about ease, and agility. Traditional architectural models relied heavily on the capabilities of an independent storage system in order to meet protection and performance requirements of workloads. Unfortunately the traditional model was overly restrictive in part because standalone hardware based storage solutions were not VM aware, and were limited in their abilities to unique settings to various workloads. Storage Policy Based Management (SPBM) lets you define requirements for VMs or collection of VMs. This SPBM framework is the same framework used for storage arrays supporting VVOLs. Therefore, a common approach to managing and protecting data can be employed, regardless of the backing storage. ---------------------------------- Overview: Key to software defined storage (SDS) architectural model SPBM is the common framework to abstract traditional storage related settings away from hardware, and into hypervisor Applies storage related settings for protection and performance on a per VM, or even per VMDK level ----------------------------------
  • #11: https://blogs.vmware.com/vsphere/2014/10/vsphere-storage-policy-based-management-overview-part-2.html
  • #13: Common Rules – these come from I/O Filters on hosts (VMCrypt, SIOCv2, VAIO) Rule-Sets come from storage, either vSAN or VVOls.
  • #16: http://cormachogan.com/2013/09/06/vsan-part-5-the-role-of-vasa/
  • #17: Defining a policy will let vSAN use “what if” APIs so that you can see the “result” of having such a policy applied to a VM of a certain size. Very useful as it gives you an idea of what the “cost” is of certain attributes. Mirroring = RAID-1 Erasure Coding = RAID-5/RAID-6
  • #18: Key Message/Talk track: Creating a storage policy is nothing more than defining what your requirements are for a VM, or a collection of VMs. These requirements are typically around protection and performance of the VM. A new policy can be created and applied to a VM, or an existing policy can be adjusted. The VM will adopt the new performance and protection settings without any down time. ---------------------------------- Overview: Policies define levels of protection and performance Applied at a per VM level, or per vmdk level vSAN currently provides five unique storage capabilities to vCenter Server ---------------------------------- Details: Storage policy rules available (in 6.6) are: Number of disk stripes per object Flash read cache reservation (%) Primary level of failures to tolerate (PFTT - for stretched clusters) Secondary level of failures to tolerate (SFTT – for local protection) Failure Tolerance method Affinity IOPS limit for object Disable object checksum Force provisioning Object space reservation (%) Defining a policy will let vSAN use “what if” APIs so that you can see the “result” of having such a policy applied to a VM of a certain size. Very useful as it gives you an idea of what the “cost” is of certain attributes. ----------------------------------
  • #19: Key Message/Talk track: After a policy is created, it can easily be applied to an individual VMDK of a VM, an entire VM, or a collection of VMs in the data center. Applying at a VMDK level can be useful for applications that have different needs within defined drives of of the guest OS. For instance, a drive dedicated for the database may have different requirements than the drive dedicated for transaction logs. ---------------------------------- Overview: When the policy is selected, vSAN uses it to place/distribute the VM to guarantee availability and Performance Policies can be changed without any interruption to the VM ---------------------------------- Details: Defining a policy will let vSAN use “what if” APIs so that you can see the “result” of having such a policy applied to a VM of a certain size. Very useful as it gives you an idea of what the “cost” is of certain attributes. Only one SPBM policy is allowed to be applied. vSAN does not support the appended SPBM policies. Policies can also be assigned by rules or tags. An example might be all VMs with “Prod-SQL” in the VM name or resource group might be set at RAID-1 and an FTT=2. VM named “Test-Web” would never be applied to this SPBM policy, and would adopt the default policy for the environment. ----------------------------------
  • #20: Key Message/Talk track: Failures to Tolerate (FTT) is a rule that defines how many failures can be tolerated to let the VM or other object continue to run in the event of a failure. This is one of the key pillars behind vSAN’s ability to protect a VM from failure of a fault domain (disk, disk group, host, defined fault domain, or site) ---------------------------------- Overview: “FTT” defines the number of hosts, disk or network failures a storage object can tolerate. For “n” failures tolerated, “n+1” copies of the object are created and “2n+1” host contributing storage are required! Primary Failures to Tolerate (PFTT) defines the number of sites that can accept failure. (0, 1) Secondary Failures to Tolerate (SFTT) defines the number within a site that can accept failure (0, 1, 2, 3) ---------------------------------- Details: FTT can and will be dependent on a number of factors. A few important factors include: The number of hosts in the vSAN cluster The Failure Tolerance Method (FTM) that is defined for the object. Using a RAID-1 (mirroring) Fault Tolerance Method (FTM), an FTT of 2 would mean that a minimum number of hosts in a cluster would be 5. FTT=3 would require 7 hosts. Number of Failures Mirror copies Witnesses Min. Hosts Hosts + Maintenance 0 1 0 1 host n/a 1 2 1 3 hosts 4 hosts 2 3 2 5 hosts 6 hosts 3 4 3 7 hosts 8 hosts ---------------------------------------------------------------------------------------------- There is also Primary and Secondary Failures to Tolerate (PFTT and SFTT) are for vSAN stretched clusters PFTT defines the number of sites failures (0, 1) SFTT defines the number of failures within a site (0, 1, 2, 3)
  • #21: Key Message/Talk track: This policy, sometimes known as “stripe width” defines the minimum number of capacity devices across which each replica of a storage object is distributed. Increasing the predefined number of stripes per object beyond 1 is intended to help performance. ---------------------------------- Overview: Defines the minimum number of capacity devices across which each replica of a storage object is distributed. Higher values may result in better performance. Stripe width can improve performance of write Destaging, and fetching of uncached reads Higher values may put more constraints on flexibility of meeting storage compliance policies To be used only if performance is an issue ---------------------------------- Details: Most beneficial on the following scenarios: A non cached read on a hybrid configuration, where one is typically reliant on the rotational latencies of a single spinning disk. Reads on an all-flash configuration, where fetching I/O may be able to be improved in some situations. Destaging buffered writes to persistent tier (all flash, or hybrid). This will relieve some of the backpressure that could be induced by large amount of write activity, whether they are sequential or random in nature. vSAN may create more stripes than what is defined. With DD&C, component A with a strip width of 1 will not necessarily live just on disk 1, but rather, be sprinkled around the various capacity disks of a disk group. It becomes an implicit stripe width setting, but will not show up in the UI as a traditional change in a stripe width. Component size can impact stripe width, as an object over 255GB will be split into two components. This however could end up on the same disk, or a different disk group. ----------------------------------
  • #22: Key Message/Talk track: A failure tolerance method (FTM) is the way data will maintain redundancy. The simplest FTM is a RAID-1 mirror. This would have a mirror copy of objects/components across multiple hosts. Another FTM is RAID-5/RAID-6, where data is striped across multiple hosts with parity information written to provide tolerance of a failure. Parity is striped across all hosts. When done over the network using software only, this is sometimes referred to as erasure coding. This is done inline; there is no post-processing required. VMware’s implementation of erasure coding stripes the data with parity across the minimum number of hosts in order to comply with the policy. RAID-5 will offer a guaranteed 30% savings in capacity overhead compared to RAID-1 ---------------------------------- Overview: Available in all-flash configurations only Example: FTT = 1 with FTM = RAID-5 3+1 (4 host minimum, 1 host can fail without data loss) 5 hosts would tolerate 1 host failure or maintenance mode state, and still maintain redundancy 1.33x instead of 2x overhead. 30% savings 20GB disk consumes 40GB with RAID-1, now consumes ~27GB with RAID-5 ---------------------------------- Details: RAID-5/6 does have I/O amplification on writes (only). RAID-5. Single write operation results in 2 reads and 2 writes RAID-6. Single write operation results in 3 reads and 3 writes (due to double parity) RAID-5/6 only supports the FTT of 1, or 2 (implied by choosing RAID-5 or RAID-6). Will not support FTT=0, or FTT=3 The realized dedup & compression ratios will be different when employing RAID-5/6 than when using RAID-1 mirroring. Space efficiency using erasure codes will be more of a guaranteed space reduction because of the lack of implied multiple full copies. Even if the DD&C ratio may be less on objects that use RAID-5/6, the effective overall capacity used will be equal to, if not better than RAID-1 with DD&C. FTM can and will be dependent on a number of factors. A few important factors include: The number of hosts in the vSAN cluster Stripe width defined for the objects Using a RAID-5, and an implied FTT of 1 would mean that a minimum number of hosts in a cluster would be 4. With 4 hosts, 1 host can fail without data loss (but will lose redundancy). To maintain full redundancy with a single host in maintenance mode, the minimum would be 5 hosts. Cluster sizes for RAID-5 need to be 4 or more hosts. Not multiples of 4 hosts. Since VMware has a design goal of not relying on data locality, this implementation of erasure coding does not bring any negative results by distributing the RAID-5/6 stripe across multiple hosts. ---------------------------------- Internal:
  • #23: Key Message/Talk track: VMware’s RAID-6 is a dual parity version of the erasure coding scheme used in the RAID-5 FTM. An FTM of RAID-6 will imply an ability to tolerate 2 failures (e.g. FTT=2) and maintain operation. RAID-6 will offer a guaranteed 50% savings in capacity overhead compared to RAID-1 using an FTT of 2. Just as with RAID-5 erasure coding, this is all done inline, with no post processing required. Parity is striped across all hosts. VMware’s implementation of erasure coding stripes the data with parity across the minimum number of hosts in order to comply with the policy. RAID-6 will offer a guaranteed 50% savings in capacity overhead compared to RAID-1 and an FTT of 2. ---------------------------------- Overview: Available in all-flash configurations only Example: FTT = 2 with FTM = RAID- 4+2 (6 host minimum. 1 host can fail without data loss 7 hosts would tolerate 1 host failure or maintenance mode state, and still maintain redundancy 1.5x instead of 3x overhead. 50% savings 20GB disk consumes 60GB with RAID-1, now consumes ~30GB with RAID-6 ---------------------------------- Details: RAID-5/6 does have I/O amplification on writes (only). RAID-5. Single write operation results in 2 reads and 2 writes RAID-6. Single write operation results in 3 reads and 3 writes (due to double parity) RAID-5/6 only supports the FTT of 1, or 2 (implied by choosing RAID-5 or RAID-6). Will not support FTT=0, or FTT=3 The realized dedup & compression ratios will be different when employing RAID-5/6 than when using RAID-1 mirroring. Space efficiency using erasure codes will be more of a guaranteed space reduction because of the lack of implied multiple full copies. Even if the DD&C ratio may be less on objects that use RAID-5/6, the effective overall capacity used will be equal to, if not better than RAID-1 with DD&C. FTM can and will be dependent on a number of factors. A few important factors include: The number of hosts in the vSAN cluster Stripe width defined for the objects Using a RAID-6, and an implied FTT of 2 would mean that a minimum number of hosts in a cluster would be 6. With 6 hosts, 2 hosts can fail without data loss (but will lose redundancy). To maintain full redundancy with a single host in maintenance mode, the minimum would be 7 hosts. Cluster sizes for RAID-6 need to be 6 or more hosts. Not multiples of 6 hosts. Since VMware has a design goal of not relying on data locality, this implementation of erasure coding does not bring any negative results by distributing the RAID-5/6 stripe across multiple hosts. ----------------------------------
  • #24: We get a lot of questions about whether vSAN is available for prime-time production use. With 10,000 customers, vSAN is now used everywhere for all manner of applications. Here is one such example, where vSAN is used in a mission critical role.
  • #25: VVol 2.0" refers to additional functionality supported in vSphere for VVol targets written specifically for it, notably replication. Many VVol solutions still offer only what you might call "VVol 1.0". Regardless, the vSphere Compatibility Guide will tell you whether a given VVol storage system is certified to work with vSphere 6.5, which could be "VVol 1.0" or "VVol 2.0". To be clear vSphere 6.5 does NOT REQUIRE "VVol 2.0" on the storage side.
  • #26: The IO Blender effect – lots of different I/O types – random/sequential, read/write, different block sizes, being handled by the same LUN. All sorts of mechanisms were introduced to alleviate this situation, such as RAID, wide-striping, QoS, etc. On the vSphere side of things, we introduced SIOC, SDRS, etc. Many customers kept spreadsheets of what VMs were supposed to be on which LUNs for performance and data service purposes.
  • #28: VASA providers the Control Plane. PEs provide the Data Plane
  • #29: https://blogs.vmware.com/virtualblocks/2016/11/30/vasa-provider-considerations-controller-embedded-vs-virtual-appliance/ VASA Provider in VVols: Provides storage awareness services Centralized connectivity for ESXi and vCenter Servers Responsible for creating Virtual Volumes (VVols) Provide support for VASA APIs used for ESXi Responsible for defining binding operations Offloading VM related operations directly to array
  • #30: Why the concept of a PE? In today’s LUN-Datastore world, the datastore has two purposes – It serves as the access point for ESXi to send IO to and it also serves as storage container to store many VM files (VMDKs). This dual-purpose nature of this entity poses several challenges. It should not be necessary to have so many access points to the storage. Because of the rigid nature of the size of the datastore, and the fewer number of datastores, multiple VMs are stored together in the same datastore even if the VMs have different requirements. This leads to the so-called IO blender effect. So, how about we separate out the concept of the access point from the storage? This way, we can fewer number of access points to several number of storage entities. And hence the introduction of PE.
  • #31: NFS v41 support statement: https://docs.vmware.com/en/VMware-vSphere/6.5/com.vmware.vsphere.storage.doc/GUID-AAA99054-4D81-49F8-9927-65E9B08577AD.html
  • #32: During a rescan ESX will identify PE and maintain then in DBs. Multi-pathing on the PE ensures high availability Concept of queue depth in a PE? Yes, PEs are given queue depth of 128. Compare with a LUN which only had 32 or 64, and how many VMs per LUN.
  • #33: Need at least 1 SC per array. You can have as many as the array can support. An SC cannot span across array
  • #35: Login to UI. Select Administration. Select vSphere Integration. Populate VC info. Select plugins – in this case, web client and VASA Provider.
  • #36: Note that not all VASA implementations give you this level of detail. Also, others may take a different approach to configuring PEs and Storage Containers.
  • #37: Octo is the name of a “group” on the Nimble Array which I provided as part of the registration – it could be anything.
  • #39: Storage = Nimble Storage Add a Rule e.g. encryption Add another rule e.g. protection
  • #40: Compatible = nimble. Other refs: https://www.hpe.com/h20195/v2/getpdf.aspx/4AA5-6907ENW.pdf (HPE and Vvols)
  • #41: Figures provided by HPE – August 2017 (VMworld 2017 Las Vegas)
  • #42: https://code.vmware.com/programs/vsphere-apis-for-io-filtering
  • #43: https://code.vmware.com/programs/vsphere-apis-for-io-filtering IO request moving between the guest operating system (Initiator), located in the Virtual Machine Monitor(VMM), and a virtual disk (Consumer) are filtered through a series of two IO Filters (per disk), one filter per filter class, invoked in filter class order. For example, a replication filter executes before a cache filter. Once the IO request has been filtered by all the filters for the particular disk, the IO request moves on to its destination, either the VM or the virtual disk. Partner will develop IO Filter plug-ins to provide filtering virtual machines. Each IO Filter registers a set of callbacks with the Filter Framework, pertaining to different disk operations. If a filter fails an operation, only the filters prior to it are informed of the failure. Any filter can complete, fail, pass, or defer an IO request. A filter will defer an IO if the filter has to do a blocking operation like sending the data over the network, but wants to allow further IOs to get processed as well. If a filter performs a blocking operation during the regular IO processing path, it would affect the IOPS of the virtual disk, since we wouldn't be processing any further IOs until the blocking operation completes. If the filter defers an IO request, the Filter Framework will not pass the request to subsequent filters in the class order until the filter completes the request and notifies the Filter Framework that the IO may proceed.
  • #44: Available since vSphere 6.5.
  • #45: https://www.vmware.com/resources/compatibility/search.php?deviceCategory=vaio 6 certified partner VAIO products, out of which 3 are Cache and 3 are Replication. Cache accelerators using local flash devices (or some memory) to accelerate reads, and sometimes writes.
  • #46: Available since vSphere 6.5.
  • #47: This is before I added the I/O Accelerator from Infinio. These are provided by default in vSphere.
  • #48: When the policy has been created, it may be assigned to newly deployed VMs during provisioning, or to already existing VMs by assigning this new policy to the whole VM (or just an individual VMDK) by editing its settings.
  • #49: What is the relationship between vCenter Server and KMS Server? VMware vCenter now contains a KMIP client, which works with many common KMIP key managers (KMS). VMware does not own the KMS. Plan for backup, DR, recovery, etc., with your KMS provider. You must be able to retrieve the encryption keys in the event of a failure, or you may render your VMs unusable. Administrators should not encrypt their vCenter Server. Possible “chicken-and-egg” situation where you need vCenter to boot (KMS client) so it can get the key from the KMS to unencrypt its files, but it will not be able to boot as its files are encrypted. vCenter Server does not manage encryption. It is only a client of the KMS. With VM Home encrypted, only administrators with ‘encryption privileges’ can access the console of the virtual machine. One misconception: VM Home folder is not encrypted. Only some files in the VM Home folder are encrypted. Some (non-sensitive) VM files and log files are not encrypted. Core dumps are encrypted on ESXi hosts with encrypted VMs. Encrypted virtual machines cannot be exported to an OVF, nor can they be suspended.
  • #50: The VM Encryption and SIOC are available by default. Infinio is a third party plugin for cache acceleration - I installed this separately.
  • #51: http://www.infinio.com/sites/default/files/resources/Case%20Study%20-%20UG%20Center%20and%20Hotel%20-%20FINAL.pdf
  • #53: Screenshots courtesy of http://www.virtualjad.com/2017/05/scoop-vrealize-automation-7-3.html https://blogs.vmware.com/virtualblocks/2017/05/23/storage-policy-based-management-vrealize-automation/
  • #54: I don’t know much about this, but I believe that changing the policy will also Storage vMotion the VM to another datastore that meets the policy requirements – checking with Jad.
  • #56: When you use Virtual SAN, Horizon defines four virtual machine storage requirements, such as capacity, performance, and availability, in the form of default storage policy profiles and automatically deploys them for virtual desktops onto vCenter Server.  The policies are automatically and individually applied per disk (Virtual SAN objects) and maintained throughout the lifecycle of the virtual desktop. Storage is provisioned and automatically configured according to the assigned policies. You can modify these policies in vCenter.  Horizon creates vSAN policies for linked-clone desktop pools, instant-clone desktop pools, full-clone desktop pools, or an automated farm per Horizon cluster.