Virtualizing SQL

How to implement on virtual infrastructure to maximize performance and HA
The Golden Rule
• Design considerations for SQL workloads are
  essentially the same between physical and virtual
  environments now
   – A vCPU can perform the same as a physical CPU now, even with virtual multi-core sockets
   – If a guaranteed I/O SLA is demanded, dedicated disks can be provisioned to virtual infrastructure just as easily as to physical (more easily in most cases). Storage vMotion means you can always adjust later, completely transparently to SQL
   – Virtual memory is always backed by physical memory unless a cluster is under pressure, so vRAM is no different from RAM as long as you are not overprovisioning your virtual infrastructure
Base Advantages of Virtualized SQL
• High Availability
    – Provides all SQL VMs with an inherited cluster-like ability, without any of the headache (when SLAs do not require very low RTO)
• Snapshot
    – Ability to snapshot prior to major changes (e.g., Windows updates) allows for faster rollback (to that point in time)
• Flexibility
    – Easily increase memory or CPU to meet periodic demand spikes, such as end-of-month activities or periods of high usage for front-end applications
• Portability
    – Move workloads/servers to higher-powered hosts without interruption of service. Refresh hardware without reinstalling. Provides easier and more plentiful disaster recovery options
The Way Virtualization Used to Be
• Old drawbacks – Anemic VMs from a SQL
  perspective
  – Storage
     • Response times could get worse
     • Throughput could be decreased
  – Compute
     • Lack of memory
     • Lack of vCPU
  – Cost
     • Consolidation ratio blown. Host memory density not worth it
Multi-Vendor Virtualization Today
• New strengths
   – Storage
       • 2TB disk size, 60 SCSI targets per VM
        • Improvements in storage: much better latency on storage and I/O, up to 1 million IOPS from a single VM with vSphere 5.1!
        • Server-side flash storage is a great fit for tempdb: low persistence requirements but high utilization
        • New storage technologies can give SQL admins access to persistent flash-based storage for things like index data files (separate them into different filegroups); identify heavily read-only content and move it into flash. Hybrid flash is the future
        • EMC FAST Cache is an example
   – Compute
        • More CPU: up to 64 vCPUs for a single VM
        • More RAM: up to 1 TB for a single VM
   – Cost
       • Much lower price, economy of scale kicking in
        • Consolidation ratios are good. Host memory/CPU density has increased massively; 384 GB+ RAM in a single 2-socket blade is common
SQL Workloads
• OLTP
  – High volume web back-ends
  – High IOPS requirements throughout the day
     • Plan for peak daily activity
  – CPU contention may require more vCPU per VM,
    especially on poorly optimized (front end) systems.
   – Pro-tip: web application developer <> SQL developer in most cases. Poorly coded queries can shred a virtual SQL instance just as easily as a physical one. Virtual vs. physical doesn’t change your DBA hat (see the sketch after this slide’s bullets)
  – Dynamically scale based on known demand peaks
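A quick way to spot those poorly coded queries is the same on a VM as on physical hardware. A minimal T-SQL sketch using the plan-cache DMVs (the TOP count and ordering are illustrative choices, not from the deck):

      -- Top 5 statements by total CPU time since the plan cache was last cleared
      SELECT TOP (5)
             qs.total_worker_time / 1000 AS total_cpu_ms,
             qs.execution_count,
             SUBSTRING(st.text, (qs.statement_start_offset / 2) + 1,
                       ((CASE qs.statement_end_offset
                             WHEN -1 THEN DATALENGTH(st.text)
                             ELSE qs.statement_end_offset
                         END - qs.statement_start_offset) / 2) + 1) AS statement_text
      FROM sys.dm_exec_query_stats AS qs
      CROSS APPLY sys.dm_exec_sql_text(qs.sql_handle) AS st
      ORDER BY qs.total_worker_time DESC;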
SQL Workloads
• OLAP
  – Data warehouse
     • Predictable high volume workloads
     • Just like a physical server, daily ETL jobs will crush the disk.
       This is not an artifact of virtualization, just the nature of the
       job.
     • Easier to dynamically scale a server for EOY operations. Add
       CPU/Memory when needed and remove when not required.
     • Storage DRS can adapt workloads to appropriate storage
       dynamically.
  – Reporting services
     • Treat like a web server (since that is essentially what it is)
When is it not a good idea to Virtualize?
• Vendor support
  – Number 1 case that precludes virtualizing a database
    workload
  – Mitigate with an internal risk assessment and by maintaining a parallel physical test environment to satisfy vendor requirements (reproduce an issue in the physical test environment)
• Ultra-low latency/Custom hardware
  requirements (example Stock market)
  – Few real-world examples that most people will actually run into
Licensing
• SQL 2012 as point of reference. Individual VM vs
  Host licensing
• Individual VM licensing
  – Core license model
     • 1 core license per virtual CPU/thread
         – Hyper-threading counts!!
     • Sold in 2-packs
      • Minimum of 4 core licenses per VM (worked example after this slide’s bullets)
   – Server/CAL licensing
      • Still present for Standard edition
      • Only present for Enterprise edition if you currently have an active SA contract on your Enterprise Ed. Server/CAL licenses
      • Can run a maximum of 4 servers per host on Enterprise
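Worked example (illustrative numbers, applying the core-license rules above): an 8-vCPU VM needs 8 core licenses, i.e. four 2-packs; a 2-vCPU VM still needs the 4-core-license minimum, i.e. two 2-packs.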
Licensing
• License mobility
   – Any SQL license with SA on it can move to different
     physical hosts as the VM moves.
   – Available for both per-core and Server/CAL models
• Dense virtualization licensing
   – License all of the physical processors on a host and you can spin up unlimited virtual SQL servers (with SA on your licenses)
   – Great fit for environments with a large number of SQL servers
   – Can carve out either dedicated SQL clusters, or VM affinity rules for the group of SQL servers to only run on a group of hosts
VM Configuration
• TEMPLATE
  – Best thing you can do is to create a standard SQL VM
    template that is tweaked (and documented!) to high
    heaven
  – Reduces deployment time to approximately an
    hour, even with some post-patching
  – Consistent high performance design reproduced
    throughout the environment
  – Allows you to set the standard but still lets virtualization administrators deploy SQL VMs for you
VM Configuration
• Memory considerations
  – SQL and OS side tweaks
      • “Lock pages in memory” permission for the SQL service account
      • Set SQL max memory to 1-2 GB less than OS memory (T-SQL sketch after this slide’s bullets)
      • Rough rule of thumb: under 8 GB of memory = 1 GB reserved for the OS; more than that = 2 GB reserved for the OS
   – VMware reservation for the full memory amount
      • Critical if you use the lock-pages-in-memory tweak, but important to use regardless
      • This is one of the real legitimate uses of memory reservations! Don’t be afraid to ask for it!
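A minimal T-SQL sketch of the max-memory tweak, assuming a hypothetical 16 GB VM (so roughly 2 GB is left for the OS; adjust the value to your VM size):

      EXEC sp_configure 'show advanced options', 1;
      RECONFIGURE;
      -- 16 GB VM minus ~2 GB for the OS = 14336 MB for SQL Server
      EXEC sp_configure 'max server memory (MB)', 14336;
      RECONFIGURE;
      -- Sanity check: non-zero means 'lock pages in memory' is actually in effect
      SELECT locked_page_allocations_kb FROM sys.dm_os_process_memory;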
VM Configuration
• CPU considerations
   – SQL tweaks
       • Set max degree of parallelism (MAXDOP) equal to the number of vCPUs assigned to the VM
       • Split TempDB into a number of files equal to the number of vCPUs assigned to the VM to optimize access to TempDB (sketch after this slide’s bullets)
   – VMware tweaks
      • Start with a single vCPU in your template.
          – Easier to go up to the multiprocessor HAL than come back down again
          – All virtual resources have overhead, less is more. Only assign what you
            really need.
      • Remember that you can now create multi-core vCPU sockets. This
        can have licensing implications.
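A minimal T-SQL sketch of the two SQL-side tweaks, assuming a hypothetical 4-vCPU VM (the file path and sizes are illustrative):

      -- MAXDOP equal to the vCPU count (4 in this example)
      EXEC sp_configure 'show advanced options', 1;
      RECONFIGURE;
      EXEC sp_configure 'max degree of parallelism', 4;
      RECONFIGURE;

      -- Add tempdb data files until the file count matches the vCPU count
      ALTER DATABASE tempdb
      ADD FILE (NAME = tempdev2, FILENAME = 'T:\TempDB\tempdev2.ndf', SIZE = 4GB, FILEGROWTH = 512MB);
      -- ...repeat for tempdev3 and tempdev4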
VM Configuration
• Network considerations
  – VMTools and VMXNET 3
  – The VMXNET 3 NIC driver is bundled into VMTools
  – Ensure the latest version of VMTools is included in your template
  – SIGNIFICANT performance advantage over the default E1000 virtual NIC driver (TCP offloading), especially when using 10 Gb NICs in the hosts
VM Configuration
• Storage considerations
  – Sensitive workloads: design backend storage like
    you would for physical SQL
  – For SQL VMs with low performance requirements: design backend storage like you would for other server VMs
VM Configuration
• Storage Protocols
  – FC / FCoE
      • Stable, consistent, reliable. We prefer FC over IP-based storage protocols for SQL (and other business-critical apps)
  – iSCSI
     • Easy to start, but more complex than FC to do right
      • You don’t need Jumbo Frames; they mainly reduce CPU overhead, and we have plenty of CPU
  – NFS
     • No true multipathing (yet)
     • Does not support RDMs, so capped at 2 TB for a volume
     • Scales out very well
VM Configuration
• FC vs iSCSI
   – Fibre Channel – similar to
     railways
      • Purpose built, connected to
        predetermined specific endpoints.
      • Predictable performance
   – iSCSI – similar to highways
      • Can be more flexible
      • Endpoints are simple to add
      • Traffic, latency can be a problem
VM Configuration
• RDM vs VMDK
  – Performance is near identical. Very, very small
    performance hit for VMDK
  – VMDK is much simpler to use
  – VMDK is <still> limited to just under 2 TB
  – RDM is required for Physical to Virtual failover
    clusters
  – RDM is required for array based snapshot and
    backup applications
VM Configuration
• VM SCSI controllers
  – Use multiple SCSI controllers to allow parallel I/O
    operations (max of 4)
  – Separate controller for OS, data files, transaction
    log files
  – Queue depth – default is usually fine, engage your
    storage team and vendor before adjusting. Lots of
    knobs and dials here.
VM Configuration
• Paravirtualized SCSI controllers
  – Recommended for new SQL VMs that are storage-performance sensitive
  – Will require loading the mass storage driver during Windows install if used for the system drive (use a template!)
  – Around 2,000 IOPS from a single VM is when it starts to make a difference
VM Configuration
• Partition alignment - host:
    – In vSphere 5.x, VMFS is aligned at the 1 MB mark, which is fine
• Partition alignment - guest:
   – Align guest partitions, usually 1 MB is good (use a
     template!)
• Create VMDK files as Eager Zeroed Thick
    – You don’t want write I/Os waiting for the .vmdk to be zeroed
    – VAAI-enabled storage arrays will speed this up
Using PCIe SSD Cards
• I/O Latency measured in microseconds instead of
  milliseconds
• EMC XtremeSF, FusionIO, others. Does not require a
  specific storage array.
• Manually place tempdb here, for example, or other non-unique data (mirrored at the application level, for instance); see the sketch after this slide’s bullets
• Software enables using local SSD as an extension of storage
  array cache. Can be used for unique data, reads are
  accelerated and writes are handled by the storage array
  (EMC XtremeSW)
• Or use SQL Always-On to protect data, allowing unique data
  to be written to local SSD
• Will affect vMotion (and therefore DRS), but SQL Always-On can mitigate this
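A minimal T-SQL sketch for relocating tempdb onto such a card (the S: drive letter is a hypothetical example; the move takes effect after the SQL Server service restarts):

      ALTER DATABASE tempdb MODIFY FILE (NAME = tempdev, FILENAME = 'S:\TempDB\tempdb.mdf');
      ALTER DATABASE tempdb MODIFY FILE (NAME = templog, FILENAME = 'S:\TempDB\templog.ldf');
      -- Restart the SQL Server service for the new paths to take effect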
Backup Strategy
• Choice between Host based application aware VM
  backups, and traditional SQL maintenance plan backups
    – Must choose an approach for Full Recovery Mode, as each one wants control over the backup chain, and you do not want to risk both truncating the transaction log (see the COPY_ONLY sketch after this slide’s bullets)
    – Host-based is appropriate for single daily backups
    – A SQL maintenance plan is preferred for a “point-in-time” restore SLA
    – Whichever is chosen, the other must be adjusted to match; neither can do the whole job alone, so both are used together
• Backup admins will have policies configured for groups of VMs. You must ensure SQL VMs are designated as a separate backup group so policies can be tweaked no matter what choices are made
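If an ad-hoc SQL-side backup is needed while host-based backups own the chain, COPY_ONLY avoids disturbing it. A minimal sketch (database name and path are hypothetical):

      -- COPY_ONLY does not break the existing backup chain
      BACKUP DATABASE SalesDB
      TO DISK = 'D:\Backups\SalesDB_copyonly.bak'
      WITH COPY_ONLY, COMPRESSION;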
Backup Strategy
• Host based application aware backups
  – Whole VM backup that will truncate SQL/Exchange
    Logs when the job is run
  – Utilizes VM snapshots to freeze disk IO while the
    backup is run
  – Provides a crash consistent backup. Rollbacks may
    occur when databases are brought online
  – Appropriate for workloads with a single-daily-backup SLA
  – Ensure your databases are in Simple recovery mode, not Full, to keep transaction log size down (sketch after this slide’s bullets)
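A minimal T-SQL sketch of the recovery-model change (the database name is hypothetical):

      -- Check current models, then switch the database to Simple recovery
      SELECT name, recovery_model_desc FROM sys.databases;
      ALTER DATABASE SalesDB SET RECOVERY SIMPLE;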
Backup Strategy
• SQL Maintenance plan backups
  – Still useful for index, statistics, and general maintenance work around the database, even if host-based backups are used
Virtual SQL Clustering and HA
• Traditional clustering
    – Very much still available. 5-node cluster max with vSphere 5.1
   – vMotion and Storage vMotion not allowed. HA not
     impacted
   – Virtual/Physical hybrid clustering possible
   – Some restrictions on allowed storage protocols
   – Does not make traditional clustering any less painful
    – Does make the cultural transition from physical to virtual more palatable for some
Virtual SQL Clustering and HA
• SQL Always-On clustering
  – Best of both worlds!
     • Essentially get clustering without shared storage
       requirement or headaches normally found in MSCS clusters
     • No shared storage = less money with no loss in
       stability/performance
  – Super-fast failover like you get from SQL mirrored databases, without the SQL NCLI (or .NET data connector) requirement
     • Fails groups of databases over together
     • Works great even for legacy apps that use old ODBC/JDBC
       drivers
Virtual SQL Clustering and HA
• SQL Always-On clustering
  – Based on proven technologies
     • Windows Failover Clustering
     • SQL Mirroring
     • Exchange DAG similarity
  – Lagged nodes allow for cheap off-site DR capacity
  – Read-intent queries
     • Automatically offload certain query types to a read-only
       node
  – Backups can run on secondary copies
     • Can specify a preference for where backups run. Maintenance plans are intelligent and know if another node is running a backup; you can leverage this intelligence in scripts as well (sketch after this slide’s bullets)
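The “intelligence” mentioned above is exposed to scripts as a built-in function. A minimal T-SQL sketch (database name and path are hypothetical):

      -- Returns 1 only on the replica the availability group's backup
      -- preferences designate for backing up this database
      IF sys.fn_hadr_backup_is_preferred_replica(N'SalesDB') = 1
      BEGIN
          BACKUP DATABASE SalesDB
          TO DISK = 'D:\Backups\SalesDB_full.bak'
          WITH COMPRESSION;
      END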
Virtual SQL Clustering and HA
• Video demonstration of Always-On failover
Virtual SQL Clustering and HA
• Site Recovery Manager – offsite replication and recovery options
  – Protects entire virtual infrastructure including SQL
  – Can provide close to 0 RPO and very low RTO
  – Scripted DR recovery means critical SQL servers can come up first (VM “importance” preference). Critical for SQL workloads that are often back-ends for other systems
Take-aways for the DBAs
• Use a TEMPLATE!!
  – Seriously build a SQL VM template
  – Building a template is great…updating it at least twice a year is better
• vCenter access
  –   Performance stats over time
  –   Console access to VMs (virtual KVM access)
  –   Event history
  –   Capacity planning
  –   Security concerns from VM Admins can be handled by
      granting read-only access to the DBA
Take-aways for the DBAs
• More <> better
  – Oftentimes, overprovisioning virtual resources can actually mean worse performance. Don't ask for more than you need
  – Start with a single vCPU in your template. Easier to go to SMP than back to a single proc; only a single VM reboot is needed to add vCPUs for SMP
  – Remove extraneous devices: CD-ROM, ISOs, floppy, anything that is not needed for clean operation
     • You can always add things on temporarily if needed
Links
• Always-On Failover Demonstration – Kyle Quinby, Varrow
• MS KB 920093 - Tuning options for SQL Server when running in high
  performance workloads
• MS SQL 2012 license guide for virtualization
• VMw KB 1037959 - Microsoft Clustering on VMware vSphere: Guidelines
  for Supported Configurations
• VMware's SQL Server Best Practices Guide
• When to use Lock Pages in Memory with SQL Server
• How to Enable the Lock Pages in Memory Option
• VMware - 1 Million IOPS on 1 VM
• VMw KB 1010398 - Configuring disks to use VMware Paravirtual SCSI
  (PVSCSI) adapters
• VMw KB 1022242 - Types of supported Virtual Disks on ESX/ESXi hosts

Editor's Notes

  • #2: Introductions, Kyle and Tony. 2 mins. Best practices from our experiences with MS SQL and vSphere 5.1, but applies to other hypervisors and databases
  • #5: Storage: response times – virtualization overhead hurt throughput. Compute: lack of memory, lack of vCPU (key). Cost: consolidation ratio blown
  • #6: Multi-vendor – VMware, MS, Citrix hypervisors can all provide large amounts of virtual resources now. Auto-tiering storage: cache must “heat up”
  • #7: GO QUICKLY THROUGH THESE 2 SECTIONS ON SQL WORKLOADS
  • #8: GO QUICKLY THROUGH THESE 2 SECTIONS ON SQL WORKLOADS
  • #19: RDM is Raw Device Mapping. This means that the backend storage LUN or volume is mapped directly to the VM through the vmkernel, not as a virtual disk. It would then be formatted with NTFS, instead of VMFS, so it could be read and understood by a physical Windows host. Always use Virtual Mode RDM, unless Physical Mode is absolutely required.
  • #21: Existing VMs should not need to be changed, unless the VM is having storage I/O latency related to the LSI virtual SCSI adapter. New VMs can use PVSCSI safely; there’s no performance downside, but it’s a little bit of complexity that needs to be maintained. Server 2008 and Server 2012 will require loading the mass storage drivers during Windows install if it’s used for the system drive, but if it’s done as a template then it doesn’t have to be done again. A VM that pushes 2000 IOPS by itself would start to benefit from paravirtualization. For a performance-sensitive SQL server VM, we can recommend PVSCSI. For less performance-sensitive VMs we recommend the default for that environment.
  • #29: Link: http://kb.vmware.com/selfservice/microsites/search.do?language=en_US&cmd=displayKC&externalId=1037959