Deploying SSD in the data center
Or How this Flash
Makes Storage Like This Flash
Today’s Agenda
• The storage performance problem
• Flash to the rescue
– A brief flash memory primer
– Flash/SSD types and form factors
• All-flash arrays (AFAs)
• Hybrid arrays
• Server side flash
• Converged architectures
• Choosing a solution
The IO Gap
• Processor speed doubles every 2-3 years
• Disks have been stuck at 15K RPM since 2000 (rough math below)
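A rough sketch of how wide that gap has grown, assuming a 2.5-year doubling period (splitting the 2-3 year range above) and a 15K RPM drive holding steady at roughly 200 random IOPS:

```python
# Back-of-the-envelope view of the IO gap: CPU performance compounding since
# 2000 while a 15K RPM disk delivers roughly the same random IOPS it always has.
# The 2.5-year doubling period and 200 IOPS figure are working assumptions.

DOUBLING_YEARS = 2.5
DISK_IOPS = 200

for year in (2000, 2007, 2014):
    cpu_factor = 2 ** ((year - 2000) / DOUBLING_YEARS)
    print(f"{year}: CPU ~{cpu_factor:.0f}x its 2000 speed, disk still ~{DISK_IOPS} IOPS")

# By 2014 the processor side is roughly 50x faster; the disk side hasn't moved.
```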
“The I/O Blender” Strains Storage
• Virtualization throws I/O into a blender… all I/O is now random I/O!
• 10 VMs doing sequential I/O to the same datastore = random I/O
• Disk drives are good at sequential I/O, less good at random
Data Access Performance
• L1 processor cache ~1ns
• L2 processor cache ~4ns
• Main memory ~100ns
• PCIe SSD read 16-60μs (16,000-60,000ns)
• SAS/SATA SSD read 50-200μs (50,000-200,000ns)
• Disk read 4-50ms (4,000,000-50,000,000ns)
Moral of the story: keep IOPS away from the disk (see the latency sketch below)
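A minimal sketch of why that moral holds, using the round-number latencies above; the flash hit rates are hypothetical:

```python
# Average access latency as a function of how many reads are absorbed by
# flash instead of reaching the disk. Latencies are ballpark figures from
# the slide; the hit rates are made up for illustration.

FLASH_READ_US = 100      # ~100 microseconds for a SAS/SATA SSD read
DISK_READ_US = 10_000    # ~10 ms for a random disk read

def effective_latency_us(flash_hit_rate: float) -> float:
    """Blended latency when flash_hit_rate of reads come from flash."""
    return flash_hit_rate * FLASH_READ_US + (1 - flash_hit_rate) * DISK_READ_US

for hit_rate in (0.0, 0.90, 0.99):
    print(f"{hit_rate:.0%} flash hits -> {effective_latency_us(hit_rate):.0f} us average")

# 0% -> 10000 us, 90% -> 1090 us, 99% -> 199 us: the few disk reads dominate.
```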
Traditional Performance Solutions
• Head per track disk drives, DRAM SSDs
– Huge price premium limits use to the very few
• Wide Striping
– A 15K RPM disk delivers 200 IOPS
– For 10,000 IOPS, spread the load across 50 drives
• Of course that's ~15TB of capacity you may not need (worked out below)
– Short stroking
• Use just outside tracks to cut latency
• Wasting capacity wastes $ and OpEx (power, maint)
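The wide-striping arithmetic from this slide, worked out; the 300GB drive size is an assumption to show the capacity side effect:

```python
# How many 15K RPM drives does a 10,000 IOPS target take, and how much
# capacity comes along for the ride? Drive size is assumed, not from the slide.

IOPS_PER_15K_DRIVE = 200
DRIVE_CAPACITY_GB = 300            # hypothetical 15K SAS drive

def drives_for_iops(target_iops: int) -> int:
    return -(-target_iops // IOPS_PER_15K_DRIVE)   # ceiling division

target = 10_000
drives = drives_for_iops(target)
capacity_tb = drives * DRIVE_CAPACITY_GB / 1000
print(f"{target} IOPS -> {drives} drives -> {capacity_tb:.0f} TB you may not need")
# 10000 IOPS -> 50 drives -> 15 TB you may not need
```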
What Is Flash Memory?
• Solid State, Non-volatile memory
– Stored charge device
– Not as fast as DRAM, but retains data without power
• Reads and writes in pages, but must erase whole 256KB-1MB blocks (see the sketch below)
– Erase takes 2ms or more
– Erase wears out cells
• Writes always slower than reads
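A minimal sketch of the page-write/block-erase asymmetry described above; the page and block sizes are typical values, not from any particular part:

```python
# Naive in-place update of one page inside a full erase block, to show why
# erases make writes expensive. Sizes are typical, not device-specific.

PAGE_KB = 16
PAGES_PER_BLOCK = 64            # 64 x 16 KB = a 1 MB erase block

def rewrite_one_page() -> dict:
    """Cost of updating a single page when the whole block must be erased."""
    return {
        "pages_copied_out": PAGES_PER_BLOCK - 1,       # relocate still-valid data
        "blocks_erased": 1,                            # the slow (~2 ms+) operation
        "pages_written": PAGES_PER_BLOCK,              # old data rewritten + 1 new page
        "kb_written_per_kb_changed": PAGES_PER_BLOCK,  # write amplification
    }

print(rewrite_one_page())
# Real controllers dodge most of this with remapping and wear leveling,
# which is exactly the flash translation layer's job.
```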
The Three, and a half, Types of Flash
• Single Level Cell (SLC) (1bit/cell)
– Fastest
– 100,000 program/erase cycle lifetime
• Multi Level Cell (MLC) (2 bits/cell)
– Slower
– 10,000 program/erase cycle lifetime
• eMLC or HET MLC (2 bits/cell)
– Slightly slower writes
– 30,000 cycles
• Triple Level Cell (TLC) (3 bits/cell)
– Not ready for data center use
– Phones, tablets, maybe laptops
Flash’s Future
• Today's state-of-the-art flash uses 1x nm cells (17-19nm)
• Most shipping SSDs still 24nm
• Smaller cells are denser, cheaper, crappier
• Samsung now shipping 3D NAND
– Because they couldn't get high-k to work
• Other foundries have 1-2 more shrinks
• Other technologies post 2020
– PCM, Memristors, Spin Torque, Etc.
Anatomy of an SSD
• Flash Controller
– Provides external interface
• SATA
• SAS
• PCIe
– Wear leveling
– Error correction
• DRAM
– Write buffer
– Metadata
• Ultra or other capacitor
– Power failure DRAM dump
– Enterprise SSDs only
Flash/SSD Form Factors
• SATA 2.5”
– The standard for laptops, good for servers
• SAS 2.5”
– Dual ports for dual controller arrays
• PCIe
– Lower latency, higher bandwidth
– Blades require special form factors
• SATA Express
– 2.5” PCIe frequently with NVMe
SSDs use Flash but Flash≠SSD
• Fusion-IO cards
– Atomic Writes
• Send multiple writes as one atomic operation (e.g. the parts of a database transaction)
– Key-Value Store
• NVMe
– PCIe but with more, deeper queues
• Memory Channel Flash (SanDisk UltraDIMM)
– Block storage or direct memory
– Write latency as low as 3µsec
– Requires BIOS support
– Pricey
Selecting SSDs
• Trust your OEM’s qualification
– They really do test
• Most applications won’t need 100K IOPS
• Endurance ≠ reliability
– SSDs more reliable than HDDs
• 2 million hr MTBF
• 1 error per 10^17 bits read vs 10^15 for nearline HDD
– Wear out is predictable
– Consider treating SSDs as consumables
– However, don't use a read-optimized drive in a write-heavy environment
SanDisk’s Enterprise SATA SSDs
Name     Sizes (GB)          IOPS (r/w)  Endurance     Application
Eco      240, 480, 960       80K/15K     1 DWPD, 3yr   Read intensive
Ascend   240, 480, 960       75K/14K     1 DWPD, 3yr   Read intensive
Ultra    200, 400, 800       75K/25K     3 DWPD, 5yr   General purpose
Extreme  100, 200, 400, 800  75K/25K     10 DWPD, 5yr  Write intensive

DWPD = Drive Writes Per Day (endurance math sketched below)
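Translating the endurance column into total bytes written (DWPD × capacity × warranty period); the figures mirror the table above:

```python
# Endurance in terabytes written = drive writes per day x capacity x days.

def endurance_tb_written(capacity_gb: int, dwpd: float, years: int) -> float:
    return capacity_gb * dwpd * 365 * years / 1000   # GB -> TB

print(endurance_tb_written(960, 1, 3))    # Eco/Ascend 960GB: ~1,051 TB
print(endurance_tb_written(800, 3, 5))    # Ultra 800GB:      ~4,380 TB
print(endurance_tb_written(800, 10, 5))   # Extreme 800GB:   ~14,600 TB
```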
Flash for Acceleration
• There are 31 flavors of flash usage
• What’s best for you depends on your:
– Application mix
– IOPS demand
– Tolerance of variable performance
– Pocketbook
– Organizational politics
Basic Deployment Models
• SSDs in server as disk
• All solid state array
• Hybrid arrays
– Sub LUN tiering
– Caching
• Server side caching
• Others
Flash in the Server
• Minimizes latency and maximizes bandwidth
– No SAN latency/congestion
– Dedicated controller
• But servers are unreliable
– Data on server SSD is captive
– Good where applications are resilient:
• Web 2.0
• SQL Server Always On
• Software cross-server mirroring
All Flash Array Vendors Want You to
Think of This
But Some Are This
Or Worse This
What You Really Want
Rackmount SSDs
• Our drag racers
– They go fast but that’s all they do
• The first generation of solid state
• Not arrays because:
– Single controller
– Limited to no data services
• Examples:
– IBM's Texas Memory
– Astute Networks
– Historically Violin Memory, though that's changing
The Hot Rods
• Legacy architectures with SSD replacing HDD
– NetApp EF550
– EMC VNX
– EqualLogic PS6110S
– Many 2nd- and 3rd-tier vendors' AFAs
• Limited performance
– 50,000-300,000 IOPS
• Full set of data management features
• Wrong architecture/data layout for flash
All Solid State Arrays
• Minimum dual controllers w/failover
• Even better: scale-out
• Higher performance (1 megaIOP or better)
• Better scalability (100s of TB)
• Most have partial data management features
– Snapshots, replication, thin provisioning, REST, Etc.
• May include data deduplication, compression
– Lower cost w/minimal impact on performance
Legacy Vendors' All-Flash Arrays
• 3PAR's and Compellent's data layouts are better suited to flash
– Easier tiering, less write amplification
• Dell - Compellent
– Mixed flash
• SLC write cache/buffer, MLC main storage
– Traditional dual controller
• HP 3PAR StoreServ 7450
– 220TB (Raw)
– 2-4 controllers
Pure Storage
• Dual Controllers w/SAS shelves
• SLC write cache in shelf
• 2.75-35TB raw capacity
• Always-on compression and dedupe
• FC, iSCSI or FCoE
• Snapshots now, replication soon
• Good support, upgrade policies
• Graduated from startup to upstart
• Promising scale-out
SolidFire
• Scale out architecture
– 5-node starter: 174TB (raw), 375K IOPS
– Scales to 100 nodes
• Always-on dedupe and compression
• Content-addressed SSDs
• Leading storage QoS
• Moving from cloud providers to the enterprise
• iSCSI, FC via bridge nodes
Cisco/Whiptail
• Cisco bought AFA startup Whiptail, put with UCS
• Storage router based scale out
• Up to 24TB raw per node, up to 10 nodes
• FC, iSCSI
• Dedupe, compression
[Diagram: workloads connect through a pair of storage routers to multiple storage appliances built on HP ProLiant DL380 G6 servers]
EMC XtremIO
• Scale-out Fibre Channel
• X-Brick is 2 x86 servers w/SSDs
• Scales to 8 X-Bricks
• InfiniBand RDMA interconnect
• Shared memory requires UPS
• Full time dedupe, CAS
• 10-80TB raw
All Flash Scaling
Hybrid Arrays
• Combine flash and spinning disk in one system
– Usually 7200RPM
• Legacy designs with SSDs added
• Next-Gen Hybrids
– Tegile
– Nimble
– Fusion-io ioControl
– Tintri
• High performance
– 20,000 IOPS or more from 3-4U
– 10% flash usually provides a 2-4x performance boost
• May include deduplication, compression, and virtualization features
Sub-LUN Tiering
• Moves “hot” data from slow to fast storage
• Only 1 copy of data
• Must collect access frequency metadata
• Usually on legacy arrays
• Ask about granularity and frequency
– Up to 1GB slices, moved once a day
• Can give unpredictable performance (a simplified heat-tracking sketch follows)
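A toy sketch of the heat tracking a tiering engine keeps per slice — counters that decay over time, with the hottest slices promoted on a schedule; the decay factor and promotion budget here are invented:

```python
# Simplified sub-LUN tiering heat map: each slice accumulates an I/O count
# that decays each period, and the hottest slices get promoted to flash.
# Decay factor and promotion budget are made-up illustration values.

from collections import defaultdict

DECAY_PER_PERIOD = 0.5            # older I/O counts matter less and less
SLICES_PROMOTED_PER_PERIOD = 2    # how much data the array will move at once

heat = defaultdict(float)         # slice id -> decayed I/O count

def record_io(slice_id: int, ios: int = 1) -> None:
    heat[slice_id] += ios

def end_of_period() -> list:
    """Pick slices to promote, then age every counter."""
    hottest = sorted(heat, key=heat.get, reverse=True)[:SLICES_PROMOTED_PER_PERIOD]
    for s in heat:
        heat[s] *= DECAY_PER_PERIOD
    return hottest

for slice_id, ios in [(1, 500), (2, 20), (3, 900), (4, 5)]:
    record_io(slice_id, ios)
print("promote to flash:", end_of_period())   # -> [3, 1]
```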
Flash Caching
• Data copied to flash on read and/or write
• Real time
• Write around
– Reads cached
• Write-through cache
– All writes to disk and flash synchronously
– Acknowledgment from disk
• Write-back cache
– Write to flash, spool to disk asynchronously (the three policies are sketched below)
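A minimal sketch of the three policies, reduced to where the data lands and when the host gets its acknowledgment ("flash" and "disk" are plain dicts standing in for the cache device and the backing array):

```python
# Write-around, write-through, and write-back, boiled down to where data
# goes and what the host waits for. Dicts stand in for the cache and array.

def write_around(block, data, flash, disk):
    disk[block] = data               # writes bypass the cache entirely
    flash.pop(block, None)           # invalidate any stale cached copy
    return "ack after disk write"

def write_through(block, data, flash, disk):
    flash[block] = data              # cache and disk updated together
    disk[block] = data
    return "ack after disk write"    # safe, but disk latency still shows

def write_back(block, data, flash, disk, dirty):
    flash[block] = data              # acknowledge as soon as flash has it
    dirty.add(block)                 # spooled to disk asynchronously later
    return "ack after flash write"   # fast, but cached data must survive failures

flash, disk, dirty = {}, {}, set()
print(write_back(7, b"txn log", flash, disk, dirty))   # -> ack after flash write
```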
Server Flash Caching Advantages
• Take advantage of lower latency
– Especially w/PCIe flash card/SSD
• Data written to back end array
– So not captive in failure scenario
• Works with any array
– Or DAS for that matter
• Allows focused use of flash
– Put your dollars just where needed
– Match SSD performance to application
– Politics: Server team not storage team solution
Caching Boosts Performance!
[Chart: published TPC-C results comparing a baseline, a PCIe SSD cache, and a low-end SSD cache]
Write Through and Write Back
[Chart: TPC-C IOPS for baseline, write-through, and write-back configurations]
• 100 GB cache
• Dataset grows from 330GB to 450GB over the 3-hour test (rough hit-rate math below)
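A crude lower bound on the hit rate behind that chart, assuming uniformly random access; real TPC-C access is skewed toward hot rows, so the actual hit rate runs higher:

```python
# 100 GB cache in front of a dataset growing from 330 GB to 450 GB.
# Uniform random access is a pessimistic assumption for TPC-C.

CACHE_GB = 100
for dataset_gb in (330, 450):
    hit_rate = CACHE_GB / dataset_gb
    print(f"{dataset_gb} GB dataset -> at least {hit_rate:.0%} of I/O served from flash")
# 330 GB -> ~30%, 450 GB -> ~22%
```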
Server Side Caching Software
• Over 20 products on the market
• Some best for physical servers
– Windows or Linux
• Others for hypervisors
– Live migration/vMotion a problem
• Most provide write-through cache
– No unique data in the server
– Only accelerates reads
• Duplicated, distributed cache provides write back
Live Migration Issues
• Does the cache allow migration through the standard workflow?
– To allow automation like DRS
• Is the cache cold after migration?
• Cache coherency issues
• Guest cache
– Cache LUN locks the VM to the server
• Can automate, but breaks the workflow
• Hypervisor cache
– Must prepare and warm the cache at the destination
Distributed Cache
• Duplicate cached writes across n servers
• Eliminates imprisoned data
• Allows cache for servers w/o SSD
• RDMA based solutions
– PernixData
– Dell Fluid Cache
• QLogic caching HBA acts as target & initiator
Virtual Storage Appliances
• Storage array software in a VM
• iSCSI or NFS back to host(s)
• Caching in software or RAID controller
• Players:
– VMware
– StorMagic
– HP LeftHand
– Nexenta
Hyperconverged Infrastructure
• Use server CPU and drive slots for storage
• Software pools SSD & HDD across multiple servers
• Data protection via n-way replication
• Can be sold as hardware or software
– Software defined/driven
Hyper-converged Systems
• Nutanix
– Derived from Google File System
– 4 nodes/block
– Multi-hypervisor
– Storage for cluster only
• SimpliVity
– Dedupe and backup to the cloud
– Storage available to other servers
– 2U servers
• Both have compute-heavy and storage-heavy models
Software Defined Storage
• VMware’s VSAN
– Scales from 4-32 nodes
– 1 SSD, 1 HDD required per node
• Maxta Storage Platform
– Data optimization (compress, dedupe)
– Metadata based snapshots
• EMC ScaleIO
– Scales to 100s of nodes
– Hypervisor agnostic
All Flash Array?
• If you need:
– More than 75,000 IOPS
– For one or more high ROI applications
• Expect to pay $4-7/GB (cost math sketched below)
• Even with dedupe
• Think about data services
– Snapshots, replication, Etc.
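Ballpark cost math for that $/GB range; the 20TB usable figure is a made-up example, not from the presentation:

```python
# All-flash array budget check: usable capacity x effective $/GB.

USABLE_TB = 20                       # hypothetical requirement
for dollars_per_gb in (4, 7):
    cost = USABLE_TB * 1000 * dollars_per_gb
    print(f"${dollars_per_gb}/GB -> ${cost:,} for {USABLE_TB} TB usable")
# $4/GB -> $80,000    $7/GB -> $140,000
```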
Hybrids
• Hybrids fit most users
– High-performance I/O goes to flash
– Low-performance I/O stays on disk
– All automatic
• Look for flash-first architectures
– Usually but not always from newer vendors
• Ask about granularity and frequency for tiering
• Again data services
– Snaps on HDD
– Per-VM services
I'll give up Fibre Channel when you pry it from my cold, dead hands
Server Side Caching
• Decouples performance from capacity
• Strategic use
– PernixData write-back cache w/a low-cost array
• Tactical solution
– Offload an existing array
– Boost performance with minimal OpEx
Questions and Contact
• Contact info:
– Hmarks@deepstorage.net
– @DeepStoragenet on Twitter
Editor's Notes
• #19: Lamborghini Gallardo: 0-60 in 3.8s, top speed 202 mph
• #22: M5: 0-60 in 3.7s, top speed 205 mph
• #33: Like driving by looking in the rear-view mirror. EMC FAST VP algorithm: each I/O to a slice adds to a counter. I/Os age out, so after 24 hours an I/O is worth 0.5 and after 7 days almost 0. Once an hour data is analyzed and slices are sorted by "heat". Data is moved during allowed movement windows (no more frequently than once/hr), leaving 10% of the fastest pool free for future promotions and new allocations to high-priority LUNs in the pool. The schedule sets a start time (e.g. 22:00 all 7 days) and duration; the UI shows estimated migration time. The user can select a rate of high, med, or low.
• #36: SQL Server/FlashSoft http://www.sandisk.com/assets/docs/SQL_Server_Performance_Enhancement.pdf
• #37: Note: FlashSoft data w/Virident SSD; baseline was 15 15K RPM SAS disks in RAID 0