Fragging Rights
A Tale of a Pathological Storage Workload
Eric Sproul, Circonus
ZFS User Conference
March 2017
Eric Sproul
@eirescot / esproul
Circonus Release Engineer
Wearer of other hats as well (ops, sales eng., support)
ZFS user since Solaris 10u2 (>10 years ago now??)
Helped bring OmniOS to the world @ OmniTI
A Tale of Surprise and Pain
We’ll talk about free space, COW, and the hole we dug for ourselves:
• How ZFS manages free space
• Our workload
• Problem manifestation
• Lessons learned and future direction
ZFS handled it well… until it didn’t.
Free Space in ZFS
ZFS tracks free space with space maps:
a time-ordered log of allocations and frees.
Top-level vdevs are divided into a few hundred metaslabs,
each with its own space map.
At metaslab load, the map is read into an AVL tree in memory.
For details, see Jeff Bonwick’s excellent explanation of space maps:
http://web.archive.org/web/20080616104156/http://blogs.sun.com/bonwick/entry/space_maps
Copy-on-Write
Never overwrite existing data.
“Updating a file” means new bits
written to previously free space,
followed by freeing of the old chunk.
The Workload
“It seemed like a good idea at the time…”
Scalable time-series data store
Compact on-disk format while not sacrificing granularity
Enable rich analysis of historical telemetry data
Store value plus 6 other pieces of info:
• variance of value (stddev)
• derivative (change over time) and stddev of derivative
• counter (non-negative derivative) and stddev of counter
• count of samples that were averaged to produce value
The Workload
File format is columnar and offset-based, with 32-byte records.
Common case is appending to the end (most recent measurement/rollup).
No synchronous semantics used for updating data files
(this is a cluster and we have write-ahead logs).
5x 1m records = 32 * 5 = 160 bytes append
1x 5m rollup = 32 bytes append
1x 3h rollup = 32 bytes append, then update w/ recalculated rollup
The Workload
With copy-on-write, every Circonus record append or overwrite modifies a ZFS
block, freeing the old copy of that block.
ZFS has variable block sizes: the minimum is a single disk sector (512b/4K),
the max is the recordsize property (we used 8K).
When the tail block reaches 8K, it stops getting updated
and a new tail block starts.
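A minimal DTrace sketch of how this pattern shows up at the filesystem boundary, counting pwrite(2) calls per file and 8K-aligned offset (the process name is a hypothetical stand-in; fds[] is the illumos DTrace file-descriptor translator):

syscall::pwrite:entry
/execname == "metricsd"/        /* hypothetical app name */
{
        /* arg0 = fd, arg3 = file offset; round the offset down to its 8K block */
        @blocks[fds[arg0].fi_pathname, arg3 - (arg3 % 8192)] = count();
}

A hot tail block stands out as a single (file, offset) pair whose count keeps climbing while neighboring offsets are written only once.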
The Workload
ZFS sees the tail block updated every 5 minutes for:
1m files: 8192/160 = 51 times (~4 hours)
5m files: 8192/32 = 256 times (~21 hours)
3h files: tragic
• last record written, then rewritten 35 more times
  (as the 3h rollup value gets recalculated)
• 256 records to fill 8K, so ZFS sees 256*36 = 9216 block updates (32 days)
Alas, we also use compression, so it’s actually ~2x worse than this.
The Problem
After “some time”, depending on ingestion volume and pool/vdev size,
performance degrades swiftly.
TXGs take longer and longer to process, stalling the ZIO pipeline.
ZIO latency bubbles up to userland; the application sees
increasing write syscall latency, lowering throughput.
Customers are sad. 😔
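To watch the TXG stall directly, a minimal sketch (my assumption, not the talk's script) times spa_sync(), the ZFS function that writes out each transaction group:

fbt::spa_sync:entry
{
        self->ts = timestamp;
}

fbt::spa_sync:return
/self->ts/
{
        /* per-TXG sync duration, in milliseconds */
        @["spa_sync (ms)"] = quantize((timestamp - self->ts) / 1000000);
        self->ts = 0;
}

The "longer and longer" TXGs described above would show up here as the distribution drifting rightward over time.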
Troubleshooting
Start with what the app sees: DTrace syscalls
(us) syscall: pwrite bytes: 32
value ------------- Distribution ------------- count
4 | 0
8 | 878
16 |@@@@@@@@@@@@ 27051
32 |@@@@@@@@@@@@@@@ 34310
64 |@@@@@@ 13361
128 |@ 2586
256 | 148
512 | 53
1024 | 39
2048 | 33
4096 | 82
8192 | 534
16384 |@@@@ 8614
32768 | 474
65536 | 22
131072 | 0
262144 | 0
524288 | 0
1048576 | 36
2097152 | 335
4194304 | 72
8388608 | 0
[Diagram: I/O stack (App → VFS → ZFS → disks), split userland/kernel, with the syscall layer highlighted; callouts ("bad", "lolwut?!?") flag the slow modes of the distribution above]
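A sketch of the kind of one-off DTrace script that produces a histogram like the one above (the process-name predicate is a hypothetical stand-in):

syscall::pwrite:entry
/execname == "metricsd"/        /* hypothetical app name */
{
        self->ts = timestamp;
        self->bytes = arg2;
}

syscall::pwrite:return
/self->ts/
{
        /* latency in microseconds, keyed by write size in bytes */
        @[self->bytes] = quantize((timestamp - self->ts) / 1000);
        self->ts = 0;
        self->bytes = 0;
}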
Troubleshooting
Slow writes must mean disks are saturated, right? (iostat -x)
extended device statistics
device r/s w/s kr/s kw/s wait actv svc_t %w %b
data 918.3 2638.5 4644.3 32217.0 1362.7 11.8 386.4 21 40
rpool 0.9 4.4 3.5 22.3 0.0 0.0 2.0 0 0
sd0 0.9 4.4 3.5 22.3 0.0 0.0 0.5 0 0
sd6 67.6 175.1 347.9 2299.8 0.0 0.6 2.6 0 13
sd7 64.4 225.5 335.9 2300.7 0.0 0.7 2.3 0 14
sd8 67.3 167.8 314.5 2300.4 0.0 0.6 2.6 0 13
sd9 65.3 173.8 326.7 2299.8 0.0 0.6 2.6 0 13
sd10 66.1 226.6 332.2 2300.7 0.0 0.6 2.2 0 13
sd11 67.2 153.9 338.8 2301.4 0.0 0.4 2.0 0 11
sd12 69.4 154.6 345.5 2301.4 0.0 0.4 2.0 0 11
sd13 64.0 162.0 321.9 2300.8 0.0 0.4 2.0 0 11
sd14 65.5 163.7 328.7 2300.8 0.0 0.4 2.0 0 11
sd15 64.5 221.4 343.9 2303.5 0.0 0.8 2.7 0 15
sd16 61.1 222.6 318.1 2303.5 0.0 0.8 2.8 0 15
sd17 63.5 211.3 338.1 2303.2 0.0 0.7 2.7 0 14
sd18 63.8 213.0 330.6 2303.2 0.0 0.7 2.6 0 14
sd19 68.7 170.0 321.9 2300.4 0.0 0.6 2.5 0 13
[Diagram: the same I/O stack (App → VFS → ZFS → disks) with the disk layer highlighted]
Per-disk service times are ~2-3 ms at ≤15% busy: the disks themselves are fine.
The long wait queue and 386 ms svc_t sit on the pool device "data", above the disks.
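To corroborate iostat with an independent measurement, a minimal sketch using DTrace's io provider (a standard idiom, not from the talk) times each physical I/O:

io:::start
{
        ts[arg0] = timestamp;
}

io:::done
/ts[arg0]/
{
        /* per-I/O latency in microseconds, keyed on the buf pointer */
        @["disk I/O (us)"] = quantize((timestamp - ts[arg0]) / 1000);
        ts[arg0] = 0;
}

Millisecond-scale latencies here would implicate the disks; the iostat numbers above suggest these disks were serving I/O in a healthy ~2-3 ms.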
What Now?
We know the problem is in the write path, but it's not the disks.
TL;DR: ZFS internals are very complicated and difficult to reason about, especially in
legacy ZFS code (OmniOS r151006, circa 2013).
We flailed around for a while, using DTrace to generate kernel flame graphs
to get a sense of what the kernel was up to.
Kernel Stack Profiling
Flame graphs!
[Flame graph image: system mostly idle (not shown); when it isn't, the time is in ZFS]
https://github.com/brendangregg/FlameGraph
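The usual recipe, sketched from Brendan Gregg's published method (not necessarily the exact invocation used here), samples on-CPU kernel stacks with the profile provider:

profile-997
/arg0/
{
        /* arg0 != 0 means the sample landed in kernel code; count its stack */
        @[stack()] = count();
}

Run it for a minute, then fold the output with stackcollapse.pl and render with flamegraph.pl from the repository above.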
Slab Allocation
metaslab_alloc
time (nsec)
value ------------- Distribution ------------- count
1024 | 0
2048 | 1166
4096 |@@@@@ 41714
8192 |@@@@@@@@@@@@@@@@ 134407
16384 |@@@@@@@@@@@@@ 107140
32768 |@@ 17603
65536 |@ 10066
131072 |@ 10315
262144 |@ 8144
524288 | 3715
1048576 | 1598
2097152 | 581
4194304 | 75
8388608 | 49
16777216 | 0
33554432 | 0
67108864 | 0
134217728 | 0
268435456 | 0
536870912 | 0
1073741824 | 1276
2147483648 | 0
>1s to find free space?!? (1,276 calls landed in the ~1 s bucket)
[Diagram: I/O stack with the ZFS layer, specifically metaslab_alloc, highlighted]
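A minimal fbt sketch (assumed; it matches the output format above) for timing metaslab_alloc() in the kernel:

fbt::metaslab_alloc:entry
{
        self->ts = timestamp;
}

fbt::metaslab_alloc:return
/self->ts/
{
        @["time (nsec)"] = quantize(timestamp - self->ts);
        self->ts = 0;
}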
Troubleshooting: Recap
What do we know now? Starting with application-perceived latency:
• pwrite(2) syscall is slow.
• Slow because ZFS takes so long to allocate new space for writes.
• We don't know precisely why these allocations are bogged down,
  but it likely involves the nature of the pool's free space.
Quantity vs. Quality
Aggregate free space isn't everything.
Needed to visualize existing metaslab allocations:
# zdb -m data
Metaslabs:
vdev 0
metaslabs 116 offset spacemap free
--------------- ------------------- --------------- -------------
metaslab 0 offset 0 spacemap 38 free 2.35G
metaslab 1 offset 100000000 spacemap 153 free 2.19G
metaslab 2 offset 200000000 spacemap 156 free 1.70G
metaslab 3 offset 300000000 spacemap 158 free 639M
metaslab 4 offset 400000000 spacemap 160 free 1.11G
Visualizing Metaslabs
[Metaslab map image: one cell per metaslab, colored by fullness (green = empty, red = full); the percentage shown is remaining free space]
Credit for the idea: https://www.delphix.com/blog/delphix-engineering/zfs-write-performance-impact-fragmentation
Visualizing Metaslabs
[Metaslab map image, after bulk data rewrite]
After data rewrite, allocations are tighter.
Performance problem is gone.
Did we just create "ZFS Defrag"?
Spacemap Detail
From one slab, on one vdev, using `zdb -mmm`:
Metaslabs:
vdev 0
metaslabs 116 offset spacemap free
--------------- ------------------- --------------- -------------
metaslab 39 offset 9c00000000 spacemap 357 free 13.2G
segments 1310739 maxsize 168K freepct 82%
[ 0] ALLOC: txg 7900863, pass 1
[ 1] A range: 9c00000000-9c00938800 size: 938800
[ 2] A range: 9c00939000-9c0093d200 size: 004200
...
[4041229] FREE: txg 7974611, pass 1
[4041230] F range: 9d79d10600-9d79d10800 size: 000200
[4041231] F range: 9e84efe400-9e84efe600 size: 000200
[4041232] FREE: txg 7974612, pass 1
[4041233] F range: 9e72ba4600-9e72ba5400 size: 000e00
90th %ile freed size: ~9K
31% of frees are 512b-1K
Note maxsize 168K: 13.2G free in this metaslab, yet the largest contiguous segment is only 168K.
>4M records in one spacemap…
Costly to load, only to discover you can't allocate from it!
Probably contributes to those long metaslab_alloc() times.
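To put a number on "costly to load", a sketch (assuming a ZFS vintage that has metaslab_load(); in older code the equivalent work happens in space_map_load()):

fbt::metaslab_load:entry
{
        self->ts = timestamp;
}

fbt::metaslab_load:return
/self->ts/
{
        /* time to replay a space map into the in-memory tree, in ms */
        @["metaslab_load (ms)"] = quantize((timestamp - self->ts) / 1000000);
        self->ts = 0;
}

Replaying >4M log records on every load is where that time would go.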
OpenZFS Improvements
We weren't alone! Since our initial foray into this issue, new features have come out:
• Spacemap histograms
• Visible via zdb (-mm) and mdb (::spa -mh)
• Metaslab fragmentation metrics
• Allocator changes to account for fragmentation
• New tuning knobs* for the write throttle
* http://dtrace.org/blogs/ahl/2014/08/31/openzfs-tuning/
Spacemap Histogram
metaslab 2 offset 400000000 spacemap 227 free 7.99G
On-disk histogram: fragmentation 90
9: 632678 ****************************************
10: 198275 *************
11: 342565 **********************
12: 460625 ******************************
13: 213397 **************
14: 82860 ******
15: 9774 *
16: 137 *
17: 1 *
Key is the power-of-2 range size: bucket 9 counts free segments around 2^9 = 512 bytes, bucket 10 around 1K, and so on.
The fragmentation metric is based on this distribution: the more weight in the small buckets, the higher the score (this metaslab scores 90).
What We Learned
Our workload is (sometimes) pathologically bad at scale:
• Performance is fine until some percentage of metaslabs are "spoiled",
  even though overall pool used space is low.
• Once in this state, the only solution is a bulk data rewrite.
• Happens sooner if you have fewer/smaller slabs.
• Happens sooner if you increase the ingestion rate.
"Doctor, it hurts when I do <this>…"
"Then don't do that."
What We're Doing About It
Follow doctor's orders: avoid those single-column updates.
• Use an in-memory DB to accumulate incoming data
• Batch-update columnar files with large, sequential writes
• Eventually replace columnar files with some other DB format
Questions?
Thanks for listening!
Eric Sproul, Circonus
@eirescot
ZFS User Conference
March 2017
