Architecting a 35 PB distributed
parallel file system for science
(formerly) Storage and I/O Software Engineer at NERSC, Berkeley Lab, US
(currently) HPC DevOps Engineer at Seqera Labs, Barcelona
Speck&Tech #53
Trento - May 29, 2023
Alberto Chiusole
How I ended up working on Supercomputers

- (2014-2017) BSc in Information and Business Organization Eng. - U. of Trento
- 5-month exchange at the Technical University of Denmark, Copenhagen
- (2017-2019) MSc in Data Science and Scientific Computing - U. of Trieste
- (2017-2020) HPC Sysadmin and Scientific software developer - eXact Lab, Trieste
- 3 months at CERN, Geneva, to work on Master’s thesis
- Comparison between CephFS at CERN and Lustre FS at eXact lab
- Presented at ISC High Performance in Frankfurt, July 2019
- (2020-2022) Storage and I/O Software Engineer - NERSC, Berkeley Lab, Cal., US
- Worked on Perlmutter and its Lustre FS, the first all-flash 35 PB parallel FS
- (2023 - now) HPC DevOps Engineer - Seqera Labs (remote)
https://www.linkedin.com/in/albertochiusole/
https://bit.ly/Alberto-Chiusole-Scholar
Let’s step back: why would anyone need such a FS?

High Performance Computing (HPC) empowers breakthroughs
- Supercomputers run parallel applications to solve complex problems
- Applications come from all kinds of sciences
- astrophysics, nuclear physics, molecular design, computational fluid dynamics, nuclear warhead status simulation, climate and weather forecasts, COVID vaccines (!), to name a few
- Different from grid computing (nodes in HPC are more tightly coupled)
- At massive scale several complex problems appear
- Extremely expensive setups
- Certain labs are a matter of national security (think Men in Black)
So… how do we get there? The hardware

HPC is a combination of advanced hardware and specialized software
A Namesake for Remarkable Contributions
Perlmutter is the newest supercomputer at NERSC (Berkeley
Lab, California, US)
Named after Saul Perlmutter, who won the 2011 Nobel Prize in
Physics for the 1998 discovery that the expansion of the
universe is accelerating.
He confirmed his observations by running thousands of
simulations at NERSC, and his research team is believed to
have been the first to use supercomputers to analyze and
validate observational data in cosmology.
The hardware (Perlmutter)
- Hardware is made of several racks of “blades”
- CPU, GPU and now FPGA-enhanced nodes
- Fast network interconnection
- On PM: Cray (HPE) Slingshot 11
- Single-digit µs latency (~1-2 µs, <10 µs under heavy load)
- Optimized for HPC: offload into silicon
- Mix of Ethernet and InfiniBand protocols over fiber
- InfiniBand cheaper for same performance
- Liquid cooled units (note the colored pipes)
- Requires maintenance (& downtime) to change liquid
- Fast and large file systems
- Different tiers, for different time-scales
Special tiles!
The software landscape
- A Linux-only world (mainly Red Hat, some SUSE, a few Ubuntu, some custom)
- https://top500.org/statistics/list/
- Parallel programming
- OpenMP for intra-node communication, Message Passing Interface (MPI) for inter-node communication
- Fortran kingdom!
- And C; rarely C++. Python is gaining traction for data analysis and ML/AI steps
- Job schedulers to allocate resources to users
- Slurm (most popular), PBS, Torque, LSF, Moab, Grid Engine, etc
- User requests a certain “portion” of the cluster for their jobs
- Jobs are placed in a queue and wait for enough resources to start
- The scheduler prepares the environment, collects logs, and wraps up when jobs are done
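To make the hybrid model above concrete, here is a minimal MPI + OpenMP sketch in C. It is illustrative only: the file name and launch parameters below are made up, but the calls are standard MPI and OpenMP.

```c
/* Minimal hybrid MPI + OpenMP sketch: MPI ranks span nodes,
 * OpenMP threads span the cores within a node. */
#include <mpi.h>
#include <omp.h>
#include <stdio.h>

int main(int argc, char **argv) {
    MPI_Init(&argc, &argv);
    int rank, size;
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    MPI_Comm_size(MPI_COMM_WORLD, &size);

    /* Each rank spawns a team of threads on its own node. */
    #pragma omp parallel
    {
        printf("rank %d/%d, thread %d/%d\n",
               rank, size, omp_get_thread_num(), omp_get_num_threads());
    }

    MPI_Finalize();
    return 0;
}
```

Compiled with `mpicc -fopenmp hello.c -o hello` and launched with something like `srun -N 2 --ntasks-per-node=1 --cpus-per-task=8 ./hello` on a Slurm cluster, each MPI rank covers one node and OpenMP threads cover its cores.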
Some of the challenges

- 💿 Storage usage, I/O and data transfer
- Write the least possible to disk; write smartly (more on this soon); avoid I/O bottlenecks
- 📐 Data locality
- Keep data as much as possible inside the node/rack
- ⚡ Power usage
- Servers use a lot of energy
- Perlmutter (US): 2.5 MW at full power – Fugaku (JP): 29.9 MW
- ~830 households at max contracted power (2.5 MW ÷ 3 kW in Italy)
- 🥶 Cooling
- Location of data center is important
- Berkeley Lab benefits from the always-cool temperature of the Bay Area (~19 °C max year-round)
- Water is needed: can’t place DCs in deserts
What is I/O?
- Input/Output: everything that works with data and its storage
- At large scale you need multiple disks/drives and servers to store data
- Synchronization and consistency issues
- Two processes writing to a single file (strong or eventual consistency?)
- A process reading a file just written by another process (cache invalidation)
- A process writing to/reading from a file on a disk that crashed (fault tolerance)
- Duplicating files to increase aggregate read bandwidth
- Data locality: a temporary file may be written to a local FS rather than parallel FS
- Optimizing I/O is crucial
- CPUs work on the order of ns (10⁻⁹ s); network/NVMe work at best on the order of µs (10⁻⁶ s)
- Reducing the network/I/O phase considerably improves overall compute walltime
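As a tiny illustration of the shared-file problem above, the classic safe pattern is to give each writer its own disjoint byte range, so no locking or ordering between processes is needed. A minimal POSIX sketch in C; the file path, block size, and the way the writer id is passed in are all invented for illustration:

```c
/* Each of N writers writes BLOCK bytes at its own disjoint offset
 * of one shared file using pwrite(), so writes never overlap.
 * Hypothetical: in practice the writer id would come from MPI. */
#include <fcntl.h>
#include <stdio.h>
#include <stdlib.h>
#include <string.h>
#include <unistd.h>

#define BLOCK (1 << 20) /* 1 MiB per writer */

int main(int argc, char **argv) {
    int rank = (argc > 1) ? atoi(argv[1]) : 0; /* writer id */
    char *buf = malloc(BLOCK);
    memset(buf, 'A' + (rank % 26), BLOCK);

    int fd = open("/tmp/shared.dat", O_WRONLY | O_CREAT, 0644);
    if (fd < 0) { perror("open"); return 1; }

    /* Disjoint offset: no two writers touch the same byte range. */
    off_t offset = (off_t)rank * BLOCK;
    if (pwrite(fd, buf, BLOCK, offset) != BLOCK) { perror("pwrite"); return 1; }

    close(fd);
    free(buf);
    return 0;
}
```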
Different file system scopes
The slower the drive, the higher the capacity
- Memory/NVMe drives are blazingly fast,
but they are expensive
- Scratch file systems should only be used for temporary storage (they are purged often)
- Data used in the same month should be moved to HDD
- Archive data should be moved to tape (it’s like VHS!)
- Movement of data may be enforced or automatic (like
S3 → Glacier)
Perlmutter scratch file system

- PM ships with the first all-flash file system in HPC
- 3,480 Samsung PM1733 PCIe NVMe drives (15.36 TB each)
- 3.5 GB/s seq. read, 3.2 GB/s seq. write, per specifications
- 35 PB of usable POSIX storage (as in 'df -h')
- Directly integrated into the Slingshot compute network
- No need for LNet routers
- Enough to back up The Lord of the Rings trilogy 2.7M times
- Or 152k times for the extended cut in 4K Ultra HD
Parallel and distributed FS: Lustre

Metadata servers (MDS)
- Store the directory structure, file names, locations of objects on the OSSs, etc.
- Decide the file layout on the OSSs (striping, etc.)
- "Metadata" I/O, not bandwidth I/O
Object storage servers (OSS)
- Store chunks of file data as binary objects
- Files are written in 1 MiB stripes across OSSs (like RAID-0)
On PM: 16 MDS, 274 OSS
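Striping is under user control in Lustre. Below is a hedged sketch using the `llapi_file_create()` call from the Lustre user-space library; the path and stripe parameters are invented for illustration, and the program must be linked with `-llustreapi` on a Lustre client.

```c
/* Create a file striped across 8 OSTs with a 1 MiB stripe size,
 * via the Lustre user-space API. Sketch only: the path and the
 * stripe values are hypothetical. */
#include <lustre/lustreapi.h>
#include <stdio.h>

int main(void) {
    const char *path = "/pscratch/demo/striped.dat"; /* hypothetical path */
    unsigned long long stripe_size = 1 << 20;        /* 1 MiB stripes */
    int stripe_offset  = -1;                         /* let the MDS pick the first OST */
    int stripe_count   = 8;                          /* spread over 8 OSTs */
    int stripe_pattern = 0;                          /* default RAID-0 layout */

    int rc = llapi_file_create(path, stripe_size, stripe_offset,
                               stripe_count, stripe_pattern);
    if (rc) {
        fprintf(stderr, "llapi_file_create failed: %d\n", rc);
        return 1;
    }
    return 0;
}
```

From the shell, `lfs setstripe -c 8 -S 1M <dir>` sets the same layout as a directory default.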
Inside ClusterStor E1000
MDS/OSS unit in the rack: twin servers
- Single-socket AMD Rome (128x PCIe Gen4 lanes)
- Allows switchless design
- 48 lanes for 24x NVMes, 32 lanes for 2x NICs
- Each server is responsible for 12 NVMe drives, and can take over the other half if needed
- GridRAID (HPE) + ldiskfs to maximize
performance
- OSS = 8+2+1 RAID6 (GridRAID)
- MDS = 11-way RAID10 (mdraid)
Common HPC software used

Several tools are available to ease coding for HPC; they are often intertwined
MPI is the bread and butter for multi-node communication
MPI-IO is its I/O layer, which helps manage files and transfer data
- File preallocation, offset management, etc
HDF5 uses MPI/MPI-IO to perform parallel I/O
NetCDF (since netCDF-4) uses HDF5 as its underlying storage format
IOR: benchmarking tool capable of generating synthetic I/O like HPC applications
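To show what MPI-IO buys you over raw POSIX, here is a minimal collective-write sketch in C: every rank writes its 1 MiB block at a rank-derived offset of one shared file, and the MPI-IO layer is free to aggregate and align the requests. The file name and block size are invented.

```c
/* Minimal MPI-IO collective write: N ranks write one shared file.
 * Sketch only; filename and block size are hypothetical. */
#include <mpi.h>
#include <stdlib.h>
#include <string.h>

#define BLOCK (1 << 20) /* 1 MiB per rank */

int main(int argc, char **argv) {
    MPI_Init(&argc, &argv);
    int rank;
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);

    char *buf = malloc(BLOCK);
    memset(buf, 'A' + (rank % 26), BLOCK);

    MPI_File fh;
    MPI_File_open(MPI_COMM_WORLD, "shared.dat",
                  MPI_MODE_CREATE | MPI_MODE_WRONLY, MPI_INFO_NULL, &fh);

    /* Collective write at a disjoint, rank-derived offset: MPI-IO can
     * coordinate and merge the requests behind the scenes. */
    MPI_Offset offset = (MPI_Offset)rank * BLOCK;
    MPI_File_write_at_all(fh, offset, buf, BLOCK, MPI_BYTE, MPI_STATUS_IGNORE);

    MPI_File_close(&fh);
    free(buf);
    MPI_Finalize();
    return 0;
}
```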
Perlmutter: excellent performance end-to-end

[Diagram: read/write bandwidth and IOPS measured at increasing "software distance" from the drives - 41/27, 48/42, 43/31 and 42/38 GB/s (read/write), 1,400/29 and 9,600/1,600 kIOPS (read/write).]

IOR achieves:
- 88.4% (write) / 97.2% (read) of NVMe block bandwidth (8+2 RAID on writes)
- 5.33% (write) / 15.1% (read) of NVMe block IOPS (read-modify-write penalty of RAID6)
Metadata performance of Perlmutter
Using IOR in a "production" run
- 230 clients × 6 procs/client = 1,380 procs
- 1.6 M files/s created
In a "full-scale" run
- 1,382 clients × 2 procs/client = 2,764 procs
- 1.3 M files/s deleted
A much smoother user experience than the previous HDD-based Cori file system
Some surprises found during performance evaluation
SSDs slow down with age
- Like "HDD fragmentation"
- −10% write bandwidth after 5× the OST capacity has been written to it
- An fstrim is enough to fix it
- 5× OST size: 665 TB
- 2.2-2.9 PB of expected daily writes
- 5× writes every ~60-80 days
- The longer you wait, the longer fstrim takes
- Performed nightly to keep performance up
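For context, the nightly fstrim boils down to the kernel's FITRIM ioctl, which tells the drives which blocks the file system no longer uses so they can garbage-collect. A hedged sketch of roughly what `fstrim <mountpoint>` does under the hood; the mount path is invented, and it must run as root on a mounted file system:

```c
/* Roughly what fstrim(8) does: ask the kernel to discard all
 * unused blocks of a mounted file system via the FITRIM ioctl.
 * Sketch only; the mount point is hypothetical. Run as root. */
#include <fcntl.h>
#include <linux/fs.h>
#include <stdint.h>
#include <stdio.h>
#include <sys/ioctl.h>
#include <unistd.h>

int main(void) {
    struct fstrim_range range = {
        .start  = 0,
        .len    = UINT64_MAX, /* trim the whole file system */
        .minlen = 0,
    };

    int fd = open("/mnt/ost0", O_RDONLY); /* hypothetical OST mount */
    if (fd < 0) { perror("open"); return 1; }

    if (ioctl(fd, FITRIM, &range) < 0) { perror("ioctl(FITRIM)"); return 1; }

    /* On return, the kernel updates range.len to the bytes trimmed. */
    printf("trimmed %llu bytes\n", (unsigned long long)range.len);
    close(fd);
    return 0;
}
```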
Thanks! Questions?
By the way, I use arch
PS: Seqera is hiring! seqera.io/careers
This material is based upon work supported by the U.S. Department of Energy, Office of Science, under contract
DE-AC02-05CH11231. This research used resources and data generated from resources of the National Energy Research Scientific
Computing Center, a DOE Office of Science User Facility supported by the Office of Science of the U.S. Department of Energy under
Contract No. DE-AC02-05CH11231.
