iMOD User Day 2019 – DSD-INT 2019
Parallelization project for the USGS
Jarno Verkaik (Deltares, groundwater management department)
SURFsara Cartesius supercomputer
(47,776 cores, 130TB RAM)
Why (distributed memory) parallel computing?
Diagram: a MODFLOW grid solved with serial computing on a single 256 GB RAM machine (about 1 day of computing) versus parallel computing on four 64 GB RAM machines coupled via MPI (about 6 hours of computing). A minimal MPI sketch follows below.
MPI = Message Passing Interface
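The sketch below illustrates the idea behind distributed-memory parallelism: each MPI rank owns one piece of the grid and exchanges only a thin "halo" of ghost cells with its neighbours. This is a minimal, hypothetical example (not MODFLOW code), assuming mpi4py and numpy are available; run with e.g. `mpiexec -n 4 python halo.py`.

```python
import numpy as np
from mpi4py import MPI

comm = MPI.COMM_WORLD
rank, size = comm.Get_rank(), comm.Get_size()

n_global = 1_000_000                    # total cells in a 1-D toy "grid"
n_local = n_global // size              # cells owned by this rank (assume it divides evenly)
h = np.full(n_local + 2, float(rank))   # local heads plus one ghost cell on each side

left = rank - 1 if rank > 0 else MPI.PROC_NULL
right = rank + 1 if rank < size - 1 else MPI.PROC_NULL

# Exchange halo (ghost-cell) values with the neighbouring subdomains.
comm.Sendrecv(sendbuf=h[1:2], dest=left, recvbuf=h[-1:], source=right)
comm.Sendrecv(sendbuf=h[-2:-1], dest=right, recvbuf=h[0:1], source=left)

# One Jacobi-style smoothing step on the owned cells, using the fresh halo values.
h[1:-1] = 0.5 * (h[:-2] + h[2:])
```

Each rank only ever holds its own slice of the grid, which is why four 64 GB machines can replace one 256 GB machine.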
Contents
• Organization
• Project results and plans
• Global scale application
How it started…
• 2010: Email correspondence on parallel MT3DMS
• 2013: Visit to the USGS, start of joint code development (in kind)
• 2015: Start development of Parallel Krylov Solver for
MODFLOW-2005 and MODFLOW-USG
→ poster @ AGU Fall Meeting 2015, San Francisco
• 2016: First application of PKS at national and global scale
→ poster @ AGU Fall Meeting 2016, San Francisco
• Jul.2017: PKS as main feature for iMOD 4.0
& applied as default solver in National Water Model
• Oct.2017: Start parallelization of MODFLOW 6
→ funded by USGS through USGS-Deltares co-op
Organization through (coastal morphology) USGS-Deltares co-op
Robert McCall
Applied Morphodynamics,
Delft
Kees Nederhoff
Deltares USA,
Silver Spring
Martijn Russcher
Numerical Simulation Software,
Delft
Jarno Verkaik
Groundwater management,
Utrecht
Joseph D. Hughes
Integrated Modeling and Prediction,
Reston
Christian D. Langevin
Integrated Modeling and Prediction,
Mounds View
Li Erikson
Pacific Coastal and Marine
Science Center, Santa Cruz
USGS project FY2018 (Oct.2017 – Sep.2018)
• Start parallelization of MODFLOW 6
• Such that it can be part of a future release
• Target application: CONUS model by
Wesley Zell and Ward Sanford (USGS)
• USGS requirements:
- Proof of concept applicable to the CONUS model
- Low code footprint
- Version-controlled code on GitHub
- Easy to use
- No dependence on third-party libraries
USGS project FY2018 (Oct.2017 – Sep.2018)
• A proof of concept was developed that is applicable to the CONUS model
• Parallelization of the IMS linear solver using Schwarz domain decomposition
(similar to the Parallel Krylov Solver in iMOD; a minimal sketch of this preconditioning approach follows below)
• Repository: https://github.com/verkaik/modflow6-parallel.git
→ MODFLOW 6 framework refactoring is required for exchanges between models (subdomains), so that:
- the exchange mechanism is generic for both serial and parallel computing
- numerical schemes can be evaluated more easily at model interfaces
- the XT3D option can be used with multiple models (serial and parallel)
Figure: halo v2 concept.
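A minimal SciPy-based sketch of the preconditioning family used here: additive Schwarz with non-overlapping blocks (i.e. block Jacobi) wrapped around a Krylov method. The test matrix and block layout are illustrative only, not the IMS/PKS code.

```python
import numpy as np
import scipy.sparse as sp
import scipy.sparse.linalg as spla

def laplacian_1d(n):
    """Simple SPD test matrix standing in for a groundwater-flow system."""
    return sp.diags([-1.0, 2.0, -1.0], [-1, 0, 1], shape=(n, n), format="csc")

def schwarz_preconditioner(A, n_blocks):
    """Factor each diagonal block once; apply z = M^-1 r block by block."""
    n = A.shape[0]
    bounds = np.linspace(0, n, n_blocks + 1, dtype=int)
    blocks = list(zip(bounds[:-1], bounds[1:]))
    solvers = [spla.splu(sp.csc_matrix(A[s:e, s:e])) for s, e in blocks]

    def apply(r):
        z = np.empty_like(r)
        for (s, e), lu in zip(blocks, solvers):
            z[s:e] = lu.solve(r[s:e])   # independent subdomain solves
        return z

    return spla.LinearOperator(A.shape, matvec=apply)

A = laplacian_1d(4000)
b = np.ones(A.shape[0])
M = schwarz_preconditioner(A, n_blocks=8)   # 8 "subdomains"
x, info = spla.cg(A, b, M=M)                # preconditioned conjugate gradients
print("converged" if info == 0 else f"cg stopped with info={info}")
```

In the parallel setting each subdomain solve runs on its own MPI rank, so the preconditioner application needs no communication; only the Krylov matrix-vector products and dot products do.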
USGS project FY2019 & FY2020
• FY2019 (Oct.2018 – Sep.2019)
• Support XT3D option with multi-models (serial only)
• Development of interface model concept (revised halo v2)
• FY2020 (Oct.2019 – Sep.2020)
(To be determined)
• Continue working on parallel MODFLOW
• Development of a Basic Model Interface (BMI); a sketch of a BMI control loop follows below
Figure: two coupled models, M1 and M2.
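For context, the sketch below shows what driving a model through a Basic Model Interface looks like. The method names follow the published BMI specification (initialize/update/finalize); the wrapper object itself is hypothetical, since the MODFLOW 6 BMI was still to be developed at this point.

```python
def run_to_completion(bmi, config_file):
    """Drive any BMI-compliant model from start to finish."""
    bmi.initialize(config_file)                       # read input, allocate, set t = t0
    while bmi.get_current_time() < bmi.get_end_time():
        bmi.update()                                  # advance one time step
        # A coupled code (e.g. an unsaturated-zone or surface-water model)
        # could exchange state here via bmi.get_value(...) / bmi.set_value(...).
    bmi.finalize()                                    # write output, deallocate

# Usage (hypothetical wrapper class and input file name):
# run_to_completion(HypotheticalMf6Bmi(), "mfsim.nam")
```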
USGS project FY2018 results: circle test 1250M cells
USGS project FY2018 results: circle test 12.5M cells
Related work to USGS project
• PhD project (started 2018)
“Towards Exascale Computing for Large Scale Groundwater Simulation”
Goal: development of distributed parallel methods for large real-life groundwater models of
O(10^6)–O(10^9) cells.
• Mainly funded by Deltares research
• Directly relates to MODFLOW 6 kernel development for new iMOD 6
(see next presentation by Otto de Keizer)
Prof. Marc Bierkens
(Utrecht University)
Prof. Hai Xiang Lin
(Delft University of Technology)
Gualbert Oude Essink, PhD
(Deltares)
Contributions from PhD project
Short term coding:
• Improve linear solver convergence when using many subdomains:
→ add a coarse-grid parallel preconditioner (implementation largely done; a two-level sketch follows below)
• Option to verify the parallel implementation:
→ add a serial block-Jacobi preconditioner (first implementation done)
• Code profiling & optimizing parallel performance (ongoing)
Longer term coding:
• Robustness option when using many subdomains:
→ add recovery mechanism for failing hardware
• Add physics-based parallel preconditioner
Short term modeling:
• Run USGS CONUS model in parallel @ 250 m
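As a rough illustration of the coarse-grid idea (an illustrative SciPy sketch, not the planned MODFLOW 6 implementation): a coarse problem with one unknown per subdomain is added to block Jacobi, which is what keeps Krylov iteration counts bounded as the number of subdomains grows.

```python
import numpy as np
import scipy.sparse as sp
import scipy.sparse.linalg as spla

n, n_blocks = 4000, 16
A = sp.diags([-1.0, 2.0, -1.0], [-1, 0, 1], shape=(n, n), format="csc")
bounds = np.linspace(0, n, n_blocks + 1, dtype=int)
blocks = list(zip(bounds[:-1], bounds[1:]))

# Fine level: one LU factorization per subdomain block (block Jacobi).
block_lu = [spla.splu(sp.csc_matrix(A[s:e, s:e])) for s, e in blocks]

# Coarse level: one unknown per subdomain, built by Galerkin projection R A R^T.
R = sp.lil_matrix((n_blocks, n))
for k, (s, e) in enumerate(blocks):
    R[k, s:e] = 1.0
R = R.tocsr()
A_coarse = (R @ A @ R.T).toarray()

def apply(r):
    z = np.empty_like(r)
    for (s, e), lu in zip(blocks, block_lu):
        z[s:e] = lu.solve(r[s:e])                 # local subdomain solves
    z += R.T @ np.linalg.solve(A_coarse, R @ r)   # additive coarse-grid correction
    return z

M2 = spla.LinearOperator(A.shape, matvec=apply)
x, info = spla.cg(A, np.ones(n), M=M2)
```

The coarse solve is tiny (one equation per subdomain) but couples all subdomains in every preconditioner application, which is what the purely local block-Jacobi variant lacks.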
Global groundwater model @ 1km and 428M cells
• Development of the PCR-GLOBWB global groundwater model
at 1 km x 1 km resolution, O(10^8) cells
• First experience with parallel MODFLOW 6 at this scale:
• Physics-based subdomain partitioning
• Model generation (pre-processing)
• Parallel computing
• Visualization of model results
→ Big data! Typical raster: 43200 columns x 21600 rows, ~3 GB binary
(a memory-mapped tiling sketch follows the reference below)
Ref: Verkaik, J., Sutanudjaja, E.H., Oude Essink, G.H.P., Lin, H.X., and Bierkens, M.F.P., 2019. Parallel global hydrology and water resources
PCR-GLOBWB-MODFLOW model at hyper-resolution scale (1 km): first results, in: EGU General Assembly Conference Abstracts. p. 13397.
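A minimal pre-processing sketch of how such a raster can be tiled per subdomain without reading the whole multi-GB file into memory; the file name and the float32 dtype are assumptions made for illustration.

```python
import numpy as np

NCOL, NROW = 43200, 21600            # global 1 km raster dimensions
# Memory-map the binary raster: data stay on disk, slices are read on demand.
raster = np.memmap("top_elevation.bin", dtype=np.float32, mode="r",
                   shape=(NROW, NCOL))

def tiles(nrow_tiles, ncol_tiles):
    """Yield (row_slice, col_slice) windows that cover the whole raster."""
    rows = np.linspace(0, NROW, nrow_tiles + 1, dtype=int)
    cols = np.linspace(0, NCOL, ncol_tiles + 1, dtype=int)
    for r0, r1 in zip(rows[:-1], rows[1:]):
        for c0, c1 in zip(cols[:-1], cols[1:]):
            yield slice(r0, r1), slice(c0, c1)

# Each pre-processing task touches only its own window of the raster.
for rs, cs in tiles(8, 16):
    block = np.asarray(raster[rs, cs])     # materialize just this tile in RAM
    # ... derive MODFLOW 6 input for the land cells inside this tile ...
```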
Global groundwater model @ 1km and 428M cells
MODFLOW 6 model characteristics:
• Steady-state, 2 layers, subsurface parameterization downscaled from 10 km
• Unstructured DISU grid with only “land cells”, 428M cells in total
• CHD for the sea, RIV in layer 1 + DRN in layers 1 & 2 (HydroSHEDS)
(a flopy-style setup sketch follows below)
Figure: parallel pre-processing using 128 subdomains.
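A minimal flopy-based sketch of these packages, assuming flopy is installed. It uses a tiny structured DIS grid as a stand-in for the real unstructured 428M-cell DISU grid; grid dimensions, file names and all package values below are made up for illustration.

```python
import flopy

# Build the simulation and a single GWF model.
sim = flopy.mf6.MFSimulation(sim_name="toy", sim_ws="toy_ws")
flopy.mf6.ModflowTdis(sim)              # defaults: one steady-state stress period
flopy.mf6.ModflowIms(sim)               # the IMS linear solver
gwf = flopy.mf6.ModflowGwf(sim, modelname="toy")

# 2 layers on a tiny structured grid (the real model uses an unstructured DISU grid).
flopy.mf6.ModflowGwfdis(gwf, nlay=2, nrow=10, ncol=10,
                        delr=1000.0, delc=1000.0,
                        top=0.0, botm=[-50.0, -100.0])
flopy.mf6.ModflowGwfic(gwf, strt=0.0)
flopy.mf6.ModflowGwfnpf(gwf, k=10.0)

# Boundary packages as on the slide: CHD for the sea, RIV in layer 1,
# DRN in layers 1 and 2 (elevations and conductances are invented).
flopy.mf6.ModflowGwfchd(gwf, stress_period_data=[[(0, 0, 0), 0.0]])
flopy.mf6.ModflowGwfriv(gwf, stress_period_data=[[(0, 5, 5), 1.0, 100.0, 0.5]])
flopy.mf6.ModflowGwfdrn(gwf, stress_period_data=[[(0, 5, 6), 0.5, 50.0],
                                                 [(1, 5, 6), -10.0, 50.0]])
flopy.mf6.ModflowGwfoc(gwf, head_filerecord="toy.hds",
                       saverecord=[("HEAD", "ALL")])

sim.write_simulation()    # writes the MODFLOW 6 input files
# sim.run_simulation()    # requires the mf6 executable on the PATH
```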
Global groundwater model @ 1km and 428M cells
Can we define subdomain boundaries in advance (e.g. hydrological or administrative
boundaries) such that they are useful both for the modeler and for parallel computing?
→ How to partition the world into 1024 subdomains using 1.8M catchments?
→ How to solve this optimization problem (balancing load and edge cuts) with an acceptable, sub-optimal solution? (A simplified load-balancing sketch follows below.)
1. Determine independent regions for groundwater
flow (continents, islands)
→ ~20k regions
2. Further divide large regions/catchments using
a lumped graph method → define parallel models
3. Cluster small regions → define serial models
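As an illustration of the load-balance part of this problem only (the actual lumped-graph method also minimizes edge cuts between catchments): a simple greedy heuristic that assigns catchments to subdomains. Catchment ids and sizes are hypothetical.

```python
import heapq

def partition_by_load(catchment_cells, n_parts):
    """Greedy LPT heuristic: catchment_cells is a dict {catchment_id: cell_count}."""
    heap = [(0, p) for p in range(n_parts)]   # (current load, part index)
    heapq.heapify(heap)
    assignment = {}
    # Assign the largest catchments first, always to the least-loaded part.
    for cid, ncells in sorted(catchment_cells.items(), key=lambda kv: -kv[1]):
        load, part = heapq.heappop(heap)
        assignment[cid] = part
        heapq.heappush(heap, (load + ncells, part))
    return assignment

# Toy usage: 10 catchments of varying size distributed over 4 subdomains.
sizes = {f"catchment_{i}": 10_000 * (i + 1) for i in range(10)}
parts = partition_by_load(sizes, n_parts=4)
```

Adding the edge-cut term turns this into a graph-partitioning problem, which is why the slide describes it as a sub-optimal optimization over load and edge cuts.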
Global groundwater model @ 1km and 428M cells
• Partitioning results in 52 separate MODFLOW 6 models:
• 38 serial (small islands)
• 13 parallel, the 3 largest run on the supercomputer
Breakdown of the 428M cells (2 layers); percentages are shares of the total, with the remainder in the small parallel + serial models:
1. Africa+EurAsia: 256M cells (60%), 612 cores, 3 min 31 sec, 390 GB memory
2. America: 120M cells (28%), 286 cores, 1 min 36 sec, 112 GB memory
3. Australia: 20M cells (5%), 48 cores, 33 sec, 13 GB memory
Global groundwater model @ 1km and 428M cells
Figure: simulated groundwater table, with subdomain boundaries overlaid (1024 subdomains in total).
Global groundwater model @ 1km and 428M cells
Take-home message:
USGS and Deltares are making progress on the MPI parallelization of the MODFLOW 6 multi-model capability, reducing computing times and memory usage.
THANK YOU!