Swift Parallel Scripting for
High-Performance Workflow
April 16, 2015
Michael Wilde wilde@anl.gov
Daniel S. Katz dsk@uchicago.edu
http://swift-lang.org
Domain of Interest
(Figure: increasing capabilities in computational science, plotted against "Time" and "Complexity".)
Workflow needs
 Application Drivers
– Applications that are many-task in nature: parameter sweeps, UQ, inverse modeling, and data-driven applications
– Analysis of capability application outputs
– Analysis of stored or collected data
– Increased productivity at major experimental facilities (light sources, etc.)
– Urgent computing
– These applications are all many-task in nature
 Requirements
– Usability and ease of workflow expression
– Ability to leverage the complex architecture of HPC and HTC systems (fabric, scheduler, hybrid node and programming models), individually and collectively
– Ability to integrate high-performance data services and volumes
– Make use of the task-rate capabilities of systems ranging from clusters to extreme-scale machines
 Approach
– A programming model for programming in the large
When do you need HPC workflow?
Example application: protein-ligand docking for drug screening
(Figure: O(10) proteins implicated in a disease × O(100K) drug candidates = 1M docking tasks… then hundreds of detailed MD models to find 10-20 fruitful candidates for wet-lab and APS crystallography.)
Expressing this many-task workflow in Swift

For the protein docking workflow:

foreach p, i in proteins {
  foreach c, j in ligands {
    (structure[i,j], log[i,j]) =
      dock(p, c, minRad, maxRad);
  }
}
scatter_plot = analyze(structure);

To run:
swift -site tukey,blues dock.swift
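The dock and analyze calls above are leaf tasks. A minimal sketch of how they might be declared as Swift app functions follows; the executable names, flags, and the @filenames usage are illustrative assumptions, not the actual codes used in this campaign:

app (file structFile, file logFile) dock (file protein, file ligand, float minRad, float maxRad)
{
  dock_exe "-p" @protein "-l" @ligand "-rmin" minRad "-rmax" maxRad "-o" @structFile "-log" @logFile;
}

app (file plot) analyze (file structures[])
{
  analyze_exe @filenames(structures) "-o" @plot;
}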
Swift enables execution of simulation campaigns across multiple HPC and cloud resources

The Swift runtime system has drivers and algorithms to efficiently support and aggregate diverse runtime environments.

(Figure: a Swift host (login node, laptop, …) runs scripts and apps over local data, dispatching work to campus systems, cloud resources, petascale systems, and national infrastructure, backed by data servers.)
Swift in a nutshell

 Data types
int i = 4;
int A[];
string s = "hello world";

 Mapped data types
type image;
image file1<"snapshot.jpg">;

 Mapped functions
app (file o) myapp(file f, int i)
{ mysim "-s" i @f @o; }

 Conventional expressions
if (x == 3) {
  y = x+2;
  s = @strcat("y: ", y);
}

 Structured data
image A[]<array_mapper…>;

 Loops
foreach f,i in A {
  B[i] = convert(A[i]);
}

 Data flow
analyze(B[0], B[1]);
analyze(B[2], B[3]);
Swift: A language for distributed parallel scripting, J. Parallel Computing, 2011
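Pulling these constructs together, a minimal end-to-end sketch in the same style; the mappers, file pattern, and convert_exe executable are illustrative assumptions rather than a specific published workflow:

type image;

app (image o) convert (image f)
{ convert_exe @f @o; }

image inputs[] <filesys_mapper; pattern="*.jpg">;
image outputs[] <simple_mapper; prefix="out_", suffix=".jpg">;

foreach f, i in inputs {
  outputs[i] = convert(inputs[i]);    // each iteration is eligible to run in parallel
}

Because outputs[i] depends only on inputs[i], Swift can dispatch every convert call as soon as its input file is available.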
Pervasively parallel

 Swift is a parallel scripting system for grids, clouds and clusters
 F() and G() are computed in parallel
– Can be Swift functions, or leaf tasks (executables or scripts in shell, python, R, Octave, MATLAB, ...)
 r is computed when they are done
 This parallelism is automatic
 Works recursively throughout the program's call graph

(int r) myproc (int i)
{
  int f = F(i);
  int g = G(i);
  r = f + g;
}
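As a small illustration (not from the original slides) of how this ordering composes, independent calls to myproc are themselves evaluated concurrently:

int a = myproc(1);
int b = myproc(2);
int total = a + b;   // waits for a and b, which are computed in parallel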
Pervasive parallel data flow
Data-intensive example: Processing MODIS land-use data

(Figure: image processing pipeline for land-use data from the MODIS satellite instrument; 317 landUse tasks and 317 colorize tasks feed analyze, mark, and assemble. Swift loops process hundreds of images in parallel.)
Processing MODIS land-use data
foreach raw,i in rawFiles {
land[i] = landUse(raw,1);
colorFiles[i] = colorize(raw);
}
(topTiles, topFiles, topColors) =
analyze(land, landType, nSelect);
gridMap = mark(topTiles);
montage =
assemble(topFiles,colorFiles,webDir);
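The landUse and colorize calls are app leaf tasks wrapping external image-processing programs. A hypothetical sketch of their declarations, with placeholder executable names (getlanduse, colormodis), in the app-function style shown earlier:

app (file land) landUse (file raw, int sortOrder)
{ getlanduse @raw sortOrder stdout=@filename(land); }

app (file color) colorize (file raw)
{ colormodis @raw @color; }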
Example of Swift's implicit parallelism: Processing MODIS land-use data

(Figure: the same MODIS pipeline as above, showing the implicitly parallel stages: 317 landUse and 317 colorize tasks feeding analyze, mark, and assemble.)
Swift provides 4 important benefits:

 Makes parallelism more transparent
– Implicitly parallel functional dataflow programming
 Makes computing location more transparent
– Runs your script on multiple distributed sites and diverse computing resources (desktop to petascale)
 Makes basic failure recovery transparent
– Retries/relocates failing tasks
– Can restart failing runs from point of failure
 Enables provenance capture
– Tasks have recordable inputs and outputs
Swift/T: productive extreme-scale scripting

 Script-like programming with "leaf" tasks
– In-memory function calls in C++, Fortran, Python, R, … passing in-memory objects
– More expressive than master-worker for "programming in the large"
– Leaf tasks can be MPI programs, etc. Can be separate processes if the OS permits.
 Distributed, scalable runtime manages tasks, load balancing, data movement
 User function calls to external code run on thousands of worker nodes

(Figure: Swift control processes, with a parallel evaluator and data store, coordinate many Swift worker processes running C, C++, and Fortran leaf tasks, communicating over MPI.)
Parallel tasks in Swift/T

 Swift expression: z = @par=32 f(x,y);
 ADLB server finds 8 available workers
– Workers receive ranks from the ADLB server
– Workers perform comm = MPI_Comm_create_group()
 Workers perform f(x,y), communicating on comm
LAMMPS parallel tasks

 LAMMPS provides a convenient C++ API
 Easily used by Swift/T parallel tasks

foreach i in [0:20] {
  t = 300+i;
  sed_command = sprintf("s/_TEMPERATURE_/%i/g", t);
  lammps_file_name = sprintf("input-%i.inp", t);
  lammps_args = "-i " + lammps_file_name;
  file lammps_input<lammps_file_name> =
    sed(filter, sed_command) =>
    @par=8 lammps(lammps_args);
}

(Figure: utilization timeline showing tasks of varying sizes packed into one big MPI run. Black: compute; blue: message; white: idle.)
Swift/T-specific features

 Task locality: ability to send a task to a specific process (see the sketch after this slide)
– Allows for big-data-type applications
– Allows stateful objects to remain resident in the workflow
– location L = find_data(D);
  int y = @location=L f(D, x);
 Data broadcast
 Task priorities: ability to set task priority
– Useful for tweaking load balancing
 Updateable variables
– Allow data to be modified after its initial write
– Consumer tasks may receive original or updated values when they emerge from the work queue
Wozniak et al. Language features for scalable distributed-memory dataflow computing. Proc. Dataflow Execution Models at PACT, 2014.
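As referenced above, a brief sketch of the task-locality feature, expanding the find_data/@location fragment into a loop; datasets, nSteps, results, and analyze_chunk are hypothetical names used only for illustration:

foreach D, i in datasets {
  location L = find_data(D);              // ask the runtime where D resides
  foreach s in [0:nSteps-1] {
    results[i][s] = @location=L analyze_chunk(D, s);   // run each task near its data
  }
}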
Swift/T: scaling of trivial foreach { } loop

100-microsecond to 10-millisecond tasks on up to 512K integer cores of Blue Waters
Large-scale applications using Swift

 Simulation of super-cooled glass materials
 Protein and biomolecule structure and interaction
 Climate model analysis and decision making for global food production & supply
 Materials science at the Advanced Photon Source
 Multiscale subsurface flow modeling
 Modeling of power grid for OE applications

All have published science results obtained using Swift.
Boosting Light Source Productivity with Swift ALCF Data Analysis
H Sharma, J Almer (APS); J Wozniak, M Wilde, I Foster (MCS)

Impact and Approach
• HEDM imaging and analysis shows granular material structure non-destructively
• APS Sector 1 scientists use Mira to process data from live HEDM experiments, providing real-time feedback to correct or improve in-progress experiments
• Scientists working with the Discovery Engines LDRD developed new Swift analysis workflows to process APS data from Sectors 1, 6, and 11

Accomplishments
• Mira analyzes an experiment in 10 mins vs. 5.2 hours on the APS cluster: > 30X improvement
• Scaling up to ~ 128K cores (driven by data features)
• A cable flaw was found and fixed at the start of an experiment, saving an entire multi-day experiment and valuable user time and APS beam time
• In press: High-Energy Synchrotron X-ray Techniques for Studying Irradiated Materials, J-S Park et al, J. Mat. Res.
• Big data staging with MPI-IO for interactive X-ray science, J Wozniak et al, Big Data Conference, Dec 2014

ALCF Contributions
• Design, develop, support, and trial user engagement to make the Swift workflow solution on ALCF systems a reliable, secure and supported production service
• Creation and support of the Petrel data server
• Reserved resources on Mira for the APS HEDM experiment at the Sector 1-ID beamline (8/10/2014 and future sessions in APS 2015 Run 1)

(Figure: five-step experiment feedback cycle with stages Analyze, Assess, Fix, Re-analyze, and Valid Data!; red indicates higher statistical confidence in data.)
Conclusion: parallel workflow scripting is practical, productive, and necessary at a broad range of scales

 Swift programming model demonstrated to be feasible and scalable on XSEDE, Blue Waters, OSG, and DOE systems
 Applied to numerous MTC and HPC application domains
– attractive for data-intensive applications
– and several hybrid programming models
 Proven productivity enhancement in materials, genomics, biochem, earth systems science, …
 Deep integration of workflow in progress at XSEDE, ALCF

Workflow through implicitly parallel dataflow is productive for applications and systems at many scales, including the highest-end systems.
What's next?

 Programmability
– New patterns à la van der Aalst et al. (workflowpatterns.org)
 Fine-grained dataflow – programming in the smaller?
– Run leaf tasks on accelerators (CUDA GPUs, Intel Phi)
– How low/fast can we drive this model?
 PowerFlow
– Applies dataflow semantics to manage and reduce energy usage
 Extreme-scale reliability
 Embed Swift semantics in Python, R, Java, shell, make
– Can we make Swift "invisible"? Should we?
 Swift-Reduce
– Learning from map-reduce
– Integration with map-reduce
GeMTC: GPU-enabled Many-Task Computing

Motivation: support for MTC on all accelerators!

Goals:
1) MTC support
2) Programmability
3) Efficiency
4) MPMD on SIMD
5) Increase concurrency to warp level

Approach: design and implement GeMTC middleware:
1) Manages GPU
2) Spread host/device
3) Workflow system integration (Swift/T)

S. J. Krieder, J. M. Wozniak, T. Armstrong, M. Wilde, D. S. Katz, B. Grimmer, I. T. Foster, I. Raicu, "Design and Evaluation of the GeMTC Framework for GPU-enabled Many-Task Computing," HPDC'14
Further research directions

 Deeply in-situ processing for extreme-scale analytics
 Shell-like Read-Evaluate-Print Loop à la IPython
 Debugging of extreme-scale workflows

(Figure: deeply in-situ analytics of a climate simulation.)
Swift gratefully acknowledges support from: U.S. Department of Energy
http://swift-lang.org