ARM Research – Software & Large Scale Systems
UCX: An Open Source Framework
for HPC Network APIs and Beyond
Pavel Shamis (Pasha)
Principal Research Engineer
Co-Design Collaboration
Collaborative Effort
Industry, National Laboratories and Academia
The Next Generation
HPC Communication Framework
Challenges
§  Performance Portability (across various interconnects)
§  Collaboration between industry and research institutions
§  …but mostly industry (because they built the hardware)
§  Maintenance
§  Maintaining a network stack is time consuming and expensive
§  Industry has the resources and a strategic interest in this
§  Extensibility
§  MPI+X+Y ?
§  Exascale programming environment is an ongoing debate
UCX – Unified Communication X Framework
§  Unified
§  Network API for multiple network architectures that target HPC
programming models and libraries
§  Communication
§  How to move data from location in memory A to location in memory B
considering multiple types of memories
§  Framework
§  A collection of libraries and utilities for HPC network programmers
History
MXM
●  Developed by Mellanox Technologies
●  HPC communication library for InfiniBand
devices and shared memory
●  Primary focus: MPI, PGAS
PAMI
●  Developed by IBM on BG/Q, PERCS, IB VERBS
●  Network devices and shared memory
●  MPI, OpenSHMEM, PGAS, CHARM++, X10
●  C++ components
●  Aggressive multi-threading with contexts
●  Active Messages
●  Non-blocking collectives with hardware acceleration support
Decades of community and
industry experience in
development of HPC software
UCCS
●  Developed by ORNL, UH, UTK
●  Originally based on Open MPI BTL and OPAL
layers
●  HPC communication library for InfiniBand,
Cray Gemini/Aries, and shared memory
●  Primary focus: OpenSHMEM, PGAS
●  Also supports: MPI
What we are doing differently…
§  UCX consolidates multiple industry and academic efforts
§  Mellanox MXM, IBM PAMI, ORNL/UTK/UH UCCS, etc.
§  Supported and maintained by industry
§  IBM, Mellanox, NVIDIA, Pathscale, ARM
What we are doing differently…
§  Co-design effort between national laboratories, academia, and
industry
Applications: LAMMPS, NWCHEM, etc.
Programming models: MPI, PGAS/GASNet, etc.
Middleware:
Driver and Hardware
Co-design
UCX
InfiniBand uGNI
Shared
Memory
GPU Memory
Emerging
Interconnects
MPI GASNet PGAS
Task Based
Runtimes
I/O
Transports
Protocols Services
Applications
A Collaborative Effort
§  Mellanox co-designs network API and contributes MXM technology
§  Infrastructure, transport, shared memory, protocols, integration with
OpenMPI/SHMEM, MPICH
§  ORNL & LANL co-designs network API and contributes UCCS project
§  InfiniBand optimizations, Cray devices, shared memory
§  ARM co-designs the network API and contributes optimizations for
ARM eco-system
§  NVIDIA co-designs high-quality support for GPU devices
§  GPUDirect, GDR copy, etc.
§  IBM co-designs network API and contributes ideas and concepts from
PAMI
§  UH/UTK focus on integration with their research platforms
Licensing
§  Open Source
§  BSD 3 Clause license
§  Contributor License Agreement – BSD 3 based
UCX Framework Mission
§  Collaboration between industry, laboratories, and academia
§  Create open-source production grade communication framework for HPC applications
§  Enable the highest performance through co-design of software-hardware interfaces
§  Unify industry - national laboratories - academia efforts
Performance oriented
Optimization for low software overheads in the communication path allows near native-level performance
Community driven
Collaboration between industry,
laboratories, and academia
Production quality
Developed, maintained, tested, and used
by industry and the research community
API
Exposes broad semantics that target
data centric and HPC programming
models and applications
Research
The framework concepts and ideas are
driven by research in academia,
laboratories, and industry
Cross platform
Support for InfiniBand, Cray, various
shared memory (x86-64 and Power),
GPUs
Co-design of Exascale Network APIs
Architecture
UCX Framework
UC-S for Services
This framework provides
basic infrastructure for
component based
programming, memory
management, and useful
system utilities
Functionality:
Platform abstractions, data
structures, debug facilities.
UC-T for Transport
Low-level API that exposes basic network operations supported by the underlying hardware. Reliable, out-of-order delivery.
Functionality:
Setup and instantiation of
communication operations.
UC-P for Protocols
High-level API that uses the UCT framework to construct protocols commonly found in applications
Functionality:
Multi-rail, device selection,
pending queue, rendezvous,
tag-matching, software-
atomics, etc.
A High-level Overview
UC-T (Hardware Transports) - Low Level API
RMA, Atomic, Tag-matching, Send/Recv, Active Message
Transport for InfiniBand VERBs
driver
RC UD XRC DCT
Transport for intra-node host memory communication
SYSV POSIX KNEM CMA XPMEM
Transport for
Accelerator Memory
communication
GPU
Transport for
Gemini/Aries
drivers
GNI
UC-S
(Services)
Common utilities
UC-P (Protocols) - High Level API
Transport selection, cross-transport multi-rail, fragmentation, operations not supported by hardware
Message Passing API Domain:
tag matching, rendezvous
PGAS API Domain:
RMAs, Atomics
Task Based API Domain:
Active Messages
I/O API Domain:
Stream
Utilities
Data
structures
Hardware
MPICH, Open-MPI, etc.
OpenSHMEM, UPC, CAF, X10,
Chapel, etc.
PaRSEC, OCR, Legion, etc. Burst buffer, ADIOS, etc.
Applications
UCX
Memory
Management
OFA Verbs Driver Cray Driver OS Kernel Cuda
UCP API (DRAFT) Snippet
(https://github.com/openucx/ucx/blob/master/src/ucp/api/ucp.h)
§  ucs_status_t ucp_put(ucp_ep_h ep, const void ∗buffer, size_t length, uint64_t remote_addr, ucp_rkey_h rkey)
Blocking remote memory put operation.
§  ucs_status_t ucp_put_nbi (ucp_ep_h ep, const void ∗buffer, size_t length, uint64_t remote_addr, ucp_rkey_h rkey)
Non-blocking implicit remote memory put operation.
§  ucs_status_t ucp_get (ucp_ep_h ep, void ∗buffer, size_t length, uint64_t remote_addr, ucp_rkey_h rkey)
Blocking remote memory get operation.
§  ucs_status_t ucp_get_nbi (ucp_ep_h ep, void ∗buffer, size_t length, uint64_t remote_addr, ucp_rkey_h rkey)
Non-blocking implicit remote memory get operation.
§  ucs_status_t ucp_atomic_add32 (ucp_ep_h ep, uint32_t add, uint64_t remote_addr, ucp_rkey_h rkey)
Blocking atomic add operation for 32 bit integers.
§  ucs_status_t ucp_atomic_add64 (ucp_ep_h ep, uint64_t add, uint64_t remote_addr, ucp_rkey_h rkey)
Blocking atomic add operation for 64 bit integers.
§  ucs_status_t ucp_atomic_fadd32 (ucp_ep_h ep, uint32_t add, uint64_t remote_addr, ucp_rkey_h rkey, uint32_t ∗result)
Blocking atomic fetch and add operation for 32 bit integers.
§  ucs_status_t ucp_atomic_fadd64 (ucp_ep_h ep, uint64_t add, uint64_t remote_addr, ucp_rkey_h rkey, uint64_t ∗result)
Blocking atomic fetch and add operation for 64 bit integers.
§  ucs_status_ptr_t ucp_tag_send_nb (ucp_ep_h ep, const void ∗buffer, size_t count, ucp_datatype_t datatype, ucp_tag_t tag, ucp_send_callback_t cb)
Non-blocking tagged-send operations.
§  ucs_status_ptr_t ucp_tag_recv_nb (ucp_worker_h worker, void ∗buffer, size_t count, ucp_datatype_t datatype, ucp_tag_t tag, ucp_tag_t tag_mask,
ucp_tag_recv_callback_t cb)
Non-blocking tagged-receive operation.
Preliminary Evaluation ( UCT )
§  Pavel Shamis, et al., “UCX: An Open Source Framework for HPC Network APIs and Beyond,” Hot Interconnects 2015,
Santa Clara, California, US, August 2015
§  Two HP ProLiant DL380p Gen8 servers
§  Mellanox SX6036 switch, Single-port Mellanox Connect-IB FDR (10.10.5056)
§  Mellanox OFED 2.4-1.0.4. (VERBS)
§  Prototype implementation of Accelerated VERBS (AVERBS)
OpenSHMEM and OSHMEM (OpenMPI)
Put Latency (shared memory)
[Figure: put latency (usec, log scale) vs. message size, 8B–4MB, comparing OpenSHMEM−UCX, OpenSHMEM−UCCS, and OSHMEM, all intranode]
Lower is better
Slide courtesy of ORNL UCX Team
OpenSHMEM and OSHMEM (OpenMPI)
Put Injection Rate
Higher is better
Connect-IB
[Figure: put message rate (operations per second) vs. message size, 8B–4KB, comparing OpenSHMEM−UCX, OpenSHMEM−UCCS, OSHMEM, and OSHMEM−UCX, all mlx5]
Slide courtesy of ORNL UCX Team
OpenSHMEM and OSHMEM (OpenMPI)
GUPs Benchmark
Higher is better
Connect-IB
[Figure: GUPS (billion updates per second) vs. number of PEs across two nodes, 2–16, comparing UCX (mlx5) and OSHMEM (mlx5)]
Slide courtesy of ORNL UCX Team
MPICH - Message rate
Preliminary Results
[Figure: message rate in millions of messages per second (MMPS) vs. message size, 1B–4MB, comparing MPICH/UCX and MPICH/MXM]
Slide courtesy of Pavan Balaji, ANL (sent to the ucx mailing list)
Connect-IB
“non-blocking tag-send”
Where is UCX being used?
§  Upcoming release of Open MPI 2.0 (MPI and OpenSHMEM APIs)
§  Upcoming release of MPICH
§  OpenSHMEM reference implementation by UH and ORNL
§  PaRSEC – a runtime used by scientific linear algebra libraries
What Next ?
§  UCX Consortium !
§  http://www.csm.ornl.gov/newsite/
§  UCX Specification
§  Early draft is available online:
http://www.openucx.org/early-draft-of-ucx-specification-is-here/
§  Production releases
§  MPICH, Open MPI, Open SHMEM(s), GASNet, and more…
§  Support for more networks and applications and libraries
§  UCX Hackathon 2016 !
§  Will be announced on the mailing list and website
https://github.com/orgs/openucx
WEB: www.openucx.org
Contact: info@openucx.org
Mailing List:
https://elist.ornl.gov/mailman/listinfo/ucx-group
ucx-group@elist.ornl.gov
Questions ? Unified Communication - X
Framework
WEB: www.openucx.org
Contact: info@openucx.org
WEB: https://github.com/orgs/openucx
Mailing List:
https://elist.ornl.gov/mailman/listinfo/ucx-group
ucx-group@elist.ornl.gov