A Domain-Specific Embedded Language for
Programming Parallel Architectures.
Distributed Computing and Applications to Business,
Engineering and Science
September 2013.
Jason McGuiness
& Colin Egan
University of Hertfordshire
Copyright © Jason McGuiness, 2013.
dcabes2013@count-zero.ltd.uk
http://libjmmcg.sf.net/
Sequence of Presentation.
A very pragmatic, practical basis to the talk.
An introduction: why I am here.
Why do we need & how do we manage multiple threads?
Propose a DSEL to enable parallelism.
Describe the grammar, give resultant theorems.
Examples, and their motivation.
Discussion.
Introduction.
Why yet another thread-library presentation?
Because we still find it hard to write multi-threaded programs
correctly.
According to programming folklore.
We haven’t successfully replaced the von Neumann
architecture:
Stored program architectures are still prevalent.
Companies don’t like to change their compilers.
People don’t like to recompile their programs to run on the
latest architectures.
The memory wall still affects us:
The CPU-instruction retirement rate, i.e. the rate at which
programs require and generate data, exceeds the memory
bandwidth - a by-product of Moore’s Law.
Modern architectures add extra cores to CPUs, in this
instance, extra memory buses which feed into those cores.
A Quick Review of Related Threading Models:
Compiler-based such as Erlang, UPC or HPF.
Corollary: companies/people don’t like to change their
programming language.
Profusion of library-based solutions such as Posix Threads and
OpenMP, Boost.Threads:
Don’t have to change the language, nor compiler!
Suffer from inheritance anomalies & related issue of entangling
the thread-safety, thread scheduling and business logic.
Each program becomes bespoke, requiring re-testing for
threading and business logic issues.
Debugging: very hard, an open area of research.
Intel’s TBB or Cilk.
Have limited grammars: Cilk - simple data-flow model, TBB -
complex, but invasive API.
How to implement multi-threaded debuggers correctly remains
an open question.
Race conditions commonly “disappear” in the debugger...
The DSEL to Assist Parallelism.
Should have the following properties:
Target general purpose threading, defined as scheduling where
conditionals or loop-bounds may not be computed at
compile-time, nor memoized.
Support both data-flow and data-parallel constructs succinctly
and naturally within the host language.
Provide guarantees regarding:
deadlocks and race-conditions,
the algorithmic complexity of any parallel schedule
implemented with it.
Assist in debugging any use of it.
Example implementation uses C++ as the host language, so
more likely to be used in business.
Grammar Overview: Part 1: thread-pool-type.
thread-pool-type → thread_pool work-policy size-policy pool-adaptor
A thread_pool would contain a collection of threads, which may differ from the number of
physical cores.
work-policy → worker_threads_get_work | one_thread_distributes
The library should implement the classic work-stealing or master-slave work
sharing algorithms.
size-policy → fixed_size | tracks_to_max | infinite
The size-policy combined with the threading-model could be used to
optimize the implementation of the thread-pool-type.
pool-adaptor → joinability api-type threading-model priority-mode_opt comparator_opt
joinability → joinable | nonjoinable
The joinability type has been provided to allow for optimizations of the
thread-pool-type.
api-type → posix_pthreads | IBM_cyclops | ... omitted for brevity
threading-model → sequential_mode | heavyweight_threading | lightweight_threading
This specifier provides a coarse representation of the various
implementations of threading in the many architectures.
priority-mode → normal_fifo (default) | prioritized_queue
The prioritized_queue would allow specification of whether certain
instances of work-to-be-mutated could be mutated before other instances
according to the specified comparator.
comparator → std::less (default)
A binary function-type that would be used to specify a strict weak-ordering
on the elements within the prioritized_queue.
Grammar Overview: Part 2: other types.
The thread-pool-type should define further terminals for programming convenience:
execution_context: An opaque type of future that a transfer returns and a proxy to the result_type
that the mutation creates.
Access to the instance of the result_type implicitly causes the calling
thread to wait until the mutation has been completed: a data-flow
operation.
Implementations of execution_context must specifically prohibit: aliasing
instances of these types, copying instances of these types and assigning
instances of these types.
joinable: A modifier for transferring work-to-be-mutated into an instance of
thread-pool-type, a data-flow operation.
nonjoinable: Another modifier for transferring work-to-be-mutated into an instance of
thread-pool-type, a data-flow operation.
safe-colln → safe_colln collection-type lock-type
This adaptor wraps the collection-type and lock-type in one object; also providing some
thread-safe operations upon and access to the underlying collection.
lock-type → critical_section_lock_type | read_write | read_decaying_write
A critical_section_lock_type would be a single-reader, single-writer lock,
a simulation of EREW semantics.
A read_write lock would be a multi-reader, single-writer lock, a simulation
of CREW semantics.
A read_decaying_write lock would be a specialization of a read_write lock
that also implements atomic transformation of a write-lock into a read-lock.
collection-type: A standard collection such as an STL-style list or vector, etc.
Grammar Overview: Part 3: Rewrite Rules.
Transfer of work-to-be-mutated into an instance of thread-pool-type has been defined as follows:
transfer-future → execution-context-result_opt thread-pool-type transfer-operation
execution-context-result → execution_context <<
An execution_context should be created only via a transfer of
work-to-be-mutated with the joinable modifier into a thread_pool defined
with the joinable joinability type.
It must be an error to transfer work into a thread_pool that has been
defined using the nonjoinable type.
An execution_context should not be creatable without transferring work, so
guaranteed to contain an instance of result_type of a mutation, implying
data-flow like operation.
transfer-operation → transfer-modifier-operation_opt transfer-data-operation
transfer-modifier-operation → << transfer-modifier
transfer-modifier → joinable | nonjoinable
transfer-data-operation → << transfer-data
transfer-data → work-to-be-mutated | data-parallel-algorithm
The data-parallel-algorithms have been defined as follows:
data-parallel-algorithm → accumulate | ... omitted for brevity
The style and arguments of the data-parallel-algorithms should be similar to
those of the STL. Specifically they should all take a safe-colln as an
argument to specify the range and functors as specified within the STL.
Properties of the DSEL.
Due to the restricted properties of the execution contexts and the
thread pools a few important results arise:
1. The thread schedule created is an acyclic, directed graph: in
fact a tree.
2. From this property we have proved that the schedule
generated is deadlock and race-condition free.
3. Moreover in implementing the STL-style algorithms those
implementations are efficient, i.e. there are provable bounds
on both the execution time and minimum number of
processors required to achieve that time.
Initial Theorems (Proofs in the Paper).
1. CFG is a tree:
Theorem
The CFG of any program must be an acyclic directed graph
comprising at least one singly-rooted tree, but may contain
multiple singly-rooted, independent trees.
2. Race-condition Free:
Theorem
The schedule of a CFG satisfying Theorem 1 should be guaranteed
to be free of race-conditions.
3. Deadlock Free:
Theorem
The schedule of a CFG satisfying Theorem 1 should be guaranteed
to be free of deadlocks.
Final Theorems (Proofs in the Paper).
1. Race-condition and Deadlock Free:
Corollary
The schedule of a CFG satisfying Theorem 1 should be guaranteed
to be free of race-conditions and deadlocks.
2. Implements Optimal Schedule:
Theorem
The schedule of a CFG satisfying Theorem 1 should be executed
with an algorithmic complexity of at least O (log (p)) and at most
O (n), in units of time to mutate the work, where n would be the
number of work items to be mutated on p processors. The
algorithmic order of the minimal time would be poly-logarithmic, so
within NC, therefore at least optimal.
Basic Data-Flow Example.
Listing 1: General-Purpose use of a Thread Pool and Future.
struct res_t { int i; };
struct work_type {
    void process(res_t &) {}
};
typedef ppd::thread_pool<
    pool_traits::worker_threads_get_work, pool_traits::fixed_size,
    pool_adaptor<generic_traits::joinable, posix_pthreads, heavyweight_threading>
> pool_type;
typedef pool_type::joinable joinable;
pool_type pool(2);
auto const &context=pool<<joinable()<<work_type();
context->i;
The work has been transferred to the thread_pool and the
resultant opaque execution_context has been captured.
process(res_t &) is the only invasive artefact of the library
for this use-case.
The dereference of the proxy conceals the implicit
synchronisation:
obviously a data-flow operation,
an implementation of the split-phase constraint.
Data-Parallel Example: map-reduce as accumulate.
Listing 2: Accumulate with a Thread Pool and Future.
typedef ppd::thread_pool<
    pool_traits::worker_threads_get_work, pool_traits::fixed_size,
    pool_adaptor<generic_traits::joinable, posix_pthreads, heavyweight_threading>
> pool_type;
typedef ppd::safe_colln<
    vector<int>, lock_traits::critical_section_lock_type
> vtr_colln_t;
typedef pool_type::joinable joinable;
vtr_colln_t v; v.push_back(1); v.push_back(2);
auto const &context=pool<<joinable()
    <<pool.accumulate(v, 1, std::plus<vtr_colln_t::value_type>());
assert(*context==4);
An implementation might:
distribute sub-ranges of the safe-colln, within the
thread_pool, performing the mutations sequentially within
the sub-ranges, without any locking,
compute the final result by combining the intermediate results,
the implementation providing suitable locking.
The lock-type of the safe_colln:
indicates EREW semantics obeyed for access to the collection,
released when all of the mutations have completed.
Operation of accumulate.
[Figure: the schedule tree of the accumulate call - main() transfers the algorithm to accumulate/distribute_root, which recursively spawns distribute nodes down to the leaves; each edge is annotated s, v or h as explained below.]
main() the C++ entry-point for the program,
accumulate & distribute_root the root-node of the transferred algorithm,
distribute
- internally distributes the input collection recursively within the graph,
- leaf nodes perform the mutation upon the sub-range.
s sequential, shown for exposition purposes only,
v vertical, mutation performed by thread within thread_pool.
h horizontal, mutation performed by a thread spawned within an execution_context.
Ensures that sufficient free threads are available for fixed_size thread_pools.
Discussion.
A DSEL has been formulated:
that targets general purpose threading using both data-flow
and data-parallel constructs,
ensures there should be no deadlocks and race-conditions with
guarantees regarding the algorithmic complexity,
and assists with debugging any use of it.
The choice of C++ as a host language was not special.
Result should be no surprise: consider the work done relating
to auto-parallelizing compilers.
No need to learn a new programming language, nor change to
a novel compiler.
Not a panacea: program must be written in a data-flow style.
Expose estimate of threading costs.
Testing the performance with SPEC2006 could be investigated.
Perhaps on alternative architectures, GPUs, APUs, etc.
