Putting Compilers to Work
August 19th, 2015 Drew Paroski
Why do compilers and runtimes matter?
•Virtually all modern software is powered by
compilers
•Compilers can have a huge effect on the
efficiency and performance of software
•They can also affect how programmers develop software
Efficiency and Performance
•Qualitative:
•Change what user experiences are possible
•Quantitative:
•Reduce CPU and resource usage
•This often translates into reduced costs
and/or increased revenue
How do compilers work?
•Demo: Demystifying machine code
Let’s look at a 50-line C++ program that
generates machine code at run time and
invokes it
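A minimal sketch of the same idea (not the original 50-line demo), assuming Linux on x86-64 and the System V calling convention: write a handful of machine-code bytes into a buffer, mark it executable, and call it through a function pointer.

```cpp
// Sketch: JIT a function that returns its argument plus one, then call it.
// Assumes Linux, x86-64, System V calling convention (first int arg in edi).
#include <sys/mman.h>
#include <cstdint>
#include <cstring>
#include <cstdio>

int main() {
    // Machine code for:  lea eax, [rdi + 1] ; ret
    const uint8_t code[] = { 0x8d, 0x47, 0x01, 0xc3 };

    // Allocate a writable page, copy the bytes in, then make it executable.
    void* mem = mmap(nullptr, 4096, PROT_READ | PROT_WRITE,
                     MAP_PRIVATE | MAP_ANONYMOUS, -1, 0);
    if (mem == MAP_FAILED) return 1;
    std::memcpy(mem, code, sizeof(code));
    if (mprotect(mem, 4096, PROT_READ | PROT_EXEC) != 0) return 1;

    // Treat the buffer as a function pointer and invoke it.
    auto fn = reinterpret_cast<int (*)(int)>(mem);
    std::printf("fn(41) = %d\n", fn(41));  // prints 42

    munmap(mem, 4096);
    return 0;
}
```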
Putting Compilers to Work
Example Compilation Pipeline (HHVM)
PHP Source -> (parse) -> Abstract Syntax Tree (AST) -> (emit) -> Virtual ISA (aka bytecode) -> (analyze) -> Intermediate Representation -> (codegen) -> x64 Machine Code -> CPU
The bytecode can also be executed directly by the interpreter in the HHVM Runtime (interpretation).
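As a rough illustration of those stage boundaries, here is a stubbed-out sketch; the type and function names are hypothetical, not HHVM's.

```cpp
// Hypothetical stage interfaces for a pipeline like the one above.
// Bodies are stubs; a real engine fills each one in.
#include <cstdint>
#include <string>
#include <vector>

struct Ast {};                       // parse tree of the source program
struct Bytecode {};                  // the virtual ISA the interpreter executes
struct Ir {};                        // lower-level intermediate representation
using MachineCode = std::vector<uint8_t>;

Ast parse(const std::string&)          { return {}; }  // source   -> AST
Bytecode emit(const Ast&)              { return {}; }  // AST      -> bytecode
Ir analyze(const Bytecode&)            { return {}; }  // bytecode -> IR
MachineCode codegen(const Ir&)         { return {}; }  // IR       -> machine code
void interpret(const Bytecode&)        {}              // or: run bytecode directly

int main() {
    // JIT path: source -> AST -> bytecode -> IR -> machine code
    MachineCode mc = codegen(analyze(emit(parse("<?php echo 1 + 1;"))));
    // Interpreter path: stop at bytecode and execute it in the runtime
    interpret(emit(parse("<?php echo 1 + 1;")));
    (void)mc;
    return 0;
}
```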
The Rise of Web Programming
•1993: CGI scripting (Perl / C)
•1995: JavaScript, PHP, and Java applets
•1996: Flash, ASP, and ActiveX
•1999: JSP
•2004: Ruby on Rails
•2005: Django (Python)
•2007: Silverlight (XAML / C#)
•Engines for most of these languages were
interpreters
•Often ~100x slower (or more) than C and C++
•Most interpreters don’t have a noticeable
compilation delay
•Supports a rapid iterative development
workflow (edit-save-run)
Web Programming Languages
•Websites grew in complexity
•Developers started to bump up against the performance limitations of early engines
•Bonus points: Try to preserve the rapid
iterative development workflow of these
languages (edit-save-run)
Building Efficient Compilers for
Dynamic Languages
Compiler Improvements in Web Programming
•2002: ASP.NET
•2007: Cython
•2008: JRuby
•2009: TraceMonkey, WebKit SFX/Nitro, V8
•2010: HipHop for PHP, PyPy 1.2
•2011: HHVM
•2014: Pyston
•Let’s talk about performance optimization in
general
Improving software performance
Three Areas to Optimize
1) Data fetching and I/O in general
2) How memory is used on an individual machine
3) How computation is actually performed by the
CPU
•Always measure using profiling tools to
determine what areas you should focus your
optimization efforts on
•Typically the biggest issue to look at
•Does your application frequently stall while
waiting for data?
Data Fetching and other I/O
[Diagram: a wall-time line for an application that alternates between periods of CPU work and periods blocked on I/O]
•Are you fetching more data than you need to?
•Are you making lots of round trips in a serial
fashion?
•Fetch data in batches to reduce the number of round trips (see the sketch below)
•Is your application still blocking a lot?
•Use async I/O APIs
Data Fetching and other I/O
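A minimal sketch of the batching point; the data-store API (fetch_user / fetch_users) is hypothetical and stubbed out, with each call assumed to cost one network round trip.

```cpp
// Batching: one round trip instead of N.
#include <string>
#include <vector>

struct User { int id; std::string name; };

User fetch_user(int id) { return {id, "user"}; }                  // 1 round trip
std::vector<User> fetch_users(const std::vector<int>& ids) {       // 1 round trip total
    std::vector<User> out;
    for (int id : ids) out.push_back({id, "user"});
    return out;
}

std::vector<User> load_profiles_serial(const std::vector<int>& ids) {
    std::vector<User> users;
    for (int id : ids) users.push_back(fetch_user(id));            // N round trips
    return users;
}

std::vector<User> load_profiles_batched(const std::vector<int>& ids) {
    return fetch_users(ids);                                       // 1 round trip
}

int main() {
    auto a = load_profiles_serial({1, 2, 3});
    auto b = load_profiles_batched({1, 2, 3});
    return (a.size() + b.size()) == 6 ? 0 : 1;
}
```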
•Are you taking a lot of cache misses?
•Cache misses slow things down by making the CPU stall
•Pay attention to your application’s memory usage
•Use as little memory as possible
•Avoid pointer chasing (see the sketch below)
How is memory used?
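To illustrate the pointer-chasing point, a minimal sketch contrasting an element-per-allocation layout with a contiguous one; the actual difference depends on data sizes and hardware, so measure on your own workload.

```cpp
// Pointer chasing vs. contiguous storage.
#include <memory>
#include <numeric>
#include <vector>

// Pointer-chasing layout: each element is a separate heap allocation,
// so every access may touch a different cache line and miss.
long long sum_indirect(const std::vector<std::unique_ptr<long long>>& xs) {
    long long total = 0;
    for (const auto& p : xs) total += *p;   // chase a pointer per element
    return total;
}

// Contiguous layout: elements share cache lines and the hardware
// prefetcher can stream them in.
long long sum_contiguous(const std::vector<long long>& xs) {
    return std::accumulate(xs.begin(), xs.end(), 0LL);
}

int main() {
    std::vector<std::unique_ptr<long long>> indirect;
    std::vector<long long> contiguous;
    for (long long i = 0; i < 1'000'000; ++i) {
        indirect.push_back(std::make_unique<long long>(i));
        contiguous.push_back(i);
    }
    return sum_indirect(indirect) == sum_contiguous(contiguous) ? 0 : 1;
}
```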
•Is your application repeatedly re-computing something that could be memoized? (see the memoization sketch below)
•Are common operations as efficient as possible?
•Is your application needlessly making lots of
short-lived allocations?
•Is there unnecessary contention with locks or
interlocked operations?
How is computation performed on the CPU?
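A minimal memoization sketch; expensive_lookup is a hypothetical stand-in for whatever your profiler shows being recomputed.

```cpp
// Memoization: cache the result of an expensive pure computation.
#include <cstdint>
#include <string>
#include <unordered_map>

uint64_t expensive_lookup(const std::string& key) {
    uint64_t h = 0;
    for (int i = 0; i < 100000; ++i)          // stand-in for real work
        for (char c : key) h = h * 31 + static_cast<unsigned char>(c);
    return h;
}

uint64_t memoized_lookup(const std::string& key) {
    static std::unordered_map<std::string, uint64_t> cache;  // note: not synchronized
    auto it = cache.find(key);
    if (it != cache.end()) return it->second;                // hit: no recompute
    uint64_t value = expensive_lookup(key);
    cache.emplace(key, value);
    return value;
}

int main() {
    uint64_t a = memoized_lookup("user:42");   // computed once
    uint64_t b = memoized_lookup("user:42");   // served from the cache
    return a == b ? 0 : 1;
}
```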
•Compilers require a fair amount of investment before they pay off
•Have you exhausted the low hanging fruit
from other avenues of performance
optimization?
•Are there better engines already out there
you can use?
When to invest in compilers
•Before building anything, think about what your
goals really are
• How much better does execution performance need
to be? 2x? 4x?
• Do you need fast compile times? How fast?
• Do you need to support a rapid iterative workflow?
• Do these goals help improve the user experience for
your product or your company’s bottom line?
What are your goals?
•Everyone wants the best performance, but it doesn’t
come for free
•Do you really need 5x better execution performance?
Or would 2x be good enough for the next few years?
•Is it worth it for your company to pay one or more
engineers to work on this instead of other things
your company needs?
What are your goals?
•Does your use case require doing compilation
on-line?
•On average, how big are the source programs
that are being compiled?
•Does the code your compiler outputs execute for short periods of time, or is it longer-running?
•Is your source language statically typed or
dynamically typed?
What are the constraints of your use case?
• Do you need fine control over low-level / native stuff?
• Do you need to integrate with existing native code or data
structures?
• Do you need to work on multiple processors/platforms?
• How quickly do you need a working solution?
• What’s the best fit for your use case?
• Ahead of time (AOT) compilation
• Just-in-time (JIT) compilation
• Interpreted execution
What are the constraints of your use case?
•Full custom approach
•Interpreter approach
•Transpiler approach
•Build on top of an existing backend (LLVM, JVM)
•Meta-tracing / partial evaluation frameworks
(PyPy, Truffle)
Approaches
•Easy to write, maintain, and debug
•Can be built relatively quickly
•Very low compile times
•Well-crafted interpreters can deliver better
execution performance than you probably think
Interpreters
•Interpreters built to execute a virtual ISA (bytecode)
tend to be faster than AST interpreters
•Generally try to limit branches in your opcode
handlers
• Common trick: split up an opcode into several separate
opcodes
•The dispatch loop is one of the major sources of overhead; invest some time in optimizing it (see the sketch below)
Interpreters
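A toy dispatch loop for a made-up stack-based virtual ISA (not HHVM's bytecode); production interpreters have many more opcodes and often use computed gotos or hand-written assembly for dispatch.

```cpp
// Minimal bytecode interpreter with a switch-based dispatch loop.
#include <cstdint>
#include <cstdio>
#include <vector>

enum Op : uint8_t { PUSH, ADD, MUL, PRINT, HALT };

void interpret(const std::vector<uint8_t>& code) {
    std::vector<int64_t> stack;
    size_t pc = 0;                           // program counter into the bytecode
    for (;;) {
        switch (code[pc++]) {                // the dispatch loop
            case PUSH: stack.push_back(code[pc++]); break;   // immediate operand
            case ADD: { int64_t b = stack.back(); stack.pop_back();
                        stack.back() += b; break; }
            case MUL: { int64_t b = stack.back(); stack.pop_back();
                        stack.back() *= b; break; }
            case PRINT: std::printf("%lld\n", (long long)stack.back()); break;
            case HALT: return;
        }
    }
}

int main() {
    // Bytecode for: print((2 + 3) * 4)
    interpret({PUSH, 2, PUSH, 3, ADD, PUSH, 4, MUL, PRINT, HALT});
    return 0;
}
```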
•A transpiler converts one programming language to another, and relies on an existing compiler to compile and run the target language (typically C or C++)
•Can be built fairly quickly, relatively easy to debug
•Can deliver better execution perf than interpreters
•Examples: Cython, HipHop for PHP
Transpiler approach
• Often has a lower ceiling for best possible long-term performance vs. the full custom approach
• Primitive operations the source language exposes often do not cleanly map onto the primitive operations exposed by the target language (see the sketch below)
• Transpiler architecture can get unwieldy as your
system evolves and you squeeze for more perf
• For use cases that involve compiling medium-sized
programs or larger, transpiling to C/C++ effectively
locks you into an AOT-compilation-only model
Downsides of Transpilers
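A hypothetical sketch of the primitive-mismatch problem: a dynamic-language "$a + $b" can't simply become the C++ "+" operator, because the source-language "+" may mean integer add, float add, or a coerced mix, so the generated code calls into a runtime helper instead.

```cpp
// Toy dynamic value type and the runtime helper a transpiler might emit
// for every "+" in the source program. Real value types are much richer.
#include <variant>

using Value = std::variant<long, double>;

Value dyn_add(const Value& a, const Value& b) {
    if (std::holds_alternative<long>(a) && std::holds_alternative<long>(b))
        return std::get<long>(a) + std::get<long>(b);          // int + int
    auto to_double = [](const Value& v) {
        return std::holds_alternative<long>(v)
            ? static_cast<double>(std::get<long>(v))
            : std::get<double>(v);
    };
    return to_double(a) + to_double(b);                         // everything else coerces
}

int main() {
    // Generated code for something like:  $c = $a + $b;
    Value a = 1L, b = 2.5;
    Value c = dyn_add(a, b);
    return std::holds_alternative<double>(c) ? 0 : 1;
}
```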
•LLVM is the backend for the clang C/C++ compiler
•Unlike most compiler backends, LLVM has well-designed, external-facing APIs
•Most suitable for compiling statically typed
languages where longer compile times are
acceptable
•LLVM is good when you need tight integration with existing native code where perf really matters (see the IR-building sketch below)
Building on top of LLVM
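For a feel of those APIs, here is a minimal sketch that builds IR for an add2(i32, i32) function using LLVM's C++ IRBuilder. Header paths and signatures drift between LLVM versions, so treat it as illustrative rather than copy-paste ready.

```cpp
// Build and print IR for:  define i32 @add2(i32, i32) { ret i32 (%0 + %1) }
#include "llvm/IR/IRBuilder.h"
#include "llvm/IR/LLVMContext.h"
#include "llvm/IR/Module.h"
#include "llvm/IR/Verifier.h"
#include "llvm/Support/raw_ostream.h"

int main() {
    llvm::LLVMContext ctx;
    llvm::Module mod("demo", ctx);
    llvm::IRBuilder<> builder(ctx);

    // Declare: i32 add2(i32, i32)
    auto* i32 = builder.getInt32Ty();
    auto* fnType = llvm::FunctionType::get(i32, {i32, i32}, /*isVarArg=*/false);
    auto* fn = llvm::Function::Create(fnType, llvm::Function::ExternalLinkage,
                                      "add2", &mod);

    // Body: return %a + %b
    auto* entry = llvm::BasicBlock::Create(ctx, "entry", fn);
    builder.SetInsertPoint(entry);
    auto args = fn->arg_begin();
    llvm::Value* a = &*args++;
    llvm::Value* b = &*args;
    builder.CreateRet(builder.CreateAdd(a, b, "sum"));

    llvm::verifyFunction(*fn);
    mod.print(llvm::outs(), nullptr);   // dump the textual IR
    return 0;
}
```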
•Examples: Scala, JRuby
•Works really well if the source language was
designed with the JVM in mind
•Not great if you need tight integration with existing native code where perf matters
•Suffers from some of the same problems as
transpilers if your source language wasn’t
designed with the JVM in mind
Building on top of the JVM
•Examples: PyPy, Truffle
•Takes an interpreter for your source language as input
•Analyzes the interpreter implementation, stitches together code from different opcode handlers, then runs optimization passes on these fragments and emits them as machine code
•New and therefore not as proven as other
approaches
Meta-tracing and partial evaluation frameworks
•Most expensive option in terms of time/effort
•Gives you maximum control over every part of
your system, lets you craft everything to your
exact use case
•Can produce the best possible execution
performance and compile-time performance
•Ex: JVM, .NET, gcc, clang, V8, HHVM
Full Custom Approach
•Major risks:
•Can take too long to build
•Can go off the rails if you don’t have the proper
expertise
Full Custom Approach
•Depending on the approach you take, you’ll
need to optimize different parts of the system
•Memory, memory, memory
•Reduce memory usage and cache misses
•Optimize your runtime’s binary layout
•Try out jemalloc or tcmalloc
•Try out Linux’s “huge pages” feature (see the sketch below)
Optimization Advice
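As one way to act on the huge-pages suggestion, here is a sketch that hints to Linux that a large JIT code cache should be backed by transparent huge pages via madvise(MADV_HUGEPAGE). Whether it helps depends on kernel configuration and access patterns, so measure before and after.

```cpp
// Allocate a large buffer (e.g. a JIT code cache) and ask for huge pages.
// Explicit hugetlbfs / mmap(MAP_HUGETLB) is another option with more setup.
#include <sys/mman.h>
#include <cstddef>
#include <cstdio>

void* alloc_code_cache(size_t bytes) {
    void* mem = mmap(nullptr, bytes, PROT_READ | PROT_WRITE,
                     MAP_PRIVATE | MAP_ANONYMOUS, -1, 0);
    if (mem == MAP_FAILED) return nullptr;

    // Hint that this region should use transparent huge pages to cut TLB misses.
    if (madvise(mem, bytes, MADV_HUGEPAGE) != 0)
        std::perror("madvise(MADV_HUGEPAGE)");   // non-fatal: it is only a hint

    return mem;
}

int main() {
    const size_t size = 64 * 1024 * 1024;        // 64 MiB code cache
    void* cache = alloc_code_cache(size);
    if (!cache) return 1;
    // ... JIT-compiled code would be written here, then mprotect'd executable ...
    munmap(cache, size);
    return 0;
}
```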
What’s Next for Compilers?
•Continued focus on engines that both
deliver superior execution performance and
support the rapid iterative development
workflow
Predictions: What’s Next?
•DB query compilation will be an interesting space
to watch
•Disk is not the bottleneck anymore
•Growing demand for real-time analytics on huge,
constantly-changing datasets
•I think we’ll see different database systems striving
to deliver the highest quality SQL->machine code
compilation
Questions?

Editor's Notes

  • #2: This talk is about compilers from a practical perspective, about why compilers are important, and how and when to invest in building compilers.
  • #3: Software = greatest force powering technology over last 50 years
  • #4: - Reducing how many servers you need - Producing higher yields for your trading algorithms - Increasing user engagement with your product - Making your product better (an edge over your competitors) in ways like improving its battery life
  • #5: [Live demo]
  • #7: Unlike the demo, of course, real compilers are more complex and have multiple stages that are organized into what’s called a “compilation pipeline”. Here on this slide, for example, we have a diagram of the compilation pipeline used by HHVM. The demo showed what happens at the very end of the compilation pipeline. Pipelines can differ a fair amount when comparing different compilers. You’ll almost always see ASTs early in the pipeline, and mid-to-lower level SSA-based IR later in the pipeline. Also, for engines that take the “virtual machine” approach, you’ll typically see a v-ISA around the middle of the pipeline. Pipelines are structured based on your constraints, your goals, and what kind of optimizations you want to focus on.
  • #8: Many new languages were created during this period as developers faced new challenges involved in building web sites and web applications.
  • #9: Most of the engines were interpreters that were in the ballpark of 100x slower than C/C++, but for small programs that mostly blocked on I/O the performance was acceptable.
  • #11: Over the last 10-15 years, there were a lot of new efficient engines that came out for the popular dynamic languages: Javascript, PHP, Python, and Ruby. These advancements changed what was possible with web development. Sites could be richer, have more dynamic content, and be more interactive while still loading quickly.
  • #12: Before digging into compilers and runtimes specifically, it’s important to dig into performance optimization in general
  • #14: The diagram on this slide shows an example of an application that alternates between doing computation on the CPU and blocking on I/O. The x-axis is wall time. Notice how even if you made the “CPU” portions shrink to zero, the most you can reduce wall time here is about 50% (a 2x performance improvement)
  • #15: Use async I/O APIs and task-switch between multiple independent tasks to keep the CPU busy and avoid blocking.
  • #16: Small programs completely fit in cache, giving the illusion that memory is faster than it actually is. Larger programs will miss in the cache sometimes. Cache misses are similar to blocking on I/O, except at a lower level of granularity: instead of blocking dozens or hundreds of times, each time for a few milliseconds (which is what I/O does), with memory the CPU will stall many millions of times, each time for a tenth of a microsecond (death by a million cuts). Reduce the number of allocations, reduce the size of common data structures, avoid pointer chasing and needless indirections where possible, and when two variables are frequently read at the same time, put them next to each other in your data structures. Profiling what actually causes cache misses can be tricky: if you look at where you took a cache miss, you’ll be able to identify the “victim” (i.e. what piece of data got evicted from the cache), but it will take some investigation to find the cause, or the “perpetrator” (what piece of code polluted the cache and caused the other data to get evicted).
  • #17: Use profilers to look at callstacks where the most CPU time is being spent. Don’t operate solely on hunches – even if you’re right most of the time, you might waste time focusing on the wrong things. Another technique that works sometimes: Get your hands dirty and trace through machine code in the debugger, focus on hot functions and common operations.
  • #21: Short amounts of time: less than 1 ms or 100 ms Longer running: greater than 1 second or 1 minute or more
  • #24: Interpreters: instead of executing your program by compiling it all the way to machine code (or to some target language that is then compiled down to machine code), interpreters run programs using a technique called “interpreted execution”, where the interpreter engine is effectively executing on behalf of the source program.
  • #25: If you plan on doing optimizations at the bytecode level, consider preferring a register-based bytecode over a stack-based one. Design your bytecode so that opcode handlers can get most of what they need directly from an instruction’s immediate arguments. As you squeeze for more performance, the dispatch loop can be written in assembly.
  • #26: Depending on your use case, transpilers can deliver better performance than interpreters. Like interpreters, there’s no need to worry about the low-level nitty-gritty details of generating raw machine code, and no need to perform certain kinds of low-level compiler optimizations.
  • #27: Often has a lower ceiling for best possible long-term performance vs. the full custom approach. The same can be said for interpreters, but I wanted to specifically call this out for transpilers because there’s often a misconception that if you transpile to C/C++ you’re going to get performance on par with hand-written C/C++ programs. Primitive operations that the source language exposes (whose precise semantics are relied on by most large codebases written in the source language) often do not cleanly map onto the primitive operations exposed by the target language. Transpiling to C/C++ effectively locks you in to AOT for most use cases.
  • #28: Remember that LLVM only handles the end of your compilation pipeline for you (low-level IR to machine code), but the rest of the compilation pipeline (from source code down to low-level IR) is still on you. Doesn’t provide automatic memory management for you (no GC). It was built originally for an AOT compiler for a static language where fast compilation was not the top goal. LLVM has been incorporated into some major JIT-compilation-based engines for languages like JavaScript and PHP, but often these engines still have their own JIT and LLVM is often used as a second or third gear that is only used to compile the hottest ~1% of the program.
  • #29: Benefits over LLVM: it’s a little higher level and provides GC for you. InvokeDynamic was a promising development that came out ~5 years ago; JRuby takes advantage of InvokeDynamic.
  • #30: Meta-tracing and partial evaluation frameworks are the newest approach when building efficient compilers. New and therefore not as proven as other approaches, but I think it’s a really exciting space and I’ll be curious to see what happens in the coming years.
  • #32: Can go off the rails if you don’t have the proper expertise (it can go off the rails even if you do have expertise).
  • #33: Depending on the approach you take, you’ll need to optimize different parts of the system. Obviously, optimize the parts of the system you control, but if you’re using an existing backend/framework you’ll need to make sure you’re doing the right things to get the best perf out of that technology. Use jemalloc or tcmalloc instead of the default allocator implementations from the C/C++ runtimes. If you have a huge amount of generated code, take advantage of Linux’s “huge pages” feature.
  • #35: DB query compilation will be an interesting space to watch. With the rise of in-memory DBs, disk is not the bottleneck anymore. I think we’ll see different database systems striving to deliver the highest quality SQL->machine code compilation (and when I say that I’m referring both to execution performance of the machine code, as well as fast compile times)