PinTrace
Distributed Tracing @ Pinterest
Suman Karumuri
Proprietary and Confidential
● About me
● What is distributed tracing?
● Why PinTrace?
● Pintrace architecture
● Challenges and Lessons
● Contributions
● Q & A.
Agenda
● Lead for Tracing effort at Pinterest.
● Former Twitter Zipkin (open source distributed tracing project) lead.
● Former Twitter, Facebook, Amazon, Yahoo, Goldman Sachs Engineer.
● Published papers on automatic trace instrumentation@Brown CS.
● Passionate about Distributed Tracing and Distributed cloud infrastructure.
About me
Distributed system
Client Service 1
Service 2
Service 3
10th Rule of Distributed System Monitoring
“Any sufficiently complicated distributed system
contains an ad-hoc, informally-specified, siloed
implementation of causal tracing.”
- Rodrigo Fonseca
Why Distributed tracing?
What is distributed tracing?
Client Service 1 Service 2
ts1, r1, client req sent
ts2, r1, server req rcvd
ts7, r1, server resp sent
ts3, r1, client req sent
ts4, r1, server req rcvd
ts5, r1, server resp sent
ts6, r1, client resp rcvd
ts8, r1, client resp rcvd
Structured logging on steroids.
Annotation
Client Service 1 Service 2
ts1, r1, CS
ts2, r1, server req rcvd
ts7, r1, server resp sent
ts3, r1, client req sent
ts4, r1, server req rcvd
ts5, r1, server resp sent
ts6, r1, client resp rcvd
ts8, r1, client resp rcvd
Timestamped event name with a structured payload.
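As a sketch, an annotation can be modeled as a small record: a timestamp, an event name (Zipkin's core annotations are "cs", "sr", "ss", "cr"), and a structured payload. The `Annotation` class and field names below are illustrative, not Pintrace's actual types.

```python
from dataclasses import dataclass, field

# Hypothetical sketch of a Zipkin-style annotation: a timestamped
# event name plus a structured key/value payload.
@dataclass
class Annotation:
    timestamp_us: int  # event time, microseconds since epoch
    value: str         # event name, e.g. "cs" (client send)
    payload: dict = field(default_factory=dict)  # structured payload

ann = Annotation(timestamp_us=1_000_000, value="cs",
                 payload={"http.path": "/v3/home_feed"})
print(ann.value)  # -> cs
```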
Span
Client Service 1 Service 2
ts1, r1, s1, -, CS
ts2, r1, s1, - , SR
ts7, r1, s1, - , SS
ts3, r1, client req sent
ts4, r1, server req rcvd
ts5, r1, server resp sent
ts6, r1, client resp rcvd
ts8, r1, s1, -, CR
A logical unit of work captured as a set of annotations. Ex: A request response pair.
Trace
Client Service 1 Service 2
ts1, r1, s1, 0, CS
ts2, r1, s1, 0, SR
ts7, r1, s1, 0, SS
ts3, r1, s2, s1, CS
ts4, r1, s2, s1, SR
ts5, r1, s2, s1, SS
ts6, r1, s2, s1, CR
ts8, r1, s1, 0, CR
A DAG of spans that belong to the same request.
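The span records above can be grouped back into a trace by following parent ids. This is a minimal sketch assuming the (trace id, span id, parent id) fields shown on the slide, with parent id 0 marking the root; `build_trace_tree` is a hypothetical helper, not Pintrace code.

```python
from collections import defaultdict

# Span records in the (trace_id, span_id, parent_id) form shown above.
spans = [
    {"trace_id": "r1", "span_id": "s1", "parent_id": 0},
    {"trace_id": "r1", "span_id": "s2", "parent_id": "s1"},
]

def build_trace_tree(spans):
    """Group the spans of one trace into a parent -> children mapping."""
    children = defaultdict(list)
    for span in spans:
        children[span["parent_id"]].append(span["span_id"])
    return dict(children)

print(build_trace_tree(spans))  # -> {0: ['s1'], 's1': ['s2']}
```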
Tracer: A piece of software that traces a request and generates spans.
Sampler: Selects which requests to trace.
Reporter: Gathers the spans from a tracer and sends them to the collector.
Span aggregation pipeline: A mechanism to transfer spans from the reporter to the collector.
Collector: A service that gathers spans from the various services via the pipeline.
Span storage: A backend used by the collector to store the spans.
Client/UI: An interface to search, access, and visualize trace data.
Components of Tracing infrastructure
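Two of these components, the sampler and the reporter, can be sketched in a few lines. The class names, the `transport` callable, and the batching behavior are assumptions for illustration, not Pintrace's actual API.

```python
import random

class ProbabilisticSampler:
    """Sampler: selects which requests to trace, at a fixed rate."""
    def __init__(self, rate):
        self.rate = rate  # fraction of requests to trace, 0.0..1.0

    def is_sampled(self, _request_id):
        return random.random() < self.rate

class BufferingReporter:
    """Reporter: gathers spans and ships them toward the collector."""
    def __init__(self, transport, batch_size=100):
        self.transport = transport  # e.g. a Kafka-producer wrapper
        self.batch = []
        self.batch_size = batch_size

    def report(self, span):
        self.batch.append(span)
        if len(self.batch) >= self.batch_size:
            self.flush()

    def flush(self):
        if self.batch:
            self.transport(self.batch)
            self.batch = []

# A rate of 1.0 samples every request.
print(ProbabilisticSampler(1.0).is_sampled("req"))  # -> True

sent = []
reporter = BufferingReporter(transport=sent.append, batch_size=2)
reporter.report({"span_id": "s1"})
reporter.report({"span_id": "s2"})  # batch is full, flushes to transport
print(len(sent[0]))  # -> 2
```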
Motivation:
Success of project Prestige, HBase debugging, Pinpoint.
Make backend faster and cheaper. Speed => More engagement.
Loading home feed consists of ~50 backend services.
Uses of Traces
Understand what we built: service dependency graphs.
Understand where a request spent its time - for debugging, tuning, and cost attribution.
Improve time to triage. Ex: which service caused this request to fail? Why is the search API slow
after a recent deployment?
Why PinTrace?
PinTrace architecture
Varnish
ngapi
Singer -
Kafka pipeline
(Spark) Span aggregation
Trace processing & storage
ES
Trace store
Zipkin UI The Wall
Py thrift tracer
Py Span logger
Java service(s)
Java thrift tracer
Java span logger
Java Service
Python service
Go service
MySQL
Memcached
Decider
Ensuring data quality.
Tracing infrastructure can be fragile since it has a lot of moving parts.
The more customized the pipeline, the harder it is to ensure data quality.
Use metrics and alerting to monitor the pipeline for correctness.
E2E monitoring: Sentinel
Traces a known request path periodically and checks the resulting trace for correctness.
The known request path should have all known language/protocol combinations.
Measures end to end trace latency.
Testing
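A Sentinel-style check can be sketched as: issue a known request, fetch its trace, and verify that every expected service shows up. `check_trace` and the service names below are hypothetical stand-ins for the real request-issuing and trace-fetching steps.

```python
# Illustrative sketch of a Sentinel-style end-to-end check. In the real
# pipeline the trace would be fetched from span storage after issuing a
# known request; here it is an in-memory list of spans.
def check_trace(trace, expected_services):
    """Return the set of expected services missing from the trace."""
    seen = {span["service"] for span in trace}
    return expected_services - seen

trace = [{"service": "ngapi"}, {"service": "java-service"}]
missing = check_trace(trace, {"ngapi", "java-service", "mysql"})
print(sorted(missing))  # -> ['mysql']
```

An empty result means the trace covered every expected language/protocol hop; a non-empty result points at the broken leg of the pipeline.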
Collecting a lot of trace data but providing very few insights.
Spending time scaling the trace collection infrastructure rather than providing value.
Using tracing when simpler methods would suffice.
Use simpler time series metrics for counting the number of API calls.
Tracing is expensive:
Higher dark latency compared to other methods.
Tracing infrastructure is expensive since we are dealing with an order of magnitude more data.
Tracing tarpit
Tracing is not the solution to a problem, it’s a tool.
Build tools around traces to solve a problem.
Should augment our time series metrics and logging platform.
Traces should only be used for computing distributed metrics.
Tracing infrastructure should be cheap and easy to run.
Quality of traces is more important than quantity of traces.
Do all processing and analysis of traces on ingestion and avoid post-processing.
Our Tracing philosophy
Instrumentation is hard.
Instrumenting the framework is less brittle, agnostic to business logic, and more reusable.
Even after instrumenting the framework, there will be snowflakes.
The more opinionated the framework, the easier it is to instrument. Ex: Java/Go vs Python.
Need instrumentation for every language/protocol combination.
Use a framework that is already enabled for tracing.
Instrumentation challenges
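Framework-level instrumentation can be sketched as a decorator that propagates trace context at every RPC call site instead of touching business logic. The B3-style header names are the ones Zipkin uses on the wire; the `traced` wrapper and the span-id counter are illustrative assumptions, not Pintrace's tracer.

```python
import functools
import itertools

# Hypothetical framework hook: wrap the RPC call path once, so every
# outgoing call carries trace context without per-service changes.
_span_ids = itertools.count(1)

def traced(rpc_fn):
    @functools.wraps(rpc_fn)
    def wrapper(headers, *args, **kwargs):
        # The caller's span becomes the parent; mint a fresh span id.
        parent = headers.get("X-B3-SpanId", "0")
        headers = dict(headers,
                       **{"X-B3-ParentSpanId": parent,
                          "X-B3-SpanId": str(next(_span_ids))})
        return rpc_fn(headers, *args, **kwargs)
    return wrapper

@traced
def call_backend(headers):
    # Stand-in for a real thrift/HTTP client call.
    return headers

out = call_backend({"X-B3-TraceId": "r1", "X-B3-SpanId": "s1"})
print(out["X-B3-ParentSpanId"])  # -> s1
```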
Deploying tracing at scale is a complex and challenging process.
Needs a company wide span aggregation pipeline.
Enabling and deploying instrumentation across several Java/Python services is like herding cats.
Scaling the tracing backend.
Dealing with multiple stakeholders and doing things the “right” way.
Can’t see its benefits or ensure data quality until it is fully deployed.
Do deployments along key request paths first for best results.
Deployment challenges
User Education is very important.
Most people use tracing for solving needle-in-a-haystack problems; SREs get tracing.
It is still an esoteric concept even for good engineers.
Explain the use cases on when they can use tracing.
Insights into performance bottlenecks or global visibility.
Tracing landscape is confusing.
Distributed tracing/Zipkin landscape is rapidly evolving and can be confusing.
Zipkin UI has some rough edges.
Lessons learned
Data quality
For identifying performance bottlenecks from traces, relative durations are most important.
When deployed in the right order, even partial tracing is useful.
Trace errors are OK when they occur in leaf spans.
Tracing Infrastructure
Tracing infrastructure is a Tier 2 service in almost all companies.
Tracing is expensive.
Lessons learned (contd)
● Identified that we use a really old version of finagle-memcache client that is
blocking the finagle upgrade.
● Identified ~7% of Java code as dead code and deleted 20KLoC so far.
● First company wide log/span aggregation pipeline.
● Identified a synchronous MySQL client; now moving to an asynchronous one.
● Local Zipkin setup: debugging HBase latency issues.
Wins
Future work
● Short term
○ Finish Python instrumentation.
○ Open source the Spark backend.
○ Robust and scalable backend:
■ Trace all employee requests by default.
■ Make it easy to look at trace data for a request in the Pinterest app and web UI.
● Medium term
○ End-to-end traces to measure user-perceived wait time. Ex:
Mobile/Browser -> Java/Python/Go -> MySQL/Memcached/HBase.
○ Apply tracing to other use cases like Jenkins build times.
○ Improve Zipkin UI.
Q&A
Thank you!
skarumuri@pinterest.com
Btw, we are hiring!
