Scala-like Distributed Collections:
Dumping Time-Series Data With
Apache Spark
Demi Ben-Ari - CTO @ Panorays
About Me
Demi Ben-Ari, Co-Founder & CTO @ Panorays
●  B.Sc. Computer Science – Academic College Tel-Aviv Yaffo
●  Co-Founder “Big Things” Big Data Community
In the Past:
●  Sr. Data Engineer - Windward
●  Team Leader & Sr. Java Software Engineer,
Missile defense and Alert System - “Ofek” – IAF
Interested in almost every kind of technology – A True Geek
Agenda
●  Scala and Spark analogies
●  Data flow and Environment
●  What’s our time series data like?
●  Where we started from - where we got to
○  Problems and our decisions
●  Conclusions
Scala and Spark analogies
Scala is...
●  Functional
●  Object Oriented
●  Statically typed
●  Interoperates well with Java and JavaScript
○  JVM based
DSLs on top of Scala
SBT
Spiral
Scalaz
Slick
Dispatch
Chisel
Specs
Opti{X}
shapeless
ScalaTest
Squeryl
Scala & Spark (Architecture)
Scala REPL / Scala Compiler
Spark Runtime
Scala Runtime
JVM
File System (e.g. HDFS, Cassandra, S3...) / Cluster Manager (e.g. YARN, Mesos)
What kind of DSL is Apache Spark
●  Centered around Collections
●  Immutable data sets equipped with functional transformations
●  These are exactly the Scala collection operations (compared side by side in the sketch below):
map, flatMap, filter, ...
reduce, fold, aggregate, ...
union, intersection, ...
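A hedged side-by-side sketch (not from the deck): the same pipeline written against a plain Scala collection and against a Spark RDD. `sc` is assumed to be an already created SparkContext.

```scala
// Local Scala collection – strict, runs immediately
val local = List(1, 2, 3, 4, 5)
val localResult = local.filter(_ % 2 == 0).map(_ * 2).reduce(_ + _)

// Same-looking operators on a distributed Spark RDD
val distributed = sc.parallelize(1 to 5)
val distributedResult = distributed.filter(_ % 2 == 0).map(_ * 2).reduce(_ + _)
```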
Spark vs. Scala Collections
●  So, is Spark exactly Scala Collections, but running in a Cluster?
●  Not quite. There are two main differences:
○  Spark is Lazy, Scala collections are strict
○  Spark has added functionality, e.g. PairRDDs (see the sketch below)
■  Gives us the power to do lots of operations in the NoSQL distributed world
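A minimal sketch of both differences, assuming an existing SparkContext `sc` and that the entity id sits in the first CSV column (an assumption for illustration only):

```scala
import org.apache.spark.SparkContext._   // PairRDD implicits (needed on older Spark versions)

val lines = sc.textFile("s3n://some-bucket/entity1/201412010000/")  // lazy – nothing is read yet

val countsPerEntity = lines
  .map(line => line.split(",")(0))   // take the entity id column
  .map(id => (id, 1L))
  .reduceByKey(_ + _)                // a PairRDD-only operation – no direct Scala-collections twin

val sample = countsPerEntity.take(10)  // only this action triggers the actual computation
```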
Collections Design Choices
Imperative vs. Functional
Strict vs. Lazy
Examples: java.util (imperative, strict); scala.collections.immutable (functional, strict); Scala, OCaml, C#; and on the lazy side – Spark and Scala Streams / views (a small strict-vs-lazy example follows)
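A quick illustration of the strict/lazy split on plain Scala collections (not from the original slides):

```scala
// Strict: each step materializes a full intermediate List
val strict = (1 to 1000000).toList.map(_ * 2).filter(_ % 3 == 0).take(5)

// Lazy: a view computes only what take(5) actually needs –
// the same evaluation style Spark uses for RDD transformations
val lazyView = (1 to 1000000).view.map(_ * 2).filter(_ % 3 == 0).take(5).toList
```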
Spark is A Multi-Language Platform
●  Why use Scala instead of Python?
○  Native to Spark – can use everything without translation
○  Types help
So Bottom Line…
What’s Spark???
United Tools Platform - Single Framework
Batch, Interactive, Streaming
Spark Standalone Cluster - Architecture
●  Master node – runs the Master process, History Server, etc.
●  Slave nodes – each runs a Worker with its own Memory and Cores (Core 1-4)
(Diagram: one Master coordinating multiple Slave / Worker nodes)
Data flow and Environment
(Our Use Case)
Structure of the Data
●  Geo Locations + Metadata
●  Arriving over time
●  Different types of messages being reported by satellites
●  Encoded
●  Might arrive later than actually transmitted
Data Flow Diagram
External Data Source → Data Pipeline (Raw → Parsed → Entity Resolution Process) → Analytics Layers (building insights on top of the entities: Anomaly Detection, Trends) → Data Output Layer
Environment Description
●  Environments: Dev Testing, Staging, Production (Live)
●  Each environment: a Spark Cluster + OB1K RESTful Java Services
Basic Terms
●  Idempotence is the property of certain operations in mathematics and computer science that can be applied multiple times without changing the result beyond the initial application (a small sketch follows)
●  Function: Same input => Same output
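A hedged sketch of what idempotence means for these batch jobs: the output location depends only on the input time slice, so re-running the same slice recreates the same result instead of appending a second copy. The helper name and path layout below are illustrative (the layout mirrors the S3 example shown later).

```scala
// Hypothetical helper (not from the deck): the output path is a pure function of its inputs
def outputPathFor(entity: String, slice: String): String =
  s"s3n://some-bucket/$entity/$slice"

// e.g. outputPathFor("entity1", "201412010000") == "s3n://some-bucket/entity1/201412010000"
// Same input => same output, no matter how many times the job for that slice runs.
```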
Basic Terms
●  Missing Parts in Time Series Data
◦  Data arriving from the satellites
⚫  May be delayed because of bad transmission
◦  Data vendors delaying the data stream
◦  Calculation in Layers may cause Holes in the Data
●  Calculating the Data layers by time slices
Basic Terms
●  Partitions == Parallelism (example below)
◦  Physical / Logical partitioning
●  Resilient Distributed Datasets (RDDs) == Collections
◦  fault-tolerant collection of elements that can be operated on in
parallel.
◦  Applying immutable transformations and actions over RDDs
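A short sketch of controlling parallelism through partitions, assuming an existing SparkContext `sc`:

```scala
// Partitions drive how many tasks each stage runs over this RDD
val events = sc.textFile("s3n://some-bucket/entity1/201412010000/", minPartitions = 48)
println(events.partitions.length)

val rebalanced = events.filter(_.nonEmpty).repartition(96)  // spread work wider (full shuffle)
val compacted  = rebalanced.coalesce(24)                    // fewer, larger partitions (no full shuffle)
```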
So what’s the problem?
The Problem - Receiving Data
T = 0: Beginning state – no data, and the timeline begins (Level 1, Level 2 and Level 3 entities are all empty)
The Problem - Receiving Data
T = 10: Level 1 entities' data arrives and gets stored (within the computation sliding window)
The Problem - Receiving Data
T = 10: Level 2 entities are created on top of Level 1's data (decreased amount of data); Level 3 entities are created on top of Level 2's data (decreased further)
The Problem - Receiving Data
T = 20: A Level 1 entity's data arrives late – because of the sliding window's back size, Level 2 and Level 3 entities would not be created properly, leaving "Holes" in the data
Solution to the Problem
●  Creating dependent microservices forming a data pipeline
◦  Mainly Apache Spark applications
◦  Services are only dependent on the Data - not on the previous service’s run
●  Forming a structure and scheduling of a “Back Sliding Window” (see the sketch below)
◦  Know your data and its relevance through time
◦  Don’t try to foresee the future – it might bias the results
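A minimal sketch of the "Back Sliding Window" scheduling idea, assuming 10-minute time slices and an assumed back window of 6 slices; every run reprocesses the latest slice plus the back window, so late-arriving Level 1 data still gets folded into Level 2/3 entities.

```scala
import java.time.Instant
import java.time.temporal.ChronoUnit

val sliceMinutes   = 10L  // size of one computation slice
val backWindowSize = 6    // how many past slices to recompute each run (assumed value)

// Round "now" down to the start of the current 10-minute slice
def currentSlice(now: Instant): Instant = {
  val epochMinutes = now.getEpochSecond / 60
  Instant.ofEpochSecond((epochMinutes - epochMinutes % sliceMinutes) * 60)
}

// The slices every scheduled run should (re)compute
def slicesToRecompute(now: Instant): Seq[Instant] =
  (0 to backWindowSize).map(i => currentSlice(now).minus(i * sliceMinutes, ChronoUnit.MINUTES))
```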
Starting point & Infrastructure
How we started?
●  Spark Standalone – via ec2 scripts
◦  Around 5 nodes (r3.xlarge instances)
◦  Didn’t want to keep a persistent HDFS – Costs a lot
◦  100 GB (per day) => ~150 TB for 4 years
◦  Cost for server per year (r3.xlarge):
●  On demand: ~2900$
●  Reserved: ~1750$
●  Know your costs: http://www.ec2instances.info/
Decision
●  Working with S3 as the persistence layer (a read/write sketch follows this slide)
◦  Pay extra for
●  PUT ($0.005 per 1,000 requests)
●  GET ($0.004 per 10,000 requests)
◦  150TB => ~$210 for 4 years of Data
●  Same format as HDFS (CSV files)
◦  s3n://some-bucket/entity1/201412010000/part-00000
◦  s3n://some-bucket/entity1/201412010000/part-00001
◦  ……
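A hedged sketch of writing one slice of one entity as CSV part files to S3 and reading it back; `sc` is an assumed SparkContext and the column order is made up for illustration.

```scala
val entity1 = sc.parallelize(Seq(("id-1", "201412010000", 32.07, 34.79)))

entity1
  .map { case (id, slice, lat, lon) => s"$id,$slice,$lat,$lon" }
  .saveAsTextFile("s3n://some-bucket/entity1/201412010000")   // writes part-00000, part-00001, ...

val restored = sc.textFile("s3n://some-bucket/entity1/201412010000").map(_.split(","))
```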
What about the serving?
MongoDB for Serving
(Diagram: Spark Cluster – Master + Worker 1 … Worker N – writes to and reads from a MongoDB Replica Set)
Spark Slave - Server Specs
●  Instance Type: r3.xlarge
●  CPUs: 4
●  RAM: 30.5GB
●  Storage: ephemeral
●  Amount: 10+
MongoDB - Server Specs
●  MongoDB version: 2.6.1
●  Instance Type: m3.xlarge (AWS)
●  CPUs: 4
●  RAM: 15GB
●  Storage: EBS
●  DB Size: ~500GB
●  Collection Indexes: 5 (4 compound)
The Problem
●  Batch jobs
◦  Should run for 5-10 minutes in total
◦  Actual - runs for ~40 minutes
●  Why?
◦  ~20 minutes to write with the Java mongo driver – Async
(Unacknowledged)
◦  ~20 minutes to sync the journal
◦  Total: ~40 minutes of the DB being unavailable
◦  No batch process response and no UI serving
Alternative Solutions
●  Sharded MongoDB (With replica sets)
◦  Pros:
●  Increases Throughput by the amount of shards
●  Increases the availability of the DB
◦  Cons:
●  Very hard to manage DevOps-wise (for a small team of developers)
●  High cost of servers – because each shard needs 3 replicas
Workflow with MongoDB
(Diagram: Spark Cluster – Master + Worker 1 … Worker N – writes to and reads from MongoDB through its Master)
Our DevOps – After that solution
We had no DevOps guy at that time at all ☹
Alternative Solutions
●  Apache Cassandra
◦  Pros:
●  Very large developer community
●  Linearly scalable Database
●  No single master architecture
●  Proven working with distributed engines like Apache Spark
◦  Cons:
●  We had no experience at all with the Database
●  No Geo-Spatial Index – needed to implement it ourselves
The Solution
●  Migration to Apache Cassandra
●  Easily create a Cassandra cluster using the DataStax Community AMI on AWS
◦  First easy step – using the spark-cassandra-connector (easy bootstrap move between Spark ⬄ Cassandra; a minimal sketch follows)
◦  Creating a monitoring dashboard for Cassandra
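A minimal sketch of the spark-cassandra-connector bootstrap; `sc` is an assumed SparkContext, and the keyspace, table and column names are made up for illustration.

```scala
import com.datastax.spark.connector._  // adds saveToCassandra / cassandraTable to RDDs and SparkContext

// Hypothetical schema: keyspace "tracking", table entity_positions(entity_id, ts, lat, lon)
case class EntityPosition(entityId: String, ts: Long, lat: Double, lon: Double)

val positions = sc.parallelize(Seq(EntityPosition("id-1", 1417392000000L, 32.07, 34.79)))

// Spark -> Cassandra
positions.saveToCassandra("tracking", "entity_positions",
  SomeColumns("entity_id", "ts", "lat", "lon"))

// Cassandra -> Spark
val readBack = sc.cassandraTable[EntityPosition]("tracking", "entity_positions")
```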
Workflow with Cassandra
(Diagram: Spark Cluster – Worker 1 … Worker N – writes to and reads from the Cassandra Cluster directly)
Result
●  Performance improvement
◦  Batch write parts of the job run in 3 minutes instead of ~40 minutes in MongoDB
●  Took 2 weeks to go from “Zero to Hero”, and to ramp up a running solution that works without glitches
So what’s the problem
(Again)?
Transferring the Heaviest Process
●  Micro service that runs every 10 minutes
●  Writes to Cassandra 30GB per iteration
◦  (Replication factor 3 => 90GB)
●  At first took us 18 minutes to do all of the writes
◦  Not acceptable in a 10-minute process
Cluster On OpsCenter - Before
Transferring the Heaviest Process
●  Solutions
◦  We chose the i2.xlarge
◦  Optimization of the Cluster
◦  Changing the JDK to Java-8
●  Changing the GC algorithm to G1
◦  Tuning the operating system
●  ulimit, removing the swap
◦  Write time went down to ~5 minutes (for 30GB, RF=3)
Sounds good, right? I don’t think so
CloudWatch - After Tuning
The Solution
●  Taking the same data model that we held in Cassandra (all of the raw data per 10 minutes) and putting it on S3
◦  Write time went down from ~5 minutes to 1.5 minutes
●  Added another process, not dependent on the main one, that runs every 15 minutes
◦  Reads from S3, downscales the data and writes it to Cassandra for serving (sketched below)
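A hedged sketch of that decoupled 15-minute process: read the raw slice from S3, thin it out, and write only the downscaled result to Cassandra for serving. The keyspace/table names and the downscaling rule (keep the latest point per entity) are assumptions, and `sc` is an assumed SparkContext.

```scala
import com.datastax.spark.connector._

// Raw 10-minute slice that the heavy process already dumped to S3 (CSV, as above)
val raw = sc.textFile("s3n://some-bucket/entity1/201412010000")
  .map(_.split(","))
  .map(cols => (cols(0), (cols(1).toLong, cols(2).toDouble, cols(3).toDouble)))  // (entityId, (ts, lat, lon))

// Downscale: keep only the latest position per entity (one possible downscaling rule)
val downscaled = raw
  .reduceByKey((a, b) => if (a._1 >= b._1) a else b)
  .map { case (id, (ts, lat, lon)) => (id, ts, lat, lon) }

// Write the much smaller result to Cassandra for UI serving
downscaled.saveToCassandra("serving", "latest_positions",
  SomeColumns("entity_id", "ts", "lat", "lon"))
```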
How it looks after all?
(Diagram: Raw, Parsed and Static/Aggregated Data feed the Spark Analytics Layers and the Heavy Fusion Process; the Downscaled Data is what gets served to the UI)
Conclusion
●  Always give an estimate to your data
◦  Frequency
◦  Volume
◦  Arrangement of the previous phase
●  There is no “Best” persistence layer
◦  There is the right one for the job
◦  Don’t overload an existing solution
Conclusion
●  Spark is a great framework for distributed collections
◦ Fully functional API
◦ Can perform imperative actions
● “With great power,
comes lots of partitioning”
◦ Control your work and data distribution via partitions (sketched below)
●  https://www.pinterest.com/pin/155514993354583499/ (Thanks)
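A small sketch of controlling data distribution explicitly with a partitioner (not from the original deck); `sc` is an assumed SparkContext.

```scala
import org.apache.spark.HashPartitioner

// Keep all records of the same entity in the same partition,
// and keep that layout around for later joins/aggregations
val byEntity = sc.parallelize(Seq(("id-1", 1), ("id-2", 2), ("id-1", 3)))
  .partitionBy(new HashPartitioner(48))
  .cache()

println(byEntity.partitions.length)  // 48
```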
Questions?
Thanks! my contact:
—Demi Ben-Ari
●  LinkedIn
●  Twitter: @demibenari
●  Blog: http://progexc.blogspot.com/
●  Email: demi.benari@gmail.com
●  “Big Things” Community
–Meetup, YouTube, Facebook, Twitter