S3, Cassandra or Outer Space? Dumping Time Series Data using Spark
Demi Ben-Ari - VP R&D @ Panorays
Tel-Aviv, 30 March 2017
About Me
Demi Ben-Ari, Co-Founder & VP R&D @ Panorays
● B.Sc. Computer Science – Academic College Tel-Aviv Yaffo
● Co-Founder
○ “Big Things” Big Data Community
○ Google Developer Group Cloud
In the Past:
● Sr. Data Engineer - Windward
● Team Leader & Sr. Software Engineer
Missile defense and Alert System - “Ofek” – IAF
Interested in almost every kind of technology – A True Geek
Agenda
● Apache Spark brief overview and Catch Up
● Data flow and Environment
● What’s our time series data like?
● Where we started from - where we got to
○ Problems and our decisions
○ Evolution of the solution
● Conclusions
Spark Brief Overview & Catchup
Scala & Spark (Architecture)
Scala REPL / Scala Compiler
Spark Runtime
Scala Runtime
JVM
File System (e.g. HDFS, Cassandra, S3...)
Cluster Manager (e.g. YARN, Mesos)
What kind of DSL is Apache Spark
● Centered around Collections
● Immutable data sets equipped with functional transformations
● These are exactly the Scala collection operations
map, flatMap, filter, ...
reduce, fold, aggregate, ...
union, intersection, ...
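These are literally the same method names you would chain on a local Scala collection. A minimal sketch of the parallel, runnable on a local Spark build (names and numbers are illustrative, not from the talk):

import org.apache.spark.{SparkConf, SparkContext}

object CollectionsLikeDsl {
  def main(args: Array[String]): Unit = {
    val sc = new SparkContext(new SparkConf().setAppName("dsl-demo").setMaster("local[*]"))

    // A plain Scala collection...
    val local = (1 to 10).filter(_ % 2 == 0).map(_ * 10).reduce(_ + _)

    // ...and the same chain of transformations on a distributed RDD
    val distributed = sc.parallelize(1 to 10)
      .filter(_ % 2 == 0)   // keep even numbers
      .map(_ * 10)          // transform each element
      .reduce(_ + _)        // action: aggregate the results back to the driver

    println(s"local = $local, distributed = $distributed")
    sc.stop()
  }
}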
Spark is A Multi-Language Platform
● Why use Scala instead of Python?
○ Native to Spark - can use everything without translation
○ Types help
So Bottom Line…
What’s Spark???
United Tools Platform
United Tools Platform - Single Framework
Batch, Interactive, Streaming - a single framework
Data flow and Environment
(Our Use Case)
Structure of the Data
● Maritime Analytics Platform
● Geo Locations + Metadata
● Arriving over time
● Different types of messages being reported by satellites
● Encoded (For Compression purposes)
● Might arrive later than actually transmitted
Data Flow Diagram
External Data Source → Data Pipeline (Raw → Parsed) → Entity Resolution Process → Analytics Layers (building insights on top of the entities) → Data Output Layer (Anomaly Detection, Trends)
Environment Description
Environments (each with its own cluster): Dev Testing, Staging, Live (Production)
OB1K - RESTful Java Services
Basic Terms
● Missing Parts in Time Series Data
◦ Data arriving from the satellites might be delayed because of bad transmission
◦ Data vendors delaying the data stream
● Calculation in Layers may cause Holes in the Data
◦ Calculating the Data layers by time slices
Basic Terms
● Idempotence is the property of certain operations in mathematics and computer science that can be applied multiple times without changing the result beyond the initial application.
● Function: Same input => Same output
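A toy illustration of the property (hypothetical functions, not from the talk): a write keyed by its time slice is idempotent, a blind append is not.

// Idempotent: applying it twice leaves the same state as applying it once.
def upsert(store: Map[String, Double], slice: String, value: Double): Map[String, Double] =
  store + (slice -> value)

// Not idempotent: every application changes the result further.
def append(store: List[Double], value: Double): List[Double] =
  value :: store

val once  = upsert(Map.empty, "201412010000", 42.0)
val twice = upsert(once, "201412010000", 42.0)
assert(once == twice) // re-running the same job over the same slice does no harm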
Basic Terms
● Partitions == Parallelism
◦ Physical / Logical partitioning
● Resilient Distributed Datasets (RDDs) == Collections
◦ A fault-tolerant collection of elements that can be operated on in parallel.
◦ Applying immutable transformations and actions over RDDs
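A minimal sketch of partitions as the unit of parallelism, assuming a SparkContext sc (e.g. the one the spark-shell provides); the numbers are illustrative:

val rdd = sc.parallelize(1 to 1000000, numSlices = 8) // 8 partitions => up to 8 parallel tasks
println(rdd.getNumPartitions)                         // 8

// Transformations are immutable: they return new RDDs and leave the source untouched
val evens = rdd.repartition(64).filter(_ % 2 == 0)
println(evens.count())                                // action: triggers the distributed computation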
What are RDDs, really?
So…..
The Problem - Receiving DATA
Beginning state, no data, and the timeline
begins
T = 0
Level 3 Entity
Level 2 Entity
Level 1 Entity
The Problem - Receiving DATA
T = 10
Level 3 Entity
Level 2 Entity
Level 1 Entity
Computation sliding window size
Level 1 entities’ data arrives and gets stored
The Problem - Receiving DATA
T = 10
Level 3 Entity
Level 2 Entity
Level 1 Entity
Computation sliding window size
Level 3 entities are created on
top of Level 2’s Data
(Decreased amount of data)
Level 2 entities are created
on top of Level 1’s Data
(Decreased amount of
data)
The Problem - Receiving DATA
T = 20
Level 3 Entity
Level 2 Entity
Level 1 Entity
Computation sliding window size
Because of the sliding window’s
back size, level 2 and 3 entities
would not be created properly and
there would be “Holes” in the Data
Level 1 entity's
data arriving late
Solution to the Problem
● Creating dependent microservices forming a data pipeline
◦ Mainly Apache Spark applications
◦ Services are only dependent on the Data - not the previous
service’s run
● Forming a structure and scheduling of “Back Sliding Window”
◦ Know your data and its relevance through time
◦ Don’t try to foresee the future – it might Bias the results
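One way to read the "Back Sliding Window" in code: every run recomputes not only the newest time slice but also a fixed number of slices backwards, so Level 1 data that arrives late is picked up by a later run. A hypothetical sketch (slice size and window depth are invented for illustration):

import java.time.{Duration, Instant}

// Recompute every slice that falls inside the back window, not just the newest one.
def slicesToRecompute(now: Instant,
                      sliceSize: Duration = Duration.ofHours(6),
                      backWindow: Duration = Duration.ofDays(7)): Seq[Instant] = {
  val sliceMillis = sliceSize.toMillis
  val newest = Instant.ofEpochMilli((now.toEpochMilli / sliceMillis) * sliceMillis) // align to slice start
  val count  = (backWindow.toMillis / sliceMillis).toInt
  (0 to count).map(i => newest.minusMillis(i.toLong * sliceMillis))
}

// Each scheduled run re-runs the layer's computation for every returned slice,
// relying on idempotent writes so that re-processing an already-seen slice is harmless.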
What it looks like in the end...
Level 3 Entity
Level 2 Entity
Level 1 Entity
6 Hour time slot
12 Hours of Data
A Week of Data
More than a Week of Data
Starting point & Infrastructure
How we started
● Spark Standalone – via ec2 scripts
◦ Around 5 nodes (r3.xlarge instances)
◦ Didn’t want to keep a persistent HDFS – Costs a lot
◦ 100 GB (per day) => ~150 TB for 4 years
◦ Cost for server per year (r3.xlarge):
- On demand: ~$2,900
- Reserved: ~$1,750
● Know your costs: http://www.ec2instances.info/
Know Your Costs
Decision
● Working with S3 as the persistence layer
◦ Pay extra for
- PUT ($0.005 per 1,000 requests)
- GET ($0.004 per 10,000 requests)
◦ 150TB => ~$210 for 4 years of Data
● Same format as HDFS (CSV files)
◦ s3n://some-bucket/entity1/201412010000/part-00000
◦ s3n://some-bucket/entity1/201412010000/part-00001
◦ ……
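A hedged sketch of what writing and reading a time slice as CSV parts on S3 can look like with plain RDDs; the talk only shows the resulting s3n:// paths, so the bucket name, record layout, and the assumption of a SparkContext sc with s3n credentials configured are all illustrative:

import org.apache.spark.rdd.RDD

case class Report(entityId: String, lat: Double, lon: Double, ts: Long)

def writeSlice(reports: RDD[Report], slice: String): Unit =
  reports
    .map(r => s"${r.entityId},${r.lat},${r.lon},${r.ts}")   // plain CSV lines
    .saveAsTextFile(s"s3n://some-bucket/entity1/$slice")    // => .../part-00000, part-00001, ...

def readSlice(slice: String): RDD[Report] =
  sc.textFile(s"s3n://some-bucket/entity1/$slice")
    .map(_.split(','))
    .map(a => Report(a(0), a(1).toDouble, a(2).toDouble, a(3).toLong))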
What about the serving?
MongoDB for Serving
Spark Cluster (Master, Worker 1 ... Worker N) writes to a MongoDB Replica Set; serving reads come from the same Replica Set
Spark Slave - Server Specs
● Instance Type: r3.xlarge
● CPUs: 4
● RAM: 30.5GB
● Storage: ephemeral
● Amount: 10+
MongoDB - Server Specs
● MongoDB version: 2.6.1
● Instance Type: m3.xlarge (AWS)
● CPUs: 4
● RAM: 15GB
● Storage: EBS
● DB Size: ~500GB
● Collection Indexes: 5 (4 compound)
The Problem
● Batch jobs
◦ Should run for 5-10 minutes in total
◦ Actual - runs for ~40 minutes
● Why?
◦ ~20 minutes to write with the Java mongo driver – Async
(Unacknowledged)
◦ ~20 minutes to sync the journal
◦ Total: ~ 40 Minutes of the DB being unavailable
◦ No batch process response and no UI serving
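For context, a "fire and forget" write with the MongoDB Java driver looks roughly like this; a sketch only, not the talk's actual code, and the host, database and collection names are made up:

import com.mongodb.{MongoClient, WriteConcern}
import org.bson.Document

val client = new MongoClient("mongo-host")                    // hypothetical host
val coll = client.getDatabase("serving")
  .getCollection("entities")
  .withWriteConcern(WriteConcern.UNACKNOWLEDGED)              // unacknowledged: no response from the server

// The driver returns immediately, but the server still has to apply the writes
// and sync its journal - which is where the two ~20 minute chunks above went.
coll.insertOne(new Document("entityId", "e1").append("ts", 201412010000L))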
Alternative Solutions
● Sharded MongoDB (With replica sets)
◦ Pros:
- Increases Throughput by the amount of shards
- Increases the availability of the DB
◦ Cons:
- Very hard to manage DevOps wise (for a small team of
developers)
- High cost of servers – because each shard needs 3 replicas
Workflow with MongoDB
Spark Cluster (Master, Worker 1 ... Worker N) writes to and reads from MongoDB (Master) during the job
Our DevOps – After that solution
We had no DevOps guy at that time at all ☹
Alternative Solutions
● Apache Cassandra
◦ Pros:
- Very large developer community
- Linearly scalable Database
- No single master architecture
- Proven working with distributed engines like Apache Spark
◦ Cons:
- We had no experience at all with the Database
- No Geo Spatial Index – needed to implement it ourselves
The Solution
● Migration to Apache Cassandra
● Easily create a Cassandra cluster using the DataStax Community AMI on AWS
◦ First easy step – Using the spark-cassandra-connector
(Easy bootstrap move to Spark ⬄ Cassandra)
◦ Creating a monitoring dashboard to Cassandra
● Second phase:
◦ Creating a self managed and self provisioned Cassandra
Cluster
◦ Tuning the hell out of it!!!
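The "easy bootstrap" with the spark-cassandra-connector is essentially a one-liner on an RDD. A minimal sketch (keyspace, table, column names and the seed node address are illustrative):

import com.datastax.spark.connector._        // adds saveToCassandra / cassandraTable to Spark
import org.apache.spark.{SparkConf, SparkContext}

val conf = new SparkConf()
  .setAppName("to-cassandra")
  .set("spark.cassandra.connection.host", "10.0.0.1")   // hypothetical seed node

val sc = new SparkContext(conf)

// Write: each tuple becomes a row in the target table
sc.parallelize(Seq(("e1", 201412010000L, 32.1, 34.8)))
  .saveToCassandra("analytics", "entities", SomeColumns("entity_id", "ts", "lat", "lon"))

// Read back as an RDD for the next job in the pipeline
val rows = sc.cassandraTable("analytics", "entities").where("entity_id = ?", "e1")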
Workflow with Cassandra
Spark Cluster (Worker 1 ... Worker N) writes to and reads from the Cassandra Cluster
Result
● Performance improvement
◦ Batch write parts of the job run in 3 minutes instead of ~ 40
minutes in MongoDB
● Took 2 weeks to go from “Zero to Hero”, and to ramp up a running solution that works without glitches
So (Again)?
Transferring the Heaviest Process
● Micro service that runs every 10 minutes
● Writes to Cassandra 30GB per iteration
◦ (Replication factor 3 => 90GB)
● At first it took us 18 minutes to do all of the writes
◦ Not Acceptable in a 10 minute process
Cluster On OpsCenter - Before
Transferring the Heaviest Process
● Solutions
◦ We chose the i2.xlarge
◦ Optimization of the Cluster
◦ Changing the JDK to Java-8
- Changing the GC algorithm to G1
◦ Tuning the operating system
- Ulimit, removing the swap
◦ Write time went down to ~5 minutes (For 30GB RF=3)
Sounds good, right? I don’t think so
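The tuning list above is server-side; on the Spark side the connector's write path has knobs of its own. The values below are only illustrative, not the settings used in the talk:

import org.apache.spark.SparkConf

val conf = new SparkConf()
  .set("spark.cassandra.connection.host", "10.0.0.1")          // hypothetical seed node
  .set("spark.cassandra.output.concurrent.writes", "8")        // parallel batches per task
  .set("spark.cassandra.output.batch.size.rows", "auto")       // let the connector size the batches
  .set("spark.cassandra.output.throughput_mb_per_sec", "50")   // throttle writes so serving reads stay responsive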
CloudWatch After Tuning
The Solution
● Taking the same Data Model that we held in Cassandra (all of the Raw data per 10 minutes) and putting it on S3
◦ Write time went down from ~5 minutes to 1.5 minutes
● Added another process, not dependent on the main one, that runs every 15 minutes
◦ Reads from S3, downscales the data and writes it to Cassandra for serving
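A hedged sketch of the 15-minute job's shape: read the raw slice back from S3, downscale it, and write only the reduced result to Cassandra for serving. The aggregation (keeping one latest report per entity), the names, and the assumption of a SparkContext sc are all invented for illustration:

import com.datastax.spark.connector._

def downscaleAndServe(slice: String): Unit =
  sc.textFile(s"s3n://some-bucket/entity1/$slice")
    .map(_.split(','))
    .map(a => (a(0), (a(3).toLong, a(1).toDouble, a(2).toDouble)))   // entityId -> (ts, lat, lon)
    .reduceByKey((l, r) => if (l._1 >= r._1) l else r)               // keep only the newest report per entity
    .map { case (id, (ts, lat, lon)) => (id, ts, lat, lon) }
    .saveToCassandra("serving", "entities_downscaled",
      SomeColumns("entity_id", "ts", "lat", "lon"))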
How it looks after all?
(Diagram) Raw and Parsed data feed the Spark Analytics Layers and a Heavy Fusion Process; Static / Aggregated Data and Downscaled Data are what the UI Serving reads
Conclusion
● Always give an estimate to your data
◦ Frequency
◦ Volume
◦ Arrangement of the previous phase
● There is no “Best” persistence layer
◦ There is the right one for the job
◦ Don’t overload an existing solution
Conclusion
● Spark is a great framework for distributed collections
◦ Fully functional API
◦ Can perform imperative actions
● “With great power,
comes lots of partitioning”
◦ Control your work and
data distribution via partitions
● https://www.pinterest.com/pin/155514993354583499/ (Thanks)
Questions?
● LinkedIn
● Twitter: @demibenari
● Blog:
http://progexc.blogspot.com/
● demi.benari@gmail.com
● “Big Things” Community
Meetup, YouTube, Facebook,
Twitter
● GDG Cloud