Lessons Learned
Running Hadoop and
Spark in Docker
Thomas Phelan
Chief Architect, BlueData
@tapbluedata
September 29, 2016
Outline
• Docker Containers and Big Data
• Hadoop and Spark on Docker: Challenges
• How We Did It: Lessons Learned
• Key Takeaways
• Q & A
A Better Way to Deploy Big Data
Traditional Approach
IT: Manufacturing, Sales, R&D, Services
• < 30% utilization
• Weeks to build each cluster
• Duplication of data
• Management complexity
• Painful, complex upgrades
Hadoop and Spark on Docker
A New Approach
Manufacturing, Sales, R&D, Services with BI/Analytics Tools
• > 90% utilization
• No duplication of data
• Simplified management
• Multi-tenant
• Self-service, on-demand clusters
• Simple, instant upgrades
Deploying Multiple Big Data Clusters
Data scientists want flexibility:
• Different versions of Hadoop, Spark, et al.
• Different sets of tools
IT wants control:
• Multi-tenancy
- Data security
- Network isolation
Containers = the Future of Big Data
Infrastructure
• Agility and elasticity
• Standardized environments
(dev, test, prod)
• Portability
(on-premises and cloud)
• Higher resource utilization
Applications
• Fool-proof packaging
(configs, libraries, driver
versions, etc.)
• Repeatable builds and
orchestration
• Faster app dev cycles
The Journey to Big Data on Docker
Start with a clear goal
in sight
Begin with your Docker
toolbox of a single
container and basic
networking and storage
So you want to run Hadoop and Spark on Docker
in a multi-tenant enterprise deployment?
Beware … there is trouble ahead
Big Data on Docker: Pitfalls
Traverse the tightrope of network configurations
• Docker Networking? Calico
• Kubernetes Networking? Flannel, Weave Net
Navigate the river of container managers
• Swarm?
• Kubernetes?
• AWS ECS?
• Mesos?
Cross the desert of storage configurations
• Overlay files?
• Flocker?
• Convoy?
Big Data on Docker: Challenges
Pass through the jungle of software compatibility
Tame the lion of
performance
Finally you get to the top!
Trip down the staircase of
deployment mistakes
Big Data on Docker: Next Steps?
But for deployment in the enterprise, you are not even close to being done …
You still have to climb past: high availability, backup/recovery, security, multi-host, multi-container, upgrades and patches
Big Data on Docker: Quagmire
You realize it’s time to get some help!
How We Did It: Design Decisions
• Run Hadoop/Spark distros and applications
unmodified
- Deploy all services that typically run on a single bare-metal host in a single container
• Multi-tenancy support is key
- Network and storage security
How We Did It: Design Decisions
• Images built to “auto-configure” themselves at
time of instantiation
- Not all instances of a single image run the same set of
services when instantiated
• Master vs. worker cluster nodes
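A minimal sketch of what such auto-configuration could look like at container start (hypothetical; the NODE_ROLE variable, paths, and service choices are assumptions for illustration, not BlueData's actual scripts):

#!/bin/bash
# Hypothetical entrypoint: the same image starts different services
# depending on the role injected when the container is instantiated.
NODE_ROLE=${NODE_ROLE:-worker}   # assumed to be set by the orchestrator

if [ "$NODE_ROLE" = "master" ]; then
  # Master node: Spark master plus the Zeppelin notebook service
  /usr/lib/spark/spark-1.5.2-bin-hadoop2.4/sbin/start-master.sh
  /usr/lib/zeppelin/bin/zeppelin-daemon.sh start
else
  # Worker node: Spark worker registered with the master
  /usr/lib/spark/spark-1.5.2-bin-hadoop2.4/sbin/start-slave.sh "spark://${SPARK_MASTER_HOST}:7077"
fi

# Keep the container alive while the background daemons run
tail -f /dev/null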
How We Did It: Design Decisions
• Maintain the promise of containers
- Keep them as stateless as possible
- Container storage is always ephemeral
- Persistent storage is external to the container
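For example, anything that must outlive the container can sit on a host-side volume mounted in at run time (a sketch; the host path and image name here are made up):

# Container storage stays ephemeral; persistent data lives on the host
# (or external storage) and is bind-mounted into the container at start.
docker run -d -v /data/tenant1/node1:/data my-bigdata-image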
How We Did It: Implementation
Resource Utilization
• CPU cores vs. CPU shares
• Over-provisioning of CPU recommended
• No over-provisioning of memory
Network
• Connect containers across hosts
• Persistence of IP address across container restart
• Deploy VLANs and VxLAN tunnels for tenant-level traffic isolation
Noisy neighbors
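As an illustration of the CPU/memory point above: CPU can be granted as relative shares (safe to over-provision across containers), while memory gets a hard limit that is not over-committed. The values and image name are placeholders:

# --cpu-shares is a relative weight, so the total across containers may
# exceed the physical core count; --memory is a hard, non-overcommitted limit.
docker run -d --cpu-shares=2048 --memory=112g my-bigdata-node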
How We Did It: Network Architecture
[Network diagram: each host runs an Open vSwitch (OVS) bridge attached to the host NIC; OVS bridges on different hosts are connected by a VxLAN tunnel to form per-tenant networks, with DHCP/DNS provided by the container orchestrator. Containers on these tenant networks run services such as the Resource Manager, Node Managers, Spark Master, Spark Worker, and Zeppelin.]
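A rough sketch of the kind of OVS/VxLAN plumbing this implies (illustrative only; bridge name, VNI, and peer IP are invented, and in practice this wiring is automated by the platform):

# On each host: create a per-tenant OVS bridge and connect it to the
# same tenant bridge on a peer host through a VxLAN tunnel.
ovs-vsctl add-br tenant1-br
ovs-vsctl add-port tenant1-br vxlan10 -- set interface vxlan10 \
    type=vxlan options:remote_ip=10.0.0.2 options:key=10
# Container interfaces on this host are then attached to tenant1-br,
# keeping tenant traffic isolated inside the tunnel.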
How We Did It: Implementation
Storage
• Tweak the default size of a container’s /root
- Resizing of storage inside an existing container is tricky
• Mount logical volume on /data
- No use of overlay file systems
• DataTap (version-independent, HDFS-compliant)
connectivity to external storage
Image Management
• Utilize Docker’s image repository
TIP: Mounting block devices into a container does not support symbolic links (i.e., /dev/sdb will not work reliably, and /dev/dm-* / PCI device names can change across host reboots).
TIP: Docker images can get large. Use “docker squash” to save on size.
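A hedged example of the storage flags implied above (device node and volume paths are illustrative and will differ per host):

# Pass the block device by its real device node (not a symlink) and
# mount a host logical volume at /data rather than relying on overlay storage.
docker run -d \
  --device=/dev/sdb:/dev/sdb \
  -v /mnt/lv_data/container1:/data \
  my-bigdata-node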
How We Did It: Security Considerations
• Security is essential since containers and host share one kernel
- Non-privileged containers
• Achieved through layered set of capabilities
• Different capabilities provide different levels of isolation and protection
• Add “capabilities” to a container based on what operations are permitted
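For instance, instead of running privileged containers, capabilities can be dropped wholesale and added back individually (the specific capabilities below are examples of the approach, not BlueData's exact list):

# Start from no capabilities and grant back only what the workload needs.
docker run -d --cap-drop=ALL \
  --cap-add=NET_ADMIN --cap-add=SYS_NICE --cap-add=CHOWN \
  my-bigdata-node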
How We Did It: Sample Dockerfile
# Spark-1.5.2 docker image for RHEL/CentOS 6.x
FROM centos:centos6
# Download and extract spark
RUN mkdir /usr/lib/spark; curl -s http://d3kbcqa49mib13.cloudfront.net/spark-1.5.2-bin-hadoop2.4.tgz | tar -xz -C /usr/lib/spark/
# Download and extract scala
RUN mkdir /usr/lib/scala; curl -s http://www.scala-lang.org/files/archive/scala-2.10.3.tgz | tar xz -C /usr/lib/scala/
# Install zeppelin
RUN mkdir /usr/lib/zeppelin; curl -s http://10.10.10.10:8080/build/thirdparty/zeppelin/zeppelin-0.6.0-incubating-SNAPSHOT-v2.tar.gz | tar xz -C /usr/lib/zeppelin
RUN yum clean all && rm -rf /tmp/* /var/tmp/* /var/cache/yum/*
ADD configure_spark_services.sh /root/configure_spark_services.sh
RUN chmod +x /root/configure_spark_services.sh && /root/configure_spark_services.sh
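A typical build-and-run sequence for an image like this might look as follows (the tag and container name are illustrative):

# Build the image from the Dockerfile above, then launch a container from it;
# in practice the platform injects role and configuration at run time.
docker build -t spark-1.5.2-centos6 .
docker run -d --name spark-node spark-1.5.2-centos6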
BlueData Application Image (.bin file)
The application .bin file packages:
• Docker image, built from a CentOS Dockerfile or a RHEL Dockerfile
• appconfig: conf, init.d, startscript
• <app> logo file
• <app>.wb
The image is built with the bdwb command (clusterconfig, image, role, appconfig, catalog, service, ..)
Sources: Dockerfile, logo .PNG, init.d, software bits or runtime
Development option: extract an existing .bin and modify it to create a new bin
Multi-Host
4 containers on 2 different hosts, using 1 VLAN and 4 persistent IPs
Different Services in Each Container
Master Services
Worker Services
Container Storage from Host
[Diagram: container storage mapped to host storage]
Performance Testing: Spark
• Spark 1.x on YARN
• HiBench - Terasort
- Data sizes: 100GB, 500GB, 1TB
• 10 node physical/virtual cluster
• 36 cores and 112GB memory per node
• 2TB HDFS storage per node (SSDs)
• 800GB ephemeral storage
Spark on Docker: Performance
[Chart: throughput in MB/s]
Performance Testing: Hadoop
Hadoop on Docker: Performance
[Chart: Containers (BlueData EPIC with 1 virtual node per host) vs. Bare-Metal]
“Dockerized” Hadoop Clusters
4 Docker containers
Multiple Hadoop versions
Different Hadoop versions on the same set of physical hosts
“Dockerized” Spark Standalone
5 fully managed Docker containers with persistent IP addresses
Spark with Zeppelin Notebook
Big Data on Docker: Key Takeaways
• All apps can be “Dockerized”,
including Hadoop & Spark
- Traditional bare-metal approach to Big
Data is rigid and inflexible
- Containers (e.g. Docker) provide a more
flexible & agile model
- Faster app dev cycles for Big Data app
developers, data scientists, & engineers
Big Data on Docker: Key Takeaways
• There are unique Big Data pitfalls & challenges with Docker
- For enterprise deployments, you will need to overcome these and more:
▪ Docker base images include Big Data libraries and jar files
▪ Container orchestration, including networking and storage
▪ Resource-aware runtime environment, including CPU and RAM
Big Data on Docker: Key Takeaways
• There are unique Big Data pitfalls & challenges with Docker
- More:
▪ Access to containers secured with ssh keypair or PAM module (LDAP/AD)
▪ Fast access to external storage
▪ Management agents in Docker images
▪ Runtime injection of resource and configuration information
Big Data on Docker: Key Takeaways
• “Do It Yourself” will be costly and time-consuming
- Be prepared to tackle the infrastructure & plumbing challenges
- In the enterprise, the business value is in the applications / data science
• There are other options …
- Public Cloud – AWS
- BlueData - turnkey solution
Lessons Learned Running Hadoop and Spark in Docker Containers
Thank You
Contact info:
@tapbluedata
tap@bluedata.com
www.bluedata.com
www.bluedata.com/free
FREE LITE VERSION
Editor's Notes
• #27: Overview of results. Testing revealed that, across the HiBench micro-workloads investigated, the BlueData EPIC software platform delivers performance comparable or superior to bare metal. The elapsed (execution) times for physical Hadoop were used as the baseline (i.e., 100%), and the elapsed time for the same test on BlueData EPIC was compared to the bare-metal execution time.
The results show that BlueData EPIC is roughly twice as fast for write-dominant I/O workloads (DFSIO Write and Teragen) and comparable to bare metal for read-dominant I/O workloads. Even with lower throughput for DFSIO Read, real-world workloads that consist of a series of read and write operations over time would perform as well as or better than bare metal. This balanced read/write performance is demonstrated by the TeraSort results: even with just a single virtual node per host, the virtualized EPIC platform is within 2% of bare-metal performance. For compute-intensive workloads such as Wordcount, BlueData EPIC is 10-15% faster.
The superior performance for write-dominant workloads is due to the application-aware caching enabled by IOBoost technology. The non-persistent memory cache improves the efficiency of access to physical storage devices by providing a write-behind cache that optimizes sequential writes. In general, any write operation, including those outside the scope of these benchmarks, benefits from the write-behind cache: Hadoop uses a separate thread for write operations, and IOBoost acknowledges the write request immediately, so the application can continue processing without added latency. Read-dominant performance is comparable to bare metal in part due to IOBoost's read-ahead cache; unlike writes, Hadoop applications wait for a read to complete before processing continues, so the read-ahead cache contributes less to I/O throughput for typical Hadoop applications.
In summary, with a single virtual node per physical host there is minimal to no overhead running on the virtualized EPIC platform compared to bare-metal/physical Hadoop.