Clean Your Data Swamp
By Migration off Hadoop
Speaker: Ron Guerrero, Senior Solutions Architect
Agenda
● Why modernize?
● Planning your migration off of Hadoop
● Top migration topics
Why migrate off of Hadoop and
onto Databricks?
History of Hadoop
● Created 2005
● Open Source distributed processing and storage
platform running on commodity hardware
● Originally consisted of HDFS, and MapReduce, but
now incorporates numerous open source projects
(Hive, HBase, Spark)
● On-prem and on the cloud
Today Hadoop is very hard
COMPLEX → Slow Innovation
● Many tools: need to understand
multiple technologies.
● Real-time and batch ingestion to
build AI models requires
integrating many components.
FIXED → Cost Prohibitive
● 24/7 clusters.
● Fixed capacity: CPU + RAM + Disk.
● Costly to upgrade.
MAINTENANCE INTENSIVE → Low Productivity
● The Hadoop ecosystem is complex,
hard to manage, and prone to failures.
Enterprises Need a Modern
Data Analytics Architecture
CRITICAL REQUIREMENTS
Cost-effective scale and performance in the cloud
Easy to manage and highly reliable for diverse data
Predictive and real-time insights to drive innovation
Structured Semi-structured Unstructured Streaming
Lakehouse Platform
Data Engineering
BI & SQL
Analytics
Real-time Data
Applications
Data Science
& Machine Learning
Data Management & Governance
Open Data Lake
SIMPLE OPEN COLLABORATIVE
Planning your migration off of
Hadoop and onto Databricks
Migration Planning
● Internal Questions
● Assessment
● Technical Planning
● Enablement and Evaluation
● Migration Execution
Migration Planning
Internal Questions
● why?
● who?
● desired start and end dates
● internal stakeholders
● cloud strategy
Migration Planning
Assessment
● Environment inventory
○ compute, data, tooling
● Use case prioritization
● Workload analysis
● Existing TCO
● Projected TCO
● Migration timelines
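The existing-versus-projected TCO comparison above is simple arithmetic once the inputs are gathered. A minimal sketch, using entirely hypothetical node counts and prices (every figure below is an illustrative placeholder, not a benchmark):

```python
def hadoop_tco(nodes, node_cost_per_year, admin_fte, fte_cost):
    """Existing on-prem TCO: fixed 24/7 nodes plus admin headcount."""
    return nodes * node_cost_per_year + admin_fte * fte_cost

def cloud_tco(dbu_per_hour, dbu_price, hours_per_year, storage_tb, storage_price_tb_year):
    """Projected cloud TCO: pay only for hours clusters actually run,
    plus object storage priced separately from compute."""
    return dbu_per_hour * dbu_price * hours_per_year + storage_tb * storage_price_tb_year

# Placeholder inputs for a small 20-node cluster.
existing = hadoop_tco(nodes=20, node_cost_per_year=15_000, admin_fte=2, fte_cost=150_000)
projected = cloud_tco(dbu_per_hour=40, dbu_price=0.25, hours_per_year=2_000,
                      storage_tb=100, storage_price_tb_year=280)
print(existing, projected)
```

The useful part of the exercise is not the totals but the structure: on-prem cost scales with peak capacity and headcount, cloud cost scales with hours actually used.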
Migration Planning
Technical Planning
● Target state architecture
● Data migration
● Workload migration
○ Lift and shift, transformative, hybrid
● Data governance approach
● Automated deployment
● Monitoring and Operations
Migration Planning
Enablement and Evaluation
● Workshops, technical deep dives
● Training
● Proof of technology / MVP
○ Validate assumptions and designs
Migration Planning
Migration Execution
● Environment Deployment
● Iterate over use cases
○ Data Migration
○ Workload Migration
○ Dual Production Deployment - Old and New
○ Validation
○ Cut-over and Decommission of Hadoop
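The validation step during dual production usually means comparing the old and new copies of each table before cut-over. A minimal sketch of an order-insensitive comparison (function and row shapes are illustrative; a real migration would also compare per-partition counts, since XOR-ing hashes lets duplicate rows cancel out):

```python
import hashlib

def table_fingerprint(rows):
    """Order-insensitive fingerprint: hash each row, XOR the digests.
    Lets you compare the Hadoop copy and the Databricks copy without sorting.
    Caveat: a row appearing an even number of times cancels itself out,
    so pair this with a row count check at minimum."""
    acc = 0
    for row in rows:
        digest = hashlib.sha256(repr(row).encode()).digest()
        acc ^= int.from_bytes(digest, "big")
    return acc

def validate(old_rows, new_rows):
    """True when both copies have the same size and the same fingerprint."""
    return (len(old_rows) == len(new_rows)
            and table_fingerprint(old_rows) == table_fingerprint(new_rows))

old = [("a", 1), ("b", 2)]
new = [("b", 2), ("a", 1)]  # same data, different order: still valid
print(validate(old, new))
```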
Top Migration Topics
Key Areas of Migration
1. Administration
2. Data Migration
3. Data Processing
4. Security & Governance
5. SQL and BI Layer
Administration
Hadoop Ecosystem to Databricks Concepts
Hadoop
[Architecture diagram: a Hadoop cluster of N nodes, each splitting its 2x12-core (24c) compute across HDFS on local disks, YARN-managed MapReduce mappers, Spark workers and a Spark driver, Impala, and HBase; fronted by the Hive Metastore, Hive Server, an Impala load balancer, and the HBase API over JDBC/ODBC, with Sentry (table metadata + HDFS ACLs) or Ranger for authorization.]
Node makeup
▪ Local disks
▪ Cores/memory carved up across services
▪ Submitted jobs compete for resources
▪ Services constrained to accommodate resources
Metadata and Security
▪ Sentry table metadata permissions combined with synced HDFS ACLs, OR
▪ Apache Ranger, policy-based access control
Endpoints
▪ Direct access to HDFS / copied datasets
▪ Hive (on MR or Spark) accepts incoming connections
▪ Impala for interactive queries
▪ HBase APIs as required
Hadoop Ecosystem to Databricks Concepts
[Transition diagram: the Hadoop stack mapped onto Databricks. The Hive Metastore becomes a managed metastore; Hive Server and Impala become the Databricks SQL endpoint (high-concurrency cluster, SQL Analytics) over JDBC/ODBC; HDFS becomes cloud object storage; YARN-scheduled workloads become separate Databricks clusters (Spark driver plus workers, each with Delta Engine) for Spark ETL (batch/streaming), SQL analytics, and the ML runtime; HBase maps to CosmosDB/DynamoDB/Keyspaces; Sentry/Ranger policies map to table ACLs plus object storage ACLs; clusters are ephemeral for all-purpose or jobs use.]
Hadoop Ecosystem to Databricks Concepts
[Databricks diagram: managed Hive Metastore; Databricks SQL endpoint over JDBC/ODBC; separate Databricks clusters (Spark driver plus workers, each with Delta Engine) for Spark ETL (batch/streaming), SQL analytics, and the ML runtime; table ACLs and object storage ACLs over cloud object storage; CosmosDB/DynamoDB/Keyspaces for key-value workloads; ephemeral or long-running clusters for all-purpose or jobs compute.]
Node makeup
▪ Each node (VM) maps to a single Spark driver/worker
▪ A cluster of nodes is completely isolated from other jobs/compute
▪ De-coupled compute and storage
Metadata and Security
▪ Managed Hive metastore (other options available)
▪ Table ACLs (Databricks) and object storage permissions
Endpoints
▪ SQL endpoint for both advanced analytics and simple SQL analytics
▪ Code access to data - notebooks
▪ HBase → maps to Azure CosmosDB, AWS DynamoDB/Keyspaces (non-Databricks solution)
Demo - Administration
Data Migration
Data Migration
Hadoop:
- On-premises block storage.
- Fixed disk capacity.
- Health checks to validate data integrity.
- As data volumes grow, must add more nodes to the cluster and rebalance data.
MIGRATE →
Cloud:
- Fully managed cloud object storage.
- Unlimited capacity.
- No maintenance, no health checks, no rebalancing.
- 99.99% availability, 99.999999999% durability.
- Use native cloud services to migrate data.
- Leverage partner solutions.
Data Migration
Build a Data Lake in cloud storage with Delta Lake
● Open source and uses the Parquet file format.
● Performance: Data indexing → Faster queries.
● Reliability: ACID transactions → Guaranteed data integrity.
● Scalability: Handles petabyte-scale tables with billions of partitions and files with ease.
● Enhanced Spark SQL: UPDATE, MERGE, and DELETE commands.
● Unifies batch and stream processing → No more Lambda architecture.
● Schema enforcement: Specify schema on write.
● Schema evolution: Automatically change schemas on the fly.
● Audit history: Full audit trail of changes.
● Time travel: Restore data from past versions.
● 100% compatible with the Apache Spark APIs.
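MERGE, listed above, is an upsert: rows that match on a key are updated, rows that don't are inserted. In plain-Python terms (a sketch of the semantics only, not of Delta's implementation):

```python
def merge(target, updates, key):
    """Illustrate Delta MERGE semantics on lists of dicts:
    when matched on `key` -> update the row, when not matched -> insert it."""
    merged = {row[key]: row for row in target}
    for row in updates:
        # Overlay the update onto the existing row (or insert a new one).
        merged[row[key]] = {**merged.get(row[key], {}), **row}
    return sorted(merged.values(), key=lambda r: r[key])

target = [{"id": 1, "name": "a"}, {"id": 2, "name": "b"}]
updates = [{"id": 2, "name": "B"}, {"id": 3, "name": "c"}]
print(merge(target, updates, "id"))
```

In Delta itself the same operation is a single `MERGE INTO target USING updates ON target.id = updates.id ...` statement, with the transaction log guaranteeing atomicity.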
Start with Dual ingestion
● Add a feed to cloud storage
● Enable new use cases with new data
● Introduces options for backup
How to migrate data
● Leverage existing Data Delivery tools to point to cloud storage
● Introduce simplified flows to land data into cloud storage
How to migrate data
● Push the data
○ DistCP
○ 3rd Party Tooling
○ In-house frameworks
○ Cloud native - AWS Snowmobile, Azure Data Box, Google Transfer Appliance
○ Typically easier to approve (security)
● Pull the data
○ Spark Streaming
○ Spark Batch
■ File Ingest
■ JDBC
○ 3rd Party Tooling
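For the DistCp push path above, the heavy lifting is a single `hadoop distcp` invocation from the cluster edge. A small helper that assembles the command (the source and destination paths are placeholders; `-m` bounds the number of parallel copy tasks):

```python
import subprocess  # only needed if you actually execute the command

def distcp_command(src, dest, mappers=20):
    """Build a `hadoop distcp` invocation for pushing HDFS data to cloud
    object storage (here an s3a:// destination). Paths are placeholders."""
    return ["hadoop", "distcp", "-m", str(mappers), src, dest]

cmd = distcp_command("hdfs://nn:8020/warehouse/sales", "s3a://my-bucket/warehouse/sales")
print(" ".join(cmd))
# To run it on a Hadoop edge node: subprocess.run(cmd, check=True)
```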
How to migrate data - Pull approach
● Set up connectivity to on-premises systems
○ AWS Direct Connect
○ Azure ExpressRoute / VPN Gateway
○ This may be needed for some use cases
● Kerberized Hadoop Environments
○ Databricks clusters initialization scripts
■ Kerberos client setup
■ krb5.conf, keytab
■ kinit()
● Shared External Metastore
○ Databricks and Hadoop can share a metastore
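The Kerberos bullets above translate to a cluster init script that lays down krb5.conf, points at a keytab, and runs kinit. A sketch that composes such a script (realm, KDC host, principal, and keytab location are all placeholders; a real setup would secure the keytab, e.g. via a secret scope, rather than a plain DBFS path):

```python
def kerberos_init_script(realm, kdc, principal,
                         keytab_path="/dbfs/FileStore/user.keytab"):
    """Compose the shell init script a Databricks cluster would run to set up
    a Kerberos client for pulling from a kerberized Hadoop cluster.
    All names here are hypothetical placeholders."""
    krb5_conf = (
        "[libdefaults]\n"
        f"  default_realm = {realm}\n"
        "[realms]\n"
        f"  {realm} = {{\n    kdc = {kdc}\n  }}\n"
    )
    return (
        "#!/bin/bash\n"
        f"cat > /etc/krb5.conf <<'EOF'\n{krb5_conf}EOF\n"
        f"kinit -kt {keytab_path} {principal}\n"
    )

script = kerberos_init_script("EXAMPLE.COM", "kdc.example.com",
                              "svc-databricks@EXAMPLE.COM")
print(script)
```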
Demo - Databricks Pull
Data Processing
Technology Mapping
Migrating Spark Jobs
● Spark versions
● RDDs to DataFrames
● Changes to job submission
● Hard-coded references to the Hadoop environment
Converting non-Spark workloads
● MapReduce
● Sqoop
● Flume
● NiFi considerations
Migrating HiveQL
● Hive queries have high compatibility
● Minor changes in DDL
● SerDes and UDFs
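The "minor changes in DDL" are mostly mechanical rewrites, e.g. re-targeting a Parquet-backed Hive table at Delta. A sketch of the kind of substitution involved (illustrative only; a real migration needs a fuller DDL parser and must also handle locations, SerDe clauses, and partitioning):

```python
import re

def hive_ddl_to_delta(ddl):
    """Illustrative rewrite of a Hive Parquet table DDL into a Delta table.
    Handles only the simplest common case."""
    ddl = re.sub(r"STORED AS PARQUET", "USING DELTA", ddl, flags=re.IGNORECASE)
    ddl = re.sub(r"CREATE EXTERNAL TABLE", "CREATE TABLE", ddl, flags=re.IGNORECASE)
    return ddl

print(hive_ddl_to_delta("CREATE EXTERNAL TABLE sales (id INT) STORED AS PARQUET"))
# CREATE TABLE sales (id INT) USING DELTA
```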
Migration Workflow Orchestration
● Create Airflow, Azure Data Factory, or other equivalents
● Databricks REST APIs allow integration with any scheduler
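Integration with an external scheduler typically comes down to one authenticated POST to the Jobs API per run. A stdlib-only sketch that builds (but does not send) the request; the host, token, and job id are placeholders:

```python
import json
import urllib.request

def run_now_request(host, token, job_id, params=None):
    """Build the REST call an external scheduler (Airflow, ADF, cron, ...)
    would make to trigger a Databricks job via the Jobs API run-now endpoint.
    host/token/job_id are hypothetical placeholders."""
    payload = {"job_id": job_id}
    if params:
        payload["notebook_params"] = params
    return urllib.request.Request(
        url=f"{host}/api/2.1/jobs/run-now",
        data=json.dumps(payload).encode(),
        headers={"Authorization": f"Bearer {token}",
                 "Content-Type": "application/json"},
        method="POST",
    )

req = run_now_request("https://example.cloud.databricks.com", "dapi-XXXX",
                      42, {"run_date": "2021-01-01"})
print(req.full_url)
# To actually trigger the job: urllib.request.urlopen(req)
```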
Automated Tooling
● MLens
○ PySpark
○ HiveQL
○ Oozie to Airflow, Azure Data Factory (roadmap)
Security and Governance
Security and Governance
Authentication
- Single Sign-On (SSO) with any SAML 2.0 supported corporate directory.
Authorization
- Access Control Lists (ACLs) for Databricks RBAC.
- Table ACLs - dynamic views for column/row permissions.
- Leverage cloud-native security: IAM federation and AAD passthrough.
- Integration with Ranger and Immuta for more advanced RBAC and ABAC.
Metadata Management
- Integration with 3rd party services: AWS Glue, Privacera.
Migrating Security Policies from
Hadoop to Databricks
Enabling enterprises to responsibly use their data in the cloud
Powered by Apache Ranger
HADOOP ECOSYSTEM
● 100s and 1000s of tables in
Apache Hive
● 100s of policies in Apache
Ranger
● Variety of policies: resource based, tag based, masking, row-level filters, etc.
● Policies for Users and Groups
from AD/LDAP
PRIVACERA AND
DATABRICKS
[Diagram: datasets, schemas, and policies transfer from the Hive Metastore to the Privacera metastore.]
SEAMLESS MIGRATION - INSTANTLY TRANSFER YEARS OF EFFORT
Instantly implement the same policies in Databricks as on-prem.
● Richer, deeper, and more robust Access Control
● Row/Column level access control in SQL
● Dynamic and Static data de-identification
● File-level access control for DataFrames, object-level access
Object store (S3/ADLS) access control with Privacera + Databricks:
- S3 - bucket level: Yes
- S3 - object level: Yes
- ADLS: Yes
Privacera Value Add - Enhancing Databricks Authorization
Spark SQL and R with Privacera + Databricks:
- Table: Yes
- Column: Yes
- Column masking: Yes
- Row-level filtering: Yes
- Tag-based policies: Yes
- Attribute-based policies: Yes
- Centralized auditing: Yes
[Architecture diagram: a Databricks SQL/Python cluster whose Spark driver runs the Ranger plugin, with Spark executors performing Spark SQL and/or Spark read/write against the object store; Privacera Cloud hosts the Ranger Policy Manager, Privacera Portal, audit server (DB/Solr, Apache Kafka), Privacera Discovery, anomaly detection and alerting, and the Privacera approval workflow, integrating with AD/LDAP, 3rd party catalogs, and SIEM tools such as Splunk and CloudWatch, serving both business and admin users.]
5 Critical Steps to Clean Your Data Swamp When Migrating Off of Hadoop
SQL and BI
What about the SQL community?
Hadoop
● HUE
○ Data browsing
○ SQL Editor
○ Visualizations
● Interactive SQL
○ Impala
○ Hive LLAP
Databricks
● SQL Analytics Workspace
○ Data Browser
○ SQL Editor
○ Visualizations
● Interactive SQL
○ Spark optimizations - Adaptive Query Execution
○ Advanced Caching
○ Project Photon
○ Scaling via clusters of clusters
SQL & BI Layer
Optimized SQL and BI
Performance Tuned
- Fast queries with Delta Engine on Delta Lake.
- Support for high concurrency with auto-scaling clusters.
- Optimized JDBC/ODBC drivers.
- Optimized and tuned for BI and SQL out of the box.
BI Integrations
- Compatible with any BI client and tool that supports Spark.
Vision
Give SQL users a home in Databricks
Provide SQL workbench, light
dashboarding, and alerting capabilities
Great BI experience on the data lake
Enable companies to effectively leverage
the data lake from any BI tool without
having to move the data around.
Easy to use & price-performant
Minimal setup & configuration. Data lake
price performance.
SQL-native user interface for
analysts
▪ Familiar SQL Editor
▪ Auto Complete
▪ Built in visualizations
▪ Data Browser
▪ Automatic Alerts
▪ Trigger based upon values
▪ Email or Slack integration
▪ Dashboards
▪ Simply convert queries to
dashboards
▪ Share with Access Control
Built-in connectors for existing
BI tools
Other BI & SQL clients
that support
▪ Supports your favorite tool
▪ Connectors for top BI & SQL clients
▪ Simple connection setup
▪ Optimized performance
▪ OAuth & Single Sign On
▪ Quick and easy authentication
experience. No need to deal with
access tokens.
▪ Power BI Available now
▪ Others coming soon
Performance
Delta Metadata Performance
Improved read performance for cold queries on Delta
tables. Provides interactive metadata performance
regardless of # of Delta tables in a query or table sizes.
New ODBC / JDBC Drivers
Wire protocol re-engineered to provide lower latencies
& higher data transfer speeds:
▪ Lower latency / less overhead (~¼ sec) with reduced
round trips per request
▪ Higher transfer rate (up to 50%) using Apache Arrow
▪ Optimized metadata performance for ODBC/JDBC
APIs (up to 10x for metadata retrieval operations)
Photon - Delta Engine
[Preview]
New MPP engine built from scratch in C++.
Vectorized to exploit data level parallelism and
instruction-level parallelism. Optimized for
modern structured and semi-structured
workloads.
Summary
It all starts with a plan
● Databricks and our partner community can help you
○ Assess
○ Plan
○ Validate
○ Execute
Considerations for your migration to
Databricks
● Administration
● Data Migration
● Data Processing
● Security & Governance
● SQL and BI Layer
Next Steps
Next Steps
● You will receive a follow up email from our teams
● Let us help you with your Hadoop Migration Journey
Follow up materials - Useful links
Databricks Reference Architecture
Databricks Azure Reference Architecture
Databricks AWS Reference Architecture
Demo

More Related Content

What's hot (20)

PPTX
Introduction to Azure Databricks
James Serra
 
PDF
Moving to Databricks & Delta
Databricks
 
PDF
Pipelines and Data Flows: Introduction to Data Integration in Azure Synapse A...
Cathrine Wilhelmsen
 
PPTX
Databricks Fundamentals
Dalibor Wijas
 
PDF
Five Things to Consider About Data Mesh and Data Governance
DATAVERSITY
 
PDF
Lakehouse in Azure
Sergio Zenatti Filho
 
PPTX
Data Lakehouse, Data Mesh, and Data Fabric (r2)
James Serra
 
PPTX
Azure data platform overview
James Serra
 
PDF
Data Mesh Part 4 Monolith to Mesh
Jeffrey T. Pollock
 
PPTX
Building a modern data warehouse
James Serra
 
PDF
Data Mesh
Piethein Strengholt
 
PDF
Introducing Databricks Delta
Databricks
 
PDF
Time to Talk about Data Mesh
LibbySchulze
 
PDF
Architect’s Open-Source Guide for a Data Mesh Architecture
Databricks
 
PDF
Apache Kafka With Spark Structured Streaming With Emma Liu, Nitin Saksena, Ra...
HostedbyConfluent
 
PPTX
Introducing the Snowflake Computing Cloud Data Warehouse
Snowflake Computing
 
PDF
Databricks Delta Lake and Its Benefits
Databricks
 
PPTX
Snowflake Architecture.pptx
chennakesava44
 
PPTX
Architecting a datalake
Laurent Leturgez
 
PPTX
Free Training: How to Build a Lakehouse
Databricks
 
Introduction to Azure Databricks
James Serra
 
Moving to Databricks & Delta
Databricks
 
Pipelines and Data Flows: Introduction to Data Integration in Azure Synapse A...
Cathrine Wilhelmsen
 
Databricks Fundamentals
Dalibor Wijas
 
Five Things to Consider About Data Mesh and Data Governance
DATAVERSITY
 
Lakehouse in Azure
Sergio Zenatti Filho
 
Data Lakehouse, Data Mesh, and Data Fabric (r2)
James Serra
 
Azure data platform overview
James Serra
 
Data Mesh Part 4 Monolith to Mesh
Jeffrey T. Pollock
 
Building a modern data warehouse
James Serra
 
Introducing Databricks Delta
Databricks
 
Time to Talk about Data Mesh
LibbySchulze
 
Architect’s Open-Source Guide for a Data Mesh Architecture
Databricks
 
Apache Kafka With Spark Structured Streaming With Emma Liu, Nitin Saksena, Ra...
HostedbyConfluent
 
Introducing the Snowflake Computing Cloud Data Warehouse
Snowflake Computing
 
Databricks Delta Lake and Its Benefits
Databricks
 
Snowflake Architecture.pptx
chennakesava44
 
Architecting a datalake
Laurent Leturgez
 
Free Training: How to Build a Lakehouse
Databricks
 

Similar to 5 Critical Steps to Clean Your Data Swamp When Migrating Off of Hadoop (20)

PDF
Hd insight essentials quick view
Rajesh Nadipalli
 
PDF
HdInsight essentials Hadoop on Microsoft Platform
nvvrajesh
 
PDF
Hd insight essentials quick view
Rajesh Nadipalli
 
PDF
SQL Engines for Hadoop - The case for Impala
markgrover
 
PDF
Customer Education Webcast: New Features in Data Integration and Streaming CDC
Precisely
 
PPTX
Paris Data Geek - Spark Streaming
Djamel Zouaoui
 
PDF
Introduction to Hadoop Administration
Ramesh Pabba - seeking new projects
 
PDF
Introduction to Hadoop Administration
Ramesh Pabba - seeking new projects
 
PDF
Hadoop and OpenStack - Hadoop Summit San Jose 2014
spinningmatt
 
PDF
Hadoop and OpenStack
DataWorks Summit
 
PPTX
FireEye & Scylla: Intel Threat Analysis Using a Graph Database
ScyllaDB
 
PPTX
Hadoop ppt on the basics and architecture
saipriyacoool
 
PPTX
Oracle big data appliance and solutions
solarisyougood
 
PDF
Key trends in Big Data and new reference architecture from Hewlett Packard En...
Ontico
 
PPTX
Introducing Apache Kudu (Incubating) - Montreal HUG May 2016
Mladen Kovacevic
 
PPTX
Hadoop - Just the Basics for Big Data Rookies (SpringOne2GX 2013)
VMware Tanzu
 
PPTX
Scylla Summit 2019 Keynote - Avi Kivity
ScyllaDB
 
PDF
Tcloud Computing Hadoop Family and Ecosystem Service 2013.Q3
tcloudcomputing-tw
 
PDF
Spark Driven Big Data Analytics
inoshg
 
PDF
9/2017 STL HUG - Back to School
Adam Doyle
 
Hd insight essentials quick view
Rajesh Nadipalli
 
HdInsight essentials Hadoop on Microsoft Platform
nvvrajesh
 
Hd insight essentials quick view
Rajesh Nadipalli
 
SQL Engines for Hadoop - The case for Impala
markgrover
 
Customer Education Webcast: New Features in Data Integration and Streaming CDC
Precisely
 
Paris Data Geek - Spark Streaming
Djamel Zouaoui
 
Introduction to Hadoop Administration
Ramesh Pabba - seeking new projects
 
Introduction to Hadoop Administration
Ramesh Pabba - seeking new projects
 
Hadoop and OpenStack - Hadoop Summit San Jose 2014
spinningmatt
 
Hadoop and OpenStack
DataWorks Summit
 
FireEye & Scylla: Intel Threat Analysis Using a Graph Database
ScyllaDB
 
Hadoop ppt on the basics and architecture
saipriyacoool
 
Oracle big data appliance and solutions
solarisyougood
 
Key trends in Big Data and new reference architecture from Hewlett Packard En...
Ontico
 
Introducing Apache Kudu (Incubating) - Montreal HUG May 2016
Mladen Kovacevic
 
Hadoop - Just the Basics for Big Data Rookies (SpringOne2GX 2013)
VMware Tanzu
 
Scylla Summit 2019 Keynote - Avi Kivity
ScyllaDB
 
Tcloud Computing Hadoop Family and Ecosystem Service 2013.Q3
tcloudcomputing-tw
 
Spark Driven Big Data Analytics
inoshg
 
9/2017 STL HUG - Back to School
Adam Doyle
 
Ad

More from Databricks (20)

PPTX
Data Lakehouse Symposium | Day 1 | Part 1
Databricks
 
PPT
Data Lakehouse Symposium | Day 1 | Part 2
Databricks
 
PPTX
Data Lakehouse Symposium | Day 2
Databricks
 
PPTX
Data Lakehouse Symposium | Day 4
Databricks
 
PDF
Democratizing Data Quality Through a Centralized Platform
Databricks
 
PDF
Learn to Use Databricks for Data Science
Databricks
 
PDF
Why APM Is Not the Same As ML Monitoring
Databricks
 
PDF
The Function, the Context, and the Data—Enabling ML Ops at Stitch Fix
Databricks
 
PDF
Stage Level Scheduling Improving Big Data and AI Integration
Databricks
 
PDF
Simplify Data Conversion from Spark to TensorFlow and PyTorch
Databricks
 
PDF
Scaling your Data Pipelines with Apache Spark on Kubernetes
Databricks
 
PDF
Scaling and Unifying SciKit Learn and Apache Spark Pipelines
Databricks
 
PDF
Sawtooth Windows for Feature Aggregations
Databricks
 
PDF
Redis + Apache Spark = Swiss Army Knife Meets Kitchen Sink
Databricks
 
PDF
Re-imagine Data Monitoring with whylogs and Spark
Databricks
 
PDF
Raven: End-to-end Optimization of ML Prediction Queries
Databricks
 
PDF
Processing Large Datasets for ADAS Applications using Apache Spark
Databricks
 
PDF
Massive Data Processing in Adobe Using Delta Lake
Databricks
 
PDF
Machine Learning CI/CD for Email Attack Detection
Databricks
 
PDF
Jeeves Grows Up: An AI Chatbot for Performance and Quality
Databricks
 
Data Lakehouse Symposium | Day 1 | Part 1
Databricks
 
Data Lakehouse Symposium | Day 1 | Part 2
Databricks
 
Data Lakehouse Symposium | Day 2
Databricks
 
Data Lakehouse Symposium | Day 4
Databricks
 
Democratizing Data Quality Through a Centralized Platform
Databricks
 
Learn to Use Databricks for Data Science
Databricks
 
Why APM Is Not the Same As ML Monitoring
Databricks
 
The Function, the Context, and the Data—Enabling ML Ops at Stitch Fix
Databricks
 
Stage Level Scheduling Improving Big Data and AI Integration
Databricks
 
Simplify Data Conversion from Spark to TensorFlow and PyTorch
Databricks
 
Scaling your Data Pipelines with Apache Spark on Kubernetes
Databricks
 
Scaling and Unifying SciKit Learn and Apache Spark Pipelines
Databricks
 
Sawtooth Windows for Feature Aggregations
Databricks
 
Redis + Apache Spark = Swiss Army Knife Meets Kitchen Sink
Databricks
 
Re-imagine Data Monitoring with whylogs and Spark
Databricks
 
Raven: End-to-end Optimization of ML Prediction Queries
Databricks
 
Processing Large Datasets for ADAS Applications using Apache Spark
Databricks
 
Massive Data Processing in Adobe Using Delta Lake
Databricks
 
Machine Learning CI/CD for Email Attack Detection
Databricks
 
Jeeves Grows Up: An AI Chatbot for Performance and Quality
Databricks
 
Ad

Recently uploaded (20)

PDF
UNISE-Operation-Procedure-InDHIS2trainng
ahmedabduselam23
 
PDF
Unlocking Insights: Introducing i-Metrics Asia-Pacific Corporation and Strate...
Janette Toral
 
PDF
Driving Employee Engagement in a Hybrid World.pdf
Mia scott
 
PDF
apidays Singapore 2025 - From API Intelligence to API Governance by Harsha Ch...
apidays
 
PPTX
03_Ariane BERCKMOES_Ethias.pptx_AIBarometer_release_event
FinTech Belgium
 
PPT
tuberculosiship-2106031cyyfuftufufufivifviviv
AkshaiRam
 
PPTX
big data eco system fundamentals of data science
arivukarasi
 
PPTX
05_Jelle Baats_Tekst.pptx_AI_Barometer_Release_Event
FinTech Belgium
 
PDF
Business implication of Artificial Intelligence.pdf
VishalChugh12
 
PPTX
Powerful Uses of Data Analytics You Should Know
subhashenia
 
PPTX
ER_Model_with_Diagrams_Presentation.pptx
dharaadhvaryu1992
 
PDF
NIS2 Compliance for MSPs: Roadmap, Benefits & Cybersecurity Trends (2025 Guide)
GRC Kompas
 
PDF
Data Science Course Certificate by Sigma Software University
Stepan Kalika
 
PDF
SQL for Accountants and Finance Managers
ysmaelreyes
 
PDF
apidays Singapore 2025 - Streaming Lakehouse with Kafka, Flink and Iceberg by...
apidays
 
PDF
1750162332_Snapshot-of-Indias-oil-Gas-data-May-2025.pdf
sandeep718278
 
PDF
apidays Singapore 2025 - The API Playbook for AI by Shin Wee Chuang (PAND AI)
apidays
 
PPTX
BinarySearchTree in datastructures in detail
kichokuttu
 
PDF
apidays Singapore 2025 - Surviving an interconnected world with API governanc...
apidays
 
PPTX
What Is Data Integration and Transformation?
subhashenia
 
UNISE-Operation-Procedure-InDHIS2trainng
ahmedabduselam23
 
Unlocking Insights: Introducing i-Metrics Asia-Pacific Corporation and Strate...
Janette Toral
 
Driving Employee Engagement in a Hybrid World.pdf
Mia scott
 
apidays Singapore 2025 - From API Intelligence to API Governance by Harsha Ch...
apidays
 
03_Ariane BERCKMOES_Ethias.pptx_AIBarometer_release_event
FinTech Belgium
 
tuberculosiship-2106031cyyfuftufufufivifviviv
AkshaiRam
 
big data eco system fundamentals of data science
arivukarasi
 
05_Jelle Baats_Tekst.pptx_AI_Barometer_Release_Event
FinTech Belgium
 
Business implication of Artificial Intelligence.pdf
VishalChugh12
 
Powerful Uses of Data Analytics You Should Know
subhashenia
 
ER_Model_with_Diagrams_Presentation.pptx
dharaadhvaryu1992
 
NIS2 Compliance for MSPs: Roadmap, Benefits & Cybersecurity Trends (2025 Guide)
GRC Kompas
 
Data Science Course Certificate by Sigma Software University
Stepan Kalika
 
SQL for Accountants and Finance Managers
ysmaelreyes
 
apidays Singapore 2025 - Streaming Lakehouse with Kafka, Flink and Iceberg by...
apidays
 
1750162332_Snapshot-of-Indias-oil-Gas-data-May-2025.pdf
sandeep718278
 
apidays Singapore 2025 - The API Playbook for AI by Shin Wee Chuang (PAND AI)
apidays
 
BinarySearchTree in datastructures in detail
kichokuttu
 
apidays Singapore 2025 - Surviving an interconnected world with API governanc...
apidays
 
What Is Data Integration and Transformation?
subhashenia
 

5 Critical Steps to Clean Your Data Swamp When Migrating Off of Hadoop

  • 1. Clean Your Data Swamp By Migration off Hadoop
  • 3. Agenda ● Why modernize? ● Planning your migration off of Hadoop ● Top migration topics
  • 4. Why migrate off of Hadoop and onto Databricks?
  • 5. History of Hadoop ● Created 2005 ● Open Source distributed processing and storage platform running on commodity hardware ● Originally consisted of HDFS, and MapReduce, but now incorporates numerous open source projects (Hive, HBase, Spark) ● On-prem and on the cloud
  • 6. COMPLEX FIXED Today Hadoop is very hard ● Many tools: Need to understand multiple technologies. ● Real-time and batch ingestion to build AI models requires integrating many components. Slow Innovation ● 24/7 clusters. ● Fixed capacity: CPU + RAM + Disk. ● Costly to upgrade. Cost Prohibitive MAINTENANCE INTENSIVE ● Hadoop ecosystem is complex and hard to manage that is prone to failures. Low Productivity X
  • 7. Enterprises Need a Modern Data Analytics Architecture CRITICAL REQUIREMENTS Cost-effective scale and performance in the cloud Easy to manage and highly reliable for diverse data Predictive and real-time insights to drive innovation
  • 8. Structured Semi-structured Unstructured Streaming Lakehouse Platform Data Engineering BI & SQL Analytics Real-time Data Applications Data Science & Machine Learning Data Management & Governance Open Data Lake SIMPLE OPEN COLLABORATIVE
  • 9. Planning your migration off of Hadoop and onto Databricks
  • 10. Migration Planning ● Internal Questions ● Assessment ● Technical Planning ● Enablement and Evaluation ● Migration Execution
  • 11. Migration Planning Internal Question ● why? ● who? ● desired start and end dates ● internal stakeholders ● cloud strategy
  • 12. Migration Planning Assessment ● Environment inventory ○ compute, data, tooling ● Use case prioritization ● Workload analysis ● Existing TCO ● Projected TCO ● Migration timelines
  • 13. Migration Planning Technical Planning ● Target state architecture ● Data migration ● Workload migration ○ Lift and shift, transformative, hybrid ● Data governance approach ● Automated deployment ● Monitoring and Operations
  • 14. Migration Planning Enablement and Evaluation ● Workshops,Technical deep dives ● Training ● Proof of technology / MVP ○ Validate assumptions and designs
  • 15. Migration Planning Migration Execution ● Environment Deployment ● Iterate of use cases ○ Data Migration ○ Workload Migration ○ Dual Production Deployment - Old and New ○ Validation ○ Cut-over and Decommission of Hadoop
  • 17. Key Areas of Migration 1. Administration 2. Data Migration 3. Data Processing 4. Security & Governance 5. SQL and BI Layer
  • 19. Hadoop Ecosystem to Databricks Concepts Hadoop HDFS c disk1 disk2 disk3 disk4 disk5 disk6 ... disk N YARN Impala HBase c c c c c MR mapper c MR mapper c MR mapper c Spark Worker (Executor ) c c c c MR mapper c Spark Worker (Executor ) c c c c Spark Worker (Executor ) c c c c c c 2x12c = 24c compute HDFS c disk1 disk2 disk3 disk4 disk5 disk6 ... disk N YARN Impala HBase c c c c c MR mapper c MR mapper c MR mapper c Spark Worker (Executor ) c c c c MR mapper c Spark Worker (Executor ) c c c c Spark Worker (Executor ) c c c c c c 2x12c = 24c compute HDFS c disk1 disk2 disk3 disk4 disk5 disk6 ... disk N YARN Impala HBase c c c c c MR mapper c MR mapper c MR mapper c Spark Worker (Executor ) c c c c MR mapper c Spark Worker (Executor ) c c c c Spark Driver c c c c c c 2x12c = 24c compute ... Node 1 Node 2 Node N Hive Metastore Hive Server Impala (LoadBalancer) HBase API Sentry Table Metadata + HDFS ACLs JDBC/ODBC Node makeup ▪ Local disks ▪ Cores/Memory carved to services ▪ Submitted jobs compete for resources ▪ Services constrained to accommodate resources Metadata and Security ▪ Sentry table metadata permissions combined with syncing HDFS ACLs OR ▪ Apache Ranger, policy based access control Endpoints ▪ Direct Access to HDFS / Copied dataset ▪ Hive (on MR or Spark) accepts incoming connections ▪ Impala for interactive queries ▪ HBase APIs as required Ranger Policy based access control OR
  • 20. Hadoop Ecosystem to Databricks Concepts Hadoop HDFS c disk1 disk2 disk3 disk4 disk5 disk6 ... disk N YARN Impala HBase c c c c c MR mapper c MR mapper c MR mapper c Spark Worker (Executor ) c c c c MR mapper c Spark Worker (Executor ) c c c c Spark Worker (Executor ) c c c c c c 2x12c = 24c compute HDFS c disk1 disk2 disk3 disk4 disk5 disk6 ... disk N YARN Impala HBase c c c c c MR mapper c MR mapper c MR mapper c Spark Worker (Executor ) c c c c MR mapper c Spark Worker (Executor ) c c c c Spark Worker (Executor ) c c c c c c 2x12c = 24c compute HDFS c disk1 disk2 disk3 disk4 disk5 disk6 ... disk N YARN Impala HBase c c c c c MR mapper c MR mapper c MR mapper c Spark Worker (Executor ) c c c c MR mapper c Spark Worker (Executor ) c c c c Spark Driver c c c c c c 2x12c = 24c compute ... Node 1 Node 2 Node N Hive Metastore Hive Server Impala (LoadBalancer) HBase API Sentry/Ranger Table Metadata + HDFS ACLs Hive Metastore (managed) Databricks SQL Endpoint JDBC/ODBC High Conc. Cluster SQL Analytics CosmosDB/ DynamoDB/ Keyspaces Object Storage c Spark Worker (Executor ) c c c Delta Engine c Spark Driver c c c c Spark Worker (Executor ) c c c Delta Engine c Spark Worker (Executor ) c c c Delta Engine Databricks Cluster Spark ETL (Batch/Streaming) c Spark Worker (Executor ) c c c Delta Engine c Spark Driver c c c c Spark Worker (Executor ) c c c Delta Engine c Spark Worker (Executor ) c c c Delta Engine Databricks Cluster SQL Analytics c Spark Worker (Executor ) c c c Delta Engine c Spark Driver c c c c Spark Worker (Executor ) c c c Delta Engine c Spark Worker (Executor ) c c c Delta Engine Databricks Cluster ML Runtime Table ACLs Object Storage ACLs Ephemeral Clusters for All-purpose or Jobs JDBC/ODBC
  • 21. Hadoop Ecosystem to Databricks Concepts Hive Metastore (managed) Databricks SQL Endpoint High Conc. Cluster SQL Analytics c Spark Worker (Executor ) c c c Delta Engine c Spark Driver c c c c Spark Worker (Executor ) c c c Delta Engine c Spark Worker (Executor ) c c c Delta Engine Databricks Cluster Spark ETL (Batch/Streaming) c Spark Worker (Executor ) c c c Delta Engine c Spark Driver c c c c Spark Worker (Executor ) c c c Delta Engine c Spark Worker (Executor ) c c c Delta Engine Databricks Cluster SQL Analytics c Spark Worker (Executor ) c c c Delta Engine c Spark Driver c c c c Spark Worker (Executor ) c c c Delta Engine c Spark Worker (Executor ) c c c Delta Engine Databricks Cluster ML Runtime Table ACLs Ephemeral Clusters or long running for All-purpose or Jobs JDBC/ODBC Node makeup ▪ Each Node (VM), maps to single Spark Driver/Worker ▪ Cluster of nodes completely isolated from other jobs/compute ▪ De-coupled compute and storage Metadata and Security ▪ Managed Hive metastore (other options available) ▪ Table ACLs (Databricks) and Object Storage permissions Endpoints ▪ SQL endpoint for both advanced analytics and simple SQL analytics ▪ Code access to data - Notebooks ▪ HBase → maps to Azure CosmosDB, AWS DynamoDB/Keyspaces (non-Databricks solution) Object Storage Object Storage ACLs CosmosDB/ DynamoDB/ Keyspaces
  • 24. Data Migration
  Hadoop (HDFS):
  - On-premises block storage.
  - Fixed disk capacity.
  - Health checks to validate data integrity.
  - As data volumes grow, you must add more nodes to the cluster and rebalance data.
  MIGRATE →
  Cloud object storage:
  - Fully managed.
  - Unlimited capacity.
  - No maintenance, no health checks, no rebalancing.
  - 99.99% availability, 99.999999999% durability.
  - Use native cloud services to migrate data, or leverage partner solutions.
  • 25. Data Migration: Build a data lake in cloud storage with Delta Lake
  ● Open source, based on the Parquet file format.
  ● Performance: data indexing → faster queries.
  ● Reliability: ACID transactions → guaranteed data integrity.
  ● Scalability: handles petabyte-scale tables with billions of partitions and files with ease.
  ● Enhanced Spark SQL: UPDATE, MERGE, and DELETE commands.
  ● Unifies batch and stream processing → no more Lambda architecture.
  ● Schema enforcement: specify schema on write.
  ● Schema evolution: automatically change schemas on the fly.
  ● Audit history: full audit trail of changes.
  ● Time travel: restore data from past versions.
  ● 100% compatible with the Apache Spark API.
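  The UPDATE/MERGE/DELETE support above is plain Spark SQL against Delta tables. A minimal upsert sketch, assuming a Databricks or Delta-enabled Spark session; the table, staging-table, and key names below are placeholders:

  ```python
  def merge_into_sql(target, source, key):
      """Build a Delta Lake MERGE statement for an upsert.
      Table and column names here are illustrative placeholders."""
      return (
          f"MERGE INTO {target} AS t "
          f"USING {source} AS s "
          f"ON t.{key} = s.{key} "
          "WHEN MATCHED THEN UPDATE SET * "
          "WHEN NOT MATCHED THEN INSERT *"
      )

  # On a cluster you would run, for example:
  #   spark.sql(merge_into_sql("events", "events_updates", "event_id"))
  #   spark.sql("SELECT * FROM events VERSION AS OF 3")   # time travel
  ```
  
  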
  • 26. Start with Dual ingestion ● Add a feed to cloud storage ● Enable new use cases with new data ● Introduces options for backup
  • 27. How to migrate data ● Leverage existing Data Delivery tools to point to cloud storage ● Introduce simplified flows to land data into cloud storage
  • 28. How to migrate data
  ● Push the data
  ○ DistCp
  ○ 3rd-party tooling
  ○ In-house frameworks
  ○ Cloud-native appliances: AWS Snowmobile, Azure Data Box, Google Transfer Appliance
  ○ Typically easier to approve (security)
  ● Pull the data
  ○ Spark Streaming
  ○ Spark batch
  ■ File ingest
  ■ JDBC
  ○ 3rd-party tooling
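  For the push path, DistCp remains the workhorse. A sketch of assembling the invocation; the destination URI scheme (s3a://, abfss://, gs://) and bucket name are placeholders for your cloud:

  ```python
  def distcp_command(hdfs_src, cloud_dst, mappers=20):
      """Assemble a 'hadoop distcp' push from HDFS to cloud object storage.
      -update copies only files that differ; -m caps parallel mappers."""
      return ["hadoop", "distcp",
              "-m", str(mappers),
              "-update",
              hdfs_src, cloud_dst]

  # Example (bucket name is hypothetical):
  #   subprocess.run(distcp_command("hdfs:///data/events", "s3a://my-bucket/events"))
  ```
  
  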
  • 29. How to migrate data: the pull approach
  ● Set up connectivity to the on-premises environment
  ○ AWS Direct Connect
  ○ Azure ExpressRoute / VPN Gateway
  ○ This may be needed for some use cases
  ● Kerberized Hadoop environments
  ○ Use Databricks cluster initialization scripts to:
  ■ Set up the Kerberos client
  ■ Deploy krb5.conf and the keytab
  ■ Run kinit
  ● Shared external metastore
  ○ Databricks and Hadoop can share a metastore
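  For the Kerberized case, a cluster init script typically installs the Kerberos client, writes krb5.conf, and runs kinit against a keytab. A sketch of rendering that file; the realm, KDC host, and keytab path are placeholders for your environment:

  ```python
  def render_krb5_conf(realm, kdc_host):
      """Render a minimal krb5.conf for a cluster init script.
      The realm and KDC host are environment-specific placeholders."""
      return (
          "[libdefaults]\n"
          f"  default_realm = {realm}\n"
          "[realms]\n"
          f"  {realm} = {{\n"
          f"    kdc = {kdc_host}\n"
          f"    admin_server = {kdc_host}\n"
          "  }\n"
      )

  # An init script would write this to /etc/krb5.conf and then run, e.g.:
  #   kinit -kt /dbfs/keytabs/etl.keytab etl@EXAMPLE.COM
  ```
  
  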
  • 33. Migrating Spark Jobs
  ● Spark versions
  ● RDDs to DataFrames
  ● Changes to job submission
  ● Hard-coded references to the Hadoop environment
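  The RDD-to-DataFrame step is usually mechanical. As an illustration, a classic word count in both styles; the path, sc, and spark objects are assumed to exist on a cluster, and the plain-Python function just pins down the expected semantics:

  ```python
  from collections import Counter

  def word_counts(lines):
      """Plain-Python reference for the Spark jobs sketched below."""
      return Counter(w for line in lines for w in line.split())

  # Legacy RDD job:
  #   counts = (sc.textFile(path)
  #               .flatMap(lambda line: line.split())
  #               .map(lambda w: (w, 1))
  #               .reduceByKey(lambda a, b: a + b))
  #
  # DataFrame equivalent (preferred on Databricks, benefits from Catalyst):
  #   from pyspark.sql import functions as F
  #   counts = (spark.read.text(path)
  #               .select(F.explode(F.split("value", r"\s+")).alias("word"))
  #               .groupBy("word").count())
  ```
  
  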
  • 34. Converting non-Spark workloads: considerations
  ● MapReduce
  ● Sqoop
  ● Flume
  ● NiFi
  • 35. Migrating HiveQL
  ● Hive queries are highly compatible with Spark SQL
  ● Minor changes in DDL
  ● SerDes and UDFs need attention
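  One of those minor DDL changes is swapping the Hive storage clause for Delta. A deliberately naive sketch; a real migration needs a proper SQL parser and case-by-case review of SerDes, UDFs, and partitioning clauses:

  ```python
  import re

  def hive_ddl_to_delta(ddl):
      """Naive illustration of one common DDL rewrite: replace the
      'STORED AS <format>' clause with 'USING DELTA'. Not a general
      translator; SerDes and UDF references still need manual review."""
      return re.sub(r"STORED\s+AS\s+\w+", "USING DELTA", ddl,
                    flags=re.IGNORECASE)
  ```

  For example, `hive_ddl_to_delta("CREATE TABLE t (id INT) STORED AS PARQUET")` yields `"CREATE TABLE t (id INT) USING DELTA"`.
  
  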
  • 36. Migrating Workflow Orchestration
  ● Create Airflow, Azure Data Factory, or other equivalents
  ● The Databricks REST APIs allow integration with any scheduler
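  Scheduler integration boils down to POSTing a run request and polling for its state. A sketch of the request body for the Jobs API runs-submit endpoint; field names follow the Databricks Jobs API, while the notebook path and cluster id are placeholders:

  ```python
  def runs_submit_payload(notebook_path, cluster_id, task_key="main"):
      """Body for POST /api/2.1/jobs/runs/submit. The notebook path and
      cluster id below are placeholders for your workspace."""
      return {
          "run_name": f"migrated-{task_key}",
          "tasks": [{
              "task_key": task_key,
              "existing_cluster_id": cluster_id,
              "notebook_task": {"notebook_path": notebook_path},
          }],
      }

  # A scheduler task would POST this JSON with a bearer token, then poll
  #   GET /api/2.1/jobs/runs/get?run_id=<id>
  # until the run reaches a terminal state.
  ```
  
  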
  • 37. Automated Tooling ● MLens ○ PySpark ○ HiveQL ○ Oozie to Airflow, Azure Data Factory (roadmap)
  • 39. Security and Governance
  Authentication
  - Single sign-on (SSO) with any SAML 2.0-compliant corporate directory.
  Authorization
  - Access control lists (ACLs) for Databricks RBAC.
  - Table ACLs.
  - Dynamic views for column/row-level permissions.
  - Leverage cloud-native security: IAM federation and AAD passthrough.
  - Integration with Ranger and Immuta for more advanced RBAC and ABAC.
  Metadata Management
  - Integration with 3rd-party services such as AWS Glue.
  • 41. Migrating Security Policies from Hadoop to Databricks Enabling enterprises to responsibly use their data in the cloud Powered by Apache Ranger
  • 42. HADOOP ECOSYSTEM ● 100s and 1000s of tables in Apache Hive ● 100s of policies in Apache Ranger ● Variety of policies. Resource Based, Tag Based, Masking, Row Level Filters, etc. ● Policies for Users and Groups from AD/LDAP
  • 43. PRIVACERA AND DATABRICKS [Diagram: Hive Metastore datasets, schemas, and policies mapped into the Databricks metastore.]
  • 44. SEAMLESS MIGRATION: INSTANTLY TRANSFER YEARS OF EFFORT BY IMPLEMENTING THE SAME POLICIES IN DATABRICKS AS ON-PREM
  • 45. Privacera Value Add: Enhancing Databricks Authorization
  ● Richer, deeper, and more robust access control
  ● Row/column-level access control in SQL
  ● Dynamic and static data de-identification
  ● File-level access control for DataFrames; object-level access
  ● Read/write operations supported
  Object store (S3/ADLS) coverage with Privacera + Databricks:
  - S3, bucket level: Yes
  - S3, object level: Yes
  - ADLS: Yes
  Spark SQL and R coverage with Privacera + Databricks:
  - Table: Yes
  - Column: Yes
  - Column masking: Yes
  - Row-level filtering: Yes
  - Tag-based policies: Yes
  - Attribute-based policies: Yes
  - Centralized auditing: Yes
  • 46. [Architecture diagram: a Databricks SQL/Python cluster runs a Ranger plugin on the Spark driver alongside the Spark executors, intercepting Spark SQL and Spark read/write operations. Privacera Cloud hosts the Ranger Policy Manager, Privacera Portal, Audit Server (DB/Solr), Discovery, Anomaly Detection and Alerting, and Approval Workflow. Audits stream via Apache Kafka to Splunk, CloudWatch, or a SIEM; users and groups come from AD/LDAP and a 3rd-party catalog.]
  • 49. What about the SQL community?
  Hadoop
  ● HUE
  ○ Data browsing
  ○ SQL editor
  ○ Visualizations
  ● Interactive SQL
  ○ Impala
  ○ Hive LLAP
  Databricks
  ● SQL Analytics workspace
  ○ Data browser
  ○ SQL editor
  ○ Visualizations
  ● Interactive SQL
  ○ Spark optimizations: Adaptive Query Execution
  ○ Advanced caching
  ○ Project Photon
  ○ Scaling with a cluster of clusters
  • 50. SQL & BI Layer: Optimized SQL and BI performance
  - Fast queries with Delta Engine on Delta Lake.
  - Support for high concurrency with auto-scaling clusters.
  - Optimized JDBC/ODBC drivers.
  - Optimized and tuned for BI and SQL out of the box.
  BI integrations: compatible with any BI client and tool that supports Spark.
  • 51. Vision
  ● Give SQL users a home in Databricks: provide a SQL workbench, light dashboarding, and alerting capabilities.
  ● Great BI experience on the data lake: enable companies to effectively leverage the data lake from any BI tool without having to move the data around.
  ● Easy to use & price-performant: minimal setup & configuration, with data lake price performance.
  • 52. SQL-native user interface for analysts ▪ Familiar SQL Editor ▪ Auto Complete ▪ Built in visualizations ▪ Data Browser ▪ Automatic Alerts ▪ Trigger based upon values ▪ Email or Slack integration ▪ Dashboards ▪ Simply convert queries to dashboards ▪ Share with Access Control
  • 53. Built-in connectors for existing BI tools, plus other BI & SQL clients that support JDBC/ODBC
  ▪ Supports your favorite tool
  ▪ Connectors for top BI & SQL clients
  ▪ Simple connection setup
  ▪ Optimized performance
  ▪ OAuth & single sign-on
  ▪ Quick and easy authentication experience; no need to deal with access tokens
  ▪ Power BI available now; others coming soon
  • 54. Performance Delta Metadata Performance Improved read performance for cold queries on Delta tables. Provides interactive metadata performance regardless of # of Delta tables in a query or table sizes. New ODBC / JDBC Drivers Wire protocol re-engineered to provide lower latencies & higher data transfer speeds: ▪ Lower latency / less overhead (~¼ sec) with reduced round trips per request ▪ Higher transfer rate (up to 50%) using Apache Arrow ▪ Optimized metadata performance for ODBC/JDBC APIs (up to 10x for metadata retrieval operations) Photon - Delta Engine [Preview] New MPP engine built from scratch in C++. Vectorized to exploit data level parallelism and instruction-level parallelism. Optimized for modern structured and semi-structured workloads.
  • 56. It all starts with a plan
  ● Databricks and our partner community can help you
  ○ Assess
  ○ Plan
  ○ Validate
  ○ Execute
  • 57. Considerations for your migration to Databricks ● Administration ● Data Migration ● Data Processing ● Security & Governance ● SQL and BI Layer
  • 59. Next Steps ● You will receive a follow-up email from our team ● Let us help you with your Hadoop migration journey
  • 60. Follow up materials - Useful links
  • 63. Databricks AWS Reference Architecture
  • 64. Demo