Hadoop: Distributed Data Processing
  • 2. Outline: Scaling for large data processing; What is Hadoop?; HDFS and MapReduce; the Hadoop ecosystem; Hadoop vs. RDBMSes; conclusion
  • 3. Current Storage Systems Can't Compute. [Diagram] Instrumentation and collection feed a mostly-append storage farm for unstructured data (20 TB/day); an ETL grid loads an RDBMS (200 GB/day) that serves interactive apps; ad hoc queries and data mining go unserved ("non-consumption"); filer heads are a bottleneck.
  • 4. The Solution: A Store-Compute Grid. [Diagram] Instrumentation and collection feed a mostly-append storage + computation grid; the grid runs ETL and aggregations into an RDBMS for interactive apps, while ad hoc queries, data mining, and "batch" apps run directly on the grid.
  • 5. What is Hadoop? A scalable, fault-tolerant grid operating system for data storage and processing. Its scalability comes from the marriage of HDFS (self-healing, high-bandwidth clustered storage) and MapReduce (fault-tolerant distributed processing). Operates on unstructured and structured data. A large and active ecosystem (many developers and additions like HBase, Hive, Pig, …). Open source under the friendly Apache License. http://wiki.apache.org/hadoop/
  • 6. Hadoop History. 2002-2004: Doug Cutting and Mike Cafarella start working on Nutch. 2003-2004: Google publishes the GFS and MapReduce papers. 2004: Cutting adds DFS & MapReduce support to Nutch. 2006: Yahoo! hires Cutting; Hadoop spins out of Nutch. 2007: NY Times converts 4 TB of archives on 100 EC2 instances. 2008: web-scale deployments at Yahoo!, Facebook, Last.fm. April 2008: Yahoo! does the fastest sort of a TB, 3.5 minutes on 910 nodes. May 2009: Yahoo! does the fastest sort of a TB, 62 seconds on 1,460 nodes, and sorts a PB in 16.25 hours on 3,658 nodes. June 2009, Oct 2009: Hadoop Summit (750 attendees), Hadoop World (500). September 2009: Doug Cutting joins Cloudera.
  • 7. Hadoop Design Axioms. The system shall manage and heal itself. Performance shall scale linearly. Compute should move to data. Simple core, modular and extensible.
  • 8. HDFS: Hadoop Distributed File System. Block size = 64 MB; replication factor = 3. Cost/GB is a few ¢/month vs. $/month.
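These two numbers fully determine a file's physical footprint in HDFS. A minimal sketch of the arithmetic (the 1 GB file is a made-up example; 64 MB and 3× are the defaults quoted on the slide):

```python
import math

BLOCK_SIZE = 64 * 1024 * 1024  # HDFS default block size from the slide (64 MB)
REPLICATION = 3                # HDFS default replication factor from the slide

def hdfs_footprint(file_size_bytes):
    """Return (block_count, raw_bytes_stored) for a single file."""
    block_count = math.ceil(file_size_bytes / BLOCK_SIZE)
    raw_bytes = file_size_bytes * REPLICATION  # each block is stored 3 times
    return block_count, raw_bytes

# A hypothetical 1 GB file: 16 blocks, 3 GB of raw cluster capacity consumed.
blocks, raw = hdfs_footprint(1 * 1024**3)
print(blocks, raw // 1024**3)  # 16 3
```

The 3× raw overhead is what the economics slide later folds into its effective-capacity numbers.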
  • 10. MapReduce Example for Word Count. In SQL: SELECT word, COUNT(1) FROM docs GROUP BY word; as a Unix pipeline: cat *.txt | mapper.pl | sort | reducer.pl > out.txt. [Diagram] Splits of (docid, text) records feed map tasks 1…M, which emit (word, count) pairs; the shuffle delivers sorted (word, count) runs to reduce tasks 1…R, each writing an output file of (word, sum of counts). The example document "To Be Or Not To Be?" contributes partial counts for "Be" that the reduce phase sums.
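The slide's pipeline can be sketched in Python. The original mapper.pl and reducer.pl are not shown in the deck, so the bodies below are an assumed minimal word count, not the deck's actual scripts:

```python
from itertools import groupby

def mapper(lines):
    # Map phase: emit (word, 1) for every word in the input text.
    for line in lines:
        for word in line.split():
            yield word.lower().strip('.,!?"'), 1

def reducer(pairs):
    # Reduce phase: sum the counts for each word. Input must arrive
    # sorted by word, which is what the pipeline's `sort` provides.
    for word, group in groupby(pairs, key=lambda kv: kv[0]):
        yield word, sum(count for _, count in group)

doc = ["To Be Or Not To Be?"]      # the slide's example document
shuffled = sorted(mapper(doc))     # stands in for `sort` (the shuffle)
print(dict(reducer(shuffled)))     # {'be': 2, 'not': 1, 'or': 1, 'to': 2}
```

In real Hadoop Streaming the same two functions would read stdin and write tab-separated key/value lines, with the framework performing the sort between them.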
  • 11. Hadoop High-Level Architecture. Hadoop Client: contacts the Name Node for data, or the Job Tracker to submit jobs. Name Node: maintains the mapping of file blocks to data node slaves. Job Tracker: schedules jobs across task tracker slaves. Data Node: stores and serves blocks of data. Task Tracker: runs tasks (work units) within a job. Data Nodes and Task Trackers share the same physical nodes.
  • 12. Apache Hadoop Ecosystem. On top of HDFS (Hadoop Distributed File System) sits MapReduce (job scheduling/execution system, with Streaming/Pipes APIs), with Hive (SQL) and Pig (data flow) above it; HBase (key-value store), Avro (serialization), and ZooKeeper (coordination) sit alongside; Sqoop connects out to RDBMSes, ETL tools, and BI reporting.
  • 13.-16. Use the Right Tool for the Right Job (Relational Databases vs. Hadoop). Hadoop, when to use: affordable storage/compute; structured or not (agility); resilient auto-scalability. Relational databases, when to use: interactive reporting (…); multistep transactions; interoperability.
  • 17. Economics of Hadoop. Typical hardware: two quad-core Nehalems, 24 GB RAM, 12 × 1 TB SATA disks (JBOD mode, no need for RAID), 1 gigabit Ethernet card; cost/node: $5K. Effective HDFS space: ¼ reserved for temp shuffle space, which leaves 9 TB/node; 3-way replication leads to 3 TB effective HDFS space/node; but assuming 7× compression that becomes ~20 TB/node. Effective cost per user TB: $250/TB. Other solutions cost in the range of $5K to $100K per user TB.
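The slide's storage arithmetic can be replayed step by step (all numbers are the slide's own, including the assumed 7× compression ratio):

```python
raw_tb_per_node = 12        # 12 x 1 TB SATA disks
cost_per_node = 5_000       # $5K/node

usable_tb = raw_tb_per_node * 3 / 4  # 1/4 reserved for temp shuffle space -> 9 TB
hdfs_tb = usable_tb / 3              # 3-way replication -> 3 TB effective HDFS space
user_tb = hdfs_tb * 7                # assumed 7x compression -> 21 TB (~20 on the slide)

print(usable_tb, hdfs_tb, user_tb)   # 9.0 3.0 21.0
print(cost_per_node / 20)            # 250.0 -> $250 per user TB, using the slide's ~20 TB
```

The exact product is 21 TB; the slide rounds down to ~20 TB before dividing, which yields the quoted $250/TB.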
  • 18. Sample Talks from Hadoop World '09. VISA: Large-Scale Transaction Analysis. JP Morgan Chase: Data Processing for Financial Services. China Mobile: Data Mining Platform for the Telecom Industry. Rackspace: Cross-Data-Center Log Processing. Booz Allen Hamilton: Protein Alignment Using Hadoop. eHarmony: Matchmaking in the Hadoop Cloud. General Sentiment: Understanding Natural Language. Yahoo!: Social Graph Analysis. Visible Technologies: Real-Time Business Intelligence. Facebook: Rethinking the Data Warehouse with Hadoop and Hive. Slides and videos at http://www.cloudera.com/hadoop-world-nyc
  • 19. Cloudera Desktop
  • 20. ConclusionHadoop is a data grid operating system which provides an economically scalable solution for storing and processing large amounts of unstructured or structured data over long periods of time.
  • 21. Contact Information. Amr Awadallah, CTO, Cloudera ([email protected], http://twitter.com/awadallah). Online training videos and info: http://cloudera.com/hadoop-training, http://cloudera.com/blog, http://twitter.com/cloudera

Editor's Notes

  • #5: The solution is to *augment* the current RDBMSes with a “smart” storage/processing system. The original event level data is kept in this smart storage layer and can be mined as needed. The aggregate data is kept in the RDBMSes for interactive reporting and analytics.
  • #6: The system is self-healing in the sense that it automatically routes around failure: if a node fails, its workload and data are transparently shifted somewhere else. The system is intelligent in the sense that the MapReduce scheduler optimizes for the processing to happen on the same node storing the associated data (or co-located on the same leaf Ethernet switch); it also speculatively executes redundant tasks if certain nodes are detected to be slow. One of the key benefits of Hadoop is the ability to just upload any unstructured files to it without having to "schematize" them first: you can dump any type of data into Hadoop, and the input record readers will abstract it out as if it were structured (i.e., schema on read vs. on write). Open-source software allows for innovation by partners and customers; it also enables third-party inspection of source code, which provides assurances on security and product quality. 1 HDD = 75 MB/sec; 1,000 HDDs = 75 GB/sec: the "head of fileserver" bottleneck is eliminated.
  • #7: http://developer.yahoo.net/blogs/hadoop/2009/05/hadoop_sorts_a_petabyte_in_162.html; 100s of deployments worldwide (http://wiki.apache.org/hadoop/PoweredBy)
  • #8: Speculative Execution, Data rebalancing, Background Checksumming, etc.
  • #9: Pool commodity servers in a single hierarchical namespace. Designed for large files that are written once and read many times. The example here shows what happens with a replication factor of 3: each data block is present on at least 3 separate data nodes. A typical Hadoop node is eight cores with 16 GB RAM and four 1 TB SATA disks. The default block size is 64 MB, though most folks now set it to 128 MB.
  • #10: Differentiate between MapReduce the platform and MapReduce the programming model. The analogy is to an RDBMS, which executes the queries, versus SQL, which is the language for the queries. MapReduce can run on top of HDFS or a selection of other storage systems. Intelligent scheduling algorithms handle locality, sharing, and resource optimization.
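The platform-vs-model distinction in this note can be made concrete: the programming model fits in a few lines, while everything the platform adds (distribution, scheduling, fault tolerance, the on-disk shuffle) is deliberately absent from this toy, single-process sketch:

```python
from collections import defaultdict

def map_reduce(records, map_fn, reduce_fn):
    """A toy, in-memory MapReduce engine: map, shuffle by key, reduce."""
    shuffle = defaultdict(list)
    for record in records:
        for key, value in map_fn(record):   # map phase
            shuffle[key].append(value)      # shuffle: group values by key
    return {key: reduce_fn(key, values)     # reduce phase
            for key, values in shuffle.items()}

# Word count expressed purely in the model:
result = map_reduce(
    ["to be or not", "to be"],
    map_fn=lambda line: [(w, 1) for w in line.split()],
    reduce_fn=lambda word, counts: sum(counts),
)
print(result)  # {'to': 2, 'be': 2, 'or': 1, 'not': 1}
```

A user of the real platform writes only map_fn and reduce_fn; Hadoop supplies everything the function map_reduce stands in for here, at cluster scale.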
  • #11: Think: SELECT word, count(*) FROM documents GROUP BY word. Check out ParBASH: http://cloud-dev.blogspot.com/2009/06/introduction-to-parbash.html
  • #12: The Data Node slave and the Task Tracker slave can, and should, share the same server instance to leverage data locality whenever possible. The NameNode and JobTracker are currently SPOFs, which can affect the availability of the system by around 15 minutes (no data loss, though, so the system is reliable but can suffer from occasional downtime). That issue is currently being addressed by the Apache Hadoop community using ZooKeeper.
  • #13: HBase: low-latency random access with per-row consistency for updates/inserts/deletes. Java MapReduce: gives the most flexibility and performance, but with a potentially longer development cycle. Streaming MapReduce: allows you to develop in any language of your choice, but with slightly slower performance. Pig: a relatively new data-flow language (contributed by Yahoo!), suitable for ETL-like workloads (procedural multi-stage jobs). Hive: a SQL warehouse on top of MapReduce (contributed by Facebook) that translates SQL into MapReduce. Hive features: a subset of SQL covering the most common statements; agile data types (Array, Map, Struct, and JSON objects); user-defined functions and aggregates; regular-expression support; MapReduce support; JDBC support; partitions and buckets (for performance optimization). In the works: indices, columnar storage, views, MicroStrategy compatibility, Explode/Collect. More details: http://wiki.apache.org/hadoop/Hive. Query: SELECT, FROM, WHERE, JOIN, GROUP BY, SORT BY, LIMIT, DISTINCT, UNION ALL. Join: LEFT, RIGHT, FULL, OUTER, INNER. DDL: CREATE TABLE, ALTER TABLE, DROP TABLE, DROP PARTITION, SHOW TABLES, SHOW PARTITIONS. DML: LOAD DATA INTO, FROM INSERT. Types: TINYINT, INT, BIGINT, BOOLEAN, DOUBLE, STRING, ARRAY, MAP, STRUCT, JSON OBJECT. Query features: subqueries in FROM, user-defined functions, user-defined aggregates, sampling (TABLESAMPLE). Relational operators: IS NULL, IS NOT NULL, LIKE, REGEXP. Built-in aggregates: COUNT, MAX, MIN, AVG, SUM. Built-in functions: CAST, IF, REGEXP_REPLACE, … Other: EXPLAIN, MAP, REDUCE, DISTRIBUTE BY. List and map operators: array[i], map[k], struct.field
  • #14: A sports car is refined, accelerates very fast, and has a lot of add-ons/features, but it is pricey on a per-bit basis and expensive to maintain. A cargo train is rough, missing a lot of "luxury", and slow to accelerate, but it can carry almost anything, and once it gets going it can move a lot of stuff very economically. Hadoop: a data grid operating system; stores files (unstructured); stores 10s of petabytes; processes 10s of PB/job; weak consistency; scans all blocks in all files; queries and data processing; batch response (>1 sec). Relational databases: an ACID database system; stores tables (schema); stores 100s of terabytes; processes 10s of TB/query; transactional consistency; looks up rows using an index; mostly queries; interactive response. Hadoop myths: "Hadoop MapReduce requires rocket scientists": Hadoop has the benefit of both worlds, the simplicity of SQL and the power of Java (or any other language, for that matter). "Hadoop is not very efficient hardware-wise": Hadoop optimizes for scalability, stability, and flexibility rather than squeezing every tiny bit of hardware performance; it is more cost-efficient to throw more "pizza box" servers at a problem than to hire more engineers to manage, configure, and optimize the system, or to pay 10x the hardware cost in software. "Hadoop can't do quick random lookups": HBase enables low-latency key-value pair lookups (no fast joins). "Hadoop doesn't support updates/inserts/deletes": not for multi-row transactions, but HBase enables transactions with row-level consistency semantics. "Hadoop isn't highly available": though Hadoop rarely loses data, it can suffer from downtime if the master NameNode goes down; this issue is currently being addressed, and there are HW/OS/VM solutions for it. "Hadoop can't be backed up/recovered quickly": HDFS, like other file systems, can copy files very quickly, and it also has utilities to copy data between HDFS clusters. "Hadoop doesn't have security": Hadoop has Unix-style user/group permissions, and the community is working on improving its security model. "Hadoop can't talk to other systems": Hadoop can talk to BI tools using JDBC, to RDBMSes using Sqoop, and to other systems using FUSE, WebDAV & FTP.