Amr Awadallah CTO, Cloudera, Inc. August 5, 2009 How Hadoop Revolutionized Data Warehousing at Yahoo and Facebook
Outline Problems We Wanted to Solve What is Hadoop? HDFS and MapReduce Access Languages for Hadoop Hadoop vs RDBMSes Conclusion
Our Older Systems Limited Raw Data Access Storage Farm for Unstructured Data (20TB/day) Instrumentation Collection RDBMS (200GB/day) BI / Reports Mostly Append Ad hoc Queries & Data Mining ETL Grid Non-Consumption Filer heads are a bottleneck
We Needed To Be More Agile (part 1) Data Errors and Reprocessing We encountered data errors that required reprocessing, which could happen long after the fact. “Tape data” was cost-prohibitive to reprocess, so we needed to retain raw data online for long periods. Conformation Loss Converting data from its raw format into conformed dimensions loses some information. We needed access to the original data to recover lost information whenever needed (e.g. a new browser user agent). Shrinking ETL Window The storage filers for raw data became a significant bottleneck as large amounts of data had to be copied to the ETL grid for processing (e.g. 30 hours to process a day’s worth of data). Ad Hoc Queries on Raw Data We wanted to run ad hoc queries against the original raw event data, but the storage filers only store; they can’t compute
We Needed To Be More Agile (part 2) Data Model Agility: Schema-on-Read vs Schema-on-Write We wanted to access data even if it had no schema yet; frequently a new product or feature would launch, but we couldn’t build its dashboards because its schemas weren’t defined yet. Schema-on-Read is slower in machine time (due to read overhead), but it lets us evolve in an agile way; we then materialize to relational datamarts once the data model stabilizes. Consolidated Repository and Ubiquitous Access We wanted to eliminate borders and have a single repository where anybody can store, join, and process any of our data bits. Beyond Reporting (Data-as-Product) Last, but not least, we wanted to process the data in ways that feed directly into the product/business (e.g. email spam filtering, ad targeting, collaborative filtering, multimedia processing)
The Solution: A Store-Compute Grid Storage + Computation Instrumentation Collection RDBMS Interactive Apps “ Batch” Apps Mostly Append ETL and Aggregations Ad hoc Queries & Data Mining
What is Hadoop? A scalable fault-tolerant  grid operating system  for data storage and processing Its scalability comes from the marriage of: HDFS:  Self-Healing High-Bandwidth Clustered Storage MapReduce:  Fault-Tolerant Distributed Processing Operates on  unstructured and structured data A  large and active ecosystem  (many developers and additions like HBase, Hive, Pig, …) Open source  under the friendly  Apache License http://wiki.apache.org/hadoop/
Hadoop History 2002-2004: Doug Cutting and Mike Cafarella started working on Nutch (a web-scale crawler-based search system) 2003-2004: Google publishes the GFS and MapReduce papers 2004: Cutting adds DFS & MapReduce support to Nutch 2006: Yahoo! hires Cutting, Hadoop spins out of Nutch 2007: NY Times converts 4TB of archives using 100 Amazon EC2 instances 2008: Web-scale deployments at Y!, Facebook, Last.fm April 2008: Fastest sort of a TB, 3.5 minutes over 910 nodes May 2009: Fastest sort of a TB, 62 seconds over 1,460 nodes; sorted a PB in 16.25 hours over 3,658 nodes 100s of deployments worldwide ( http://wiki.apache.org/hadoop/PoweredBy ) June 2009: Hadoop Summit 2009 – 750 attendees
Hadoop Design Axioms System Shall Manage and Heal Itself Performance Shall Scale Linearly  Compute Should Move to Data Simple Core, Modular and Extensible
HDFS: Hadoop Distributed File System Block Size = 64MB Replication Factor = 3 Cost/GB is a few ¢/month vs $/month
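To make these defaults concrete, here is a small back-of-the-envelope sketch (plain Python, not an HDFS API; the function name is made up for illustration) of what the 64MB block size and 3x replication imply for a file's storage footprint:

```python
BLOCK_SIZE = 64 * 1024 * 1024   # HDFS default block size at the time: 64 MB
REPLICATION = 3                 # HDFS default replication factor

def hdfs_footprint(file_bytes, block_size=BLOCK_SIZE, replication=REPLICATION):
    """Return (number of blocks, total raw bytes stored across the cluster).

    HDFS does not pad the final, partial block, so raw usage is simply
    the file size times the replication factor.
    """
    blocks = -(-file_bytes // block_size)   # ceiling division
    return blocks, file_bytes * replication

# e.g. a 1 GB log file occupies 16 blocks and 3 GB of raw disk cluster-wide.
print(hdfs_footprint(1024**3))
```

The 3x raw overhead is part of why the cents-per-GB commodity-disk economics on this slide matter.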
MapReduce: Distributed Processing
MapReduce Example for Word Count cat *.txt | mapper.pl | sort | reducer.pl > out.txt Split 1 Split i Split N Map 1 (docid, text) (docid, text) Map i (docid, text) Map M Reduce 1 Output File 1 (sorted words,  sum of  counts) Reduce i Output File i (sorted words,  sum of  counts) Reduce R Output File R (sorted words,  sum of  counts) (words, counts) (sorted words, counts) Map (in_key, in_value) => list of (out_key, intermediate_value) Reduce (out_key, list of intermediate_values) => out_value(s) Shuffle (words, counts) (sorted words, counts) “ To Be Or Not To Be?” Be, 5 Be, 12 Be, 7 Be, 6 Be, 30
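The `cat *.txt | mapper.pl | sort | reducer.pl` pipeline above can be mimicked locally in a few lines of plain Python. This is a sketch of the programming model only, not Hadoop's actual Java API: `sort` plays the role of the shuffle, grouping equal keys so each reducer sees all counts for one word:

```python
import itertools
import re

def mapper(doc):
    # Map: (docid, text) -> list of (word, 1) pairs
    for word in re.findall(r"\w+", doc.lower()):
        yield (word, 1)

def reducer(word, counts):
    # Reduce: (word, [1, 1, ...]) -> (word, total)
    return (word, sum(counts))

def map_reduce(docs):
    # "Shuffle": sort the intermediate pairs so equal keys are adjacent,
    # exactly what `sort` does in the shell pipeline above.
    pairs = sorted(kv for doc in docs for kv in mapper(doc))
    return dict(reducer(word, [c for _, c in group])
                for word, group in itertools.groupby(pairs, key=lambda kv: kv[0]))

print(map_reduce(["to be or not to be", "be"]))
# → {'be': 3, 'not': 1, 'or': 1, 'to': 2}
```

On a real cluster the same map and reduce functions run in parallel across the M map and R reduce tasks shown on the slide.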
Hadoop Is More Than Just Analytics/BI Building the Web Search Index Processing News/Content Feeds Content/Ad Targeting Optimization Fraud Detection and Fighting Email Spam Facebook Lexicon: Trends of words on walls Collaborative Filtering (you might like) Batch Video/Image Transcoding Gene Sequence Alignment
Apache Hadoop Ecosystem HDFS (Hadoop Distributed File System) HBase (Key-Value store) MapReduce (Job Scheduling/Execution System) Pig (Data Flow) Hive (SQL) BI Reporting ETL Tools Avro (Serialization) ZooKeeper (Coordination) Sqoop RDBMS
Hadoop Development Languages Java MapReduce Gives the most flexibility and performance, but with a potentially longer development cycle Streaming MapReduce Allows you to develop in any language of your choice, but with slightly slower performance Pig A relatively new data-flow language contributed by Yahoo, suitable for ETL-like workloads (procedural multi-stage jobs) Hive A SQL warehouse on top of MapReduce (contributed by Facebook). It has two main components: a metastore, which keeps the schema for files, and an interpreter, which converts the SQL query into MapReduce
Hive Features A subset of SQL covering the most common statements Agile data types: Array, Map, Struct, and JSON objects User Defined Functions and Aggregates Regular Expression support MapReduce support JDBC support Partitions and Buckets (for performance optimization) In The Works: Indices, Columnar Storage, Views, Microstrategy compatibility, Explode/Collect More details:  http://wiki.apache.org/hadoop/Hive
Hadoop vs. Relational Databases Relational Databases: An ACID database system; stores tables (schema); stores 100s of terabytes; processes 10s of TB/query; transactional consistency; looks up rows using an index; mostly queries; interactive response. Hadoop: A data grid operating system; stores files (unstructured); stores 10s of petabytes; processes 10s of PB/job; weak consistency; scans all blocks in all files; queries & data processing; batch response (>1 sec)
Relational Databases: Hadoop: Use The Right Tool For The Right Job
Hadoop Criticisms (part 1) Hadoop MapReduce requires rocket scientists Hadoop has the benefit of both worlds: the simplicity of SQL and the power of Java (or any other language, for that matter) Hadoop is not very efficient hardware-wise Hadoop optimizes for scalability, stability, and flexibility rather than squeezing out every tiny bit of hardware performance It is more cost-efficient to throw more “pizza box” servers at a problem than to hire more engineers to manage, configure, and optimize the system, or to pay 10x the hardware cost in software Hadoop can’t do quick random lookups HBase enables low-latency key-value pair lookups (no fast joins) Hadoop doesn’t support updates/inserts/deletes Not for multi-row transactions, but HBase enables transactions with row-level consistency semantics
Hadoop Criticisms (part 2) Hadoop isn’t highly available Though Hadoop rarely loses data, it can suffer downtime if the master NameNode goes down. This issue is currently being addressed, and there are HW/OS/VM solutions for it Hadoop can’t be backed up/recovered quickly HDFS, like other file systems, can copy files very quickly. It also has utilities to copy data between HDFS clusters Hadoop doesn’t have security Hadoop has Unix-style user/group permissions, and the community is working on improving its security model Hadoop can’t talk to other systems Hadoop can talk to BI tools using JDBC, to RDBMSes using Sqoop, and to other systems using FUSE, WebDAV & FTP
Conclusion Hadoop is a  data grid operating system  which  augments  current BI systems and improves their  agility  by providing an  economically scalable  solution  for  storing and processing large amounts  of  unstructured data  over  long periods of time
Contact Information If you have further questions or comments: Amr Awadallah CTO, Cloudera Inc. [email_address] 650-362-0488 twitter.com/awadallah twitter.com/cloudera
APPENDIX
Hadoop High-Level Architecture Name Node Maintains mapping of file blocks to data node slaves Job Tracker Schedules jobs across task tracker slaves Data Node Stores and serves  blocks of data Hadoop Client Contacts Name Node for data or Job Tracker to submit jobs Task Tracker Runs tasks (work units) within a job Share Physical Node
Editor's Notes

  • #6: Data-As-Product is also referred to as Active DW, Operational BI, Online BI, etc.
  • #7: The solution is to *augment* the current RDBMSes with a “smart” storage/processing system. The original event level data is kept in this smart storage layer and can be mined as needed. The aggregate data is kept in the RDBMSes for interactive reporting and analytics.
  • #8: The system is self-healing in the sense that it automatically routes around failure: if a node fails, its workload and data are transparently shifted somewhere else. The system is intelligent in the sense that the MapReduce scheduler optimizes for processing to happen on the same node that stores the associated data (or one co-located on the same leaf Ethernet switch); it also speculatively executes redundant tasks if certain nodes are detected to be slow. One of the key benefits of Hadoop is the ability to upload any unstructured files to it without having to “schematize” them first. You can dump any type of data into Hadoop, and the input record readers will abstract it as if it were structured (i.e. schema on read vs. on write). Open source software allows for innovation by partners and customers. It also enables third-party inspection of source code, which provides assurances on security and product quality. 1 HDD = 75 MB/sec; 1,000 HDDs = 75 GB/sec: the “head of fileserver” bottleneck is eliminated.
  • #9: http://developer.yahoo.net/blogs/hadoop/2009/05/hadoop_sorts_a_petabyte_in_162.html
  • #10: Speculative Execution, Data rebalancing, Background Checksumming, etc.
  • #11: Pool commodity servers in a single hierarchical namespace. Designed for large files that are written once and read many times. Example here shows what happens with a replication factor of 3, each data block is present in at least 3 separate data nodes. Typical Hadoop node is eight cores with 16GB ram and four 1TB SATA disks. Default block size is 64MB, though most folks now set it to 128MB
  • #12: Differentiate between MapReduce the platform and MapReduce the programming model. The analogy is to the RDBMS, which executes the queries, and SQL, which is the language for the queries. MapReduce can run on top of HDFS or a selection of other storage systems. Intelligent scheduling algorithms for locality, sharing, and resource optimization.
  • #13: Think: SELECT word, count(*) FROM documents GROUP BY word. Check out ParBASH: http://cloud-dev.blogspot.com/2009/06/introduction-to-parbash.html
  • #14: Other uses like face recognition, document discovery, OCR, gene sequence alignment, etc. Data Mining: ** Search and Text Analytics ** Clustering/Categorization ** Modeling/Machine Learning ** Optimization/Operations Research ** Response Prediction/Forecasting ** Simulation, Monte-Carlo like. ** Random Walks of Connectivity Graphs
  • #15: HBase: Low Latency Random-Access with per-row consistency for updates/inserts/deletes
  • #16: First bullet is like assembly, then it gets higher level from there.
  • #17: Query: SELECT, FROM, WHERE, JOIN, GROUP BY, SORT BY, LIMIT, DISTINCT, UNION ALL Join: LEFT, RIGHT, FULL, OUTER, INNER DDL: CREATE TABLE, ALTER TABLE, DROP TABLE, DROP PARTITION, SHOW TABLES, SHOW PARTITIONS DML: LOAD DATA INTO, FROM INSERT Types: TINYINT, INT, BIGINT, BOOLEAN, DOUBLE, STRING, ARRAY, MAP, STRUCT, JSON OBJECT Query: Subqueries in FROM, User Defined Functions, User Defined Aggregates, Sampling (TABLESAMPLE) Relational: IS NULL, IS NOT NULL, LIKE, REGEXP Built in aggregates: COUNT, MAX, MIN, AVG, SUM Built in functions: CAST, IF, REGEXP_REPLACE, … Other: EXPLAIN, MAP, REDUCE, DISTRIBUTE BY List and Map operators: array[i], map[k], struct.field
  • #18: Hadoop is good for storing and processing large amounts of unstructured or structured data in batch form (i.e. full table scans) Hadoop with HBASE (or Hypertable) can do inserts/updates/deletes with reasonable interactive response times (also see Cassandra).
  • #19: The sports car is refined, accelerates very fast, and has a lot of add-ons/features, but it is pricey on a per-bit basis and expensive to maintain. The cargo train is rough, missing a lot of functionality, and slow to start, but once it gets going it can carry a lot of stuff very economically.
  • #20: Hadoop is efficient on a cost basis. Security: Need better integration with systems like LDAP or Kerberos. Also need better isolation against malicious users, though auditing can potentially catch those.
  • #25: The Data Node slave and the Task Tracker slave can, and should, share the same server instance to leverage data locality whenever possible.