Getting Started on Hadoop

Silicon Valley Cloud Computing Meetup
Mountain View, 2010-07-19
http://www.meetup.com/cloudcomputing/calendar/13911740/


Paco Nathan
@pacoid
http://ceteri.blogspot.com/


Examples of Hadoop Streaming, based on Python scripts
running on the AWS Elastic MapReduce service.

• first, a brief history…
• AWS Elastic MapReduce
• “WordCount” example as “Hello World” for MapReduce
• text mining Enron Email Dataset from Infochimps.com
• inverted index, semantic lexicon, social graph
• data visualization using R and Gephi

All source code for this talk is available at:

 http://github.com/ceteri/ceteri-mapred
How Does MapReduce Work?


  map(k1, v1) → list(k2, v2)
  reduce(k2, list(v2)) → list(v3)

Several phases, which partition a problem into many tasks:
 • load data into DFS…
 • map phase: input split → (key, value) pairs, with optional combiner
 • shuffle phase: sort on keys to group pairs… load-test your network!
 • reduce phase: each task receives the values for one key
 • pull data from DFS…
NB: “map” phase is required, the rest are optional.
Think of set operations on tuples (and check out Cascading.org).
Meanwhile, given all those (key, value) pairs listed above, it’s no
wonder that key/value stores have become such a popular topic of
conversation…
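
To make the shuffle concrete: conceptually, it's a sort followed by a
group-by on keys. Here's a tiny simulation in Python – an illustration
of the contract only, not how Hadoop actually implements it:

  # simulate map → shuffle → reduce over a list of (key, value) pairs
  from itertools import groupby
  from operator import itemgetter

  pairs = [("b", 1), ("a", 1), ("b", 1)]   # list(k2, v2) emitted by map
  pairs.sort(key=itemgetter(0))            # shuffle: sort on keys

  for key, group in groupby(pairs, key=itemgetter(0)):
      values = [v for _, v in group]       # reduce sees (k2, list(v2))
      print("%s -> %s" % (key, sum(values)))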
How Does MapReduce Work?


  map(k1, v1) → list(k2, v2)
  reduce(k2, list(v2)) → list(v3)

The property of data independence among tasks allows for highly
parallel processing… maybe, if the stars are all aligned :)

A MapReduce framework is primarily about fault tolerance, and how
to leverage “commodity hardware” to replace “big iron” solutions…

That phrase “big iron” might apply to Oracle + NetApp. Or perhaps an
IBM zSeries mainframe… Or something – expensive, undoubtedly.

Bonus questions for self-admitted math geeks: Foresee any concerns
about O(n) complexity, given the functional definitions listed above?
Keep in mind that each phase cannot conclude and progress to the
next phase until after each of its tasks has successfully completed.
A Brief History…

circa 1979 – Stanford, MIT, CMU, etc.
     set/list operations in LISP, Prolog, etc., for parallel processing
    http://www-formal.stanford.edu/jmc/history/lisp/lisp.htm

circa 2004 – Google
     MapReduce: Simplified Data Processing on Large Clusters
     Jeffrey Dean and Sanjay Ghemawat
    http://labs.google.com/papers/mapreduce.html

circa 2006 – Apache
     Hadoop, originating from the Nutch Project
     Doug Cutting
    http://research.yahoo.com/files/cutting.pdf

circa 2008 – Yahoo
     web scale search indexing
     Hadoop Summit, HUG, etc.
    http://developer.yahoo.com/hadoop/

circa 2009 – Amazon AWS
     Elastic MapReduce
     Hadoop modified for EC2/S3, plus support for Hive, Pig, etc.
    http://aws.amazon.com/elasticmapreduce/
Why run Hadoop in AWS?

• elastic: batch jobs on clusters can consume many nodes,
  scalable demand, not 24/7 – great case for using EC2
• commodity hardware: MR is built for fault tolerance, great
  case for leveraging AMIs
• right-sizing: difficult to know a priori how large of a cluster
  is needed – without running significant jobs (test k/v skew,
  data quality, etc.)
• when your input data is already in S3, SDB, EBS, RDS…
• when your output needs to be consumed in AWS …
You really don't want to buy rack space in a datacenter before
assessing these issues – besides, a private datacenter probably
won’t even be cost-effective afterward.
But why run Hadoop on Elastic MapReduce?

• virtualization: Hadoop needs some mods to run well in
 that kind of environment
• pay-per-drink: absorbs cost of launching nodes
• secret sauce: Cluster Compute Instances (CCI) and
 Spot Instances (SI)
• DevOps: EMR job flow mgmt optimizes where your staff
 spends their (limited) time+capital
• logging to S3, works wonders for troubleshooting
A Tale of Two Ventures…

Adknowledge: in 2008, our team became one of the larger
use cases running Hadoop on AWS
• prior to the launch of EMR
• launching clusters of up to 100 m1.xlarge
• initially 12 hrs/day, optimized down to 4 hrs/day
• displaced $3MM capex for Netezza

ShareThis: in 2009, our team used even more Hadoop
on AWS than that previous team
• this time with EMR
• larger/more frequent jobs
• lower batch failure rate
• faster turnaround on results
• excellent support
• smaller team required
• much less budget
“WordCount”, a “Hello World” for MapReduce

Definition: count how often each word appears within a collection
of text documents.

A simple program which makes a pretty good test case for what
MapReduce can perform, since it incorporates:

• a minimal amount of code
• document feature extraction (where words are “terms”)
• symbolic and numeric values
• potential use of a combiner
• bipartite graph of (doc, term) tuples
• not so many steps away from useful indexing…
When a framework can run “WordCount” in parallel at scale, then it
can handle much larger, more interesting compute problems as well.
Bipartite Graph

Wikipedia: “…a bipartite graph is a graph whose vertices can be divided
into two disjoint sets U and V such that every edge connects a vertex in U
to one in V… ”

     http://en.wikipedia.org/wiki/Bipartite_graph




Consider the case where:


   U ≡ { documents }

   V ≡ { terms }


Many kinds of text analytics products
can be constructed based on this
data structure as a foundation.
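
As a quick illustration (not from the talk's source code), here's a
minimal sketch of how a list of (doc, term) tuples induces both a
forward index and an inverted index – all names here are hypothetical:

  from collections import defaultdict

  edges = [("doc1", "energy"), ("doc1", "trading"), ("doc2", "energy")]

  doc_terms = defaultdict(set)   # U → V: forward index
  term_docs = defaultdict(set)   # V → U: inverted index

  for doc, term in edges:
      doc_terms[doc].add(term)
      term_docs[term].add(doc)

  print(term_docs["energy"])     # both docs mention "energy"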
“WordCount”, in other words…


  map(doc_id, text)
  → list(word, count)

  reduce(word, list(count))
   → list(sum_count)
“WordCount”, in other words…

  void map (String doc_id, String text):
   for each word w in segment(text):
    emitPartial(w, "1");



  void reduce (String word, Iterator partial_counts):
   int count = 0;

   for each pc in partial_counts:
    count += Int(pc);

   emitResult(String(count));
Hadoop Streaming

One way to approach MapReduce jobs in Hadoop is to use streaming.
In other words, use any kind of script which can be run from a command
line and read/write data via stdin and stdout:
    http://hadoop.apache.org/common/docs/current/streaming.html#Hadoop+Streaming


The following examples use Python scripts for Hadoop Streaming. One
really great benefit is that you can then dev/test/debug your MapReduce
code on small data sets from the command line, simply by using pipes:


     cat input.txt | mapper.py | sort | reducer.py

BTW, there are much better ways to handle Hadoop Streaming in Python
on Elastic MapReduce – for example, using the “boto” library. However,
these examples are kept simple so they’ll fit into a tech talk!
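
For reference, a minimal map_wc.py / red_wc.py pair might look like the
following. This is a sketch that assumes tab-separated (word, count)
pairs – see the GitHub repo above for the actual scripts:

  #!/usr/bin/env python
  # map_wc.py – emit a partial count of 1 for each word
  import sys

  for line in sys.stdin:
      for word in line.strip().split():
          print("%s\t1" % word)


  #!/usr/bin/env python
  # red_wc.py – sum partial counts; relies on sort grouping keys together
  import sys

  last_word, count = None, 0

  for line in sys.stdin:
      word, partial = line.rstrip("\n").split("\t")
      if last_word is not None and word != last_word:
          print("%s\t%d" % (last_word, count))
          count = 0
      last_word = word
      count += int(partial)

  if last_word is not None:
      print("%s\t%d" % (last_word, count))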
“WordCount”, in other words…

  # this Linux command line...
  cat foo.txt | map_wc.py | sort | red_wc.py


  # produces output like this...
  tuple      9
  term       6
  tfidf      6
  sort       5
  analysis   2
  wordcount  1
  user       1


  # depending on input -
  # which could be HTML content, tweets, email, etc.
Speaking of Email…

Enron pioneered innovative corporate accounting methods and energy market
manipulations, involving a baffling array of fraud techniques. The firm soared to
a valuation of over $60B (growing 56% in 1999, 87% in 2000) while inducing a
state of emergency in California – which cost the state over $40B. Subsequent
prosecution of top execs plus the meteoric decline in the firm’s 2001 share value
made for a spectacular #EPIC #FAIL
     http://en.wikipedia.org/wiki/Enron_scandal
     http://en.wikipedia.org/wiki/California_electricity_crisis


Thanks to CALO and Infochimps, we have a half million email messages
collected from Enron managers during their, um, “heyday” period:
     http://infochimps.org/datasets/enron-email-dataset--2
     http://www.cs.cmu.edu/~enron/


Let’s use Hadoop to help find out: what were some of
the things those managers were talking about?
Simple Text Analytics

Extending from how “WordCount” works, we’ll add multiple kinds of output
tuples, plus two stages of mappers and reducers, to generate different kinds
of text analytics products:

• inverted index
• co-occurrence analysis
• TF-IDF filter
• social graph

While doing that, we'll also perform other statistical
analysis and data visualization using R and Gephi
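
As a rough sketch of the overall shape, the two stages can be dry-run
locally with pipes, just like “WordCount” – file names other than
dat.parsed and dat.idf (which appear on later slides) are hypothetical,
and the real job flows run on Elastic MapReduce:

  cat uri_list.txt | map_parse.py | sort > dat.parsed
  cat dat.parsed | red_idf.py > dat.idf
  cat dat.idf | map_filter.py | sort | red_filter.py > lexicon.tsv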
Mapper 1: RFC822 Parser

map_parse.py takes a list of URIs for where to read email messages, parses
each message, then emits multiple kinds of output tuples:



   (doc_id, msg_uri, date)

   (sender, receiver, doc_id)

   (term, term_freq, doc_id)

   (term, co_term, doc_id)

Note that our dataset includes approximately 500,000 email messages, with an
average of about 100 words in each message.
Also, there are roughly 10E+5 unique terms; that count tends to be a
constant for English texts, which is great to know when configuring capacity.
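
Here's a minimal sketch of the parsing step, using Python's stdlib email
parser. The type tag on each tuple is an assumption (mirroring how
util_extract.py selects tuple kinds later), and the real map_parse.py
emits all four tuple kinds with more error handling:

  #!/usr/bin/env python
  # sketch of the RFC822 parse step – not the full map_parse.py
  import email, sys
  from email.utils import parseaddr

  for doc_id, uri in enumerate(sys.stdin):
      uri = uri.strip()
      msg = email.message_from_string(open(uri).read())
      sender = parseaddr(msg.get("From", ""))[1]
      print("d\t%d\t%s\t%s" % (doc_id, uri, msg.get("Date", "")))

      for field in ("To", "Cc"):
          for addr in (msg.get(field) or "").split(","):
              receiver = parseaddr(addr)[1]
              if receiver:
                  print("s\t%s\t%s\t%d" % (sender, receiver, doc_id))

      # (term, term_freq, doc_id) and (term, co_term, doc_id) tuples
      # would be emitted here as well – omitted for brevity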
Reducer 1: TF-IDF and Co-Occurrence

red_idf.py takes the shuffled output from map_parse.py, collects metadata
for each term, calculates TF-IDF to use in a later stage for filtering, calculates
co-occurrence probability, then emits all these results:



   (doc_id, msg_uri, date)

   (sender, receiver, doc_id)

   (term, idf, count)

   (term, co_term, prob_cooc)

   (term, tfidf, doc_id)

   (term, max_tfidf)
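
For the TF-IDF part, the arithmetic follows the standard definitions.
A sketch, where n_docs is the total document count and df is the number
of documents containing the term (the real red_idf.py also tracks
co-occurrence and per-document metadata):

  from math import log

  def tf_idf(term_freq, doc_len, df, n_docs):
      tf = float(term_freq) / doc_len    # normalized term frequency
      idf = log(float(n_docs) / df)      # inverse document frequency
      return tf * idf

  # e.g., a term appearing 5 times in a 100-word message, found in
  # 50 out of 500,000 messages:
  print(tf_idf(5, 100, 50, 500000))      # ≈ 0.46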
Mapper 1 + Reducer 1
Mapper 1 Output
Reducer 1 Output
Mapper 2 + Reducer 2: Threshold Filter

map_filter.py and red_filter.py apply a threshold (based on statistical
analysis of TF-IDF) to filter results of co-occurrence analysis so that we begin to
produce a semantic lexicon for exploring the data set.


How do we determine a reasonable value for the TF-IDF threshold, for filtering
terms? Sampling from the (term, max_tfidf) tuple, we run summary stats and
visualization in R:


   cat dat.idf | util_extract.py m > thresh.tsv


We also convert the sender/receiver social graph into CSV format for Gephi
visualization:


   cat dat.parsed | util_extract.py s | util_gephi.py | sort -u > graph.csv
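
The filter itself then reduces to a simple test against the threshold.
A sketch, with the threshold hard-coded rather than passed in, and only
the (term, tfidf, doc_id) tuples shown (the real map_filter.py and
red_filter.py also carry the co-occurrence tuples through):

  #!/usr/bin/env python
  # sketch of the threshold filter
  import sys

  T_THRESH = 0.063252   # 80th-percentile value found in R (next slides)

  for line in sys.stdin:
      term, tfidf, doc_id = line.rstrip("\n").split("\t")
      if float(tfidf) >= T_THRESH:
          print("%s\t%s\t%s" % (term, tfidf, doc_id))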
Mapper 2 + Reducer 2
Elastic MapReduce Job Flows…
Elastic MapReduce Output Part Files…
Exporting Data into Other Tools…
Using R to Determine a Threshold…

  data <- read.csv("thresh.tsv", sep='\t', header=F)
  t_data <- data[,3]
  print(summary(t_data))

  # pass through values for 80+ percentile
  qntile <- .8
  t_thresh <- quantile(t_data, qntile)

  # CDF plot
  title <- "CDF threshold max(tfidf)"
  xtitle <- paste("thresh:", t_thresh)
  par(mfrow=c(2, 1))
  plot(ecdf(t_data), xlab=xtitle, main=title)
  abline(v=t_thresh, col="red")
  abline(h=qntile, col="yellow")

  # box-and-whisker plot
  boxplot(t_data, horizontal=TRUE)
  rug(t_data, side=1)
Using R to Determine a Threshold…
[Figure: CDF plot of max(tfidf), with the 80th-percentile threshold
marked at 0.063252, above a box-and-whisker plot of the same distribution]
Using Gephi to Explore the Social Graph…
Best Practices

• Again, there are much more efficient ways to handle Hadoop Streaming
  and Text Analytics…
• Unit Tests, Continuous Integration, etc., – all great stuff, but “Big Data”
  software engineering requires additional steps
• Sample data, measure data ratios and cluster behaviors, analyze in R,
  visualize everything you can, calibrate any necessary “magic numbers”
• Develop and test code on a personal computer in an IDE, cmd line, etc., using
  minimal data sets
• Deploy to a staging cluster with larger data sets for integration tests and QA
• Run in production with A/B testing where feasible to evaluate changes
  quantitatively
• Learn from others at meetups, unconfs, forums, etc.
Great Resources for Diving into Hadoop

Google: Cluster Computing and MapReduce Lectures
    http://code.google.com/edu/submissions/mapreduce-minilecture/listing.html


Amazon AWS Elastic MapReduce
    http://aws.amazon.com/elasticmapreduce/


Hadoop: The Definitive Guide, by Tom White
    http://oreilly.com/catalog/9780596521981


Apache Hadoop
    http://hadoop.apache.org/


Python “boto” interface to EMR
    http://boto.cloudhackers.com/emr_tut.html
Excellent Products for Hadoop in Production

Datameer
     http://www.datameer.com/
     “Democratizing Big Data”

Designed for business users, Datameer Analytics Solution (DAS) builds
on the power and scalability of Apache Hadoop to deliver an easy-to-use
and cost-effective solution for big data analytics. The solution integrates
rapidly with existing and new data sources to deliver sophisticated
analytics.


Cascading
     http://www.cascading.org/

Cascading is a feature-rich API for defining and executing complex,
scale-free, and fault tolerant data processing workflows on a Hadoop
cluster, which provides a thin Java library that sits on top of Hadoop's
MapReduce layer. Open source in Java.
Scale Unlimited – Hadoop Boot Camp

Santa Clara, 22-23 July 2010
http://www.scaleunlimited.com/courses/hadoop-bootcamp-santaclara

• An extensive overview of the Hadoop architecture
• Theory and practice of solving large scale data processing problems
• Hands-on labs covering Hadoop installation, development, debugging
• Common and advanced “Big Data” tasks and solutions
Special $500 discount for SVCC Meetup members:
http://hadoopbootcamp.eventbrite.com/?discount=DBDatameer


Sample material – list of questions about intro Hadoop from
the recent BigDataCamp:
http://www.scaleunlimited.com/blog/intro-to-hadoop-at-bigdatacamp
Getting Started on Hadoop

Silicon Valley Cloud Computing Meetup
Mountain View, 2010-07-19
http://www.meetup.com/cloudcomputing/calendar/13911740/


Paco Nathan
@pacoid
http://ceteri.blogspot.com/


Examples of Hadoop Streaming, based on Python scripts
running on the AWS Elastic MapReduce service.

All source code for this talk is available at:

 http://github.com/ceteri/ceteri-mapred
