INTRODUCTION
HADOOP
ADMINISTRATION
edubodhi
HADOOP A DISTRIBUTED
FILE SYSTEM
1. Introduction: Hadoop’s history and advantages
2. Architecture in detail
3. Hadoop in industry
Apache top level project, open-source implementation of frameworks for reliable,
scalable, distributed computing and data storage.
It is a flexible, highly available architecture for large-scale computation and data
processing on a network of commodity hardware.
BRIEF HISTORY OF HADOOP
Designed to answer the question: “How to process big data
with reasonable cost and time?”
SEARCH ENGINES IN THE 1990s
(timeline slides showing 1996, 1997, 1998 and 2013)
Doug Cutting
2005: Doug Cutting and Michael J. Cafarella developed
Hadoop to support distribution for the Nutch search engine
project.
The project was funded by Yahoo.
2006: Yahoo gave the project to Apache
Software Foundation.
GOOGLE ORIGINS
(timeline slide: 2003, 2004, 2006, the years of Google's GFS, MapReduce and BigTable papers)
SOME HADOOP
MILESTONES
• 2008 - Hadoop Wins Terabyte Sort Benchmark (sorted 1 terabyte of data
in 209 seconds, compared to previous record of 297 seconds)
• 2009 - Avro and Chukwa became new members of Hadoop Framework
family
• 2010 - Hadoop's HBase, Hive and Pig subprojects completed, adding more
computational power to Hadoop framework
• 2011 - ZooKeeper Completed
• 2013 - Hadoop 1.1.2 and Hadoop 2.0.3 alpha.
- Ambari, Cassandra, Mahout have been added
WHAT IS HADOOP?
• Hadoop:
• an open-source software framework that supports data-intensive
distributed applications, licensed under the Apache v2 license.
• Goals / Requirements:
• Abstract and facilitate the storage and processing of large
and/or rapidly growing data sets
• Structured and non-structured data
• Simple programming models
• High scalability and availability
• Use commodity (cheap!) hardware with little redundancy
• Fault-tolerance
• Move computation rather than data
HADOOP FRAMEWORK
TOOLS
HADOOP ARCHITECTURE
• Distributed, with some centralization
• Main nodes of cluster are where most of the computational power and
storage of the system lies
• Main nodes run TaskTracker to accept and reply to MapReduce tasks,
and also DataNode to store needed blocks as close as possible
• Central control node runs NameNode to keep track of HDFS directories
& files, and JobTracker to dispatch compute tasks to TaskTracker
• Written in Java; via Hadoop Streaming it also supports languages such as Python and Ruby
HADOOP ARCHITECTURE (diagram)
HADOOP ARCHITECTURE
• Hadoop Distributed Filesystem
• Tailored to needs of MapReduce
• Targeted towards many reads of filestreams
• Writes are more costly
• High degree of data replication (3x by default)
• No need for RAID on normal nodes
• Large block size (64 MB by default; see the configuration sketch after this list)
• Location awareness of DataNodes in network
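A minimal sketch of these two defaults as client-side code, assuming the classic 0.20-era configuration keys (the same values can equally live in hdfs-site.xml):

    import org.apache.hadoop.conf.Configuration;

    // Hedged sketch: restating the HDFS defaults above in code.
    // "dfs.replication" and "dfs.block.size" are the 0.20-era key names.
    public class HdfsDefaultsSketch {
        public static Configuration hdfsConf() {
            Configuration conf = new Configuration();
            conf.setInt("dfs.replication", 3);                 // 3x replication
            conf.setLong("dfs.block.size", 64L * 1024 * 1024); // 64 MB blocks
            return conf;
        }
    }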
HADOOP ARCHITECTURE
NameNode:
• Stores metadata for the files, like the directory structure of a
typical FS.
• The server holding the NameNode instance is quite crucial, as
there is only one.
• Transaction log for file deletes/adds, etc. Does not use
transactions for whole blocks or file-streams, only metadata.
• Handles creation of more replica blocks when necessary after a
DataNode failure
HADOOP ARCHITECTURE
DataNode:
• Stores the actual data in HDFS
• Can run on any underlying filesystem (ext3/4, NTFS, etc)
• Notifies NameNode of what blocks it has
• NameNode drives replication: with the default 3x policy, one replica on the writer's rack and two on a remote rack (see Block Placement later)
HADOOP ARCHITECTURE
MAPREDUCE ENGINE
HADOOP ARCHITECTURE (diagram)
HADOOP ARCHITECTURE
MapReduce Engine:
• JobTracker & TaskTracker
• JobTracker splits up data into smaller tasks (“Map”) and sends them to
the TaskTracker process in each node
• TaskTracker reports job progress back to the JobTracker node, sends
data (“Reduce”) or requests new jobs (a minimal client-side configuration sketch follows)
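As a minimal sketch, assuming 0.20-era key names and made-up hostnames, this is how a client is pointed at the central NameNode and JobTracker:

    import org.apache.hadoop.mapred.JobConf;

    // Hedged sketch: hostnames are assumptions; the keys are the classic
    // 0.20-era names for the NameNode and JobTracker addresses.
    public class ClusterConfSketch {
        public static JobConf clusterConf() {
            JobConf conf = new JobConf();
            conf.set("fs.default.name", "hdfs://namenode-host:9000");  // HDFS
            conf.set("mapred.job.tracker", "jobtracker-host:9001");    // MapReduce
            return conf;
        }
    }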
HADOOP ARCHITECTURE
• None of these components are necessarily limited to using HDFS
• Many other distributed file systems with quite different
architectures also work
• Many other software packages besides Hadoop's MapReduce
platform make use of HDFS
HADOOP IN THE WILD
• Hadoop is in use at most organizations that handle big data:
o Yahoo!
o Facebook
o Amazon
o Netflix
o Etc…
• Some examples of scale:
o Yahoo!’s Search Webmap runs on a 10,000-core Linux cluster
and powers Yahoo! Web search
o FB’s Hadoop cluster hosts 100+ PB of data (July, 2012) &
growing at ½ PB/day (Nov, 2012)
HADOOP IN THE WILD
Three main applications of Hadoop:
• Advertisement (mining user behavior to generate
recommendations)
• Search (grouping related documents)
• Security (searching for uncommon patterns)
HADOOP IN THE WILD:
FACEBOOK MESSAGES
• Design requirements:
o Integrate display of email, SMS and chat
messages between pairs and groups of
users
o Strong user control over whom they
receive messages from
o Suited for production use by 500
million people immediately after launch
o Stringent latency & uptime
requirements
HADOOP IN THE WILD
• System requirements
o High write throughput
o Cheap, elastic storage
o Low latency
o High consistency (within a
single data center is good
enough)
o Disk-efficient sequential and
random read performance
HADOOP IN THE WILD
• Classic alternatives
o These requirements were typically met using a large MySQL cluster and
a Memcached caching tier
o Content on HDFS could be loaded into MySQL or Memcached if
needed by web tier
• Problems with previous solutions
o MySQL has low random write throughput… BIG problem for
messaging!
o Difficult to scale MySQL clusters rapidly while maintaining
performance
o MySQL clusters have high management overhead, require more
expensive hardware
HADOOP IN THE WILD
• Facebook’s solution
o Hadoop + HBase as foundations
o Improve & adapt HDFS and HBase to scale to FB’s workload and
operational considerations
 Major concern was availability: NameNode is SPOF & failover times
are at least 20 minutes
 Proprietary “AvatarNode”: eliminates SPOF, makes HDFS safe to
deploy even with 24/7 uptime requirement
 Performance improvements for the realtime workload: on RPC timeout,
fail fast and try a different DataNode
DATA NODE
▪ A Block Server
▪ Stores data in the local file system
▪ Stores metadata of a block (e.g. checksum)
▪ Serves data and meta-data to clients
▪ Block Report
▪ Periodically sends a report of all existing blocks to
NameNode
▪ Facilitate Pipelining of Data
▪ Forwards data to other specified DataNodes
BLOCK PLACEMENT
▪ Replication Strategy
▪ One replica on local node
▪ Second replica on a remote rack
▪ Third replica on same remote rack
▪ Additional replicas are randomly placed
▪ Clients read from the nearest replica (an illustrative placement sketch follows)
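A self-contained toy sketch of the strategy above; "rackOf" and the node lists are illustrative stand-ins, not Hadoop's actual BlockPlacementPolicy API, and it assumes replication >= 3 and at least one node on a remote rack:

    import java.util.*;

    // Toy model of the default replica placement described above.
    class ReplicaPlacementSketch {
        static List<String> chooseTargets(String writer,
                                          Map<String, String> rackOf,  // node -> rack
                                          List<String> liveNodes,
                                          int replication) {
            List<String> targets = new ArrayList<>();
            Random rnd = new Random();
            targets.add(writer);                               // 1st: local node
            String localRack = rackOf.get(writer);
            List<String> remote = new ArrayList<>();
            for (String n : liveNodes)
                if (!rackOf.get(n).equals(localRack)) remote.add(n);
            String second = remote.get(rnd.nextInt(remote.size()));
            targets.add(second);                               // 2nd: remote rack
            for (String n : liveNodes)                         // 3rd: same remote rack
                if (rackOf.get(n).equals(rackOf.get(second)) && !n.equals(second)) {
                    targets.add(n);
                    break;
                }
            while (targets.size() < replication) {             // extras: random
                String n = liveNodes.get(rnd.nextInt(liveNodes.size()));
                if (!targets.contains(n)) targets.add(n);
            }
            return targets;
        }
    }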
DATA CORRECTNESS
▪ Use Checksums to validate data – CRC32
▪ File Creation
▪ Client computes a checksum per 512 bytes
▪ DataNode stores the checksum
▪ File Access
▪ Client retrieves the data and checksum from DataNode
▪ If validation fails, the client tries other replicas (see the sketch below)
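A minimal standalone sketch of this scheme using java.util.zip.CRC32, one checksum per 512-byte chunk; it only illustrates the idea and is not the actual org.apache.hadoop.fs checksum code:

    import java.util.Arrays;
    import java.util.zip.CRC32;

    // One CRC32 per 512-byte chunk, recomputed and compared on read.
    class ChecksumSketch {
        static final int BYTES_PER_CHECKSUM = 512;

        static long[] checksum(byte[] data) {
            int chunks = (data.length + BYTES_PER_CHECKSUM - 1) / BYTES_PER_CHECKSUM;
            long[] sums = new long[chunks];
            CRC32 crc = new CRC32();
            for (int i = 0; i < chunks; i++) {
                crc.reset();
                int off = i * BYTES_PER_CHECKSUM;
                crc.update(data, off, Math.min(BYTES_PER_CHECKSUM, data.length - off));
                sums[i] = crc.getValue();
            }
            return sums;
        }

        // A mismatch on read means: try another replica.
        static boolean verify(byte[] data, long[] expected) {
            return Arrays.equals(checksum(data), expected);
        }
    }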
INTER PROCESS COMMUNICATION
IPC/RPC (ORG.APACHE.HADOOP.IPC)
▪ Protocol
▪ JobClient <-------------> JobTracker : JobSubmissionProtocol
▪ TaskTracker <-------------> JobTracker : InterTrackerProtocol
▪ TaskTracker <-------------> Child : TaskUmbilicalProtocol
▪ JobTracker implements both protocols and acts as the server in
both IPCs
▪ TaskTracker implements the TaskUmbilicalProtocol; Child gets
task information and reports task status through it (a hedged example of
defining such a protocol follows)
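For flavor, a minimal custom protocol wired up in the same 0.20-era org.apache.hadoop.ipc style as the tracker protocols; "PingProtocol" is invented for illustration and is not part of Hadoop:

    import java.io.IOException;
    import java.net.InetSocketAddress;
    import org.apache.hadoop.conf.Configuration;
    import org.apache.hadoop.ipc.RPC;
    import org.apache.hadoop.ipc.VersionedProtocol;

    // Made-up protocol, registered the same way as JobSubmissionProtocol etc.
    interface PingProtocol extends VersionedProtocol {
        long versionID = 1L;
        String ping(String msg) throws IOException;
    }

    class PingServer implements PingProtocol {
        public String ping(String msg) { return "pong: " + msg; }
        public long getProtocolVersion(String protocol, long clientVersion) {
            return versionID;
        }
        public static void main(String[] args) throws IOException {
            Configuration conf = new Configuration();
            RPC.Server server = RPC.getServer(new PingServer(), "0.0.0.0", 9000, conf);
            server.start();                                    // server side
            PingProtocol proxy = (PingProtocol) RPC.getProxy(  // client side
                    PingProtocol.class, PingProtocol.versionID,
                    new InetSocketAddress("localhost", 9000), conf);
            System.out.println(proxy.ping("hello"));
        }
    }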
JOBCLIENT.SUBMITJOB - 1
▪ Check input and output, e.g. check whether the output directory
already exists
▪ job.getInputFormat().validateInput(job);
▪ job.getOutputFormat().checkOutputSpecs(fs, job);
▪ Get InputSplits, sort, and write output to HDFS
▪ InputSplit[] splits = job.getInputFormat().
getSplits(job, job.getNumMapTasks());
▪ writeSplitsFile(splits, out); // out is $SYSTEMDIR/$JOBID/job.split
JOBCLIENT.SUBMITJOB - 2
▪ The jar file and configuration file will be uploaded to HDFS system
directory
▪ job.write(out); // out is $SYSTEMDIR/$JOBID/job.xml
▪ JobStatus status = jobSubmitClient.submitJob(jobId);
▪ This is an RPC invocation; jobSubmitClient is a proxy created during initialization
DATA PIPELINING
▪ Client retrieves a list of DataNodes on which to place replicas of a block
▪ Client writes block to the first DataNode
▪ The first DataNode forwards the data to the next DataNode in the
Pipeline
▪ When all replicas are written, the client moves on to write the next
block in the file
HADOOP MAPREDUCE
▪ MapReduce programming model
▪ Framework for distributed processing of large data sets
▪ Pluggable user code runs in generic framework
▪ Common design pattern in data processing
▪ cat * | grep | sort | uniq -c | cat > file
▪ input | map | shuffle | reduce | output (a classic WordCount sketch in the old mapred API follows)
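To make the model concrete, a minimal WordCount in the old org.apache.hadoop.mapred API (the same API this deck's JobTracker/TaskTracker walkthrough uses); a sketch, not production code:

    import java.io.IOException;
    import java.util.Iterator;
    import java.util.StringTokenizer;
    import org.apache.hadoop.fs.Path;
    import org.apache.hadoop.io.IntWritable;
    import org.apache.hadoop.io.LongWritable;
    import org.apache.hadoop.io.Text;
    import org.apache.hadoop.mapred.*;

    public class WordCount {
        // Map: line of text -> (word, 1) pairs
        public static class Map extends MapReduceBase
                implements Mapper<LongWritable, Text, Text, IntWritable> {
            private static final IntWritable ONE = new IntWritable(1);
            private final Text word = new Text();
            public void map(LongWritable key, Text value,
                            OutputCollector<Text, IntWritable> out, Reporter r)
                    throws IOException {
                StringTokenizer it = new StringTokenizer(value.toString());
                while (it.hasMoreTokens()) {
                    word.set(it.nextToken());
                    out.collect(word, ONE);
                }
            }
        }
        // Reduce: (word, [1, 1, ...]) -> (word, total)
        public static class Reduce extends MapReduceBase
                implements Reducer<Text, IntWritable, Text, IntWritable> {
            public void reduce(Text key, Iterator<IntWritable> values,
                               OutputCollector<Text, IntWritable> out, Reporter r)
                    throws IOException {
                int sum = 0;
                while (values.hasNext()) sum += values.next().get();
                out.collect(key, new IntWritable(sum));
            }
        }
        public static void main(String[] args) throws IOException {
            JobConf conf = new JobConf(WordCount.class);
            conf.setJobName("wordcount");
            conf.setOutputKeyClass(Text.class);
            conf.setOutputValueClass(IntWritable.class);
            conf.setMapperClass(Map.class);
            conf.setReducerClass(Reduce.class);
            FileInputFormat.setInputPaths(conf, new Path(args[0]));
            FileOutputFormat.setOutputPath(conf, new Path(args[1]));
            JobClient.runJob(conf);  // goes through JobClient.submitJob as shown above
        }
    }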
MAPREDUCE USAGE
▪ Log processing
▪ Web search indexing
▪ Ad-hoc queries
CLOSER LOOK
▪ MapReduce Component
▪ JobClient
▪ JobTracker
▪ TaskTracker
▪ Child
▪ Job Creation/Execution Process
MAPREDUCE PROCESS
(ORG.APACHE.HADOOP.MAPRED)
▪ JobClient
▪ Submit job
▪ JobTracker
▪ Manage and schedule job, split job into tasks
▪ TaskTracker
▪ Start and monitor the task execution
▪ Child
▪ The process that actually executes the task
JOB INITIALIZATION ON JOBTRACKER - 1
▪ JobTracker.submitJob(jobID) <-- receives the RPC invocation request
▪ JobInProgress job = new JobInProgress(jobId, this, this.conf)
▪ Add the job into Job Queue
▪ jobs.put(job.getProfile().getJobId(), job);
▪ jobsByPriority.add(job);
▪ jobInitQueue.add(job);
JOB INITIALIZATION ON JOBTRACKER - 2
▪ Sort by priority
▪ resortPriority();
▪ compare the JobPriority first, then compare the JobSubmissionTime
▪ Wake JobInitThread
▪ jobInitQueue.notifyAll();
▪ job = jobInitQueue.remove(0);
▪ job.initTasks();
JOBINPROGRESS - 1
▪ JobInProgress(String jobid, JobTracker jobtracker, JobConf
default_conf);
▪ JobInProgress.initTasks()
▪ DataInputStream splitFile = fs.open(new Path(conf.get("mapred.job.split.file")));
// mapred.job.split.file --> $SYSTEMDIR/$JOBID/job.split
JOBINPROGRESS - 2
▪ splits = JobClient.readSplitFile(splitFile);
▪ numMapTasks = splits.length;
▪ maps[i] = new TaskInProgress(jobId, jobFile, splits[i], jobtracker, conf,
this, i);
▪ reduces[i] = new TaskInProgress(jobId, jobFile, splits[i], jobtracker, conf,
this, i);
▪ JobStatus --> JobStatus.RUNNING
JOBTRACKER TASK SCHEDULING - 1
▪ Task getNewTaskForTaskTracker(String taskTracker)
▪ Compute the maximum tasks that can be running on taskTracker
▪ int maxCurrentMapTasks = tts.getMaxMapTasks();
▪ int maxMapLoad = Math.min(maxCurrentMapTasks,
(int) Math.ceil((double) remainingMapLoad / numTaskTrackers));
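▪ Illustrative numbers (assumed, not from the deck): with remainingMapLoad = 100, numTaskTrackers = 10 and maxCurrentMapTasks = 4, maxMapLoad = min(4, ceil(100 / 10)) = min(4, 10) = 4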
JOBTRACKER TASK SCHEDULING - 2
▪ int numMaps = tts.countMapTasks(); // running tasks number
▪ If numMaps < maxMapLoad, more tasks can be allocated: based on
priority, pick the first job from the jobsByPriority queue, create a task,
and return it to the TaskTracker
▪ Task t = job.obtainNewMapTask(tts, numTaskTrackers);
START TASKTRACKER - 1
▪ initialize()
▪ Remove original local directory
▪ RPC initialization
▪ TaskReportServer = RPC.getServer(this, bindAddress, tmpPort, max, false, this, fConf);
▪ InterTrackerProtocol jobClient = (InterTrackerProtocol)
RPC.waitForProxy(InterTrackerProtocol.class, InterTrackerProtocol.versionID,
jobTrackAddr, this.fConf);
START TASKTRACKER - 2
▪ run();
▪ offerService();
▪ TaskTracker talks to JobTracker with HeartBeat message periodically
▪ HeartbeatResponse heartbeatResponse = transmitHeartBeat();
RUN TASK ON TASKTRACKER - 1
▪ TaskTracker.localizeJob(TaskInProgress tip);
▪ launchTasksForJob(tip, new JobConf(rjob.jobFile));
▪ tip.launchTask(); // TaskTracker.TaskInProgress
▪ tip.localizeTask(task); // create folder, symbolic link
▪ runner = task.createRunner(TaskTracker.this);
▪ runner.start(); // start TaskRunner thread
RUN TASK ON TASKTRACKER - 2
▪ TaskRunner.run();
▪ Configure the child process’ JVM parameters, e.g. classpath, taskid, taskReportServer’s
address & port
▪ Start Child Process
▪ runChild(wrappedCommand, workDir, taskid);
CHILD.MAIN()
▪ Create RPC Proxy, and execute RPC invocation
▪ TaskUmbilicalProtocol umbilical = (TaskUmbilicalProtocol)
RPC.getProxy(TaskUmbilicalProtocol.class, TaskUmbilicalProtocol.versionID,
address, defaultConf);
▪ Task task = umbilical.getTask(taskid);
▪ task.run(); // mapTask / reduceTask.run
FINISH JOB - 1
▪ Child
▪ task.done(umbilical);
▪ RPC call: umbilical.done(taskId, shouldBePromoted)
▪ TaskTracker
▪ done(taskId, shouldPromote)
▪ TaskInProgress tip = tasks.get(taskid);
▪ tip.reportDone(shouldPromote);
▪ taskStatus.setRunState(TaskStatus.State.SUCCEEDED)
FINISH JOB - 2
▪ JobTracker
▪ TaskStatus report: status.getTaskReports();
▪ TaskInProgress tip = taskidToTIPMap.get(taskId);
▪ JobInProgress update JobStatus
▪ tip.getJob().updateTaskStatus(tip, report, myMetrics);
▪ One task of current job is finished
▪ completedTask(tip, taskStatus, metrics);
▪ if (this.status.getRunState() == JobStatus.RUNNING && allDone)
{ this.status.setRunState(JobStatus.SUCCEEDED); }
RESULT
▪ Word Count
▪ hadoop jar hadoop-0.20.2-examples.jar wordcount <input dir> <output dir>
▪ Hive
▪ hive -f pagerank.hive
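▪ For reference, the word count output lands in <output dir> as part files (e.g. part-00000), one tab-separated word/count pair per line; inspect it with:
▪ hadoop fs -cat <output dir>/part-00000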
THANK YOU
Contact : Knowledgebee@beenovo.com