Hadoop Programming
Overview
• MapReduce Types
• Input Formats
• Output Formats
• Serialization
• Job
• http://hadoop.apache.org/docs/r2.2.0/api/org/apache/hadoop/mapreduce/package-summary.html
Mapper<KEYIN,VALUEIN,KEYOUT,VALUEOUT>
• Maps input key/value pairs to a set of intermediate key/value pairs.
• Maps are the individual tasks which transform input records into intermediate
records. The transformed intermediate records need not be of the same type as the
input records. A given input pair may map to zero or many output pairs.
• The Hadoop Map-Reduce framework spawns one map task for each InputSplit
generated by the InputFormat for the job.
• The framework first calls setup(org.apache.hadoop.mapreduce.Mapper.Context),
followed by map(Object, Object, Context) for each key/value pair in the InputSplit.
Finally cleanup(Context) is called.
http://hadoop.apache.org/docs/r2.2.0/api/org/apache/hadoop/mapreduce/Mapper.html
public static class TokenizerMapper
    extends Mapper<Object, Text, Text, IntWritable> {
  private final static IntWritable one = new IntWritable(1);
  private Text word = new Text();
  // Emit (token, 1) for every whitespace-separated token in the line.
  public void map(Object key, Text value, Context context
                  ) throws IOException, InterruptedException {
    StringTokenizer itr = new StringTokenizer(value.toString());
    while (itr.hasMoreTokens()) {
      word.set(itr.nextToken());
      context.write(word, one);
    }
  }
}
What is Writable?
• Hadoop defines its own “box” classes for
strings (Text), integers (IntWritable), etc.
• All values are instances of Writable
• All keys are instances of WritableComparable
Writable
• A serializable object which implements a simple,
efficient, serialization protocol, based on DataInput
and DataOutput.
• Any key or value type in the Hadoop Map-Reduce
framework implements this interface.
• Implementations typically implement a static
read(DataInput) method which constructs a new
instance, calls readFields(DataInput) and returns the
instance.
• http://hadoop.apache.org/docs/r2.2.0/api/org/apache/hadoop/io/Writable.html
public class MyWritable implements Writable {
// Some data
private int counter;
private long timestamp;
public void write(DataOutput out) throws IOException {
out.writeInt(counter);
out.writeLong(timestamp);
}
public void readFields(DataInput in) throws IOException {
counter = in.readInt();
timestamp = in.readLong();
}
public static MyWritable read(DataInput in) throws IOException {
MyWritable w = new MyWritable();
w.readFields(in);
return w;
}
}
public class MyWritableComparable implements WritableComparable<MyWritableComparable> {
// Some data
private int counter;
private long timestamp;
public void write(DataOutput out) throws IOException {
out.writeInt(counter);
out.writeLong(timestamp);
}
public void readFields(DataInput in) throws IOException {
counter = in.readInt();
timestamp = in.readLong();
}
public int compareTo(MyWritableComparable w) {
int thisValue = this.counter;
int thatValue = w.counter;
return (thisValue < thatValue ? -1 : (thisValue==thatValue ? 0 : 1));
}
}
Getting Data To The Mapper
[Slide diagram] The InputFormat divides each input file into InputSplits; a
RecordReader per split extracts (k, v) records and feeds them to one Mapper,
which emits intermediate pairs:
input files → InputFormat → InputSplits → RecordReaders → Mappers → (intermediates)
public static void main(String[] args) throws Exception {
Configuration conf = new Configuration();
String[] otherArgs = new GenericOptionsParser(conf, args).getRemainingArgs();
if (otherArgs.length != 2) {
System.err.println("Usage: wordcount <in> <out>");
System.exit(2);
}
Job job = new Job(conf, "word count");
job.setJarByClass(WordCount.class);
job.setMapperClass(TokenizerMapper.class);
job.setCombinerClass(IntSumReducer.class);
job.setReducerClass(IntSumReducer.class);
job.setOutputKeyClass(Text.class);
job.setOutputValueClass(IntWritable.class);
FileInputFormat.addInputPath(job, new Path(otherArgs[0]));
FileOutputFormat.setOutputPath(job, new Path(otherArgs[1]));
System.exit(job.waitForCompletion(true) ? 0 : 1);
}
Reading Data
• Data sets are specified by InputFormats
– Defines input data (e.g., a directory)
– Identifies partitions of the data that form an
InputSplit
– Factory for RecordReader objects to extract (k, v)
records from the input source
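To make this concrete, a driver wires the InputFormat and input path onto the
Job. A minimal sketch (TextInputFormat is already the default and is set
explicitly here only to show the call; the path and job name are hypothetical):
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.mapreduce.Job;
import org.apache.hadoop.mapreduce.lib.input.FileInputFormat;
import org.apache.hadoop.mapreduce.lib.input.TextInputFormat;
public class InputFormatDemo {
  public static void main(String[] args) throws Exception {
    Configuration conf = new Configuration();
    Job job = Job.getInstance(conf, "input format demo");
    // Choose the InputFormat; TextInputFormat is the default anyway.
    job.setInputFormatClass(TextInputFormat.class);
    // Register the input directory (hypothetical path).
    FileInputFormat.addInputPath(job, new Path("/data/in"));
  }
}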
Input Format
• InputFormat describes the input-specification for a Map-
Reduce job
• The Map-Reduce framework relies on the InputFormat of the
job to:
– Validate the input-specification of the job.
– Split-up the input file(s) into logical InputSplits, each of which is then
assigned to an individual Mapper.
– Provide the RecordReader implementation to be used to glean input
records from the logical InputSplit for processing by the Mapper.
http://hadoop.apache.org/docs/r2.2.0/api/org/apache/hadoop/mapreduce/InputFormat.html
FileInputFormat and Friends
• TextInputFormat
– Treats each ‘\n’-terminated line of a file as a value
• KeyValueTextInputFormat
– Maps ‘\n’-terminated text lines of “k SEP v”
• SequenceFileInputFormat
– Binary file of (k, v) pairs (passing data between the output
of one MapReduce job to the input of some other
MapReduce job)
• SequenceFileAsTextInputFormat
– Same, but maps (k.toString(), v.toString())
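As one concrete knob: KeyValueTextInputFormat splits each line at the first
separator byte, tab by default, and the separator is configurable. A sketch,
assuming a comma-separated input file (the job name is hypothetical):
// Sketch: make KeyValueTextInputFormat split lines at the first ','.
// The property name below is the Hadoop 2.x one.
Configuration conf = new Configuration();
conf.set("mapreduce.input.keyvaluelinerecordreader.key.value.separator", ",");
Job job = Job.getInstance(conf, "kv demo");
job.setInputFormatClass(KeyValueTextInputFormat.class);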
Filtering File Inputs
• FileInputFormat will read all files out of a
specified directory and send them to the
mapper
• Delegates filtering this file list to a method
subclasses may override
– e.g., Create your own “xyzFileInputFormat” to
read *.xyz from directory list
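A sketch of that idea against the mapreduce API: a PathFilter that keeps only
*.xyz files (a hypothetical extension), registered on the job through
FileInputFormat.setInputPathFilter():
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.fs.PathFilter;
// Sketch: accept only files whose names end in ".xyz".
public class XyzPathFilter implements PathFilter {
  @Override
  public boolean accept(Path path) {
    return path.getName().endsWith(".xyz");
  }
}
// In the driver:
// FileInputFormat.setInputPathFilter(job, XyzPathFilter.class);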
Record Readers
• Each InputFormat provides its own
RecordReader implementation
– Provides (unused?) capability multiplexing
• LineRecordReader
– Reads a line from a text file
• KeyValueLineRecordReader
– Used by KeyValueTextInputFormat
Input Split Size
• FileInputFormat will divide large files into
chunks
– Exact size controlled by mapred.min.split.size
(mapreduce.input.fileinputformat.split.minsize in newer releases)
• RecordReaders receive file, offset, and length
of chunk
• Custom InputFormat implementations may
override split size
– e.g., “NeverChunkFile”
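The “NeverChunkFile” idea takes one override; with the mapreduce API the
method to override is isSplitable(). A minimal sketch:
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.mapreduce.JobContext;
import org.apache.hadoop.mapreduce.lib.input.TextInputFormat;
// Sketch: a TextInputFormat whose files are never split, so each
// file is processed by exactly one map task regardless of size.
public class NeverChunkFileInputFormat extends TextInputFormat {
  @Override
  protected boolean isSplitable(JobContext context, Path file) {
    return false;
  }
}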
// Note: this example (and the RecordReader below) uses the older
// org.apache.hadoop.mapred API, unlike the rest of the deck.
public class ObjectPositionInputFormat extends
    FileInputFormat<Text, Point3D> {
  public RecordReader<Text, Point3D> getRecordReader(
      InputSplit input, JobConf job, Reporter reporter)
      throws IOException {
    reporter.setStatus(input.toString());
    return new ObjPosRecordReader(job, (FileSplit) input);
  }
  // getSplits(JobConf job, int numSplits) is inherited from FileInputFormat.
}
class ObjPosRecordReader implements RecordReader<Text, Point3D> {
  public ObjPosRecordReader(JobConf job, FileSplit split) throws IOException {
    // open the file underlying this split
  }
  public boolean next(Text key, Point3D value) throws IOException {
    // read the next line, populate key and value;
    // return false once the split is exhausted
    return false;
  }
  public Text createKey() {
    return new Text();
  }
  public Point3D createValue() {
    return new Point3D();
  }
  public long getPos() throws IOException {
    return 0; // byte offset consumed so far
  }
  public void close() throws IOException {
    // release the underlying stream
  }
  public float getProgress() throws IOException {
    return 0.0f; // fraction of the split processed
  }
}
Sending Data To Reducers
• The map function emits through its Mapper.Context object
– context.write() takes (k, v) elements
• Any (WritableComparable, Writable) pair can be
used
WritableComparator
• Compares WritableComparable data
– Will call WritableComparable.compareTo()
– Can provide fast path for serialized data
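The fast path means overriding the byte-level compare() so keys sort without
being deserialized. A sketch for MyWritableComparable above, whose counter
field is serialized first as a 4-byte int:
import org.apache.hadoop.io.WritableComparator;
// Sketch: order keys by reading the leading int straight out of the
// serialized bytes; no MyWritableComparable objects are created.
// (Typically nested inside MyWritableComparable itself.)
public static class Comparator extends WritableComparator {
  public Comparator() {
    super(MyWritableComparable.class);
  }
  @Override
  public int compare(byte[] b1, int s1, int l1, byte[] b2, int s2, int l2) {
    int thisCounter = readInt(b1, s1);
    int thatCounter = readInt(b2, s2);
    return Integer.compare(thisCounter, thatCounter);
  }
}
static { // register it as the default comparator for the key class
  WritableComparator.define(MyWritableComparable.class, new Comparator());
}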
Partition And Shuffle
[Slide diagram] Each Mapper's intermediate (k, v) pairs pass through a
Partitioner; shuffling then delivers every partition to its assigned Reducer:
Mappers → (intermediates) → Partitioners → shuffling → Reducers
Partitioner
• int getPartition(key, val, numPartitions)
– Outputs the partition number for a given key
– One partition == values sent to one Reduce task
• HashPartitioner used by default
– Uses key.hashCode() to return partition num
• Job sets Partitioner implementation
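For reference, the default HashPartitioner amounts to one line; the bit-mask
keeps the result non-negative:
// Essence of HashPartitioner:
public int getPartition(K key, V value, int numReduceTasks) {
  return (key.hashCode() & Integer.MAX_VALUE) % numReduceTasks;
}
A hand-written alternative looks like this: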
// Uses the new (mapreduce) Partitioner API to match the
// job.setPartitionerClass() call below.
public class MyPartitioner extends Partitioner<IntWritable, Text> {
  @Override
  public int getPartition(IntWritable key, Text value, int numPartitions) {
    /* Hard-coded partitioning function, shown only for the sake of
       understanding -- don't do this in practice. */
    int nbOccurences = key.get();
    if (nbOccurences < 3)
      return 0;
    else
      return 1;
  }
}
job.setPartitionerClass(MyPartitioner.class);
Reducer<KEYIN,VALUEIN,KEYOUT,VALUEOUT>
• Reduces a set of intermediate values which
share a key to a smaller set of values.
• Reducer has 3 primary phases:
– Shuffle
– Sort
– Reduce
• http://hadoop.apache.org/docs/r2.2.0/api/org/apache/hadoop/mapreduce/Reducer.html
public static class IntSumReducer
    extends Reducer<Text, IntWritable, Text, IntWritable> {
  private IntWritable result = new IntWritable();
  // Sum all counts for a key and emit (key, total).
  public void reduce(Text key, Iterable<IntWritable> values,
                     Context context
                     ) throws IOException, InterruptedException {
    int sum = 0;
    for (IntWritable val : values) {
      sum += val.get();
    }
    result.set(sum);
    context.write(key, result);
  }
}
Finally: Writing The Output
[Slide diagram] Each Reducer writes through a RecordWriter, supplied by the
OutputFormat, into its own output file:
Reducers → RecordWriters → output files (one per Reducer), under the OutputFormat
OutputFormat
• Analogous to InputFormat
• TextOutputFormat
– Writes “key val\n” strings to output file
• SequenceFileOutputFormat
– Uses a binary format to pack (k, v) pairs
• NullOutputFormat
– Discards output
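As with input, the driver selects the OutputFormat on the Job. A minimal
sketch that switches the output to a binary SequenceFile (the output path is
hypothetical):
// Sketch: emit (k, v) pairs as a SequenceFile instead of plain text.
job.setOutputFormatClass(SequenceFileOutputFormat.class);
SequenceFileOutputFormat.setOutputPath(job, new Path("/data/out"));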
public static void main(String[] args) throws Exception {
Configuration conf = new Configuration();
String[] otherArgs = new GenericOptionsParser(conf, args).getRemainingArgs();
if (otherArgs.length != 2) {
System.err.println("Usage: wordcount <in> <out>");
System.exit(2);
}
Job job = new Job(conf, "word count");
job.setJarByClass(WordCount.class);
job.setMapperClass(TokenizerMapper.class);
job.setCombinerClass(IntSumReducer.class);
job.setReducerClass(IntSumReducer.class);
job.setOutputKeyClass(Text.class);
job.setOutputValueClass(IntWritable.class);
FileInputFormat.addInputPath(job, new Path(otherArgs[0]));
FileOutputFormat.setOutputPath(job, new Path(otherArgs[1]));
System.exit(job.waitForCompletion(true) ? 0 : 1);
}
Job
• The job submitter's view of the Job.
• It allows the user to configure the job, submit it,
control its execution, and query the state. The set
methods only work until the job is submitted,
afterwards they will throw an IllegalStateException.
• Normally the user creates the application, describes
various facets of the job via Job, then submits the
job and monitors its progress.
http://hadoop.apache.org/docs/r2.2.0/api/org/apache/hadoop/mapreduce/Job.html
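Beyond waitForCompletion(), Job also supports asynchronous submission. A
sketch that submits and then polls progress using Job's public methods:
// Sketch: non-blocking submission plus a simple progress loop.
job.submit();               // returns as soon as the job is handed off
while (!job.isComplete()) { // poll until the job finishes
  System.out.printf("map %.0f%%  reduce %.0f%%%n",
      job.mapProgress() * 100, job.reduceProgress() * 100);
  Thread.sleep(5000);
}
System.out.println(job.isSuccessful() ? "completed" : "failed");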