Map Reduce Types :
Hadoop uses the MapReduce programming model to process data, with the input and output of both the map and reduce functions represented as key-value pairs.
Both functions execute in parallel over datasets spread across a wide array of machines in a distributed architecture.
The programming paradigm is essentially functional in nature, combining the techniques of map and reduce.
Mapping is the core technique of processing a list of data elements that come in pairs of keys and values.
The map function applies to individual elements defined as key-value pairs of a list and produces a new list.
The general form of Hadoop's map and reduce functions can be illustrated as follows:
map: (K1, V1) → list (K2, V2)
reduce: (K2, list(V2)) → list (K3, V3)
The input key and value types, K1 and V1, may differ from the output pair types, K2 and V2.
The reduce function accepts the same format that the map outputs, but the output types of the reduce operation, K3 and V3, may again be different.
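For instance, in the classic word count job, K1 is the byte offset of a line within the input file (a LongWritable) and V1 is the line itself (a Text), while the words and their counts flow through as (Text, IntWritable) pairs:
map: (LongWritable, Text) → list (Text, IntWritable)
reduce: (Text, list(IntWritable)) → list (Text, IntWritable)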
The Java API for this is as follows:
public interface Mapper<K1, V1, K2, V2> extends JobConfigurable, Closeable {
  void map(K1 key, V1 value, OutputCollector<K2, V2> output, Reporter reporter)
      throws IOException;
}
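As a concrete illustration, here is a minimal word count mapper sketched against this old-style (org.apache.hadoop.mapred) API; the class name WordCountMapper and the tokenizing logic are illustrative:

import java.io.IOException;
import java.util.StringTokenizer;
import org.apache.hadoop.io.IntWritable;
import org.apache.hadoop.io.LongWritable;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapred.MapReduceBase;
import org.apache.hadoop.mapred.Mapper;
import org.apache.hadoop.mapred.OutputCollector;
import org.apache.hadoop.mapred.Reporter;

// Illustrative word count mapper: K1 = LongWritable, V1 = Text,
// K2 = Text, V2 = IntWritable.
public class WordCountMapper extends MapReduceBase
    implements Mapper<LongWritable, Text, Text, IntWritable> {

  private static final IntWritable ONE = new IntWritable(1);
  private final Text word = new Text();

  public void map(LongWritable key, Text value,
                  OutputCollector<Text, IntWritable> output, Reporter reporter)
      throws IOException {
    // Split the input line into words and emit (word, 1) for each.
    StringTokenizer tokens = new StringTokenizer(value.toString());
    while (tokens.hasMoreTokens()) {
      word.set(tokens.nextToken());
      output.collect(word, ONE);
    }
  }
}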
The OutputCollector is the generalized interface of the Map-Reduce framework to facilitate the collection of data output either by the Mapper or the Reducer.
These outputs are nothing but the intermediate output of the job.
Therefore, they must be parameterized with their types.
The reporter facilitates the Map-Reduce application to report progress and update counters and status information.
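For instance, a matching word count reducer (again a sketch; the counter group and name are hypothetical) sums the values for each key, emits the total through the OutputCollector, and updates a counter through the Reporter:

import java.io.IOException;
import java.util.Iterator;
import org.apache.hadoop.io.IntWritable;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapred.MapReduceBase;
import org.apache.hadoop.mapred.OutputCollector;
import org.apache.hadoop.mapred.Reducer;
import org.apache.hadoop.mapred.Reporter;

// Illustrative word count reducer: K2 = Text, V2 = IntWritable,
// K3 = Text, V3 = IntWritable.
public class WordCountReducer extends MapReduceBase
    implements Reducer<Text, IntWritable, Text, IntWritable> {

  public void reduce(Text key, Iterator<IntWritable> values,
                     OutputCollector<Text, IntWritable> output, Reporter reporter)
      throws IOException {
    int sum = 0;
    while (values.hasNext()) {
      sum += values.next().get();
    }
    // Update a custom counter (hypothetical group/name) and emit the total.
    reporter.incrCounter("WordCount", "UniqueWords", 1);
    output.collect(key, new IntWritable(sum));
  }
}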
If the combine function is used, it has the same form as the reduce function and the output is fed to the reduce function.
This may be illustrated as follows:
map: (K1, V1) → list (K2, V2)
combine: (K2, list(V2)) → list (K2, V2)
reduce: (K2, list(V2)) → list (K3, V3)
Note that the combine and reduce functions use the same form, except in the variable names, where K3 is K2 and V3 is V2.
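Wiring in the combiner is a single call on the JobConf; a minimal sketch, assuming the WordCountMapper and WordCountReducer shown above, whose matching types let the reducer double as the combiner:

import org.apache.hadoop.io.IntWritable;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapred.JobConf;

// Illustrative combiner wiring on the old mapred API. The reducer serves
// as the combiner because its input and output pair types are both
// (Text, IntWritable), i.e. K3 = K2 and V3 = V2.
public class CombinerConfig {
  public static JobConf configure() {
    JobConf conf = new JobConf(CombinerConfig.class);
    conf.setJobName("wordcount-with-combiner");
    conf.setOutputKeyClass(Text.class);            // K3
    conf.setOutputValueClass(IntWritable.class);   // V3
    conf.setMapperClass(WordCountMapper.class);
    conf.setCombinerClass(WordCountReducer.class); // combine: (K2, list(V2)) → list(K2, V2)
    conf.setReducerClass(WordCountReducer.class);
    return conf;
  }
}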
The partition function operates on the intermediate key-value types.
It controls the partitioning of the keys of the intermediate map outputs.
The partition is typically derived from the key using a hash function.
The total number of partitions is the same as the number of reduce tasks for the job.
The partition is determined only by the key, ignoring the value.
public interface Partitioner<K2, V2> extends JobConfigurable {
  int getPartition(K2 key, V2 value, int numPartitions);
}
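A minimal implementation sketch, assuming the word count types from above; it derives the partition from a hash of the key and ignores the value, which is also how Hadoop's default HashPartitioner behaves:

import org.apache.hadoop.io.IntWritable;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapred.JobConf;
import org.apache.hadoop.mapred.Partitioner;

// Illustrative partitioner: the partition depends only on the key.
public class WordPartitioner implements Partitioner<Text, IntWritable> {

  public void configure(JobConf job) {
    // No per-job configuration needed for this sketch.
  }

  public int getPartition(Text key, IntWritable value, int numPartitions) {
    // Mask off the sign bit so the result is always non-negative.
    return (key.hashCode() & Integer.MAX_VALUE) % numPartitions;
  }
}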
Input Format :
Hadoop has to accept and process a variety of formats, from text files to databases.
A chunk of input, called an input split, is processed by a single map. Each split is further divided into logical records that are given to the map to process as key-value pairs.
In the context of a database, a split means reading a range of tuples from an SQL table, as done by DBInputFormat, which produces LongWritables containing record numbers as keys and DBWritables as values.
The Java API for input splits is as follows:
public interface InputSplit extends Writable {
  long getLength() throws IOException;
  String[] getLocations() throws IOException;
}
The InputSplit represents the data to be processed by a Mapper.
It exposes its length in bytes and the storage locations of the data; the split holds a reference to the input data, not the data itself.
It is the responsibility of the InputFormat to create the input splits and divide them into records.
public interface InputFormat<K, V> {
  InputSplit[] getSplits(JobConf job, int numSplits) throws IOException;
  RecordReader<K, V> getRecordReader(InputSplit split, JobConf job,
                                     Reporter reporter) throws IOException;
}
The JobClient invokes the getSplits() method, passing the desired number of splits as an argument. Once the splits are calculated, they are sent to the jobtracker, which uses their storage locations to schedule map tasks on the tasktrackers. Each tasktracker then processes its split by invoking the getRecordReader() method on the InputFormat to obtain a RecordReader for the split. FileInputFormat is the base class for file data sources.
It is responsible for identifying the files to be included as job input and for defining how the splits are generated.
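A sketch of pointing a job at its input files with FileInputFormat (the paths here are hypothetical):

import org.apache.hadoop.fs.Path;
import org.apache.hadoop.mapred.FileInputFormat;
import org.apache.hadoop.mapred.JobConf;

// Illustrative input selection: FileInputFormat collects every file under
// the given paths as job input and later generates the splits from them.
public class InputPathConfig {
  public static void addInputs(JobConf conf) {
    FileInputFormat.setInputPaths(conf, new Path("/data/logs"));  // hypothetical path
    FileInputFormat.addInputPath(conf, new Path("/data/extra"));  // hypothetical path
    // Raising the minimum split size (in bytes) yields fewer, larger splits
    // and therefore fewer map tasks.
    conf.setLong("mapred.min.split.size", 64 * 1024 * 1024L);
  }
}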
Hadoop also handles unstructured data, which often comes in textual format; TextInputFormat is the default InputFormat for such data.
SequenceFileInputFormat handles binary input, reading sequence files that store sequences of binary key-value pairs. DBInputFormat provides the capability to read data from a relational database using JDBC.
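Selecting among these formats is a single JobConf call; a minimal sketch (TextInputFormat is already the default, so setting it explicitly is shown only for symmetry):

import org.apache.hadoop.mapred.JobConf;
import org.apache.hadoop.mapred.SequenceFileInputFormat;
import org.apache.hadoop.mapred.TextInputFormat;

// Illustrative input format selection on the old mapred API.
public class InputFormatConfig {
  public static void chooseFormat(JobConf conf, boolean binaryInput) {
    if (binaryInput) {
      // Sequence files: binary key-value pairs.
      conf.setInputFormat(SequenceFileInputFormat.class);
    } else {
      // Plain text: keys are byte offsets (LongWritable), values are lines (Text).
      conf.setInputFormat(TextInputFormat.class);
    }
  }
}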
Output Format :
The output format classes are similar to their corresponding input format classes and work in the reverse direction.
For example, TextOutputFormat is the default OutputFormat, writing records as lines of text with the key and value separated by a tab character, mirroring TextInputFormat on the input side. Likewise, SequenceFileOutputFormat writes sequences of binary key-value pairs, and DBOutputFormat writes job output to a relational database via JDBC.
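A minimal configuration sketch (the output path is hypothetical):

import org.apache.hadoop.fs.Path;
import org.apache.hadoop.io.IntWritable;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapred.FileOutputFormat;
import org.apache.hadoop.mapred.JobConf;
import org.apache.hadoop.mapred.TextOutputFormat;

// Illustrative output configuration: TextOutputFormat (the default) writes
// each record as key<TAB>value on its own line.
public class OutputFormatConfig {
  public static void configureOutput(JobConf conf) {
    conf.setOutputFormat(TextOutputFormat.class);
    conf.setOutputKeyClass(Text.class);
    conf.setOutputValueClass(IntWritable.class);
    FileOutputFormat.setOutputPath(conf, new Path("/results/wordcount")); // hypothetical path
  }
}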