This document provides an overview of Map & Reduce, a programming model for processing large datasets in parallel. It describes how Map & Reduce works: a map function is applied to each input element to generate intermediate key-value pairs, the pairs are shuffled and sorted by key, and a reduce function then aggregates the values associated with each key. As an example, it walks through how the "word count" problem can be solved with Map & Reduce. Finally, it briefly discusses Google's implementation of MapReduce and the Apache Hadoop framework.
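
The map → shuffle → reduce pipeline for word count can be sketched in plain Python. This is an illustrative single-process sketch, not Google's MapReduce or the Hadoop API; all function names here are hypothetical:

```python
from collections import defaultdict

def map_words(document):
    # Map: emit an intermediate (word, 1) pair for each word.
    for word in document.split():
        yield (word.lower(), 1)

def shuffle(pairs):
    # Shuffle/sort: group intermediate values by key.
    grouped = defaultdict(list)
    for key, value in pairs:
        grouped[key].append(value)
    return grouped

def reduce_counts(key, values):
    # Reduce: aggregate all values seen for one key.
    return key, sum(values)

documents = ["the quick brown fox", "the lazy dog saw the fox"]
pairs = [pair for doc in documents for pair in map_words(doc)]
counts = dict(reduce_counts(k, v) for k, v in shuffle(pairs).items())
# counts["the"] == 3, counts["fox"] == 2
```

In a real framework the map and reduce calls run on many machines, and the shuffle step moves each key's pairs over the network to the worker responsible for that key.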