Learn By Example: Hadoop, MapReduce for Big Data problems
4.6 (187 ratings)
1,906 students enrolled
A hands-on workout in Hadoop, MapReduce and the art of thinking "parallel"
Created by Loony Corn
Last updated 2/2017
Current price: $10 Original price: $50 Discount: 80% off
30-Day Money-Back Guarantee
  • 13.5 hours on-demand video
  • 1 Article
  • 111 Supplemental Resources
  • Full lifetime access
  • Access on mobile and TV
  • Certificate of Completion
What Will I Learn?
  • Develop advanced MapReduce applications to process Big Data
  • Master the art of "thinking parallel" - how to break up a task into Map/Reduce transformations
  • Self-sufficiently set up your own mini-Hadoop cluster, whether it's a single node, a physical cluster, or in the cloud
  • Use Hadoop + MapReduce to solve a wide variety of problems: from NLP to Inverted Indices to Recommendations
  • Understand HDFS, MapReduce and YARN and how they interact with each other
  • Understand the basics of performance tuning and managing your own cluster
Requirements
  • You'll need an IDE where you can write Java code or open the source code that's shared. IntelliJ and Eclipse are both great options.
  • You'll need some background in Object-Oriented Programming, preferably in Java. All the source code is in Java, and we dive right in without going into Objects, Classes, etc.
  • A bit of exposure to Linux/Unix shells would be helpful, but it won't be a blocker

Taught by a 4-person team including 2 Stanford-educated ex-Googlers and 2 ex-Flipkart Lead Analysts. This team has decades of practical experience working with Java and with billions of rows of data.

This course is a zoom-in, zoom-out, hands-on workout involving Hadoop, MapReduce and the art of thinking parallel. 

Let’s parse that.

Zoom-in, Zoom-Out: This course is both broad and deep. It covers the individual components of Hadoop in great detail, and also gives you a higher-level picture of how they interact with each other.

Hands-on workout involving Hadoop, MapReduce: This course will get you hands-on with Hadoop very early on. You'll learn how to set up your own cluster using both VMs and the Cloud. All the major features of MapReduce are covered, including advanced topics like Total Sort and Secondary Sort.

The art of thinking parallel: MapReduce completely changed the way people thought about processing Big Data. Breaking down any problem into parallelizable units is an art. The examples in this course will train you to "think parallel". 

What's Covered:

Lots of cool stuff:

  • Using MapReduce to 

    • Recommend friends in a Social Networking site: Generate Top 10 friend recommendations using a Collaborative filtering algorithm. 
    • Build an Inverted Index for Search Engines: Use MapReduce to parallelize the humongous task of building an inverted index for a search engine. 
    • Generate Bigrams from text: Generate bigrams and compute their frequency distribution in a corpus of text. 

  • Build your Hadoop cluster: 

    • Install Hadoop in Standalone, Pseudo-Distributed and Fully Distributed modes 
    • Set up a Hadoop cluster using Linux VMs.
    • Set up a cloud-based Hadoop cluster on AWS with Cloudera Manager.
    • Understand HDFS, MapReduce and YARN and their interaction 

  • Customize your MapReduce Jobs: 

    • Chain multiple MR jobs together
    • Write your own Customized Partitioner
    • Total Sort: globally sort a large amount of data by sampling input files
    • Secondary Sort
    • Unit tests with MRUnit
    • Integrate with Python using the Hadoop Streaming API

…and of course, all the basics:

  • MapReduce : Mapper, Reducer, Sort/Merge, Partitioning, Shuffle and Sort
  • HDFS & YARN: Namenode, Datanode, Resource manager, Node manager, the anatomy of a MapReduce application, YARN Scheduling, Configuring HDFS and YARN to performance tune your cluster. 

Using discussion forums

Please use the discussion forums on this course to engage with other students and to help each other out. Unfortunately, much as we would like to, it is not possible for us at Loonycorn to respond to individual questions from students :-(

We're super small and self-funded with only 2-3 people developing technical video content. Our mission is to make high-quality courses available at super low prices.

The only way to keep our prices this low is to *NOT offer additional technical support over email or in-person*. The truth is, direct support is hugely expensive and just does not scale.

We understand that this is not ideal and that a lot of students might benefit from this additional support. Hiring resources for additional support would make our offering much more expensive, thus defeating our original purpose.

It is a hard trade-off.

Thank you for your patience and understanding!

Who is the target audience?
  • Yep! Analysts who want to leverage the power of HDFS where traditional databases don't cut it anymore
  • Yep! Engineers who want to develop complex distributed computing applications to process lots of data
  • Yep! Data Scientists who want to add MapReduce to their bag of tricks for processing data
Curriculum For This Course
73 Lectures, 13:42:43 total
1 Lecture 01:52

We start off with an introduction on what this course is all about.

Preview 01:52
Why is Big Data a Big Deal
6 Lectures 57:01

Big data may be a clichéd term, but what does it really mean? Where does this data come from, and why is it big?

Preview 14:20

Distributed computing makes processing very fast - but why? Let's take a simple example and see why distributed computing is so powerful.

Preview 08:37

What exactly is Hadoop? Its origins and its logical components explained.

What is Hadoop?

HDFS, based on GFS (the Google File System), is the storage layer within Hadoop. It stores files in blocks of 128 MB.

HDFS or the Hadoop Distributed File System

MapReduce is the framework which allows developers to write massively parallel programs without worrying about the underlying details of distributed computing. The developer simply implements the map() and reduce() functions in order to crunch large input sets of data.

MapReduce Introduced

YARN is responsible for managing resources in the Hadoop cluster. It was introduced in Hadoop 2.0.

YARN or Yet Another Resource Negotiator
Installing Hadoop in a Local Environment
3 Lectures 36:02

Hadoop has 3 different install modes - Standalone, Pseudo-distributed and Fully Distributed. Get an overview of when to use each.

Preview 08:32

How to set up Hadoop in the standalone mode. Windows users need to install a Virtual Linux instance before this video. 

Hadoop Standalone mode Install

Set up Hadoop in the Pseudo-Distributed mode. All Hadoop services will be up and running! 

Hadoop Pseudo-Distributed mode Install
The MapReduce "Hello World"
7 Lectures 01:08:43

In the world of MapReduce, every problem can be thought of in terms of key-value pairs. Map transforms key-value pairs in a meaningful way, the framework sorts and merges them, and Reduce combines key-value pairs in a meaningful way.

Preview 08:49

If you're learning MapReduce for the very first time - it's best to visualize what exactly it does before you get down into the little details.

MapReduce - Visualized And Explained

What really goes on with a single record as it flows through the map and then reduce phase?

MapReduce - Digging a little deeper at every step

Counting the number of times a word occurs in input text is the Hello World of MapReduce. This was the very first example given in Jeff Dean and Sanjay Ghemawat's original paper on MapReduce.

"Hello World" in MapReduce

Nothing is real unless it is in code. Setting up our very first Mapper.

The Mapper

Nothing is real unless it is in code. Setting up our very first Reducer.

The Reducer

Nothing is real unless it is in code. Setting up our very first MapReduce Job.

The Job
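The course builds the Mapper, Reducer, and Job in Java against the Hadoop API. As a rough plain-Python sketch of how the framework wires those three pieces together for word count (function names are ours; no Hadoop involved):

```python
# A plain-Python analogue of the word count MapReduce flow: map, then the
# framework's sort/merge, then reduce. Illustrative only - the real thing
# runs distributed across a cluster via Hadoop's Java API.
from itertools import groupby
from operator import itemgetter

def mapper(line):
    # Map: emit (word, 1) for every word in the input line.
    for word in line.lower().split():
        yield (word, 1)

def reducer(word, counts):
    # Reduce: sum the 1s emitted for each word.
    return (word, sum(counts))

def run_job(lines):
    # Emulate the framework: run all mappers, sort/merge by key,
    # then hand each key group to the reducer.
    mapped = [pair for line in lines for pair in mapper(line)]
    mapped.sort(key=itemgetter(0))        # Hadoop's sort/merge step
    return dict(reducer(k, (v for _, v in g))
                for k, g in groupby(mapped, key=itemgetter(0)))

counts = run_job(["hello world", "hello hadoop"])
# counts == {"hadoop": 1, "hello": 2, "world": 1}
```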
Run a MapReduce Job
2 Lectures 25:28

Learn how to use HDFS's command line interface and add data to HDFS to run your jobs on. 

Get comfortable with HDFS

Run your very first MapReduce Job. We'll also explore the Web interface for YARN and HDFS and see how to track your jobs.

Run your first MapReduce Job
Juicing your MapReduce - Combiners, Shuffle and Sort and The Streaming API
6 Lectures 01:09:52

The reduce phase can be optimized by combining the output of the map phase at the map node itself. This is an optimization of the reduce phase to allow it to work on data that has been "partially reduced".

Parallelize the reduce phase - use the Combiner

Using a Combiner should not change the output of the MapReduce. That means not every Reducer can work as a Combiner.

Not all Reducers are Combiners
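A sketch of why this is so (plain Python, our own example, not course code): a Combiner runs on partial groups on the map side, so it is only safe when combining partial results and then reducing gives the same answer as reducing the whole group. Summation satisfies this; averaging does not:

```python
# Sum is associative, so a sum Reducer doubles as a Combiner.
def sum_reduce(values):
    return sum(values)

# Mean is not: the mean of partial means differs from the overall mean.
def mean_reduce(values):
    values = list(values)
    return sum(values) / len(values)

part1, part2 = [1, 2, 3], [5]   # the same key's values, split across two map nodes

# Combining partial sums then reducing gives the same total (11).
assert sum_reduce([sum_reduce(part1), sum_reduce(part2)]) == sum_reduce(part1 + part2)

# mean(mean([1,2,3]), mean([5])) = mean(2, 5) = 3.5,
# but the true mean of [1,2,3,5] is 2.75 - so a mean Reducer is NOT a valid Combiner.
assert mean_reduce([mean_reduce(part1), mean_reduce(part2)]) != mean_reduce(part1 + part2)
```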

The number of mapper processes depends on the number of input splits of your data - it's not really in your control. What you, as a developer, do control is the number of reducers.

Preview 08:23

In order to have more than one Reducer work on your map data, you need partitions. Visualize how partitions and shuffle and sort work.

Parallelizing reduce using Shuffle And Sort

The Hadoop Streaming API uses the standard input and output to communicate with mapper and reducer functions in any language. Understand how Hadoop interacts with mappers and reducers in other languages.

MapReduce is not limited to the Java language - Introducing the Streaming API

It's not real till it's in code. Implement the word count MapReduce example in Python using the Streaming API. 

Python for MapReduce
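To illustrate the idea (a hedged sketch, not the course's scripts): with the Streaming API the mapper and reducer are ordinary programs that read lines on stdin and write tab-separated records on stdout, and the reducer's input arrives sorted by key, so equal keys are adjacent:

```python
# Streaming-style word count. In a real run, mapper and reducer would be two
# separate scripts reading sys.stdin; here they take any iterable of lines
# and a writable object so the flow can be demonstrated in one file.
import io

def streaming_mapper(lines, out):
    # mapper.py: read raw text, emit one "word\t1" record per word.
    for line in lines:
        for word in line.split():
            out.write(f"{word}\t1\n")

def streaming_reducer(lines, out):
    # reducer.py: input is sorted by key, so accumulate until the key
    # changes, then emit "word\tcount".
    current, total = None, 0
    for line in lines:
        word, count = line.rstrip("\n").split("\t")
        if word != current:
            if current is not None:
                out.write(f"{current}\t{total}\n")
            current, total = word, 0
        total += int(count)
    if current is not None:
        out.write(f"{current}\t{total}\n")

# Simulate a run: map, let the "framework" sort, then reduce.
m = io.StringIO()
streaming_mapper(["hello world", "hello"], m)
shuffled = sorted(m.getvalue().splitlines())
r = io.StringIO()
streaming_reducer((line + "\n" for line in shuffled), r)
# r.getvalue() == "hello\t2\nworld\t1\n"
```

On a cluster, the two halves would be submitted along the lines of `hadoop jar hadoop-streaming.jar -input ... -output ... -mapper mapper.py -reducer reducer.py` (exact jar path varies by install).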
HDFS and Yarn
7 Lectures 01:22:00

Let's understand HDFS and its data replication strategy in some detail.

Preview 15:32

Name nodes provide an index of what file is stored where in the data nodes. If the name node is lost, the mapping of where the files are is lost. That means even though the data is present in the data nodes, we'll have no idea how to access it!

HDFS - Name nodes and why they're critical

Hadoop backs up name nodes using two strategies: backing up the snapshot and edits of the file system, and setting up a secondary name node.

HDFS - Checkpointing to backup name node information

The Resource Manager assigns resources to processes based on policies and constraints of the cluster, while the Node Manager manages memory and other resources for a single node. These two form the basic components of YARN.

Yarn - Basic components

What happens under the hood when you submit a job to Yarn? Resource Manager, Container, the Application Master and the Node Manager all work together to run your MapReduce job. 

Yarn - Submitting a job to Yarn

The Resource Manager acts as a pure scheduler and allows plugging in different policies to schedule jobs. Understand how the FIFO scheduler, the Capacity scheduler and the Fair scheduler work.

Yarn - Plug in scheduling policies

The user has a lot of leeway in configuring how the scheduler works. Let's study some of the options we can specify in the various config files.

Yarn - Configure the scheduler
MapReduce Customizations For Finer Grained Control
4 Lectures 52:19

The Main class in your MapReduce needs some special setup before it can accept command line arguments.

Setting up your MapReduce to accept command line arguments

The library classes and interfaces which allow parsing command line arguments. Learn what they are and how to use them.

The Tool, ToolRunner and GenericOptionsParser

The Job object allows you to plug in your own classes to control inputs, outputs and many intermediate steps in the MapReduce.

Configuring properties of the Job object

Between the Map phase and the Reduce phase lie a whole number of intermediate steps performed by the Hadoop framework. Partitioning, Sorting and Grouping are 3 specific operations and each of these can be customized to fit your problem statement.

Customizing the Partitioner, Sort Comparator, and Group Comparator
The Inverted Index, Custom Data Types for Keys, Bigram Counts and Unit Tests!
7 Lectures 01:11:30

The Inverted Index, which provides a mapping from every word to the pages on which that word occurs, is at the heart of every search engine. This is one of the original use cases for MapReduce.

Preview 14:40

It's not real unless it's in code: generate the inverted index using an MR job.

Generating the inverted index using MapReduce
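The course's version is an actual Hadoop job in Java; as a plain-Python sketch of the logic (names are ours), the map side emits (word, doc id) pairs and the reduce side groups document ids by word:

```python
# Plain-Python analogue of the inverted-index MapReduce.
from collections import defaultdict

def map_phase(doc_id, text):
    # Map: emit (word, doc_id) for every distinct word in the document.
    for word in set(text.lower().split()):   # set(): one posting per doc
        yield (word, doc_id)

def build_inverted_index(docs):
    # Shuffle + reduce: group doc ids by word.
    index = defaultdict(set)
    for doc_id, text in docs.items():
        for word, d in map_phase(doc_id, text):
            index[word].add(d)
    return {word: sorted(ids) for word, ids in index.items()}

index = build_inverted_index({
    "page1": "big data is big",
    "page2": "hadoop handles big data",
})
# index["big"] == ["page1", "page2"]; index["hadoop"] == ["page2"]
```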

Understand why we need the Writable and the WritableComparable interface and why the keys in the Mapper output implement these interfaces.

Custom data types for keys - The Writable Interface

A Bigram is a pair of adjacent words. We use a special data type to represent a Bigram; it needs to be a WritableComparable so it can be serialized across the network and sorted and merged by Hadoop.

Represent a Bigram using a WritableComparable

Use the Bigram data type in your MapReduce to produce a count of all Bigrams in the input text file.

MapReduce to count the Bigrams in input text
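Sketched in plain Python (not the course's Java code), the map side emits each adjacent word pair as a key - the course wraps that pair in a custom WritableComparable so Hadoop can serialize and sort it - and the reduce side sums the counts:

```python
# Plain-Python analogue of the bigram-count job.
from collections import Counter

def bigrams(text):
    # Map: emit each adjacent pair of words as a (first, second) key.
    words = text.lower().split()
    for first, second in zip(words, words[1:]):
        yield (first, second)

def bigram_counts(lines):
    # Reduce: frequency distribution over all emitted bigram keys.
    return Counter(pair for line in lines for pair in bigrams(line))

counts = bigram_counts(["to be or not to be"])
# counts[("to", "be")] == 2 - that bigram occurs twice in the line
```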

Follow these instructions to set up your Hadoop project. 

Setting up your Hadoop project

No code is complete without unit tests. The MRUnit framework uses JUnit to test MapReduce jobs. Write test cases for the Bigram count code.

Test your MapReduce job using MRUnit
Input and Output Formats and Customized Partitioning
7 Lectures 01:14:33

The Input Format specifies the kind of input data that feeds into the MapReduce job. FileInputFormat is the base class for all inputs that are files.

Preview 12:48

The most common kinds of files are text files and binary files, and Hadoop has built-in library classes to represent both of these.

Text And Sequence File Formats

What if you want to partition on something other than key hashes? Custom partitioners allow you to partition on whatever metric you choose; you just need to write a bit of code.

Data partitioning using a custom partitioner

Make the custom partitioner real in code
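To show what a partitioner actually decides, here is a plain-Python analogue (our own illustrative rule, not the course's code): given a key and the number of reducers, return the partition that key is routed to. Hadoop's default HashPartitioner hashes the key modulo the reducer count; a custom partitioner substitutes any rule you like, such as routing by first letter:

```python
def default_partition(key, num_reducers):
    # Analogue of Hadoop's HashPartitioner: hash the key, mod reducer count.
    # (Java uses key.hashCode(); Python's hash() stands in here.)
    return hash(key) % num_reducers

def first_letter_partition(key, num_reducers):
    # Custom rule: all keys starting with the same letter land on the same
    # reducer, so each reducer's output covers a letter range.
    return (ord(key[0].lower()) - ord("a")) % num_reducers

# Every key beginning with "h" goes to the same partition:
assert first_letter_partition("hadoop", 4) == first_letter_partition("hdfs", 4)
```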

Total Order Partitioning is a mind-bending concept in Hadoop. It allows you to locally sort data such that the result is in globally sorted order. Sounds confusing? It is a hard concept to wrap one's head around, but the results are pretty amazing!

Total Order Partitioning

Input sampling samples the input data to produce a key-to-partition mapping. The Total Order Partitioner uses this mapping to partition the data in such a manner that locally sorting the data results in a globally sorted result.

Input Sampling, Distribution, Partitioning and configuring these
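The whole pipeline can be sketched in plain Python (illustrative only; in Hadoop, InputSampler and TotalOrderPartitioner do this for real): sample the keys to pick split points, range-partition every record, sort each partition locally, and the concatenated partitions come out globally sorted:

```python
# Plain-Python analogue of Total Order Partitioning with input sampling.
import bisect
import random

def sample_split_points(keys, num_partitions, sample_size=100):
    # Input sampling: estimate num_partitions - 1 boundary keys.
    sample = sorted(random.sample(list(keys), min(sample_size, len(keys))))
    step = len(sample) // num_partitions
    return [sample[i * step] for i in range(1, num_partitions)]

def partition_for(key, split_points):
    # Range partitioner: binary-search the key into its key range.
    return bisect.bisect_right(split_points, key)

def total_order_sort(keys, num_partitions=4):
    splits = sample_split_points(keys, num_partitions)
    partitions = [[] for _ in range(num_partitions)]
    for k in keys:
        partitions[partition_for(k, splits)].append(k)
    for p in partitions:
        p.sort()                      # each "reducer" sorts locally
    # Concatenating partitions 0..n-1 yields a globally sorted result.
    return [k for p in partitions for k in p]

data = random.sample(range(100000), 1000)
assert total_order_sort(data) == sorted(data)
```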

The Hadoop Sort/Merge operation sorts the output keys of the mapper. Here is a neat trick to sort the values for each key as well.

Secondary Sort
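The trick can be sketched in plain Python (not the course's Java implementation): fold the value into a composite key so the framework's sort orders values within each natural key, then group on the natural key alone so the reducer still sees one group per key:

```python
# Plain-Python analogue of secondary sort via a composite key.
from itertools import groupby

def secondary_sort(records):
    # records: (key, value) pairs in arbitrary order.
    # Composite key = (key, value); sorting it orders values per key
    # (the "sort comparator" step).
    composites = sorted(records)
    # "Group comparator": group on the natural key only, so each reducer
    # call receives all values for one key - already in sorted order.
    grouped = groupby(composites, key=lambda kv: kv[0])
    return {k: [v for _, v in g] for k, g in grouped}

out = secondary_sort([("b", 3), ("a", 2), ("b", 1), ("a", 9)])
# out == {"a": [2, 9], "b": [1, 3]} - values arrive sorted within each key
```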
About the Instructor
Loony Corn
4.3 Average rating
3,812 Reviews
29,429 Students
74 Courses
A small team; ex-Google, Stanford and Flipkart

Loonycorn is us, Janani Ravi and Vitthal Srinivasan. Between us, we have studied at Stanford, been admitted to IIM Ahmedabad, and have spent years working in tech in the Bay Area, New York, Singapore and Bangalore.

Janani: 7 years at Google (New York, Singapore); Studied at Stanford; also worked at Flipkart and Microsoft

Vitthal: Also Google (Singapore) and studied at Stanford; Flipkart, Credit Suisse and INSEAD too

We think we might have hit upon a neat way of teaching complicated tech courses in a funny, practical, engaging way, which is why we are so excited to be here on Udemy!

We hope you will try our offerings, and think you'll like them :-)