Learn By Example: Hadoop, MapReduce for Big Data problems

A hands-on workout in Hadoop, MapReduce and the art of thinking "parallel"
Bestselling
4.4 (232 ratings)
2,950 students enrolled
Created by Loony Corn
Last updated 2/2017
English
Current price: $10 Original price: $50 Discount: 80% off
30-Day Money-Back Guarantee
Includes:
  • 13.5 hours on-demand video
  • 1 Article
  • 111 Supplemental Resources
  • Full lifetime access
  • Access on mobile and TV
  • Certificate of Completion
What Will I Learn?
  • Develop advanced MapReduce applications to process Big Data
  • Master the art of "thinking parallel" - how to break up a task into Map/Reduce transformations
  • Self-sufficiently set up your own mini-Hadoop cluster, whether it's a single node, a physical cluster or in the cloud.
  • Use Hadoop + MapReduce to solve a wide variety of problems: from NLP to Inverted Indices to Recommendations
  • Understand HDFS, MapReduce and YARN and how they interact with each other
  • Understand the basics of performance tuning and managing your own cluster
Requirements
  • You'll need an IDE where you can write Java code or open the source code that's shared. IntelliJ and Eclipse are both great options.
  • You'll need some background in Object-Oriented Programming, preferably in Java. All the source code is in Java and we dive right in without going into Objects, Classes, etc.
  • A bit of exposure to Linux/Unix shells would be helpful, but it won't be a blocker
Description

Taught by a 4-person team including 2 Stanford-educated ex-Googlers and 2 ex-Flipkart Lead Analysts. This team has decades of practical experience working with Java and with billions of rows of data.

This course is a zoom-in, zoom-out, hands-on workout involving Hadoop, MapReduce and the art of thinking parallel. 

Let’s parse that.

Zoom-in, Zoom-Out: This course is both broad and deep. It covers the individual components of Hadoop in great detail, and also gives you a higher-level picture of how they interact with each other.

Hands-on workout involving Hadoop, MapReduce: This course will get you hands-on with Hadoop very early on. You'll learn how to set up your own cluster using both VMs and the Cloud. All the major features of MapReduce are covered - including advanced topics like Total Sort and Secondary Sort.

The art of thinking parallel: MapReduce completely changed the way people thought about processing Big Data. Breaking down any problem into parallelizable units is an art. The examples in this course will train you to "think parallel". 

What's Covered:

Lots of cool stuff...

  • Using MapReduce to:
    • Recommend friends in a Social Networking site: Generate Top 10 friend recommendations using a Collaborative filtering algorithm.
    • Build an Inverted Index for Search Engines: Use MapReduce to parallelize the humongous task of building an inverted index for a search engine.
    • Generate Bigrams from text: Generate bigrams and compute their frequency distribution in a corpus of text.
  • Build your Hadoop cluster:
    • Install Hadoop in Standalone, Pseudo-Distributed and Fully Distributed modes
    • Set up a Hadoop cluster using Linux VMs.
    • Set up a cloud Hadoop cluster on AWS with Cloudera Manager.
    • Understand HDFS, MapReduce and YARN and their interaction
  • Customize your MapReduce Jobs:
    • Chain multiple MR jobs together
    • Write your own Customized Partitioner
    • Total Sort: Globally sort a large amount of data by sampling input files
    • Secondary sorting
    • Unit tests with MRUnit
    • Integrate with Python using the Hadoop Streaming API

.. and of course all the basics:

  • MapReduce: Mapper, Reducer, Sort/Merge, Partitioning, Shuffle and Sort
  • HDFS & YARN: Namenode, Datanode, Resource Manager, Node Manager, the anatomy of a MapReduce application, YARN scheduling, and configuring HDFS and YARN to performance-tune your cluster.


Using discussion forums

Please use the discussion forums on this course to engage with other students and to help each other out. Unfortunately, much as we would like to, it is not possible for us at Loonycorn to respond to individual questions from students :-(

We're super small and self-funded with only 2-3 people developing technical video content. Our mission is to make high-quality courses available at super low prices.

The only way to keep our prices this low is to *NOT offer additional technical support over email or in-person*. The truth is, direct support is hugely expensive and just does not scale.

We understand that this is not ideal and that a lot of students might benefit from this additional support. Hiring resources for additional support would make our offering much more expensive, thus defeating our original purpose.

It is a hard trade-off.

Thank you for your patience and understanding!

Who is the target audience?
  • Yep! Analysts who want to leverage the power of HDFS where traditional databases don't cut it anymore
  • Yep! Engineers who want to develop complex distributed computing applications to process lots of data
  • Yep! Data Scientists who want to add MapReduce to their bag of tricks for processing data
Curriculum For This Course
73 Lectures
13:42:43
Introduction
1 Lecture 01:52

We start off with an introduction to what this course is all about.

Preview 01:52
Why is Big Data a Big Deal
6 Lectures 57:01

Big data may be a clichéd term, but what does it really mean? Where does this data come from and why is it big?

Preview 14:20

Distributed computing makes processing very fast - but why? Let's take a simple example and see why distributed computing is so powerful.

Preview 08:37

What exactly is Hadoop? Its origins and its logical components explained.

What is Hadoop?
07:25

HDFS, based on GFS (the Google File System), is the storage layer within Hadoop. It stores files in blocks of 128 MB.

HDFS or the Hadoop Distributed File System
11:00

MapReduce is the framework which allows developers to write massively parallel programs without worrying about the underlying details of distributed computing. The developer simply implements the map() and reduce() functions in order to crunch large input sets of data.

MapReduce Introduced
11:39

YARN is responsible for managing resources in the Hadoop cluster. It was introduced in Hadoop 2.0.

YARN or Yet Another Resource Negotiator
04:00
Installing Hadoop in a Local Environment
3 Lectures 36:02

Hadoop has 3 different install modes - Standalone, Pseudo-distributed and Fully Distributed. Get an overview of when to use each.

Preview 08:32

How to set up Hadoop in the standalone mode. Windows users need to install a Virtual Linux instance before this video. 

Hadoop Standalone mode Install
15:46

Set up Hadoop in the Pseudo-Distributed mode. All Hadoop services will be up and running! 

Hadoop Pseudo-Distributed mode Install
11:44
The MapReduce "Hello World"
7 Lectures 01:08:43

In the world of MapReduce, every problem can be thought of in terms of key-value pairs. Map transforms key-value pairs in a meaningful way, the framework sorts and merges them, and reduce combines key-value pairs in a meaningful way.

Preview 08:49

If you're learning MapReduce for the very first time - it's best to visualize what exactly it does before you get down into the little details.

MapReduce - Visualized And Explained
09:03

What really goes on with a single record as it flows through the map and then reduce phase?

MapReduce - Digging a little deeper at every step
10:21

Counting the number of times a word occurs in input text is the Hello World of MapReduce. This was the very first example given in Jeff Dean and Sanjay Ghemawat's original paper on MapReduce.

"Hello World" in MapReduce
10:29

Nothing is real unless it is in code. Setting up our very first Mapper.

The Mapper
09:48
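
For reference, a minimal word-count Mapper in Hadoop's Java API looks roughly like this (a sketch in the spirit of the course's examples, not its exact source):

    import java.io.IOException;
    import org.apache.hadoop.io.IntWritable;
    import org.apache.hadoop.io.LongWritable;
    import org.apache.hadoop.io.Text;
    import org.apache.hadoop.mapreduce.Mapper;

    // Word-count Mapper: the input key is the line's byte offset, the
    // value is the line of text; we emit (word, 1) for every word.
    public class WordCountMapper
        extends Mapper<LongWritable, Text, Text, IntWritable> {

      private static final IntWritable ONE = new IntWritable(1);
      private final Text word = new Text();

      @Override
      protected void map(LongWritable key, Text value, Context context)
          throws IOException, InterruptedException {
        for (String token : value.toString().split("\\s+")) {
          if (!token.isEmpty()) {
            word.set(token);
            context.write(word, ONE);  // one count per occurrence
          }
        }
      }
    }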

Nothing is real unless it is in code. Setting up our very first Reducer.

The Reducer
07:46
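
The matching Reducer simply sums the counts for each word; again, a rough sketch rather than the course's exact source:

    import java.io.IOException;
    import org.apache.hadoop.io.IntWritable;
    import org.apache.hadoop.io.Text;
    import org.apache.hadoop.mapreduce.Reducer;

    // Word-count Reducer: by the time reduce() runs, the framework has
    // sorted and merged the map output, so all counts for one word
    // arrive together and can simply be summed.
    public class WordCountReducer
        extends Reducer<Text, IntWritable, Text, IntWritable> {

      @Override
      protected void reduce(Text key, Iterable<IntWritable> values,
          Context context) throws IOException, InterruptedException {
        int sum = 0;
        for (IntWritable count : values) {
          sum += count.get();
        }
        context.write(key, new IntWritable(sum));
      }
    }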

Nothing is real unless it is in code. Setting up our very first MapReduce Job.

The Job
12:27
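
A minimal driver that wires the two together into a Job might look like this (the class names follow the sketches above; the input and output paths come from the command line):

    import org.apache.hadoop.conf.Configuration;
    import org.apache.hadoop.fs.Path;
    import org.apache.hadoop.io.IntWritable;
    import org.apache.hadoop.io.Text;
    import org.apache.hadoop.mapreduce.Job;
    import org.apache.hadoop.mapreduce.lib.input.FileInputFormat;
    import org.apache.hadoop.mapreduce.lib.output.FileOutputFormat;

    public class WordCount {
      public static void main(String[] args) throws Exception {
        Job job = Job.getInstance(new Configuration(), "word count");
        job.setJarByClass(WordCount.class);  // so the cluster can locate your classes
        job.setMapperClass(WordCountMapper.class);
        job.setReducerClass(WordCountReducer.class);
        job.setOutputKeyClass(Text.class);
        job.setOutputValueClass(IntWritable.class);
        FileInputFormat.addInputPath(job, new Path(args[0]));    // input directory
        FileOutputFormat.setOutputPath(job, new Path(args[1]));  // must not already exist
        System.exit(job.waitForCompletion(true) ? 0 : 1);
      }
    }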
Run a MapReduce Job
2 Lectures 25:28

Learn how to use HDFS's command line interface and add data to HDFS to run your jobs on. 

Get comfortable with HDFS
10:58

Run your very first MapReduce Job. We'll also explore the Web interface for YARN and HDFS and see how to track your jobs.

Run your first MapReduce Job
14:30
Juicing your MapReduce - Combiners, Shuffle and Sort and The Streaming API
6 Lectures 01:09:52

The reduce phase can be optimized by combining the output of the map phase at the map node itself, so the reducers work on data that has already been "partially reduced".

Parallelize the reduce phase - use the Combiner
14:39
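
Word count's Reducer just sums, which is associative and commutative, so the same class can double as the Combiner. Assuming the word-count sketches above, wiring it in is a single line on the Job:

    // Hadoop may run the combiner zero, one, or many times per map task,
    // so it is only safe when reusing it cannot change the final output.
    job.setCombinerClass(WordCountReducer.class);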

Using a Combiner should not change the output of the MapReduce, which means not every Reducer can work as a combine function. Averaging, for instance, can't: an average of averages is not, in general, the overall average.

Not all Reducers are Combiners
14:31

The number of mapper processes depends on the number of input splits of your data. It's not really in your control. What you, as a developer, do control is the number of reducers.

Preview 08:23
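
Assuming a Job object as in the earlier driver sketch, the reducer count is a single explicit setting:

    // The framework derives the number of map tasks from the input splits;
    // the number of reduce tasks (and hence partitions) is yours to choose.
    job.setNumReduceTasks(4);  // 4 is an arbitrary example value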

In order to have more than one Reducer work on your map data, you need partitions. Visualize how partitions and shuffle and sort work.

Parallelizing reduce using Shuffle And Sort
14:55

The Hadoop Streaming API uses the standard input and output to communicate with mapper and reducer functions in any language. Understand how Hadoop interacts with mappers and reducers in other languages.

MapReduce is not limited to the Java language - Introducing the Streaming API
05:05

It's not real till it's in code. Implement the word count MapReduce example in Python using the Streaming API. 

Python for MapReduce
12:19
HDFS and Yarn
7 Lectures 01:22:00

Let's understand HDFS and its data replication strategy in some detail.

Preview 15:32

Name nodes provide an index of which file is stored where in the data nodes. If the name node is lost, the mapping of where the files are is lost. That means even though the data is present in the data nodes, we'll have no idea how to access it!

HDFS - Name nodes and why they're critical
06:48

Hadoop backs up name nodes using two strategies: backing up the snapshot and edits to the file system, and setting up a secondary name node.

HDFS - Checkpointing to backup name node information
11:10

The Resource Manager assigns resources to processes based on policies and constraints of the cluster, while the Node Manager manages memory and other resources for a single node. These two form the basic components of YARN.

Yarn - Basic components
08:33

What happens under the hood when you submit a job to YARN? The Resource Manager, Container, Application Master and Node Manager all work together to run your MapReduce job.

Yarn - Submitting a job to Yarn
13:10

The Resource Manager acts as a pure scheduler and allows plugging in different policies to schedule jobs. Understand how the FIFO scheduler, the Capacity scheduler and the Fair scheduler work.

Yarn - Plug in scheduling policies
14:21

The user has a lot of leeway in configuring how the scheduler works. Let's study some of the options we can specify in the various config files.

Yarn - Configure the scheduler
12:26
MapReduce Customizations For Finer Grained Control
4 Lectures 52:19

The main class in your MapReduce job needs some special setup before it can accept command-line arguments.

Setting up your MapReduce to accept command line arguments
13:47

The library classes and interfaces which allow parsing command-line arguments: learn what they are and how to use them.

The Tool, ToolRunner and GenericOptionsParser
12:35
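
A rough sketch of the pattern (WordCountTool is a hypothetical name, and the job setup elided in the comment is the same as in the earlier driver):

    import org.apache.hadoop.conf.Configured;
    import org.apache.hadoop.mapreduce.Job;
    import org.apache.hadoop.util.Tool;
    import org.apache.hadoop.util.ToolRunner;

    // Implementing Tool lets ToolRunner run a GenericOptionsParser over
    // the command line first, peeling off standard Hadoop flags such as
    // -D key=value and -files before your own arguments reach run().
    public class WordCountTool extends Configured implements Tool {

      @Override
      public int run(String[] args) throws Exception {
        // getConf() already reflects any -D overrides parsed by ToolRunner.
        Job job = Job.getInstance(getConf(), "word count");
        // ... set mapper, reducer and input/output paths from args ...
        return job.waitForCompletion(true) ? 0 : 1;
      }

      public static void main(String[] args) throws Exception {
        System.exit(ToolRunner.run(new WordCountTool(), args));
      }
    }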

The Job object allows you to plug in your own classes to control inputs, outputs and many intermediate steps in the MapReduce.

Configuring properties of the Job object
10:41

Between the Map phase and the Reduce phase lies a whole series of intermediate steps performed by the Hadoop framework. Partitioning, Sorting and Grouping are 3 such operations, and each of these can be customized to fit your problem statement.

Customizing the Partitioner, Sort Comparator, and Group Comparator
15:16
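
All three hooks are setters on the Job; the class names below are placeholders for whatever implementations fit your problem:

    // Each intermediate step is pluggable on the Job object.
    job.setPartitionerClass(MyPartitioner.class);               // which reducer gets a key
    job.setSortComparatorClass(MySortComparator.class);         // order of keys within a partition
    job.setGroupingComparatorClass(MyGroupingComparator.class); // which keys share one reduce() call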
The Inverted Index, Custom Data Types for Keys, Bigram Counts and Unit Tests!
7 Lectures 01:11:30

The Inverted Index, which provides a mapping from every word to the pages on which it occurs, is at the heart of every search engine. This is one of the original use cases for MapReduce.

Preview 14:40

It's not real unless it's in code: generate the inverted index using an MR job.

Generating the inverted index using MapReduce
10:25

Understand why we need the Writable and WritableComparable interfaces and why the keys in the Mapper output implement these interfaces.

Custom data types for keys - The Writable Interface
10:23

A Bigram is a pair of adjacent words. Use a special data type to represent a Bigram; it needs to be a WritableComparable to be serialized across the network and sorted and merged by Hadoop.

Represent a Bigram using a WritableComparable
13:13
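
One plausible shape for such a type, as a sketch rather than the course's exact source:

    import java.io.DataInput;
    import java.io.DataOutput;
    import java.io.IOException;
    import org.apache.hadoop.io.WritableComparable;

    // A possible Bigram key. WritableComparable makes it serializable
    // across the network (write/readFields) and sortable by the
    // framework (compareTo) during Sort/Merge.
    public class Bigram implements WritableComparable<Bigram> {
      private String first = "";
      private String second = "";

      public Bigram() {}  // Hadoop needs a no-arg constructor to deserialize

      public void set(String first, String second) {
        this.first = first;
        this.second = second;
      }

      @Override
      public void write(DataOutput out) throws IOException {
        out.writeUTF(first);
        out.writeUTF(second);
      }

      @Override
      public void readFields(DataInput in) throws IOException {
        first = in.readUTF();
        second = in.readUTF();
      }

      @Override
      public int compareTo(Bigram other) {
        int cmp = first.compareTo(other.first);
        return cmp != 0 ? cmp : second.compareTo(other.second);
      }

      @Override
      public int hashCode() {  // used by the default HashPartitioner
        return 31 * first.hashCode() + second.hashCode();
      }

      @Override
      public boolean equals(Object o) {
        return o instanceof Bigram
            && first.equals(((Bigram) o).first)
            && second.equals(((Bigram) o).second);
      }

      @Override
      public String toString() {
        return first + " " + second;
      }
    }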

Use the Bigram data type in your MapReduce to produce a count of all Bigrams in the input text file.

MapReduce to count the Bigrams in input text
08:26

Follow these instructions to set up your Hadoop project. 

Setting up your Hadoop project
00:42

No code is complete without unit tests. The MRUnit framework uses JUnit to test MapReduce jobs. Write test cases for the Bigram count code.

Test your MapReduce job using MRUnit
13:41
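
A test in that style might look like this, with the earlier hypothetical WordCountMapper standing in for the course's Bigram code:

    import org.apache.hadoop.io.IntWritable;
    import org.apache.hadoop.io.LongWritable;
    import org.apache.hadoop.io.Text;
    import org.apache.hadoop.mrunit.mapreduce.MapDriver;
    import org.junit.Before;
    import org.junit.Test;

    // MRUnit drives a Mapper in isolation: feed records in with
    // withInput(), declare expectations with withOutput(), and
    // runTest() fails the JUnit test on any mismatch.
    public class WordCountMapperTest {

      private MapDriver<LongWritable, Text, Text, IntWritable> mapDriver;

      @Before
      public void setUp() {
        mapDriver = MapDriver.newMapDriver(new WordCountMapper());
      }

      @Test
      public void emitsOneCountPerWord() throws Exception {
        mapDriver.withInput(new LongWritable(0), new Text("big data"))
                 .withOutput(new Text("big"), new IntWritable(1))
                 .withOutput(new Text("data"), new IntWritable(1))
                 .runTest();
      }
    }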
Input and Output Formats and Customized Partitioning
7 Lectures 01:14:33

The Input Format specifies the kind of input data that feeds into the MapReduce. The FileInputFormat is the base class for all inputs which are files.

Preview 12:48

The most common kinds of files are text files and binary files, and Hadoop has built-in library classes to represent both of these.

Text And Sequence File Formats
10:21

What if you want to partition on something other than key hashes? Custom partitioners allow you to partition on whatever metric you choose; you just need to write a bit of code.

Data partitioning using a custom partitioner
07:11

Make the custom partitioner real in code.
10:25
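
As an illustration of the idea (a made-up example, not the course's), here is a partitioner that routes keys by their first letter, so each reducer receives one contiguous alphabetic range:

    import org.apache.hadoop.io.IntWritable;
    import org.apache.hadoop.io.Text;
    import org.apache.hadoop.mapreduce.Partitioner;

    // Route keys by first letter instead of by hash.
    public class FirstLetterPartitioner extends Partitioner<Text, IntWritable> {

      @Override
      public int getPartition(Text key, IntWritable value, int numPartitions) {
        String s = key.toString();
        if (s.isEmpty()) {
          return 0;
        }
        char first = Character.toLowerCase(s.charAt(0));
        int bucket = (first >= 'a' && first <= 'z') ? first - 'a' : 25;
        // Scale the 26 letter buckets down to the actual reducer count.
        return bucket * numPartitions / 26;
      }
    }
    // Wired in with: job.setPartitionerClass(FirstLetterPartitioner.class);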

Total Order Partitioning is a mind-bending concept in Hadoop. It allows you to sort data locally such that the result is globally sorted. Sounds confusing? It is a hard concept to wrap one's head around, but the results are pretty amazing!

Total Order Partitioning
10:10

Input sampling samples the input data to produce a key-to-partition mapping. The total order partitioner uses this mapping to partition the data in such a manner that locally sorting the data produces a globally sorted result.

Input Sampling, Distribution, Partitioning and configuring these
09:04
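
Sketched with Hadoop's stock TotalOrderPartitioner and InputSampler, and assuming a Job whose keys are Text (the partition-file path is an arbitrary example):

    import org.apache.hadoop.fs.Path;
    import org.apache.hadoop.io.Text;
    import org.apache.hadoop.mapreduce.lib.partition.InputSampler;
    import org.apache.hadoop.mapreduce.lib.partition.TotalOrderPartitioner;

    // Sample the input to build a partition file, then let the
    // TotalOrderPartitioner route whole key ranges to reducers, so
    // concatenating the (locally sorted) outputs is globally sorted.
    job.setPartitionerClass(TotalOrderPartitioner.class);
    TotalOrderPartitioner.setPartitionFile(job.getConfiguration(),
        new Path("/tmp/partitions"));

    // Sample ~10% of records, capped at 10,000, from at most 10 splits.
    InputSampler.Sampler<Text, Text> sampler =
        new InputSampler.RandomSampler<>(0.1, 10000, 10);
    InputSampler.writePartitionFile(job, sampler);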

The Hadoop Sort/Merge operation sorts the output keys of the mapper. Here is a neat trick to sort the values for each key as well.

Secondary Sort
14:34
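
In outline, and with placeholder class names, the trick is to pack (natural key, value) into a composite key and then repoint the three pluggable steps so reducers still group by the natural key alone:

    // Secondary sort recipe: the class names below are placeholders
    // for your own composite-key implementations.
    job.setPartitionerClass(NaturalKeyPartitioner.class);       // partition on natural key only
    job.setSortComparatorClass(CompositeKeyComparator.class);   // sort on (natural key, value)
    job.setGroupingComparatorClass(NaturalKeyGroupingComparator.class); // group on natural key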
5 More Sections
About the Instructor
Loony Corn
4.3 Average rating
4,595 Reviews
36,748 Students
75 Courses
An ex-Google, Stanford and Flipkart team

Loonycorn is us, Janani Ravi and Vitthal Srinivasan. Between us, we have studied at Stanford, been admitted to IIM Ahmedabad and have spent years working in tech, in the Bay Area, New York, Singapore and Bangalore.

Janani: 7 years at Google (New York, Singapore); Studied at Stanford; also worked at Flipkart and Microsoft

Vitthal: Also Google (Singapore) and studied at Stanford; Flipkart, Credit Suisse and INSEAD too

We think we might have hit upon a neat way of teaching complicated tech courses in a funny, practical, engaging way, which is why we are so excited to be here on Udemy!

We hope you will try our offerings, and think you'll like them :-)