Taming Big Data with MapReduce and Hadoop - Hands On!

Learn MapReduce fast by building over 10 real examples, using Python, MRJob, and Amazon's Elastic MapReduce Service.

4.3 (2,414 ratings)
20,034 students enrolled
Last updated 7/2020
English, French [Auto], 5 more
  • German [Auto]
  • Indonesian [Auto]
  • Italian [Auto]
  • Portuguese [Auto]
  • Spanish [Auto]
This course includes
  • 5 hours on-demand video
  • 3 articles
  • 5 downloadable resources
  • Full lifetime access
  • Access on mobile and TV
  • Certificate of Completion
What you'll learn
  • Understand how MapReduce can be used to analyze big data sets
  • Write your own MapReduce jobs using Python and MRJob
  • Run MapReduce jobs on Hadoop clusters using Amazon Elastic MapReduce
  • Chain MapReduce jobs together to analyze more complex problems
  • Analyze social network data using MapReduce
  • Analyze movie ratings data using MapReduce and produce movie recommendations with it
  • Understand other Hadoop-based technologies, including Hive, Pig, and Spark
  • Understand what Hadoop is for, and how it works
Requirements
  • You'll need a Windows system; we'll walk you through downloading and installing a Python development environment and the tools you need as part of the course. If you're on Linux and already have a Python development environment in place that you're familiar with, that's fine too. Be sure you have at least some programming or scripting experience under your belt: you won't need to be a Python expert to succeed in this course, but you will need the fundamental concepts of programming in order to pick up what we're doing.

"Big data" analysis is a hot and highly valuable skill – and this course will teach you two technologies fundamental to big data quickly: MapReduce and Hadoop. Ever wonder how Google manages to analyze the entire Internet on a continual basis? You'll learn those same techniques, using your own Windows system right at home.

In this course, you'll learn and master the art of framing data analysis problems as MapReduce problems through over 10 hands-on examples, and then scale them up to run on cloud computing services. You'll be learning from an ex-engineer and senior manager from Amazon and IMDb.

  • Learn the concepts of MapReduce
  • Run MapReduce jobs quickly using Python and MRJob
  • Translate complex analysis problems into multi-stage MapReduce jobs
  • Scale up to larger data sets using Amazon's Elastic MapReduce service
  • Understand how Hadoop distributes MapReduce across computing clusters
  • Learn about other Hadoop technologies, like Hive, Pig, and Spark

By the end of this course, you'll be running code that analyzes gigabytes' worth of information – in the cloud – in a matter of minutes.

We'll have some fun along the way. You'll get warmed up with some simple examples of using MapReduce to analyze movie ratings data and text in a book. Once you've got the basics under your belt, we'll move on to some more complex and interesting tasks. We'll use a million movie ratings to find movies that are similar to each other, and you might even discover some new movies you like in the process! We'll analyze a social graph of superheroes, learn who the most "popular" superhero is, and develop a system to find "degrees of separation" between superheroes. Are all Marvel superheroes within a few degrees of being connected to The Incredible Hulk? You'll find the answer.

This course is very hands-on; you'll spend most of your time following along with the instructor as we write, analyze, and run real code together – both on your own system, and in the cloud using Amazon's Elastic MapReduce service. Over 5 hours of video content is included, with over 10 real examples of increasing complexity you can build, run and study yourself. Move through them at your own pace, on your own schedule. The course wraps up with an overview of other Hadoop-based technologies, including Hive, Pig, and the very hot Spark framework – complete with a working example in Spark.

Don't take my word for it - check out some of our unsolicited reviews from real students:

"I have gone through many courses on map reduce; this is undoubtedly the best, way at the top."

"This is one of the best courses I have ever seen since 4 years passed I am using Udemy for courses."

"The best hands on course on MapReduce and Python. I really like the run it yourself approach in this course. Everything is well organized, and the lecturer is top notch."

Who this course is for:
  • This course is best for students with some prior programming or scripting ability. We will treat you as a beginner when it comes to MapReduce and getting everything set up for writing MapReduce jobs with Python, MRJob, and Amazon's Elastic MapReduce service - but we won't spend a lot of time teaching you how to write code. The focus is on framing data analysis problems as MapReduce problems and running them either locally or on a Hadoop cluster. If you don't know Python, you'll need to be able to pick it up based on the examples we give. If you're new to programming, you'll want to learn a programming or scripting language before taking this course.
Course content
54 lectures • 05:03:14 total length
+ Introduction, and Getting Started
4 lectures 13:57

Learn the scope of this course, and the credentials of your instructor.

Preview 03:22
Udemy 101: Getting the Most From This Course
Updated setup instructions!

I'll walk you through installing Enthought Canopy, the mrjob Python package, and some sample movie ratings data from MovieLens - and then we'll run a simple MapReduce job on your desktop!

Preview 07:44
+ Understanding MapReduce
16 lectures 01:31:49

Understand the basic concepts of MapReduce - what a mapper does, what a reducer does, and what happens in between.

MapReduce Basic Concepts
A quick note on file names.

We'll analyze the source of your ratings histogram job, and understand how it works.

Walkthrough of Rating Histogram Code
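The job's three phases can be sketched in plain Python. This is a hypothetical simulation of the map/shuffle/reduce flow, not the course's actual mrjob code, and the tab-separated MovieLens field layout is an assumption:

```python
from collections import defaultdict

def mapper(line):
    """Emit (rating, 1) for each line of data, assumed as user\tmovie\trating\ttimestamp."""
    user_id, movie_id, rating, timestamp = line.split('\t')
    yield rating, 1

def shuffle(pairs):
    """Group mapper output by key, as Hadoop does between the map and reduce phases."""
    groups = defaultdict(list)
    for key, value in pairs:
        groups[key].append(value)
    return groups

def reducer(rating, counts):
    """Sum up how many times each rating value occurred."""
    yield rating, sum(counts)

lines = ["196\t242\t3\t881250949", "186\t302\t3\t891717742", "22\t377\t1\t878887116"]
mapped = [pair for line in lines for pair in mapper(line)]
histogram = dict(pair for k, v in shuffle(mapped).items() for pair in reducer(k, v))
print(histogram)  # {'3': 2, '1': 1}
```

In a real mrjob job the shuffle step is handled for you; only the mapper and reducer are yours to write.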

Understand why MapReduce is a powerful tool for scaling big data analysis problems across compute clusters.

Understanding How MapReduce Scales / Distributed Computing

In our next example, we'll look at some fake social data and compute the average number of friends by age.

Average Friends by Age Example: Part 1

Actually run the friends by age example on your machine, and analyze the results.

Average Friends by Age Example: Part 2
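The core idea can be sketched as follows: the mapper emits (age, numFriends) pairs and the reducer averages the values for each age. The CSV layout and field names here are assumptions, not the course's exact code:

```python
from collections import defaultdict

def mapper(line):
    # Assumed fake-social-data format: id,name,age,numFriends
    _id, _name, age, num_friends = line.split(',')
    yield int(age), int(num_friends)

def reducer(age, friend_counts):
    # Average the friend counts seen for this age
    yield age, sum(friend_counts) / len(friend_counts)

rows = ["0,Will,33,385", "1,Jean-Luc,33,2", "2,Hugh,55,221"]
grouped = defaultdict(list)
for line in rows:
    for age, n in mapper(line):
        grouped[age].append(n)
averages = dict(pair for age, ns in grouped.items() for pair in reducer(age, ns))
print(averages)  # {33: 193.5, 55: 221.0}
```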

In another example, we'll use real weather data from the year 1800 and find the minimum temperature at each weather station for the year.

Preview 09:39

Now, we'll modify that same example to find the maximum temperature for the year, and run it too.

Maximum Temperature By Location Example
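The minimum- and maximum-temperature jobs share the same structure; only the reducer's aggregation changes. A hypothetical sketch (the comma-separated record layout is an assumption):

```python
def mapper(line):
    # Assumed weather-record format: stationID,date,entryType,temperature
    fields = line.split(',')
    station, entry_type, temp = fields[0], fields[2], int(fields[3])
    if entry_type in ('TMIN', 'TMAX'):
        yield station, temp

def min_reducer(station, temps):
    yield station, min(temps)

def max_reducer(station, temps):
    yield station, max(temps)

rows = ["ITE00100554,18000101,TMAX,-75", "ITE00100554,18000101,TMIN,-148",
        "EZE00100082,18000101,TMAX,-86"]
by_station = {}
for line in rows:
    for station, t in mapper(line):
        by_station.setdefault(station, []).append(t)
minima = dict(p for s, ts in by_station.items() for p in min_reducer(s, ts))
print(minima)  # {'ITE00100554': -148, 'EZE00100082': -86}
```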

Another hands-on example: we'll find how often each word is used in a real book's text.

Word Frequency in a Book Example

We build on the previous example to do a better job of identifying words, using regular expressions in Python.

Making the Word Frequency Mapper Better with Regular Expressions

We'll build further on the same example, this time using MapReduce to sort the results the way we want them, using a multi-stage MapReduce job.

Sorting the Word Frequency Results Using Multi-Stage MapReduce Jobs
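The two-stage idea can be sketched like this (a hypothetical simulation, not the course's exact mrjob code): stage 1 counts words, using a regular expression to identify them properly, and stage 2 flips each (word, count) pair to (count, word) so that sorting by key orders the results by frequency:

```python
import re
from collections import Counter

WORD_RE = re.compile(r"[\w']+")

def stage1(text):
    """Stage 1 (mapper + reducer): count normalized words."""
    return Counter(word.lower() for word in WORD_RE.findall(text))

def stage2(counts):
    """Stage 2: invert to (count, word) keys; sorting by key sorts by frequency."""
    return sorted(((n, w) for w, n in counts.items()), reverse=True)

text = "To be, or not to be: that is the question."
top = stage2(stage1(text))
print(top[0])  # (2, 'to')
```

In a real multi-stage mrjob job, the two stages would be declared via its `steps()` mechanism rather than chained function calls.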

Your first homework assignment! Propose what the mapper and reducer should do for a job that computes the total amount spent by customer in a fake e-commerce data set.

Activity: Design a Mapper and Reducer for Total Spent by Customer

We'll review your approach for the e-commerce problem, and set you loose with the tools you need to go write your first MapReduce job on your own.

Activity: Write Code for Total Spent by Customer

Compare your code to mine for analyzing our e-commerce data. Now, build upon your code to sort the final results to find the biggest spender.

Compare Your Code to Mine. Activity: Sort Results by Amount Spent

We'll review your homework to sort the results of the e-commerce analysis, and compare your code to mine.

Compare your Code to Mine for Sorted Results.

Learn how combiners can help reduce network throughput in MapReduce jobs, and run a simple example of using a combiner function.
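A hypothetical sketch of what a combiner buys you: it pre-aggregates each mapper's output on the local machine, so fewer (key, value) pairs travel across the network to the reducers during the shuffle:

```python
from collections import Counter

def mapper(words):
    # Raw mapper output: one (word, 1) pair per word
    return [(word, 1) for word in words]

def combiner(pairs):
    """Runs on the mapper's machine; for sums and counts it can reuse the reducer's logic."""
    totals = Counter()
    for word, n in pairs:
        totals[word] += n
    return list(totals.items())

node_output = mapper(["the", "cat", "and", "the", "hat"])
combined = combiner(node_output)
print(len(node_output), "pairs before combining,", len(combined), "after")
```

The reducers still see the same totals; they just receive fewer pairs to get there.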

+ Advanced MapReduce Examples
12 lectures 01:21:27

Using the MovieLens data set, we'll write and run a MapReduce job to find the most-rated movie.

Preview 07:23

Extend the previous example to send movie-ID-to-movie-name lookup data along with our MapReduce tasks, so we can display results in a human-readable format.

Including Ancillary Lookup Data in the Example

We'll introduce the Marvel social graph data set, and cover how we'll find the most "popular" superhero!

Example: Most Popular Superhero, Part 1

Actually implement and run the code to identify the most popular superhero. I bet it's not who you think it is!

Example: Most Popular Superhero, Part 2

In a more advanced example, we'll describe how to use MapReduce to find degrees of separation between superheroes in a social graph. We'll use a breadth-first-search algorithm in MapReduce to find the answers we want.

Preview 12:27

First we walk through transforming the Marvel data set into a format usable for the BFS algorithm.

Degrees of Separation: Preprocessing the Data

Now we'll cover the code needed to iteratively run breadth-first search using MapReduce, and use Hadoop counters to flag our results.

Degrees of Separation: Code Walkthrough
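One map step of the iterative BFS can be sketched as below. This is a hypothetical illustration of the classic node-coloring scheme for BFS in MapReduce; the exact node layout and color names are assumptions, not the course's code:

```python
def bfs_map(node):
    """Expand one node: GRAY nodes emit their neighbors one hop further out."""
    hero_id, connections, distance, color = node
    results = []
    if color == 'GRAY':
        for neighbor in connections:
            # Emit each neighbor as a new frontier node, distance + 1 away
            results.append((neighbor, [], distance + 1, 'GRAY'))
        color = 'BLACK'  # this node is now fully expanded
    results.append((hero_id, connections, distance, color))
    return results

start = (5306, [2548, 1423], 0, 'GRAY')
expanded = bfs_map(start)
print(expanded)
# [(2548, [], 1, 'GRAY'), (1423, [], 1, 'GRAY'), (5306, [2548, 1423], 0, 'BLACK')]
```

A matching reducer would merge the duplicate records emitted for each hero, keeping the shortest distance, the darkest color, and the full connection list; the job runs repeatedly until the target hero turns GRAY or BLACK.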

Actually run the code that allows us to find the degrees of separation between any two superheroes, and analyze the results.

Degrees of Separation: Running and Analyzing the Results

In another advanced example, we'll cover item-based collaborative filtering and how it can be used to identify movies similar to each other based on ratings data.

Preview 07:24

We'll walk through how a creative multi-step MapReduce job can compute similar movies with a surprisingly small amount of code.

Similar Movies: Code Walkthrough
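The similarity metric at the heart of the job can be sketched as follows (a hypothetical illustration of cosine similarity over co-rating pairs; the course's actual scoring code may differ):

```python
from math import sqrt

def cosine_similarity(rating_pairs):
    """Score how alike two movies are, given pairs of ratings that the
    same users gave to both movies."""
    sum_xx = sum_yy = sum_xy = 0.0
    for x, y in rating_pairs:
        sum_xx += x * x
        sum_yy += y * y
        sum_xy += x * y
    denominator = sqrt(sum_xx) * sqrt(sum_yy)
    return sum_xy / denominator if denominator else 0.0

# Ratings three users gave to the same two movies; identical tastes score 1.0
pairs = [(5.0, 5.0), (4.0, 4.0), (3.0, 3.0)]
print(round(cosine_similarity(pairs), 3))  # 1.0
```

The multi-stage job's role is to assemble those rating pairs: group ratings by user, emit every movie pair that user rated, then group by movie pair before scoring.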

We'll run our code on the 100K MovieLens data set, and analyze the results.

Similar Movies: Running and Analyzing the Results

Your homework is to modify our program to produce better results, and I'll give you a few ideas on things you might try.

Learning Activity: Improving our Movie Similarities MapReduce Job
+ Using Hadoop and Elastic MapReduce
8 lectures 37:07

We'll cover what Hadoop is, and how it enables running MapReduce jobs across a cluster of computers.

Fundamental Concepts of Hadoop

Learn how HDFS distributes large data sets across a cluster in a reliable manner.

The Hadoop Distributed File System (HDFS)

Learn how YARN manages resources on a Hadoop cluster running MapReduce V2.

Apache YARN

Learn how Hadoop can run mappers and reducers written in any programming language, through Hadoop streaming.

Hadoop Streaming: How Hadoop Runs your Python Code
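The streaming contract is simple enough to sketch: a mapper is just a program that reads lines from standard input and writes tab-separated key/value pairs to standard output, and Hadoop handles the shuffle in between. A hypothetical word-count mapper:

```python
def streaming_mapper(lines):
    """Turn input lines into tab-separated (word, 1) records, one per line."""
    for line in lines:
        for word in line.split():
            yield f"{word.lower()}\t1"

# In a real streaming job, this script would end with:
#     import sys
#     for record in streaming_mapper(sys.stdin):
#         print(record)
demo = list(streaming_mapper(["I am a MapReduce mapper"]))
print(demo[0])  # i	1
```

Because the contract is just stdin/stdout text, the same approach works for mappers and reducers written in any language.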

Set up an Amazon Elastic MapReduce account, so you can run larger examples in this course across real compute clusters at very low cost.

Setting Up Your Amazon Elastic MapReduce Account

Tie your Amazon Elastic MapReduce account to your Python and MRJob development environment.

Linking Your EMR Account with MRJob

Run the movie recommendation program in the cloud!

Exercise: Run Movie Recommendations on Elastic MapReduce

Analyze the results - they should be what you expect, but why did it take so long?

Analyze the Results of Your EMR Job
+ Advanced Hadoop and EMR
7 lectures 43:23

Some basic concepts on distributed computing, and the overhead associated with it.

Distributed Computing Fundamentals

Learn how to run your movie similarity program on multiple machines in EMR, and actually run it.

Activity: Running Movie Similarities on Four Machines

Analyze the results of running movie similarities across four machines. It's faster - but there is a downside!

Analyzing the Results of the 4-Machine Job

Learn how to troubleshoot EMR / MRJob programs that don't complete successfully.

Troubleshooting Hadoop Jobs with EMR and MRJob, Part 1

Hands-on example of troubleshooting a failed job after the fact.

Troubleshooting Hadoop Jobs, Part 2

Finally, some truly big data: compute similar movies using one million movie ratings across a cluster of 20 computers.

Preview 06:08

We'll analyze the results of our one-million-rating analysis, and use a new script to extract the data we want.

Analyzing One Million Movie Ratings Across 16 Machines, Part 2
+ Other Hadoop Technologies
6 lectures 34:36

A very brief overview of Apache Hive, QL, and a simple example.

Introducing Apache Hive

A very brief overview of Apache Pig, and a simple example.

Introducing Apache Pig

An overview of Spark, how it works, and why it might be a better choice than MapReduce for some tasks.

Apache Spark: Concepts

We'll walk through running a real Spark program to analyze gigabytes' worth of airline flight data to identify the worst airports in America.

Spark Example: Part 1

We'll analyze the results of our Spark program, and find out which airport has the most flight delays.

Spark Example: Part 2

Thank you for taking my course! Please remember to leave a rating.

+ Where to Go from Here
1 lecture 00:53
Bonus Lecture: More courses to explore!