Taming Big Data with MapReduce and Hadoop - Hands On!

Learn MapReduce fast by building over 10 real examples, using Python, MRJob, and Amazon's Elastic MapReduce Service.
Bestselling
4.5 (1,366 ratings)
11,797 students enrolled
Last updated 5/2017
English
Current price: $10 Original price: $80 Discount: 88% off
30-Day Money-Back Guarantee
Includes:
  • 5 hours on-demand video
  • 1 Article
  • Full lifetime access
  • Access on mobile and TV
  • Certificate of Completion
What Will I Learn?
  • Understand how MapReduce can be used to analyze big data sets
  • Write your own MapReduce jobs using Python and MRJob
  • Run MapReduce jobs on Hadoop clusters using Amazon Elastic MapReduce
  • Chain MapReduce jobs together to analyze more complex problems
  • Analyze social network data using MapReduce
  • Analyze movie ratings data using MapReduce and produce movie recommendations from it
  • Understand other Hadoop-based technologies, including Hive, Pig, and Spark
  • Understand what Hadoop is for, and how it works
View Curriculum
Requirements
  • You'll need a Windows system, and we'll walk you through downloading and installing a Python development environment and the tools you need as part of the course. If you're on Linux and already have a Python development environment you're familiar with, that's OK too. Be sure you have at least some programming or scripting experience under your belt: you won't need to be a Python expert to succeed in this course, but you will need the fundamental concepts of programming in order to pick up what we're doing.
Description

“Big data" analysis is a hot and highly valuable skill – and this course will teach you two technologies fundamental to big data quickly: MapReduce and Hadoop. Ever wonder how Google manages to analyze the entire Internet on a continual basis? You'll learn those same techniques, using your own Windows system right at home.

In this course, you'll learn and master the art of framing data analysis problems as MapReduce problems through over 10 hands-on examples, and then scale them up to run on cloud computing services. You'll be learning from an ex-engineer and senior manager from Amazon and IMDb.

  • Learn the concepts of MapReduce
  • Run MapReduce jobs quickly using Python and MRJob
  • Translate complex analysis problems into multi-stage MapReduce jobs
  • Scale up to larger data sets using Amazon's Elastic MapReduce service
  • Understand how Hadoop distributes MapReduce across computing clusters
  • Learn about other Hadoop technologies, like Hive, Pig, and Spark

By the end of this course, you'll be running code that analyzes gigabytes worth of information – in the cloud – in a matter of minutes.
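
To give you a taste of what that code looks like, here's a rough sketch of a complete MRJob word-count job (illustrative only, not the course's exact code; the file names word_count.py and book.txt are hypothetical):

    # A minimal MRJob job: the mapper emits (word, 1) pairs and the
    # reducer sums them up. Illustrative sketch, not the course's exact code.
    from mrjob.job import MRJob

    class MRWordCount(MRJob):
        def mapper(self, _, line):
            # Emit (word, 1) for every word on the input line.
            for word in line.split():
                yield word.lower(), 1

        def reducer(self, word, counts):
            # Sum up all the counts emitted for each word.
            yield word, sum(counts)

    if __name__ == '__main__':
        MRWordCount.run()

You'd run a job like this locally with "python word_count.py book.txt", and the same script runs on a real Hadoop cluster in the cloud with mrjob's Elastic MapReduce runner: "python word_count.py -r emr book.txt".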

We'll have some fun along the way. You'll get warmed up with some simple examples of using MapReduce to analyze movie ratings data and text in a book. Once you've got the basics under your belt, we'll move on to some more complex and interesting tasks. We'll use a million movie ratings to find movies that are similar to each other, and you might even discover some new favorites in the process! We'll analyze a social graph of superheroes, and learn who the most "popular" superhero is – and develop a system to find "degrees of separation" between superheroes. Are all Marvel superheroes within a few degrees of being connected to The Incredible Hulk? You'll find the answer.

This course is very hands-on; you'll spend most of your time following along with the instructor as we write, analyze, and run real code together – both on your own system, and in the cloud using Amazon's Elastic MapReduce service. Over 5 hours of video content is included, with over 10 real examples of increasing complexity you can build, run and study yourself. Move through them at your own pace, on your own schedule. The course wraps up with an overview of other Hadoop-based technologies, including Hive, Pig, and the very hot Spark framework – complete with a working example in Spark.

Don't take my word for it - check out some of our unsolicited reviews from real students:

"I have gone through many courses on map reduce; this is undoubtedly the best, way at the top."

"This is one of the best courses I have ever seen since 4 years passed I am using Udemy for courses."

"The best hands on course on MapReduce and Python. I really like the run it yourself approach in this course. Everything is well organized, and the lecturer is top notch."

Who is the target audience?
  • This course is best for students with some prior programming or scripting ability. We will treat you as a beginner when it comes to MapReduce and getting everything set up for writing MapReduce jobs with Python, MRJob, and Amazon's Elastic MapReduce service - but we won't spend a lot of time teaching you how to write code. The focus is on framing data analysis problems as MapReduce problems and running them either locally or on a Hadoop cluster. If you don't know Python, you'll need to be able to pick it up based on the examples we give. If you're new to programming, you'll want to learn a programming or scripting language before taking this course.
Curriculum For This Course
52 Lectures
05:00:35
Introduction, and Getting Started
2 Lectures 11:06

Learn the scope of this course, and the credentials of your instructor.

Preview 03:22

I'll walk you through installing Enthought Canopy, the mrjob Python package, and some sample movie ratings data from MovieLens - and then we'll run a simple MapReduce job on your desktop!

Preview 07:44
Understanding MapReduce
16 Lectures 01:31:45

Understand the basic concepts of MapReduce - what a mapper does, what a reducer does, and what happens in between.

MapReduce Basic Concepts
13:25

A quick note on file names.
00:42

We'll analyze the source of your ratings histogram job, and understand how it works.

Walkthrough of Rating Histogram Code
10:38
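
As a rough preview of the kind of code covered here, a ratings-histogram job can be sketched like this (assuming MovieLens' tab-separated u.data layout of userID, movieID, rating, timestamp; not necessarily the lecture's exact code):

    # Sketch: histogram of how many times each star rating appears.
    # Assumes tab-separated input lines: userID, movieID, rating, timestamp.
    from mrjob.job import MRJob

    class MRRatingsHistogram(MRJob):
        def mapper(self, _, line):
            # Pull out the rating field and emit (rating, 1).
            user_id, movie_id, rating, timestamp = line.split('\t')
            yield rating, 1

        def reducer(self, rating, counts):
            # Count how many times each star rating occurs.
            yield rating, sum(counts)

    if __name__ == '__main__':
        MRRatingsHistogram.run()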

Understand why MapReduce is a powerful tool for scaling big data analysis problems across compute clusters.

Understanding How MapReduce Scales / Distributed Computing
03:00

In our next example, we'll look at some fake social data and compute the average number of friends by age.

Average Friends by Age Example: Part 1
03:04

Actually run the friends by age example on your machine, and analyze the results.

Average Friends by Age Example: Part 2
07:13
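
The pattern here is a mapper that emits (age, numFriends) pairs and a reducer that averages them; a hedged sketch (the comma-separated id, name, age, numFriends layout is an assumption, not necessarily the lecture's exact data format):

    # Sketch: average number of friends by age.
    # Assumes comma-separated lines: id, name, age, numFriends.
    from mrjob.job import MRJob

    class MRFriendsByAge(MRJob):
        def mapper(self, _, line):
            # Emit (age, numFriends) for each person.
            _id, name, age, num_friends = line.split(',')
            yield age, float(num_friends)

        def reducer(self, age, friend_counts):
            # Average the friend counts seen for this age.
            total, count = 0.0, 0
            for friends in friend_counts:
                total += friends
                count += 1
            yield age, total / count

    if __name__ == '__main__':
        MRFriendsByAge.run()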

In another example, we'll use real weather data from the year 1800 and find the minimum temperature at each weather station for the year.

Preview 09:39

Now, we'll modify that same example to find the maximum temperature for the year, and run it too.

Maximum Temperature By Location Example
03:22
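
The only real difference between the minimum and maximum versions is whether the reducer calls min() or max(); here's a sketch of the maximum version (the field positions and the TMAX entry type are assumptions about the weather data's layout, not the lecture's exact code):

    # Sketch: maximum temperature per weather station.
    # Assumes comma-separated lines where field 0 is the station ID,
    # field 2 is an entry type such as TMIN/TMAX, and field 3 is the temperature.
    from mrjob.job import MRJob

    class MRMaxTemperature(MRJob):
        def mapper(self, _, line):
            fields = line.split(',')
            station, entry_type, temperature = fields[0], fields[2], fields[3]
            if entry_type == 'TMAX':
                yield station, float(temperature)

        def reducer(self, station, temps):
            # Swap max() for min() to get the minimum-temperature version.
            yield station, max(temps)

    if __name__ == '__main__':
        MRMaxTemperature.run()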

Another hands-on example: we'll find how often each word is used in a real book's text.

Word Frequency in a Book Example
05:25

We build on the previous example to do a better job of identifying words, using regular expressions in Python.

Making the Word Frequency Mapper Better with Regular Expressions
03:15
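
The improvement boils down to swapping a naive split() for a regular expression in the mapper; a sketch (the regex shown is one reasonable choice, not necessarily the lecture's exact one):

    # Sketch: word-frequency mapper using a regular expression to find words,
    # so punctuation and capitalization don't split or duplicate words.
    import re

    from mrjob.job import MRJob

    WORD_RE = re.compile(r"[\w']+")

    class MRWordFrequency(MRJob):
        def mapper(self, _, line):
            for word in WORD_RE.findall(line):
                yield word.lower(), 1

        def reducer(self, word, counts):
            yield word, sum(counts)

    if __name__ == '__main__':
        MRWordFrequency.run()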

We'll build further on the same example, this time using MapReduce to sort the results the way we want them, using a multi-stage MapReduce job.

Sorting the Word Frequency Results Using Multi-Stage MapReduce Jobs
08:18
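
In MRJob, a multi-stage job is expressed by overriding steps(); here's a sketch of the count-then-sort pattern, where re-keying by a zero-padded count lets Hadoop's own key ordering do the sorting (not necessarily the lecture's exact code):

    # Sketch: two-stage MRJob that counts words, then sorts by count.
    import re

    from mrjob.job import MRJob
    from mrjob.step import MRStep

    WORD_RE = re.compile(r"[\w']+")

    class MRWordFrequencySorted(MRJob):
        def steps(self):
            return [
                MRStep(mapper=self.mapper_get_words,
                       reducer=self.reducer_count_words),
                MRStep(reducer=self.reducer_sort_by_count),
            ]

        def mapper_get_words(self, _, line):
            for word in WORD_RE.findall(line):
                yield word.lower(), 1

        def reducer_count_words(self, word, counts):
            # Re-key by a zero-padded count so the second stage
            # receives the words in sorted order of frequency.
            yield '%05d' % sum(counts), word

        def reducer_sort_by_count(self, count, words):
            for word in words:
                yield word, int(count)

    if __name__ == '__main__':
        MRWordFrequencySorted.run()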

Your first homework assignment! Propose what the mapper and reducer should do for a job that computes the total amount spent by each customer in a fake e-commerce data set.

Activity: Design a Mapper and Reducer for Total Spent by Customer
02:54

We'll review your approach for the e-commerce problem, and set you loose with the tools you need to go write your first MapReduce job on your own.

Activity: Write Code for Total Spent by Customer
03:57

Compare your code to mine for analyzing our e-commerce data. Now, build upon your code to sort the final results to find the biggest spender.

Compare Your Code to Mine. Activity: Sort Results by Amount Spent
05:38

We'll review your homework to sort the results of the e-commerce analysis, and compare your code to mine.

Compare Your Code to Mine for Sorted Results
03:49

Learn how combiners can help reduce network throughput in MapReduce jobs, and run a simple example of using a combiner function.

Combiners
07:26
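
A combiner is a "mini-reducer" that runs on each mapper node before results cross the network; in MRJob you just add a combiner method. A hedged sketch, not the lecture's exact code:

    # Sketch: word count with a combiner to cut down network traffic.
    from mrjob.job import MRJob

    class MRWordCountWithCombiner(MRJob):
        def mapper(self, _, line):
            for word in line.split():
                yield word.lower(), 1

        def combiner(self, word, counts):
            # Runs locally on each mapper's output, pre-summing counts
            # before they are shuffled across the network.
            yield word, sum(counts)

        def reducer(self, word, counts):
            yield word, sum(counts)

    if __name__ == '__main__':
        MRWordCountWithCombiner.run()

This works because addition is associative and commutative; for something like an average, you'd need to restructure what the mapper emits before a combiner is safe to use.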
Advanced MapReduce Examples
12 Lectures 01:21:27

Using the MovieLens data set, we'll write and run a MapReduce job to find the most-rated movie.

Preview 07:23

Extend the previous example to send movie-ID-to-movie-name lookup data along with our MapReduce tasks, so we can display results in a human-readable format.

Including Ancillary Lookup Data in the Example
08:00
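
One way to do this with mrjob is to declare the lookup file as a file argument, so it gets shipped to every node running the job, and then load it once in reducer_init. A hedged sketch assuming a recent mrjob release (configure_args/add_file_arg) and MovieLens' pipe-separated u.item layout of movieID|movieName|...; not necessarily the course's exact code:

    # Sketch: shipping a movie-ID -> movie-name lookup file with the job,
    # so the most-rated movie can be reported by name.
    # Assumes a recent mrjob release and a u.item-style lookup file.
    from mrjob.job import MRJob

    class MRMostRatedMovieByName(MRJob):
        def configure_args(self):
            super().configure_args()
            # The file passed here is copied to every node running the job.
            self.add_file_arg('--items', help='Path to the movie names file')

        def reducer_init(self):
            # Build the ID -> name lookup once per reducer.
            self.movie_names = {}
            with open(self.options.items, encoding='latin-1') as f:
                for line in f:
                    fields = line.split('|')
                    self.movie_names[fields[0]] = fields[1]

        def mapper(self, _, line):
            user_id, movie_id, rating, timestamp = line.split('\t')
            yield movie_id, 1

        def reducer(self, movie_id, counts):
            yield self.movie_names.get(movie_id, movie_id), sum(counts)

    if __name__ == '__main__':
        MRMostRatedMovieByName.run()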

We'll introduce the Marvel social graph data set, and cover how we'll find the most "popular" superhero!

Example: Most Popular Superhero, Part 1
04:22

Actually implement and run the code to identify the most popular superhero. I bet it's not who you think it is!

Example: Most Popular Superhero, Part 2
06:31
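
One way to structure this job is as two chained steps: sum each hero's co-appearances, then funnel everything to a single reducer that picks the largest count. A sketch (the space-separated heroID-followed-by-connections line format is an assumption about the Marvel data set, not necessarily the course's exact code):

    # Sketch: find the hero with the most co-appearances.
    # Assumes space-separated lines: a hero ID followed by the IDs of
    # heroes it appears with; one hero may span several lines.
    from mrjob.job import MRJob
    from mrjob.step import MRStep

    class MRMostPopularHero(MRJob):
        def steps(self):
            return [
                MRStep(mapper=self.mapper_count_friends,
                       reducer=self.reducer_sum_friends),
                MRStep(reducer=self.reducer_find_max),
            ]

        def mapper_count_friends(self, _, line):
            fields = line.split()
            yield fields[0], len(fields) - 1

        def reducer_sum_friends(self, hero_id, friend_counts):
            # Re-key everything to a single key so one reducer sees it all.
            yield None, (sum(friend_counts), hero_id)

        def reducer_find_max(self, _, count_hero_pairs):
            count, hero_id = max(count_hero_pairs)
            yield hero_id, count

    if __name__ == '__main__':
        MRMostPopularHero.run()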

In a more advanced example, we'll describe how to use MapReduce to find degrees of separation between superheroes in a social graph. We'll use a breadth-first-search algorithm in MapReduce to find the answers we want.

Preview 12:27

First we walk through transforming the Marvel data set into a format usable for the BFS algorithm.

Degrees of Separation: Preprocessing the Data
05:14

Now we'll cover the code needed to iteratively run breadth-first search using MapReduce, and use Hadoop counters to flag our results.

Degrees of Separation: Code Walkthrough
06:34
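
To make the idea concrete, here's a hedged sketch of one BFS iteration expressed as a MapReduce job. The node-line format, the color scheme, and the counter names are illustrative choices rather than the course's exact representation, and handling of mrjob's input/output protocols between iterations is left out:

    # Sketch of one BFS iteration as a MapReduce job. Node lines look like:
    #   heroID|connection1,connection2,...|distance|color
    # where color is WHITE (unvisited), GRAY (frontier), or BLACK (done).
    # The job is re-run until a Hadoop counter signals the target was reached.
    from mrjob.job import MRJob

    TARGET_HERO = '100'  # hypothetical target hero ID

    class MRBFSIteration(MRJob):
        def mapper(self, _, line):
            hero_id, connections, distance, color = line.split('|')
            distance = int(distance)

            if color == 'GRAY':
                # Expand the frontier: every neighbor is one step further away.
                for neighbor in connections.split(','):
                    if neighbor == TARGET_HERO:
                        self.increment_counter('Degrees of Separation',
                                               'Target found', 1)
                    yield neighbor, '|%d|GRAY' % (distance + 1)
                color = 'BLACK'  # this node has now been processed

            # Always pass the node itself through so its edges aren't lost.
            yield hero_id, '%s|%d|%s' % (connections, distance, color)

        def reducer(self, hero_id, values):
            # Merge duplicates: keep the full edge list, the minimum
            # distance, and the "darkest" color seen for this hero.
            edges, best_distance, best_color = '', 9999, 'WHITE'
            darkness = {'WHITE': 0, 'GRAY': 1, 'BLACK': 2}
            for value in values:
                connections, distance, color = value.split('|')
                if connections:
                    edges = connections
                if int(distance) < best_distance:
                    best_distance = int(distance)
                if darkness[color] > darkness[best_color]:
                    best_color = color
            yield hero_id, '%s|%d|%s' % (edges, best_distance, best_color)

    if __name__ == '__main__':
        MRBFSIteration.run()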

Actually run the code that allows us to find the degrees of separation between any two superheroes, and analyze the results.

Degrees of Separation: Running and Analyzing the Results
05:41

In another advanced example, we'll cover item-based collaborative filtering and how it can be used to identify movies similar to each other based on ratings data.

Preview 07:24

We'll walk through how a creative multi-step MapReduce job can compute similar movies with a surprisingly small amount of code.

Similar Movies: Code Walkthrough
07:16
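
At the heart of this job is a similarity score between two movies' rating vectors; cosine similarity is one common choice. A sketch of just the math, with made-up numbers, and not necessarily the exact metric the course uses:

    # Sketch: cosine similarity between two movies' co-rating vectors.
    # Each input is a list of ratings from users who rated both movies.
    from math import sqrt

    def cosine_similarity(ratings_a, ratings_b):
        """Return (similarity, number_of_co_ratings) for two rating vectors."""
        sum_aa = sum(a * a for a in ratings_a)
        sum_bb = sum(b * b for b in ratings_b)
        sum_ab = sum(a * b for a, b in zip(ratings_a, ratings_b))
        denominator = sqrt(sum_aa) * sqrt(sum_bb)
        similarity = sum_ab / denominator if denominator else 0.0
        return similarity, len(ratings_a)

    # Example: two movies rated by the same three users.
    print(cosine_similarity([5.0, 4.0, 1.0], [4.0, 5.0, 2.0]))

In the MapReduce version, earlier stages group ratings by user and emit every pair of movies each user rated, so that a reducer can apply a score like this to every movie pair.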

We'll run our code on the 100K MovieLens data set, and analyze the results.

Similar Movies: Running and Analyzing the Results
06:37

Your homework is to modify our program to produce better results, and I'll give you a few ideas on things you might try.

Learning Activity: Improving our Movie Similarities MapReduce Job
03:58
Using Hadoop and Elastic MapReduce
8 Lectures 37:07

We'll cover what Hadoop is, and how it enables running MapReduce jobs across a cluster of computers.

Fundamental Concepts of Hadoop
05:59

Learn how HDFS distributes large data sets across a cluster in a reliable manner.

The Hadoop Distributed File System (HDFS)
03:09

Learn how YARN manages resources on a Hadoop cluster running MapReduce V2.

Apache YARN
04:20

Learn how Hadoop can run mappers and reducers written in any programming language, through Hadoop streaming.

Hadoop Streaming: How Hadoop Runs your Python Code
03:37
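
Hadoop Streaming simply pipes input splits into your program on stdin and reads key/value pairs back from stdout, which is what lets any language participate. A minimal standalone streaming mapper might look like this (the hadoop invocation in the comment is illustrative only, since jar paths and file names vary by distribution):

    #!/usr/bin/env python
    # Sketch: a standalone Hadoop Streaming mapper. Hadoop pipes each input
    # split to this script on stdin and collects tab-separated key/value
    # pairs from stdout; a matching reducer script reads the sorted pairs
    # the same way. Illustrative invocation (paths and names vary):
    #   hadoop jar hadoop-streaming.jar \
    #       -files mapper.py,reducer.py \
    #       -mapper mapper.py -reducer reducer.py \
    #       -input /input/book.txt -output /output/wordcount
    import sys

    for line in sys.stdin:
        for word in line.split():
            # Emit "word <TAB> 1"; Hadoop groups these by key for the reducer.
            print('%s\t%d' % (word.lower(), 1))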

Set up an Amazon Elastic MapReduce account, so you can run larger examples in this course across real compute clusters at very low cost.

Setting Up Your Amazon Elastic MapReduce Account
06:49

Tie your Amazon Elastic MapReduce account to your Python and MRJob development environment.

Linking Your EMR Account with MRJob
03:40

Run the movie recommendation program in the cloud!

Exercise: Run Movie Recommendations on Elastic MapReduce
04:34

Analyze the results - they should be what you expect, but why did it take so long?

Analyze the Results of Your EMR Job
04:59
Advanced Hadoop and EMR
7 Lectures 43:23

Some basic concepts on distributed computing, and the overhead associated with it.

Distributed Computing Fundamentals
04:33

Learn how to run your movie similarity program on multiple machines in EMR, and actually run it.

Activity: Running Movie Similarities on Four Machines
04:27

Analyze the results of running movie similarities across four machines. It's faster - but there is a downside!

Analyzing the Results of the 4-Machine Job
05:44

Learn how to troubleshoot EMR / MRJob programs that don't complete successfully.

Troubleshooting Hadoop Jobs with EMR and MRJob, Part 1
04:01

Hands-on example of troubleshooting a failed job after the fact.

Troubleshooting Hadoop Jobs, Part 2
10:28

Finally, some truly big data: compute similar movies using one million movie ratings across a cluster of 20 computers.

Preview 06:08

We'll analyze the results of our one-million-rating analysis, and use a new script to extract the data we want.

Analyzing One Million Movie Ratings Across 16 Machines, Part 2
08:02
Other Hadoop Technologies
6 Lectures 34:36

A very brief overview of Apache Hive and its SQL-like query language HiveQL, and a simple example.

Introducing Apache Hive
06:16

A very brief overview of Apache Pig, and a simple example.

Introducing Apache Pig
03:25

An overview of Spark, how it works, and why it might be a better choice than MapReduce for some tasks.

Apache Spark: Concepts
09:37

We'll walk through running a real Spark program to analyze gigabytes' worth of airline flight data to identify the worst airports in America.

Spark Example: Part 1
11:15

We'll analyze the results of our Spark program, and find out which airport has the most flight delays.

Spark Example: Part 2
03:22
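
For contrast with MRJob, a Spark job expresses the same kind of analysis as transformations on an RDD. Here's a hedged PySpark sketch in the spirit of the airline example; the column positions and the file name flight_data.csv are assumptions, not the course's actual data layout:

    # Sketch: count delayed flights per origin airport with PySpark.
    # Assumes comma-separated lines where field 3 is the origin airport code
    # and field 7 is the arrival delay in minutes; names are hypothetical.
    from pyspark import SparkConf, SparkContext

    conf = SparkConf().setAppName('WorstAirports')
    sc = SparkContext(conf=conf)

    def parse_line(line):
        fields = line.split(',')
        airport = fields[3]
        delayed = 1 if float(fields[7]) > 15.0 else 0
        return airport, delayed

    delays = (sc.textFile('flight_data.csv')
                .map(parse_line)
                .reduceByKey(lambda a, b: a + b))

    # Show the ten airports with the most delayed flights.
    for airport, delayed_count in delays.sortBy(lambda kv: kv[1],
                                                ascending=False).take(10):
        print(airport, delayed_count)

    sc.stop()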

Thank you for taking my course! Please remember to leave a rating.

Congratulations!
00:41
Where to Go from Here
1 Lecture 01:06

Let's stay in touch! Head to my website for discounts on my other courses, and to follow me on social media.

Bonus Lecture: Discounts on my other courses!
01:06
About the Instructor
Sundog Education by Frank Kane
4.5 Average rating
12,979 Reviews
64,453 Students
8 Courses
Training the World in Big Data and Machine Learning

Sundog Education's mission is to make highly valuable career skills in big data, data science, and machine learning accessible to everyone in the world. Our consortium of expert instructors shares its knowledge in these emerging fields with you, at prices anyone can afford.

Sundog Education is led by Frank Kane and owned by Frank's company, Sundog Software LLC. Frank spent 9 years at Amazon and IMDb, developing and managing the technology that automatically delivers product and movie recommendations to hundreds of millions of customers, all the time. Frank holds 17 issued patents in the fields of distributed computing, data mining, and machine learning. In 2012, Frank left to start his own successful company, Sundog Software, which focuses on virtual reality environment technology, and teaching others about big data analysis.

Frank Kane
4.5 Average rating
12,668 Reviews
61,587 Students
6 Courses
Founder, Sundog Education

Frank spent 9 years at Amazon and IMDb, developing and managing the technology that automatically delivers product and movie recommendations to hundreds of millions of customers, all the time. Frank holds 17 issued patents in the fields of distributed computing, data mining, and machine learning. In 2012, Frank left to start his own successful company, Sundog Software, which focuses on virtual reality environment technology, and teaching others about big data analysis.