"Big data" analysis is a hot and highly valuable skill – and this course will quickly teach you two technologies fundamental to big data: MapReduce and Hadoop. Ever wonder how Google manages to analyze the entire Internet on a continual basis? You'll learn those same techniques, using your own Windows system right at home.
In this course, you'll learn and master the art of framing data analysis problems as MapReduce problems through over 10 hands-on examples, then scale them up to run on cloud computing services. You'll be learning from an ex-engineer and senior manager from Amazon and IMDb.
By the end of this course, you'll be running code that analyzes gigabytes worth of information – in the cloud – in a matter of minutes.
We'll have some fun along the way. You'll get warmed up with some simple examples of using MapReduce to analyze movie ratings data and text in a book. Once you've got the basics under your belt, we'll move on to some more complex and interesting tasks. We'll use a million movie ratings to find movies that are similar to each other, and you might even discover some new movies you like in the process! We'll analyze a social graph of superheroes and learn who the most "popular" superhero is – and develop a system to find "degrees of separation" between superheroes. Are all Marvel superheroes within a few degrees of being connected to The Incredible Hulk? You'll find the answer.
This course is very hands-on; you'll spend most of your time following along with the instructor as we write, analyze, and run real code together – both on your own system, and in the cloud using Amazon's Elastic MapReduce service. Over 5 hours of video content is included, with over 10 real examples of increasing complexity you can build, run and study yourself. Move through them at your own pace, on your own schedule. The course wraps up with an overview of other Hadoop-based technologies, including Hive, Pig, and the very hot Spark framework – complete with a working example in Spark.
Don't take my word for it - check out some of our unsolicited reviews from real students:
"I have gone through many courses on map reduce; this is undoubtedly the best, way at the top."
"This is one of the best courses I have ever seen since 4 years passed I am using Udemy for courses."
"The best hands on course on MapReduce and Python. I really like the run it yourself approach in this course. Everything is well organized, and the lecturer is top notch."
I'll walk you through installing Enthought Canopy, the mrjob Python package, and some sample movie ratings data from MovieLens - and then we'll run a simple MapReduce job on your desktop!
Understand the basic concepts of MapReduce - what a mapper does, what a reducer does, and what happens in between.
We'll analyze the source of your ratings histogram job, and understand how it works.
Understand why MapReduce is a powerful tool for scaling big data analysis problems across compute clusters.
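The mapper/shuffle/reducer flow from these lectures can be sketched in plain Python. This is a toy single-machine simulation, not the course's actual mrjob code; the tab-separated sample lines mimic the MovieLens u.data layout, and the sample values are invented:

```python
from collections import defaultdict

def map_reduce(records, mapper, reducer):
    """Toy single-machine MapReduce: map every record, group (shuffle)
    the emitted pairs by key, then reduce each key's values."""
    groups = defaultdict(list)
    for record in records:
        for key, value in mapper(record):   # a mapper may emit many pairs
            groups[key].append(value)
    return {key: reducer(key, values) for key, values in groups.items()}

# Ratings histogram: the mapper emits (rating, 1); the reducer sums the 1s.
def rating_mapper(line):
    user_id, movie_id, rating, timestamp = line.split('\t')
    yield rating, 1

def count_reducer(rating, counts):
    return sum(counts)

lines = ["196\t242\t3\t881250949",
         "186\t302\t3\t891717742",
         "22\t377\t1\t878887116"]
print(map_reduce(lines, rating_mapper, count_reducer))   # {'3': 2, '1': 1}
```

On a real cluster, the "shuffle" step moves each key's values to whichever machine runs that key's reducer – which is exactly what lets the same two functions scale out.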
In our next example, we'll look at some fake social data and compute the average number of friends by age.
Actually run the friends by age example on your machine, and analyze the results.
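In the same spirit, here is a single-machine sketch of the friends-by-age job. The records are made up for illustration (the lecture uses a generated fake-social-data file):

```python
from collections import defaultdict

# Fake social data: each record is (person_id, name, age, num_friends).
records = [(0, "Will", 33, 385), (1, "Jean-Luc", 33, 2),
           (2, "Hugh", 55, 221), (3, "Deanna", 40, 465)]

def mapper(record):
    _, _, age, num_friends = record
    yield age, num_friends                  # key on age, value is friend count

def reducer(age, friend_counts):
    return sum(friend_counts) / len(friend_counts)   # average per age

# Simulated shuffle: group mapper output by key, then reduce each group.
grouped = defaultdict(list)
for record in records:
    for key, value in mapper(record):
        grouped[key].append(value)

averages = {age: reducer(age, counts) for age, counts in grouped.items()}
print(averages)   # {33: 193.5, 55: 221.0, 40: 465.0}
```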
In another example, we'll use real weather data from the year 1800 and find the minimum temperature at each weather station for the year.
Now, we'll modify that same example to find the maximum temperature for the year, and run it too.
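The minimum/maximum-temperature pattern boils down to a filtering mapper and a `min()` (or `max()`) reducer. The lines below are invented but loosely follow the historical-weather CSV layout the course describes (station, date, observation type, temperature in tenths of a degree Celsius):

```python
from collections import defaultdict

lines = ["ITE00100554,18000101,TMIN,-148",
         "ITE00100554,18000101,TMAX,-75",
         "EZE00100082,18000101,TMIN,-135",
         "EZE00100082,18000102,TMIN,-130"]

def min_temp_mapper(line):
    station, date, obs_type, temp = line.split(',')
    if obs_type == "TMIN":                 # keep only minimum-temperature rows
        yield station, int(temp)

def min_reducer(station, temps):
    return min(temps)                      # swap in max() for the TMAX version

grouped = defaultdict(list)
for line in lines:
    for key, value in min_temp_mapper(line):
        grouped[key].append(value)

minimums = {s: min_reducer(s, t) for s, t in grouped.items()}
print(minimums)   # {'ITE00100554': -148, 'EZE00100082': -135}
```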
Another hands-on example: we'll find how often each word is used in a real book's text.
We build on the previous example to do a better job of identifying words, using regular expressions in Python.
We'll build further on the same example, this time using MapReduce to sort the results the way we want them, using a multi-stage MapReduce job.
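Condensed into plain Python, the three word-count variations look roughly like this. The sample sentence is made up (the course runs against a full book's text), and the two "stages" here stand in for the chained MapReduce jobs:

```python
import re
from collections import defaultdict

text = "The quick brown fox jumps over the lazy dog. The dog barks."

# Stage 1: count words. The regex pulls out runs of word characters, so
# punctuation and capitalization don't create spurious "words".
WORD_RE = re.compile(r"[\w']+")
counts = defaultdict(int)
for word in WORD_RE.findall(text.lower()):
    counts[word] += 1

# Stage 2: a second job whose mapper flips (word, count) into
# (count, word), so sorting by key orders the results by frequency.
by_frequency = sorted(((count, word) for word, count in counts.items()),
                      reverse=True)
print(by_frequency[0])   # (3, 'the')
```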
Your first homework assignment! Propose what the mapper and reducer should do for a job that computes the total amount spent per customer in a fake e-commerce data set.
We'll review your approach for the e-commerce problem, and set you loose with the tools you need to go write your first MapReduce job on your own.
Compare your code to mine for analyzing our e-commerce data. Now, build upon your code to sort the final results to find the biggest spender.
We'll review your homework to sort the results of the e-commerce analysis, and compare your code to mine.
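One plausible solution sketch, in plain Python rather than mrjob. The order lines are invented, and the customer-ID,item-ID,amount CSV layout is an assumption for illustration:

```python
from collections import defaultdict

# Invented order lines, assuming a customer_id,item_id,amount layout.
orders = ["44,8602,37.19", "35,5368,65.89", "44,3391,40.64"]

# Mapper: emit (customer_id, amount); reducer: sum per customer.
totals = defaultdict(float)
for line in orders:
    customer, _, amount = line.split(',')
    totals[customer] += float(amount)

# Second stage: flip each (customer, total) into (total, customer), so a
# sort by key surfaces the biggest spender.
biggest = max((total, customer) for customer, total in totals.items())
print(biggest)   # biggest spender is customer '44' at ~77.83
```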
Learn how combiners can help reduce network traffic in MapReduce jobs, and run a simple example of using a combiner function.
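The intuition behind a combiner fits in a few lines. This hypothetical sketch shows the local pre-aggregation step on its own, outside any framework:

```python
from collections import defaultdict

# Word-count output from one mapper task: many repeated (word, 1) pairs.
mapper_output = [("the", 1), ("the", 1), ("dog", 1), ("the", 1)]

def combiner(pairs):
    """Pre-sum counts on the mapper's machine, so one partial sum per word
    crosses the network instead of a long stream of 1s."""
    partial = defaultdict(int)
    for word, count in pairs:
        partial[word] += count
    return list(partial.items())

combined = combiner(mapper_output)
print(combined)   # [('the', 3), ('dog', 1)] -- 2 pairs shipped instead of 4
```

Because summing is associative, the reducer produces identical results whether or not the combiner ran – which is the property that makes combiners safe here.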
Using the MovieLens data set, we'll write and run a MapReduce job to find the most-rated movie.
Extend the previous example to send movie-ID-to-movie-name lookup data along with our MapReduce tasks, so we can display results in a human-readable format.
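A condensed sketch of both lectures, with invented sample values. In mrjob the lookup table ships to each task as an auxiliary file; here it's simply inlined:

```python
from collections import defaultdict

# u.data-style rating lines: user_id<TAB>movie_id<TAB>rating<TAB>timestamp.
rating_lines = ["196\t242\t3\t881250949", "186\t242\t4\t891717742",
                "22\t377\t1\t878887116"]
# Tiny movie-ID-to-name lookup table for the human-readable output step.
movie_names = {"242": "Kolya (1996)", "377": "Heavyweights (1994)"}

counts = defaultdict(int)
for line in rating_lines:
    _, movie_id, _, _ = line.split('\t')
    counts[movie_id] += 1          # mapper emits (movie_id, 1); reducer sums

most_rated = max(counts, key=counts.get)
print(movie_names[most_rated], counts[most_rated])   # Kolya (1996) 2
```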
We'll introduce the Marvel social graph data set, and cover how we'll find the most "popular" superhero!
Actually implement and run the code to identify the most popular superhero. I bet it's not who you think it is!
In a more advanced example, we'll describe how to use MapReduce to find degrees of separation between superheroes in a social graph. We'll use a breadth-first-search algorithm in MapReduce to find the answers we want.
First we walk through transforming the Marvel data set into a format usable for the BFS algorithm.
Now we'll cover the code needed to iteratively run breadth-first search using MapReduce, and use Hadoop counters to flag our results.
Actually run the code that allows us to find the degrees of separation between any two superheroes, and analyze the results.
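The traversal underneath these three lectures is ordinary breadth-first search. The course runs it iteratively as a chain of MapReduce jobs – one job per BFS frontier, with a Hadoop counter flagging when the target is reached – but the same algorithm on one machine, over a toy made-up graph, looks like this:

```python
from collections import deque

# Toy co-appearance graph: hero_id -> list of connected hero_ids.
graph = {1: [2, 3], 2: [1, 4], 3: [1], 4: [2, 5], 5: [4]}

def degrees_of_separation(start, target):
    """Plain breadth-first search: expand one frontier at a time, recording
    each hero's distance from the start the first time it is reached."""
    frontier, distance, visited = deque([start]), {start: 0}, {start}
    while frontier:
        hero = frontier.popleft()
        if hero == target:
            return distance[hero]
        for neighbor in graph[hero]:
            if neighbor not in visited:
                visited.add(neighbor)
                distance[neighbor] = distance[hero] + 1
                frontier.append(neighbor)
    return None     # no connection found

print(degrees_of_separation(1, 5))   # 3 hops: 1 -> 2 -> 4 -> 5
```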
In another advanced example, we'll cover item-based collaborative filtering and how it can be used to identify movies similar to each other based on ratings data.
We'll walk through how a creative multi-step MapReduce job can compute similar movies with a surprisingly small amount of code.
We'll run our code on the 100K MovieLens data set, and analyze the results.
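The heart of the item-based approach is pairing up movies rated by the same user, then scoring each pair – here with cosine similarity, on a tiny invented ratings table, as a single-machine sketch of the multi-step job:

```python
from math import sqrt
from itertools import combinations
from collections import defaultdict

# Tiny invented ratings table: user -> {movie: rating}.
ratings = {"alice": {"A": 5, "B": 4},
           "bob":   {"A": 4, "B": 5, "C": 1},
           "carol": {"B": 2, "C": 5}}

# Step 1 (keyed by user): emit every pair of movies one user rated together.
pairs = defaultdict(list)
for user_ratings in ratings.values():
    for (m1, r1), (m2, r2) in combinations(sorted(user_ratings.items()), 2):
        pairs[(m1, m2)].append((r1, r2))

# Step 2 (keyed by movie pair): cosine similarity of the two rating vectors.
def cosine(vectors):
    num = sum(x * y for x, y in vectors)
    den = (sqrt(sum(x * x for x, _ in vectors))
           * sqrt(sum(y * y for _, y in vectors)))
    return num / den if den else 0.0

similarities = {pair: cosine(vecs) for pair, vecs in pairs.items()}
# Note: a pair rated by only one user scores a perfect 1.0 -- one reason
# filtering by co-rating count and score quality improves the results.
print(sorted(similarities.items(), key=lambda kv: -kv[1]))
```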
Your homework is to modify our program to produce better results, and I'll give you a few ideas on things you might try.
We'll cover what Hadoop is, and how it enables running MapReduce jobs across a cluster of computers.
Learn how HDFS distributes large data sets across a cluster in a reliable manner.
Learn how YARN manages resources on a Hadoop cluster running MapReduce V2.
Learn how Hadoop can run mappers and reducers written in any programming language, through Hadoop streaming.
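Hadoop Streaming runs any executable as a mapper or reducer: it feeds input lines on stdin and reads tab-separated key/value lines from stdout. A minimal word-count pair in that style, modeled here as functions over line streams (a hypothetical sketch, not the course's actual scripts):

```python
def streaming_mapper(lines):
    for line in lines:
        for word in line.split():
            yield f"{word}\t1"             # emit "key<TAB>value" per word

def streaming_reducer(sorted_lines):
    # Hadoop sorts mapper output by key, so equal words arrive adjacent.
    current, total = None, 0
    for line in sorted_lines:
        word, count = line.split('\t')
        if word != current:
            if current is not None:
                yield f"{current}\t{total}"
            current, total = word, 0
        total += int(count)
    if current is not None:
        yield f"{current}\t{total}"

mapped = sorted(streaming_mapper(["the dog", "the cat"]))   # the shuffle/sort
print(list(streaming_reducer(mapped)))   # ['cat\t1', 'dog\t1', 'the\t2']
```

Because the contract is just "lines in, lines out", the same pattern works for scripts written in any language.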
Set up an Amazon Elastic MapReduce account, so you can run larger examples in this course across real compute clusters at very low cost.
Tie your Amazon Elastic MapReduce account to your Python and mrjob development environment.
Run the movie recommendation program in the cloud!
Analyze the results - they should be what you expect, but why did it take so long?
Some basic concepts on distributed computing, and the overhead associated with it.
Learn how to run your movie similarity program on multiple machines in EMR, and actually run it.
Analyze the results of running movie similarities across four machines. It's faster - but there is a downside!
Learn how to troubleshoot EMR/mrjob programs that don't complete successfully.
Hands-on example of troubleshooting a failed job after the fact.
Finally, some truly big data: compute similar movies using one million movie ratings across a cluster of 20 computers.
We'll analyze the results of our one-million-rating analysis, and use a new script to extract the data we want.
A very brief overview of Apache Hive and HiveQL, with a simple example.
A very brief overview of Apache Pig, and a simple example.
An overview of Spark, how it works, and why it might be a better choice than MapReduce for some tasks.
We'll walk through running a real Spark program to analyze gigabytes' worth of airline flight data to identify the worst airports in America.
We'll analyze the results of our Spark program, and find out which airport has the most flight delays.
Thank you for taking my course! Please remember to leave a rating.
Frank Kane spent 9 years at Amazon and IMDb, developing and managing the technology that automatically delivers product and movie recommendations to hundreds of millions of customers around the clock. Frank holds 17 issued patents in the fields of distributed computing, data mining, and machine learning. In 2012, Frank left to start his own successful company, Sundog Software, which focuses on virtual reality environment technology and on teaching others about big data analysis.