Taming Big Data with Apache Spark and Python - Hands On!

Dive right in with 15+ hands-on examples of analyzing large data sets with Apache Spark, on your desktop or on Hadoop!
4.6 (1,458 ratings)
10,070 students enrolled
$19
$100
81% off
Take This Course
  • Lectures 47
  • Length 5 hours
  • Skill Level All Levels
  • Languages English, captions
  • Includes Lifetime access
    30 day money back guarantee!
    Available on iOS and Android
    Certificate of Completion

About This Course

Published 10/2015 · English · Closed captions available

Course Description

New! Updated for Spark 2.0.0

"Big data" analysis is a hot and highly valuable skill – and this course will teach you the hottest technology in big data: Apache Spark. Employers including Amazon, eBay, NASA JPL, and Yahoo all use Spark to quickly extract meaning from massive data sets across a fault-tolerant Hadoop cluster. You'll learn those same techniques, using your own Windows system right at home. It's easier than you might think.

In this course, you'll learn and master the art of framing data analysis problems as Spark problems through over 15 hands-on examples, and then scale them up to run on cloud computing services. You'll be learning from an ex-engineer and senior manager from Amazon and IMDb.

  • Learn the concepts of Spark's Resilient Distributed Datasets (RDDs)
  • Develop and run Spark jobs quickly using Python
  • Translate complex analysis problems into iterative or multi-stage Spark scripts
  • Scale up to larger data sets using Amazon's Elastic MapReduce service
  • Understand how Hadoop YARN distributes Spark across computing clusters
  • Learn about other Spark technologies, like Spark SQL, Spark Streaming, and GraphX

By the end of this course, you'll be running code that analyzes gigabytes worth of information – in the cloud – in a matter of minutes. 

This course uses the familiar Python programming language; if you'd rather use Scala to get the best performance out of Spark, see my "Apache Spark with Scala - Hands On with Big Data" course instead.

We'll have some fun along the way. You'll get warmed up with some simple examples of using Spark to analyze movie ratings data and text in a book. Once you've got the basics under your belt, we'll move to some more complex and interesting tasks. We'll use a million movie ratings to find movies that are similar to each other, and you might even discover some new movies to watch in the process! We'll analyze a social graph of superheroes, and learn who the most "popular" superhero is – and develop a system to find "degrees of separation" between superheroes. Are all Marvel superheroes within a few degrees of being connected to The Incredible Hulk? You'll find the answer.

This course is very hands-on; you'll spend most of your time following along with the instructor as we write, analyze, and run real code together – both on your own system, and in the cloud using Amazon's Elastic MapReduce service. 5 hours of video content is included, with over 15 real examples of increasing complexity you can build, run and study yourself. Move through them at your own pace, on your own schedule. The course wraps up with an overview of other Spark-based technologies, including Spark SQL, Spark Streaming, and GraphX.

Enjoy the course!

What are the requirements?

  • Access to a personal computer. This course uses Windows, but the sample code will work fine on Linux as well.
  • Some prior programming or scripting experience. Python experience will help a lot, but you can pick it up as we go.

What am I going to get from this course?

  • Frame big data analysis problems as Spark problems
  • Use Amazon's Elastic MapReduce service to run your job on a cluster with Hadoop YARN
  • Install and run Apache Spark on a desktop computer or on a cluster
  • Use Spark's Resilient Distributed Datasets (RDDs) to process and analyze large data sets across many CPUs
  • Implement iterative algorithms such as breadth-first-search using Spark
  • Use the MLLib machine learning library to answer common data mining questions
  • Understand how Spark SQL lets you work with structured data
  • Understand how Spark Streaming lets you process continuous streams of data in real time
  • Tune and troubleshoot large jobs running on a cluster
  • Share information between nodes on a Spark cluster using broadcast variables and accumulators
  • Understand how the GraphX library helps with network analysis problems

What is the target audience?

  • People with some software development background who want to learn the hottest technology in big data analysis will want to check this out. This course focuses on Spark from a software development standpoint; we introduce some machine learning and data mining concepts along the way, but that's not the focus. If you want to learn how to use Spark to carve up huge datasets and extract meaning from them, then this course is for you.
  • If you've never written a computer program or a script before, this course isn't for you - yet. I suggest starting with a Python course first, if programming is new to you.
  • If your software development job involves, or will involve, processing large amounts of data, you need to know about Spark.
  • If you're training for a new career in data science or big data, Spark is an important part of it.

What you get with this course?

Not for you? No problem.
30 day money back guarantee.

Forever yours.
Lifetime access.

Learn on the go.
Desktop, iOS and Android.

Get rewarded.
Certificate of completion.

Curriculum

Section 1: Getting Started with Spark
02:16

Meet your instructor, and we'll review what this course will cover and what you need to get started.

01:41

How to find the scripts and data associated with the lectures in this course.

12:52

We'll install Enthought Canopy, a JDK, and Apache Spark on your Windows system. When we're done, we'll run a simple little Spark script on your desktop to test it out!

03:35

Before we can analyze data with Spark, we need some data to analyze! Let's install the MovieLens dataset of movie ratings, which we'll use throughout the course.

04:52

We'll run a simple Spark script using Python, and analyze the 100,000 movie ratings you installed in the previous lecture. What is the breakdown of the rating scores in this data set? You'll find it's easy to find out!
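
For reference, a histogram like this can be sketched in just a few lines of PySpark. This is only a sketch, assuming the MovieLens u.data file (tab-separated userID, movieID, rating, and timestamp fields) sits in your working directory:

    from pyspark import SparkConf, SparkContext

    conf = SparkConf().setMaster("local").setAppName("RatingsHistogram")
    sc = SparkContext(conf=conf)

    lines = sc.textFile("u.data")                  # one rating per line
    ratings = lines.map(lambda x: x.split()[2])    # keep just the rating field
    result = ratings.countByValue()                # action: dict of rating -> count

    for rating, count in sorted(result.items()):
        print("%s %i" % (rating, count))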

Section 2: Spark Basics and Simple Examples
10:11

This high-level introduction will help you understand what Spark is for, who's using it, and why it's such a big deal.

12:17

Understand the core object of Spark: the Resilient Distributed Dataset (RDD), and how you can use Spark to transform and perform actions upon RDDs.
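
A tiny illustration of the difference, using made-up numbers rather than the course data: map() below is a transformation and runs nothing by itself, while collect() is an action that triggers the actual computation.

    from pyspark import SparkConf, SparkContext

    conf = SparkConf().setMaster("local").setAppName("RDDDemo")
    sc = SparkContext(conf=conf)

    nums = sc.parallelize([1, 2, 3, 4])      # build an RDD from a Python list
    squares = nums.map(lambda x: x * x)      # transformation: lazy, nothing runs yet
    print(squares.collect())                 # action: computes and returns [1, 4, 9, 16]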

13:33

We'll dissect our original ratings histogram Spark example, and understand exactly how every line of it works!

16:13

You'll learn how to use key/value pairs in RDDs, and the special operations you can perform on them. To make it real, we'll introduce a new example: computing the average number of friends by age using a fake social network data set.
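
A sketch of the key/value pattern, assuming a hypothetical fakefriends.csv whose lines look like id,name,age,numFriends:

    from pyspark import SparkConf, SparkContext

    conf = SparkConf().setMaster("local").setAppName("FriendsByAge")
    sc = SparkContext(conf=conf)

    def parse_line(line):
        fields = line.split(',')
        return (int(fields[2]), int(fields[3]))    # (age, numFriends)

    rdd = sc.textFile("fakefriends.csv").map(parse_line)
    # Pair each friend count with a 1, then sum both per age key.
    totals = rdd.mapValues(lambda x: (x, 1)) \
                .reduceByKey(lambda a, b: (a[0] + b[0], a[1] + b[1]))
    averages = totals.mapValues(lambda x: x[0] / x[1])
    print(averages.collect())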

05:39

We'll take another look at our "average number of friends by age" example script, actually run it, and examine the results.

08:10

Learn how the filter() operation works on RDDs, and apply it toward finding the minimum temperatures from a real-world weather data set.
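
The shape of that script, as a sketch: it assumes a weather file (here called 1800.csv) whose lines look like stationID,date,entryType,temperature.

    from pyspark import SparkConf, SparkContext

    conf = SparkConf().setMaster("local").setAppName("MinTemperatures")
    sc = SparkContext(conf=conf)

    def parse_line(line):
        fields = line.split(',')
        return (fields[0], fields[2], float(fields[3]))    # (station, type, temp)

    parsed = sc.textFile("1800.csv").map(parse_line)
    min_entries = parsed.filter(lambda x: x[1] == "TMIN")     # keep only minimum readings
    station_temps = min_entries.map(lambda x: (x[0], x[2]))   # (station, temp) pairs
    min_temps = station_temps.reduceByKey(lambda a, b: min(a, b))
    print(min_temps.collect())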

05:08

We'll look at the minimum temperatures by location example as a whole, and actually run it! Then, you've got an activity: modify this script to find the maximum temperatures instead. This lecture reinforces using filters and key/value RDDs.

03:21

Compare your results from writing a maximum-temperature Spark script to my own.

07:28

We'll do the standard "count the number of occurrences of each word in a book" exercise here, and review the differences between map() and flatMap() in the process.
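
The core of the exercise, sketched with a hypothetical book.txt: map() keeps one element per input line, while flatMap() flattens each line into one element per word.

    from pyspark import SparkConf, SparkContext

    conf = SparkConf().setMaster("local").setAppName("WordCount")
    sc = SparkContext(conf=conf)

    lines = sc.textFile("book.txt")
    line_lists = lines.map(lambda line: line.split())    # one list of words per line
    words = lines.flatMap(lambda line: line.split())     # one element per word
    word_counts = words.countByValue()                   # dict of word -> count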

04:44

You'll learn how to use regular expressions in Python, and use them to improve the results of our word count script.
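
A sketch of the idea, reusing the lines RDD from the previous sketch: a compiled regular expression splits on runs of non-word characters, so punctuation and capitalization no longer create separate "words".

    import re

    def normalize_words(text):
        # Split on any run of non-word characters, after lower-casing the line.
        return re.compile(r'\W+', re.UNICODE).split(text.lower())

    words = lines.flatMap(normalize_words)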

07:44

Finally, we'll learn how to implement countByValue() in a way that returns a new RDD, and sort that RDD to produce our final results for word frequency.
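
The same counting expressed as transformations, reusing the words RDD from the previous sketch, so the result stays an RDD we can sort before collecting:

    word_counts = words.map(lambda w: (w, 1)).reduceByKey(lambda a, b: a + b)
    sorted_counts = word_counts.map(lambda wc: (wc[1], wc[0])).sortByKey()
    for count, word in sorted_counts.collect():
        print(word, count)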

04:01

Write your first Spark script on your own! I'll give you the strategy and tips you need to be successful. You're given a fake e-commerce data set, and your task is to find the total amount spent, broken down by customer ID.

05:08

Compare your code to my solution for finding the total spent by customer - and take on a new challenge! Modify your script to sort your final results by the amount spent, and find the biggest spender.

03:18

Compare your solution to sorting the customers by total amount ordered to mine, and check your results.
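
One way the sorted solution might look; this is only a sketch, assuming a hypothetical customer-orders.csv whose lines look like customerID,itemID,amountSpent:

    from pyspark import SparkConf, SparkContext

    conf = SparkConf().setMaster("local").setAppName("TotalSpent")
    sc = SparkContext(conf=conf)

    def extract(line):
        fields = line.split(',')
        return (int(fields[0]), float(fields[2]))    # (customerID, amount)

    totals = sc.textFile("customer-orders.csv").map(extract) \
               .reduceByKey(lambda a, b: a + b)
    # Flip to (amount, customerID) so sortByKey() orders by amount spent.
    for amount, customer in totals.map(lambda x: (x[1], x[0])).sortByKey().collect():
        print(customer, round(amount, 2))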

Section 3: Advanced Examples of Spark Programs
05:52

We'll write and run a simple script to find the most-rated movie in the MovieLens data set, which we'll build upon later.

08:23

You'll learn how to use "broadcast variables" in Spark to efficiently distribute large objects to every node your Spark program may be running on, and apply this to looking up movie names in our "most popular movie" script.
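
A sketch of the pattern, assuming the MovieLens u.data ratings file and a u.item file that maps movie IDs to titles (pipe-separated): the broadcast ships the lookup dictionary to every executor once instead of with every task.

    from pyspark import SparkConf, SparkContext

    conf = SparkConf().setMaster("local").setAppName("PopularMovies")
    sc = SparkContext(conf=conf)

    def load_movie_names():
        names = {}
        with open("u.item", encoding="latin-1") as f:
            for line in f:
                fields = line.split('|')
                names[int(fields[0])] = fields[1]
        return names

    name_dict = sc.broadcast(load_movie_names())    # sent to every executor once

    movies = sc.textFile("u.data").map(lambda x: (int(x.split()[1]), 1))
    counts = movies.reduceByKey(lambda a, b: a + b)
    # Look up titles through the broadcast's .value on the executors.
    named = counts.map(lambda mc: (name_dict.value[mc[0]], mc[1]))
    print(named.sortBy(lambda x: x[1], ascending=False).take(10))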

04:29

We introduce the Marvel superhero social graph data set, and write a Spark job to find the superhero with the most co-occurrences with other heroes in comic books.
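
The heart of that job, as a sketch: it assumes a hypothetical Marvel-Graph.txt where each line starts with a hero ID followed by the IDs of every hero it appears with.

    from pyspark import SparkConf, SparkContext

    conf = SparkConf().setMaster("local").setAppName("MostPopularHero")
    sc = SparkContext(conf=conf)

    def count_cooccurrences(line):
        ids = line.split()
        return (int(ids[0]), len(ids) - 1)    # (heroID, connections on this line)

    totals = sc.textFile("Marvel-Graph.txt") \
               .map(count_cooccurrences) \
               .reduceByKey(lambda a, b: a + b)
    # Flip to (count, heroID) so max() finds the most-connected hero.
    print(totals.map(lambda x: (x[1], x[0])).max())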

06:00

Review the source code of our script to discover the most popular superhero, run it, and reveal the answer!

07:54

We'll introduce the Breadth-First Search (BFS) algorithm, and how we can use it to discover degrees of separation between superheroes.

06:44

We'll learn how to turn breadth-first search into a Spark problem, and craft our strategy for writing the code. Along the way, we'll cover Spark accumulators and how we can use them to signal our driver script when it's done.
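
A minimal sketch of the accumulator idea on its own, with a made-up list of node IDs and a hypothetical target_hero_id standing in for the BFS machinery:

    from pyspark import SparkConf, SparkContext

    conf = SparkConf().setMaster("local").setAppName("AccumulatorDemo")
    sc = SparkContext(conf=conf)

    target_hero_id = 14                  # hypothetical hero we're searching for
    hit_counter = sc.accumulator(0)      # shared counter; executors may only add to it

    def check(node_id):
        if node_id == target_hero_id:
            hit_counter.add(1)
        return node_id

    sc.parallelize([1, 5, 14, 22]).map(check).collect()    # an action forces evaluation
    if hit_counter.value > 0:            # only the driver reads .value
        print("Found the target hero!")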

09:14

We'll get our hands on the code to actually implement breadth-first search, and run it to discover the degrees of separation between any two superheroes!

10:12

Learn one technique for finding similar movies based on the MovieLens rating data, and how we can frame it as a Spark problem. We'll also introduce the importance of using cache() or persist() on RDDs that will have more than one action performed on them.
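
Why cache() matters, in a sketch with a cheap stand-in for the real similarity computation: the RDD below feeds two actions, and without the cache the whole lineage would be recomputed for the second one.

    from pyspark import SparkConf, SparkContext

    conf = SparkConf().setMaster("local[*]").setAppName("CacheDemo")
    sc = SparkContext(conf=conf)

    # Stand-in for the expensive movie-similarity computation.
    pairs = sc.parallelize(range(100000)).map(lambda x: (x % 100, x)).cache()

    print(pairs.count())    # first action: computes the RDD and caches it
    print(pairs.take(5))    # second action: reuses the cached partitions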

10:54

We'll review the code for finding similar movies in Spark with the MovieLens ratings data, run it on every available core of your desktop computer, and review the results.

02:58

Get your hands dirty! I'll give you some ideas on improving the quality of your similar movie results - go try some out, and mess around with our movie similarity code.

Section 4: Running Spark on a Cluster
05:08

Learn how Amazon's Elastic MapReduce makes it easy to rent time on your very own Spark cluster, running on top of Hadoop YARN.

09:55

Learn how to set up your AWS account, create a key pair for logging into your Spark / Hadoop cluster, and set up PuTTY to connect to your instances from a Windows desktop.

04:21

We'll see what needs to be done to our Movie Similarities script in order to get it to run successfully with one million ratings, on a cluster, by introducing the partitionBy() function.
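
The gist of the change, as a sketch with a few made-up (userID, (movieID, rating)) pairs standing in for the real data:

    from pyspark import SparkConf, SparkContext

    conf = SparkConf().setMaster("local[*]").setAppName("PartitionByDemo")
    sc = SparkContext(conf=conf)

    ratings = sc.parallelize([(1, (50, 5.0)), (1, (172, 5.0)), (2, (50, 4.0))])

    # Spread the pair RDD across 100 partitions *before* the expensive self-join,
    # so the join work is distributed evenly across the cluster.
    partitioned = ratings.partitionBy(100)
    joined = partitioned.join(partitioned)
    print(joined.count())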

05:12

We'll study the code of our modified movie similarities script, and get it ready to run on a cluster.

11:27

We'll launch a Hadoop cluster with Spark using Amazon's Elastic MapReduce service, and kick off our script to produce similar movies to Star Wars given one million movie ratings.

03:28

We'll look at our results for similar movies from one million ratings, and discuss them.

03:43

We'll look at the Spark console UI and the information it offers to help understand how to diagnose problems and optimize your large Spark jobs.

05:47

I'll share some more troubleshooting tips when running Spark on a cluster, and talk about how to manage dependencies your code may have.

Section 5: SparkSQL, DataFrames, and DataSets
06:08

We'll cover the concepts of SparkSQL, DataFrames, and DataSets, and why they are so important in Spark 2.0 and above.

08:16

We'll dive into a real example, revisiting our fake social network data and analyzing it with DataFrames through a SparkSession object.
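
A sketch of the DataFrame workflow, assuming a hypothetical fakefriends.csv that includes a header row (id,name,age,friends):

    from pyspark.sql import SparkSession

    spark = SparkSession.builder.appName("SparkSQLDemo").getOrCreate()

    people = spark.read.option("header", "true").option("inferSchema", "true") \
                  .csv("fakefriends.csv")

    people.createOrReplaceTempView("people")    # expose the DataFrame to SQL
    spark.sql("SELECT * FROM people WHERE age BETWEEN 13 AND 19").show()

    people.groupBy("age").count().orderBy("age").show()    # same idea, DataFrame API
    spark.stop()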

05:52

Let's revisit our "most popular movie" example, and implement it using a DataFrame instead of RDDs. DataFrames are the preferred API in Spark 2.0+.
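
One way the DataFrame version might look (a sketch, assuming the tab-separated u.data ratings file):

    from pyspark.sql import SparkSession, functions as func
    from pyspark.sql.types import StructType, StructField, IntegerType, LongType

    spark = SparkSession.builder.appName("PopularMoviesDF").getOrCreate()

    schema = StructType([
        StructField("userID", IntegerType(), True),
        StructField("movieID", IntegerType(), True),
        StructField("rating", IntegerType(), True),
        StructField("timestamp", LongType(), True),
    ])

    ratings = spark.read.option("sep", "\t").schema(schema).csv("u.data")
    ratings.groupBy("movieID").count().orderBy(func.desc("count")).show(10)
    spark.stop()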

Section 6: Other Spark Technologies and Libraries
08:10

We'll briefly cover the capabilities of Spark's MLLib machine learning library, and how it can help you solve data mining, machine learning, and statistical problems you may encounter. We'll go into more depth on MLLib's Alternating Least Squares (ALS) recommendation engine, and how we can use it to produce movie recommendations with the MovieLens data set.
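
A sketch of the RDD-based ALS API on the MovieLens 100K ratings (u.data); the user ID and hyperparameters here are just illustrative values.

    from pyspark import SparkConf, SparkContext
    from pyspark.mllib.recommendation import ALS, Rating

    conf = SparkConf().setMaster("local[*]").setAppName("MovieRecs")
    sc = SparkContext(conf=conf)

    ratings = sc.textFile("u.data") \
                .map(lambda l: l.split()) \
                .map(lambda f: Rating(int(f[0]), int(f[1]), float(f[2])))

    model = ALS.train(ratings, rank=10, iterations=6)    # factorize the rating matrix
    user_id = 1                                          # illustrative user to recommend for
    for rec in model.recommendProducts(user_id, 10):
        print(rec.product, rec.rating)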

02:56

We'll run MLLib's Alternating Least Squares recommender system on the MovieLens 100K dataset.

04:53

We'll finish running Alternating Least Squares recommendations on the MovieLens ratings data set using MLLib, and evaluate the results.

07:31

DataFrames are the preferred API for MLLib in Spark 2.0+. Let's look at an example of using linear regression with DataFrames.
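
A sketch of spark.ml linear regression with DataFrames, using a few made-up (label, features) rows instead of a real data file:

    from pyspark.sql import SparkSession
    from pyspark.ml.linalg import Vectors
    from pyspark.ml.regression import LinearRegression

    spark = SparkSession.builder.appName("LinearRegressionDemo").getOrCreate()

    df = spark.createDataFrame([
        (1.0, Vectors.dense([2.0])),
        (2.0, Vectors.dense([4.1])),
        (3.0, Vectors.dense([5.9])),
    ], ["label", "features"])

    model = LinearRegression(maxIter=10, regParam=0.3).fit(df)
    model.transform(df).select("features", "label", "prediction").show()
    spark.stop()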

07:36

An overview of how Spark Streaming lets you process continuous streams of input data and aggregate it over time, and how GraphX lets you compute properties of networks.
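
As a taste of the DStream API, a sketch that counts words arriving on a socket in one-second batches (it assumes something like `nc -lk 9999` is feeding text to localhost:9999):

    from pyspark import SparkContext
    from pyspark.streaming import StreamingContext

    sc = SparkContext("local[2]", "StreamingWordCount")
    ssc = StreamingContext(sc, 1)                    # 1-second batch interval

    lines = ssc.socketTextStream("localhost", 9999)
    counts = lines.flatMap(lambda l: l.split()) \
                  .map(lambda w: (w, 1)) \
                  .reduceByKey(lambda a, b: a + b)
    counts.pprint()                                  # print each batch's counts

    ssc.start()
    ssc.awaitTermination()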

Section 7: You Made It! Where to Go from Here.
04:09

Some suggested resources for learning more about Apache Spark, and data mining and machine learning in general.

Bonus Lecture: Discounts on my other courses!
01:48

Instructor Biography

Frank Kane, Data Miner and Software Engineer

Frank Kane spent 9 years at Amazon and IMDb, developing and managing the technology that automatically delivers product and movie recommendations to hundreds of millions of customers, all the time. Frank holds 17 issued patents in the fields of distributed computing, data mining, and machine learning. In 2012, Frank left to start his own successful company, Sundog Software, which focuses on virtual reality environment technology and on teaching others about big data analysis.

Ready to start learning?
Take This Course