Taming Big Data with Apache Spark and Python - Hands On!
Dive right in with 15+ hands-on examples of analyzing large data sets with Apache Spark, on your desktop or on Hadoop!
Best Seller
4.5 (2,918 ratings)
18,859 students enrolled
Last updated 7/2017
English
30-Day Money-Back Guarantee
Includes:
  • 5 hours on-demand video
  • 1 Article
  • 2 Supplemental Resources
  • Full lifetime access
  • Access on mobile and TV
  • Assignments
  • Certificate of Completion
What Will I Learn?
  • Frame big data analysis problems as Spark problems
  • Use Amazon's Elastic MapReduce service to run your job on a cluster with Hadoop YARN
  • Install and run Apache Spark on a desktop computer or on a cluster
  • Use Spark's Resilient Distributed Datasets to process and analyze large data sets across many CPUs
  • Implement iterative algorithms such as breadth-first-search using Spark
  • Use the MLLib machine learning library to answer common data mining questions
  • Understand how Spark SQL lets you work with structured data
  • Understand how Spark Streaming lets you process continuous streams of data in real time
  • Tune and troubleshoot large jobs running on a cluster
  • Share information between nodes on a Spark cluster using broadcast variables and accumulators
  • Understand how the GraphX library helps with network analysis problems
Requirements
  • Access to a personal computer. This course uses Windows, but the sample code will work fine on Linux as well.
  • Some prior programming or scripting experience. Python experience will help a lot, but you can pick it up as we go.
Description

New! Updated for Spark 2.0.0

"Big data" analysis is a hot and highly valuable skill – and this course will teach you the hottest technology in big data: Apache Spark. Employers including Amazon, eBay, NASA JPL, and Yahoo all use Spark to quickly extract meaning from massive data sets across a fault-tolerant Hadoop cluster. You'll learn those same techniques, using your own Windows system right at home. It's easier than you might think.

In this course, you'll learn and master the art of framing data analysis problems as Spark problems through over 15 hands-on examples, and then scale them up to run on cloud computing services. You'll be learning from an ex-engineer and senior manager from Amazon and IMDb.

  • Learn the concepts of Spark's Resilient Distributed Datasets
  • Develop and run Spark jobs quickly using Python
  • Translate complex analysis problems into iterative or multi-stage Spark scripts
  • Scale up to larger data sets using Amazon's Elastic MapReduce service
  • Understand how Hadoop YARN distributes Spark across computing clusters
  • Learn about other Spark technologies, like Spark SQL, Spark Streaming, and GraphX

By the end of this course, you'll be running code that analyzes gigabytes' worth of information – in the cloud – in a matter of minutes. 

This course uses the familiar Python programming language; if you'd rather use Scala to get the best performance out of Spark, see my "Apache Spark with Scala - Hands On with Big Data" course instead.

We'll have some fun along the way. You'll get warmed up with some simple examples of using Spark to analyze movie ratings data and text in a book. Once you've got the basics under your belt, we'll move to some more complex and interesting tasks. We'll use a million movie ratings to find movies that are similar to each other, and you might even discover some new movies you'll like in the process! We'll analyze a social graph of superheroes, and learn who the most "popular" superhero is – and develop a system to find "degrees of separation" between superheroes. Are all Marvel superheroes within a few degrees of being connected to The Incredible Hulk? You'll find the answer.

This course is very hands-on; you'll spend most of your time following along with the instructor as we write, analyze, and run real code together – both on your own system, and in the cloud using Amazon's Elastic MapReduce service. 5 hours of video content is included, with over 15 real examples of increasing complexity you can build, run and study yourself. Move through them at your own pace, on your own schedule. The course wraps up with an overview of other Spark-based technologies, including Spark SQL, Spark Streaming, and GraphX.

Enjoy the course!

Who is the target audience?
  • People with some software development background who want to learn the hottest technology in big data analysis will want to check this out. This course focuses on Spark from a software development standpoint; we introduce some machine learning and data mining concepts along the way, but that's not the focus. If you want to learn how to use Spark to carve up huge datasets and extract meaning from them, then this course is for you.
  • If you've never written a computer program or a script before, this course isn't for you - yet. I suggest starting with a Python course first, if programming is new to you.
  • If your software development job involves, or will involve, processing large amounts of data, you need to know about Spark.
  • If you're training for a new career in data science or big data, Spark is an important part of it.
Curriculum For This Course
45 Lectures
05:11:58
Getting Started with Spark
6 Lectures 27:27

Meet your instructor, and we'll review what this course will cover and what you need to get started.

Preview 02:16

How to find the scripts and data associated with the lectures in this course.

How to Use This Course
01:41

While setting things up, do NOT install Java 9 - it's not compatible with Spark yet. Scroll down on the JDK download page, and install a JDK for Java 8 instead.

Warning about Java 9!
00:13

We'll install Enthought Canopy, a JDK, and Apache Spark on your Windows system. When we're done, we'll run a simple little Spark script on your desktop to test it out!

[Activity] Getting Set Up: Installing Python, a JDK, Spark, and its Dependencies
14:50

Before we can analyze data with Spark, we need some data to analyze! Let's install the MovieLens dataset of movie ratings, which we'll use throughout the course.

[Activity] Installing the MovieLens Movie Rating Dataset
03:35

We'll run a simple Spark script using Python, and analyze the 100,000 movie ratings you installed in the previous lecture. What is the breakdown of the rating scores in this data set? You'll find it's easy to find out!

Preview 04:52
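
To give you a feel for that first script, here's a minimal sketch along the same lines. It assumes the MovieLens 100K download from the previous lecture, where each line of ml-100k/u.data is a tab-separated user ID, movie ID, rating, and timestamp; treat the exact path and layout as illustrative.

    from pyspark import SparkConf, SparkContext

    # Run Spark locally on one core; "RatingsHistogram" names the job in the UI.
    conf = SparkConf().setMaster("local").setAppName("RatingsHistogram")
    sc = SparkContext(conf=conf)

    # Assumed layout of u.data: userID <tab> movieID <tab> rating <tab> timestamp
    lines = sc.textFile("ml-100k/u.data")
    ratings = lines.map(lambda x: x.split()[2])  # keep just the rating

    # countByValue() is an action: it returns a plain dict of rating -> count.
    result = ratings.countByValue()

    for rating, count in sorted(result.items()):
        print("%s: %i" % (rating, count))
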
Spark Basics and Simple Examples
11 Lectures 01:34:28

This high-level introduction will help you understand what Spark is for, who's using it, and why it's such a big deal.

Introduction to Spark
10:11

Understand the core object of Spark: the Resilient Distributed Dataset (RDD), and how you can use Spark to transform and perform actions upon RDDs.

The Resilient Distributed Dataset (RDD)
12:17

We'll dissect our original ratings histogram Spark example, and understand exactly how every line of it works!

Ratings Histogram Walkthrough
13:33

You'll learn how to use key/value pairs in RDDs, and special operations you can perform on them. To make it real, we'll introduce a new example: computing the average number of friends by age using a fake social network data set.

Preview 16:13
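
As a taste of what this lecture builds, here's a sketch of the key/value pattern. The fakefriends.csv file name and its id,name,age,numFriends layout are assumptions for illustration.

    from pyspark import SparkConf, SparkContext

    conf = SparkConf().setMaster("local").setAppName("FriendsByAge")
    sc = SparkContext(conf=conf)

    def parse_line(line):
        # Assumed CSV layout: id,name,age,numFriends
        fields = line.split(",")
        return (int(fields[2]), int(fields[3]))  # (age, numFriends) pair

    rdd = sc.textFile("fakefriends.csv").map(parse_line)

    # mapValues() and reduceByKey() touch only the values, leaving keys alone:
    # build (sum of friends, count of people) per age, then divide.
    totals = rdd.mapValues(lambda x: (x, 1)) \
                .reduceByKey(lambda a, b: (a[0] + b[0], a[1] + b[1]))
    averages = totals.mapValues(lambda pair: pair[0] / pair[1])

    for age, avg in sorted(averages.collect()):
        print(age, round(avg, 1))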

We'll take another look at our "average number of friends by age" example script, actually run it, and examine the results.

Preview 05:39

Learn how the filter() operation works on RDDs, and apply it toward finding the minimum temperatures from a real-world weather data set.

Filtering RDD's, and the Minimum Temperature by Location Example
08:10
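
Here's the filter() pattern in sketch form; the 1800.csv file name and its stationID,date,entryType,temperature layout are assumptions for illustration.

    from pyspark import SparkConf, SparkContext

    conf = SparkConf().setMaster("local").setAppName("MinTemperatures")
    sc = SparkContext(conf=conf)

    def parse_line(line):
        # Assumed layout: stationID,date,entryType,temperature
        fields = line.split(",")
        return (fields[0], fields[2], float(fields[3]))

    parsed = sc.textFile("1800.csv").map(parse_line)

    # filter() keeps only the rows we care about: minimum-temperature entries.
    min_entries = parsed.filter(lambda x: x[1] == "TMIN")

    # Then reduce to the lowest temperature observed at each station.
    station_temps = min_entries.map(lambda x: (x[0], x[2]))
    min_by_station = station_temps.reduceByKey(min)

    for station, temp in min_by_station.collect():
        print(station, temp)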

We'll look at the minimum temperatures by location example as a whole, and actually run it! Then, you've got an activity: modify this script to find the maximum temperatures instead. This lecture reinforces using filters and key/value RDDs.

[Activity] Running the Minimum Temperature Example, and Modifying it for Maximums
05:08

Compare your results from writing a maximum temperature Spark script against my own.

[Activity] Running the Maximum Temperature by Location Example
03:21

We'll do the standard "count the number of occurrences of each word in a book" exercise here, and review the differences between map() and flatMap() in the process.

[Activity] Counting Word Occurrences using flatMap()
07:28
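
Here's roughly what that exercise looks like; book.txt stands in for whatever text file you point it at.

    from pyspark import SparkConf, SparkContext

    conf = SparkConf().setMaster("local").setAppName("WordCount")
    sc = SparkContext(conf=conf)

    lines = sc.textFile("book.txt")

    # map() emits exactly one output element per input line; flatMap() can
    # emit many -- here, one element per word -- flattened into a single RDD.
    words = lines.flatMap(lambda line: line.split())

    # A simple word -> count dict, returned to the driver.
    word_counts = words.countByValue()

    for word, count in sorted(word_counts.items())[:20]:
        print(word, count)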

You'll learn how to use regular expressions in Python, and use them to improve the results of our word count script.

[Activity] Improving the Word Count Script with Regular Expressions
04:44
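
In sketch form, the regex improvement boils down to a smarter split: the \W+ pattern breaks on anything that isn't a word character, stripping punctuation, and lower() folds case so "People" and "people" count together.

    import re
    from pyspark import SparkConf, SparkContext

    conf = SparkConf().setMaster("local").setAppName("WordCountBetter")
    sc = SparkContext(conf=conf)

    def normalize_words(text):
        # Split on non-word characters and normalize case.
        return re.compile(r"\W+", re.UNICODE).split(text.lower())

    lines = sc.textFile("book.txt")
    word_counts = lines.flatMap(normalize_words).countByValue()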

Finally, we'll learn how to implement countByValue() in a way that returns a new RDD, and sort that RDD to produce our final results for word frequency.

Preview 07:44
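
And here's a sketch of that final, fully distributed version: both the counting and the sorting stay in RDD-land, so they can run in parallel across a cluster.

    from pyspark import SparkConf, SparkContext

    conf = SparkConf().setMaster("local").setAppName("WordCountSorted")
    sc = SparkContext(conf=conf)

    words = sc.textFile("book.txt").flatMap(lambda line: line.split())

    # The RDD equivalent of countByValue(): map each word to (word, 1),
    # then add up the 1s per key -- returning a new RDD, not a dict.
    word_counts = words.map(lambda w: (w, 1)).reduceByKey(lambda a, b: a + b)

    # Flip to (count, word) so sortByKey() orders the results by frequency.
    sorted_counts = word_counts.map(lambda x: (x[1], x[0])).sortByKey()

    for count, word in sorted_counts.collect():
        print("%s:\t%d" % (word, count))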

Practice writing your own Spark script to add up the amount spent by each customer in a sample e-commerce data set.
Tally up amount spent by customer using Spark
1 question

Build on the previous assignment by sorting your final results by amount spent, and find out who the biggest spenders are!
Sort your results by amount spent per customer
1 question
Advanced Examples of Spark Programs
10 Lectures 01:12:40

We'll write and run a simple script to find the most-rated movie in the MovieLens data set, which we'll build upon later.

[Activity] Find the Most Popular Movie
05:52

You'll learn how to use "broadcast variables" in Spark to efficiently distribute large objects to every node your Spark program may be running on, and apply this to looking up movie names in our "most popular movie" script.

[Activity] Use Broadcast Variables to Display Movie Names Instead of ID Numbers
08:23
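
Here's the broadcast-variable pattern in miniature. It assumes the MovieLens u.item file (pipe-delimited movie ID and title) from the 100K dataset; the details are illustrative.

    from pyspark import SparkConf, SparkContext

    conf = SparkConf().setMaster("local").setAppName("PopularMovies")
    sc = SparkContext(conf=conf)

    def load_movie_names():
        # u.item maps movie IDs to titles, pipe-delimited (MovieLens 100K).
        names = {}
        with open("ml-100k/u.item", encoding="ISO-8859-1") as f:
            for line in f:
                fields = line.split("|")
                names[int(fields[0])] = fields[1]
        return names

    # broadcast() ships the dictionary to every executor exactly once,
    # instead of serializing it into every single task.
    name_dict = sc.broadcast(load_movie_names())

    movies = sc.textFile("ml-100k/u.data").map(lambda x: (int(x.split()[1]), 1))
    counts = movies.reduceByKey(lambda a, b: a + b)

    # Read the broadcast value with .value -- this lookup runs on the cluster.
    results = counts.map(lambda x: (name_dict.value[x[0]], x[1]))
    for name, count in results.sortBy(lambda x: x[1], ascending=False).take(10):
        print(name, count)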

We introduce the Marvel superhero social graph data set, and write a Spark job to find the superhero with the most co-occurrences with other heroes in comic books.

Preview 04:29

Review the source code of our script to discover the most popular superhero, run it, and reveal the answer!

[Activity] Run the Script - Discover Who the Most Popular Superhero is!
06:00

We'll introduce the Breadth-First Search (BFS) algorithm, and how we can use it to discover degrees of separation between superheroes.

Superhero Degrees of Separation: Introducing Breadth-First Search
07:54

We'll learn how to turn breadth-first search into a Spark problem, and craft our strategy for writing the code. Along the way, we'll cover Spark accumulators and how we can use them to signal our driver script when it's done.

Superhero Degrees of Separation: Accumulators, and Implementing BFS in Spark
06:44
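
Accumulators themselves are simple; here's a toy sketch of the "signal the driver" idea, with a made-up TARGET standing in for the hero we're searching for (this is not the full BFS code).

    from pyspark import SparkConf, SparkContext

    conf = SparkConf().setMaster("local").setAppName("AccumulatorDemo")
    sc = SparkContext(conf=conf)

    # An accumulator is a counter the whole cluster can add to, but only
    # the driver can read. BFS uses one to signal "we reached the target".
    hit_counter = sc.accumulator(0)
    TARGET = 14  # hypothetical ID we're searching for

    def visit(node_id):
        if node_id == TARGET:
            hit_counter.add(1)

    sc.parallelize(range(100)).foreach(visit)

    if hit_counter.value > 0:
        print("Hit the target %d time(s)!" % hit_counter.value)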

We'll get our hands on the code to actually implement breadth-first search, and run it to discover the degrees of separation between any two superheroes!

[Activity] Superhero Degrees of Separation: Review the Code and Run it
09:14

Learn one technique for finding similar movies based on the MovieLens rating data, and how we can frame it as a Spark problem. We'll also introduce the importance of using cache() or persist() on RDDs that will have more than one action performed on them.

Item-Based Collaborative Filtering in Spark, cache(), and persist()
10:12
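
The cache() idea in a nutshell, on toy data:

    from pyspark import SparkConf, SparkContext

    conf = SparkConf().setMaster("local").setAppName("CacheDemo")
    sc = SparkContext(conf=conf)

    # Without cache(), each action below would recompute the map() from
    # scratch. cache() keeps the RDD in memory after the first action;
    # persist() is the same idea with a choice of storage levels.
    squares = sc.parallelize(range(1000000)).map(lambda x: x * x).cache()

    print(squares.count())  # first action: computes and caches the RDD
    print(squares.max())    # second action: served from the cached copy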

We'll review the code for finding similar movies in Spark with the MovieLens ratings data, run it on every available core of your desktop computer, and review the results.

Preview 10:54

Get your hands dirty! I'll give you some ideas on improving the quality of your similar movie results - go try some out, and mess around with our movie similarity code.

[Exercise] Improve the Quality of Similar Movies
02:58
Running Spark on a Cluster
8 Lectures 49:01

Learn how Amazon's Elastic MapReduce makes it easy to rent time on your very own Spark cluster, running on top of Hadoop YARN.

Introducing Elastic MapReduce
05:08

Learn how to set up your AWS account, create a key pair for logging into your Spark / Hadoop cluster, and set up PuTTY to connect to your instances from a Windows desktop.

[Activity] Setting up your AWS / Elastic MapReduce Account and Setting Up PuTTY
09:55

We'll see what needs to be done to our Movie Similarities script in order to get it to run successfully with one million ratings, on a cluster, by introducing the partitionBy() function.

Partitioning
04:21
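
A minimal illustration of partitionBy() ahead of a self-join; the toy data and the choice of 100 partitions are illustrative.

    from pyspark import SparkConf, SparkContext

    conf = SparkConf().setMaster("local[*]").setAppName("PartitionDemo")
    sc = SparkContext(conf=conf)

    ratings_by_user = sc.parallelize([(i % 50, i) for i in range(10000)])

    # A self-join multiplies the data enormously; hash-partitioning first
    # spreads that work evenly across the cluster's executors.
    partitioned = ratings_by_user.partitionBy(100)
    joined = partitioned.join(partitioned)

    print(joined.getNumPartitions())  # the join preserves the 100 partitions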

We'll study the code of our modified movie similarities script, and get it ready to run on a cluster.

Create Similar Movies from One Million Ratings - Part 1
05:12

We'll launch a Hadoop cluster with Spark using Amazon's Elastic MapReduce service, and kick off our script to produce similar movies to Star Wars given one million movie ratings.

Preview 11:27

We'll look at the results of our similar-movies job on one million ratings, and discuss them.

Create Similar Movies from One Million Ratings - Part 3
03:28

We'll look at the Spark console UI and the information it offers to help understand how to diagnose problems and optimize your large Spark jobs.

Troubleshooting Spark on a Cluster
03:43

I'll share some more troubleshooting tips when running Spark on a cluster, and talk about how to manage dependencies your code may have.

More Troubleshooting, and Managing Dependencies
05:47
SparkSQL, DataFrames, and DataSets
3 Lectures 20:16

We'll cover the concepts of SparkSQL, DataFrames, and DataSets, and why they are so important in Spark 2.0 and above.

Introducing SparkSQL
06:08

We'll dive into a real example, revisiting our fake social network data and analyzing it with DataFrames through a SparkSession object.

Executing SQL commands and SQL-style functions on a DataFrame
08:16
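
Here's the shape of the SparkSession/DataFrame workflow, using made-up rows rather than the course's actual data.

    from pyspark.sql import SparkSession, Row

    # SparkSession is the Spark 2.0 entry point, wrapping SparkContext
    # and SQLContext in a single object.
    spark = SparkSession.builder.appName("SparkSQLDemo").getOrCreate()

    people = spark.createDataFrame([
        Row(name="Alice", age=25, numFriends=300),
        Row(name="Bob",   age=40, numFriends=80),
    ])

    # Register the DataFrame as a view and query it with plain SQL...
    people.createOrReplaceTempView("people")
    spark.sql("SELECT name FROM people WHERE age < 30").show()

    # ...or use the equivalent SQL-style functions directly.
    people.groupBy("age").count().orderBy("age").show()

    spark.stop()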

Let's revisit our "most popular movie" example, and implement it using a DataFrame instead of RDDs. DataFrames are the preferred API in Spark 2.0+.

Preview 05:52
Other Spark Technologies and Libraries
5 Lectures 31:06

We'll briefly cover the capabilities of Spark's MLLib machine learning library, and how it can help you solve data mining, machine learning, and statistical problems you may encounter. We'll go into more depth on MLLib's Alternating Least Squares (ALS) recommendation engine, and how we can use it to produce movie recommendations with the MovieLens data set.

Introducing MLLib
08:10

We'll run MLLib's Alternating Least Squares recommender system on the MovieLens 100K dataset.

[Activity] Using MLLib to Produce Movie Recommendations
02:56
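
In sketch form, the MLLib ALS workflow looks like this; the rank and iteration count are illustrative guesses, and user 1 is picked arbitrarily.

    from pyspark import SparkConf, SparkContext
    from pyspark.mllib.recommendation import ALS, Rating

    conf = SparkConf().setMaster("local[*]").setAppName("MovieRecs")
    sc = SparkContext(conf=conf)

    # Parse u.data into MLLib Rating objects: Rating(user, product, rating).
    lines = sc.textFile("ml-100k/u.data")
    ratings = lines.map(lambda l: l.split()) \
                   .map(lambda f: Rating(int(f[0]), int(f[1]), float(f[2])))

    # Train the Alternating Least Squares model; rank is the number of
    # latent factors to learn.
    model = ALS.train(ratings, rank=10, iterations=6)

    user_id = 1  # an arbitrary user from the data set
    for rec in model.recommendProducts(user_id, 10):
        print(rec.product, rec.rating)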

We'll finish running Alternating Least Squares recommendations on the MovieLens ratings data set using MLLib, and evaluate the results.

Analyzing the ALS Recommendations Results
04:53

DataFrames are the preferred API for MLLib in Spark 2.0+. Let's look at an example of using linear regression with DataFrames.

Preview 07:31
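
Here's a sketch of that DataFrame-based API on toy data; "label" and "features" are the default column names spark.ml expects.

    from pyspark.sql import SparkSession
    from pyspark.ml.linalg import Vectors
    from pyspark.ml.regression import LinearRegression

    spark = SparkSession.builder.appName("LinearRegressionDemo").getOrCreate()

    # Toy data where the label roughly follows -2 * x.
    data = spark.createDataFrame([
        (1.0, Vectors.dense([-0.5])),
        (2.0, Vectors.dense([-1.1])),
        (4.0, Vectors.dense([-2.0])),
        (6.0, Vectors.dense([-3.1])),
    ], ["label", "features"])

    lr = LinearRegression(maxIter=10, regParam=0.3, elasticNetParam=0.8)
    model = lr.fit(data)

    # transform() adds a "prediction" column alongside the true labels.
    model.transform(data).select("features", "label", "prediction").show()

    spark.stop()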

An overview of how Spark Streaming lets you process continuous streams of input data and aggregate them over time, and how GraphX lets you compute properties of networks.

Spark Streaming and GraphX
07:36
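
GraphX has no Python API, but Spark Streaming does. Here's the classic streaming word-count sketch; feed it locally with a tool like netcat (nc -lk 9999).

    from pyspark import SparkContext
    from pyspark.streaming import StreamingContext

    sc = SparkContext("local[2]", "StreamingWordCount")  # 2 cores: receiver + work
    ssc = StreamingContext(sc, batchDuration=1)          # 1-second micro-batches

    # Count words in each one-second batch as the stream arrives.
    lines = ssc.socketTextStream("localhost", 9999)
    counts = lines.flatMap(lambda line: line.split()) \
                  .map(lambda word: (word, 1)) \
                  .reduceByKey(lambda a, b: a + b)
    counts.pprint()

    ssc.start()
    ssc.awaitTermination()
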
You Made It! Where to Go from Here.
2 Lectures 05:49

Some suggested resources for learning more about Apache Spark, and data mining and machine learning in general.

Learning More about Spark and Data Science
04:09

Bonus Lecture: Discounts on my other courses!
01:40
About the Instructor
Sundog Education by Frank Kane
4.5 Average rating
16,722 Reviews
80,691 Students
9 Courses
Training the World in Big Data and Machine Learning

Sundog Education's mission is to make highly valuable career skills in big data, data science, and machine learning accessible to everyone in the world. Our consortium of expert instructors shares our knowledge in these emerging fields with you, at prices anyone can afford. 

Sundog Education is led by Frank Kane and owned by Frank's company, Sundog Software LLC. Frank spent 9 years at Amazon and IMDb, developing and managing the technology that automatically delivers product and movie recommendations to hundreds of millions of customers, all the time. Frank holds 17 issued patents in the fields of distributed computing, data mining, and machine learning. In 2012, Frank left to start his own successful company, Sundog Software, which focuses on virtual reality environment technology, and teaching others about big data analysis.

Frank Kane
4.5 Average rating
16,282 Reviews
76,492 Students
7 Courses
Founder, Sundog Education

Frank spent 9 years at Amazon and IMDb, developing and managing the technology that automatically delivers product and movie recommendations to hundreds of millions of customers, all the time. Frank holds 17 issued patents in the fields of distributed computing, data mining, and machine learning. In 2012, Frank left to start his own successful company, Sundog Software, which focuses on virtual reality environment technology, and teaching others about big data analysis.