Apache Spark 2.0 with Scala - Hands On with Big Data!

Dive right in with 20+ hands-on examples of analyzing large data sets with Apache Spark, on your desktop or on Hadoop!
Bestselling
4.6 (1,654 ratings)
8,970 students enrolled
Last updated 5/2017
English
Current price: $19 Original price: $100 Discount: 81% off
30-Day Money-Back Guarantee
Includes:
  • 7.5 hours on-demand video
  • 1 Supplemental Resource
  • Full lifetime access
  • Access on mobile and TV
  • Certificate of Completion
What Will I Learn?
  • Frame big data analysis problems as Apache Spark scripts
  • Develop distributed code using the Scala programming language
  • Optimize Spark jobs through partitioning, caching, and other techniques
  • Build, deploy, and run Spark scripts on Hadoop clusters
  • Process continual streams of data with Spark Streaming
  • Transform structured data using SparkSQL and DataFrames
  • Traverse and analyze graph structures using GraphX
Requirements
  • Some prior programming or scripting experience is required. A crash course in Scala is included, but you need to know the fundamentals of programming in order to pick it up.
  • You will need a desktop PC and an Internet connection. The course is created with Windows in mind, but users comfortable with macOS or Linux can use the same tools.
  • The software needed for this course is freely available, and I'll walk you through downloading and installing it.
Description

New! Updated for Spark 2.0.0.

“Big data” analysis is a hot and highly valuable skill – and this course will teach you the hottest technology in big data: Apache Spark. Employers including Amazon, eBay, NASA JPL, and Yahoo all use Spark to quickly extract meaning from massive data sets across a fault-tolerant Hadoop cluster. You'll learn those same techniques, using your own Windows system right at home. It's easier than you might think, and you'll be learning from an ex-engineer and senior manager from Amazon and IMDb.

Spark works best when using the Scala programming language, and this course includes a crash course in Scala to get you up to speed quickly. For those more familiar with Python, however, a Python version of this class is also available: "Taming Big Data with Apache Spark and Python - Hands On".

In this course, you'll learn and master the art of framing data analysis problems as Spark problems through over 20 hands-on examples, and then scale them up to run on cloud computing services.

  • Learn the concepts of Spark's Resilient Distributed Datasets
  • Get a crash course in the Scala programming language
  • Develop and run Spark jobs quickly using Scala
  • Translate complex analysis problems into iterative or multi-stage Spark scripts
  • Scale up to larger data sets using Amazon's Elastic MapReduce service
  • Understand how Hadoop YARN distributes Spark across computing clusters
  • Practice using other Spark technologies, like Spark SQL, DataFrames, DataSets, Spark Streaming, and GraphX

By the end of this course, you'll be running code that analyzes gigabytes worth of information – in the cloud – in a matter of minutes. 

We'll have some fun along the way. You'll get warmed up with some simple examples of using Spark to analyze movie ratings data and text in a book. Once you've got the basics under your belt, we'll move to some more complex and interesting tasks. We'll use a million movie ratings to find movies that are similar to each other, and you might even discover some new movies you like in the process! We'll analyze a social graph of superheroes, and learn who the most “popular” superhero is – and develop a system to find “degrees of separation” between superheroes. Are all Marvel superheroes within a few degrees of being connected to Spider-Man? You'll find the answer.

This course is very hands-on; you'll spend most of your time following along with the instructor as we write, analyze, and run real code together – both on your own system, and in the cloud using Amazon's Elastic MapReduce service. 7.5 hours of video content is included, with over 20 real examples of increasing complexity you can build, run, and study yourself. Move through them at your own pace, on your own schedule. The course wraps up with an overview of other Spark-based technologies, including Spark SQL, Spark Streaming, and GraphX.

Enjoy the course!

Who is the target audience?
  • Software engineers who want to expand their skills into the world of big data processing on a cluster
  • If you have no previous programming or scripting experience, you'll want to take an introductory programming course first.
Curriculum For This Course
52 Lectures 07:20:01

Getting Started
2 Lectures 27:27

A brief introduction to the course, and then we'll get your development environment for Spark and Scala all set up on your desktop. A quick test application will confirm Spark is working on your system! Remember - be sure to install Spark 2.0.0 or newer for this course.

Preview 14:30

Let's dive right in! We'll download a data set of 100,000 real movie ratings from real people, and run a Spark script that generates histogram data of the distribution of movie ratings. Some final setup of your Scala development environment and downloading the course materials is also part of this lecture, so be sure not to skip this one.

[Activity] Create a Histogram of Real Movie Ratings with Spark!
12:57

Scala Crash Course
5 Lectures 55:16

We'll go over the basic syntax and structure of Scala code with lots of examples. The syntax feels backwards compared to most other languages, but you quickly get used to it. Part 1 of 2.

[Activity] Scala Basics, Part 1
12:52

We'll go over the basic syntax and structure of Scala code with lots of examples. The syntax feels backwards compared to most other languages, but you quickly get used to it. Part 2 of 2, with some hands-on practice at the end.

[Exercise] Scala Basics, Part 2
09:41

You'll see how flow control works in Scala (if/then statements, loops, etc.), and practice what you've learned at the end.

[Exercise] Flow Control in Scala
07:18

Scala is a functional programming language, and so functions are central to the language. We'll go over the many ways functions can be declared and used in Scala, and practice what you've learned.

[Exercise] Functions in Scala
08:47

We'll cover the common data structures in Scala such as Map and List, and put them into practice.

[Exercise] Data Structures in Scala
16:38

Spark Basics and Simple Examples
14 Lectures 01:44:29

What is Apache Spark anyhow? What does it do, and what is it used for?

Preview 08:40

The core object of Spark programming is the Resilient Distributed Dataset, or RDD. Once you know how to use RDD's, you know how to use Spark. We'll go over what they are, and what you can do with them.

The Resilient Distributed Dataset
11:04
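
To make this concrete, here's a minimal sketch of the transformation-then-action pattern you'll use constantly. It is not the course's actual script: the inline data and names are made up, and a real job would load its input with sc.textFile.

    import org.apache.spark.{SparkConf, SparkContext}

    object RddSketch {
      def main(args: Array[String]): Unit = {
        // Run Spark locally, using every core on this machine.
        val sc = new SparkContext(new SparkConf().setMaster("local[*]").setAppName("RddSketch"))

        val lines = sc.parallelize(Seq("196,3", "186,5", "22,3"))  // stand-in for sc.textFile(...)
        val ratings = lines.map(_.split(",")(1))                   // transformation: lazily defines a new RDD
        val counts = ratings.countByValue()                        // action: actually runs the computation

        counts.foreach(println)
        sc.stop()
      }
    }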

Now that we understand Scala and have the theory of Spark behind us, we can revisit the rating counter code from lesson 2 and better understand what's actually going on within it.

Ratings Histogram Walkthrough
07:33

How does Spark convert your script into a Directed Acyclic Graph and figure out how to distribute it on a cluster? Understanding how this process works under the hood can be important in writing optimal Spark driver scripts.

Preview 04:42

RDD's that contain a tuple of two values are key/value RDD's, and you can use them much like you might use a NoSQL data store. We'll use key/value RDD's to figure out the average number of friends by age in some fake social network data.

Key / Value RDD's, and the Average Friends by Age example
12:21
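
If you're curious what this looks like in code, here's a rough sketch of the mapValues()/reduceByKey() pattern the lecture builds on; the hard-coded pairs stand in for the fake social network file.

    import org.apache.spark.{SparkConf, SparkContext}

    object FriendsByAgeSketch {
      def main(args: Array[String]): Unit = {
        val sc = new SparkContext(new SparkConf().setMaster("local[*]").setAppName("FriendsByAgeSketch"))

        // (age, numFriends) pairs -- stand-ins for the parsed data file
        val pairs = sc.parallelize(Seq((33, 385), (33, 2), (55, 221), (40, 465)))

        // Turn each count into (sum, 1), total them per age, then divide.
        val totalsByAge = pairs.mapValues(x => (x, 1))
          .reduceByKey((a, b) => (a._1 + b._1, a._2 + b._2))
        val averagesByAge = totalsByAge.mapValues(x => x._1.toDouble / x._2)

        averagesByAge.collect().sorted.foreach(println)
        sc.stop()
      }
    }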

We'll run the average friends by age example on your desktop, and give you some ideas for further extending this script on your own.

[Activity] Running the Average Friends by Age Example
07:58

We'll cover how to filter data out of an RDD efficiently, and illustrate this with a new example that finds the minimum temperature by location using real weather data.

Filtering RDD's, and the Minimum Temperature by Location Example
06:43
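
A minimal sketch of the filter-then-reduceByKey idea, with a few hard-coded rows standing in for the real weather data:

    import org.apache.spark.{SparkConf, SparkContext}
    import scala.math.min

    object MinTempSketch {
      def main(args: Array[String]): Unit = {
        val sc = new SparkContext(new SparkConf().setMaster("local[*]").setAppName("MinTempSketch"))

        // (stationID, entryType, temperature) rows -- stand-ins for parsed weather lines
        val readings = sc.parallelize(Seq(
          ("ITE00100554", "TMIN", 5.4), ("ITE00100554", "TMAX", 18.2),
          ("EZE00100082", "TMIN", -3.1), ("EZE00100082", "TMAX", 12.0)))

        // Filter *before* shuffling, so discarded rows never move across the network.
        val minTemps = readings.filter(_._2 == "TMIN")
          .map(x => (x._1, x._3))
          .reduceByKey((a, b) => min(a, b))

        minTemps.collect().foreach(println)
        sc.stop()
      }
    }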

We'll run our minimum temperature by location example, and modify it to find maximum temperatures as well. Plus, some ideas for extending this script on your own.

[Activity] Running the Minimum Temperature Example, and Modifying it for Maximum
10:10

flatMap() on an RDD can return a variable number of new entries in the resulting RDD. We'll use this as part of a hands-on example that finds how often each word is used inside a real book's text.

[Activity] Counting Word Occurrences using Flatmap()
08:59
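
Here's roughly what a flatMap()-based word count boils down to; this is a sketch with an inline string standing in for the book's text.

    import org.apache.spark.{SparkConf, SparkContext}

    object WordCountSketch {
      def main(args: Array[String]): Unit = {
        val sc = new SparkContext(new SparkConf().setMaster("local[*]").setAppName("WordCountSketch"))

        val lines = sc.parallelize(Seq("to be or not to be"))  // stand-in for sc.textFile("book.txt")

        // flatMap maps each line to zero or more words, so the output RDD
        // can have more (or fewer) entries than the input RDD.
        val words = lines.flatMap(_.split(" "))
        words.countByValue().foreach(println)
        sc.stop()
      }
    }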

We extend the previous lecture's example by using regular expressions to better extract words from our book.

[Activity] Improving the Word Count Script with Regular Expressions
06:41

Finally, we sort the final results to see what the most common words in this book really are! And some ideas to extend this script on your own.

Preview 08:10

Your assignment: write a script that finds the total amount spent per customer using some fabricated e-commerce data, using what you've learned so far.

[Exercise] Find the Total Amount Spent by Customer
03:37

We'll review my solution to the previous lecture's assignment, and challenge you further to sort your results to find the biggest spenders.

[Exercise] Check your Results, and Sort Them by Total Amount Spent
04:26

Check your results for finding the biggest spenders in our e-commerce data against my own solution.

Check Your Results and Implementation Against Mine
03:25

Advanced Examples of Spark Programs
9 Lectures 01:16:07

We'll revisit our movie ratings data set, and start off with a simple example to find the most-rated movie.

[Activity] Find the Most Popular Movie
04:29

Broadcast variables can be used to share small amounts of data to all of the machines on your cluster. We'll use them to share a lookup table of movie ID's to movie names, and use that to get movie names in our final results.

[Activity] Use Broadcast Variables to Display Movie Names
08:52
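
In code, the broadcast pattern looks roughly like this; the two-entry lookup table is a made-up stand-in for the real movie names file.

    import org.apache.spark.{SparkConf, SparkContext}

    object BroadcastSketch {
      def main(args: Array[String]): Unit = {
        val sc = new SparkContext(new SparkConf().setMaster("local[*]").setAppName("BroadcastSketch"))

        // Ship the small movieID -> name lookup table to every executor, once.
        val nameDict = sc.broadcast(Map(1 -> "Toy Story", 2 -> "GoldenEye"))

        val ratings = sc.parallelize(Seq((1, 5.0), (2, 3.0), (1, 4.0)))
        val withNames = ratings.map { case (movieId, rating) =>
          (nameDict.value.getOrElse(movieId, "Unknown"), rating)  // look up locally on each node
        }

        withNames.collect().foreach(println)
        sc.stop()
      }
    }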

We introduce the Marvel superhero social network data set, and write a script to find the most-connected superhero in it. It's not who you might think!

[Activity] Find the Most Popular Superhero in a Social Graph
14:10

As a more complex example, we'll apply a breadth-first-search (BFS) algorithm to the Marvel dataset to compute the degrees of separation between any two superheroes. In this lecture, we go over how BFS works.

Superhero Degrees of Separation: Introducing Breadth-First Search
06:52

We'll go over our strategy for implementing BFS within a Spark script that can be distributed, and introduce the use of Accumulators to maintain running totals that are synced across a cluster.

Superhero Degrees of Separation: Accumulators, and Implementing BFS in Spark
05:53
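
As a taste of the accumulator API (Spark 2.0's longAccumulator), here's a minimal sketch; the BFS script in this section uses the same idea to keep a running count that is synced across the cluster. The "42" condition is just an illustrative stand-in for "we reached the target hero."

    import org.apache.spark.{SparkConf, SparkContext}

    object AccumulatorSketch {
      def main(args: Array[String]): Unit = {
        val sc = new SparkContext(new SparkConf().setMaster("local[*]").setAppName("AccumulatorSketch"))

        // A running total that code on any node can add to, synced back to the driver.
        val hitCounter = sc.longAccumulator("Target hits")

        sc.parallelize(1 to 100).foreach { id =>
          if (id == 42) hitCounter.add(1)  // hypothetical "found the target" condition
        }

        println(s"Hits: ${hitCounter.value}")
        sc.stop()
      }
    }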

Finally, we'll review the code for finding the degrees of separation using breadth-first-search, run it, and see the results!

Superhero Degrees of Separation: Review the code, and run it!
10:41

Back to our movie ratings data - we'll discover movies that are similar to each other just based on user ratings. We'll cover the algorithm, and how to implement it as a Spark script.

Item-Based Collaborative Filtering in Spark, cache(), and persist()
08:16
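
The difference between cache() and persist() fits in a couple of lines. A sketch, where the squared-numbers RDD is just a stand-in for an expensive computation you plan to reuse:

    import org.apache.spark.{SparkConf, SparkContext}
    import org.apache.spark.storage.StorageLevel

    object CachingSketch {
      def main(args: Array[String]): Unit = {
        val sc = new SparkContext(new SparkConf().setMaster("local[*]").setAppName("CachingSketch"))

        val expensive = sc.parallelize(1 to 1000000).map(x => x.toLong * x)

        // cache() keeps the RDD in memory only; persist() lets you choose a
        // storage level, e.g. spilling to disk if it won't all fit in RAM.
        expensive.persist(StorageLevel.MEMORY_AND_DISK)

        println(expensive.count())  // first action computes and stores the RDD
        println(expensive.sum())    // second action reuses the stored partitions
        sc.stop()
      }
    }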

We'll run our movie similarities script and see the results. In doing so, we'll introduce the process of exporting your Spark script as a JAR file that can be run from the command line using the spark-submit script (instead of running from within the Scala IDE).

Preview 14:13

Your challenge: make the movie similarity results even better! Here are some ideas for you to try out.

[Exercise] Improve the Quality of Similar Movies
02:41

Running Spark on a Cluster
7 Lectures 01:00:48

In a production environment, you'll use spark-submit to start your driver scripts from a command line, cron job, or the like. We'll cover the details on what you need to do differently in this case.

[Activity] Using spark-submit to run Spark driver scripts
06:58

Spark / Scala scripts that have external dependencies can be bundled up into self-contained packages using the SBT tool. We'll use SBT to package up our movie similarities script as an exercise.

[Activity] Packaging driver scripts with SBT
14:06
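
For reference, the build definition for a Spark project is only a few lines. This build.sbt is an illustrative sketch (the project name and versions are placeholders), marking Spark as "provided" so it isn't bundled into the JAR, since the cluster already supplies it:

    // build.sbt -- illustrative sketch; name and versions are placeholders
    name := "MovieSimilarities"
    version := "1.0"
    scalaVersion := "2.11.8"

    // "provided" keeps Spark itself out of the packaged JAR; SBT bundles
    // everything else your script depends on.
    libraryDependencies += "org.apache.spark" %% "spark-core" % "2.0.0" % "provided"

With an assembly plugin such as sbt-assembly added to the project, running sbt assembly then produces a self-contained JAR you can hand to spark-submit.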

Amazon Web Services (AWS) offers the Elastic MapReduce service (EMR), which gives us a way to rent time on a Hadoop cluster of our choosing - with Spark pre-installed on it. We'll use EMR to illustrate running a Spark script on a real cluster, so let's go over what EMR is and how it works first.

Introducing Amazon Elastic MapReduce
07:11

Let's compute movie similarities on a real cluster in the cloud, using one million user ratings!

Preview 12:47

Explicitly partitioning your RDD's can be an important optimization; we'll go over when and how to do this.

Partitioning
05:07
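
A sketch of what explicit partitioning looks like: a HashPartitioner spreading a pair RDD across 100 partitions ahead of a self-join (the pair data here is made up).

    import org.apache.spark.{HashPartitioner, SparkConf, SparkContext}

    object PartitioningSketch {
      def main(args: Array[String]): Unit = {
        val sc = new SparkContext(new SparkConf().setMaster("local[*]").setAppName("PartitioningSketch"))

        val pairs = sc.parallelize(Seq((1, "a"), (2, "b"), (3, "c")))

        // Spread the data across 100 partitions *before* an expensive operation
        // like a self-join, so the work can be divided among your executors.
        val partitioned = pairs.partitionBy(new HashPartitioner(100))
        val joined = partitioned.join(partitioned)

        println(joined.count())
        sc.stop()
      }
    }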

Other tips and tricks for taking your script to a real cluster and getting it to run as you expect.

Best Practices for Running on a Cluster
05:31

How to troubleshoot Spark jobs on a cluster using the Spark UI and logs, and more on managing dependencies of your script and data.

Troubleshooting, and Managing Dependencies
09:08

SparkSQL, DataFrames, and DataSets
4 Lectures 28:09

Understand SparkSQL in Spark 2, and the new DataFrame and DataSet API's used for querying structured data in an efficient, scalable manner.

Introduction to SparkSQL
07:08

We'll revisit our fabricated social network data, but load it into a DataFrame and analyze it with actual SQL queries!

[Activity] Using SparkSQL
07:00
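
Here's a minimal sketch of the SparkSession-plus-SQL workflow; the two made-up rows stand in for the social network data file.

    import org.apache.spark.sql.SparkSession

    object SparkSQLSketch {
      def main(args: Array[String]): Unit = {
        val spark = SparkSession.builder
          .master("local[*]").appName("SparkSQLSketch").getOrCreate()
        import spark.implicits._

        // Made-up rows standing in for the social network data set.
        val people = Seq((0, "Will", 33, 385), (1, "Jean-Luc", 26, 2))
          .toDF("id", "name", "age", "numFriends")

        // Expose the DataFrame as a view, then query it with actual SQL.
        people.createOrReplaceTempView("people")
        spark.sql("SELECT name, numFriends FROM people WHERE age < 30").show()

        spark.stop()
      }
    }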

We'll analyze our social network data another way - this time using SQL-like functions on a DataSet, instead of actual SQL query strings.

[Activity] Using DataFrames and DataSets
06:38

We'll revisit our "most popular movie" exercise, but this time use a DataSet to make getting the answer more straightforward.

[Activity] Using DataSets instead of RDD's
07:23

Machine Learning with MLLib
4 Lectures 36:41

MLLib offers several distributed machine learning algorithms that you can run on a Spark cluster. We'll cover what MLLib can do and how it fits in.

Introducing MLLib
07:38

We'll use MLLib's Alternating Least Squares recommender algorithm to produce movie recommendations using our MovieLens ratings data. The results are... unexpected!

[Activity] Using MLLib to Produce Movie Recommendations
07:22
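
For a sense of how little code MLLib's ALS needs, here's a sketch using a handful of hard-coded ratings in place of the MovieLens data; the rank and iteration values are illustrative, not tuned.

    import org.apache.spark.{SparkConf, SparkContext}
    import org.apache.spark.mllib.recommendation.{ALS, Rating}

    object ALSSketch {
      def main(args: Array[String]): Unit = {
        val sc = new SparkContext(new SparkConf().setMaster("local[*]").setAppName("ALSSketch"))

        // (user, movie, rating) triples -- a tiny stand-in for the MovieLens data
        val ratings = sc.parallelize(Seq(
          Rating(0, 1, 5.0), Rating(0, 2, 1.0),
          Rating(1, 1, 4.0), Rating(1, 3, 5.0)))

        // rank = number of latent factors; 8 factors, 20 iterations (illustrative)
        val model = ALS.train(ratings, 8, 20)

        model.recommendProducts(0, 3).foreach(println)  // top picks for user 0
        sc.stop()
      }
    }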

A brief overview of what linear regression is and how it works, followed by a hands-on example of finding a regression and applying it to fabricated page speed vs. revenue data.

[Activity] Linear Regression with MLLib
11:37

Spark 2 makes DataFrames the preferred API for MLLib. Let's re-write our linear regression example, this time using Spark's DataFrame MLLib API.

Preview 10:04
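
A rough sketch of the DataFrame-based approach: pack the inputs into a "features" vector with VectorAssembler, then fit spark.ml's LinearRegression. The page speed and revenue numbers here are invented.

    import org.apache.spark.ml.feature.VectorAssembler
    import org.apache.spark.ml.regression.LinearRegression
    import org.apache.spark.sql.SparkSession

    object LinearRegressionSketch {
      def main(args: Array[String]): Unit = {
        val spark = SparkSession.builder
          .master("local[*]").appName("LinearRegressionSketch").getOrCreate()
        import spark.implicits._

        // Invented page speed vs. revenue numbers, just to have something to fit.
        val raw = Seq((1.0, 10.0), (2.0, 8.5), (3.0, 7.2), (4.0, 5.9))
          .toDF("pageSpeed", "label")

        // spark.ml expects the inputs packed into a single "features" vector column.
        val data = new VectorAssembler()
          .setInputCols(Array("pageSpeed")).setOutputCol("features")
          .transform(raw)

        val model = new LinearRegression().fit(data)
        println(s"slope: ${model.coefficients(0)}, intercept: ${model.intercept}")
        spark.stop()
      }
    }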

Intro to Spark Streaming
3 Lectures 26:06

Spark Streaming allows you to create Spark driver scripts that run indefinitely, continually processing data as it streams in! We'll cover how it works and what it can do.

Spark Streaming Overview
09:53
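
Here's a minimal DStream sketch in the same spirit: counting words over a sliding five-minute window, updated every second. The socket source is hypothetical; the hands-on lecture that follows uses Twitter's stream instead.

    import org.apache.spark.SparkConf
    import org.apache.spark.streaming.{Seconds, StreamingContext}

    object StreamingSketch {
      def main(args: Array[String]): Unit = {
        val conf = new SparkConf().setMaster("local[*]").setAppName("StreamingSketch")
        val ssc = new StreamingContext(conf, Seconds(1))  // 1-second micro-batches
        ssc.checkpoint("checkpoint")                      // required for windowed state

        // Hypothetical source: lines of text arriving on a local socket.
        val lines = ssc.socketTextStream("localhost", 9999)

        // Running word counts over a sliding 5-minute window, updated each second.
        val counts = lines.flatMap(_.split(" "))
          .map(w => (w, 1))
          .reduceByKeyAndWindow((a: Int, b: Int) => a + b, Seconds(300), Seconds(1))

        counts.print()
        ssc.start()
        ssc.awaitTermination()
      }
    }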

As a hands-on example of using Spark Streaming, we'll set up a Twitter developer account, and run a script that will keep track of the most popular hashtags from the past five minutes in real time! Plus some ideas for extending this script on your own.

Preview 12:12

Spark 2.0 introduced experimental support for Structured Streaming, a new DataFrame-based API for writing continuous applications.

Structured Streaming
04:01

Intro to GraphX
2 Lectures 19:37

We cover Spark's GraphX library and how it works, followed by a strategy for re-implementing breadth-first-search using GraphX and its Pregel API.

Preview 10:38
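
As a sketch of the Pregel-style approach: start the root at distance 0 and every other vertex at infinity, then propagate distance+1 along edges until nothing changes. The three-edge toy graph stands in for the superhero data.

    import org.apache.spark.{SparkConf, SparkContext}
    import org.apache.spark.graphx._

    object GraphXBfsSketch {
      def main(args: Array[String]): Unit = {
        val sc = new SparkContext(new SparkConf().setMaster("local[*]").setAppName("GraphXBfsSketch"))

        // A tiny toy graph standing in for the superhero network.
        val edges = sc.parallelize(Seq(Edge(1L, 2L, 1), Edge(2L, 3L, 1), Edge(3L, 4L, 1)))
        val root: VertexId = 1L

        // Root starts at distance 0.0; everyone else at "infinity".
        val initial = Graph.fromEdges(edges, 0)
          .mapVertices((id, _) => if (id == root) 0.0 else Double.PositiveInfinity)

        val bfs = initial.pregel(Double.PositiveInfinity)(
          (_, dist, newDist) => math.min(dist, newDist),                 // keep the best distance seen
          t => if (t.srcAttr + 1 < t.dstAttr)
                 Iterator((t.dstId, t.srcAttr + 1)) else Iterator.empty, // offer neighbors distance+1
          (a, b) => math.min(a, b))                                      // merge competing messages

        bfs.vertices.collect().sortBy(_._1).foreach(println)
        sc.stop()
      }
    }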

We'll use GraphX and Pregel to recreate our earlier results analyzing the superhero social network data - but with a lot less code!

[Activity] Superhero Degrees of Separation using GraphX
08:59

You Made It! Where to Go from Here.
2 Lectures 05:21

You made it to the end! Here are some book recommendations if you want to learn more, as well as some career advice on landing a job in "big data".

Learning More, and Career Tips
04:15

Let's stay in touch! Head to my website for discounts on my other courses, and to follow me on social media.

Bonus Lecture: Discounts on my other "Big Data" / Data Science Courses.
01:06
About the Instructor
Sundog Education by Frank Kane
4.5 Average rating
11,726 Reviews
59,788 Students
7 Courses
Training the World in Big Data and Machine Learning

Sundog Education's mission is to make highly valuable career skills in big data, data science, and machine learning accessible to everyone in the world. Our consortium of expert instructors shares its knowledge in these emerging fields with you, at prices anyone can afford.

Sundog Education is led by Frank Kane and owned by Frank's company, Sundog Software LLC. Frank spent 9 years at Amazon and IMDb, developing and managing the technology that automatically delivers product and movie recommendations to hundreds of millions of customers, all the time. Frank holds 17 issued patents in the fields of distributed computing, data mining, and machine learning. In 2012, Frank left to start his own successful company, Sundog Software, which focuses on virtual reality environment technology, and teaching others about big data analysis.

Frank Kane
4.5 Average rating
11,449 Reviews
57,414 Students
6 Courses
Founder, Sundog Education

Frank spent 9 years at Amazon and IMDb, developing and managing the technology that automatically delivers product and movie recommendations to hundreds of millions of customers, all the time. Frank holds 17 issued patents in the fields of distributed computing, data mining, and machine learning. In 2012, Frank left to start his own successful company, Sundog Software, which focuses on virtual reality environment technology, and teaching others about big data analysis.