Apache Spark with Scala - Hands On with Big Data!

Dive right in with 20+ hands-on examples of analyzing large data sets with Apache Spark, on your desktop or on Hadoop!
Bestseller
4.5 (11,464 ratings)
58,740 students enrolled
Last updated 3/2020
Language: English
Subtitles: English, French [Auto], German [Auto], Italian [Auto], Polish [Auto], Portuguese [Auto], Spanish [Auto]
Current price: $96.99 (original price: $149.99, 35% off)
30-Day Money-Back Guarantee
This course includes
  • 7.5 hours on-demand video
  • 3 articles
  • Full lifetime access
  • Access on mobile and TV
  • Certificate of Completion
What you'll learn
  • Frame big data analysis problems as Apache Spark scripts
  • Develop distributed code using the Scala programming language
  • Optimize Spark jobs through partitioning, caching, and other techniques
  • Build, deploy, and run Spark scripts on Hadoop clusters
  • Process continual streams of data with Spark Streaming
  • Transform structured data using SparkSQL and DataFrames
  • Traverse and analyze graph structures using GraphX
Course content
56 lectures 07:34:02
+ Getting Started
4 lectures 33:59
Tip: Apply for a Twitter Developer Account now!
00:51

A brief introduction to the course, and then we'll get your development environment for Spark and Scala all set up on your desktop. A quick test application will confirm Spark is working on your system! Remember - be sure to install Spark 3.0.0 and Java 8 for this course.

Preview 16:19

Let's dive right in! We'll download a data set of 100,000 real movie ratings from real people, and run a Spark script that generates histogram data of the distribution of movie ratings. Some final setup of your Scala development environment and downloading the course materials is also part of this lecture, so be sure not to skip this one.

Preview 14:39
+ Scala Crash Course [Optional]
5 lectures 55:16

We'll go over the basic syntax and structure of Scala code with lots of examples. Its syntax can feel backwards compared to most other languages, but you'll get used to it quickly. Part 1 of 2.

[Activity] Scala Basics, Part 1
12:52

We'll go over the basic syntax and structure of Scala code with lots of examples. Its syntax can feel backwards compared to most other languages, but you'll get used to it quickly. Part 2 of 2, with some hands-on practice at the end.

[Exercise] Scala Basics, Part 2
09:41

You'll see how flow control works in Scala (if/then statements, loops, etc.), and practice what you've learned at the end.

[Exercise] Flow Control in Scala
07:18

Scala is a functional programming language, and so functions are central to the language. We'll go over the many ways functions can be declared and used in Scala, and practice what you've learned.

[Exercise] Functions in Scala
08:47

We'll cover the common data structures in Scala such as Map and List, and put them into practice.

[Exercise] Data Structures in Scala
16:38
+ Spark Basics and Simple Examples
15 lectures 01:51:18

Apache Spark 3 was released in early 2020 - here's what's new, what's improved, and what's deprecated.

Preview 06:48

What is Apache Spark anyhow? What does it do, and what is it used for?

Preview 08:40

The core object of Spark programming is the Resilient Distributed Dataset, or RDD. Once you know how to use RDD's, you know how to use Spark. We'll go over what they are, and what you can do with them.

The Resilient Distributed Dataset
11:04
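
To give an early taste of what RDD code looks like, here's a minimal sketch (assuming a SparkContext named sc is already set up; this isn't the course's exact script):

    // Build an RDD from a local collection, transform it, and reduce it to a result
    val numbers = sc.parallelize(1 to 100)       // an RDD[Int], distributed across the cluster
    val squares = numbers.map(x => x * x)        // a transformation - evaluated lazily
    val total = squares.reduce((a, b) => a + b)  // an action - triggers the actual computation
    println(total)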

Now that we understand Scala and have the theory of Spark behind us, we can revisit the rating counter code from lesson 2 and better understand what's actually going on within it.

Ratings Histogram Walkthrough
07:33
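
The general shape of that ratings-counter script looks like this (a sketch - the file path is illustrative, and u.data is MovieLens' tab-separated format of userID, movieID, rating, timestamp):

    import org.apache.spark._

    val sc = new SparkContext("local[*]", "RatingsHistogram")
    val lines = sc.textFile("ml-100k/u.data")
    val ratings = lines.map(line => line.split("\t")(2))  // extract just the rating field
    val results = ratings.countByValue()                  // action: a Map of rating -> count
    results.toSeq.sortBy(_._1).foreach(println)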

How does Spark convert your script into a Directed Acyclic Graph and figure out how to distribute it on a cluster? Understanding how this process works under the hood can be important in writing optimal Spark driver scripts.

Preview 04:42

RDD's that contain a tuple of two values are key/value RDD's, and you can use them much like you might use a NoSQL data store. We'll use key/value RDD's to figure out the average number of friends by age in some fake social network data.

Key / Value RDD's, and the Average Friends by Age example
12:21
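
The heart of that computation is only a few lines (a sketch, assuming rdd is a key/value RDD of (age, numFriends) pairs parsed from the data file):

    val totalsByAge = rdd.mapValues(x => (x, 1))                          // (age, (numFriends, 1))
      .reduceByKey((a, b) => (a._1 + b._1, a._2 + b._2))                  // sum friends and counts per age
    val averagesByAge = totalsByAge.mapValues(x => x._1.toDouble / x._2)  // divide to get the average
    averagesByAge.collect().sorted.foreach(println)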

We'll run the average friends by age example on your desktop, and give you some ideas for further extending this script on your own.

[Activity] Running the Average Friends by Age Example
07:58

We'll cover how to filter data out of an RDD efficiently, and illustrate this with a new example that finds the minimum temperature by location using real weather data.

Filtering RDD's, and the Minimum Temperature by Location Example
06:43
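
The filtering pattern itself is compact (a sketch, assuming parsedLines is an RDD of (stationID, entryType, temperature) tuples):

    val minTemps = parsedLines.filter(x => x._2 == "TMIN")  // discard everything but TMIN entries
    val stationTemps = minTemps.map(x => (x._1, x._3))      // keep (stationID, temperature)
    val minTempsByStation = stationTemps.reduceByKey((a, b) => math.min(a, b))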

We'll run our minimum temperature by location example, and modify it to find maximum temperatures as well. Plus, some ideas for extending this script on your own.

[Activity] Running the Minimum Temperature Example, and Modifying it for Maximum
10:10

flatMap() on an RDD can return a variable number of new entries in the resulting RDD. We'll use this as part of a hands-on example that finds how often each word is used inside a real book's text.

[Activity] Counting Word Occurrences using flatMap()
08:59
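
The key idea - one input line can become many output entries - looks like this (a minimal sketch with an illustrative file name):

    val input = sc.textFile("book.txt")
    val words = input.flatMap(line => line.split(" "))  // each line becomes many words
    val wordCounts = words.countByValue()               // a Map of word -> occurrence count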

We extend the previous lecture's example by using regular expressions to better extract words from our book.

[Activity] Improving the Word Count Script with Regular Expressions
06:41
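
The improvement amounts to swapping the naive split for a regular expression and normalizing case (a sketch):

    val words = input.flatMap(line => line.split("\\W+"))  // split on runs of non-word characters
    val lowercaseWords = words.map(word => word.toLowerCase())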

Finally, we sort the final results to see what the most common words in this book really are! And some ideas to extend this script on your own.

Preview 08:10
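
Sorting by count is a matter of flipping each pair so the count becomes the key (a sketch, assuming wordCounts is an RDD of (word, count) pairs built with reduceByKey rather than countByValue):

    val wordCountsSorted = wordCounts.map(x => (x._2, x._1)).sortByKey()  // ascending by count
    wordCountsSorted.collect().foreach(println)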

Your assignment: write a script that finds the total amount spent per customer using some fabricated e-commerce data, using what you've learned so far.

[Exercise] Find the Total Amount Spent by Customer
03:37

We'll review my solution to the previous lecture's assignment, and challenge you further to sort your results to find the biggest spenders.

[Exercise] Check your Results, and Sort Them by Total Amount Spent
04:26

Check your results for finding the biggest spenders in our e-commerce data against my own solution.

Check Your Results and Implementation Against Mine
03:26
+ Advanced Examples of Spark Programs
9 lectures 01:16:07

We'll revisit our movie ratings data set, and start off with a simple example to find the most-rated movie.

[Activity] Find the Most Popular Movie
04:29

Broadcast variables can be used to share small amounts of data with all of the machines on your cluster. We'll use them to share a lookup table of movie ID's to movie names, and use that to get movie names in our final results.

[Activity] Use Broadcast Variables to Display Movie Names
08:52
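
In code, a broadcast variable is a thin wrapper (a sketch - loadMovieNames() is a hypothetical helper that builds a Map of movie IDs to names on the driver, and movieCounts an assumed RDD of (movieID, count) pairs):

    val nameDict = sc.broadcast(loadMovieNames())  // ship the lookup table to every executor once
    val namedResults = movieCounts.map(x => (nameDict.value(x._1), x._2))  // read it via .value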

We introduce the Marvel superhero social network data set, and write a script to find the most-connected superhero in it. It's not who you might think!

[Activity] Find the Most Popular Superhero in a Social Graph
14:10

As a more complex example, we'll apply a breadth-first-search (BFS) algorithm to the Marvel dataset to compute the degrees of separation between any two superheroes. In this lecture, we go over how BFS works.

Superhero Degrees of Separation: Introducing Breadth-First Search
06:52

We'll go over our strategy for implementing BFS within a Spark script that can be distributed, and introduce the use of Accumulators to maintain running totals that are synced across a cluster.

Superhero Degrees of Separation: Accumulators, and Implementing BFS in Spark
05:53
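
Accumulators themselves take very little code (a minimal sketch - targetCharacterID is a stand-in for the hero being searched for):

    val hitCounter = sc.longAccumulator("Hit Counter")    // a counter shared across the cluster
    rdd.foreach { node =>
      if (node == targetCharacterID) hitCounter.add(1)    // executors may only add to it
    }
    println(s"Hits this iteration: ${hitCounter.value}")  // only the driver reads the total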

Finally, we'll review the code for finding the degrees of separation using breadth-first-search, run it, and see the results!

Superhero Degrees of Separation: Review the code, and run it!
10:41

Back to our movie ratings data - we'll discover movies that are similar to each other just based on user ratings. We'll cover the algorithm, and how to implement it as a Spark script.

Item-Based Collaborative Filtering in Spark, cache(), and persist()
08:16
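
The cache() call matters because the similarity results feed more than one action; without it, Spark would recompute the entire pipeline each time (a sketch - computeCosineSimilarity is a hypothetical per-pair scoring function):

    val similarities = moviePairRatings.mapValues(computeCosineSimilarity).cache()
    println(similarities.count())           // first action: computes and caches the RDD
    similarities.take(10).foreach(println)  // second action: served from the cache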

We'll run our movie similarities script and see the results. In doing so, we'll introduce the process of exporting your Spark script as a JAR file that can be run from the command line using the spark-submit script (instead of running from within the Scala IDE).

Preview 14:13

Your challenge: make the movie similarity results even better! Here are some ideas for you to try out.

[Exercise] Improve the Quality of Similar Movies
02:41
+ Running Spark on a Cluster
7 lectures 58:42

In a production environment, you'll use spark-submit to start your driver scripts from a command line, cron job, or the like. We'll cover the details on what you need to do differently in this case.

[Activity] Using spark-submit to run Spark driver scripts
06:58
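
A typical invocation looks like this (the class name, JAR name, and trailing application argument are placeholders for your own build):

    spark-submit --class com.example.spark.MovieSimilarities \
      --master local[*] \
      MovieSimilarities.jar 50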

Spark / Scala scripts that have external dependencies can be bundled up into self-contained packages using the SBT tool. We'll use SBT to package up our movie similarities script as an exercise.

[Activity] Packaging driver scripts with SBT
13:14
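
A minimal build.sbt for a Spark project looks something like this (versions are illustrative; marking Spark as "provided" keeps it out of the fat JAR, since spark-submit supplies it at runtime):

    name := "MovieSimilarities"
    version := "1.0"
    scalaVersion := "2.12.11"
    libraryDependencies += "org.apache.spark" %% "spark-core" % "3.0.0" % "provided"

With the sbt-assembly plugin configured, running "sbt assembly" then produces a single self-contained JAR you can hand to spark-submit.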

Amazon Web Services (AWS) offers the Elastic MapReduce service (EMR), which gives us a way to rent time on a Hadoop cluster of our choosing - with Spark pre-installed on it. We'll use EMR to illustrate running a Spark script on a real cluster, so let's go over what EMR is and how it works first.

Introducing Amazon Elastic MapReduce
07:11

Let's compute movie similarities on a real cluster in the cloud, using one million user ratings!

Preview 11:33

Explicitly partitioning your RDD's can be an important optimization; we'll go over when and how to do this.

Partitioning
05:07
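
In practice this is a one-line change ahead of an expensive operation (a sketch - ratingsByUser is an assumed key/value RDD, and 100 partitions is a starting point, not a magic number):

    import org.apache.spark.HashPartitioner

    // Hash the keys across 100 partitions so the join's work spreads evenly
    val partitioned = ratingsByUser.partitionBy(new HashPartitioner(100))
    val joined = partitioned.join(partitioned)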

Other tips and tricks for taking your script to a real cluster and getting it to run as you expect.

Best Practices for Running on a Cluster
05:31

How to troubleshoot Spark jobs on a cluster using the Spark UI and logs, and more on managing dependencies of your script and data.

Troubleshooting, and Managing Dependencies
09:08
+ SparkSQL, DataFrames, and DataSets
4 lectures 28:09

Understand SparkSQL in Spark 2, and the new DataFrame and DataSet API's used for querying structured data in an efficient, scalable manner.

Introduction to SparkSQL
07:08

We'll revisit our fabricated social network data, but load it into a DataFrame and analyze it with actual SQL queries!

[Activity] Using SparkSQL
07:00
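
The pattern looks like this (a sketch - the Person case class and file name are stand-ins for the course's fake social network data):

    import org.apache.spark.sql.SparkSession

    case class Person(id: Int, name: String, age: Int, numFriends: Int)

    val spark = SparkSession.builder.appName("SparkSQLSketch").master("local[*]").getOrCreate()
    import spark.implicits._
    val people = spark.read.option("header", "true").option("inferSchema", "true")
      .csv("fakefriends.csv").as[Person]
    people.createOrReplaceTempView("people")  // expose the DataSet to SQL
    spark.sql("SELECT * FROM people WHERE age BETWEEN 13 AND 19").show()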

We'll analyze our social network data another way - this time using SQL-like functions on a DataSet, instead of actual SQL query strings.

[Activity] Using DataFrames and DataSets
06:38
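
The same analysis expressed with DataSet operations instead of a SQL string (a sketch, reusing the people DataSet from the previous sketch):

    people.groupBy("age").avg("numFriends").sort("age").show()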

We'll revisit our "most popular movie" exercise, but this time use a DataSet to make getting the answer more straightforward.

[Activity] Using DataSets instead of RDD's
07:23
+ Machine Learning with MLLib
5 lectures 38:49

MLLib offers several distributed machine learning algorithms that you can run on a Spark cluster. We'll cover what MLLib can do and how it fits in.

Introducing MLLib
09:18
If you have trouble running the following activity...
00:31

We'll use MLLib's Alternating Least Squares recommender algorithm to produce movie recommendations using our MovieLens ratings data. The results are... unexpected!

[Activity] Using MLLib to Produce Movie Recommendations
14:35
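
The RDD-based ALS API boils down to very little code (a sketch - the rank and iteration counts are tuning parameters, and userID is whoever you want recommendations for):

    import org.apache.spark.mllib.recommendation._

    // ratings is an RDD of Rating(userID, movieID, rating) objects
    val model = ALS.train(ratings, 8, 20)                      // rank = 8, iterations = 20
    val recommendations = model.recommendProducts(userID, 10)  // top 10 movies for this user
    recommendations.foreach(println)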

A brief overview of what linear regression is and how it works, followed by a hands-on example of finding a regression and applying it to fabricated page speed vs. revenue data.

[Activity] Linear Regression with MLLib
05:55

Spark 2 makes DataFrames the preferred API for MLLib. Let's re-write our linear regression example, this time using Spark's DataFrame MLLib API.

Preview 08:30
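
With the DataFrame API, the regression looks like this (a sketch - assumes trainingDF and testDF each have "label" and "features" columns, and the hyperparameters are illustrative):

    import org.apache.spark.ml.regression.LinearRegression

    val lir = new LinearRegression()
      .setRegParam(0.3)         // regularization strength
      .setElasticNetParam(0.8)  // mix of L1 and L2 regularization
      .setMaxIter(100)
    val model = lir.fit(trainingDF)            // learn the coefficients
    val predictions = model.transform(testDF)  // adds a "prediction" column
    predictions.show()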
+ Intro to Spark Streaming
3 lectures 26:54

Spark Streaming allows you to create Spark driver scripts that run indefinitely, continually processing data as it streams in! We'll cover how it works and what it can do.

Spark Streaming Overview
09:53
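
The skeleton of a Spark Streaming job (a sketch - the socket source is illustrative; the course's real example streams from Twitter):

    import org.apache.spark.streaming._

    val ssc = new StreamingContext(sc, Seconds(1))       // one-second micro-batches
    val lines = ssc.socketTextStream("localhost", 9999)  // a DStream of text lines
    val counts = lines.flatMap(_.split(" ")).map((_, 1)).reduceByKey(_ + _)
    counts.print()
    ssc.start()             // nothing happens until start() is called
    ssc.awaitTermination()  // run until explicitly stopped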

As a hands-on example of using Spark Streaming, we'll set up a Twitter developer account, and run a script that will keep track of the most popular hashtags from the past five minutes in real time! Plus some ideas for extending this script on your own.

Preview 12:44

Spark 2.0 introduced experimental support for Structured Streaming, a new DataFrame-based API for writing continuous applications.

Structured Streaming
04:17
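
Structured Streaming treats a stream as an unbounded DataFrame (a sketch - assumes an existing SparkSession and a directory of JSON logs to monitor):

    val accessLines = spark.readStream.json("logs")  // new files in "logs" appear as new rows
    val statusCounts = accessLines.groupBy("status").count()
    val query = statusCounts.writeStream
      .outputMode("complete")  // re-emit the full aggregation on each trigger
      .format("console")
      .start()
    query.awaitTermination()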
+ Intro to GraphX
2 lectures 19:37

We cover Spark's GraphX library and how it works, followed by a strategy for re-implementing breadth-first-search using GraphX and its Pregel API.

Preview 10:38
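
The Pregel-based BFS strategy looks roughly like this (a sketch - graph and rootID are assumed to exist, and the 10-iteration cap is illustrative):

    import org.apache.spark.graphx._

    // Start every vertex at "infinity" except the starting character
    val initialGraph = graph.mapVertices((id, _) =>
      if (id == rootID) 0.0 else Double.PositiveInfinity)
    val bfs = initialGraph.pregel(Double.PositiveInfinity, 10)(
      (id, attr, msg) => math.min(attr, msg),  // keep the shortest distance seen so far
      triplet => {                             // offer each neighbor our distance + 1
        if (triplet.srcAttr + 1 < triplet.dstAttr)
          Iterator((triplet.dstId, triplet.srcAttr + 1))
        else
          Iterator.empty
      },
      (a, b) => math.min(a, b))                // merge competing messages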

We'll use GraphX and Pregel to recreate our earlier results analyzing the superhero social network data - but with a lot less code!

[Activity] Superhero Degrees of Separation using GraphX
08:59
+ You Made It! Where to Go from Here.
2 lectures 05:11

You made it to the end! Here are some book recommendations if you want to learn more, as well as some career advice on landing a job in "big data".

Learning More, and Career Tips
04:15
Bonus Lecture: More courses to explore!
00:56
Requirements
  • Some prior programming or scripting experience is required. A crash course in Scala is included, but you need to know the fundamentals of programming in order to pick it up.
  • You will need a desktop PC and an Internet connection. The course is created with Windows in mind, but users comfortable with MacOS or Linux can use the same tools.
  • The software needed for this course is freely available, and I'll walk you through downloading and installing it.
Description

New! Updated for Spark 3.0.0!

“Big data" analysis is a hot and highly valuable skill – and this course will teach you the hottest technology in big data: Apache Spark. Employers including AmazonEBayNASA JPL, and Yahoo all use Spark to quickly extract meaning from massive data sets across a fault-tolerant Hadoop cluster. You'll learn those same techniques, using your own Windows system right at home. It's easier than you might think, and you'll be learning from an ex-engineer and senior manager from Amazon and IMDb.

Spark works best when using the Scala programming language, and this course includes a crash course in Scala to get you up to speed quickly. For those more familiar with Python, however, a Python version of this class is also available: "Taming Big Data with Apache Spark and Python - Hands On".

In this course, you'll learn and master the art of framing data analysis problems as Spark problems through over 20 hands-on examples, and then scale them up to run on cloud computing services.

  • Learn the concepts of Spark's Resilient Distributed Datasets

  • Get a crash course in the Scala programming language

  • Develop and run Spark jobs quickly using Scala

  • Translate complex analysis problems into iterative or multi-stage Spark scripts

  • Scale up to larger data sets using Amazon's Elastic MapReduce service

  • Understand how Hadoop YARN distributes Spark across computing clusters

  • Practice using other Spark technologies, like Spark SQL, DataFrames, DataSets, Spark Streaming, and GraphX

By the end of this course, you'll be running code that analyzes gigabytes' worth of information – in the cloud – in a matter of minutes.

We'll have some fun along the way. You'll get warmed up with some simple examples of using Spark to analyze movie ratings data and text in a book. Once you've got the basics under your belt, we'll move to some more complex and interesting tasks. We'll use a million movie ratings to find movies that are similar to each other, and you might even discover some new movies you like in the process! We'll analyze a social graph of superheroes, and learn who the most “popular” superhero is – and develop a system to find “degrees of separation” between superheroes. Are all Marvel superheroes within a few degrees of being connected to Spider-Man? You'll find the answer.

This course is very hands-on; you'll spend most of your time following along with the instructor as we write, analyze, and run real code together – both on your own system, and in the cloud using Amazon's Elastic MapReduce service. 7.5 hours of video content is included, with over 20 real examples of increasing complexity you can build, run and study yourself. Move through them at your own pace, on your own schedule. The course wraps up with an overview of other Spark-based technologies, including Spark SQL, Spark Streaming, and GraphX.

Enroll now, and enjoy the course!


"I studied Spark for the first time using Frank's course "Apache Spark 2 with Scala - Hands On with Big Data!". It was a great starting point for me,  gaining knowledge in Scala and most importantly practical examples of Spark applications. It gave me an understanding of all the relevant Spark core concepts,  RDDs, Dataframes & Datasets, Spark Streaming, AWS EMR. Within a few months of completion, I used the knowledge gained from the course to propose in my current company to  work primarily on Spark applications. Since then I have continued to work with Spark. I would highly recommend any of Franks courses as he simplifies concepts well and his teaching manner is easy to follow and continue with!  " - Joey Faherty

Who this course is for:
  • Software engineers who want to expand their skills into the world of big data processing on a cluster
  • If you have no previous programming or scripting experience, you'll want to take an introductory programming course first.