Taming Big Data with Spark Streaming and Scala - Hands On!

Learn to process massive streams of data in real time on a cluster with Spark Streaming.
4.6 (499 ratings)
3,458 students enrolled
$19
$100
81% off
Take This Course
  • Lectures 36
  • Length 6 hours
  • Skill Level All Levels
  • Languages English, captions
  • Includes Lifetime access
    30 day money back guarantee!
    Available on iOS and Android
    Certificate of Completion

About This Course

Published 5/2016. English, closed captions available.

Course Description

"Big Data" analysis is a hot and highly valuable skill. Thing is, "big data" never stops flowing! Spark Streaming is a new and quickly developing technology for processing massive data sets as they are created - why wait for some nightly analysis to run when you can constantly update your analysis in real time, all the time? Whether it's clickstream data from a big website, sensor data from a massive "Internet of Things" deployment, financial data, or something else - Spark Streaming is a powerful technology for transforming and analyzing that data right when it is created, all the time.

You'll be learning from an ex-engineer and senior manager from Amazon and IMDb.

This course gets your hands on real live Twitter data, simulated streams of Apache access logs, and even data used to train machine learning models! You'll write and run real Spark Streaming jobs right at home on your own PC, and toward the end of the course, we'll show you how to take those jobs to a real Hadoop cluster and run them in a production environment too.

Across over 30 lectures and almost 6 hours of video content, you'll:

  • Get a crash course in the Scala programming language
  • Learn how Apache Spark operates on a cluster
  • Set up discretized streams with Spark Streaming and transform them as data is received
  • Analyze streaming data over sliding windows of time
  • Maintain stateful information across streams of data
  • Connect Spark Streaming with highly scalable sources of data, including Kafka, Flume, and Kinesis
  • Dump streams of data in real-time to NoSQL databases such as Cassandra
  • Run SQL queries on streamed data in real time
  • Train machine learning models in real time with streaming data, and use them to make predictions that keep getting better over time
  • Package, deploy, and run self-contained Spark Streaming code on a real Hadoop cluster using Amazon Elastic MapReduce

This course is very hands-on, filled with achievable activities and exercises to reinforce your learning. By the end of this course, you'll be confidently creating Spark Streaming scripts in Scala, and be prepared to tackle massive streams of data in a whole new way. You'll be surprised at how easy Spark Streaming makes it!

What are the requirements?

  • To follow along with the examples, you'll need a personal computer. The course is filmed using Windows 10, but the tools we install are available for Linux and macOS as well.
  • We'll walk through installing the required software in the first lecture: The Scala IDE, Spark, and a JDK.
  • My "Taming Big Data with Apache Spark - Hands On!" would be a helpful introduction to Spark in general, but it is not required for this course. A quick introduction to Spark is included.
  • The course includes a crash course in the Scala programming language if you're new to it; if you already know Scala, then great.

What am I going to get from this course?

  • Process massive streams of real-time data using Spark Streaming
  • Create Spark applications using the Scala programming language
  • Integrate Spark Streaming with data sources, including Kafka, Flume, and Kinesis
  • Output transformed real-time data to Cassandra or file systems
  • Integrate Spark Streaming with Spark SQL to query streaming data in real time
  • Train machine learning models with streaming data, and use those models for real-time predictions
  • Ingest Apache access log data and transform streams of it
  • Receive real-time streams of Twitter feeds
  • Maintain stateful data across a continuous stream of input data
  • Query streaming data across sliding windows of time

What is the target audience?

  • Students with some prior programming or scripting ability SHOULD take this course.
  • If you're working for a company with "big data" that is being generated continuously, or hope to work for one, this course is for you.
  • Students with no prior software engineering or programming experience should seek an introductory programming course first.

What do you get with this course?

Not for you? No problem.
30 day money back guarantee.

Forever yours.
Lifetime access.

Learn on the go.
Desktop, iOS and Android.

Get rewarded.
Certificate of completion.

Curriculum

Section 1: Getting Started
15:20

A brief introduction to the course, and then we'll get your development environment for Spark and Scala all set up on your desktop. A quick test application will confirm Spark is working on your system! Remember - be sure to install Spark 1.6.2 for this course.

11:24

Get set up with a Twitter developer account, and run your first Spark Streaming application to listen to and print out live Tweets as they happen!
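To give you a feel for it, here's a minimal sketch of that first "print tweets" application (assuming your Twitter credentials are already configured for twitter4j; the lecture's actual code may differ in the details):

    import org.apache.spark.SparkConf
    import org.apache.spark.streaming.{Seconds, StreamingContext}
    import org.apache.spark.streaming.twitter.TwitterUtils

    object PrintTweets {
      def main(args: Array[String]) {
        // One-second micro-batches, running locally on every core
        val conf = new SparkConf().setMaster("local[*]").setAppName("PrintTweets")
        val ssc = new StreamingContext(conf, Seconds(1))

        // A DStream of twitter4j Status objects from the live sample stream
        val tweets = TwitterUtils.createStream(ssc, None)
        tweets.map(status => status.getText).print()  // print each batch of tweet text

        ssc.start()
        ssc.awaitTermination()
      }
    }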

Section 2: A Crash Course in Scala
11:26

We start our crash course in the Scala programming language by covering some basics of the language: types and variables, printing, and boolean comparisons.
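A few lines in the spirit of what's covered (illustrative, not the lecture's exact code):

    val name: String = "Spark"      // an immutable value, with an explicit type
    var count = 0                   // a mutable variable; the Int type is inferred
    count += 1
    println(s"Hello, $name! count = $count")  // printing with string interpolation
    println(1 == 1.0)               // true - == compares values
    println("big" != "data")        // true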

09:41

Part 2 of our introduction to the basics of Scala programming, and a simple exercise to get you writing your own Scala code.

07:18

Our Scala crash course continues, illustrating various means of flow control in Scala. For loops, do/while loops, while loops, etc.
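For example (a quick illustrative sketch):

    for (i <- 1 to 3) println(i)              // for loop over a Range

    var x = 10
    while (x > 7) { println(x); x -= 1 }      // while loop

    var y = 0
    do { println(y); y += 1 } while (y < 2)   // do/while always runs at least once

    println(if (x == 7) "done" else "not done")  // if is an expression in Scala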

08:47

Scala is a functional programming language, and so understanding how functions work and are treated in Scala is hugely important! This lecture covers the fundamentals, and lets you put them into practice.
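The key idea in a nutshell: functions are first-class values in Scala, so they can be passed into other functions. An illustrative sketch:

    def squareIt(x: Int): Int = x * x

    // A function that takes another function as a parameter
    def transformInt(x: Int, f: Int => Int): Int = f(x)

    println(transformInt(3, squareIt))        // 9 - pass a named function
    println(transformInt(3, x => x * x * x))  // 27 - pass an anonymous function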

16:38

We wrap up our Scala crash course with the data structures commonly used in Spark with Scala: tuples, lists, and maps.
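For instance:

    val tuple = ("Spark", 2016)            // a tuple; access fields as tuple._1, tuple._2
    println(tuple._1)

    val shipList = List("Enterprise", "Defiant", "Voyager")
    println(shipList.map(_.toUpperCase))   // map transforms every element of a list

    val shipMap = Map("Kirk" -> "Enterprise", "Sisko" -> "Defiant")
    println(shipMap("Kirk"))               // look up a value by key
    println(shipMap.contains("Picard"))    // false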

Section 3: Spark Streaming Concepts
07:06

Before you can learn about Spark Streaming, you need to understand how Spark itself works at a high level! This covers the why & how of Apache Spark, of which Spark Streaming is a component.

10:40

The fundamental object of Spark programming is the Resilient Distributed Dataset (RDD), and this is used not just in Spark but also within Spark Streaming scripts. This lecture explains what they are, and what you can do with them.
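In code, the pattern looks like this (a tiny sketch assuming an existing SparkContext named sc, as in the Spark shell):

    val numbers = sc.parallelize(List(1, 2, 3, 4))  // distribute a local collection as an RDD
    val squares = numbers.map(x => x * x)           // a transformation - lazily evaluated
    println(squares.collect().mkString(", "))       // an action - triggers the computation: 1, 4, 9, 16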

08:17

Let's walk through and actually run a simple Spark script that counts the number of occurrences of each word in a book.
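The essence of that script is just a few lines (a sketch; "book.txt" is a placeholder path, and the lecture's version may differ):

    import org.apache.spark.{SparkConf, SparkContext}

    val conf = new SparkConf().setMaster("local[*]").setAppName("WordCount")
    val sc = new SparkContext(conf)

    val words = sc.textFile("book.txt").flatMap(_.split("\\W+"))        // split each line into words
    val counts = words.map(w => (w.toLowerCase, 1)).reduceByKey(_ + _)  // count each word

    // Flip to (count, word) so we can sort by frequency, then show the top 10
    counts.map { case (word, count) => (count, word) }
      .sortByKey(ascending = false)
      .take(10)
      .foreach(println)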

06:32

We finally have all the prerequisite knowledge to start talking about Spark Streaming itself in more detail! We'll cover how it works, what it's for, and its architecture.
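The core pattern, as a sketch: a StreamingContext chops a live input stream into small RDDs on a fixed batch interval, giving you a "discretized stream" (DStream) to transform:

    import org.apache.spark.SparkConf
    import org.apache.spark.streaming.{Seconds, StreamingContext}

    val conf = new SparkConf().setMaster("local[*]").setAppName("StreamDemo")
    val ssc = new StreamingContext(conf, Seconds(1))     // one RDD per 1-second batch

    val lines = ssc.socketTextStream("localhost", 9999)  // DStream of text lines from a socket
    lines.count().print()                                // how many lines arrived in each batch

    ssc.start()            // nothing actually runs until you start the context
    ssc.awaitTermination()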

05:09

Now that we know more, let's go revisit that first Spark Streaming application we ran in lecture 2, and dive into how it really works.

05:00

Windowing allows you to analyze streaming data over a sliding window of time, which lets you do much more than just transform streaming data and store it someplace else. We'll cover the concepts of the batch, window, and slide intervals, and how they work together to let you aggregate streaming data over some period of time.
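In code, those three intervals show up as parameters. A sketch building on the socket example above (the inverse-reduce variant shown here also requires checkpointing to be enabled):

    val wordCounts = lines.flatMap(_.split(" ")).map((_, 1))

    val windowedCounts = wordCounts.reduceByKeyAndWindow(
      (a: Int, b: Int) => a + b,  // fold new values into the window
      (a: Int, b: Int) => a - b,  // efficiently subtract values that slide out of it
      Seconds(30),                // window interval: aggregate over the last 30 seconds
      Seconds(10))                // slide interval: recompute every 10 seconds
    windowedCounts.print()        // the batch interval came from the StreamingContext (1 second)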

06:06

How can Spark Streaming do so much work continuously in a reliable manner? We'll uncover some of its tricks for reliability, as well as tips for configuring Spark Streaming to be as reliable as possible.
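Two of those tricks in miniature (the checkpoint directory here is a placeholder):

    ssc.checkpoint("checkpoint/")  // periodically save stream state to fault-tolerant storage

    // And in the SparkConf, enable the receiver write-ahead log so received
    // data can be replayed after a failure:
    // conf.set("spark.streaming.receiver.writeAheadLog.enable", "true")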

Section 4: Spark Streaming Examples with Twitter
13:23

We'll build on our "print tweets" example to actually store the incoming Tweets to disk, and illustrate how Spark Streaming can handle file output.

08:22

Compute the average length of a Tweet, using windowing in Spark Streaming.

14:50

This is a fun one! We'll track the most popular hashtags in Twitter over time, and watch how they change in real time!

Section 5: Spark Streaming Examples with Clickstream / Apache Access Log Data
13:27

We'll simulate an incoming stream of Apache access logs, and use Spark Streaming to keep track of the most-requested web pages in real time!

11:56

This example will listen to an Apache access log stream, and raise an alarm in real time if the server returns too many errors.

10:18

We'll integrate Spark Streaming with Spark SQL, allowing us to run SQL queries on data as it is streamed in! Again we will use Apache logs as an example.
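The pattern at the heart of this lecture: convert each micro-batch's RDD into a DataFrame and query it with SQL. A sketch against Spark 1.6, where `requests` stands in for a DStream of a hypothetical Record case class:

    import org.apache.spark.sql.SQLContext

    case class Record(url: String, status: Int)

    requests.foreachRDD { rdd =>
      val sqlContext = SQLContext.getOrCreate(rdd.sparkContext)
      import sqlContext.implicits._

      rdd.toDF().registerTempTable("requests")  // expose this batch as a SQL table
      sqlContext.sql("SELECT status, COUNT(*) FROM requests GROUP BY status").show()
    }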

08:27

Spark 2.0 introduced experimental support for Structured Streaming, a new Dataset-based API for Spark Streaming that is bound to become increasingly important. Learn how it works.

11:24

As an example, we'll stream Apache access logs in from a directory, and use Structured Streaming to count up status codes over a one-hour moving window.
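A sketch of what that looks like with the Structured Streaming API (assuming Spark 2.0 and a streaming DataFrame `accessLogs` with `timestamp` and `status` columns already parsed from the monitored directory):

    import org.apache.spark.sql.functions.{col, window}

    val statusCounts = accessLogs
      .groupBy(window(col("timestamp"), "1 hour", "10 minutes"), col("status"))  // 1-hour window, sliding every 10 minutes
      .count()

    val query = statusCounts.writeStream
      .outputMode("complete")   // emit the full updated result table on each trigger
      .format("console")
      .start()
    query.awaitTermination()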

Section 6: Integrating with Other Systems
12:20

Apache Kafka is a popular and robust technology for publishing messages across a cluster on a large scale. We'll show how to get Spark Streaming to listen to Kafka topics, and process them in real time.
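Here's the shape of the direct-connection approach in Spark 1.6 (the broker address and topic name are placeholders, and `ssc` is your StreamingContext):

    import kafka.serializer.StringDecoder
    import org.apache.spark.streaming.kafka.KafkaUtils

    val kafkaParams = Map("metadata.broker.list" -> "localhost:9092")
    val topics = Set("accessLogs")

    // Each Kafka message arrives as a (key, value) pair; keep just the value
    val lines = KafkaUtils.createDirectStream[String, String, StringDecoder, StringDecoder](
      ssc, kafkaParams, topics).map(_._2)
    lines.print()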

08:51

Flume is a popular technology for publishing log information at large scale, especially on a Hadoop cluster. We'll illustrate how to set up both push-based and pull-based Flume configurations with Spark Streaming, and discuss the tradeoffs of each.
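The two styles differ by just one call (host and port are placeholders, and `ssc` is your StreamingContext):

    import org.apache.spark.streaming.flume.FlumeUtils

    // Push-based: Spark acts as an Avro receiver that Flume pushes events to
    val pushedEvents = FlumeUtils.createStream(ssc, "localhost", 9988)

    // Pull-based: Spark polls a custom Flume sink, which is the more reliable option
    val polledEvents = FlumeUtils.createPollingStream(ssc, "localhost", 9988)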

05:29

Amazon's Kinesis Streaming service is basically Kafka on AWS. If you're working with an AWS/EC2 cluster, you'll want to know how to integrate Spark Streaming with Kinesis - and that's what this lecture covers.

06:55

What if you need to integrate Spark Streaming with some proprietary system that does not have an existing connection library? Well, you can always write your own Receiver class. This example shows you how and actually lets you build and run one.
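The skeleton is small: extend Receiver, implement onStart() and onStop(), and call store() whenever your source hands you data. A hypothetical sketch:

    import org.apache.spark.storage.StorageLevel
    import org.apache.spark.streaming.receiver.Receiver

    class MyReceiver extends Receiver[String](StorageLevel.MEMORY_AND_DISK_2) {
      def onStart() {
        // Spawn a thread that reads from your proprietary source.
        new Thread("My Receiver") {
          override def run() {
            while (!isStopped()) {
              store("a record from my custom source")  // hand data to Spark
            }
          }
        }.start()
      }
      def onStop() {}  // the thread above exits once isStopped() becomes true
    }

    // Hook it up: val customStream = ssc.receiverStream(new MyReceiver())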

07:34

Cassandra is a popular "NoSQL" database that can provide real-time applications with fast access to massive data sets. Dumping data transformed by Spark Streaming into a Cassandra database can expose that data to your larger real-time services. We'll show you how, and actually run a simple example.
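With the DataStax spark-cassandra-connector, the write itself is one line per batch. A sketch, where the keyspace, table, and `requests` DStream are placeholders, and spark.cassandra.connection.host is assumed to be set in your SparkConf:

    import com.datastax.spark.connector._

    requests.foreachRDD { rdd =>
      // Write each micro-batch to the logs.requests table, mapping the named columns
      rdd.saveToCassandra("logs", "requests", SomeColumns("url", "status"))
    }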

Section 7: Advanced Spark Streaming Examples
15:07

Spark has the ability to track arbitrary state across streams of data as they come in, such as web sessions, running totals, etc. This example shows you how it all works, and challenges you to track your own state using our example as a baseline.
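The classic tool for this is updateStateByKey, which folds each batch's new values into persistent per-key state (checkpointing must be enabled). A minimal running-count sketch, reusing the socket-source `lines` DStream from the earlier sketch:

    // newValues: this batch's values for a key; state: the running total so far
    def updateCount(newValues: Seq[Int], state: Option[Int]): Option[Int] =
      Some(state.getOrElse(0) + newValues.sum)

    val runningCounts = lines.flatMap(_.split(" "))
      .map((_, 1))
      .updateStateByKey(updateCount)  // counts accumulate across all batches, forever
    runningCounts.print()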

15:36

Spark Streaming integrates with some of the capabilities of Spark's MLlib (Machine Learning Library). This example builds a real-time K-Means clustering application: unsupervised machine learning that continually gets better as more training data feeds into it.
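A sketch of the streaming K-Means pattern from MLlib (the directory, dimensionality, and cluster count are placeholders):

    import org.apache.spark.mllib.clustering.StreamingKMeans
    import org.apache.spark.mllib.linalg.Vectors

    // Vectors.parse reads text like "[1.0, 2.0]" into MLlib vectors
    val trainingData = ssc.textFileStream("training/").map(Vectors.parse)

    val model = new StreamingKMeans()
      .setK(5)                  // look for 5 clusters
      .setDecayFactor(1.0)      // 1.0 = weight all data equally, old and new
      .setRandomCenters(2, 0.0) // 2-dimensional data, starting from random centers

    model.trainOn(trainingData) // the clusters keep refining as new data streams in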

11:50

Spark Streaming can also feed data in real time to linear regression models that get better over time as more data is fed into them. This example shows linear regression in action with Spark Streaming.
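The pattern mirrors the K-Means example above (the feature count and path here are placeholders):

    import org.apache.spark.mllib.linalg.Vectors
    import org.apache.spark.mllib.regression.{LabeledPoint, StreamingLinearRegressionWithSGD}

    // LabeledPoint.parse reads text like "(1.0, [2.5])" into labeled training points
    val trainingData = ssc.textFileStream("regression/").map(LabeledPoint.parse)

    val model = new StreamingLinearRegressionWithSGD()
      .setInitialWeights(Vectors.zeros(1))  // one feature; weights refine with every batch

    model.trainOn(trainingData)
    model.predictOnValues(trainingData.map(lp => (lp.label, lp.features))).print()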

Section 8: Spark Streaming in Production
10:47

Your production applications won't be run from within the Scala IDE; you'll need to run them from a command line, and potentially on a cluster. The spark-submit command is used for this. We'll show you how to package up your application and run it using spark-submit from a command prompt.
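A typical invocation looks something like this (the class name, JAR, and master setting are placeholders for your own application):

    spark-submit --class com.example.WordCount --master local[*] WordCount.jar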

10:49

If your Spark Streaming application has external library dependencies that might not be already present on every machine in your cluster, the SBT tool can manage those dependencies for you, and package them into the JAR file you run with spark-submit. We'll show you how it works with a real example.
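A minimal build.sbt sketch for this setup (versions are illustrative, matching the Spark 1.6.2 / Scala 2.10 environment used in the course):

    name := "WordCount"
    version := "1.0"
    scalaVersion := "2.10.6"

    libraryDependencies ++= Seq(
      // Marked "provided" because the cluster already supplies Spark itself
      "org.apache.spark" %% "spark-core"      % "1.6.2" % "provided",
      "org.apache.spark" %% "spark-streaming" % "1.6.2" % "provided"
    )

With the sbt-assembly plugin added, running "sbt assembly" produces a single JAR that bundles your non-"provided" dependencies, ready for spark-submit.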

13:13

We'll run our simple word count example on a real cluster, using Amazon's Elastic MapReduce service! This just shows you what's involved in running a Spark Streaming job on a real cluster as opposed to your desktop; there are a few parameters to spark-submit you need to worry about, and getting your scripts and data in the right place is also something you need to deal with.

12:35

Spark jobs rarely run perfectly, if at all, on the first try - some tuning and debugging is usually required, and arriving at the right scale of your cluster is also necessary. We'll cover some performance tips, and how to troubleshoot what's going on with a Spark Streaming job running on a cluster.

Section 9: You Made It!
03:44

Want to learn more about Spark Streaming? Here are a few books and other resources I've found valuable.

Bonus Lecture: Discounts on my other courses!
01:41

Instructor Biography

Frank Kane, Data Miner and Software Engineer

Frank Kane spent 9 years at Amazon and IMDb, developing and managing the technology that automatically delivers product and movie recommendations to hundreds of millions of customers, all the time. Frank holds 17 issued patents in the fields of distributed computing, data mining, and machine learning. In 2012, Frank left to start his own successful company, Sundog Software, which focuses on virtual reality environment technology, and teaching others about big data analysis.

Ready to start learning?
Take This Course