Scalable programming with Scala and Spark

Use Scala and Spark for data analysis, machine learning and analytics
Best Seller
4.7 (107 ratings)
2,364 students enrolled
Created by Loony Corn
Last updated 3/2017
English
Current price: $10 Original price: $50 Discount: 80% off
30-Day Money-Back Guarantee
Includes:
  • 9 hours on-demand video
  • 96 Supplemental Resources
  • Full lifetime access
  • Access on mobile and TV
  • Certificate of Completion
What Will I Learn?
  • Use Spark for a variety of analytics and Machine Learning tasks
  • Understand functional programming constructs in Scala
  • Implement complex algorithms like PageRank or Music Recommendations
  • Work with a variety of datasets from Airline delays to Twitter, Web graphs, Social networks and Product Ratings
  • Use all the different features and libraries of Spark: RDDs, Dataframes, Spark SQL, MLlib, Spark Streaming and GraphX
  • Write code in Scala REPL environments and build Scala applications with an IDE
Requirements
  • All examples work with or without Hadoop. If you would like to use Spark with Hadoop, you'll need to have Hadoop installed (either in pseudo-distributed or cluster mode).
  • The course assumes experience with one of the popular object-oriented programming languages, such as Java or C++
Description

Taught by a four-person team that includes two Stanford-educated ex-Googlers and two ex-Flipkart lead analysts. Between them, the team has decades of practical experience working with Java and with billions of rows of data.

Get your data to fly using Spark and Scala for analytics, machine learning and data science 

Let’s parse that.

What's Spark? If you are an analyst or a data scientist, you're used to juggling multiple systems for working with data: SQL, Python, R, Java and so on. With Spark, you have a single engine where you can explore and play with large amounts of data, run machine learning algorithms, and then use the same system to productionize your code.

Scala: Scala is a general-purpose programming language, like Java or C++. Its functional-programming nature and the availability of a REPL environment make it particularly suited for a distributed computing framework like Spark.

Analytics: Using Spark and Scala you can analyze and explore your data in an interactive environment with fast feedback. The course will show how to leverage the power of RDDs and Dataframes to manipulate data with ease. 

Machine Learning and Data Science: Spark's core functionality and built-in libraries make it easy to implement complex algorithms like recommendations with very few lines of code. We'll cover a variety of algorithms and datasets, including PageRank, MapReduce and graph data.

What's Covered:

Scala Programming Constructs: Classes, Traits, First Class Functions, Closures, Currying, Case Classes
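
For a taste of these constructs, here is a small, self-contained Scala sketch; every name in it is illustrative, not from the course:

    trait Scored { def score: Double }                          // a trait
    case class Rating(user: Int, score: Double) extends Scored  // a case class
    val isHigh: Rating => Boolean = _.score >= 4.0              // a first-class function value
    def add(a: Int)(b: Int): Int = a + b                        // a curried function
    val addTen = add(10) _                                      // partial application
    val threshold = 4.0
    val aboveThreshold = (r: Rating) => r.score >= threshold    // a closure over threshold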

Lots of cool stuff ..

  • Music Recommendations using Alternating Least Squares and the Audioscrobbler dataset
  • Dataframes and Spark SQL to work with Twitter data
  • Using the PageRank algorithm with the Google web graph dataset
  • Using Spark Streaming for stream processing 
  • Working with graph data using the Marvel Social network dataset

.. and of course all the Spark basic and advanced features: 

  • Resilient Distributed Datasets, Transformations (map, filter, flatMap), Actions (reduce, aggregate) 
  • Pair RDDs, reduceByKey, combineByKey
  • Broadcast and Accumulator variables 
  • Spark for MapReduce 
  • The Java API for Spark 
  • Spark SQL, Spark Streaming, MLlib and GraphX


Using discussion forums

Please use the discussion forums on this course to engage with other students and to help each other out. Unfortunately, much as we would like to, it is not possible for us at Loonycorn to respond to individual questions from students :-(

We're super small and self-funded with only 2 people developing technical video content. Our mission is to make high-quality courses available at super low prices.

The only way to keep our prices this low is to *NOT offer additional technical support over email or in-person*. The truth is, direct support is hugely expensive and just does not scale.

We understand that this is not ideal and that a lot of students might benefit from this additional support. Hiring resources for additional support would make our offering much more expensive, thus defeating our original purpose.

It is a hard trade-off.

Thank you for your patience and understanding!

Who is the target audience?
  • Yep! Engineers who want to use a distributed computing engine for batch or stream processing or both
  • Yep! Analysts who want to leverage Spark for analyzing interesting datasets
  • Yep! Data Scientists who want a single engine for analyzing and modelling data as well as productionizing it.
Curriculum For This Course
54 Lectures
09:01:42
You, This Course and Us
2 Lectures 11:59

Install Scala and use it both in the shell and with an IDE

Installing Scala and Hello World
09:43
Introduction to Spark
8 Lectures 01:25:53

He has a great categorization for insights in data, really!

There is a profound truth in here which data scientists and analysts have known for years.

Preview 08:45

Explore, investigate and find patterns in data. Build fully fledged, scalable production systems. All using the same environment.

Why is Spark so cool?
12:23

RDDs are pretty magical: they are the core programming abstraction in Spark

An introduction to RDDs - Resilient Distributed Datasets
09:39
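
To make the abstraction concrete, a minimal sketch in the Spark shell (where the SparkContext sc is predefined; the data is made up):

    val nums = sc.parallelize(List(1, 2, 3, 4, 5))  // distribute a local collection as an RDD
    val squares = nums.map(n => n * n)              // a transformation: defined lazily
    println(squares.collect().mkString(", "))       // an action: triggers the computation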

Spark is even more powerful because of the packages that come with it: Spark SQL, Spark Streaming, MLlib and GraphX.

Built-in libraries for Spark
15:37

Let's get started by installing Spark. We'll also configure Spark to work with IPython Notebook

Installing Spark
11:44

Start munging data using the Spark REPL environment.

The Spark Shell
06:55

We've learnt a little bit about how Spark and RDDs work. Let's see it in action! 

See it in Action : Munging Airlines Data with Spark
03:44

Operate on data: transform it to extract information, then retrieve results.

Transformations and Actions
17:06
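
A sketch of the distinction, assuming the Spark shell and a hypothetical input file:

    val lines = sc.textFile("data/airlines.csv")       // path is illustrative
    val delayed = lines.filter(_.contains("DELAYED"))  // transformation: nothing runs yet
    println(delayed.count())                           // action: the job executes here
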
Resilient Distributed Datasets
8 Lectures 01:12:23

RDDs are very intuitive to use. What are some of the characteristics that make RDDs performant, resilient and efficient? 

Preview 12:35

Lazy evaluation of RDDs is possible because RDDs can reconstruct themselves. They know where they came from.

RDD Characteristics: Lineage, RDDs know where they came from
06:06
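
You can inspect this lineage yourself with toDebugString; a quick shell sketch (hypothetical file):

    val words = sc.textFile("data/input.txt").flatMap(_.split(" "))
    // each RDD records the chain of transformations that produced it,
    // so a lost partition can be recomputed from its parents
    println(words.toDebugString)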

A quick overview of all operations and transformations on RDDs

What can you do with RDDs?
11:08

Parse a CSV file, transform it using the map() operation, and create Flight objects on the fly.

Create your first RDD from a file
14:54
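
A minimal sketch of the idea; the Flight case class, file path and column positions here are assumptions, not necessarily the course's exact schema:

    case class Flight(origin: String, dest: String, distance: Int, delay: Int)

    val flights = sc.textFile("data/flights.csv")            // hypothetical CSV path
      .map(_.split(","))                                     // parse each line into fields
      .map(f => Flight(f(0), f(1), f(2).toInt, f(3).toInt))  // build Flight objects on the fly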

Use the flights dataset to get interesting insights.

Average distance travelled by a flight using map() and reduce() operations
06:59
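
Building on the hypothetical flights RDD sketched earlier, the computation might look like:

    val totalDistance = flights.map(_.distance).reduce(_ + _)  // sum distances across the cluster
    val avgDistance = totalDistance.toDouble / flights.count()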

Cache RDDs in memory to optimize operations using persist()

Get delayed flights using filter(), cache data using persist()
06:10
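
A sketch, again using the hypothetical flights RDD:

    val delayedFlights = flights.filter(_.delay > 0)  // keep only delayed flights
    delayedFlights.persist()                          // cache in memory (default storage level)
    println(delayedFlights.count())                   // the first action materializes the cache
    // later actions on delayedFlights reuse the cached partitions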

Use the aggregate() operation to calculate average flight delays in one step. Much more compact than map() and reduce().

Average flight delay in one-step using aggregate()
12:21
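
The one-pass pattern carries a (sum, count) pair through the data; a sketch with the same hypothetical flights RDD:

    val (sum, count) = flights.map(_.delay).aggregate((0, 0))(
      (acc, delay) => (acc._1 + delay, acc._2 + 1),  // fold a value into a partition's accumulator
      (a, b) => (a._1 + b._1, a._2 + b._2))          // merge accumulators across partitions
    val avgDelay = sum.toDouble / count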

This is surprisingly simple!

Frequency histogram of delays using countByValue()
02:10
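
A sketch, bucketing the hypothetical flight delays into 10-minute bins:

    val histogram = flights.map(_.delay / 10 * 10).countByValue()  // Map of bin -> count, on the driver
    histogram.toSeq.sortBy(_._1).foreach(println)
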
Advanced RDDs: Pair Resilient Distributed Datasets
5 Lectures 50:30

Pair RDDs are special types of RDDs where every record is a key value pair. All normal actions and transformations apply to these in addition to some special ones.

Special Transformations and Actions
14:45
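
Creating a pair RDD is just a matter of mapping to key-value tuples; continuing the hypothetical flights example:

    val delaysByAirport = flights.map(f => (f.origin, f.delay))    // a pair RDD of (key, value)
    val totalDelayPerAirport = delaysByAirport.reduceByKey(_ + _)  // a pair-RDD-only transformation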

Pair RDDs are useful to get information on a per-key basis. Sales per city, delays per airport etc.

Average delay per airport, use reduceByKey(), mapValues() and join()
13:35
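
A three-step sketch of the pattern, using delaysByAirport from above:

    val sums   = delaysByAirport.reduceByKey(_ + _)                    // 1: total delay per airport
    val counts = delaysByAirport.mapValues(_ => 1).reduceByKey(_ + _)  // 2: flight count per airport
    val averages = sums.join(counts)                                   // 3: combine and divide
      .mapValues { case (sum, count) => sum.toDouble / count }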

Instead of three steps, use just one to get the average delay per airport.

Average delay per airport in one step using combineByKey()
08:23
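
The same result in a single pass over the data, sketched under the same assumptions:

    val averagesInOneStep = delaysByAirport.combineByKey(
      (delay: Int) => (delay, 1),                                     // create a (sum, count) combiner
      (acc: (Int, Int), delay: Int) => (acc._1 + delay, acc._2 + 1),  // merge a value into a combiner
      (a: (Int, Int), b: (Int, Int)) => (a._1 + b._1, a._2 + b._2)    // merge combiners across partitions
    ).mapValues { case (sum, count) => sum.toDouble / count }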


Looking up airport descriptions in a pair RDD can be done in many ways; understand how each works.

Lookup airport descriptions using lookup(), collectAsMap(), broadcast()
10:56
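
A sketch of the three approaches (the airports file and its columns are assumptions):

    val airports = sc.textFile("data/airports.csv")
      .map(_.split(","))
      .map(a => (a(0), a(1)))                 // (code, description) pairs

    airports.lookup("SFO")                    // runs a job to fetch one key's values
    val localMap = airports.collectAsMap()    // pulls the whole map down to the driver
    val bcastMap = sc.broadcast(localMap)     // ships a read-only copy to every executor
    flights.map(f => bcastMap.value.getOrElse(f.origin, "Unknown"))
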
Advanced Spark: Accumulators, Spark Submit, MapReduce, Behind The Scenes
5 Lectures 48:08

Accumulators are special variables which allow the main driver program to collect information from nodes on which the actual processing takes place.

Get information from individual processing nodes using accumulators
09:25
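
A minimal sketch using the Spark 2.x longAccumulator API to count malformed lines (file and record format are assumptions):

    val badLines = sc.longAccumulator("bad lines")      // a counter the driver can read
    val parsed = sc.textFile("data/flights.csv").flatMap { line =>
      val fields = line.split(",")
      if (fields.length < 4) { badLines.add(1); None }  // executors update the accumulator
      else Some(fields)
    }
    parsed.count()                                      // accumulators fill in when a job runs
    println(s"Malformed lines: ${badLines.value}")      // read the total back on the driver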

Spark is more than just the Read-Evaluate-Print Loop environment; it can run long-running programs as well.

Preview 07:11

Spark-Submit with Scala - A demo
06:09
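
For context, a minimal standalone application (all names are hypothetical) and the spark-submit invocation that might launch it:

    import org.apache.spark.{SparkConf, SparkContext}

    object LineCount {
      def main(args: Array[String]): Unit = {
        val conf = new SparkConf().setAppName("LineCount")
        val sc = new SparkContext(conf)
        println(sc.textFile(args(0)).count())  // count the lines of the given file
        sc.stop()
      }
    }

    // packaged into a jar, it could be launched with something like:
    //   spark-submit --class LineCount linecount.jar data/input.txt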

How does Spark submit jobs for distributed processing? How does the scheduler work? What does the cluster manager do? All this and more in this behind-the-scenes look.

Behind the scenes: What happens when a Spark script runs?
14:30

MapReduce is a powerful paradigm for distributed processing. Many tasks lend themselves well to this model and Spark has transformations which deal with this beautifully.

Running MapReduce operations
10:53
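
The canonical MapReduce example, word count, takes only a few lines in Spark; a shell sketch with illustrative paths:

    val counts = sc.textFile("data/input.txt")
      .flatMap(_.split("\\s+"))   // the "map" phase: emit individual words
      .map(word => (word, 1))
      .reduceByKey(_ + _)         // the "reduce" phase: sum counts per word
    counts.saveAsTextFile("out/wordcounts")
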
PageRank: Ranking Search Results
4 Lectures 39:12

The PageRank algorithm
06:15

This will be way simpler than the explanation.

Implement PageRank in Spark
09:45
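
A minimal sketch of the standard iterative PageRank loop in Spark, on a made-up three-page graph (10 iterations, damping factor 0.85):

    val links = sc.parallelize(Seq(
      ("a", Seq("b", "c")), ("b", Seq("c")), ("c", Seq("a")))).cache()  // page -> outgoing links
    var ranks = links.mapValues(_ => 1.0)                               // every page starts at rank 1.0
    for (_ <- 1 to 10) {
      val contribs = links.join(ranks).values.flatMap {
        case (urls, rank) => urls.map(url => (url, rank / urls.size))   // spread rank over out-links
      }
      ranks = contribs.reduceByKey(_ + _).mapValues(r => 0.15 + 0.85 * r)
    }
    ranks.collect().foreach(println)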

Optimize the algorithm by making joins more performant

Join optimization in PageRank using Custom Partitioning
06:28
Spark SQL
1 Lecture 15:47

Pretend your data is in a relational database using Dataframes. Dataframes are also RDDs, so you get the best of both worlds!

Dataframes: RDDs + Tables
15:47
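
A sketch, assuming Spark 2.x and a JSON file of tweets with a user field (both are assumptions):

    import org.apache.spark.sql.SparkSession

    val spark = SparkSession.builder.appName("tweets").getOrCreate()
    val tweets = spark.read.json("data/tweets.json")  // schema is inferred from the JSON
    tweets.printSchema()
    tweets.createOrReplaceTempView("tweets")          // expose the Dataframe to SQL
    spark.sql("SELECT user, COUNT(*) AS n FROM tweets GROUP BY user ORDER BY n DESC").show(10)
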
MLlib in Spark: Build a recommendations engine
4 Lectures 44:21

This is a family of algorithms which give recommendations based on user data and user preferences.

Preview 12:19

One type of collaborative filtering algorithm is latent factor analysis. There is some math here but don't worry, MLlib takes care of all this for you.

Latent Factor Analysis with the Alternating Least Squares method
11:39

Let's write a recommendation engine for music services

Music recommendations using the Audioscrobbler dataset
05:38

The code in Spark is surprisingly simple.

Implement code in Spark using MLlib
14:45
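
A sketch against the RDD-based MLlib ALS API; the Audioscrobbler layout (user, artist, play count, space-separated) and the user ID are assumptions:

    import org.apache.spark.mllib.recommendation.{ALS, Rating}

    val plays = sc.textFile("data/user_artist_data.txt")
      .map(_.split(" "))
      .map(a => Rating(a(0).toInt, a(1).toInt, a(2).toDouble))  // (user, product, rating)
    // play counts are implicit feedback, hence trainImplicit; rank 10, 5 iterations
    val model = ALS.trainImplicit(plays, 10, 5)
    model.recommendProducts(1000002, 5).foreach(println)        // top 5 artists for one user
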
Spark Streaming
3 Lectures 27:31

Spark can process streaming data in near real time using DStreams. 

Preview 09:55

A script to parse logs in real time

Implement stream processing in Spark using DStreams
09:19
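
A minimal DStream sketch: count lines arriving on a local socket in 5-second batches (the socket source is just for testing; sc is the shell's SparkContext):

    import org.apache.spark.streaming.{Seconds, StreamingContext}

    val ssc = new StreamingContext(sc, Seconds(5))      // batch interval of 5 seconds
    val logs = ssc.socketTextStream("localhost", 9999)  // a DStream of text lines
    logs.count().print()                                // per-batch line counts
    ssc.start()                                         // start receiving
    ssc.awaitTermination()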

Stateful transformations allow cumulative results across a stream using a sliding window.

Stateful transformations using sliding windows
08:17
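
Continuing the DStream sketch above (these lines go before ssc.start()), a windowed count:

    ssc.checkpoint("checkpoint/")   // stateful transformations need a checkpoint directory
    // every 10 seconds, count the lines seen in the last 30 seconds
    val windowedCounts = logs.countByWindow(Seconds(30), Seconds(10))
    windowedCounts.print()
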
Graph Libraries
1 Lecture 14:30

Find the most well connected Marvel character using GraphX with Spark.

The Marvel social network using Graphs
14:30
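
A GraphX sketch on a tiny hand-made graph (illustrative data, not the actual Marvel dataset):

    import org.apache.spark.graphx.{Edge, Graph}

    val heroes = sc.parallelize(Seq((1L, "Spider-Man"), (2L, "Iron Man"), (3L, "Thor")))
    val ties = sc.parallelize(Seq(Edge(1L, 2L, 1), Edge(1L, 3L, 1), Edge(2L, 3L, 1)))
    val graph = Graph(heroes, ties)
    // the best-connected character is the vertex with the highest degree
    graph.degrees.join(heroes)
      .sortBy(_._2._1, ascending = false)
      .take(1).foreach(println)
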
2 More Sections
About the Instructor
Loony Corn
4.3 Average rating
5,450 Reviews
42,491 Students
75 Courses
An ex-Google, Stanford and Flipkart team

Loonycorn is us, Janani Ravi and Vitthal Srinivasan. Between us, we have studied at Stanford, been admitted to IIM Ahmedabad and have spent years working in tech, in the Bay Area, New York, Singapore and Bangalore.

Janani: 7 years at Google (New York, Singapore); Studied at Stanford; also worked at Flipkart and Microsoft

Vitthal: Also Google (Singapore) and studied at Stanford; Flipkart, Credit Suisse and INSEAD too

We think we might have hit upon a neat way of teaching complicated tech courses in a funny, practical, engaging way, which is why we are so excited to be here on Udemy!

We hope you will try our offerings, and think you'll like them :-)