Building Recommender Systems with Machine Learning and AI

Help people discover new products and content with deep learning, neural networks, and machine learning recommendations.

Bestseller
4.5 (1,338 ratings)
Course Ratings are calculated from individual students’ ratings and a variety of other signals, like age of rating and reliability, to ensure that they reflect course quality fairly and accurately.
9,789 students enrolled
Last updated 3/2020
English
30-Day Money-Back Guarantee
This course includes
  • 10 hours on-demand video
  • 2 articles
  • Full lifetime access
  • Access on mobile and TV
  • Certificate of Completion
What you'll learn
  • Understand and apply user-based and item-based collaborative filtering to recommend items to users
  • Create recommendations using deep learning at massive scale
  • Build recommender systems with neural networks and Restricted Boltzmann Machines (RBM's)
  • Make session-based recommendations with recurrent neural networks and Gated Recurrent Units (GRU)
  • Build a framework for testing and evaluating recommendation algorithms with Python
  • Apply the right measurements of a recommender system's success
  • Build recommender systems with matrix factorization methods such as SVD and SVD++
  • Apply real-world learnings from Netflix and YouTube to your own recommendation projects
  • Combine many recommendation algorithms together in hybrid and ensemble approaches
  • Use Apache Spark to compute recommendations at large scale on a cluster
  • Use K-Nearest-Neighbors to recommend items to users
  • Solve the "cold start" problem with content-based recommendations
  • Understand solutions to common issues with large-scale recommender systems
Course content
117 lectures 10:06:19 total length
+ Getting Started
8 lectures 36:21

After a brief introduction to the course, we'll dive right in and install what you need: Anaconda (your Python development environment), the course materials, and the MovieLens data set of 100,000 real movie ratings from real people. We'll then run a quick example to generate movie recommendations using the SVD algorithm, to make sure it all works!

Preview 09:05

We'll just lay out the structure of the course so you know what to expect later on (and when you'll start writing some code of your own!) Also, we'll provide advice on how to navigate this course depending on your prior experience.

Course Roadmap
03:52

The phrase "recommender system" sounds more general than it really is. Let's briefly clarify what a recommender system is - and more importantly, what it is not.

What Is a Recommender System?
02:48

There are many different flavors of recommender systems, and you encounter them every day. Let's review some of the applications of recommender systems in the real world.

Preview 03:22

How do recommender systems learn about your individual tastes and preferences? We'll explain how both explicit ratings and implicit ratings work, and the strengths and weaknesses of both.

Understanding You through Implicit and Explicit Ratings
04:25

Most real-world recommender systems are "Top-N" systems that produce a list of top results for individuals. There are a couple of main architectural approaches to building them, which we'll review here.

Top-N Recommender Architecture
05:53

We'll review what we've covered in this section with a quick 4-question quiz, and discuss the answers.

Preview 04:46
+ Introduction to Python [Optional]
4 lectures 16:59

After installing Jupyter Notebook, we'll cover the basics of what's different about Python, including its use of white-space. We'll dissect a simple function to get a feel of what Python code looks like.

[Activity] The Basics of Python
05:04

We'll look at using lists, tuples, and dictionaries in Python.

Data Structures in Python
05:17

We'll see how to define a function in Python, and how Python lets you pass functions to other functions. We'll also look at a simple example of a Lambda function.

Functions in Python
02:46
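As a small taste of what this lecture covers, here is a sketch (all names are our own, not from the course materials) of defining a function, passing a function to another function, and using a lambda:

```python
def square(x):
    # a simple named function
    return x * x

def apply_to_list(fn, values):
    # Python lets you pass functions around like any other value
    return [fn(v) for v in values]

squares = apply_to_list(square, [1, 2, 3])          # [1, 4, 9]
cubes = apply_to_list(lambda x: x ** 3, [1, 2, 3])  # [1, 8, 27]
```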

We'll look at how Boolean expressions work in Python as well as loops. Then, we'll give you a challenge to write a simple Python function on your own!

[Exercise] Booleans, loops, and a hands-on challenge
03:52
+ Evaluating Recommender Systems
9 lectures 39:51

Learn about different testing methodologies for evaluating recommender systems offline, including train/test, K-Fold Cross Validation, and Leave-One-Out cross-validation.

Preview 03:49
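To make leave-one-out cross-validation concrete: for each user, hold out a single rating for testing and train on the rest. A rough pure-Python sketch (the course's framework relies on SurpriseLib for this; the function and names here are ours):

```python
import random

def leave_one_out_split(ratings_by_user, seed=0):
    # hold out one rating per user for testing; train on the remainder
    rng = random.Random(seed)
    train, test = {}, {}
    for user, ratings in ratings_by_user.items():
        held_out = rng.choice(list(ratings))
        test[user] = held_out
        train[user] = [r for r in ratings if r != held_out]
    return train, test
```

Top-N metrics like hit rate then ask: does each user's held-out item show up in the list we recommend for them?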

Learn about Root Mean Squared Error, Mean Absolute Error, and why we use these measures of recommendation prediction accuracy.

Accuracy Metrics (RMSE, MAE)
04:06
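Both metrics boil down to a few lines. A minimal sketch (illustrative only; the course builds its own RecommenderMetrics module):

```python
from math import sqrt

def rmse(predicted, actual):
    # Root Mean Squared Error: squaring penalizes large errors more heavily
    return sqrt(sum((p - a) ** 2 for p, a in zip(predicted, actual)) / len(actual))

def mae(predicted, actual):
    # Mean Absolute Error: the average magnitude of prediction error
    return sum(abs(p - a) for p, a in zip(predicted, actual)) / len(actual)
```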

Learn about several ways to measure the accuracy of top-N recommenders, including hit rate, cumulative hit rate, average reciprocal hit rank, rating hit rate, and more.

Top-N Hit Rate - Many Ways
04:35
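For illustration, hit rate and average reciprocal hit rank can be sketched like this, given each user's top-N list and their single held-out item from leave-one-out testing (names are ours, not the course's):

```python
def hit_rate(top_n_per_user, left_out_per_user):
    # fraction of users whose held-out item appears in their top-N list
    hits = sum(1 for user, item in left_out_per_user.items()
               if item in top_n_per_user.get(user, []))
    return hits / len(left_out_per_user)

def average_reciprocal_hit_rank(top_n_per_user, left_out_per_user):
    # like hit rate, but a hit at rank r only counts 1/r,
    # rewarding hits near the top of the list
    total = 0.0
    for user, item in left_out_per_user.items():
        ranked = top_n_per_user.get(user, [])
        if item in ranked:
            total += 1.0 / (ranked.index(item) + 1)
    return total / len(left_out_per_user)
```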

Learn how to measure the coverage of your recommender system, how diverse its results are, and how novel its results are.

Coverage, Diversity, and Novelty
04:55
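As a concrete example of one of these metrics, catalog coverage measures what fraction of your catalog ever gets recommended at all (a sketch with names of our own choosing; diversity and novelty are computed in a similar per-list fashion):

```python
def catalog_coverage(top_n_per_user, catalog):
    # fraction of the catalog that appears in at least one user's top-N list;
    # low coverage means most items never get surfaced to anyone
    recommended = set()
    for items in top_n_per_user.values():
        recommended.update(items)
    return len(recommended & set(catalog)) / len(catalog)
```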

Measure how often your recommendations change (churn), how quickly they respond to new data (responsiveness), and why no metric matters more than the results of real, online A/B tests. We'll also talk about perceived quality, where you explicitly ask your users to rate your recommendations.

Churn, Responsiveness, and A/B Tests
05:06

In this short quiz, we'll review what we've learned about different ways to measure the qualities and accuracy of your recommender system.

[Quiz] Review ways to measure your recommender.
02:55

Let's walk through this course's Python module for implementing the metrics we've discussed in this section on real recommender systems.

[Activity] Walkthrough of RecommenderMetrics.py
06:53

We'll walk through our sample code to apply our RecommenderMetrics module to a real SVD recommender using real MovieLens rating data, and measure its performance in many different ways.

Preview 05:08

After running TestMetrics.py, we'll look at the results for our SVD recommender, and discuss how to interpret them.

[Activity] Measure the Performance of SVD Recommendations
02:24
+ A Recommender Engine Framework
4 lectures 18:23

Let's review the architecture of our recommender engine framework, which will let us easily implement, test, and compare different algorithms throughout the rest of this course.

Our Recommender Engine Architecture
07:27

In part one of the code walkthrough of our recommender engine, we'll see how it's used, and dive into the Evaluator class.

[Activity] Recommender Engine Walkthrough, Part 1
03:55

In part two of the walkthrough, we'll dive into the EvaluationData class, and kick off a test with the SVD recommender.

[Activity] Recommender Engine Walkthrough, Part 2
03:51

Wrapping up our review of our recommender system architecture, we'll look at the results of using our framework to evaluate the SVD algorithm, and interpret them.

[Activity] Review the Results of our Algorithm Evaluation.
03:10
+ Content-Based Filtering
6 lectures 30:53

We'll talk about how content-based recommendations work, and introduce the cosine similarity metric. Cosine scores will be used throughout the course, and understanding their mathematical basis is important.

Preview 08:58
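In pure Python, the cosine similarity between two attribute vectors (for example, 0/1 genre indicators for two movies) looks like this - a minimal sketch for intuition, not the course's implementation:

```python
from math import sqrt

def cosine_similarity(a, b):
    # cosine of the angle between two vectors: 1.0 means identical
    # direction, 0.0 means nothing in common (for non-negative attributes)
    dot = sum(x * y for x, y in zip(a, b))
    norm = sqrt(sum(x * x for x in a)) * sqrt(sum(y * y for y in b))
    return dot / norm if norm else 0.0
```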

We'll cover how to factor time into our content-based recs, and how the concept of KNN will allow us to make rating predictions based solely on similarity scores derived from genres and release dates.

K-Nearest-Neighbors and Content Recs
03:59

We'll look at some code for producing movie recommendations based on their genres and years, and evaluate the results using the MovieLens data set.

[Activity] Producing and Evaluating Content-Based Movie Recommendations
05:23

A common point of confusion is how to use implicit ratings, such as purchase or click data, with the algorithms we're talking about. It's pretty simple, but let's cover it here.

A Note on Using Implicit Ratings.
03:36

In our first "bleeding edge alert," we'll examine the use of Mise en Scene data for providing additional content-based information to our recommendations. And, we'll turn the idea into code, and evaluate the results.

[Activity] Bleeding Edge Alert! Mise en Scene Recommendations
04:31

In two different hands-on exercises, dive into which content attributes provide the best recommendations - and try augmenting our content-based recommendations using popularity data.

[Exercise] Dive Deeper into Content-Based Recommendations
04:26
+ Neighborhood-Based Collaborative Filtering
13 lectures 53:00

Similarity between users or items is at the heart of all neighborhood-based approaches; we'll discuss how similarity measures fit into our architecture, and the effect data sparsity has on it.

Preview 04:49

We'll cover different ways of measuring similarity, including cosine, adjusted cosine, Pearson, Spearman, Jaccard, and more - and how to know when to use each one.

Similarity Metrics
08:32
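One of the metrics this lecture covers, Pearson correlation, is worth seeing in code: it is just cosine similarity computed after mean-centering each vector, which compensates for users who rate everything high or everything low. A sketch under our own naming, not the course's code:

```python
from math import sqrt

def pearson_similarity(a, b):
    # Pearson correlation = cosine similarity of mean-centered vectors
    n = len(a)
    mean_a, mean_b = sum(a) / n, sum(b) / n
    ca = [x - mean_a for x in a]
    cb = [y - mean_b for y in b]
    dot = sum(x * y for x, y in zip(ca, cb))
    norm = sqrt(sum(x * x for x in ca)) * sqrt(sum(y * y for y in cb))
    return dot / norm if norm else 0.0
```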

We'll illustrate how user-based collaborative filtering works, where we recommend stuff that people similar to you liked.

Preview 07:25
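The core idea - score candidate items by the ratings of users similar to you - fits in a short sketch. This toy version (all names and the similarity choice are ours; the course's hands-on code differs) weights each neighbor's ratings by their similarity to the target user:

```python
from math import sqrt

def sparse_cosine(r1, r2):
    # cosine similarity over two users' sparse {item: rating} dicts
    common = set(r1) & set(r2)
    dot = sum(r1[i] * r2[i] for i in common)
    n1 = sqrt(sum(v * v for v in r1.values()))
    n2 = sqrt(sum(v * v for v in r2.values()))
    return dot / (n1 * n2) if n1 and n2 else 0.0

def recommend_user_based(target, ratings, similarity, n=5):
    # ratings: {user: {item: rating}}; score unseen items by
    # similarity-weighted ratings from other users
    scores = {}
    for other, their_ratings in ratings.items():
        if other == target:
            continue
        sim = similarity(ratings[target], their_ratings)
        if sim <= 0:
            continue
        for item, rating in their_ratings.items():
            if item not in ratings[target]:
                scores[item] = scores.get(item, 0.0) + sim * rating
    return sorted(scores, key=scores.get, reverse=True)[:n]
```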

Let's write some code to apply user-based collaborative filtering to the MovieLens data set, run it, and evaluate the results.

[Activity] User-based Collaborative Filtering, Hands-On
04:59

We'll talk about the advantages of flipping user-based collaborative filtering on its head, to give us item-based collaborative filtering - and how it works.

Item-based Collaborative Filtering
04:14

Let's write, run, and evaluate some code to apply item-based collaborative filtering to generate recommendations from the MovieLens data set, and compare it to user-based CF.

[Activity] Item-based Collaborative Filtering, Hands-On
02:23

In this exercise, you're challenged to improve upon the user-based and item-based collaborative filtering algorithms we presented, by tweaking the way candidate generation works.

[Exercise] Tuning Collaborative Filtering Algorithms
03:31

Since collaborative filtering does not make rating predictions, evaluating it offline is challenging - but we can test it with hit rate metrics and leave-one-out cross-validation, which we'll do in this activity.

[Activity] Evaluating Collaborative Filtering Systems Offline
01:28

In the previous activity, we measured the hit rate of a user-based collaborative filtering system. Your challenge is to do the same for an item-based system.

[Exercise] Measure the Hit Rate of Item-Based Collaborative Filtering
02:17

Learn how the ideas of neighborhood-based collaborative filtering can be applied into frameworks based on rating predictions, with K-Nearest-Neighbor recommenders.

KNN Recommenders
04:03
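The item-based flavor of this idea can be sketched in a few lines: predict a user's rating for an item as the similarity-weighted average of their ratings on the k most similar items they've already rated (an illustrative sketch with hypothetical names, not the course's KNN code):

```python
def knn_predict(target_user, item, ratings, item_similarity, k=3):
    # ratings: {user: {item: rating}}; item_similarity(i, j) -> float
    rated = ratings[target_user]
    # the k items most similar to the one we're predicting for
    neighbors = sorted(
        ((item_similarity(item, j), r) for j, r in rated.items() if j != item),
        reverse=True)[:k]
    num = sum(sim * r for sim, r in neighbors if sim > 0)
    den = sum(sim for sim, _ in neighbors if sim > 0)
    return num / den if den else 0.0
```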

Let's use SurpriseLib to quickly run user-based and item-based KNN on our MovieLens data, and evaluate the results.

[Activity] Running User and Item-Based KNN on MovieLens
02:25

Try different similarity measures to see if you can improve on the results of KNN - and we'll talk about why this is so challenging.

[Exercise] Experiment with different KNN parameters.
04:25

In our next "bleeding edge alert," we'll discuss Translation-Based Recommendations - an idea unveiled at the 2017 RecSys conference for recommending sequences of events, based on vectors in item similarity space.

Bleeding Edge Alert! Translation-Based Recommendations
02:29
+ Matrix Factorization Methods
6 lectures 27:14

Let's learn how PCA allows us to reduce higher-dimensional data into lower dimensions, which is the first step toward understanding SVD.

Preview 06:31

We'll extend PCA to the problem of making movie recommendations, and learn how SVD is just a specific implementation of PCA.

Singular Value Decomposition
06:56
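To build intuition for how matrix factorization learns, here is a toy pure-Python sketch in the spirit of the SGD-based "Funk SVD" approach popularized by the Netflix Prize. The course itself uses SurpriseLib's SVD; everything here (names, hyperparameters) is our own illustrative choice:

```python
import random

def factorize(ratings, k=2, steps=500, lr=0.01, reg=0.02, seed=0):
    # ratings: list of (user, item, rating) triples.
    # Learn k latent factors per user and per item so that the dot
    # product of user and item factors approximates each known rating.
    rng = random.Random(seed)
    users = sorted({u for u, _, _ in ratings})
    items = sorted({i for _, i, _ in ratings})
    P = {u: [rng.gauss(0, 0.1) for _ in range(k)] for u in users}
    Q = {i: [rng.gauss(0, 0.1) for _ in range(k)] for i in items}
    for _ in range(steps):
        for u, i, r in ratings:
            pred = sum(pu * qi for pu, qi in zip(P[u], Q[i]))
            err = r - pred
            for f in range(k):
                pu, qi = P[u][f], Q[i][f]
                # gradient step with a small regularization penalty
                P[u][f] += lr * (err * qi - reg * pu)
                Q[i][f] += lr * (err * pu - reg * qi)
    return P, Q

def predict(P, Q, user, item):
    return sum(pu * qi for pu, qi in zip(P[user], Q[item]))
```

After training, `predict` fills in ratings for user/item pairs that were never observed - which is exactly what makes factorization useful for recommendations.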

Let's run SVD and SVD++ on our MovieLens movie ratings data set, and evaluate the results. They're really good!

Preview 03:46

We'll talk about some variants and extensions to SVD that have emerged, and the importance of hyperparameter tuning on SVD, as well as how to tune parameters in SurpriseLib using the GridSearchCV class.

Improving on SVD
04:33

Have a go at modifying our SVD bake-off code to find the optimal values of the various hyperparameters for SVD, and see if it makes a difference in the results.

[Exercise] Tune the hyperparameters on SVD
01:58

We'll cover some exciting research from the University of Minnesota based on matrix factorization.

Bleeding Edge Alert! Sparse Linear Methods (SLIM)
03:30
+ Introduction to Deep Learning [Optional]
23 lectures 02:50:03
Important note about Tensorflow 2
00:17

A quick introduction on what to expect from this section, and who can skip it.

Deep Learning Introduction
01:30

We'll cover the concepts of Gradient Descent, Reverse Mode AutoDiff, and Softmax, which you'll need to build deep neural networks.

Deep Learning Pre-Requisites
08:13

We'll cover the evolution of neural networks from their origin in the 1940's, all the way up to the architecture of modern deep neural networks.

History of Artificial Neural Networks
10:51

We'll use the Tensorflow Playground to get a hands-on feel of how deep neural networks operate, and the effects of different topologies.

[Activity] Playing with Tensorflow
12:02

We'll cover the mechanics of different activation functions and optimization functions for neural networks, including ReLU, Adam, RMSProp, and Gradient Descent.

Training Neural Networks
05:47

We'll talk about how to prevent overfitting using techniques such as dropout layers, and how to tune your topology for the best results.

Tuning Neural Networks
03:52
Activation Functions: More Depth
10:36

We'll walk through an example of using Tensorflow's low-level API to distribute the processing of neural networks using Python.

Introduction to Tensorflow
11:29

In this hands-on activity, we'll implement handwriting recognition on real data using Tensorflow's low-level API. Part 1 of 3.

[Activity] Handwriting Recognition with Tensorflow, part 1
13:19

In this hands-on activity, we'll implement handwriting recognition on real data using Tensorflow's low-level API. Part 2 of 3.

[Activity] Handwriting Recognition with Tensorflow, part 2
12:03

Keras is a higher-level API that makes developing deep neural networks with Tensorflow a lot easier. We'll explain how it works and how to use it.

Introduction to Keras
02:48

We'll tackle the same handwriting recognition problem as before, but this time using Keras with much simpler code, and better results.

[Activity] Handwriting Recognition with Keras
09:52

There are different patterns to use in Keras for multi-class or binary classification problems; we'll talk about how to tackle each.

Classifier Patterns with Keras
03:58

As an exercise challenge, develop your own neural network using Keras to predict the political parties of politicians, based just on their votes on 16 different issues.

[Exercise] Predict Political Parties of Politicians with Keras
09:55

We'll talk about how your brain's visual cortex recognizes images seen by your eyes, and how the same approach inspires artificial convolutional neural networks.

Intro to Convolutional Neural Networks (CNN's)
08:59

The topology of CNN's can get complicated, and there are several variations of them you can choose from for certain problems, including LeNet, GoogLeNet, and ResNet.

CNN Architectures
02:54

We'll tackle handwriting recognition again, this time using Keras and CNN's for our best results yet. Can you improve upon them?

[Activity] Handwriting Recognition with Convolutional Neural Networks (CNNs)
08:38

Recurrent Neural Networks are appropriate for sequences of information, such as time series data, natural language, or music. We'll dive into how they work and some variations of them.

Intro to Recurrent Neural Networks (RNN's)
07:38

Training RNN's involves back-propagating through time, which makes them extra-challenging to work with.

Training Recurrent Neural Networks
03:21

We'll wrap up our intro to deep learning by applying RNN's to the problem of sentiment analysis, which can be modeled as a sequence-to-vector learning problem.

[Activity] Sentiment Analysis of Movie Reviews using RNN's and Keras
11:01
Tuning Neural Networks
04:39
Neural Network Regularization Techniques
06:21
+ Deep Learning for Recommender Systems
14 lectures 01:17:48

We'll introduce the idea of using neural networks to produce recommendations, and explore whether this concept is overkill or not.

Intro to Deep Learning for Recommenders
02:19

We'll cover a very simple neural network called the Restricted Boltzmann Machine, and show how it can be used to produce recommendations given sparse rating data.

Preview 08:02

We'll walk through our implementation of Restricted Boltzmann Machines integrated into our recommender framework. Part 1 of 2.

[Activity] Recommendations with RBM's, part 1
12:46

We'll walk through our implementation of Restricted Boltzmann Machines integrated into our recommender framework. Part 2 of 2.

[Activity] Recommendations with RBM's, part 2
07:11

We'll run our RBM recommender, and study its results.

[Activity] Evaluating the RBM Recommender
03:43

You're challenged to tune the RBM using GridSearchCV to see if you can improve its results.

[Exercise] Tuning Restricted Boltzmann Machines
01:43

We'll review my results from the previous exercise, so you can compare them against your own.

Exercise Results: Tuning a RBM Recommender
01:15

We'll learn how to apply modern deep neural networks to recommender systems, and the challenges sparse data creates.

Auto-Encoders for Recommendations: Deep Learning for Recs
04:27

We'll walk through our code for producing recommendations with deep learning, and evaluate the results.

[Activity] Recommendations with Deep Neural Networks
07:23

We'll introduce "GRU4Rec," a technique that applies recurrent neural networks to the problem of clickstream recommendations.

Clickstream Recommendations with RNN's
07:23

As a more challenging exercise that mimics what you might do in the real world, try and port some older research code into a modern Python and Tensorflow environment, and get it running.

[Exercise] Get GRU4Rec Working on your Desktop
02:42

We'll review my results from the previous exercise.

Exercise Results: GRU4Rec in Action
07:51

We'll explore DeepFM, which combines the strengths of Factorization Machines and of Deep Neural Networks to produce a hybrid solution that out-performs either technique.

Bleeding Edge Alert! Deep Factorization Machines
05:49

We'll cover a few more "bleeding edge" topics, including Word2Vec, 3D CNN's for session-based recommendations, and feature extraction with CNN's.

More Emerging Tech to Watch
05:14
+ Scaling it Up
11 lectures 01:11:06

We'll introduce Apache Spark as our first means of "scaling it up," and get it installed on your system if you want to experiment with it.

[Activity] Introduction and Installation of Apache Spark
05:49

We'll explain just enough about how Spark works to let you understand how it distributes its work across a cluster, and the main objects our sample code will use: RDD's and DataFrames.

Apache Spark Architecture
05:13

We'll start by using Spark's MLLib to generate recommendations with ALS for our ml-100k data set.

[Activity] Movie Recommendations with Spark, Matrix Factorization, and ALS
06:02

We'll scale things up, and use all of the cores on our local PC to process 20 million ratings and produce top-N recommendations with Apache Spark.

[Activity] Recommendations from 20 million ratings with Spark
04:57

Amazon open-sourced its recommender engine called DSSTNE, which makes it easy to apply deep neural networks to massive, sparse data sets and produce great recommendations at large scale.

Preview 04:41

Watch as we use Amazon DSSTNE on an EC2 Ubuntu instance to produce movie recommendations using a deep neural network.

DSSTNE in Action
09:25

Let's explore how Amazon scaled DSSTNE up, paired with Apache Spark, to process their massive data and produce recommendations for millions of customers.

Scaling Up DSSTNE
02:14

Amazon's SageMaker service offers some machine learning algorithms that can be used for recommendations, including factorization machines.

AWS SageMaker and Factorization Machines
04:24

Watch as I use SageMaker from a cloud-hosted Notebook to pre-process the MovieLens 1-million-rating data set, train and save a Factorization Machine model, and deploy the model for making real-time predictions for movie recommendations.

SageMaker in Action: Factorization Machines on one million ratings, in the cloud
07:38

A huge number of commercial SaaS offerings have emerged that provide easy-to-use recommender systems out of the box, and there are many open-source offerings that let you develop recommender systems at scale, at as low a level as you want. We'll cover some of the more popular ones, and enumerate the rest.

Other Systems of Note (Amazon Personalize, RichRelevance, Recombee, and more)
10:29

The specifics of how you deploy a recommender system into production will depend on the environment you're working within, but we'll cover some high-level architectures to consider and some of the technologies you might employ.

Recommender System Architecture
10:14
Requirements
  • A Windows, Mac, or Linux PC with at least 3GB of free disk space.
  • Some experience with a programming or scripting language (preferably Python)
  • Some computer science background, and an ability to understand new algorithms.
Description

New! Updated for Tensorflow 2, Amazon Personalize, and more.

Learn how to build recommender systems from one of Amazon's pioneers in the field. Frank Kane spent over nine years at Amazon, where he managed and led the development of many of Amazon's personalized product recommendation technologies.

You've seen automated recommendations everywhere - on Netflix's home page, on YouTube, and on Amazon - as these machine learning algorithms learn about your unique interests, and show the best products or content for you as an individual. These technologies have become central to the largest, most prestigious tech employers out there, and by understanding how they work, you'll become very valuable to them.

We'll cover tried and true recommendation algorithms based on neighborhood-based collaborative filtering, and work our way up to more modern techniques including matrix factorization and even deep learning with artificial neural networks. Along the way, you'll learn from Frank's extensive industry experience to understand the real-world challenges you'll encounter when applying these algorithms at large scale and with real-world data.

Recommender systems are complex; don't enroll in this course expecting a learn-to-code type of format. There's no recipe to follow on how to make a recommender system; you need to understand the different algorithms and how to choose when to apply each one for a given situation. We assume you already know how to code.

However, this course is very hands-on; you'll develop your own framework for evaluating and combining many different recommendation algorithms together, and you'll even build your own neural networks using Tensorflow to generate recommendations from real-world movie ratings from real people. We'll cover:

  • Building a recommendation engine

  • Evaluating recommender systems

  • Content-based filtering using item attributes

  • Neighborhood-based collaborative filtering with user-based, item-based, and KNN CF

  • Model-based methods including matrix factorization and SVD

  • Applying deep learning, AI, and artificial neural networks to recommendations

  • Session-based recommendations with recurrent neural networks

  • Scaling to massive data sets with Apache Spark machine learning, Amazon DSSTNE deep learning, and AWS SageMaker with factorization machines

  • Real-world challenges and solutions with recommender systems

  • Case studies from YouTube and Netflix

  • Building hybrid, ensemble recommenders

This comprehensive course takes you all the way from the early days of collaborative filtering, to bleeding-edge applications of deep neural networks and modern machine learning techniques for recommending the best items to every individual user.

The coding exercises in this course use the Python programming language. We include an intro to Python if you're new to it, but you'll need some prior programming experience in order to use this course successfully. We also include a short introduction to deep learning if you are new to the field of artificial intelligence, but you'll need to be able to understand new computer algorithms.

High-quality, hand-edited English closed captions are included to help you follow along.

I hope to see you in the course soon!

Who this course is for:
  • Software developers interested in applying machine learning and deep learning to product or content recommendations
  • Engineers working at, or interested in working at large e-commerce or web companies
  • Computer Scientists interested in the latest recommender system theory and research