Testing and Monitoring Machine Learning Model Deployments

ML testing strategies, shadow deployments, production model monitoring and more
4.7 (92 ratings)
1,389 students enrolled
Last updated 7/2020
English
English [Auto]
This course includes
  • 8 hours on-demand video
  • 12 articles
  • 2 downloadable resources
  • Full lifetime access
  • Access on mobile and TV
  • Assignments
  • Certificate of Completion
What you'll learn
  • Machine Learning System Unit Testing
  • Machine Learning System Integration Testing
  • Machine Learning System Differential Testing
  • Shadow Deployments (also known as Dark/Decoy launches)
  • Statistical Techniques for Assessing Shadow Deployments
  • Monitoring ML Systems with Metrics (Prometheus & Grafana)
  • Monitoring ML Systems with Logs (Kibana & the Elastic Stack)
  • The Theory Around Continuous Delivery for Machine Learning
Course content
91 lectures • 08:16:13
+ Introduction
5 lectures 08:02
How to Approach This Course (Important)
03:26
All Notes & Slides For This Course
00:03
FAQ: I would like to learn more about the topics not covered
00:22
+ Setting the Scene & ML System Lifecycle
11 lectures 52:31
Deploying a Model to Production
08:31
Course Scenario: Predicting House Sale Price
09:27
Setup A: Python Installation (Important)
03:47
Setup B: Git and GitHub Setup (Advanced users can skip)
03:02
Course GitHub Repo & Data
02:38
Download dataset and GitHub repo: links and guidelines
00:57
Setup C: Jupyter Notebook Setup
02:13
Setup D: Install Notebook Dependencies
02:19
Introduction to the Dataset & Model Pipeline
13:21
Additional Links and Resources
00:24
+ Testing Concepts for ML Systems
7 lectures 16:55
Section Overview
00:48
Testing Focus in This Course
01:26
The Value of Testing
1 question
Testing Theory
03:47
Testing Machine Learning Systems (Important)
06:31
Setup A: Install Requirements
00:13
Hands-on Assignment: Unit Testing Input Data
1 question
In this lecture, I will walk you through a Jupyter Notebook where we have set up some simple unit tests for an ML model pipeline's input data. Towards the end of the lecture, I will ask you to modify the notebook so that the tests fail, and to make sure you understand why.
Hands-on Assignment: Unit Testing Data Engineering Code
1 question
Work on a Jupyter notebook to unit test data engineering steps in a model pipeline.
Hands-on Assignment: Unit Testing Model Quality
1 question
Exercise to practice unit testing model quality.
Hands-on Assignment: Unit Testing Model Config
1 question
Follow the instructions in the Jupyter notebooks (in the course GitHub repo).
Wrap Up
00:26
+ Unit Testing a Production ML Model
18 lectures 01:34:30
Section Overview
00:45
Code Conventions
02:26
Pytest
11:49
Setup - Kaggle Data
03:22
Download the data set - Text Summary
00:28
Setup 2 - Tox
05:47
Code Base Overview
13:41
Preprocessing & Feature Engineering Unit Testing Theory - Why Do This?
03:24
Preprocessing & Feature Engineering Unit Testing
11:06
Quick note on git hygiene for the course
00:12
Model Config Unit Testing Theory - Why Do This?
03:00
Model Config Unit Testing
09:57
Input Data Testing Theory - Why Do This?
03:06
Input Data Unit Testing
08:35
Model Quality Unit Testing Theory - Why Do This?
02:19
Model Quality Unit Testing
10:10
Quick Lecture on Tooling Improvements
02:41
Wrap Up
01:41
+ Docker & Docker Compose
7 lectures 28:48
Section Overview
00:45
Quick Docker Recap
06:09
Why Use Docker?
07:24
Introduction to Docker Compose
04:28
Docker Quiz
3 questions
Docker & Docker Compose Installation
05:56
Windows Specific Docker Issue
03:48
Hands-on Exercise: Basic Docker Compose
1 question
In this exercise, we will spin up a basic "Hello World" Flask application with Docker Compose.
Docker Space Consumption Tips
00:18
+ Integration Testing the ML API
10 lectures 28:38
Section Overview
00:40
API Conceptual Guide
02:16
Overview of the Codebase
06:47
Using our OpenAPI Spec Part 1
01:55
WINDOWS SPECIFIC SETUP
00:03
Using our OpenAPI Spec Part 2
02:56
Integration Testing Theory
01:52
WORKAROUND LECTURE - 32-bit Operating Systems
00:14
Integration Testing Hands-On Code
10:21
A note on benchmark integration tests
01:33
+ Differential Testing
3 lectures 11:28
Section Overview
00:32
Differential Testing Theory
03:19
Differential Testing Implementation
07:37
+ Shadow Mode Deployments
11 lectures 01:23:59
Section Overview
00:44
Shadow Mode Theory
04:23
Testing Models in Production
09:32
Tests in Shadow Deployments
15:08
Code Overview - DB Setup
13:13
WINDOWS port mapping
00:12
Setup Tests for Shadow Mode
11:40
Shadow Mode - Asynchronous Implementation
04:25
Populate Database with Shadow Predictions
05:22
Jupyter Demo - Setup
05:02
Jupyter Demo - Tests in Shadow Mode
14:18
+ Monitoring - Metrics with Prometheus
12 lectures 01:22:06
Section Overview
01:36
Why Monitor?
05:34
Monitoring Theory
08:29
Metrics for Machine Learning Systems
06:03
Prometheus & Grafana Overview
06:42
[WINDOWS ONLY] Additional Setup
02:28
Basic Prometheus Setup - Hands-on
05:33
Adding Metrics - Hands-on
08:22
Adding Grafana - Hands-on
07:21
Infrastructure Metrics - Hands-on
06:44
Adding Metrics Monitoring to Our Example Project
07:30
Creating an ML System Grafana Dashboard
15:44
+ Monitoring - Logs with Kibana
5 lectures 42:06
Monitoring Logs for ML - Theory
04:03
The Elastic Stack (Formerly ELK) - Overview
04:41
Kibana Hands-on Exercise
09:43
Integrating Kibana into The Example Project
09:36
Setting Up a Kibana Dashboard for Model Inputs
14:03
Requirements
  • Comfortable with Python
  • Familiar with Scikit-Learn, Pandas, Numpy
  • Comfortable with Data Science Fundamentals
  • Can use Git version control
  • Basic knowledge of Docker
  • This is an advanced course
Description

Learn how to test & monitor production machine learning models.


What is model testing?

You’ve taken your model from a Jupyter notebook and rewritten it in your production system. Are you sure there weren’t any mistakes when you moved from the research environment to the production system? How can you control the risk before your deployment? ML-specific unit, integration and differential tests can help you to minimize the risk.
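
To make this concrete, here is a minimal sketch of the kind of input-data unit test this approach leads to, written in the pytest style against a pandas DataFrame. The loader, the file path, and the column names are illustrative assumptions, not the course's actual code.

```python
# Illustrative pytest-style unit tests for model input data.
# `load_training_data`, the file path, and the expected columns are
# hypothetical stand-ins, not the course's actual code.
import pandas as pd

EXPECTED_COLUMNS = {"LotArea", "OverallQual", "YearBuilt", "SalePrice"}


def load_training_data() -> pd.DataFrame:
    # Placeholder loader; a real project would read the house-price
    # dataset used to train the pipeline.
    return pd.read_csv("train.csv")


def test_input_data_schema():
    # Fail fast if the data feed drops or renames a column.
    df = load_training_data()
    assert EXPECTED_COLUMNS.issubset(df.columns)


def test_input_data_ranges():
    # A non-positive sale price usually signals a broken upstream
    # join or a unit change rather than a real house.
    df = load_training_data()
    assert (df["SalePrice"] > 0).all()
```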


What is model monitoring?

You’ve deployed your model to production. OK, now what? Is it working as you expect? How do you know? By monitoring models, we can check for unexpected changes in:

  • Incoming data

  • Model quality

  • System operations
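
To give a flavour of what this looks like in practice, here is a hedged sketch of exposing basic prediction metrics with the prometheus_client Python library, in the spirit of the Prometheus & Grafana section. The metric names and the toy predict function are illustrative assumptions, not the course's exact setup.

```python
# Illustrative Prometheus instrumentation for a prediction service.
# Metric names and the stand-in predict function are hypothetical
# examples, not the course's exact setup.
import random
import time

from prometheus_client import Counter, Histogram, start_http_server

PREDICTION_COUNT = Counter(
    "model_predictions_total", "Total number of predictions served"
)
PREDICTION_LATENCY = Histogram(
    "model_prediction_latency_seconds", "Time spent computing a prediction"
)


@PREDICTION_LATENCY.time()
def predict(features):
    PREDICTION_COUNT.inc()
    # Stand-in for a real model call, e.g. pipeline.predict(features)
    return sum(features) * 0.1


if __name__ == "__main__":
    # Expose metrics on :8000/metrics for Prometheus to scrape.
    start_http_server(8000)
    while True:
        predict([random.random() for _ in range(5)])
        time.sleep(1)
```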

When we think about data science, we think about how to build machine learning models, which algorithm will be most predictive, how to engineer our features and which variables to use to make the models more accurate. However, how we are actually going to test & monitor these models in a production system is often neglected. Only when we can effectively monitor our production models can we determine if they are performing as we expect.


Why take this course?

This is the first and only online course where you can learn how to test & monitor machine learning models. The course is comprehensive, and yet easy to follow. Throughout this course you will learn all the steps and techniques required to effectively test & monitor machine learning models professionally.

In this course, you will have at your fingertips the sequence of steps that you need to follow to test & monitor a machine learning model, plus a project template with full code that you can adapt to your own models.


What is the course structure?

Part 1: Testing

The course begins from the most common starting point for the majority of data scientists: a Jupyter notebook with a machine learning model trained in it. We gradually build up the complexity, testing the model first in the Jupyter notebook and then in a realistic production code base. Hands-on exercises are interspersed with relevant and actionable theory.

Part 2: Shadow Mode

We explain the theory & purpose of deploying a model in shadow mode to minimize your risk, and walk you through an example project setup.
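
As a rough illustration of the pattern (not the course's exact implementation), a shadow deployment serves the live model's answer while quietly recording the candidate model's prediction for later comparison. The model objects and logging call below are hypothetical placeholders:

```python
# A minimal shadow-mode sketch: the live model answers the request,
# while the candidate model's prediction is only recorded for later
# comparison. `live_model`, `shadow_model`, and the logging call are
# hypothetical placeholders.
import logging

logger = logging.getLogger("shadow")


def predict_with_shadow(live_model, shadow_model, features):
    live_prediction = live_model.predict(features)

    try:
        # The shadow prediction never reaches the caller; failures
        # here must not break the live path.
        shadow_prediction = shadow_model.predict(features)
        logger.info(
            "shadow comparison live=%s shadow=%s",
            live_prediction,
            shadow_prediction,
        )
    except Exception:
        logger.exception("shadow model failed; live response unaffected")

    return live_prediction
```

The key design point is that the shadow path must never affect the live response, which is why it is wrapped in its own error handling.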

Part 3: Monitoring

We take you through the theory & practical application of monitoring metrics & logs for ML systems.
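
On the logs side, the shape of the idea is to emit model inputs and predictions as structured JSON records that the Elastic Stack can index and Kibana can chart. The field names below are examples only, not the course's schema:

```python
# Illustrative structured logging of model inputs and outputs as JSON,
# the kind of record the Elastic Stack can index and Kibana can chart.
# Field names and the version tag are examples only.
import json
import logging
import sys
from datetime import datetime, timezone

logger = logging.getLogger("ml_api")
logger.setLevel(logging.INFO)
logger.addHandler(logging.StreamHandler(sys.stdout))


def log_prediction(features: dict, prediction: float) -> None:
    logger.info(
        json.dumps(
            {
                "timestamp": datetime.now(timezone.utc).isoformat(),
                "model_version": "0.1.0",  # example version tag
                "inputs": features,
                "prediction": prediction,
            }
        )
    )


log_prediction({"LotArea": 8450, "OverallQual": 7}, 208500.0)
```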


Important:

  • This course does not cover model deployment (we have a separate course dedicated to that topic)


Who are the instructors?

We have gathered a fantastic team to teach this course. Sole is a leading data scientist in finance and insurance, with 3+ years of experience building and implementing machine learning models in the field, and multiple IT awards and nominations. Chris is a tech lead & ML software engineer with extensive experience building APIs and deploying machine learning models, helping businesses extract the full benefit from their models and decisions.


Who is this course for?

  • Data Scientists who want to know how to test & monitor their models in production

  • Software engineers who want to learn about Machine Learning engineering

  • Machine Learning engineers who want to improve their testing & monitoring skills

  • Data Engineers looking to transition to ML engineering

  • Lovers of open source technologies


How advanced is this course?

This is an advanced level course, and it requires you to have experience with Python programming and git. How much experience? It depends on how much time you are willing to set aside to learn the concepts that are new to you. To give you an example, we will work with Python environments, object oriented programming, and the command line to run our scripts, and we will check out code at different stages with git. You don’t need to be an expert in all of these topics, but you need a reasonable working knowledge. We also work with Docker a lot, though we will provide a recap of this tool.

For those relatively new to software engineering, the course will be challenging. We have added detailed lecture notes and references, so we believe that those missing some of the prerequisites can take the course, but keep in mind that you will need to put in the hours to read up on unfamiliar concepts. On this point, the course gradually increases in complexity, so you can see how we move from the familiar Jupyter notebook to the less familiar production code, using a project-based approach which we believe is optimal for learning. It is important that you follow the code as we gradually build it up.


Still not sure if this is the right course for you?

Here are some rough guidelines:

Never written a line of code before: This course is unsuitable

Never written a line of Python before: This course is unsuitable

Never trained a machine learning model before: This course is unsuitable. Ideally, you have already built a few machine learning models, either at work, or for competitions or as a hobby.

Never used Docker before: The second part of the course will be very challenging. You need to be ready to read up on lecture notes & references.

Have only ever operated in the research environment: This course will be challenging, but if you are ready to read up on some of the concepts we will show you, the course will offer you a great deal of value.

Have a little experience writing production code: There may be some unfamiliar tools which we will show you, but generally you should get a lot from the course.

Non-technical: You may get a lot from just the theory lectures, which will give you a feel for the challenges of ML testing & monitoring, as well as the lifecycle of ML models. The rest of the course will be a stretch.


To sum up:

With more than 90 lectures and 8 hours of video, this comprehensive course covers every aspect of model testing & monitoring. Throughout the course you will use Python as your main language, along with other open source technologies that will allow you to host and make calls to your machine learning models.


We hope you enjoy it and we look forward to seeing you on board!

