Deployment of Machine Learning Models
What you'll learn
- Build machine learning model APIs and deploy models into the cloud
- Send and receive requests from deployed machine learning models
- Design testable, version-controlled and reproducible production code for model deployment
- Set up continuous integration and continuous deployment (CI/CD) pipelines for your models
- Understand what an optimal machine learning deployment architecture looks like
- Understand the different resources available to productionise your models
- Identify and mitigate the challenges of putting models in production
Requirements
- A Python installation
- A Git installation
- Confidence in Python programming, including familiarity with Numpy, Pandas and Scikit-learn
- Familiarity with the use of IDEs, like PyCharm, Sublime Text, Spyder or similar
- Familiarity with writing Python scripts and running them from the command line interface
- Knowledge of basic Git commands, including clone, fork, branch creation and checkout, status, add, commit, pull and push
- Knowledge of basic CLI commands, including navigating folders and using Git and Python from the CLI
- Knowledge of linear regression and model evaluation metrics such as the MSE and R²
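As a quick self-check for the last requirement, here is an illustrative sketch (not course material) of the two metrics computed with plain NumPy, using the same formulas that scikit-learn's `mean_squared_error` and `r2_score` implement; the sample values are made up:

```python
import numpy as np

def mse(y_true, y_pred):
    """Mean squared error: average of squared residuals."""
    y_true, y_pred = np.asarray(y_true), np.asarray(y_pred)
    return float(np.mean((y_true - y_pred) ** 2))

def r2(y_true, y_pred):
    """R²: 1 minus residual sum of squares over total sum of squares."""
    y_true, y_pred = np.asarray(y_true), np.asarray(y_pred)
    ss_res = np.sum((y_true - y_pred) ** 2)
    ss_tot = np.sum((y_true - y_true.mean()) ** 2)
    return float(1 - ss_res / ss_tot)

# Made-up targets and predictions, for illustration only
y_true = [3.0, -0.5, 2.0, 7.0]
y_pred = [2.5, 0.0, 2.0, 8.0]
print(mse(y_true, y_pred))  # 0.375
```

If these two functions make sense to you, you have the model-evaluation background the course assumes.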
Description
Welcome to Deployment of Machine Learning Models, the most comprehensive online course on machine learning model deployment available to date. This course will show you how to take your machine learning models from the research environment to a fully integrated production environment.
What is model deployment?
Deployment of machine learning models, or simply putting models into production, means making your models available to other systems within your organization or on the web, so that they can receive data and return predictions. Only once a model is deployed can you begin to take full advantage of it.
Who is this course for?
If you’ve just built your first machine learning models and would like to know how to take them to production or deploy them into an API,
If you deployed a few models within your organization and would like to learn more about best practices on model deployment,
If you are an avid software developer who would like to step into deployment of fully integrated machine learning pipelines,
this course will show you how.
What will you learn?
We'll take you step by step through engaging video tutorials and teach you everything you need to know: creating a model in the research environment, transforming the Jupyter notebooks into production code, packaging the code and deploying it to an API, and adding continuous integration and continuous delivery. We will discuss the concept of reproducibility, why it matters, and how to maximize reproducibility during deployment through versioning, code repositories and the use of Docker. We will also discuss the tools and platforms available to deploy machine learning models.
Specifically, you will learn:
The steps involved in a typical machine learning pipeline
How a data scientist works in the research environment
How to transform the code in Jupyter notebooks into production code
How to write production code, including introduction to tests, logging and OOP
How to deploy the model and serve predictions from an API
How to create a Python Package
How to deploy into a realistic production environment
How to use Docker to control software and model versions
How to add a CI/CD layer
How to determine that the deployed model reproduces the one created in the research environment
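The last point above can be sketched in a few lines: scoring the same inputs in both environments and comparing the outputs within a tolerance. The prediction values below are made up for illustration, and `predictions_match` is a hypothetical helper, not a function from the course:

```python
import math

# Assumed outputs: same inputs scored in the research notebook
# and by the deployed model's API
research_preds = [0.12, 0.87, 0.45]
deployed_preds = [0.12, 0.87, 0.45]

def predictions_match(a, b, tol=1e-6):
    """True if both prediction lists agree element-wise within tol."""
    return len(a) == len(b) and all(
        math.isclose(x, y, abs_tol=tol) for x, y in zip(a, b)
    )

assert predictions_match(research_preds, deployed_preds)
```

The course covers where mismatches typically creep in, such as differing library versions or divergent feature-engineering code between the two environments.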
By the end of the course you will have a comprehensive overview of the entire research, development and deployment lifecycle of a machine learning model, an understanding of best coding practices, and a clear view of the considerations involved in putting a model in production. You will also have a better understanding of the tools available to deploy your models, and will be well placed to take model deployment in any direction that serves the needs of your organization.
What else should you know?
This course will help you take the first steps towards putting your models in production. You will learn how to go from a Jupyter notebook to a fully deployed machine learning model, considering CI/CD, and deploying to cloud platforms and infrastructure.
But, there is a lot more to model deployment, like model monitoring, advanced deployment orchestration with Kubernetes, and scheduled workflows with Airflow, as well as various testing paradigms such as shadow deployments that are not covered in this course.
Want to know more? Read on...
This comprehensive course on deployment of machine learning models includes over 100 lectures spanning about 10 hours of video, and ALL topics include hands-on Python code examples which you can use for reference and re-use in your own projects.
In addition, we have now included in each section an assignment where you get to reproduce what you learnt to deploy a new model.
So what are you waiting for? Enroll today, learn how to put your models in production and begin extracting their true value.
Who this course is for:
- Data scientists who want to deploy their first machine learning model
- Data scientists who want to learn best practices for model deployment
- Software developers who want to transition into machine learning
Instructors
Hey, I am Sole. I am a data scientist and open-source Python developer with a passion for teaching and programming.
I teach intermediate and advanced courses on machine learning, covering topics like how to improve machine learning pipelines, better engineer and select features, optimize models, and deal with imbalanced datasets.
I am the developer and maintainer of Feature-engine, an open-source Python library for feature engineering and selection, and the author of Packt's "Python Feature Engineering Cookbook" and the "Feature Selection in Machine Learning with Python" book.
I received a Data Science Leaders Award in 2018 and was selected as one of "LinkedIn’s voices" in data science and analytics in 2019.
I worked as a data scientist for financial and insurance firms, developing and putting into production machine learning models to assess credit risk, process insurance claims, and prevent fraud.
I love sharing knowledge about data science and machine learning. This is why I teach online, create and contribute to open-source software, and also speak at meetups, write blogs, and participate in podcasts.
I've got an MSc in Biology, a PhD in Biochemistry, and 8+ years of experience as a research scientist at well-known institutions like University College London and the Max Planck Institute. I've also taught biochemistry for 4+ years at the University of Buenos Aires and mentored MSc and PhD students.
Feel free to contact me on LinkedIn, follow me on Twitter, or visit our website for blogs about machine learning.
My name is Chris. I'm a professional software engineer from the UK. I've been writing code for over a decade, and for the past five years I've focused on scaling machine learning applications. I've done this at fintech and healthtech companies in London, where I've worked on and grown production machine learning applications used by millions of people. I've built and maintained machine learning systems which make credit-risk and fraud detection judgements on over a billion dollars of personal loans per year for the challenger bank Zopa. I previously worked on systems for predicting health risks for patients around the world at Babylon Health.
In the past, I've worn a variety of hats. I worked at a global healthcare company, Bupa, which included being a core developer on their flagship website, and three years working in Beijing setting up mobile, web and IT for medical centers in China. Whilst in Beijing, I ran the Python meetup group, mentored a lot of junior developers, and ate a lot of dumplings. I enjoy giving talks at engineering meetups, building systems that create value, and writing software development tutorials and guides. I've written on topics ranging from wearable development, to internet security, to Python web frameworks.
I'm passionate about teaching in a way that minimizes the time between "ah hah" moments, but doesn't leave you Googling every other word. Complexity is necessary for application in the real world, but too much complexity is overwhelming and counter-productive. I will help you find the right balance.
Feel free to connect on LinkedIn (very active) or Twitter (getting more active in 2022).