Deployment of Machine Learning Models
What you'll learn
- Build machine learning model APIs and deploy models into the cloud
- Send and receive requests from deployed machine learning models
- Design testable, version-controlled and reproducible production code for model deployment
- Create continuous integration and automated deployment pipelines for your models
- Understand how to choose an optimal machine learning system architecture
- Understand the different resources available to productionise your models
- Identify and mitigate the challenges of putting models in production
Course content
- Preview lectures (06:01, 08:04, 03:30)
- Course Pacing and Practice (04:45)
- Course Tips (04:09)
- Guidelines on how to approach the course (02:19)
- Installing Python on your computer (00:31)
- Slides covered in this course (00:04)
- Notes covered in this course (00:09)
- FAQ: Where can I learn more about the required skills? (00:47)
Requirements
- A Python installation
- A Jupyter notebook installation
- Python coding skills including pandas and scikit-learn
- Familiarity with Python environments, OOP and git
- Familiarity with Machine Learning algorithms
- This is an intermediate level course (see description)
Description
Learn how to put your machine learning models into production.
What is model deployment?
Deployment of machine learning models, or simply, putting models into production, means making your models available to your other business systems. By deploying models, other systems can send data to them and get their predictions, which are in turn populated back into the company systems. Through machine learning model deployment, you and your business can begin to take full advantage of the model you built.
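For a concrete flavour of what "making your models available to other business systems" looks like in practice, here is a minimal sketch of a prediction API. It assumes a scikit-learn estimator saved to a hypothetical model.pkl and uses Flask purely for illustration; the file name, endpoint and input schema are placeholders, not the course's actual project code.

```python
# Minimal sketch: exposing a trained model behind an HTTP endpoint.
# "model.pkl", the /predict route and the payload schema are illustrative assumptions.
import joblib
import pandas as pd
from flask import Flask, jsonify, request

app = Flask(__name__)
model = joblib.load("model.pkl")  # a previously trained scikit-learn estimator


@app.route("/predict", methods=["POST"])
def predict():
    # Expects JSON like {"records": [{"feature_a": 0.7, "feature_b": 25}, ...]}
    payload = request.get_json()
    features = pd.DataFrame(payload["records"])
    predictions = model.predict(features)
    return jsonify({"predictions": predictions.tolist()})


if __name__ == "__main__":
    app.run(host="0.0.0.0", port=5000)
```

Other business systems can then POST their data to an endpoint like this and receive predictions back, which is exactly the kind of integration the course builds towards.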
When we think about data science, we think about how to build machine learning models: which algorithm will be more predictive, how to engineer our features, and which variables to use to make the models more accurate. However, how we are actually going to use those models is often neglected. And yet this is the most important step in the machine learning pipeline: only when a model is fully integrated with the business systems can we extract real value from its predictions.
Why take this course?
This is the first and only online course where you can learn how to deploy machine learning models. In this course, you will learn every aspect of how to put your models in production. The course is comprehensive, and yet easy to follow. Throughout this course you will learn all the steps and infrastructure required to deploy machine learning models professionally.
In this course, you will have at your fingertips the sequence of steps you need to follow to deploy a machine learning model, plus a project template with full code that you can adapt to deploy your own models.
What is the course structure?
Part 1: The Research Environment
The course begins from the most common starting point for the majority of data scientists: a Jupyter notebook with a machine learning model trained in it.
Part 2: Understanding Machine Learning Systems
An overview of key architecture and design considerations for different types of machine learning models. This part sets the theoretical foundation for the practical part of the course.
Part 3: From Research to Production Code
A hands-on project with complete source code, which takes you through the process of converting your notebooks into production-ready code.
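To give a rough idea of what this conversion involves, notebook cells that fit a model step by step are typically refactored into small, reusable functions that produce a single persistable artifact. The sketch below is a generic illustration using a scikit-learn Pipeline and joblib; the column names, file paths and function name are assumptions, not the course's project code.

```python
# Illustrative refactor: training logic moved out of a notebook into a function
# that returns one persistable Pipeline. Column names and paths are made up.
import joblib
import pandas as pd
from sklearn.impute import SimpleImputer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import Pipeline
from sklearn.preprocessing import StandardScaler


def train_pipeline(data: pd.DataFrame, target: str = "target") -> Pipeline:
    """Fit preprocessing and model as one object so it can be versioned and reused."""
    X = data.drop(columns=[target])
    y = data[target]
    pipeline = Pipeline([
        ("imputer", SimpleImputer(strategy="median")),
        ("scaler", StandardScaler()),
        ("model", LogisticRegression(max_iter=1000)),
    ])
    pipeline.fit(X, y)
    return pipeline


if __name__ == "__main__":
    df = pd.read_csv("train.csv")        # assumed location of the training data
    fitted = train_pipeline(df)
    joblib.dump(fitted, "model.pkl")     # the kind of artifact a serving API loads
```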
Part 4: Deployment Tooling
Continuing with the hands-on project, this section takes you through the tools needed for real production deployments, such as CI/CD, testing, storing models in the cloud, and more.
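As a flavour of the testing side of this tooling, production model code is normally covered by automated tests that run in CI on every change. The sketch below is a pytest-style check against the hypothetical train_pipeline function from the previous sketch; the module path and feature names are assumptions.

```python
# Illustrative pytest test: the (hypothetical) training function should return
# a fitted pipeline that yields one prediction per input row.
import pandas as pd

from my_project.train import train_pipeline  # assumed module layout


def test_pipeline_predicts_one_label_per_row():
    data = pd.DataFrame({
        "feature_a": [0.1, 0.4, 0.9, 1.3],
        "feature_b": [10, 20, 30, 40],
        "target": [0, 1, 0, 1],
    })
    pipeline = train_pipeline(data)
    predictions = pipeline.predict(data.drop(columns=["target"]))
    assert len(predictions) == len(data)
```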
Part 5: Deployments
In this section, you will deploy models to both cloud platforms (Heroku) and cloud infrastructure (AWS).
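Once a model is live on a platform like Heroku or on AWS infrastructure, other systems interact with it over plain HTTP. Below is a hedged client-side sketch using the requests library; the URL and payload schema are placeholders for whatever your own deployment exposes.

```python
# Illustrative client call to a deployed model API.
# The URL and payload schema are placeholders, not a real service.
import requests

payload = {"records": [{"feature_a": 0.7, "feature_b": 25}]}
response = requests.post(
    "https://your-ml-api.example.com/predict",  # placeholder URL
    json=payload,
    timeout=10,
)
response.raise_for_status()
print(response.json())  # e.g. {"predictions": [1]}
```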
Part 6: Bonus sections
In addition, there are dedicated sections which discuss handling big data, deep learning and common issues encountered when deploying models to production.
Important:
This course will help you take the first steps towards putting your models in production. You will learn how to go from a Jupyter notebook to a fully deployed machine learning model, considering CI/CD, and deploying to cloud platforms and infrastructure.
But there is a lot more to model deployment, such as model monitoring, advanced deployment orchestration with Kubernetes, scheduled workflows with Airflow, and testing paradigms such as shadow deployments, none of which are covered in this course.
Who are the instructors?
We have gathered a fantastic team to teach this course. Sole is a leading data scientist in finance and insurance, with 3+ years of experience in building and implementing machine learning models in the field, and multiple IT awards and nominations. Chris is an AI software engineer with extensive experience in building APIs and deploying machine learning models, allowing businesses to extract the full benefit from their implementations and decisions.
Who is this course for?
This course is suitable for data scientists looking to deploy their first machine learning model, and for software developers looking to transition into AI software engineering. Deployment of machine learning models is an advanced topic on the data science path, so the course will also suit intermediate and advanced data scientists.
How advanced is this course?
This is an intermediate level course, and it requires you to have experience with Python programming and git. How much experience? That depends on how much time you are willing to set aside to learn the concepts that are new to you. To give you an example, we will work with Python environments, object-oriented programming and the command line to run our scripts, and we will check out code at different stages with git. You don't need to be an expert in all of these topics, but it will certainly help if you have heard of them and worked with them before.
For those relatively new to software engineering, the course may be challenging. We have added detailed lecture notes and references, so we do believe beginners can take the course, but keep in mind that you will need to put in the hours to read up on unfamiliar concepts. The course slowly increases in complexity, so you can see how we move, gradually, from the familiar Jupyter notebook to the less familiar production code, using a project-based approach that we believe is optimal for learning. It is important that you follow the code, as we build on it throughout the course.
Still not sure if this is the right course for you?
Here are some rough guidelines:
- Never written a line of code before: this course is unsuitable.
- Never written a line of Python before: this course is unsuitable.
- Never trained a machine learning model before: this course is unsuitable. Ideally, you have already built a few machine learning models, either at work, for competitions, or as a hobby.
- Have only ever operated in the research environment: this course will be challenging, but if you are ready to read up on some of the concepts we will show you, it will offer you a great deal of value.
- Have a little experience writing production code: there may be some unfamiliar tools which we will show you, but generally you should get a lot from the course.
- Non-technical: you may get a lot from the theoretical section (section 3), which gives you a feel for the possible architectures and challenges of ML deployments. The rest of the course will be a stretch.
To sum up:
With more than 50 lectures and 8 hours of video, this comprehensive course covers every aspect of model deployment. Throughout the course you will use Python as your main language, along with other open-source technologies that allow you to host and make calls to your machine learning models.
We hope you enjoy it and we look forward to seeing you on board!
Who this course is for:
- Intermediate and advanced data scientists
- Software developers who want to transition into machine learning
- Intermediate data scientists who want to deploy their first machine learning model
- Machine learning practitioners who want to learn best practices around model deployment
Instructors
Soledad Galli is a lead data scientist and founder of Train in Data. She has experience in finance and insurance, received a Data Science Leaders Award in 2018 and was selected “LinkedIn’s voice” in data science and analytics in 2019. Sole is passionate about sharing knowledge and helping others succeed in data science.
As a data scientist in finance and insurance companies, Sole researched, developed and put into production machine learning models to assess credit risk and insurance claims and to prevent fraud, leading the adoption of machine learning in those organizations.
Sole is passionate about empowering people to step into and excel in data science. She mentors data scientists, writes articles online, speaks at data science meetings, and teaches online courses on machine learning.
Sole has recently created Train In Data, with the mission to facilitate and empower people and organizations worldwide to step into and excel in data science and analytics.
Sole has an MSc in Biology, a PhD in Biochemistry and 8+ years of experience as a research scientist in well-known institutions like University College London and the Max Planck Institute. She has scientific publications in various fields such as Cancer Research and Neuroscience, and her research was covered by the media on multiple occasions.
Soledad has 4+ years of experience as an instructor in Biochemistry at the University of Buenos Aires, taught seminars and tutorials at University College London, and mentored MSc and PhD students at several universities.
Feel free to contact her on LinkedIn.
My name is Chris. I'm a professional software engineer from the UK. I've been writing code for 8 years, and for the past three years, I've focused on scaling machine learning applications. I've done this at fintech and healthtech companies in London, where I've worked on and grown production machine learning applications used by hundreds of thousands of people. I've built and maintained machine learning systems which make credit-risk and fraud detection judgements on over a billion dollars of personal loans per year for the challenger bank Zopa. I currently work on systems for predicting health risks for patients around the world at Babylon Health.
In the past, I've worn a variety of hats. I worked at a global healthcare company, Bupa, which included being a core developer on their flagship website, and three years working in Beijing setting up mobile, web and IT for medical centers in China. Whilst in Beijing, I ran the Python meetup group, mentored a lot of junior developers, and ate a lot of dumplings. I enjoy giving talks at engineering meetups, building systems that create value, and writing software development tutorials and guides. I've written on topics ranging from wearable development, to internet security, to Python web frameworks.
I'm passionate about teaching in a way that minimizes the time between "ah hah" moments, but doesn't leave you Googling every other word. Complexity is necessary for application in the real world, but too much complexity is overwhelming and counter-productive. I will help you find the right balance.
Feel free to connect on LinkedIn (very active) or Twitter (getting more active in 2021).