AI Quality Workshop: How to Test and Debug ML Models
What you'll learn
- Rapidly evaluate machine learning models for performance
- Identify and address model drift
- Debug production ML models
- Identify and address possible ML bias issues
- This course is for data scientists and ML engineers; it assumes a working knowledge of Python and completion of an introductory machine learning course
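To give a flavor of the drift topic above, here is a minimal, illustrative sketch of one common drift metric, the Population Stability Index (PSI), in pure Python. This is not the course's tooling, just a hypothetical example of how a feature's distribution in production can be compared against its training baseline:

```python
import math
import random

def psi(baseline, current, n_bins=10):
    """Population Stability Index between two samples of a numeric feature.

    Common rule of thumb: PSI < 0.1 -> stable, 0.1-0.25 -> moderate drift,
    > 0.25 -> significant drift.
    """
    lo, hi = min(baseline), max(baseline)
    edges = [lo + (hi - lo) * i / n_bins for i in range(n_bins + 1)]
    edges[0] = float("-inf")   # catch production values below the baseline min
    edges[-1] = float("inf")   # ...and above the baseline max

    def fractions(sample):
        counts = [0] * n_bins
        for x in sample:
            for i in range(n_bins):
                if edges[i] <= x < edges[i + 1]:
                    counts[i] += 1
                    break
        # Small epsilon avoids log(0) / division by zero in empty bins.
        return [(c + 1e-6) / (len(sample) + 1e-6 * n_bins) for c in counts]

    b, c = fractions(baseline), fractions(current)
    return sum((ci - bi) * math.log(ci / bi) for bi, ci in zip(b, c))

# Synthetic example: a stable feature vs. one whose mean has shifted.
random.seed(0)
baseline = [random.gauss(0.0, 1.0) for _ in range(5000)]
stable = [random.gauss(0.0, 1.0) for _ in range(5000)]
shifted = [random.gauss(1.0, 1.0) for _ in range(5000)]  # mean shift = drift

print(f"stable:  {psi(baseline, stable):.3f}")   # small, near 0
print(f"shifted: {psi(baseline, shifted):.3f}")  # well above the 0.25 alarm level
```

The same comparison can be run per-feature on every scoring batch, which is one way an automated test harness turns "watch for drift" into a concrete, alertable check.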
Want to sharpen your skills in testing and debugging machine learning models? Ready to be a powerful contributor to the AI era, the next great wave in software and technology?
Learn from leading instructors who have taught at Carnegie Mellon University and Stanford University, and who have trained thousands of students around the globe, from hot startups to major global corporations:
You will learn the analytics that you need to drive model performance
You will understand how to create an automated test harness for easier, more effective ML testing
You will learn why AI explainability is essential to understanding your model's inner mechanics and to rapid debugging
You will understand what Shapley values are, why they are so important, and how to make the most of them
You will be able to identify the types of drift that can derail model performance
You will learn how to debug model performance challenges
You will understand how to evaluate model fairness, identify when bias is occurring, and then address it
You will get access to some of the most powerful ML testing and debugging software tools available, for FREE
(after signing up for the course, terms and conditions apply)
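As a small taste of the Shapley-value material listed above, here is an illustrative pure-Python sketch that computes exact Shapley values for a toy model by averaging marginal contributions over all feature orderings. The model and feature names are hypothetical, and real tools approximate this computation rather than enumerating every permutation:

```python
import itertools

# A toy "model": three features with an interaction term, so credit
# assignment is not obvious from the coefficients alone.
def model(x):
    return 2.0 * x["age"] + 1.0 * x["income"] + 3.0 * x["age"] * x["income"] + 0.5 * x["tenure"]

def shapley_values(model, instance, background):
    """Exact Shapley values for one prediction.

    For every ordering of the features, add each feature to the coalition
    in turn and record its marginal contribution; averaging over all
    orderings gives the Shapley value. Features outside the coalition are
    held at the background (reference) values.
    """
    features = list(instance)
    values = {f: 0.0 for f in features}
    perms = list(itertools.permutations(features))
    for order in perms:
        x = dict(background)          # start from the reference point
        prev = model(x)
        for f in order:
            x[f] = instance[f]        # add feature f to the coalition
            cur = model(x)
            values[f] += cur - prev   # marginal contribution of f
            prev = cur
    return {f: v / len(perms) for f, v in values.items()}

instance = {"age": 1.0, "income": 1.0, "tenure": 1.0}
background = {"age": 0.0, "income": 0.0, "tenure": 0.0}
phi = shapley_values(model, instance, background)
print(phi)

# Efficiency property: the attributions sum exactly to
# model(instance) - model(background).
print(sum(phi.values()), model(instance) - model(background))
```

Note how the interaction term's credit is split evenly between "age" and "income", which is exactly the fair-attribution behavior that makes Shapley values so useful for explaining and debugging model predictions.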
Testimonials from the live, virtual version of the course:
"This is what you would pay thousand of dollars for at a university." - Mike
"Excellent course!!! Super thanks to Professor Datta, Josh, Arri, and Rick!! :D" - Trevia
"Thank you so very much. I learned a ton. Great job!" - K. M.
"Fantastic series. Great explanations and great product. Thank you." - Santosh
"Thank you everyone to make this course available... wonderful sessions!" - Chris
Who this course is for:
- Data Scientists and ML Engineers who are looking to improve their ability to test, evaluate, and debug machine learning models.
I'm Anupam Datta, a computer science professor as well as President and Chief Scientist at TruEra, an AI Observability and AI Quality software company. I'm the lead instructor for TruEra's educational initiatives around core AI Quality and ML performance concepts. Over my academic and technology career, I've helped thousands of students and working professionals become better data scientists, ML engineers, and ML Ops professionals.
I have spent years on in-depth research into AI explainability, AI performance, and algorithmic bias, and have authored multiple research papers. I am now focused on helping as many people as possible apply these concepts to become stronger professionals and to build more effective, fairer AI.
I am looking forward to being your instructor and helping you on your AI journey.