Machine Learning and Deep Learning Optimizers Implementation
What you'll learn
- Python Implementation of Gradient Descent Optimizer to Train Single Variable Linear Regression Model.
- Python Implementation of Gradient Descent Optimizer to Train Multi-Variable Linear Regression Model.
- Python Implementation of Stochastic Gradient Descent (SGD) for Single Variable LR.
- Python Implementation of SGD for Multi-Variable LR.
- Python Implementation of Mini-Batch GD for Single and Multi-Variable LR.
- Generalization of Mini-Batch to Work as SGD and Batch GD.
- Generalization of Multi-Variable LR to Work for Single Variable LR.
- Numerical Optimization Problem Definition.
- Model Parameters Initialization.
- Calculating MSE Cost Function as Vector Norm.
- How to Calculate the Gradient?
- Model Parameters Update.
- Iterate Until Reaching the Optimal Parameter Values.
- Gradient Stop Condition.
- Model Performance Evaluation.
- How to Obtain Better Model Performance?
- Plotting of Learning Curves.
- Cost Convergence Check.
- Turn Your Implementation into a Function.
- Multivariable Linear Regression Problem Definition.
- Going from Single Variable to Multivariable Linear Regression.
- Vectorized Implementation of the Gradient Descent Optimizer.
- Why We Use a Vectorized Implementation.
- Generalization of the Implementation to Work for Single and Multivariable Linear Regression.
- Data Preparation for the Vectorized Implementation.
- The Dimensions Problem and How to Deal With It.
- Turn Your Vectorized Implementation into a Function.
- Try Different Combinations of Hyperparameters.
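To give a feel for the steps listed above (parameter initialization, MSE as a vector norm, gradient calculation, parameter update, gradient stop condition, and learning curves), here is a minimal sketch of batch gradient descent for single-variable LR. The function name, synthetic data, and hyperparameter values are illustrative, not the course's exact code:

```python
import numpy as np

def gradient_descent_1d(x, y, lr=0.5, n_iters=5000, tol=1e-6):
    """Batch GD for the model y_hat = w*x + b, minimizing the MSE cost."""
    m = len(y)
    w, b = 0.0, 0.0                 # model parameters initialization
    history = []                    # cost per iteration, for plotting learning curves
    for _ in range(n_iters):
        error = w * x + b - y
        cost = np.linalg.norm(error) ** 2 / (2 * m)   # MSE cost as a vector norm
        history.append(cost)
        grad_w = (error @ x) / m    # dJ/dw
        grad_b = error.sum() / m    # dJ/db
        if np.hypot(grad_w, grad_b) < tol:   # gradient stop condition
            break
        w -= lr * grad_w            # parameter update
        b -= lr * grad_b
    return w, b, history

# Example: fit y ≈ 3x + 2 on synthetic, lightly noisy data
rng = np.random.default_rng(0)
x = rng.uniform(0, 1, 100)
y = 3 * x + 2 + rng.normal(0, 0.01, 100)
w, b, hist = gradient_descent_1d(x, y)   # w ≈ 3, b ≈ 2; hist decreases
```

Checking that `hist` decreases monotonically and flattens out is the cost-convergence check mentioned above; if it diverges or oscillates, the learning rate is a natural hyperparameter to adjust.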
Requirements
- Python.
- NumPy.
- Watch Day 1 and Day 2 of the course Master Numerical Optimization for Machine Learning and Deep Learning in 5 Days.
- The course is available on our channel "Artificial Intelligence & Data Science شرح بالعربي" (explained in Arabic).
Description
In this course you will learn:
1- How to implement the batch (vanilla) gradient descent (GD) optimizer to obtain the optimal model parameters of single- and multi-variable linear regression (LR) models.
2- How to implement mini-batch and stochastic GD for single and multi-variable LR models.
You will do this by following the guided steps in the attached notebook, accompanied by a video series describing each step.
You will also implement the cost function, stop conditions, as well as plotting the learning curves.
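As a hedged sketch of the generalization idea in point 2, a single mini-batch implementation can cover all three variants: a batch size of 1 gives SGD, and a batch size equal to the dataset size gives batch GD. The function name and hyperparameters below are illustrative:

```python
import numpy as np

def minibatch_gd(X, y, lr=0.1, epochs=300, batch_size=None, seed=0):
    """Mini-batch GD for linear regression (bias handled via an extra column).
    batch_size=1 -> SGD; batch_size=None (or len(y)) -> batch GD."""
    rng = np.random.default_rng(seed)
    m, n = X.shape
    Xb = np.hstack([np.ones((m, 1)), X])   # prepend a column of ones for the bias
    theta = np.zeros(n + 1)                # [bias, w1, ..., wn]
    if batch_size is None:
        batch_size = m                     # full batch -> plain (vanilla) GD
    for _ in range(epochs):
        idx = rng.permutation(m)           # shuffle the samples each epoch
        for start in range(0, m, batch_size):
            batch = idx[start:start + batch_size]
            err = Xb[batch] @ theta - y[batch]
            theta -= lr * Xb[batch].T @ err / len(batch)   # averaged MSE gradient
    return theta

# Example: exact linear data y = 3 + 2*x1 - 1*x2
rng = np.random.default_rng(0)
X = rng.normal(size=(200, 2))
y = X @ np.array([2.0, -1.0]) + 3.0
theta_mini = minibatch_gd(X, y, batch_size=32)    # mini-batch GD
theta_sgd  = minibatch_gd(X, y, batch_size=1)     # stochastic GD
theta_full = minibatch_gd(X, y, batch_size=None)  # batch GD
# all three converge toward [3., 2., -1.] on this noiseless data
```

Because the multi-variable version accepts any number of feature columns, passing a single-column `X` recovers the single-variable case, which is the other generalization listed above.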
You will understand the power of applying a vectorized implementation of the optimizer.
This implementation will help you solidify the concepts and build an intuition for how optimizers work during the training phase.
By the end of this course you will have a balanced theoretical and practical view of the optimizers that are widely used in both machine learning (ML) and deep learning (DL).
In this course we will focus on the main numerical optimization concepts and techniques used in ML and DL.
Although we apply these techniques to single and multivariable LR, the concepts are the same for other ML and DL models.
We use LR here for simplification and to focus on the optimizers rather than the models.
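To illustrate why vectorization matters, the sketch below computes the same MSE gradient twice: once with a per-sample Python loop and once as a single matrix expression that hands the work to NumPy's compiled routines. The sizes and random data are arbitrary:

```python
import time
import numpy as np

rng = np.random.default_rng(1)
m, n = 10_000, 5
X = np.hstack([np.ones((m, 1)), rng.normal(size=(m, n))])  # bias column + features
y = rng.normal(size=m)
theta = rng.normal(size=n + 1)

# Loop version: accumulate the MSE gradient one sample at a time
t0 = time.perf_counter()
grad_loop = np.zeros(n + 1)
for i in range(m):
    grad_loop += (X[i] @ theta - y[i]) * X[i]
grad_loop /= m
t_loop = time.perf_counter() - t0

# Vectorized version: the same gradient as one matrix expression
t0 = time.perf_counter()
grad_vec = X.T @ (X @ theta - y) / m
t_vec = time.perf_counter() - t0

print(np.allclose(grad_loop, grad_vec))          # True: identical gradients
print(f"vectorized speedup: ~{t_loop / t_vec:.0f}x")
```

The exact speedup depends on the machine and problem size, but the vectorized form is typically orders of magnitude faster, which is why the course emphasizes it.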
In the subsequent practical works we will build on this foundation to implement more advanced optimizers such as:
- Momentum based GD.
- Nesterov accelerated gradient (NAG).
- Adaptive gradient (AdaGrad).
- RMSProp.
- Adam.
- BFGS.
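As a small taste of what is ahead, momentum-based GD (the first item above) keeps a running velocity of past gradients and updates the parameters with that velocity instead of the raw gradient. A minimal sketch, where the gradient function, test objective, and hyperparameters are illustrative:

```python
import numpy as np

def momentum_gd(grad, theta0, lr=0.1, beta=0.9, n_iters=200):
    """GD with momentum: v accumulates past gradients, damped by beta."""
    theta = np.array(theta0, dtype=float)
    v = np.zeros_like(theta)
    for _ in range(n_iters):
        v = beta * v + grad(theta)   # velocity update
        theta = theta - lr * v       # parameter update uses the velocity
    return theta

# Minimize f(theta) = ||theta||^2 / 2, whose gradient is simply theta
theta_mom = momentum_gd(lambda t: t, [5.0, -3.0])  # converges toward [0., 0.]
```

Plugging in the LR gradient from the earlier implementations instead of this toy gradient yields momentum-based GD for linear regression.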
You will be provided with the following:
- The Master Numerical Optimization for Machine Learning and Deep Learning in 5 Days course material (slides).
- Notebooks with the guided steps you should follow.
- Notebooks with the ideal solutions (implementations) of the practical works.
- Data files.
You should do the implementation yourself and compare your code to the practical session solution provided in a separate notebook.
A video series explaining the solution is also provided. However, do not look at the solution until you have finished your own implementation.
Who this course is for:
- Learners who are interested in learning how optimizers work to train machine learning or deep learning models.
- Learners who want to apply their theoretical knowledge.
- Learners who want a deeper understanding of numerical optimization algorithms by implementing them step by step.
- Learners who want to balance theoretical and practical knowledge.
Instructor
Artificial Intelligence and Data Science Consultant | Instructor
Over 10 years of experience in the field of AI and DS.
Taught thousands of students online and in person.
Delivered thousands of teaching hours online and in person.
Have hundreds of recorded courses on YouTube.
I have both practical and academic experience.
Teaching the following topics (Online & On Premises):
- Linear algebra for AI&DS.
- Calculus for AI&DS.
- Probability and Statistics for AI&DS.
- Numerical optimization for AI&DS.
- Numerical algorithms for AI&DS.
- Machine Learning.
- Deep Learning.
- Python.
- NumPy, Pandas, Matplotlib, Seaborn, Scikit-learn.
- Keras for deep learning.
- PySpark for Big Data.