This course is designed to be a convenient resource for preparing for a technical machine learning interview. Across 50 lectures, it covers questions and answers on a wide range of topics. The course is intended not only to familiarize candidates with the questions they may face but also to help them refresh their data science knowledge.
We will systematically cover data preparation methods, including data normalization, outlier handling, feature engineering, and dimensionality reduction techniques.
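To make one of these preparation steps concrete: min-max normalization rescales a feature to the [0, 1] range. A minimal sketch in plain Python (the function name and sample values are illustrative, not taken from the course):

```python
def min_max_normalize(values):
    """Rescale a list of numbers to the [0, 1] range."""
    lo, hi = min(values), max(values)
    if lo == hi:  # avoid division by zero for a constant feature
        return [0.0 for _ in values]
    return [(v - lo) / (hi - lo) for v in values]

# A single outlier (1000) squeezes the remaining values toward 0,
# which is one reason outlier handling is covered alongside normalization.
scaled = min_max_normalize([10, 20, 40, 1000])
```

Standardization (subtracting the mean and dividing by the standard deviation) is the common alternative when the data contains outliers or is not bounded.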
After data preparation, the next section moves on to supervised machine learning methods. We will consider simple linear algorithms, regularization, and the maximum likelihood method. We will also discuss Bayes' theorem and the naive Bayes classifier. Several lectures in this section are devoted to the support vector machine model. Most of the lectures after that are dedicated to algorithms based on decision trees: we will consider bagging, random forests, AdaBoost, and gradient boosting.
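As a quick refresher on Bayes' theorem, which underlies the naive Bayes classifier, here is a worked posterior computation; the spam-filter probabilities are invented for illustration:

```python
def bayes_posterior(likelihood, prior, evidence):
    """P(class | feature) = P(feature | class) * P(class) / P(feature)."""
    return likelihood * prior / evidence

# Hypothetical spam filter: P(spam) = 0.2, P("free" | spam) = 0.6,
# P("free" | not spam) = 0.1. By the law of total probability,
# P("free") = 0.6 * 0.2 + 0.1 * 0.8 = 0.2.
p_spam_given_free = bayes_posterior(0.6, 0.2, 0.2)
```

The "naive" part of naive Bayes is the extra assumption that features are conditionally independent given the class, which lets the likelihood factor into a product over individual words.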
Having finished the interview questions on individual algorithms, we will move on to broader machine learning practice and discuss topics such as good experiment design, cross-validation methods, overfitting and underfitting, feature selection methods, and the imbalanced data problem.
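The cross-validation idea mentioned above can be sketched as a k-fold index split in plain Python (the function name is illustrative; libraries such as scikit-learn provide ready-made splitters):

```python
def k_fold_splits(n_samples, k):
    """Yield (train_indices, test_indices) pairs for k-fold cross-validation."""
    # Distribute samples as evenly as possible: the first n % k folds get one extra.
    fold_sizes = [n_samples // k + (1 if i < n_samples % k else 0) for i in range(k)]
    start = 0
    for size in fold_sizes:
        test = list(range(start, start + size))
        train = list(range(0, start)) + list(range(start + size, n_samples))
        yield train, test
        start += size

# Every sample appears in exactly one test fold across the k splits.
splits = list(k_fold_splits(10, 3))
```

Because each sample is held out exactly once, averaging the k test scores gives a less optimistic estimate of generalization error than a single train/test split, which is why cross-validation is standard for detecting overfitting.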
This course also includes several lectures on clustering algorithms, covering the best-known methods and the ideas behind them. In addition, we will consider various metrics for assessing the quality of supervised and unsupervised models.
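As one example of the supervised evaluation metrics covered, precision and recall can be computed directly from raw predictions; the labels below are made up for illustration:

```python
def precision_recall(y_true, y_pred, positive=1):
    """Compute precision and recall for the given positive class."""
    tp = sum(1 for t, p in zip(y_true, y_pred) if t == positive and p == positive)
    fp = sum(1 for t, p in zip(y_true, y_pred) if t != positive and p == positive)
    fn = sum(1 for t, p in zip(y_true, y_pred) if t == positive and p != positive)
    precision = tp / (tp + fp) if tp + fp else 0.0  # of predicted positives, how many are right
    recall = tp / (tp + fn) if tp + fn else 0.0     # of actual positives, how many were found
    return precision, recall

prec, rec = precision_recall([1, 1, 0, 1, 0], [1, 0, 0, 1, 1])
```

Unlike plain accuracy, these two metrics stay informative on imbalanced data, which is why they come up together with the imbalanced data problem in interviews.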
In summary, this course will help you recall the methods used by practicing machine learning experts and prepare you for the in-demand data scientist career path.