Practical Supervised and Unsupervised Learning with Python
0.0 (0 ratings)
21 students enrolled

Enter the world of Artificial Intelligence! Develop Python coding practices while exploring Supervised Machine Learning.
Created by Packt Publishing
Last updated 4/2019
English
English [Auto-generated]
Current price: $139.99 Original price: $199.99 Discount: 30% off
This course includes
  • 9 hours on-demand video
  • 1 downloadable resource
  • Full lifetime access
  • Access on mobile and TV
  • Certificate of Completion
What you'll learn
  • Explore various Python libraries, including NumPy, Pandas, scikit-learn, Matplotlib, seaborn and Plotly.
  • Gain in-depth knowledge of Principal Component Analysis and use it to effectively manage noisy datasets.
  • Discover the power of PCA and K-Means for discovering patterns and customer profiles by analyzing wholesale product data.
  • Visualize, interpret, and evaluate the quality of the analysis done using Unsupervised Learning.
  • Work with model families like recommender systems, which are immediately applicable in domains such as e-commerce and marketing.
  • Expand your expertise using various algorithms such as regression, decision trees, and clustering to become a much stronger Python developer.
  • Understand the concept of clustering and how to use it to automatically segment data.
Course content
83 lectures 08:48:31
+ Hands-On Unsupervised Learning with Python
21 lectures 03:34:16

This video gives an overview of the entire course.

Preview 05:06

In this video we will explore and understand why unsupervised learning is so popular and useful.

  • Enable visual exploration of complex datasets

  • Extract better features for supervised learning

  • Find hidden, insightful structure in your data that is actionable

Benefits of Unsupervised Learning
09:37

Understand how to identify products that are frequently purchased together.

  • Compute association rules

How Market Basket Analysis Works
08:56

Now that we have computed the association rules let’s take it further and dive in deeper.

  • Assess confidence, lift, and statistical significance

  • Take action on the result

How Market Basket Analysis Works (Continued)
08:35

Let’s learn how to prepare the data for market basket analysis in this video.

  • Load transaction data

  • Convert raw data into a transaction-product matrix

  • Identify and visualize product popularity

The Apriori Algorithm – Preparing the Data
11:38
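The data-preparation steps above can be sketched in a few lines of Python; the baskets and product names here are hypothetical stand-ins for real transaction data:

```python
import numpy as np

# Hypothetical raw transactions: each row is one purchase basket
transactions = [
    ['milk', 'bread', 'butter'],
    ['bread', 'butter'],
    ['milk', 'bread'],
    ['milk', 'beer'],
]

# Sorted product vocabulary defines the matrix columns
products = sorted({p for basket in transactions for p in basket})

# Binary transaction-product matrix: rows = baskets, columns = products
matrix = np.array([[int(p in basket) for p in products]
                   for basket in transactions])

# Product popularity = how many baskets contain each product
popularity = dict(zip(products, matrix.sum(axis=0)))
```

From here, the popularity counts can be visualized as a simple bar chart before mining rules.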

In this video we will see how the Apriori algorithm solves the challenge of identifying relevant rules for evaluation.

  • Define and create candidate item sets

  • Prune item sets based on their support

  • Iterate until all rules have been found

Understanding and Implementing the Apriori Algorithm
13:51

This video is about understanding how to find association rules that are informative and significant.

  • Compute the support for item sets and rules

  • Calculate confidence and lift for relevant rules

  • Evaluate the statistical significance

Finding Association Rules
07:41
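The support, confidence, and lift computations described above can be sketched directly in plain Python; the baskets and the rule {bread} → {butter} are hypothetical examples:

```python
# Hypothetical baskets and the rule {bread} -> {butter}
transactions = [
    {'milk', 'bread', 'butter'},
    {'bread', 'butter'},
    {'milk', 'bread'},
    {'milk', 'beer'},
]
n = len(transactions)

def support(itemset):
    """Fraction of baskets containing every item in the itemset."""
    return sum(itemset <= basket for basket in transactions) / n

antecedent, consequent = {'bread'}, {'butter'}
rule_support = support(antecedent | consequent)   # P(A and C)
confidence = rule_support / support(antecedent)   # P(C | A)
lift = confidence / support(consequent)           # P(C | A) / P(C)
```

A lift above 1 suggests the antecedent makes the consequent more likely than its base rate, which is the signal the course then tests for statistical significance.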

In this video, let’s see how to explore and summarize the rules that meet all criteria.

  • Create readable representation of individual rules

  • Apply the Bonferroni correction for statistical bias

  • Create interactive visualizations using Plotly

Visualizing and Interpreting Association Rules
09:17

This is about understanding why high-dimensional data are a ‘curse’ in machine learning.

  • The distance between data points increases

  • Predictions become harder

  • Smart algorithms represent the signal in fewer dimensions

Unsupervised Learning and the Curse of Dimensionality
11:50

Let’s explore how to find an efficient lower-dimensional representation of your data in this video.

  • Project data onto a hyper-plane using Principal Component Analysis

  • Find a non-linear manifold that describes your data

  • Use deep learning to embed your data in lower-dimensional space

Approaches to Dimensionality Reduction
10:32

This video explains how Principal Component Analysis finds the best lower-dimensional description of our data.

  • Find directions of maximal variance

The Key Ideas Behind PCA
07:15

After finding directions of maximal variance, let’s take it further from where we left.

  • Make sure these directions are orthogonal

  • Describe the data as weighted average of the principal components

The Key Ideas Behind PCA (Continued)
08:19

In this video we will see how Principal Component Analysis captures the signal in our data.

  • Find the Eigenvectors of the covariance matrix

The Linear Algebra Behind PCA
10:14

Now that we have the eigenvectors of the covariance matrix, let’s explore the next steps to understand how PCA captures the signal.

  • Standardize the data and run Singular Value Decomposition

  • Use the eigenvectors that represent the largest variance

The Linear Algebra Behind PCA (Continued)
09:31
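The standardize-then-SVD recipe above can be sketched with NumPy; the data here is synthetic, with one direction made deliberately dominant:

```python
import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(size=(100, 3))
X[:, 2] = X[:, 0] + 0.1 * rng.normal(size=100)  # make one direction dominant

# Standardize, then run Singular Value Decomposition;
# the rows of Vt are the principal directions (eigenvectors)
Xs = (X - X.mean(axis=0)) / X.std(axis=0)
U, S, Vt = np.linalg.svd(Xs, full_matrices=False)

# Fraction of variance captured by each component
explained = S**2 / np.sum(S**2)

# Project the data onto the top two components
X2 = Xs @ Vt[:2].T
```

Because two of the three columns are nearly collinear, the first component ends up carrying well over half of the total variance.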

In this video we will see how to interpret PCA in the context of a real dataset.

  • Explore and clean a wholesale customer dataset

PCA in Practice
09:01

Now that we have explored and cleaned the dataset, let’s see what comes next.

  • Compute the principal components

  • Analyze customer purchase behavior based on the principal components

PCA in Practice (Continued)
07:21

In this video, let’s see how clustering differs from dimensionality reduction.

  • Find coherent subgroups that are distinct from other subgroups

  • Assign data to subgroups rather than create new features

  • Use combinatorial, probabilistic, or hierarchical clustering algorithms

Clustering – Key Concepts
10:22

This video is all about exploring and understanding how the k-Means algorithm works.

  • Implement the algorithm in Python

  • Visualize the evolution of clusters

  • Visualize how the algorithm partitions the data

Clustering Algorithm in Practice
13:25

Let’s understand how to evaluate the quality of clusters with the aid of this video.

  • Define measures of cluster coherence

  • Compute the silhouette score for various cluster settings

  • Decide on the best clustering

Evaluate Clustering Results
12:00
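The silhouette-based evaluation described above might look like this with scikit-learn; the two well-separated blobs are synthetic, not the course's wholesale data:

```python
import numpy as np
from sklearn.cluster import KMeans
from sklearn.metrics import silhouette_score

rng = np.random.default_rng(1)
# Two well-separated synthetic blobs
X = np.vstack([rng.normal(0, 0.3, (50, 2)),
               rng.normal(5, 0.3, (50, 2))])

scores = {}
for k in range(2, 6):
    labels = KMeans(n_clusters=k, n_init=10, random_state=0).fit_predict(X)
    scores[k] = silhouette_score(X, labels)

best_k = max(scores, key=scores.get)  # highest silhouette wins
```

Silhouette scores lie in [-1, 1], and the cluster count that maximizes the score is the natural choice.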

This video is all about how to apply clustering to a real dataset.

  • Create customer profiles

  • Explore various cluster configurations

Case Study – K-Means and Wholesale Data
15:20

Having applied clustering to a real dataset and created customer profiles, let’s see the next steps in this video.

  • Evaluate the various cluster configurations which have been explored

  • Analyze the resulting grouping in detail

Case Study – K-Means and Wholesale Data (Continued)
14:25
Test Your Knowledge
3 questions
+ Hands-on Supervised Machine Learning with Python
24 lectures 03:05:54

This video gives an overview of the entire course.

Preview 02:34

In this video, we will set up the package and our environment, and demonstrate solving a real-world problem by training a machine learning model to predict spam emails.

  • Demonstrate the end goal: real-world ML solutions

  • Install Anaconda and set up our environment

  • Build the packtml python package

Getting Our Machine Learning Environment Setup
13:15

This video aims at defining and disambiguating supervised machine learning from business logic or rules engines.

  • Demonstrate rules engines

  • Learn from a simple example of machine learning

  • Define a "one-sentence summary" used throughout the course

Supervised Learning
04:33

This video will help you understand how to train a model to learn the best solution. This will require a brief foray back into some mathematics.

  • Explore a math refresher: scalar and vector calculus

  • Demonstrate loss functions and gradient descent

  • See gradient descent in action via logistic regression

Hill Climbing and Loss Functions
11:23

We need to acknowledge some ML best practices. Here we will cover out-of-sample evaluation and data splitting.

  • Split our datasets into two pieces: train AND test

  • Train models on the training set (in-sample)

  • Evaluate our models on the test set (out-of-sample)

Model Evaluation and Data Splitting
04:24
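The train/test split described above is one call with scikit-learn; the arrays here are toy data:

```python
import numpy as np
from sklearn.model_selection import train_test_split

X = np.arange(100).reshape(50, 2)  # 50 samples, 2 features
y = np.arange(50)

# Hold out 20% of the data for out-of-sample evaluation
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.2, random_state=42)
```

The model is then fit only on the training arrays and scored on the held-out test arrays.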

Define the parametric modeling family, and introduce the concept and math of linear regression.

  • Explain parametric models and their formulation

  • Introduce linear regression

  • Walk through the math behind linear regression

Introduction to Parametric Models and Linear Regression
06:36

Design a class that will fit linear regression on training data and apply predictions to new data.

  • Walk through the BaseSimpleEstimator interface

  • Step through code in the SimpleLinearRegression class in the packtml package

  • Example of running the linear regression class versus scikit-learn’s

Implementing Linear Regression from Scratch
11:15
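Separate from the course's SimpleLinearRegression class, the underlying least-squares fit can be sketched with NumPy's normal-equations solver; the data and true coefficients below are synthetic:

```python
import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 2))
true_w, true_b = np.array([2.0, -1.0]), 0.5
y = X @ true_w + true_b + 0.01 * rng.normal(size=200)  # small noise

# Append a bias column and solve the least-squares problem
A = np.hstack([X, np.ones((200, 1))])
theta, *_ = np.linalg.lstsq(A, y, rcond=None)
w_hat, b_hat = theta[:2], theta[2]
```

With this much data and so little noise, the recovered weights land very close to the generating coefficients.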

If our target is not a real number but is discrete, we cannot use linear regression. With logistic regression, we can tackle classification problems.

  • Define a link function to transform continuous values to class probabilities

  • Describe a hill climbing algorithm for logistic regression

  • Create predictions and see how this differs from linear regression

Introduction to Logistic Regression Models
03:07

Design a class that will fit logistic regression on training data and apply predictions to new data.

  • Walk through the objective function code: log_likelihood

  • Progress through the LogisticRegression class and see how it works

  • Evaluate our implementation against scikit-learn’s

Implementing Logistic Regression from Scratch
10:06
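Independent of the course's LogisticRegression class, the hill-climbing idea can be sketched as plain gradient descent on the mean negative log-likelihood; the two Gaussian classes are synthetic:

```python
import numpy as np

def sigmoid(z):
    """Link function mapping real values to class probabilities."""
    return 1.0 / (1.0 + np.exp(-z))

rng = np.random.default_rng(0)
# Two synthetic, well-separated classes
X = np.vstack([rng.normal(-2, 1, (50, 2)),
               rng.normal(2, 1, (50, 2))])
y = np.array([0] * 50 + [1] * 50)

w, b = np.zeros(2), 0.0
learning_rate = 0.1
for _ in range(500):
    p = sigmoid(X @ w + b)            # predicted probabilities
    grad_w = X.T @ (p - y) / len(y)   # gradient of mean neg. log-likelihood
    grad_b = np.mean(p - y)
    w -= learning_rate * grad_w
    b -= learning_rate * grad_b

accuracy = np.mean((sigmoid(X @ w + b) > 0.5) == y)
```

On separable data like this, a few hundred gradient steps are enough for near-perfect training accuracy.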

Parametric models have many pros, but they also have several cons. This is where we’ll discuss each.

  • Cover some pros of parametric models

  • Introduce some cons of parametric models

  • Introduce error due to bias

Parametric Models – Pros/Cons
02:41

Different models suffer from different sources of error. Here we will diagnose and correct several types of modeling errors.

  • Explain and diagnose error due to bias

  • Explain and diagnose error due to variance

  • Strategies for combatting each

The Bias/Variance Trade-off
05:12

Introduce non-parametric models, the complement to parametric models, and the first one we’ll cover: decision trees.

  • Introduce the concept of non-parametric models

  • Explain the concept and basic math behind portions of a decision tree

  • Walk through an example of how a decision tree learns

Introduction to Non-Parametric Models and Decision Trees
08:27

Here we will cover how a decision tree produces candidate splits and reaches its terminal state.

  • Recap splitting criteria and gini impurity

  • Define the CART algorithm

  • Split a tree by hand

Decision Trees
05:23

Design a class that will fit a CART decision tree on training data and apply predictions to new data.

  • Walk through the splitting criteria and node code

  • Progress through the CART class and see how it works

  • Demonstrate an example of our class on real data

Implementing a Decision Tree from Scratch
19:41

Introduce clustering, a very different approach to non-parametric learning.

  • Introduce the concept of supervised clustering versus unsupervised clustering

  • Cover distance metrics and spatial dissimilarity

  • Walk through kNN algorithm

Various Clustering Methods
03:44

Implement a KNN class from scratch that will predict class membership for testing data.

  • Recap the KNN learning algorithm

  • Walk through the packtml package implementation of KNN clustering

  • Demonstrate on the Iris dataset and visualize the results

Implementing K-Nearest Neighbors from Scratch
05:38

When it comes to all these different models from different families, which should we use? Well, it depends.

  • Revisit the pros of non-parametric models

  • Recap some of the cons of non-parametric models

  • Use whatever best fits your data

Non-Parametric Models – Pros/Cons
02:46

Recommender systems allow us to grow revenue and conversion rates from customers on e-commerce platforms.

  • Introduce recommender systems and voting with your feet

  • Explain item-to-item collaborative filtering

  • Walk through some math & examples

Recommender Systems and an Introduction to Collaborative Filtering
14:06

Matrix factorization is one of the more contemporary solutions to the collaborative filtering problem, and allows us to solve it in a more scalable fashion.

  • Explain the concept of matrix factorization

  • Walk through the math and algorithm behind ALS

  • Demonstrate, in code, a simple implementation

Matrix Factorization
07:00

Here we will explore actual implementations of the alternating least squares algorithm in Python.

  • Recap our lesson on ALS

  • Walk through the Python code for the algorithm

  • Discuss limitations of recommenders in the real world

Matrix Factorization in Python
10:22

A common problem in recommender systems is the cold-start issue. Here we’ll look at a way to improve our collaborative filtering systems with content-based similarities.

  • Introduce content-based systems

  • Code snippet and example

  • Discuss ongoing work around hybridization of systems

Content-Based Filtering
05:14

Neural networks are some of the hottest topics in machine learning these days, since they allow us to learn extremely complex relationships between predictors and an outcome.

  • Introduce the structure of a neural network

  • Walk through the math and implementation of the forward step

  • Explain the math and code behind backpropagation

Neural Networks and Deep Learning
08:55

Here we will explore a Python class implementation of a neural network.

  • Recap the neural network learning procedure

  • Walk through the packtml package implementation of a neural net

  • Demonstrate on a dataset and visualize the results

Neural Networks
11:02

Transfer learning allows us to train neural networks further from a pre-trained state.

  • Introduce the concept of transfer learning

  • Walk through code implementation

  • Look at an example of transfer learning and its results

Use Transfer Learning
08:30
Test Your Knowledge
5 questions
+ Supervised and Unsupervised Learning with Python
38 lectures 02:08:21

This video gives an overview of the entire course.

Preview 03:01

Artificial Intelligence (AI) is a way to make machines think and behave intelligently. We will learn about AI and its uses.

Artificial Intelligence and Its Need
03:46

AI manifests itself in various different forms across multiple fields, so it's important to understand how it's useful in various domains.

  • Look at applications

  • Learn about the branches

Applications and Branches of AI
04:48

The legendary computer scientist and mathematician, Alan Turing, proposed the Turing Test to provide a definition of intelligence.

  • Learn about the test to see if a computer can learn to mimic human behavior

Defining Intelligence Using Turing Test
01:56

For decades, we have been trying to get machines to think like humans. So we will see how to make machines think like humans.

  • Understand the nature of human thinking

  • Learn about rational agents

Making Machines Think Like Humans
03:55

The General Problem Solver (GPS) was the first useful computer program that came into existence in the AI world.

  • Learn about GPS

  • Structure a given problem

  • Solve the problem with help of GPS

General Problem Solver
02:20

There are many ways to impart intelligence to an agent. In this video, we will focus on machine learning.

  • Impart intelligence to an agent through data and training

Building an Intelligent Agent
02:11

We will learn to install Python and other required packages.

  • Install Python 3 on various operating systems

  • Install packages

Installing Python 3 and Packages
02:12

In order to build a learning model, we need data that's representative of the world. We will see how to use the packages to interact with data.

  • Import the package containing all the datasets

  • Load the house prices dataset

  • Print the data

Loading Data
02:10

The world of machine learning is broadly divided into supervised and unsupervised learning. Let’s learn about the difference between both.

  • Understand the difference between supervised and unsupervised learning

Supervised Versus Unsupervised Learning
02:57

The process of classification is one such technique where we classify data into a given number of classes. In this video, we will learn about classification.

  • Understand classification and a good classification system

What is Classification?
02:09

Machine learning algorithms expect data to be formatted in a certain way before they start the training process. In order to prepare the data for ingestion by machine learning algorithms, we have to preprocess it.

  • Perform Binarization, Mean removal, Scaling and Normalization

Preprocessing Data
04:14

Label encoding refers to the process of transforming the word labels into numerical form. This enables the algorithms to operate on our data.

  • Create label encoder object

  • Encode a set of randomly ordered labels

Label Encoding
01:39
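The label-encoding workflow above maps directly onto scikit-learn's LabelEncoder; the color labels are hypothetical:

```python
from sklearn.preprocessing import LabelEncoder

# Hypothetical word labels in random order
labels = ['red', 'green', 'blue', 'green', 'red']

encoder = LabelEncoder()
encoded = encoder.fit_transform(labels)       # classes are sorted alphabetically
decoded = encoder.inverse_transform(encoded)  # round-trip back to words
```

Note that the integer codes follow alphabetical order of the classes, not the order in which labels first appear.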

Logistic regression is a technique that is used to explain the relationship between input variables and output variables. Naïve Bayes is a technique used to build classifiers using Bayes theorem. Let’s learn all about them in this video.

  • Create logistic regression classifier and train and visualize data

  • Create an instance of Naïve Bayes classifier and train it and visualize data

Logistic Regression and Naïve Bayes Classifier
07:15

A confusion matrix is a figure or table used to describe the performance of a classifier, so it is important to know how it works!

  • Create confusion matrix and visualize it

  • Print the classification report

Confusion Matrix
02:56
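The confusion matrix and classification report described above can be produced with scikit-learn; the labels are toy values:

```python
from sklearn.metrics import confusion_matrix, classification_report

y_true = [0, 0, 1, 1, 1, 0]  # toy ground-truth labels
y_pred = [0, 1, 1, 1, 0, 0]  # toy classifier predictions

# Rows are true labels, columns are predicted labels
cm = confusion_matrix(y_true, y_pred)
report = classification_report(y_true, y_pred)
```

Here each class has one misclassified sample, so the off-diagonal cells each hold a 1.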

A Support Vector Machine (SVM) is a classifier that is defined using a separating hyperplane between the classes. Given labeled training data and a binary classification problem, the SVM finds the optimal hyperplane that separates the training data into two classes. Let’s learn more in the video.

  • Understand Support Vector Machine

Support Vector Machines
01:46

In this video, we will build a Support Vector Machine classifier to predict the income bracket of a given person based on 14 attributes. Our goal is to see whether the income is higher or lower than $50,000 per year.

  • Read the data and convert the list into numpy array

  • Create and train the SVM classifier

  • Compute F1 score

Classifying Income Data
03:34

Regression is the process of estimating the relationship between input and output variables. This is an important concept in machine learning.

  • Understand regression

What is Regression?
02:09

In this video, we will build a single and multivariable regressor and learn where to use each of them.

  • Create a linear regressor object. Predict output

  • For multivariable, create a polynomial regressor.

Building a Single and Multivariable Regressor
03:45

In this video, we will use SVM to build a regressor that will estimate housing prices.

  • Create and train support vector regressor using linear kernel

  • Evaluate performance

Estimating Housing Prices
02:44

Ensemble Learning refers to the process of building multiple models and then combining them in a way that can produce better results than individual models.

  • Build learning models using ensemble learning

What is Ensemble Learning?
03:17

A Decision Tree is a structure that allows us to split the dataset into branches and then make simple decisions at each level. This will allow us to arrive at the final decision by walking down the tree.

  • Build a decision tree classifier

What Are Decision Trees
04:24

Random forests are an instance of ensemble learning. They have certain advantages over other classifiers. Let's explore them in detail.

  • Build random and extremely random forest classifier

  • Estimate the confidence measure of the predictions

What are Random and Extremely Random Forests?
06:20

One of the most common problems we face in the real world is the quality of data. For a classifier to perform well, it needs to see an equal number of points for each class. Hence, we need to make sure that we account for this imbalance algorithmically.

  • Define parameters for Extremely Random Forest classifier

  • Build, train and visualize data

  • Predict output and compute performance

Dealing with Class Imbalance
03:31

When you are working with classifiers, you do not always know what the best parameters are. This is where grid search becomes useful. Let's see how to find optimal training parameters using grid search.

  • Specify grid of parameters you want to test.

  • Define metrics to find best combination

  • Print score and performance report

Finding Optimal Training Parameters
02:26
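A grid search along the lines described above might look like this with scikit-learn's GridSearchCV; the parameter grid and the use of the Iris dataset are illustrative choices, not the course's exact setup:

```python
from sklearn.datasets import load_iris
from sklearn.ensemble import ExtraTreesClassifier
from sklearn.model_selection import GridSearchCV

X, y = load_iris(return_X_y=True)

# Illustrative parameter grid; real grids depend on the problem
param_grid = {'n_estimators': [25, 50], 'max_depth': [2, 4]}

# Cross-validate every combination and keep the best-scoring one
search = GridSearchCV(ExtraTreesClassifier(random_state=0),
                      param_grid, cv=3, scoring='accuracy')
search.fit(X, y)
best_params = search.best_params_
```

After fitting, `search.best_score_` and `search.cv_results_` give the score and the full performance report for every combination tried.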

Not all features in a dataset are equally important. To find the importance of specific features, we have to perform some operations.

  • Define and train an AdaBoost regressor and estimate performance

  • Normalize values and plot them

Computing Relative Feature Importance
02:42

In this video, we will apply the concepts we learned in previous videos to a real-world problem: predicting traffic.

  • Create Label encoders. Train an extremely random forests regressor

  • Compute the performance and output

  • Predict the output

Predicting Traffic
03:38

Clustering is one of the most popular unsupervised learning techniques. This technique is used to analyze data and find clusters within that data and K-Means algorithm is a well-known algorithm for clustering data.

  • Load the input data from the file

  • Visualize the input data and boundaries

  • Plot the centers of the clusters obtained using the K-Means algorithm

Clustering Data with K-Means Algorithm
05:54

In this video, we will estimate the number of clusters with Mean Shift algorithm.

  • Estimate the bandwidth of the input data

  • Train the Mean Shift clustering model

  • Plot the center of the current cluster

Estimating the Number of Clusters
03:08

Here we will estimate the quality of clustering with silhouette scores.

  • Initialize the variables

  • Iterate through all the values and build a K-Means model

  • Estimate the silhouette score and print it

Estimating the Quality of Clustering
03:19

In this video, we will build a classifier based on a Gaussian Mixture Model.

  • Split the dataset into training and testing using an 80/20 split

  • Extract the number of classes in the training data

  • Train the Gaussian mixture model classifier using the training data

Building a Classifier
04:57

In this video, we will segment the market based on shopping patterns.

  • Estimate the bandwidth of the input data

  • Extract the labels and the centers of each cluster

  • Plot the centers of clusters

Segmenting the Market
02:17

In this video, we will see how to build a pipeline to select the top K features from an input data point and then classify them using an Extremely Random Forest classifier.

  • Generate some labeled sample data for training and testing

  • Construct the pipeline by joining the individual blocks

  • Predict the output for all the input values and print it

Creating a Training Pipeline
03:50

Nearest neighbors refers to the process of finding the closest points to the input point from the given dataset. This is frequently used to build classification systems that classify a datapoint based on the proximity of the input data point to various classes.

  • Define sample 2D datapoints

  • Define a test datapoint that will be used to extract the K nearest neighbors

  • Create and train a K Nearest Neighbors model using the input data

Extracting the Nearest Neighbors
02:16
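The steps above can be sketched with scikit-learn's NearestNeighbors; the 2D datapoints and the test point are hypothetical:

```python
import numpy as np
from sklearn.neighbors import NearestNeighbors

# Hypothetical 2D datapoints
X = np.array([[1.0, 1.0], [2.0, 2.0], [8.0, 8.0], [9.0, 9.0], [1.5, 1.8]])

# Test datapoint whose K nearest neighbors we want to extract
test_point = np.array([[1.2, 1.2]])

knn = NearestNeighbors(n_neighbors=3).fit(X)
distances, indices = knn.kneighbors(test_point)  # sorted nearest-first
```

The returned indices point back into `X`, nearest first, which is exactly what a KNN classifier then votes over.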

A K-Nearest Neighbors classifier is a classification model that uses the nearest neighbors algorithm to classify a given data point. The algorithm finds the K closest data points in the training dataset to identify the category of the input data point.

  • Visualize the input data using four different marker shapes

  • Define the step size of the grid that will be used to visualize the boundaries

  • Create the mesh grid of values that will be used to visualize the grid

Building a K-Nearest Neighbors Classifier
03:40

In order to build a recommendation system, it is important to understand how to compare various objects in our dataset. The similarity score gives us an idea of how similar two objects are.

  • Define a function to compute the Euclidean score between the input users

  • Extract the movies rated by both users

  • Repeat the same to compute Pearson score

Computing Similarity Scores
04:56
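A Euclidean similarity score of the kind described above can be sketched in plain Python; the users and ratings are hypothetical, not the course's dataset, and the Pearson variant follows the same commonly-rated-movies pattern:

```python
import math

# Hypothetical movie ratings for two users
ratings = {
    'Alice': {'Movie A': 4.0, 'Movie B': 3.0, 'Movie C': 5.0},
    'Bob':   {'Movie A': 2.0, 'Movie B': 3.0, 'Movie D': 2.0},
}

def euclidean_score(data, user1, user2):
    """Similarity based on distance over commonly rated movies."""
    common = set(data[user1]) & set(data[user2])
    if not common:
        return 0.0  # no overlap, no evidence of similarity
    squared = sum((data[user1][m] - data[user2][m]) ** 2 for m in common)
    return 1.0 / (1.0 + math.sqrt(squared))

score = euclidean_score(ratings, 'Alice', 'Bob')
```

The score lies in (0, 1], with 1 meaning identical ratings on every movie both users rated.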

Collaborative filtering refers to the process of identifying patterns among the objects in a dataset in order to make a decision about a new object.

  • Define a function to find the users in the dataset that are similar to the given user

  • Extract the top num_users number of users as specified by the input argument

  • Find the top three users who are similar to the user specified by the input argument

Finding Similar Users
02:54

In this video, we will build a movie recommendation system based on the data provided in the file ratings.json.

  • Define a function to parse the input arguments

  • Sort the scores and extract the movie recommendation

  • Extract the movie recommendations and print the output

Building a Movie Recommendation System
03:25
Requirements
  • Prior Python programming experience is required, while experience with data analysis and machine learning will be helpful.
Description

Are you looking forward to developing rich Python coding practices with Supervised and Unsupervised Learning? Then this is the perfect course for you!

Supervised Machine Learning is used in a wide range of industries across sectors such as finance, online advertising, and analytics, and it's here to stay. Supervised learning allows you to train your system to make pricing predictions, campaign adjustments, customer recommendations, and much more. Unsupervised Learning is used to find hidden structure in unlabeled and unstructured data. Supervised learning, on the other hand, is used for analyzing structured data using statistical techniques. Python makes this easier with its libraries for Machine Learning. This course covers modern tools and algorithms to discover and extract hidden yet valuable structure in your data, and explains the most important Unsupervised Learning algorithms using real-world examples of business applications in Python code.

This comprehensive 3-in-1 course follows a step-by-step approach to entering the world of Artificial Intelligence and developing Python coding practices while exploring Supervised Machine Learning. Initially, you’ll learn the goals of Unsupervised Learning and also build a Recommendation Engine. Moving further, you’ll work with model families like recommender systems, which are immediately applicable in domains such as e-commerce and marketing. Finally, you’ll understand the concept of clustering and how to use it to automatically segment data.

By the end of the course, you’ll develop rich Python coding practices with Supervised and Unsupervised Learning through real-world examples.

Contents and Overview

This training program includes 3 complete courses, carefully chosen to give you the most comprehensive training possible.

The first course, Hands-On Unsupervised Learning with Python, covers clustering and dimensionality reduction in Deep Learning using Python. This course will allow you to utilize Principal Component Analysis, and to visualize and interpret the results of your datasets. You will also be able to apply hard and soft clustering methods (k-Means and Gaussian Mixture Models) to assign segment labels to customers in your sample datasets.

The second course, Hands-on Supervised Machine Learning with Python, covers developing rich Python coding practices while exploring supervised machine learning. This course will guide you through the implementation and nuances of many popular supervised machine learning algorithms while facilitating a deep understanding along the way. You’ll embark on this journey with a quick course overview and see how supervised machine learning differs from unsupervised learning. Next, we’ll explore parametric models such as linear and logistic regression, non-parametric methods such as decision trees, and various clustering techniques to facilitate decision-making and predictions. As we proceed, you’ll work hands-on with recommender systems, which are widely used by online companies to increase user interaction and enrich shopping potential. Finally, you’ll wrap up with a brief foray into neural networks and transfer learning. By the end of the video course, you’ll be equipped with hands-on techniques to gain the practical know-how needed to quickly and powerfully apply these algorithms to new problems.

The third course, Supervised and Unsupervised Learning with Python, covers an introduction to the world of Artificial Intelligence. Build real-world Artificial Intelligence (AI) applications to intelligently interact with the world around you, explore real-world scenarios, and learn about the various algorithms that can be used to build AI applications. Packed with insightful examples and topics such as predictive analytics and deep learning, this course is a must-have for Python developers.

By the end of the course, you’ll develop rich Python coding practices with Supervised and Unsupervised Learning through real-world examples.

About the Authors

  • Stefan Jansen is a data scientist with over 10 years of industry experience in fintech and investment, and as an advisor to Fortune 500 companies and startups, focusing on data strategy, predictive analytics, and machine and deep learning. He has used Unsupervised Learning extensively to segment large customer bases, detect anomalies, apply topic modeling to large volumes of legal documents to automate due diligence, and facilitate image recognition. He holds master's degrees from Harvard University and Free University Berlin, a CFA charter, and has been teaching data science and statistics for several years.

  • Taylor Smith is a machine learning enthusiast with over five years of experience who loves to apply interesting computational solutions to challenging business problems. Currently working as Principal Data Scientist, Taylor is also an active open source contributor and staunch Pythonista.

  • Prateek Joshi is an artificial intelligence researcher, published author of five books, and TEDx speaker. He is the founder of Pluto AI, a venture-funded Silicon Valley start-up that builds analytics platforms for smart water management powered by deep learning. His work in this field has led to patents, tech demos, and research papers at major IEEE conferences. He has been an invited speaker at technology and entrepreneurship conferences including TEDx, AT&T Foundry, Silicon Valley Deep Learning, and Open-Silicon Valley. Prateek has also been featured as a guest author in prominent tech magazines. His tech blog has received more than 1.2 million page views from over 200 countries and has more than 6,600 followers. He frequently writes on topics such as artificial intelligence, Python programming, and abstract mathematics. He is an avid coder and has won many hackathons utilizing a wide variety of technologies. He graduated from the University of Southern California with a master's degree specializing in artificial intelligence. He has worked at companies such as Nvidia and Microsoft Research.

Who this course is for:
  • Data Analysts, Data Scientists, Developers who want to understand key applications of Supervised & Unsupervised Learning from both a conceptual and practical point of view.