# Practical Supervised and Unsupervised Learning with Python


- 9 hours on-demand video
- 1 downloadable resource
- Full lifetime access
- Access on mobile and TV
- Certificate of Completion

- Explore various Python libraries, including NumPy, Pandas, scikit-learn, Matplotlib, Seaborn, and Plotly.
- Gain in-depth knowledge of Principal Component Analysis (PCA) and use it to effectively manage noisy datasets.
- Discover the power of PCA and K-Means for finding patterns and customer profiles by analyzing wholesale product data.
- Visualize, interpret, and evaluate the quality of the analysis done using Unsupervised Learning.
- Work with model families like recommender systems, which are immediately applicable in domains such as e-commerce and marketing.
- Expand your expertise using various algorithms like regression, decision trees, clustering, and many more to become a much stronger Python developer.
- Understand the concept of clustering and how to use it to automatically segment data.

This video gives an overview of the entire course.

Let’s explore how to find an efficient lower-dimensional representation of your data in this video.

Project data onto a hyper-plane using Principal Component Analysis

Find a non-linear manifold that describes your data

Use deep learning to embed your data in lower-dimensional space
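The first of these steps, projecting data onto a lower-dimensional hyperplane with PCA, can be sketched as follows. This is a minimal illustration using scikit-learn and synthetic data (the array shapes and variable names are illustrative, not taken from the course materials):

```python
import numpy as np
from sklearn.decomposition import PCA

# Toy data: 200 points in 3-D that mostly vary along two directions.
rng = np.random.RandomState(42)
X = rng.randn(200, 2) @ rng.randn(2, 3)  # rank-2 structure embedded in 3-D
X += 0.05 * rng.randn(200, 3)            # small isotropic noise

# Project onto the 2-D hyperplane that preserves the most variance.
pca = PCA(n_components=2)
X_low = pca.fit_transform(X)

print(X_low.shape)                           # lower-dimensional representation
print(pca.explained_variance_ratio_.sum())   # variance retained by 2 components
```

Because the noise is small relative to the planar structure, the two retained components capture nearly all the variance.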

While exploring clustering and applying it to a real dataset, we need to create customer profiles. Now let’s see what the next steps in this video are.

Evaluate the various cluster configurations that have been explored

Analyze the resulting groupings in detail

This video gives an overview of the entire course.

In this video, we will set up the package and our environment, and demonstrate solving a real-world problem by training a machine learning model to predict spam emails.

Demonstrate the end goal: real-world ML solutions

Install Anaconda and set up our environment

Build the packtml Python package

This video will help you understand how to train a model to learn the best solution. This will require a brief foray back into some mathematics.

Explore a math refresher: scalar and vector calculus

Demonstrate loss functions and gradient descent

See gradient descent in action via logistic regression
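The loss-function and gradient-descent ideas above can be sketched in a few lines of NumPy. This toy example (synthetic data, hand-rolled updates; not the course's packtml code) fits logistic regression by descending the cross-entropy loss:

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

# Tiny separable dataset: label is 1 when the feature sum is positive.
rng = np.random.RandomState(0)
X = rng.randn(100, 2)
y = (X.sum(axis=1) > 0).astype(float)

# Gradient descent on the logistic (cross-entropy) loss.
w, b, lr = np.zeros(2), 0.0, 0.1
for _ in range(500):
    p = sigmoid(X @ w + b)
    grad_w = X.T @ (p - y) / len(y)   # dL/dw, averaged over the batch
    grad_b = (p - y).mean()           # dL/db
    w -= lr * grad_w
    b -= lr * grad_b

accuracy = ((sigmoid(X @ w + b) > 0.5) == y).mean()
```

Each step moves the weights a small distance against the gradient of the loss, which is exactly the mechanism the video walks through.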

Design a class that will fit linear regression on training data and apply predictions to new data.

Walk through the BaseSimpleEstimator interface

Step through code in the SimpleLinearRegression class in the packtml package

Run our linear regression class and compare it against scikit-learn’s

If our target is not a real number but is discrete, we cannot use linear regression. With logistic regression, we can tackle classification problems.

Define a link function to transform continuous values to class probabilities

Describe a hill climbing algorithm for logistic regression

Create predictions and see how this differs from linear regression
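The link function mentioned above is the key difference from linear regression: instead of emitting a real number directly, we squash it into a probability and threshold it. A minimal sketch (illustrative values only):

```python
import numpy as np

def sigmoid(z):
    """Logistic link: maps any real number into a probability in (0, 1)."""
    return 1.0 / (1.0 + np.exp(-z))

# Linear regression would output z directly; logistic regression squashes it.
z = np.array([-3.0, 0.0, 3.0])
probs = sigmoid(z)                     # class probabilities
labels = (probs >= 0.5).astype(int)    # thresholding yields discrete classes
```

Thresholding at 0.5 corresponds to a decision boundary at z = 0, which is how continuous model output becomes a discrete class prediction.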

Design a class that will fit logistic regression on training data and apply predictions to new data.

Walk through the objective function code: log_likelihood

Progress through the LogisticRegression class and see how it works

Evaluate our implementation against scikit-learn’s

Introduce non-parametric models, the complement to parametric models, and the first one we’ll cover: decision trees.

Introduce the concept of non-parametric models

Explain the concept and basic math behind portions of a decision tree

Walk through an example of how a decision tree learns
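The "basic math" behind a decision tree split is typically an impurity measure such as entropy, and a split is chosen to maximize information gain. A small sketch of that computation (the function names and toy labels are my own, not from the course):

```python
import numpy as np

def entropy(y):
    """Shannon entropy of a label array, in bits."""
    _, counts = np.unique(y, return_counts=True)
    p = counts / counts.sum()
    return float(-(p * np.log2(p)).sum())

def information_gain(y, mask):
    """Entropy reduction from splitting y with a boolean mask."""
    left, right = y[mask], y[~mask]
    weighted = (len(left) * entropy(left) + len(right) * entropy(right)) / len(y)
    return entropy(y) - weighted

y = np.array([0, 0, 0, 0, 1, 1, 1, 1])
perfect = information_gain(y, np.array([True] * 4 + [False] * 4))  # pure halves
useless = information_gain(y, np.array([True, False] * 4))         # mixed halves
```

A split that separates the classes perfectly gains a full bit of information; a split that leaves both halves mixed gains nothing, so the tree prefers the former.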

Recommender systems allow us to grow revenue and conversion rates from customers on e-commerce platforms.

Introduce recommender systems and voting with your feet

Explain item-to-item collaborative filtering

Walk through some math & examples

Matrix factorization is one of the more contemporary solutions to the collaborative filtering problem, and allows us to solve it in a more scalable fashion.

Explain the concept of matrix factorization

Walk through the math and algorithm behind ALS

Demonstrate, in code, a simple implementation
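A compact version of that ALS idea, on a toy dense ratings matrix (real systems handle missing entries and sparsity; this sketch with assumed shapes and regularization strength just shows the alternating closed-form updates):

```python
import numpy as np

# Toy ratings matrix (users x items); factorize as R ≈ U @ V.T.
rng = np.random.RandomState(0)
R = rng.randint(1, 6, size=(6, 5)).astype(float)

k, lam = 3, 0.1                 # latent dimension and L2 regularization
U = rng.randn(6, k)             # user factors
V = rng.randn(5, k)             # item factors
reg = lam * np.eye(k)

init_rmse = np.sqrt(((R - U @ V.T) ** 2).mean())

# ALS: fix one factor matrix, solve a regularized least-squares
# problem for the other in closed form, and alternate.
for _ in range(20):
    U = np.linalg.solve(V.T @ V + reg, V.T @ R.T).T   # update user factors
    V = np.linalg.solve(U.T @ U + reg, U.T @ R).T     # update item factors

final_rmse = np.sqrt(((R - U @ V.T) ** 2).mean())
```

Each half-step is an ordinary least-squares solve, which is why ALS scales well: the updates for different users (or items) are independent and can be parallelized.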

A common problem in recommender systems is the cold-start issue. Here we’ll look at a way to improve our collaborative filtering systems with content-based similarities.

Introduce content-based systems

Code snippet and example

Discuss ongoing work around hybridization of systems

Neural networks are some of the hottest topics in machine learning these days, since they allow us to learn extremely complex relationships between predictors and an outcome.

Introduce the structure of a neural network

Walk through the math and implementation of the forward step

Explain the math and code behind backpropagation
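The forward step and backpropagation can be sketched end-to-end on the classic XOR problem (a tiny hand-rolled network; the layer sizes, learning rate, and iteration count here are illustrative choices, not the course's):

```python
import numpy as np

rng = np.random.RandomState(1)
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([[0], [1], [1], [0]], dtype=float)  # XOR: not linearly separable

W1, b1 = rng.randn(2, 8), np.zeros(8)   # input -> hidden
W2, b2 = rng.randn(8, 1), np.zeros(1)   # hidden -> output

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def forward(X):
    h = np.tanh(X @ W1 + b1)     # forward step: hidden activations
    out = sigmoid(h @ W2 + b2)   # output probability
    return h, out

def loss(out):
    p = np.clip(out, 1e-12, 1 - 1e-12)  # avoid log(0)
    return float(-(y * np.log(p) + (1 - y) * np.log(1 - p)).mean())

init_loss = loss(forward(X)[1])
lr = 0.1
for _ in range(5000):
    h, out = forward(X)
    # Backpropagation: chain rule from the loss back through each layer.
    d_out = out - y                       # gradient at the output pre-activation
    dW2, db2 = h.T @ d_out, d_out.sum(0)
    d_h = (d_out @ W2.T) * (1 - h ** 2)   # back through the tanh nonlinearity
    dW1, db1 = X.T @ d_h, d_h.sum(0)
    W1 -= lr * dW1; b1 -= lr * db1
    W2 -= lr * dW2; b2 -= lr * db2

final_loss = loss(forward(X)[1])
```

The hidden layer is what lets the model learn the non-linear XOR relationship that a single linear model cannot represent.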

This video gives an overview of the entire course.

Logistic regression is a technique used to explain the relationship between input variables and output variables. Naïve Bayes is a technique used to build classifiers using Bayes’ theorem. Let’s learn all about them in this video.

Create a logistic regression classifier, then train it and visualize the data

Create an instance of a Naïve Bayes classifier, then train it and visualize the data
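Both classifiers can be created and trained in a few lines with scikit-learn. This sketch substitutes synthetic blobs for the video's dataset (the cluster centers are an assumption for illustration):

```python
from sklearn.datasets import make_blobs
from sklearn.linear_model import LogisticRegression
from sklearn.naive_bayes import GaussianNB

# Two well-separated blobs standing in for the video's training data.
X, y = make_blobs(n_samples=200, centers=[[-3, -3], [3, 3]],
                  cluster_std=1.0, random_state=7)

# Create and train each classifier on the same data.
log_clf = LogisticRegression().fit(X, y)
nb_clf = GaussianNB().fit(X, y)

log_acc = log_clf.score(X, y)
nb_acc = nb_clf.score(X, y)
```

On cleanly separated classes both models perform near-perfectly; their differences show up on overlapping or non-Gaussian data.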

A Support Vector Machine (SVM) is a classifier that is defined using a separating hyperplane between the classes. Given labeled training data and a binary classification problem, the SVM finds the optimal hyperplane that separates the training data into two classes. Let’s learn more in the video.

Understand Support Vector Machine

In this video, we will build a Support Vector Machine classifier to predict the income bracket of a given person based on 14 attributes. Our goal is to see where the income is higher or lower than $50,000 per year.

Read the data and convert the list into a NumPy array

Create and train the SVM classifier

Compute F1 score
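The three steps above can be sketched as follows. The census income file itself isn't reproduced here, so this uses a synthetic 14-attribute stand-in and a linear SVM (the dataset parameters are assumptions for illustration):

```python
from sklearn.datasets import make_classification
from sklearn.metrics import f1_score
from sklearn.model_selection import train_test_split
from sklearn.svm import LinearSVC

# Stand-in for the census data: 14 numeric attributes, binary income label.
X, y = make_classification(n_samples=500, n_features=14,
                           class_sep=2.0, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.25, random_state=0)

# Create and train the SVM classifier, then compute the F1 score.
clf = LinearSVC(max_iter=5000).fit(X_train, y_train)
f1 = f1_score(y_test, clf.predict(X_test))
```

F1 balances precision and recall, which makes it a more informative metric than raw accuracy when one income bracket is much rarer than the other.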

One of the most common problems we face in the real world is the quality of data. For a classifier to perform well, it needs to see an equal number of points for each class. Hence, we need to make sure that we account for this imbalance algorithmically.

Define parameters for Extremely Random Forest classifier

Build, train and visualize data

Predict output and compute performance
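One common way to account for imbalance algorithmically, sketched here with scikit-learn's Extremely Random Forest implementation (the `class_weight='balanced'` setting and synthetic 90/10 split are illustrative assumptions, not necessarily the course's exact parameters):

```python
from sklearn.datasets import make_classification
from sklearn.ensemble import ExtraTreesClassifier

# Imbalanced data: roughly 90% of points in class 0, 10% in class 1.
X, y = make_classification(n_samples=600, weights=[0.9, 0.1], random_state=3)

# 'balanced' re-weights errors so the rare class is not simply ignored.
clf = ExtraTreesClassifier(n_estimators=100, class_weight='balanced',
                           random_state=3).fit(X, y)
train_acc = clf.score(X, y)
```

Without re-weighting, a classifier can score high accuracy by always predicting the majority class, which is why per-class metrics matter on imbalanced data.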

When you are working with classifiers, you do not always know what the best parameters are. This is where grid search becomes useful. Let's see how to find optimal training parameters using grid search.

Specify the grid of parameters you want to test

Define metrics to find best combination

Print score and performance report
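The grid-search steps above map directly onto scikit-learn's `GridSearchCV` (an SVM and a small illustrative parameter grid are assumed here):

```python
from sklearn.datasets import make_classification
from sklearn.model_selection import GridSearchCV
from sklearn.svm import SVC

X, y = make_classification(n_samples=200, random_state=0)

# Grid of candidate parameters; every combination is evaluated
# with cross-validation and the best-scoring one is kept.
param_grid = {'C': [0.1, 1, 10], 'kernel': ['linear', 'rbf']}
search = GridSearchCV(SVC(), param_grid, scoring='f1', cv=3)
search.fit(X, y)

best = search.best_params_      # the optimal training parameters
best_score = search.best_score_
```

The `scoring` argument is the metric used to pick the best combination, so choosing it carefully (accuracy, F1, recall, ...) changes which parameters win.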

Clustering is one of the most popular unsupervised learning techniques. It is used to analyze data and find clusters within that data, and K-Means is a well-known algorithm for clustering data.

Load the input data from the file

Visualize the input data and boundaries

Plot the centers of the clusters obtained using the K-Means algorithm
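A minimal version of that workflow, with synthetic blobs standing in for the input file (the cluster centers here are assumed for illustration):

```python
from sklearn.cluster import KMeans
from sklearn.datasets import make_blobs

# Three synthetic clusters standing in for the file-based input data.
X, _ = make_blobs(n_samples=300, centers=[[0, 0], [5, 5], [0, 5]],
                  cluster_std=0.6, random_state=1)

# Fit K-Means; cluster_centers_ holds the points you would plot.
kmeans = KMeans(n_clusters=3, n_init=10, random_state=1).fit(X)
centers = kmeans.cluster_centers_
labels = kmeans.labels_
```

Plotting `centers` over a scatter of `X` colored by `labels` gives the boundary visualization the video describes.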

In this video, we will see how to build a pipeline to select the top K features from an input data point and then classify them using an Extremely Random Forest classifier.

Generate some labeled sample data for training and testing

Construct the pipeline by joining the individual blocks

Predict the output for all the input values and print it
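Joining the individual blocks into a pipeline looks like this in scikit-learn (the feature counts and K value are illustrative assumptions):

```python
from sklearn.datasets import make_classification
from sklearn.ensemble import ExtraTreesClassifier
from sklearn.feature_selection import SelectKBest, f_classif
from sklearn.pipeline import Pipeline

# Labeled sample data: 20 features, only a handful of them informative.
X, y = make_classification(n_samples=300, n_features=20, n_informative=5,
                           random_state=5)

# Construct the pipeline: select the top K features, then classify.
pipeline = Pipeline([
    ('selector', SelectKBest(f_classif, k=5)),
    ('classifier', ExtraTreesClassifier(n_estimators=50, random_state=5)),
])
pipeline.fit(X, y)
preds = pipeline.predict(X)
```

Bundling selection and classification in one object means cross-validation and grid search treat them as a single estimator, which avoids leaking information from the selection step.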

Nearest neighbors refers to the process of finding the closest points to the input point from the given dataset. This is frequently used to build classification systems that classify a datapoint based on the proximity of the input data point to various classes.

Define sample 2D datapoints

Define a test datapoint that will be used to extract the K nearest neighbors

Create and train a K Nearest Neighbors model using the input data
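The three steps above in code, using scikit-learn's `NearestNeighbors` on a handful of hand-picked 2-D points (the coordinates are illustrative):

```python
import numpy as np
from sklearn.neighbors import NearestNeighbors

# Sample 2-D datapoints: one cluster near the origin, one far away.
X = np.array([[1, 1], [1, 2], [2, 2], [8, 8], [8, 9], [9, 9]], dtype=float)

# Test datapoint whose K nearest neighbors we want to extract.
test_point = np.array([[1.5, 1.5]])

# Create and train a K Nearest Neighbors model using the input data.
knn = NearestNeighbors(n_neighbors=3).fit(X)
distances, indices = knn.kneighbors(test_point)
```

All three neighbors come from the near cluster, which is the proximity-based reasoning a KNN classifier then turns into a class vote.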

A K-Nearest Neighbors classifier is a classification model that uses the nearest neighbors algorithm to classify a given data point. The algorithm finds the K closest data points in the training dataset to identify the category of the input data point.

Visualize the input data using four different marker shapes

Define the step size of the grid that will be used to visualize the boundaries

Create the mesh grid of values that will be used to visualize the grid

In order to build a recommendation system, it is important to understand how to compare various objects in our dataset. The similarity score gives us an idea of how similar two objects are.

Define a function to compute the Euclidean score between the input users

Extract the movies rated by both users

Repeat the same to compute Pearson score
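Both scores can be sketched over a shared ratings dictionary. The user names, movie titles, and function signatures here are illustrative, not the course's actual data or code:

```python
import numpy as np

ratings = {
    'alice': {'Movie A': 5.0, 'Movie B': 3.0, 'Movie C': 4.0},
    'bob':   {'Movie A': 4.0, 'Movie B': 2.0, 'Movie C': 3.0},
}

def euclidean_score(data, user1, user2):
    """Similarity from Euclidean distance over commonly rated movies."""
    common = set(data[user1]) & set(data[user2])
    if not common:
        return 0.0
    sq = sum((data[user1][m] - data[user2][m]) ** 2 for m in common)
    return 1.0 / (1.0 + np.sqrt(sq))  # 1 when identical, toward 0 when far

def pearson_score(data, user1, user2):
    """Pearson correlation over commonly rated movies."""
    common = sorted(set(data[user1]) & set(data[user2]))
    if len(common) < 2:
        return 0.0
    x = np.array([data[user1][m] for m in common])
    y = np.array([data[user2][m] for m in common])
    return float(np.corrcoef(x, y)[0, 1])

e = euclidean_score(ratings, 'alice', 'bob')
p = pearson_score(ratings, 'alice', 'bob')
```

Note how the two scores can disagree: bob rates every movie exactly one point below alice, so the Euclidean score is well below 1, yet the Pearson score is a perfect 1.0 because their preferences move together.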

Collaborative filtering refers to the process of identifying patterns among the objects in a dataset in order to make a decision about a new object.

Define a function to find the users in the dataset that are similar to the given user

Extract the top num_users number of users as specified by the input argument

Find the top three users who are similar to the user specified by the input argument

In this video, we will build a movie recommendation system based on the data provided in the file ratings.json.

Define a function to parse the input arguments

Sort the scores and extract the movie recommendation

Extract the movie recommendations and print the output

- Prior Python programming experience is required; experience with data analysis and Machine Learning will be helpful.

Are you looking forward to developing rich Python coding practices with Supervised and Unsupervised Learning? Then this is the perfect course for you!

Supervised Machine Learning is used in a wide range of industries across sectors such as finance, online advertising, and analytics, and it's here to stay. Supervised Learning allows you to train your system to make pricing predictions, campaign adjustments, customer recommendations, and much more. Unsupervised Learning is used to find hidden structure in unlabeled and unstructured data, while Supervised Learning analyzes structured, labeled data using statistical techniques. Python makes all of this easier with its rich ecosystem of Machine Learning libraries. This course covers modern tools and algorithms to discover and extract hidden yet valuable structure in your data, and explains the most important algorithms through real-world examples of business applications in Python code.

This comprehensive 3-in-1 course follows a step-by-step approach to entering the world of Artificial Intelligence and developing Python coding practices while exploring Supervised Machine Learning. Initially, you’ll learn the goals of Unsupervised Learning and also build a Recommendation Engine. Moving further, you’ll work with model families like recommender systems, which are immediately applicable in domains such as e-commerce and marketing. Finally, you’ll understand the concept of clustering and how to use it to automatically segment data.

By the end of the course, you’ll develop rich Python coding practices with Supervised and Unsupervised Learning through real-world examples.

**Contents and Overview**

This training program includes 3 complete courses, carefully chosen to give you the most comprehensive training possible.

The first course, *Hands-On Unsupervised Learning with Python*, covers clustering and dimensionality reduction with Python. This course will allow you to utilize Principal Component Analysis, and to visualize and interpret the results of your analyses. You will also be able to apply hard and soft clustering methods (K-Means and Gaussian Mixture Models) to assign segment labels to customers in your sample datasets.

The second course, *Hands-on Supervised Machine Learning with Python*, covers developing rich Python coding practices while exploring supervised machine learning. This course will guide you through the implementation and nuances of many popular supervised machine learning algorithms while facilitating a deep understanding along the way. You’ll embark on this journey with a quick course overview and see how supervised machine learning differs from unsupervised learning. Next, we’ll explore parametric models such as linear and logistic regression, non-parametric methods such as decision trees, and various clustering techniques to facilitate decision-making and predictions. As we proceed, you’ll work hands-on with recommender systems, which are widely used by online companies to increase user interaction and enrich shopping potential. Finally, you’ll wrap up with a brief foray into neural networks and transfer learning. By the end of the video course, you’ll be equipped with hands-on techniques to gain the practical know-how needed to quickly and powerfully apply these algorithms to new problems.

The third course, *Supervised and Unsupervised Learning with Python*, covers an introduction to the world of Artificial Intelligence. Build real-world Artificial Intelligence (AI) applications to intelligently interact with the world around you, explore real-world scenarios, and learn about the various algorithms that can be used to build AI applications. Packed with insightful examples and topics such as predictive analytics and deep learning, this course is a must-have for Python developers.

By the end of the course, you’ll develop rich Python coding practices with Supervised and Unsupervised Learning through real-world examples.

**About the Authors**

**Stefan Jansen** is a data scientist with over 10 years of industry experience in fintech, investment, and as an advisor to Fortune 500 companies and startups, focusing on data strategy, predictive analytics, and machine and deep learning. He has used Unsupervised Learning extensively to segment large customer bases, detect anomalies, apply topic modeling to large volumes of legal documents to automate due diligence, and facilitate image recognition. He holds master's degrees from Harvard University and Free University Berlin, a CFA charter, and has been teaching data science and statistics for several years.

**Taylor Smith** is a machine learning enthusiast with over five years of experience who loves to apply interesting computational solutions to challenging business problems. Currently working as a Principal Data Scientist, Taylor is also an active open source contributor and staunch Pythonista.

**Prateek Joshi** is an artificial intelligence researcher, published author of five books, and TEDx speaker. He is the founder of Pluto AI, a venture-funded Silicon Valley start-up that builds analytics platforms for smart water management powered by deep learning. His work in this field has led to patents, tech demos, and research papers at major IEEE conferences. He has been an invited speaker at technology and entrepreneurship conferences including TEDx, AT&T Foundry, Silicon Valley Deep Learning, and Open-Silicon Valley. Prateek has also been featured as a guest author in prominent tech magazines. His tech blog has received more than 1.2 million page views from over 200 countries and has more than 6,600 followers. He frequently writes on topics such as artificial intelligence, Python programming, and abstract mathematics. He is an avid coder and has won many hackathons utilizing a wide variety of technologies. He graduated from the University of Southern California with a master's degree specializing in artificial intelligence. He has worked at companies such as Nvidia and Microsoft Research.

- Data Analysts, Data Scientists, and Developers who want to understand key applications of Supervised and Unsupervised Learning from both a conceptual and practical point of view.