Python Machine Learning Solutions
0.0 (0 ratings)
26 students enrolled

100 videos that teach you how to perform various machine learning tasks in the real world
Created by Packt Publishing
Last updated 7/2017
English
Current price: $10 Original price: $100 Discount: 90% off
30-Day Money-Back Guarantee
Includes:
  • 4.5 hours on-demand video
  • 1 Supplemental Resource
  • Full lifetime access
  • Access on mobile and TV
  • Certificate of Completion
What Will I Learn?
  • Explore classification algorithms and apply them to the income bracket estimation problem
  • Use predictive modeling and apply it to real-world problems
  • Understand how to perform market segmentation using unsupervised learning
  • Explore data visualization techniques to interact with your data in diverse ways
  • Find out how to build a recommendation engine
  • Understand how to interact with text data and build models to analyze it
  • Work with speech data and recognize spoken words using Hidden Markov Models
  • Analyze stock market data using Conditional Random Fields
  • Work with image data and build systems for image recognition and biometric face recognition
  • Grasp how to use deep neural networks to build an optical character recognition system
Requirements
  • This video is friendly to Python beginners, but familiarity with Python programming will certainly be useful for playing around with the code.
  • These independent videos teach you how to perform various machine learning tasks in different environments. Each video in a section covers a real-life scenario.
Description

Machine learning is increasingly pervasive in the modern data-driven world. It is used extensively across many fields such as search engines, robotics, self-driving cars, and more.

With this course, you will learn how to perform various machine learning tasks in different environments. We’ll start by exploring a range of real-life scenarios where machine learning can be used, and look at various building blocks. Throughout the course, you’ll use a wide variety of machine learning algorithms to solve real-world problems and use Python to implement these algorithms.

You’ll discover how to deal with various types of data and explore the differences between machine learning paradigms such as supervised and unsupervised learning. We also cover a range of regression techniques, classification algorithms, predictive modeling, data visualization techniques, recommendation engines, and more with the help of real-world examples.

About The Author

Prateek Joshi is an Artificial Intelligence researcher and a published author. He has over 8 years of experience in this field with a primary focus on content-based analysis and deep learning. He has written two books on Computer Vision and Machine Learning. His work in this field has resulted in multiple patents, tech demos, and research papers at major IEEE conferences.

His blog has been visited in more than 200 countries and has received more than a million page views. He has been featured as a guest author in prominent tech magazines. He enjoys blogging about topics such as artificial intelligence, Python programming, abstract mathematics, and cryptography.

He has won many hackathons utilizing a wide variety of technologies. He is an avid coder who is passionate about building game-changing products. He graduated from the University of Southern California and he has worked at companies such as Nvidia, Microsoft Research, Qualcomm, and a couple of early stage start-ups in Silicon Valley.

Who is the target audience?
  • This video is for Python programmers who are looking to use machine-learning algorithms to create real-world applications.
Curriculum For This Course
96 Lectures
04:26:32
The Realm of Supervised Learning
9 Lectures 32:37

Machine learning algorithms need processed data for operation. Let’s explore how to process raw data in this video.

Preview 06:38

Algorithms can only use data directly when it is in numerical form, but we often label data with words. So, let’s see how we transform word labels into numerical form.

Label Encoding
02:25

Linear regression uses a linear combination of input variables to estimate the underlying function that governs the mapping from input to output. Our aim is to identify that relationship between the input and output data.

Building a Linear Regressor
04:25
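
The idea can be sketched with the closed-form least-squares solution for a single input variable (the video presumably fits its regressor with scikit-learn; this standalone version is only for illustration):

```python
# Ordinary least squares for one input variable: the slope is the
# covariance of x and y divided by the variance of x.
def fit_line(xs, ys):
    n = len(xs)
    mean_x, mean_y = sum(xs) / n, sum(ys) / n
    slope = (sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, ys))
             / sum((x - mean_x) ** 2 for x in xs))
    intercept = mean_y - slope * mean_x
    return slope, intercept

# Toy data lying exactly on y = 2x + 1.
slope, intercept = fit_line([0, 1, 2, 3], [1, 3, 5, 7])
```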

The values predicted by a regressor often differ from the actual values, so we need to keep a check on its accuracy. This video will enable us to do that.

Regression Accuracy and Model Persistence
03:41
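
Both halves of this video can be sketched in a few lines: score predictions with mean squared error, then persist the fitted model (pickling, shown here with a stand-in dictionary, is a common way to save scikit-learn models):

```python
import pickle

# Mean squared error: the average of the squared prediction errors.
def mean_squared_error(y_true, y_pred):
    return sum((t - p) ** 2 for t, p in zip(y_true, y_pred)) / len(y_true)

model = {"slope": 2.0, "intercept": 1.0}   # stand-in for a fitted model object
blob = pickle.dumps(model)                 # persist the model ...
restored = pickle.loads(blob)              # ... and reload it later
```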

Linear regressors can become inaccurate when outliers disrupt the model. We need regularization to handle this, as we will see in this video.

Building a Ridge Regressor
02:41

A linear model fails to capture the natural curve of the datapoints, which makes it quite inaccurate. So, let’s go through the polynomial regressor to see how we can improve on that.

Building a Polynomial Regressor
02:33
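
A quick sketch of the idea with NumPy's polyfit (the course likely uses scikit-learn's PolynomialFeatures with a linear regressor, but the principle, fitting a curve instead of a line, is the same):

```python
import numpy as np

# Toy points lying on a known parabola; polyfit recovers the quadratic.
xs = np.array([0.0, 1.0, 2.0, 3.0, 4.0])
ys = xs ** 2 - xs + 3.0
coeffs = np.polyfit(xs, ys, deg=2)      # fit a degree-2 polynomial
prediction = np.polyval(coeffs, 5.0)    # extrapolate to x = 5
```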

Applying regression concepts to solve real-world problems can be quite tricky. We will explore how to do it successfully.

Estimating housing prices
03:45

We don’t always know which features contribute to the output and which don’t. Knowing this becomes critical when we have to omit some of them. This video will help you compute the relative importance of features.

Computing relative importance of features
01:54

There might be some problems where the basic regression methods we’ve learned won’t help. One such problem is bicycle demand distribution. You will see how to solve that here.

Estimating bicycle demand distribution
04:35
Constructing a Classifier
10 Lectures 33:38

Evaluating the accuracy of a classifier is an important step in the world of machine learning. We need to learn how to use the available data to get an idea as to how this model will perform in the real world. This is what we are going to learn in this section.

Preview 03:40

Despite the word regression being present in the name, logistic regression is actually used for classification purposes. Given a set of datapoints, our goal is to build a model that can draw linear boundaries between our classes. It extracts these boundaries by solving a set of equations derived from the training data.

Building a Logistic Regression Classifier
04:50

Bayes’ Theorem, which is widely used in probability to determine the outcome of an event, enables us to classify data in a smarter way. Let’s use its concepts to build a better classifier.

Building a Naive Bayes’ Classifier
02:11

While working with data, splitting data correctly and logically is an important task. Let’s see how we can achieve this in Python.

Splitting the Dataset for Training and Testing
01:23
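
As a sketch of what a shuffled train/test split does (scikit-learn's train_test_split adds stratification and array handling on top of this idea; the 25% ratio below is just an example):

```python
import random

# Shuffle with a fixed seed so the split is reproducible, then cut off
# the first test_ratio fraction as the test set.
def split_dataset(data, test_ratio=0.25, seed=0):
    items = list(data)
    random.Random(seed).shuffle(items)
    n_test = int(len(items) * test_ratio)
    return items[n_test:], items[:n_test]   # (train, test)

train, test = split_dataset(range(100), test_ratio=0.25)
```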

To make the evaluation more robust, we repeat the splitting process with different subsets. If we just fine-tune the model for one particular subset, we may end up overfitting, and the model will fail to perform well on unknown data. Cross-validation ensures accuracy in such situations.

Evaluating the Accuracy Using Cross-Validation
04:06
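
The index bookkeeping behind k-fold cross-validation can be sketched like this (scikit-learn's KFold does the same partitioning): every point lands in exactly one test fold, so the whole dataset is eventually used for evaluation.

```python
# Partition n sample indices into k contiguous folds; the first n % k
# folds get one extra element so the sizes differ by at most one.
def k_fold_indices(n, k):
    sizes = [n // k + (1 if i < n % k else 0) for i in range(k)]
    folds, start = [], 0
    for size in sizes:
        folds.append(list(range(start, start + size)))
        start += size
    return folds

folds = k_fold_indices(10, 3)
```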

When we want to fine-tune our algorithms, we need to understand how the data gets misclassified before we make these changes. Some classes are worse than others, and the confusion matrix will help us understand this.

Visualizing the Confusion Matrix and Extracting the Performance Report
04:14
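
The confusion matrix itself is a simple tally. A minimal version (scikit-learn's confusion_matrix uses the same layout: rows are true classes, columns are predictions, so off-diagonal cells show exactly where the classifier gets confused):

```python
# Count (true, predicted) pairs into an n_classes x n_classes grid.
def confusion_matrix(y_true, y_pred, n_classes):
    matrix = [[0] * n_classes for _ in range(n_classes)]
    for true, pred in zip(y_true, y_pred):
        matrix[true][pred] += 1
    return matrix

# Toy labels: one sample of class 0 is misclassified as class 1.
m = confusion_matrix([0, 0, 1, 1, 2], [0, 1, 1, 1, 2], n_classes=3)
```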

Let's see how we can apply classification techniques to a real-world problem. We will use a dataset that contains some details about cars, such as number of doors, boot space, maintenance costs, and so on, to analyze this problem.

Evaluating Cars based on Their Characteristics
05:12

Let’s see how the performance gets affected as we change the hyperparameters. This is where validation curves come into the picture. These curves help us understand how each hyperparameter influences the training score.

Extracting Validation Curves
02:49

Learning curves help us understand how the size of our training dataset influences the machine learning model. This is very useful when you have to deal with computational constraints. Let's go ahead and plot the learning curves by varying the size of our training dataset.

Extracting Learning Curves
01:37

Let’s see how we can build a classifier to estimate the income bracket of a person based on 14 attributes.

Extracting the Income Bracket
03:36
Predictive Modeling
7 Lectures 20:19

Building regressors and classifiers can be a bit tedious. Supervised learning models like SVM help us to a great extent. Let’s see how we can work with SVM.

Preview 04:23

There are various kernels used to build nonlinear classifiers. Let’s explore some of them and see how we can build a nonlinear classifier.

Building Nonlinear Classifier Using SVMs
01:47

A classifier often gets biased when there are more datapoints in a certain class. This can turn out to be a big problem. We need a mechanism to deal with this. Let’s explore how we can do that.

Tackling Class Imbalance
02:53

Let’s explore how we can train SVM to compute the output confidence level of a new datapoint when it is classified into a known category.

Extracting Confidence Measurements
02:36

A classifier’s performance depends heavily on its hyperparameters. Let’s explore how to find the optimal values for them.

Finding Optimal Hyper-Parameters
02:16

Now that we’ve learned the concepts of SVM thoroughly, let’s see if we can apply them to real-world problems.

Building an Event Predictor
03:45

We’ve already used SVM as a classifier to predict events. Let’s explore whether or not we can use it as a regressor for estimating traffic.

Estimating Traffic
02:39
Clustering with Unsupervised Learning
8 Lectures 23:47

The k-means algorithm is one of the most popular clustering algorithms. It divides the input data into k subgroups using various attributes of the data. Let’s see how we can implement it in Python for clustering data.

Preview 03:07
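
The assign/update loop at the heart of k-means can be sketched on 1-D toy points (the course most likely uses scikit-learn's KMeans; this bare-bones Lloyd's iteration with naive initialization is only illustrative):

```python
# Lloyd's algorithm on 1-D points: alternately assign each point to its
# nearest center, then move each center to the mean of its cluster.
def kmeans(points, k, iters=20):
    centers = list(points[:k])                      # naive initialization
    for _ in range(iters):
        clusters = [[] for _ in range(k)]
        for p in points:                            # assignment step
            nearest = min(range(k), key=lambda i: (p - centers[i]) ** 2)
            clusters[nearest].append(p)
        centers = [sum(c) / len(c) if c else centers[i]   # update step
                   for i, c in enumerate(clusters)]
    return centers

# Two obvious toy clusters around 1.5 and 10.5.
centers = kmeans([1.0, 2.0, 1.5, 10.0, 11.0, 10.5], k=2)
```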

Vector quantization is popularly used in image compression, where we store each pixel using fewer bits than the original image to achieve compression.

Compressing an Image Using Vector Quantization
03:37

Mean shift is a powerful unsupervised learning algorithm that's used to cluster datapoints. It considers the distribution of datapoints as a probability density function and tries to find the modes in the feature space. Let’s see how to use it in Python.

Building a Mean Shift Clustering
02:35

We often need to segregate data into groups for analysis and other purposes. We can achieve this in Python using agglomerative clustering. Let’s see how we can do it.

Grouping Data Using Agglomerative Clustering
03:04

In supervised learning, we just compare the predicted values with the original labels to compute their accuracy. In unsupervised learning, we don't have any labels. Therefore, we need a way to measure the performance of our algorithms. Let’s see how we could evaluate their performance.

Evaluating the Performance of Clustering Algorithms
02:55

Wouldn't it be nice if there were a method that can just tell us the number of clusters in our data? This is where Density-Based Spatial Clustering of Applications with Noise (DBSCAN) comes into the picture. Let us see how we can work with it.

Automatically Estimating the Number of Clusters Using DBSCAN
03:34

How do we operate when we don't know how many clusters there are? Affinity Propagation can cluster data without being told the number of clusters in advance. Let's see how we can use it for unsupervised stock market analysis.

Finding Patterns in Stock Market Data
02:34

We don’t always have labeled data available, yet it's important to segment the market so that people can target individual groups. Let’s learn to build a customer segmentation model for this situation.

Building a Customer Segmentation Model
02:21
Building Recommendation Engines
9 Lectures 24:28

One of the major parts of any machine learning system is the data processing pipeline. Instead of calling functions in a nested way, it's better to use the functional programming paradigm to build the combination. Let's take a look at how to combine functions to form a reusable function composition.

Preview 03:25

The scikit-learn library has provisions to build machine learning pipelines. We just need to specify the functions, and it will build a composed object that makes the data go through the whole pipeline. Let’s see how to build it in Python.

Building Machine Learning Pipelines
03:54

While working with a training dataset, we often need to make decisions based on the nearest neighbors of a point. This can be achieved with scikit-learn’s NearestNeighbors class. Let’s see how to do it.

Finding the Nearest Neighbors
01:56

When we want to find the class to which an unknown point belongs, we find the k-nearest neighbors and take a majority vote. Let's take a look at how to construct this.

Constructing a k-nearest Neighbors Classifier
04:18
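
The core of the classifier fits in a few lines, sketched here from scratch (the video presumably uses scikit-learn's KNeighborsClassifier; the training points below are toy data): rank the training points by distance to the query and take a majority vote among the top k.

```python
from collections import Counter

# k-nearest neighbors classification by brute-force distance ranking.
def knn_predict(train, query, k=3):
    # train: list of ((x, y), label) pairs
    sq_dist = lambda a, b: sum((u - v) ** 2 for u, v in zip(a, b))
    nearest = sorted(train, key=lambda pair: sq_dist(pair[0], query))[:k]
    votes = Counter(label for _, label in nearest)
    return votes.most_common(1)[0][0]

# Two toy clusters with labels "a" and "b".
train = [((0, 0), "a"), ((0, 1), "a"), ((1, 0), "a"),
         ((5, 5), "b"), ((5, 6), "b"), ((6, 5), "b")]
label = knn_predict(train, (0.5, 0.5), k=3)
```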

A good thing about the k-nearest neighbors algorithm is that it can also be used as a regressor. Let’s see how to do this!

Constructing a k-nearest Neighbors Regressor
02:43

In order to find users in the database who are similar to a given user we need to define a similarity metric. Euclidean distance score is one such metric that we can use to compute the distance between data points. Let’s look at this in more detail in this video.

Computing the Euclidean Distance Score
02:08
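
A sketch of the score (the user names and ratings are invented for illustration): compute the Euclidean distance over the items both users rated, then map it into (0, 1] so that identical users score 1.0.

```python
import math

# Euclidean distance score between two users' rating dictionaries.
def euclidean_score(ratings_a, ratings_b):
    common = set(ratings_a) & set(ratings_b)    # items both users rated
    if not common:
        return 0.0
    squared = sum((ratings_a[i] - ratings_b[i]) ** 2 for i in common)
    return 1.0 / (1.0 + math.sqrt(squared))

# Hypothetical users with identical tastes.
alice = {"item1": 4.0, "item2": 3.0, "item3": 5.0}
bob = {"item1": 4.0, "item2": 3.0, "item3": 5.0}
```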

The Euclidean distance score is a good metric, but it has some shortcomings. Hence, Pearson correlation score is frequently used in recommendation engines. Let's see how to compute it.

Computing the Pearson Correlation Score
01:55

One of the most important tasks in building a recommendation engine is finding users that are similar. Let's see how to do this in this video.

Finding Similar Users in a Dataset
01:35

Now that we’ve built all the different parts of a recommendation engine, we are ready to generate movie recommendations. Let’s see how to do that in this video.

Generating Movie Recommendations
02:34
Analyzing Text Data
9 Lectures 27:35

With tokenization we can define our own conditions to divide the input text into meaningful tokens. This gives us the solution for dividing a chunk of text into words or into sentences. Let's take a look at how to do this.

Preview 03:00

During text analysis, it's useful to reduce words to their base forms so that we can extract statistics about the overall text. This can be achieved with stemming, which uses a heuristic process to cut off the ends of words. Let's see how to do this in Python.

Stemming Text Data
02:22
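
The heuristic flavor of stemming can be sketched with a deliberately crude suffix stripper. This is not a real stemmer; the video presumably uses the Porter, Lancaster, or Snowball stemmers from NLTK, which apply far more careful rules.

```python
# A toy suffix-stripping "stemmer", only to illustrate the heuristic
# idea behind real stemmers; it keeps at least three characters.
def naive_stem(word):
    for suffix in ("ing", "ed", "es", "s"):
        if word.endswith(suffix) and len(word) - len(suffix) >= 3:
            return word[: len(word) - len(suffix)]
    return word
```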

Sometimes the base words that we obtain using stemmers don't really make sense. Lemmatization solves this problem by using a vocabulary and morphological analysis of words to remove inflectional endings. Let's take a look at how to do this in this video.

Converting Text to Its Base Form Using Lemmatization
02:11

When you deal with a really large text document, you need to divide it into chunks for further analysis. In this video, we will divide the input text into a number of pieces, where each piece has a fixed number of words.

Dividing Text Using Chunking
02:03

When we deal with text documents that contain millions of words, we need to convert them into some kind of numeric representation so as to make them usable for machine learning algorithms. A bag-of-words model is what helps us achieve this task quite easily.

Building a Bag-of-Words Model
02:58
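
A minimal sketch of the model (scikit-learn's CountVectorizer adds tokenization options and sparse output on top of this; the two documents are toy data): build a vocabulary over the corpus, then represent each document as a vector of word counts.

```python
# Bag-of-words: one count vector per document over a shared vocabulary.
def bag_of_words(docs):
    vocab = sorted({w for doc in docs for w in doc.lower().split()})
    index = {w: i for i, w in enumerate(vocab)}
    vectors = []
    for doc in docs:
        counts = [0] * len(vocab)
        for w in doc.lower().split():
            counts[index[w]] += 1
        vectors.append(counts)
    return vocab, vectors

vocab, vectors = bag_of_words(["the cat sat", "the cat ate the fish"])
```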

The goal of text classification is to categorize text documents into different classes. This is an extremely important analysis technique in NLP. Let us see how we can build a text classifier for this purpose.

Building a Text Classifier
04:43

Identifying the gender of a name is an interesting task in NLP, and gender recognition is part of many artificial intelligence applications. Let us see how to identify gender in Python.

Identifying the Gender
02:17

How could we discover the feelings or sentiments of different people about a particular topic? This video helps us to analyze that.

Analyzing the Sentiment of a Sentence
03:09

With topic modeling, we can uncover some hidden thematic structure in a collection of documents. This will help us in organizing our documents in a better way so that we can use them for analysis. Let’s see how we can do it!

Identifying Patterns in Text Using Topic Modelling
04:52
Speech Recognition
7 Lectures 16:15

Reading an audio file and visualizing the signal is a good starting point that gives us a good understanding of the basic structure of audio signals. So let us see in this video how we could do it!

Preview 02:34

Audio signals consist of a complex mixture of sine waves of different frequencies, amplitudes, and phases. A lot of information is hidden in the frequency content of an audio signal, so it’s necessary to transform the signal into the frequency domain. Let’s see how to do this.

Transforming Audio Signals into the Frequency Domain
02:09
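
The transformation can be sketched with NumPy's FFT (the sampling rate and test tone are invented for illustration): a pure 50 Hz sine sampled at 1 kHz should produce a single spectral peak at 50 Hz.

```python
import numpy as np

fs = 1000                                    # sampling rate in Hz
t = np.arange(1000) / fs                     # one second of sample times
signal = np.sin(2 * np.pi * 50 * t)          # pure 50 Hz tone
spectrum = np.abs(np.fft.rfft(signal))       # magnitude spectrum
freqs = np.fft.rfftfreq(len(signal), 1 / fs) # frequency of each bin
peak_hz = freqs[np.argmax(spectrum)]         # dominant frequency
```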

We can use NumPy to generate audio signals. As we know, audio signals are complex mixtures of sinusoids. Let’s see how we can generate audio signals with custom parameters.

Generating Audio Signals with Custom Parameters
01:45

Music has been explored for centuries, and technology has set new horizons to play with it. We can also create musical notes in Python. Let’s see how we can do this.

Synthesizing Music
02:10

When we want to use signals as input data for analysis, we need to convert them into the frequency domain. So, let’s get hands-on with it!

Extracting Frequency Domain Features
02:06

A hidden Markov Model represents probability distributions over sequences of observations. It allows you to find the hidden states so that you can model the signal. Let us explore how we can use it to perform speech recognition.

Building Hidden Markov Models
02:19

This video will walk you through building a speech recognizer by using the audio files in a database. We will use seven different words, where each word has 15 audio files. Let’s go ahead and do it!

Building a Speech Recognizer
03:12
Dissecting Time Series and Sequential Data
7 Lectures 19:56

Let’s understand how to convert a sequence of observations into time series data and visualize it. We will use a library called pandas to analyze time series data. At the end of this video, you will be able to transform data into the time series format.

Preview 03:07
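
A small illustration of this step, assuming pandas is available (the observation values and dates are made up): attach a DatetimeIndex to a plain sequence, and you get date-based slicing and the usual statistics for free.

```python
import pandas as pd

# Toy observations turned into a daily time series.
observations = [10, 12, 9, 14, 11, 13]
ts = pd.Series(observations,
               index=pd.date_range("2017-01-01", periods=len(observations),
                                   freq="D"))
window = ts["2017-01-02":"2017-01-04"]   # slice by date labels (inclusive)
```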

Extracting information from various intervals in time series data and using dates to handle subsets of our data are important tasks in data mining. Let’s see how we can slice time series data using Python.

Slicing Time Series Data
01:31

You can filter the data in many different ways. The pandas library allows you to operate on time series data in any way that you want. Let's see how to operate on time series data.

Operating on Time Series Data
01:42

One of the main reasons that we want to analyze time series data is to extract interesting statistics from it. This provides a lot of information regarding the nature of the data. Let’s see how to extract these stats.

Extracting Statistics from Time Series
02:29

Hidden Markov Models are really powerful when it comes to sequential data analysis. They are used extensively in finance, speech analysis, weather forecasting, sequencing of words, and so on. We are often interested in uncovering hidden patterns that appear over time. Let’s see how we can use it.

Building Hidden Markov Models for Sequential Data
04:15

Conditional Random Fields (CRFs) are probabilistic models used to analyze structured data, and also to label and segment sequential data. Let us see how we can use them on our input dataset!

Building Conditional Random Fields for Sequential Text Data
04:27

This video will get you hands-on with analyzing stock market data and understanding the fluctuations in the stocks of different companies. So let’s see how to do this!

Analyzing Stock Market Data with Hidden Markov Models
02:25
Image Content Analysis
8 Lectures 22:17

OpenCV is the world's most popular computer vision library. It enables us to analyze and manipulate images in many ways. Let’s see how to work with it!

Preview 03:07

When working with images, it is essential to detect the edges to process the image and perform different operations with it. Let’s see how to detect edges of the input image in Python.

Detecting Edges
02:47

The human eye likes contrast! This is the reason that almost all camera systems use histogram equalization to make images look nice. This video will walk you through the use of histogram equalization in Python.

Histogram Equalization
02:30
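
The technique can be sketched from scratch with NumPy (the video itself presumably uses OpenCV's equalizeHist, which does the same thing): build the cumulative histogram of gray levels and use it as a lookup table that stretches them over the full 0-255 range.

```python
import numpy as np

# Histogram equalization for an 8-bit grayscale image.
def equalize(image):
    hist = np.bincount(image.ravel(), minlength=256)   # gray-level counts
    cdf = hist.cumsum()                                # cumulative histogram
    cdf_min = cdf[cdf > 0][0]                          # first occupied level
    lut = np.clip(np.round((cdf - cdf_min)
                           / (image.size - cdf_min) * 255), 0, 255)
    return lut.astype(np.uint8)[image]                 # remap every pixel

# A toy low-contrast image using only gray levels 50..65.
low_contrast = np.arange(50, 66, dtype=np.uint8).reshape(4, 4)
equalized = equalize(low_contrast)
```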

One of the essential steps in image analysis is to identify and extract the salient features for computer vision tasks. This can be achieved with corner detection techniques and SIFT feature points in Python. This video will enable you to achieve this goal!

Detecting Corners and SIFT Feature Points
03:46

When we build object recognition systems, we may want to use a different feature detector before we extract features using SIFT; that will give us the flexibility to cascade different blocks to get the best possible performance. Let’s see how to do it with Star feature detector.

Building a Star Feature Detector
01:34

Have you ever wondered how you could build image signatures? If yes, this video will take you through creating features by using visual codebook, which will enable you to achieve this goal. So, let’s dive in and watch it!

Creating Features Using Visual Codebook and Vector Quantization
04:10

We can construct a bunch of decision trees that are based on our image signatures, and then train the forest to make the right decision. Extremely Random Forests (ERFs) are used extensively for this purpose. Let’s dive in and see how to do it!

Training an Image Classifier Using Extremely Random Forests
02:30

When dealing with images, we often need to determine the contents of unknown images. This video will enable you to build an object recognizer that can identify what an unknown image contains. So, let’s see it!

Building an object recognizer
01:53
Biometric Face Recognition
7 Lectures 17:21

Webcams are widely used for real-time communications and for biometric data analysis. This video will walk you through capturing and processing video from your webcam.

Preview 01:58

Haar cascade extracts a large number of simple features from the image at multiple scales. The simple features are basically edge, line, and rectangle features that are very easy to compute. It is then trained by creating a cascade of simple classifiers. Let’s see how we can detect a face with it!

Building a Face Detector using Haar Cascades
02:40

The Haar cascades method can be extended to detect all types of objects. Let's see how to use it to detect the eyes and nose in the input video.

Building Eye and Nose Detectors
01:54

Principal Component Analysis (PCA) is a dimensionality reduction technique that's used very frequently in computer vision and machine learning. It’s used to reduce the dimensionality of the data before we train a system. This video will take you through the use of PCA.

Performing Principal Component Analysis
02:17
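
The technique can be sketched via the eigendecomposition of the covariance matrix (scikit-learn's PCA wraps the same idea using an SVD; the 2-D points below are toy data): data lying almost on a line should have nearly all its variance in the first component.

```python
import numpy as np

# PCA: project centered data onto the top eigenvectors of its covariance.
def pca(X, n_components):
    centered = X - X.mean(axis=0)
    cov = np.cov(centered, rowvar=False)
    eigvals, eigvecs = np.linalg.eigh(cov)
    order = np.argsort(eigvals)[::-1]                 # largest variance first
    projected = centered @ eigvecs[:, order[:n_components]]
    explained = eigvals[order] / eigvals.sum()        # variance ratios
    return projected, explained

# Toy points almost on the line y = x.
X = np.array([[1.0, 1.0], [2.0, 2.1], [3.0, 2.9], [4.0, 4.0]])
projected, explained = pca(X, n_components=1)
```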

What if you need to reduce the number of dimensions in data whose structure is not linear? Plain PCA, which we used in the last video, is ineffective in such situations. Let us see how we can tackle this with kernel PCA.

Performing Kernel Principal Component Analysis
02:02

Data and signals are generally received in raw form, mixed with unwanted components. Separating the underlying sources is essential before we can work with these signals. This video will enable you to achieve this goal.

Performing Blind Source Separation
02:16

We are now finally ready to build a face recognizer! Let’s see how to do it!

Building a Face Recognizer Using a Local Binary Patterns Histogram
04:14
2 More Sections
About the Instructor
Packt Publishing
3.9 Average rating
8,109 Reviews
58,415 Students
686 Courses
Tech Knowledge in Motion

Packt has been committed to developer learning since 2004. A lot has changed in software since then - but Packt has remained responsive to these changes, continuing to look forward at the trends and tools defining the way we work and live. And how to put them to work.

With an extensive library of content - more than 4000 books and video courses - Packt's mission is to help developers stay relevant in a rapidly changing world. From new web frameworks and programming languages to cutting-edge data analytics and DevOps, Packt takes software professionals in every field to what's important to them now.

From skills that will help you to develop and future-proof your career to immediate solutions to everyday tech challenges, Packt is a go-to resource to make you a better, smarter developer.

Packt Udemy courses continue this tradition, bringing you comprehensive yet concise video courses straight from the experts.