Cluster Analysis and Unsupervised Machine Learning in Python

Data science techniques for pattern recognition, data mining, k-means clustering, hierarchical clustering, and KDE.
4.6 (51 ratings)
1,318 students enrolled
  • Lectures: 22
  • Length: 1.5 hours
  • Skill Level: Beginner
  • Languages: English, with captions
  • Includes: Lifetime access
    30 day money back guarantee!
    Available on iOS and Android


About This Course

Published 4/2016. English. Closed captions available.

Course Description

Cluster analysis is a staple of unsupervised machine learning and data science.

It is very useful for data mining and big data because it automatically finds patterns in the data, without the need for labels, unlike supervised machine learning.

In a real-world environment, you can imagine that a robot or an artificial intelligence won’t always have access to the optimal answer, or maybe there isn’t an optimal correct answer. You’d want that robot to be able to explore the world on its own, and learn things just by looking for patterns.

Do you ever wonder how we get the data that we use in our supervised machine learning algorithms?

We always seem to have a nice CSV or a table, complete with Xs and corresponding Ys.

If you haven’t been involved in acquiring data yourself, you might not have thought about this, but someone has to make this data!

Those Ys have to come from somewhere, and a lot of the time that involves manual labor.

Sometimes, you don’t have access to this kind of information or it is infeasible or costly to acquire.

But you still want to have some idea of the structure of the data. If you're doing data analytics, automating pattern recognition in your data would be invaluable.

This is where unsupervised machine learning comes into play.

In this course we are first going to talk about clustering. This is where instead of training on labels, we try to create our own labels! We’ll do this by grouping together data that looks alike.

There are two methods of clustering we’ll talk about: k-means clustering and hierarchical clustering.
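As a rough sketch of the first of these (not the course's implementation; the data and variable names here are made up for illustration), the k-means loop alternates between assigning each point to its nearest centroid and moving each centroid to the mean of its assigned points:

```python
import numpy as np

def kmeans(X, k, n_iter=50, seed=0):
    """Minimal hard k-means: alternate nearest-centroid assignment
    and centroid update. Illustrative only."""
    rng = np.random.default_rng(seed)
    # initialize centroids by picking k distinct data points
    centroids = X[rng.choice(len(X), size=k, replace=False)]
    for _ in range(n_iter):
        # distance from every point to every centroid
        dists = np.linalg.norm(X[:, None, :] - centroids[None, :, :], axis=2)
        labels = dists.argmin(axis=1)  # assignment step
        # update step: each centroid becomes the mean of its points
        centroids = np.array([X[labels == j].mean(axis=0) for j in range(k)])
    return labels, centroids

rng = np.random.default_rng(0)
# two well-separated 2-D blobs
X = np.vstack([rng.normal(0, 1, (50, 2)), rng.normal(10, 1, (50, 2))])
labels, centroids = kmeans(X, k=2)
```

On well-separated data like this, the two blobs end up in two different clusters; the course covers what happens when the data is less cooperative.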

Next, because in machine learning we like to talk about probability distributions, we’ll go into Gaussian mixture models and kernel density estimation, where we talk about how to "learn" the probability distribution of a set of data.
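For a quick preview of what "learning a probability distribution" looks like, Scipy ships a ready-made kernel density estimator. A minimal sketch on synthetic data (the course derives these ideas from scratch rather than just calling a library):

```python
import numpy as np
from scipy.stats import gaussian_kde

rng = np.random.default_rng(0)
# samples from a mixture of two Gaussians, centered at 0 and 5
data = np.concatenate([rng.normal(0, 1, 500), rng.normal(5, 1, 500)])

kde = gaussian_kde(data)       # bandwidth chosen automatically (Scott's rule)
grid = np.linspace(-4, 9, 200)
density = kde(grid)            # estimated pdf evaluated on the grid
```

Plotting `density` against `grid` would show two modes, near 0 and near 5, recovered without any labels.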

One interesting fact is that under certain conditions, Gaussian mixture models and k-means clustering are exactly the same! We’ll prove how this is the case.
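To give a flavor of that equivalence (the course gives the actual argument): in a GMM E-step with equal mixture weights and a shared spherical covariance, shrinking the variance toward zero turns the soft responsibilities into hard 0/1 assignments that match nearest-centroid (k-means) assignment. A small numerical sketch with made-up data and fixed means:

```python
import numpy as np

rng = np.random.default_rng(1)
X = np.vstack([rng.normal(0, 1, (50, 2)), rng.normal(8, 1, (50, 2))])
mu = np.array([[0.0, 0.0], [8.0, 8.0]])   # fixed component means

def responsibilities(X, mu, var):
    """E-step of a GMM with equal weights and shared covariance var * I."""
    sq = ((X[:, None, :] - mu[None, :, :]) ** 2).sum(axis=2)
    log_r = -sq / (2 * var)
    log_r -= log_r.max(axis=1, keepdims=True)  # for numerical stability
    r = np.exp(log_r)
    return r / r.sum(axis=1, keepdims=True)

soft = responsibilities(X, mu, var=100.0)   # large variance: genuinely soft
hard = responsibilities(X, mu, var=1e-3)    # variance -> 0: effectively 0/1
```

With `var=1e-3` every row of `hard` puts essentially all of its mass on the nearest mean, which is exactly the k-means assignment rule.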

All the algorithms we’ll talk about in this course are staples in machine learning and data science, so if you want to know how to automatically find patterns in your data with data mining and pattern extraction, without needing someone to put in manual work to label that data, then this course is for you.

All the materials for this course are FREE. You can download and install Python, Numpy, and Scipy with simple commands on Windows, Linux, or Mac.

This course focuses on "how to build and understand", not just "how to use". Anyone can learn to use an API in 15 minutes after reading some documentation. It's not about "remembering facts", it's about "seeing for yourself" via experimentation. It will teach you how to visualize what's happening in the model internally. If you want more than just a superficial look at machine learning models, this course is for you.


NOTES:

All the code for this course can be downloaded from my github: /lazyprogrammer/machine_learning_examples

In the directory: unsupervised_class

Make sure you always "git pull" so you have the latest version!
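Assuming the standard GitHub URL form for that path, getting and updating the code looks like:

```shell
# clone the course repository (first time only)
git clone https://github.com/lazyprogrammer/machine_learning_examples.git
cd machine_learning_examples/unsupervised_class

# later, pull the latest changes before working through a lecture
git pull
```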


HARD PREREQUISITES / KNOWLEDGE YOU ARE ASSUMED TO HAVE:

  • calculus
  • linear algebra
  • probability
  • Python coding: if/else, loops, lists, dicts, sets
  • Numpy coding: matrix and vector operations, loading a CSV file
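As a quick self-check on the Numpy prerequisites: if every line below makes sense to you, you're in good shape (the CSV here is an in-memory string purely for illustration):

```python
import io
import numpy as np

# matrix and vector operations
A = np.array([[1.0, 2.0], [3.0, 4.0]])
v = np.array([1.0, -1.0])
print(A @ v)          # matrix-vector product -> [-1. -1.]
print(A.T)            # transpose
print((A * A).sum())  # elementwise square, then sum -> 30.0

# loading a CSV (from an in-memory string instead of a file on disk)
csv = io.StringIO("1.0,2.0\n3.0,4.0\n")
X = np.loadtxt(csv, delimiter=",")
print(X.shape)        # (2, 2)
```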


TIPS (for getting through the course):

  • Watch it at 2x.
  • Take handwritten notes. This will drastically increase your ability to retain the information.
  • Write down the equations. If you don't, I guarantee it will just look like gibberish.
  • Ask lots of questions on the discussion board. The more the better!
  • Realize that most exercises will take you days or weeks to complete.


USEFUL COURSE ORDERING:

  • (The Numpy Stack in Python)
  • Linear Regression in Python
  • Logistic Regression in Python
  • (Supervised Machine Learning in Python)
  • Deep Learning in Python
  • Practical Deep Learning in Theano and TensorFlow
  • Convolutional Neural Networks in Python
  • (Easy NLP)
  • (Cluster Analysis and Unsupervised Machine Learning)
  • Unsupervised Deep Learning
  • (Hidden Markov Models)
  • Recurrent Neural Networks in Python
  • Natural Language Processing with Deep Learning in Python


What are the requirements?

  • Know how to code in Python and Numpy
  • Install Numpy and Scipy

What am I going to get from this course?

  • Understand the regular K-Means algorithm
  • Understand and enumerate the disadvantages of K-Means Clustering
  • Understand the soft or fuzzy K-Means Clustering algorithm
  • Implement Soft K-Means Clustering in Code
  • Understand Hierarchical Clustering
  • Explain algorithmically how Hierarchical Agglomerative Clustering works
  • Apply Scipy's Hierarchical Clustering library to data
  • Understand how to read a dendrogram
  • Understand the different distance metrics used in clustering
  • Understand the difference between single linkage, complete linkage, Ward linkage, and UPGMA
  • Understand the Gaussian mixture model and how to use it for density estimation
  • Write a GMM in Python code
  • Explain when GMM is equivalent to K-Means Clustering
  • Explain the expectation-maximization algorithm
  • Understand how GMM overcomes some disadvantages of K-Means
  • Understand the Singular Covariance problem and how to fix it
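A few of these outcomes can be previewed with Scipy's built-in hierarchical clustering tools. A minimal sketch on synthetic data (the course also builds the algorithm up from scratch rather than only calling the library):

```python
import numpy as np
from scipy.cluster.hierarchy import linkage, fcluster

rng = np.random.default_rng(0)
# two well-separated 2-D blobs
X = np.vstack([rng.normal(0, 0.5, (20, 2)), rng.normal(5, 0.5, (20, 2))])

# build the merge tree; 'ward', 'single', 'complete', and 'average' (UPGMA)
# are among the linkage methods covered in the course
Z = linkage(X, method='ward')

# cut the tree into 2 flat clusters
labels = fcluster(Z, t=2, criterion='maxclust')
```

Passing `Z` to `scipy.cluster.hierarchy.dendrogram` would draw the tree that the dendrogram lectures teach you to read.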

What is the target audience?

  • Students and professionals interested in machine learning and data science
  • People who want an introduction to unsupervised machine learning and cluster analysis
  • People who want to know how to write their own clustering code
  • Professionals interested in data mining big data sets to look for patterns automatically

What do you get with this course?

Not for you? No problem.
30 day money back guarantee.

Forever yours.
Lifetime access.

Learn on the go.
Desktop, iOS and Android.

Get rewarded.
Certificate of completion.

Curriculum

Section 1: Introduction to Unsupervised Learning
Introduction and Outline
Preview
02:22
What is unsupervised learning used for?
Preview
04:32
Section 2: K-Means Clustering
Visual Walkthrough of the K-Means Clustering Algorithm
Preview
02:58
Soft K-Means
02:20
The K-Means Objective Function
01:39
Soft K-Means in Python Code
10:03
Visualizing Each Step of K-Means
02:18
Examples of where K-Means can fail
07:32
Disadvantages of K-Means Clustering
02:13
How to Evaluate a Clustering (Purity, Davies-Bouldin Index)
06:33
Using K-Means on Real Data: MNIST
05:00
Section 3: Hierarchical Clustering
Visual Walkthrough of Agglomerative Hierarchical Clustering
Preview
02:35
03:38

Learn about the different possible distance metrics that can be used for both k-means and agglomerative clustering, and what constitutes a valid distance metric. Learn about the different linkage methods for hierarchical clustering, like single linkage, complete linkage, UPGMA, and Ward linkage.

Using Hierarchical Clustering in Python and Interpreting the Dendrogram
04:38
Section 4: Gaussian Mixture Models (GMMs)
Description of the Gaussian Mixture Model and How to Train a GMM
03:04
Comparison between GMM and K-Means
01:44
Write a Gaussian Mixture Model in Python Code
09:59
Practical Issues with GMM / Singular Covariance
02:55
Kernel Density Estimation
02:10
Expectation-Maximization
02:01
Future Unsupervised Learning Algorithms You Will Learn
01:01
Section 5: Appendix
How to install Numpy, Scipy, Matplotlib, Pandas, IPython, Theano, and TensorFlow
17:22


Instructor Biography

Lazy Programmer Inc., Data scientist and big data engineer

I am a data scientist, big data engineer, and full stack software engineer.

For my master's thesis I worked on brain-computer interfaces using machine learning. These help non-verbal and non-mobile persons communicate with their families and caregivers.

I have worked in online advertising and digital media as both a data scientist and big data engineer, and built various high-throughput web services around said data. I've created new big data pipelines using Hadoop/Pig/MapReduce. I've created machine learning models to predict click-through rate, news feed recommender systems using linear regression, Bayesian Bandits, and collaborative filtering and validated the results using A/B testing.

I have taught undergraduate and graduate students in data science, statistics, machine learning, algorithms, calculus, computer graphics, and physics at universities such as Columbia University, NYU, Humber College, and The New School.

Multiple businesses have benefitted from my web programming expertise. I do all the backend (server), frontend (HTML/JS/CSS), and operations/deployment work. Some of the technologies I've used are: Python, Ruby/Rails, PHP, Bootstrap, jQuery (Javascript), Backbone, and Angular. For storage/databases I've used MySQL, Postgres, Redis, MongoDB, and more.
