From 0 to 1: Machine Learning, NLP & Python - Cut to the Chase

A down-to-earth, shy but confident take on machine learning techniques that you can put to work today
4.1 (396 ratings)
3,579 students enrolled
  • Lectures: 87
  • Length: 20.5 hours
  • Skill Level: All Levels
  • Languages: English, with closed captions
  • Includes Lifetime access
    30 day money back guarantee!
    Available on iOS and Android
    Certificate of Completion


About This Course

Published 11/2015. English, closed captions available.

Course Description

Prerequisites: none. Knowledge of some undergraduate-level mathematics would help but is not mandatory. Working knowledge of Python would be helpful if you want to run the source code that is provided.

Taught by a Stanford-educated ex-Googler and an IIT- and IIM-educated ex-Flipkart lead analyst. This team has decades of practical experience in quant trading, analytics and e-commerce.

This course is a down-to-earth, shy but confident take on machine learning techniques that you can put to work today.

Let’s parse that.

The course is down-to-earth: it makes everything as simple as possible, but not simpler.

The course is shy but confident: it is authoritative, drawn from decades of practical experience, but shies away from needlessly complicating things.

You can put ML to work today: if Machine Learning is a car, this car will have you driving today. It won't tell you what the carburetor is.

The course is very visual: most of the techniques are explained with the help of animations to help you understand better.

This course is practical as well: there are hundreds of lines of commented source code that can be used directly to implement natural language processing and machine learning for text summarization and text classification in Python.

The course is also quirky. The examples are irreverent. Lots of little touches: repetition, zooming out so we remember the big picture, active learning with plenty of quizzes. There’s also a peppy soundtrack, and art - all shown by studies to improve cognition and recall.

What's Covered:

Machine Learning:

Supervised/Unsupervised learning, Classification, Clustering, Association Detection, Anomaly Detection, Dimensionality Reduction, Regression.

Naive Bayes, K-Nearest Neighbours, Support Vector Machines, Artificial Neural Networks, K-Means, Hierarchical clustering, Principal Components Analysis, Linear regression, Logistic regression, Random variables, Bayes' theorem, Bias-variance tradeoff

Natural Language Processing with Python:

Corpora, stopwords, sentence and word parsing, auto-summarization, sentiment analysis (as a special case of classification), TF-IDF, Document Distance, Text summarization, Text classification with Naive Bayes and K-Nearest Neighbours and Clustering with K-Means

Sentiment Analysis:

Why it's useful, approaches to solving it (Rule-Based and ML-Based), Training, Feature Extraction, Sentiment Lexicons, Regular Expressions, the Twitter API, Sentiment Analysis of Tweets with Python

A Note on Python: The code-alongs in this class all use Python 2.7. Source code (with copious amounts of comments) is attached as a resource with all the code-alongs. The source code has been provided for both Python 2 and Python 3 wherever possible.

Mail us about anything - anything! - and we will always reply :-)

What are the requirements?

  • No prerequisites: knowledge of some undergraduate-level mathematics would help but is not mandatory. Working knowledge of Python would be helpful if you want to run the source code that is provided.

What am I going to get from this course?

  • Identify situations that call for the use of Machine Learning
  • Understand which type of Machine learning problem you are solving and choose the appropriate solution
  • Use Machine Learning and Natural Language Processing to solve problems like text classification and text summarization in Python

What is the target audience?

  • Yep! Analytics professionals, modelers, big data professionals who haven't had exposure to machine learning
  • Yep! Engineers who want to understand or learn machine learning and apply it to problems they are solving
  • Yep! Product managers who want to have intelligent conversations with data scientists and engineers about machine learning
  • Yep! Tech executives and investors who are interested in big data, machine learning or natural language processing
  • Yep! MBA graduates or business professionals who are looking to move to a heavily quantitative role

What do you get with this course?

Not for you? No problem.
30 day money back guarantee.

Forever yours.
Lifetime access.

Learn on the go.
Desktop, iOS and Android.

Get rewarded.
Certificate of completion.

Curriculum

Section 1: Introduction
03:17

We - the course instructors - start with introductions. We are a team that has studied at Stanford, IIT Madras, IIM Ahmedabad and spent several years working in top tech companies, including Google and Flipkart.

Next, we talk about the target audience for this course: Analytics professionals, modelers and big data professionals certainly, but also Engineers, Product managers, Tech Executives and Investors, or anyone who has some curiosity about machine learning.

If Machine Learning is a car, this class will teach you how to drive. By the end of this class, students will be able to spot situations where machine learning can be used and deploy the appropriate solutions. Product managers and executives will learn enough of the 'how' to be able to converse intelligently with their data science counterparts, without being constrained by it.

This course is practical as well: there are hundreds of lines of commented source code that can be used directly to implement natural language processing and machine learning for text summarization and text classification in Python.

Section 2: Jump right in : Machine learning for Spam detection
16:31

Machine learning is quite the buzzword these days. While it's been around for a long time, today its applications are wide and far-reaching - from computer science to social science, quant trading and even genetics. From the outside, it seems like a very abstract science that is heavy on the math and tough to visualize. But it is not rocket science. Machine learning is like any other science: if you approach it from first principles and visualize what is happening, you will find that it is not that hard. So let's get right into it: we will take an example and see what machine learning is and why it is so useful.

17:01

Machine learning usually involves a lot of terms that sound really obscure. We'll see a real-life implementation of a machine learning algorithm (Naive Bayes), and by the end of it you should be able to speak some of the language of ML with confidence.
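To give a flavor of what such an implementation looks like, here is a minimal sketch using scikit-learn on a made-up toy dataset (the course's attached source code is its own, and may differ):

# A minimal Naive Bayes spam-detection sketch (scikit-learn, Python 3).
# The tiny dataset below is made up purely for illustration.
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.naive_bayes import MultinomialNB

train_messages = ["win a free prize now", "lowest price pills",
                  "meeting at noon tomorrow", "lunch with the team"]
train_labels = ["spam", "spam", "ham", "ham"]

vectorizer = CountVectorizer()              # bag-of-words features
X_train = vectorizer.fit_transform(train_messages)

classifier = MultinomialNB()
classifier.fit(X_train, train_labels)

X_test = vectorizer.transform(["claim your free prize"])
print(classifier.predict(X_test))           # e.g. ['spam']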

17:04

We have gotten our feet wet and seen the implementation of one ML solution to spam detection - let's venture a little further and see some other ways to solve the same problem. We'll see how K-Nearest Neighbors and Support Vector Machines can be used to solve spam detection.

17:26

So far we have been slowly getting comfortable with machine learning - we took one example and saw a few different approaches. That was just the tip of the iceberg - this class is an aerial maneuver: we will scout ahead and see the different classes of problems that Machine Learning can solve and that we will cover in this class.

Section 3: Naive Bayes Classifier
20:10
Many popular machine learning techniques are probabilistic in nature, and having some working knowledge of probability helps. We'll cover random variables, probability distributions and the normal distribution.
18:36
We have been learning some fundamentals that will help us with probabilistic concepts in Machine Learning. In this class, we will learn about conditional probability and Bayes' theorem, which is the foundation of many ML techniques.
08:49
The Naive Bayes Classifier is a probabilistic classifier. We have built the foundation to understand what goes on under the hood - let's understand how the Naive Bayes classifier uses Bayes' theorem.
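For reference, the math in play is Bayes' theorem, plus the "naive" assumption that words occur independently given the class (here w_1, ..., w_n denote the words of a message):

P(A \mid B) = \frac{P(B \mid A)\, P(A)}{P(B)}

P(\text{spam} \mid w_1, \dots, w_n) \propto P(\text{spam}) \prod_{i=1}^{n} P(w_i \mid \text{spam})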
14:03
We will see how the Naive Bayes classifier can be used with an example.
Section 4: K-Nearest Neighbors
13:09
Let's understand the k-Nearest Neighbors setup with a visual representation of how the algorithm works.
14:47
There are a few wrinkles in k-Nearest Neighbors. These are just things to keep in mind if and when you decide to implement it.
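As an illustration, a minimal k-NN sketch with made-up points (scikit-learn; the choice of k is exactly one of those wrinkles - too small is noisy, too large blurs class boundaries):

# A minimal k-Nearest Neighbors sketch on a toy 2-D dataset.
from sklearn.neighbors import KNeighborsClassifier

X_train = [[1, 1], [1, 2], [5, 5], [6, 5]]
y_train = ["red", "red", "blue", "blue"]

knn = KNeighborsClassifier(n_neighbors=3)
knn.fit(X_train, y_train)
print(knn.predict([[2, 1]]))   # the 3 nearest points vote: likely ['red']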
Section 5: Support Vector Machines
08:16

We have been talking about different classifier algorithms. We'll learn about Support Vector Machines which are linear classifiers.

16:23
The Support Vector Machines algorithm can be framed as an optimization problem. The kernel trick can be used along with SVM to perform non-linear classification.
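Here is a minimal sketch of the kernel trick in action (scikit-learn, made-up XOR-style data that no straight line can separate):

# An SVM with the RBF kernel handles non-linearly-separable classes.
from sklearn.svm import SVC

X = [[0, 0], [1, 1], [0, 1], [1, 0]]   # XOR-like layout
y = [0, 0, 1, 1]

clf = SVC(kernel="rbf", gamma="scale")  # kernel='linear' would struggle here
clf.fit(X, y)
print(clf.predict([[0.9, 0.9]]))        # likely class 0, near the (1, 1) point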
Section 6: Clustering as a form of Unsupervised learning
19:07
Clustering helps us understand the patterns in a large set of data that we don't know much about. It is a form of unsupervised learning.
13:42
K-Means and DBSCAN are two very popular clustering algorithms. How do they work, and what are the key considerations?
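A minimal side-by-side sketch (scikit-learn, made-up points): K-Means needs the number of clusters up front, while DBSCAN grows clusters from dense regions and can flag outliers as noise.

import numpy as np
from sklearn.cluster import KMeans, DBSCAN

X = np.array([[1, 1], [1.2, 0.8], [5, 5], [5.1, 4.9], [9, 9]])

print(KMeans(n_clusters=2, n_init=10).fit_predict(X))
print(DBSCAN(eps=0.5, min_samples=2).fit_predict(X))  # [9, 9] -> -1 (noise)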
Section 7: Association Detection
09:12
It is all about finding relationships in the data - sometimes there are relationships that you would not intuitively expect to find. It is pretty powerful - so let's take a peek at what it does.
Section 8: Dimensionality Reduction
10:22

Data that you are working with can be noisy, garbled or difficult to make sense of. It can be so complicated that it's difficult to process efficiently. Dimensionality reduction to the rescue: it cleans up the noise and shows you a clear picture. Getting rid of unnecessary features also makes the computation simpler.

18:53
PCA is one of the most famous Dimensionality Reduction techniques. When you have data with a lot of variables and confusing interactions, PCA clears the air and finds the underlying causes.
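For a flavor of PCA in code, a minimal sketch (scikit-learn) where two highly correlated variables are compressed into a single component that captures almost all the variance:

import numpy as np
from sklearn.decomposition import PCA

X = np.array([[1, 2.1], [2, 3.9], [3, 6.2], [4, 8.1]])  # roughly y = 2x

pca = PCA(n_components=1)
X_reduced = pca.fit_transform(X)           # one column instead of two
print(pca.explained_variance_ratio_)       # close to 1.0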
Section 9: Artificial Neural Networks
11:18

Artificial Neural Networks are much misunderstood because of the name. We will see the Perceptron (a prototypical example of ANNs) and how it is analogous to a Support Vector Machine.

Section 10: Regression as a form of supervised learning
13:54
Regression can be used to predict the value of a variable, given some predictor variables. We'll see an example to understand its use and cover two popular methods: Linear and Logistic regression.
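A minimal sketch of both methods on made-up data (scikit-learn): linear regression predicts a continuous value, logistic regression a class probability.

from sklearn.linear_model import LinearRegression, LogisticRegression

X = [[1], [2], [3], [4]]

lin = LinearRegression().fit(X, [2.1, 3.9, 6.2, 8.1])
print(lin.predict([[5]]))                  # roughly 10

log = LogisticRegression().fit(X, [0, 0, 1, 1])
print(log.predict_proba([[5]])[0, 1])      # probability of class 1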
10:13
In this class, we will talk about some trade-offs which we have to be aware of when we choose our training data and model.
Section 11: Natural Language Processing and Python
09:00

IPython, which comes bundled with the Anaconda distribution, is an interactive Python environment. The best part about it is the ease with which one can install packages from within it - one line is virtually always enough: just use '!pip'.
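For example, from an IPython/Jupyter prompt (the package names here are just examples):

# The leading '!' runs a shell command from inside IPython.
!pip install nltk beautifulsoup4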

07:26

Natural Language Processing is a serious application for all the Machine Learning techniques we have been using. Let's get our feet wet by understanding a few common NLP problems and tasks. We'll get familiar with NLTK - an awesome Python toolkit for NLP.

14:14

We'll continue exploring NLTK and all the cool functionality it brings out of the box - tokenization, Parts-of-Speech tagging, stemming, stopword removal, etc.
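A minimal sketch of those four operations with NLTK (the download calls fetch the required models/corpora once; very recent NLTK versions may also ask for 'punkt_tab'):

import nltk
from nltk.corpus import stopwords
from nltk.stem import PorterStemmer

nltk.download("punkt")
nltk.download("averaged_perceptron_tagger")
nltk.download("stopwords")

text = "Machine learning is not rocket science."
words = nltk.word_tokenize(text)
print(nltk.pos_tag(words))                      # parts of speech

stemmer = PorterStemmer()
stops = set(stopwords.words("english"))
print([stemmer.stem(w) for w in words if w.lower() not in stops])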

18:09

Web scraping is an integral part of NLP - it's how you prepare the text data that you will actually process. Web scraping can be a headache, but Beautiful Soup makes it elegant and intuitive.
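A minimal Beautiful Soup sketch; the URL is just a placeholder - substitute any article page you are allowed to scrape:

import urllib.request
from bs4 import BeautifulSoup

html = urllib.request.urlopen("https://example.com").read()
soup = BeautifulSoup(html, "html.parser")

print(soup.title.string)                              # page title
text = " ".join(p.get_text() for p in soup.find_all("p"))
print(text[:200])                                     # first bit of body text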

11:34
Auto-summarize newspaper articles from a website (the Washington Post). We'll use NLP techniques to remove stopwords, tokenize words and sentences, and compute term frequencies. The Python source code (with many comments) is attached as a resource.
18:33

Code along with us in Python - we'll use NLTK to compute the frequencies of words in an article.

11:28

Code along with us in Python - we'll use NLTK to compute the frequencies of words in an article and the importance of sentences in an article.
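The idea behind this frequency-based scoring, in a minimal self-contained sketch (made-up text; the course's attached code builds a fuller version):

import nltk
from nltk.corpus import stopwords

nltk.download("punkt"); nltk.download("stopwords")

article = ("Machine learning powers spam filters. "
           "Spam filters learn from labeled examples. "
           "The weather was pleasant today.")

stops = set(stopwords.words("english"))
words = [w.lower() for w in nltk.word_tokenize(article)
         if w.isalpha() and w.lower() not in stops]
freq = nltk.FreqDist(words)

# Score each sentence by the frequencies of the words it contains.
scores = {s: sum(freq[w.lower()] for w in nltk.word_tokenize(s))
          for s in nltk.sent_tokenize(article)}
print(max(scores, key=scores.get))   # the most "important" sentence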

10:23
Code along with us in Python - we'll use Beautiful Soup to parse an article downloaded from the Washington Post and then summarize it using the class we set up earlier.
19:29

Classify newspaper articles into tech and non-tech. We'll see how to scrape websites to build a corpus of articles. Use NLP techniques to do feature extraction and selection. Finally, apply the K-Nearest Neighbours algorithm to classify a test instance as Tech/NonTech. The Python source code (with many comments) is attached as a resource.

19:24

Classify newspaper articles into tech and non-tech. We'll see how to scrape websites to build a corpus of articles. Use NLP techniques to do feature extraction and selection. Finally, apply the Naive Bayes Classification algorithm to classify a test instance as Tech/NonTech. The Python source code (with many comments) is attached as a resource.

15:45
Code along with us in Python - we'll use BeautifulSoup to build a corpus of news articles
18:51

Code along with us in Python - we'll use NLTK to extract features from articles.

04:15
Code along with us in Python - we'll use KNN algorithm to classify articles into Tech/NonTech
08:08
Code along with us in Python - we'll use a Naive Bayes Classifier to classify articles into Tech/Non-Tech
11:03
See how search engines compute the similarity between documents. We'll represent a document as a vector, weight it with TF-IDF and see how cosine similarity or Euclidean distance can be used to compute the distance between two documents.
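A minimal document-distance sketch (scikit-learn, made-up documents): represent each document as a TF-IDF vector and compare vectors with cosine similarity.

from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

docs = ["the cat sat on the mat",
        "a cat lay on a mat",
        "stock markets fell sharply today"]

tfidf = TfidfVectorizer().fit_transform(docs)
print(cosine_similarity(tfidf[0], tfidf[1]))  # high: similar documents
print(cosine_similarity(tfidf[0], tfidf[2]))  # low: unrelated documents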
14:32

Create clusters of similar articles within a large corpus of articles. We'll scrape a blog to download all the blog posts and use TF-IDF to represent them as vectors. Finally, we'll perform K-Means clustering to identify 5 clusters of articles. The Python source code (with many comments) is attached as a resource.

08:32
Code along with us in Python - We'll cluster articles downloaded from a blog using the KMeans algorithm.
Section 12: Sentiment Analysis
02:36

Lots of new stuff coming up in the next few classes. Sentiment Analysis (or Opinion Mining) is a field of NLP that deals with extracting subjective information (positive/negative, like/dislike, emotions). Learn why it's useful and how to approach the problem. There are Rule-Based and ML-Based approaches. The details are really important - training data and feature extraction are critical. Sentiment Lexicons provide us with lists of words in different sentiment categories that we can use for building our feature set. All this is in the run-up to a serious project to perform Twitter Sentiment Analysis. We'll spend some time on Regular Expressions, which are pretty handy to know, as we'll see in our code-along.

17:17

As people spend more and more time on the internet and the influence of social media explodes, knowing what your customers are saying about you online becomes crucial. Sentiment Analysis comes in handy here. This is an NLP problem that can be approached in multiple ways; we examine a couple of rule-based approaches, one of which has become standard fare (VADER).

19:57

SVM and Naive Bayes are popular ML approaches to Sentiment Analysis. But the devil really is in the details. What do you use for training data? What features should you use? Getting these right is critical.

18:49

Sentiment Lexicons are a great help in solving problems where the subjectivity/emotion expressed by a word is important. SentiWordNet stands apart even among the popular sentiment lexicons (General Inquirer, LIWC, MPQA etc.), all of which are touched upon.

17:53

Regular expressions are a handy tool to have when you deal with text processing. They are a bit arcane, but pretty useful in the right situation. Understanding the operators from the basics helps you build up to constructing complex regexps.

05:41
re is the module in Python for dealing with regular expressions. It has functions to find a pattern, substitute a pattern, etc. within a string.
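A minimal sketch of both operations, on the kind of noisy text we'll meet in the Twitter project (the tweet is made up):

import re

tweet = "Loving this course!!! http://t.co/abc123 @ml_fan #ML"

print(re.search(r"#\w+", tweet).group())         # find: '#ML'
print(re.sub(r"http\S+|@\w+", "", tweet))        # substitute: strip URLs and mentions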
17:48

A serious project: accept a search term from a user and output the prevailing sentiment on Twitter for that search term. We'll use the Twitter API, SentiWordNet, SVM, NLTK and Regular Expressions - really work that coding muscle :)

20:00

We'll accept a search term from a user and download 100 tweets with that term. You'll need a corpus to train a classifier that can classify these tweets. The corpus has only tweet_ids, so we connect to the Twitter API and fetch the text for the tweets.

12:24

The tweets that we downloaded contain a lot of garbage; we'll clean them up using regular expressions and NLTK to get a nice list of words representing each tweet.

19:40

We'll train two different classifiers on our training data: Naive Bayes and SVM. The SVM will use SentiWordNet to assign weights to the elements of the feature vector.

Section 13: Decision Trees
17:00

What are Decision Trees and how are they useful? Decision Trees are a visual and intuitive way of predicting what the outcome will be given some inputs. They assign an order of importance to the input variables that helps you see clearly what really influences your outcome.

18:03

Recursive Partitioning is the most common strategy for growing Decision Trees from a training set.

Learn what makes one attribute be higher up in a Decision Tree compared to others.

18:51

We'll take a small detour into Information Theory to understand the concept of Information Gain. This concept forms the basis of how popular Decision Tree Learning algorithms work.
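The core calculation, in a minimal sketch (a made-up 10-example split): information gain is the drop in entropy from parent to children, and it is what popular tree learners maximize when choosing a split.

from math import log2

def entropy(labels):
    counts = {l: labels.count(l) for l in set(labels)}
    return -sum(c / len(labels) * log2(c / len(labels)) for c in counts.values())

parent = ["yes"] * 5 + ["no"] * 5
left, right = ["yes"] * 4 + ["no"], ["yes"] + ["no"] * 4

gain = entropy(parent) - (len(left) / len(parent)) * entropy(left) \
       - (len(right) / len(parent)) * entropy(right)
print(gain)   # > 0: the split reduces uncertainty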

07:50

ID3, C4.5, CART and CHAID are commonly used Decision Tree Learning algorithms. Learn what makes them different from each other. Pruning is a mechanism to avoid one of the risks inherent in Decision Trees, i.e. overfitting.

19:21

Build a decision tree to predict the survival of a passenger on the Titanic. This is a challenge posed by Kaggle (a competitive online data science community). We'll start off by exploring the data and transforming the data into feature vectors that can be fed to a Decision Tree Classifier.

14:16

We continue with the Kaggle challenge. Let's feed the training set to a Decision Tree Classifier and then parse the results.
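The shape of that step, in a minimal sketch (scikit-learn; the rows below are made up, not the Kaggle data):

from sklearn.tree import DecisionTreeClassifier

# features: [passenger class, is_female, age]
X_train = [[3, 0, 22], [1, 1, 38], [3, 1, 26], [1, 0, 54]]
y_train = [0, 1, 1, 0]                      # survived?

tree = DecisionTreeClassifier(max_depth=3)  # max_depth limits overfitting
tree.fit(X_train, y_train)
print(tree.predict([[2, 1, 30]]))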

13:00

We'll use our Decision Tree Classifier to predict the results on Kaggle's test data set. Submit the results to Kaggle and see where you stand!

Section 14: A Few Useful Things to Know About Overfitting
19:03

Overfitting is one of the biggest problems with Machine Learning - it's a trap that's easy to fall into and important to be aware of.

11:19

Overfitting is a difficult problem to solve - there is no way to avoid it completely, and by overcorrecting for it we fall into the opposite error of underfitting.

18:55

Cross Validation is a popular way to choose between models. There are a few different variants - K-Fold Cross validation is the most well known.
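A minimal K-Fold sketch (scikit-learn, using its built-in iris dataset): score the same model on k different train/test splits and average the results.

from sklearn.datasets import load_iris
from sklearn.model_selection import cross_val_score
from sklearn.tree import DecisionTreeClassifier

X, y = load_iris(return_X_y=True)
scores = cross_val_score(DecisionTreeClassifier(), X, y, cv=5)  # 5-fold CV
print(scores.mean(), scores.std())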

07:18

Overfitting occurs when the model becomes too complex. Regularization helps maintain the balance between accuracy and complexity of the model.
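A minimal sketch of what regularization does (scikit-learn, made-up nearly-collinear data): ridge regression penalizes large coefficients, trading a little training accuracy for a simpler, more stable model.

from sklearn.linear_model import LinearRegression, Ridge

X = [[1, 1], [2, 1.9], [3, 3.1], [4, 4]]   # nearly collinear features
y = [3, 6, 9, 12]

print(LinearRegression().fit(X, y).coef_)   # can be large and unstable
print(Ridge(alpha=1.0).fit(X, y).coef_)     # shrunk toward zero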

16:39

The crowd is indeed wiser than the individual - at least with ensemble learning. The Netflix competition showed that ensemble learning helps achieve tremendous improvements in accuracy - many learners perform better than just one.

18:02

Bagging, Boosting and Stacking are different techniques to help build an ensemble that rocks!

Section 15: Random Forests
12:28

Decision trees are cool but painstaking to build - because they really tend to overfit. Random Forests to the rescue! Use an ensemble of decision trees - all the benefits of decision trees, few of the pains!

20:03

Machine learning is not a one-shot process. You'll need to iterate and test multiple models to see what works better. Let's use cross validation to compare the accuracy of different models: Decision Trees vs Random Forests.
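The comparison loop, in a minimal sketch (scikit-learn's iris data standing in for whatever dataset you are iterating on):

from sklearn.datasets import load_iris
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score
from sklearn.tree import DecisionTreeClassifier

X, y = load_iris(return_X_y=True)
for model in (DecisionTreeClassifier(),
              RandomForestClassifier(n_estimators=100)):
    # Same data, same folds: a fair accuracy comparison.
    print(type(model).__name__, cross_val_score(model, X, y, cv=5).mean())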

Section 16: Recommendation Systems
16:43

Recommendations - good quality, personalized recommendations - are the holy grail for many online stores. What is the driving force behind this quest?

10:45

Recommendation Engines perform a variety of tasks - but the most important one is to find products that are most relevant to the user. Content based filtering, collaborative filtering and Association rules are common approaches to do so.

13:35

Content based filtering finds products relevant to a user - based on the content of the product (attributes, description, words etc).

10:26

Collaborative Filtering is a general term for the idea that users can help each other find the products they like. Today this is by far the most popular approach to recommendations.

17:51

Neighbourhood models - also known as memory-based approaches - rely on finding users similar to the active user. Similarity can be measured in many ways: Euclidean distance, Pearson correlation and cosine similarity are a few popular ones.
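All three measures on a pair of made-up rating vectors, in a minimal sketch (NumPy/SciPy):

import numpy as np
from scipy.spatial.distance import cosine, euclidean
from scipy.stats import pearsonr

alice = np.array([5, 3, 4, 4])
bob = np.array([4, 2, 4, 5])

print(euclidean(alice, bob))      # distance: lower = more similar
print(pearsonr(alice, bob)[0])    # correlation in [-1, 1]
print(1 - cosine(alice, bob))     # cosine similarity (scipy gives the distance)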

09:41

We continue with Neighbourhood models and see how to predict the rating of a user for a new product. Use this to find the top picks for a user.

20:13

Latent factor methods identify hidden factors that influence users from user history. Matrix Factorization is used to find these factors. This method was popularized for recommendations by the Netflix Prize winners. Many modern recommendation systems, including Netflix's, use some form of matrix factorization.

12:09

Matrix Factorization for Recommendations can be expressed as an optimization problem. Stochastic Gradient Descent or Alternating Least Squares can then be used to solve that problem.
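A minimal SGD sketch of the idea (made-up ratings, no train/test split or tuning): learn user and item factor vectors so their dot product approximates the observed ratings.

import numpy as np

ratings = {(0, 0): 5, (0, 1): 3, (1, 0): 4, (2, 1): 1}  # (user, item) -> rating
n_users, n_items, k, lr, reg = 3, 2, 2, 0.01, 0.1

rng = np.random.default_rng(0)
P = rng.normal(scale=0.1, size=(n_users, k))   # user factor vectors
Q = rng.normal(scale=0.1, size=(n_items, k))   # item factor vectors

for _ in range(2000):                          # SGD epochs
    for (u, i), r in ratings.items():
        err = r - P[u] @ Q[i]                  # prediction error
        pu = P[u].copy()
        P[u] += lr * (err * Q[i] - reg * P[u])
        Q[i] += lr * (err * pu - reg * Q[i])

print(np.round(P @ Q.T, 2))   # approximate ratings, including the unseen cells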

08:12

Gray Sheep, Synonymy, Data Sparsity, Shilling Attacks etc. are a few challenges that people face with Collaborative Filtering.

18:31

Association rules help you find recommendations for products that might complement the user's choices. The seminal paper on association rules introduced an efficient technique for finding these rules - The Apriori Algorithm

Section 17: Recommendation Systems in Python
18:05

NumPy arrays are pretty cool for performing mathematical computations on your data.
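A tiny taste of why - vectorized arithmetic with no explicit loops:

import numpy as np

a = np.array([[1, 2], [3, 4]])
print(a * 2)            # elementwise arithmetic
print(a @ a)            # matrix multiplication
print(a.mean(axis=0))   # column means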

14:19

We continue with a basic tutorial on NumPy and SciPy.

16:45

MovieLens is a famous dataset of movie ratings. We'll use Pandas to read and play around with the data.
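A minimal sketch of that exploration; the file name and columns follow the classic MovieLens 100k "u.data" layout, so adjust the path to wherever you saved your download:

import pandas as pd

ratings = pd.read_csv("u.data", sep="\t",
                      names=["user_id", "movie_id", "rating", "timestamp"])

print(ratings.head())
print(ratings.groupby("movie_id")["rating"].mean().nlargest(5))  # top average ratings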

06:18

We continue playing with the MovieLens data - let's find the top-n rated movies for a user.

18:10

Let's find some recommendations now. We'll use neighbour-based collaborative filtering to find the users most similar to a given user, and then predict that user's rating for a movie.

06:16

We've predicted the user's rating for all movies. Let's pick the top recommendations for a user.

17:55

Matrix Factorization was popularized for recommendations during the Netflix challenge. Let's implement this on the MovieLens data and find some recommendations!

09:50

The Apriori algorithm was introduced in a seminal paper that described how to mine large datasets for association rules efficiently. Let's work through the algorithm in Python.
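A compact sketch of the spirit of Apriori on toy baskets (for brevity this enumerates candidates of each size rather than pruning them via frequent subsets, which is the real algorithm's efficiency trick):

from itertools import combinations

baskets = [{"milk", "bread"}, {"milk", "bread", "eggs"},
           {"bread", "eggs"}, {"milk", "eggs"}]
min_support = 2   # an itemset is "frequent" if it appears in >= 2 baskets

def frequent(size):
    items = {i for b in baskets for i in b}
    return {c for c in combinations(sorted(items), size)
            if sum(set(c) <= b for b in baskets) >= min_support}

print(frequent(1))
print(frequent(2))   # e.g. ('bread', 'milk') appears in 2 baskets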

Section 18: A Taste of Deep Learning and Computer Vision
18:08

A quick intro to Computer Vision, and one of the most popular starter problems - identifying handwritten digits using the MNIST database. We also talk about feature extraction from images.
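As a taste of what digit classification looks like in code, a minimal sketch using scikit-learn's small built-in digits dataset (8x8 images - a scaled-down stand-in for the full MNIST database):

from sklearn.datasets import load_digits
from sklearn.model_selection import train_test_split
from sklearn.svm import SVC

X, y = load_digits(return_X_y=True)   # each row is a flattened 8x8 image
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

clf = SVC(gamma=0.001).fit(X_train, y_train)
print(clf.score(X_test, y_test))      # typically around 0.99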


Instructor Biography

Loony Corn, a 4-person team; ex-Google; Stanford, IIM Ahmedabad, IIT

Loonycorn is us, Janani Ravi, Vitthal Srinivasan, Swetha Kolalapudi and Navdeep Singh. Between the four of us, we have studied at Stanford, IIM Ahmedabad, the IITs and have spent years (decades, actually) working in tech, in the Bay Area, New York, Singapore and Bangalore.

Janani: 7 years at Google (New York, Singapore); Studied at Stanford; also worked at Flipkart and Microsoft

Vitthal: Also Google (Singapore) and studied at Stanford; Flipkart, Credit Suisse and INSEAD too

Swetha: Early Flipkart employee, IIM Ahmedabad and IIT Madras alum

Navdeep: longtime Flipkart employee too, and IIT Guwahati alum

We think we might have hit upon a neat way of teaching complicated tech courses in a funny, practical, engaging way, which is why we are so excited to be here on Udemy!

We hope you will try our offerings, and think you'll like them :-)

Ready to start learning?
Take This Course