Data Science and Machine Learning with Python - Hands On!

Become a data scientist in the tech industry! Comprehensive data mining and machine learning course with Python & Spark.
4.5 (2,363 ratings)
15,830 students enrolled Bestselling in Machine Learning
$19
$120
84% off
Take This Course
  • Lectures 71
  • Length 9 hours
  • Skill Level All Levels
  • Languages English
  • Includes Lifetime access
    30 day money back guarantee!
    Available on iOS and Android
    Certificate of Completion

How taking a course works

Discover

Find online courses made by experts from around the world.

Learn

Take your courses with you and learn anywhere, anytime.

Master

Learn and practice real-world skills and achieve your goals.

About This Course

Published 2/2016 English

Course Description

Data Scientists enjoy one of the top-paying jobs, with an average salary of $120,000 according to Glassdoor and Indeed. That's just the average! And it's not just about money - it's interesting work too!

If you've got some programming or scripting experience, this course will teach you the techniques used by real data scientists in the tech industry - and prepare you for a move into this hot career path. This comprehensive course includes 68 lectures spanning almost 9 hours of video, and most topics include hands-on Python code examples you can use for reference and for practice. I’ll draw on my 9 years of experience at Amazon and IMDb to guide you through what matters, and what doesn’t.

The topics in this course come from an analysis of real requirements in data scientist job listings from the biggest tech employers. We'll cover the machine learning and data mining techniques real employers are looking for, including:

  • Regression analysis
  • K-Means Clustering
  • Principal Component Analysis
  • Train/Test and cross validation
  • Bayesian Methods
  • Decision Trees and Random Forests
  • Multivariate Regression
  • Multi-Level Models
  • Support Vector Machines
  • Reinforcement Learning
  • Collaborative Filtering
  • K-Nearest Neighbor
  • Bias/Variance Tradeoff
  • Ensemble Learning
  • Term Frequency / Inverse Document Frequency
  • Experimental Design and A/B Tests


...and much more! There's also an entire section on machine learning with Apache Spark, which lets you scale up these techniques to "big data" analyzed on a computing cluster.

If you're new to Python, don't worry - the course starts with a crash course. If you've done some programming before, you should pick it up quickly. This course shows you how to get set up on Microsoft Windows-based PCs; the sample code will also run on MacOS or Linux desktop systems, but I can't provide OS-specific support for them.

Each concept is introduced in plain English, avoiding confusing mathematical notation and jargon. It’s then demonstrated using Python code you can experiment with and build upon, along with notes you can keep for future reference.

If you’re a programmer looking to switch into an exciting new career track, or a data analyst looking to make the transition into the tech industry – this course will teach you the basic techniques used by real-world industry data scientists. I think you'll enjoy it!




What are the requirements?

  • You'll need a desktop computer (Windows, Mac, or Linux) capable of running Enthought Canopy 1.6.2 or newer. The course will walk you through installing the necessary free software.
  • Some prior coding or scripting experience is required.
  • At least high school level math skills will be required.
  • This course walks through getting set up on a Microsoft Windows-based desktop PC. While the code in this course will run on other operating systems, we cannot provide OS-specific support for them.

What am I going to get from this course?

  • Extract meaning from large data sets using a wide variety of machine learning, data mining, and data science techniques with the Python programming language.
  • Perform machine learning on "big data" using Apache Spark and its MLLib package.
  • Design experiments and interpret the results of A/B tests
  • Visualize clustering and regression analysis in Python using matplotlib
  • Produce automated recommendations of products or content with collaborative filtering techniques
  • Apply best practices in cleaning and preparing your data prior to analysis

What is the target audience?

  • Software developers or programmers who want to transition into the lucrative data science career path will learn a lot from this course.
  • Data analysts in the finance or other non-tech industries who want to transition into the tech industry can use this course to learn how to analyze data using code instead of tools. But, you'll need some prior experience in coding or scripting to be successful.
  • If you have no prior coding or scripting experience, you should NOT take this course - yet. Go take an introductory Python course first.

What do you get with this course?

Not for you? No problem.
30 day money back guarantee.

Forever yours.
Lifetime access.

Learn on the go.
Desktop, iOS and Android.

Get rewarded.
Certificate of completion.

Curriculum

Section 1: Getting Started
02:44

What to expect in this course, who it's for, and the general format we'll follow.

02:37

We'll show you where to download the scripts and sample data used in this course, and where to put it.

06:19

We'll install our Python 2.7 environment, Enthought Canopy, along with the Python libraries and packages we'll need for this course. When we're done, we'll do a quick test by running a real Python notebook!

15:58

In a crash course on Python and what's different about it, we'll cover the importance of whitespace in Python scripts, how to import Python modules, and Python data structures including lists, tuples, and dictionaries.

09:41

In part 2 of our Python crash course, we'll cover functions, boolean expressions, and looping constructs in Python.

03:55

This course presents Python examples in the form of iPython Notebooks, but we'll cover the other ways to run Python code: interactively from the Python shell, or running stand-alone Python script files.

Section 2: Statistics and Probability Refresher, and Python Practice
06:58

We cover the differences between continuous and discrete numerical data, categorical data, and ordinal data.

05:26

A refresher on mean, median, and mode - and when it's appropriate to use each.

08:30

We'll use mean, median, and mode in some real Python code, and set you loose to write some code of your own.
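
For a flavor of what that code looks like, here's a minimal sketch using NumPy and SciPy; the income and age data are fabricated purely for illustration:

```python
import numpy as np
from scipy import stats

# Fabricated income data centered around 27,000, plus one extreme outlier
incomes = np.random.normal(27000, 15000, 10000)
incomes = np.append(incomes, [1000000000])  # a single billionaire skews the mean

print(np.mean(incomes))    # pulled far upward by the outlier
print(np.median(incomes))  # robust to the outlier; stays near 27,000

ages = np.random.randint(18, high=90, size=500)
print(stats.mode(ages))    # the most common age in the sample
```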

11:12

We'll cover how to compute the variation and standard deviation of a data distribution, and how to do it using some examples in Python.
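
As a quick sketch of the idea (the distribution parameters here are made up):

```python
import numpy as np

# Fabricated data: a normal distribution centered at 100 with sigma of about 20
data = np.random.normal(100.0, 20.0, 10000)

print(data.std())   # standard deviation: roughly 20
print(data.var())   # variance: the standard deviation squared, roughly 400
```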

03:27

Introducing the concepts of probability density functions (PDFs) and probability mass functions (PMFs).

07:45

We'll show examples of continuous, normal, exponential, binomial, and Poisson distributions using iPython.

12:33

We'll look at some examples of percentiles and quartiles in data distributions, and then move on to the concept of the first four moments of data sets.

13:46

An overview of different tricks in matplotlib for creating graphs of your data, using different graph types and styles.

11:31

The concepts of covariance and correlation, which are used to look for relationships between different sets of attributes, along with some examples in Python.

11:03

We cover the concepts and equations behind conditional probability, and use it to try and find a relationship between age and purchases in some fabricated data using Python.

02:18

Here we'll go over my solution to the exercise I challenged you with in the previous lecture - changing our fabricated data to have no real correlation between ages and purchases, and seeing if you can detect that using conditional probability.

05:23

An overview of Bayes' Theorem, and an example of using it to uncover misleading statistics surrounding the accuracy of drug testing.
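
To preview the kind of calculation involved, here's a small worked example in Python; the drug-usage rate and test accuracy figures are assumed numbers for illustration, not data from the lecture:

```python
# Bayes' Theorem: P(user | positive) = P(positive | user) * P(user) / P(positive)
# Assumed numbers: 0.4% of people use the drug; the test is 99% accurate either way.
p_user = 0.004
p_pos_given_user = 0.99
p_pos_given_nonuser = 0.01

p_pos = p_pos_given_user * p_user + p_pos_given_nonuser * (1 - p_user)
p_user_given_pos = p_pos_given_user * p_user / p_pos

print(p_user_given_pos)  # ~0.28 - a positive result is still more likely a false alarm
```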

Section 3: Predictive Models
11:01

We introduce the concept of linear regression and how it works, and use it to fit a line to some sample data using Python.
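
A minimal sketch of fitting a line with SciPy; the page-speed and purchase numbers are fabricated for illustration:

```python
import numpy as np
from scipy import stats

# Fabricated data: page load time vs. purchase amount
page_speeds = np.random.normal(3.0, 1.0, 1000)
purchase_amount = 100 - (page_speeds + np.random.normal(0, 0.1, 1000)) * 3

slope, intercept, r_value, p_value, std_err = stats.linregress(page_speeds, purchase_amount)
print(r_value ** 2)  # r-squared close to 1 means the line fits the data well

predict = lambda x: slope * x + intercept
print(predict(3.0))  # predicted purchase amount for a 3-second page load
```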

08:04

We cover the concepts of polynomial regression, and use it to fit a more complex relationship between page speed and purchases in Python.
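
For example, a short sketch of a polynomial fit with NumPy; the data and the choice of a 4th-degree polynomial are illustrative assumptions:

```python
import numpy as np

# Fabricated page speed / purchase data with a nonlinear relationship
np.random.seed(2)
page_speeds = np.random.normal(3.0, 1.0, 1000)
purchases = np.random.normal(50.0, 10.0, 1000) / page_speeds

# Fit a 4th-degree polynomial; too high a degree risks overfitting
p4 = np.poly1d(np.polyfit(page_speeds, purchases, 4))
print(p4(3.0))  # predicted purchases at a 3-second page load
```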

08:06

Multivariate models let us predict some value given more than one attribute. We cover the concept, then use it to build a model in Python to predict car prices based on their age, mileage, and model. We'll also get our first look at the pandas library in Python.
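
A rough sketch of the idea with pandas and scikit-learn; the file name, column names, and candidate car are all hypothetical placeholders:

```python
import pandas as pd
from sklearn.linear_model import LinearRegression

# Assumes a CSV of used-car listings with these columns (names are illustrative)
df = pd.read_csv('car_prices.csv')  # columns: Price, Mileage, Age, Doors

X = df[['Mileage', 'Age', 'Doors']]  # predict from more than one attribute
y = df['Price']

model = LinearRegression().fit(X, y)
print(model.coef_)                     # how much each feature moves the price
print(model.predict([[45000, 5, 4]]))  # predicted price for one hypothetical car
```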

04:36

We'll just cover the concept of multi-level modeling, as it is a very advanced topic. But you'll get the ideas and challenges behind it.

Section 4: Machine Learning with Python
08:57

The concepts of supervised and unsupervised machine learning, and how to evaluate the ability of a machine learning model to predict new values using the train/test technique.

05:47

We'll apply train/test to a real example using Python.
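
Something along these lines, using scikit-learn's train_test_split on fabricated data (the split ratio and data are illustrative):

```python
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.linear_model import LinearRegression

# Fabricated data: 100 points with a roughly linear relationship
X = np.random.normal(3.0, 1.0, 100).reshape(-1, 1)
y = 100 - X.ravel() * 3 + np.random.normal(0, 1, 100)

# Hold out 40% of the data to measure how well the model generalizes
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.4)

model = LinearRegression().fit(X_train, y_train)
print(model.score(X_test, y_test))  # r-squared measured only on unseen test data
```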

03:59

We'll introduce the concept of Naive Bayes and how we might apply it to the problem of building a spam classifier.

08:05

We'll actually write a working spam classifier, using real email training data and a surprisingly small amount of code!
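
To give you a feel for how little code it takes, here's a toy sketch with scikit-learn; the four inline emails stand in for the real training data used in the lecture:

```python
import pandas as pd
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.naive_bayes import MultinomialNB

# Tiny, made-up labeled data set for illustration only
data = pd.DataFrame({
    'message': ['Free money now!!!', 'Hi, are we still meeting tomorrow?',
                'Claim your free prize', 'Project update attached'],
    'class':   ['spam', 'ham', 'spam', 'ham']})

vectorizer = CountVectorizer()                       # turn each email into word counts
counts = vectorizer.fit_transform(data['message'])

classifier = MultinomialNB().fit(counts, data['class'])
test = vectorizer.transform(['Free prize money', 'See you at the meeting'])
print(classifier.predict(test))                      # expect ['spam', 'ham']
```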

07:23

K-Means is a way to identify things that are similar to each other. It's a case of unsupervised learning, which could result in clusters you never expected!

05:14

We'll apply K-Means clustering to find interesting groupings of people based on their age and income.
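
A minimal sketch of K-Means with scikit-learn; the age/income data and the choice of 5 clusters are made up for illustration:

```python
import numpy as np
from sklearn.cluster import KMeans
from sklearn.preprocessing import scale

# Fabricated age/income pairs; in practice you'd load real records
data = np.column_stack([np.random.uniform(20, 70, 100),         # age
                        np.random.uniform(20000, 200000, 100)])  # income

# Scale the features so income doesn't dominate the distance calculation
model = KMeans(n_clusters=5).fit(scale(data))
print(model.labels_)  # cluster assignment for each person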

03:09

Entropy is a measure of the disorder in a data set - we'll learn what that means, and how to compute it mathematically.
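
For reference, the math boils down to a few lines of Python (this is a generic Shannon-entropy sketch, not code from the lecture):

```python
import numpy as np

def entropy(class_probabilities):
    """Shannon entropy in bits: -sum(p * log2(p)) over each class probability."""
    probabilities = np.array([p for p in class_probabilities if p > 0])
    return -np.sum(probabilities * np.log2(probabilities))

print(entropy([0.5, 0.5]))   # 1.0 bit - maximum disorder for two classes
print(entropy([1.0]))        # 0.0 - a pure set has no disorder
print(entropy([0.9, 0.1]))   # ~0.47 - mostly one class, low disorder
```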

Article

In order to run the next lecture on decision trees, you'll need some software called "GraphViz" installed. Here's how.

08:43

Decision trees can automatically create a flow chart for making some decision, based on machine learning! Let's learn how they work.

09:47

We'll create a decision tree and an entire "random forest" to predict hiring decisions for job candidates.
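
Roughly, the scikit-learn code looks like this; the file name, column names, and the candidate being scored are hypothetical stand-ins for the course's data:

```python
import pandas as pd
from sklearn import tree
from sklearn.ensemble import RandomForestClassifier

# Assumes a CSV of past hiring decisions (file and column names are illustrative)
df = pd.read_csv('PastHires.csv')
yes_no = {'Y': 1, 'N': 0}
for col in ['Employed?', 'Top-tier school', 'Interned', 'Hired']:
    df[col] = df[col].map(yes_no)       # convert Y/N answers to numbers

features = ['Years Experience', 'Employed?', 'Top-tier school', 'Interned']
X, y = df[features], df['Hired']

clf = tree.DecisionTreeClassifier().fit(X, y)               # one interpretable tree
forest = RandomForestClassifier(n_estimators=10).fit(X, y)  # many trees, voting
print(forest.predict([[10, 1, 0, 0]]))  # hypothetical candidate: 10 years exp, employed
```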

05:59

Random Forests are an example of ensemble learning; we'll cover other techniques for combining the results of many models to create a better result than any one could produce on its own.

04:27

Support Vector Machines are an advanced technique for classifying data that has multiple features. They treat those features as dimensions, and partition this higher-dimensional space using "support vectors."

05:36

We'll use scikit-learn to easily classify people using a C-Support Vector Classifier.
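
The core of it is just a few lines; this sketch uses fabricated age/income data and a toy label, purely to show the SVC call:

```python
import numpy as np
from sklearn import svm
from sklearn.preprocessing import scale

# Fabricated age/income data with a made-up class label to predict
X = np.column_stack([np.random.uniform(20, 70, 200),
                     np.random.uniform(20000, 200000, 200)])
y = (X[:, 1] > 100000).astype(int)   # toy label: "high income" or not

clf = svm.SVC(kernel='linear', C=1.0).fit(scale(X), y)  # C-Support Vector Classifier
print(clf.predict(scale(X)[:5]))     # predicted classes for the first few people
```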

Section 5: Recommender Systems
07:57

One way to recommend items is to look for other people similar to you based on their behavior, and recommend stuff they liked that you haven't seen yet.

08:15

The shortcomings of user-based collaborative filtering can be solved by flipping it on its head, and looking at relationships between items instead of relationships between people.

09:08

We'll use the real-world MovieLens data set of movie ratings to take a first crack at finding movies that are similar to each other, which is the first step in item-based collaborative filtering.
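
The general shape of that first step looks like this pandas sketch; the file name and column names are assumptions, since the MovieLens files need some merging and renaming first:

```python
import pandas as pd

# Assumes MovieLens ratings already joined with titles into one table:
# columns user_id, movie_title, rating (exact layout varies by MovieLens version)
ratings = pd.read_csv('movielens_ratings.csv')

# Rows = users, columns = movies, values = ratings (NaN where a user hasn't rated)
movie_ratings = ratings.pivot_table(index='user_id', columns='movie_title',
                                    values='rating')

star_wars = movie_ratings['Star Wars (1977)']
similar = movie_ratings.corrwith(star_wars)   # correlate every movie with Star Wars
print(similar.dropna().sort_values(ascending=False).head(10))
```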

07:59

Our initial results for movies similar to Star Wars weren't very good. Let's figure out why, and fix it.

10:22

We'll implement a complete item-based collaborative filtering system that uses real-world movie ratings data to recommend movies to any user.

05:29

As a student exercise, try some of my ideas - or some ideas of your own - to make the results of our item-based collaborative filter even better.

Section 6: More Data Mining and Machine Learning Techniques
03:44

KNN is a very simple supervised machine learning technique; we'll quickly cover the concept here.

12:29

We'll use the simple KNN technique and apply it to a more complicated problem: finding the most similar movies to a given movie just given its genre and rating information, and then using those "nearest neighbors" to predict the movie's rating.
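
At its core, KNN is just "rank everything by distance and keep the closest k"; here's a tiny sketch with made-up movie feature vectors to show that idea:

```python
import numpy as np

# Toy 2-dimensional feature vectors (e.g. normalized popularity and average rating);
# the movies and numbers are purely illustrative
movies = {'Movie A': np.array([0.8, 0.9]),
          'Movie B': np.array([0.7, 0.85]),
          'Movie C': np.array([0.1, 0.3])}

def nearest_neighbors(target, k=2):
    # Rank every other movie by Euclidean distance to the target's feature vector
    distances = [(name, np.linalg.norm(vec - movies[target]))
                 for name, vec in movies.items() if name != target]
    return sorted(distances, key=lambda pair: pair[1])[:k]

print(nearest_neighbors('Movie A'))   # Movie B should come out closest
```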

05:44

Data that includes many features or many different vectors can be thought of as having many dimensions. Often it's useful to reduce those dimensions down to something more easily visualized, for compression, or to just distill the most important information from a data set (that is, the information that contributes most to the data's variance). Principal Component Analysis and Singular Value Decomposition do that.

09:05

We'll use scikit-learn's built-in PCA system to reduce the 4-dimensional Iris data set down to 2 dimensions, while still preserving most of its variance.
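
A minimal sketch of that reduction, using the Iris data set that ships with scikit-learn:

```python
from sklearn.datasets import load_iris
from sklearn.decomposition import PCA

iris = load_iris()                         # 4 features per flower
pca = PCA(n_components=2)                  # project down to 2 dimensions
projected = pca.fit_transform(iris.data)

print(projected.shape)                     # (150, 2)
print(pca.explained_variance_ratio_)       # variance preserved by each component
print(sum(pca.explained_variance_ratio_))  # most of the variance survives
```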

09:05

Cloud-based data storage and analysis systems like Hadoop, Hive, Spark, and MapReduce are turning the field of data warehousing on its head. Instead of extracting, transforming, and then loading data into a data warehouse, the transformation step is now more efficiently done using a cluster after it's already been loaded. With computing and storage resources so cheap, this new approach now makes sense.

12:44

We'll describe the concept of reinforcement learning - including Markov Decision Processes, Q-Learning, and Dynamic Programming - all using a simple example of developing an intelligent Pac-Man.

Section 7: Dealing with Real-World Data
06:15

Bias and Variance both contribute to overall error; understand these components of error and how they relate to each other.

10:55

We'll introduce the concept of K-Fold Cross-Validation to make train/test even more robust, and apply it to a real model.
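
To show the shape of it, here's a sketch using scikit-learn's cross_val_score on the built-in Iris data; the model and the choice of 5 folds are illustrative:

```python
from sklearn.datasets import load_iris
from sklearn.model_selection import cross_val_score
from sklearn import svm

iris = load_iris()
clf = svm.SVC(kernel='linear', C=1.0)

# Split into 5 folds; train on 4 and test on the held-out fold, 5 times over
scores = cross_val_score(clf, iris.data, iris.target, cv=5)
print(scores)         # one accuracy score per fold
print(scores.mean())  # a more robust estimate than a single train/test split
```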

07:10

Cleaning your raw input data is often the most important, and time-consuming, part of your job as a data scientist!

10:56

In this example, we'll try to find the top-viewed web pages on a web site - and see how much data pollution makes that into a very difficult task!

03:22

A brief reminder: some models require input data to be normalized, or within the same range as each other. Always read the documentation on the techniques you are using.

07:00

A review of how outliers can affect your results, and how to identify and deal with them in a principled manner.
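
One simple, principled approach is to filter points that fall too many standard deviations from the median; this sketch uses fabricated income data, and the 2-sigma cutoff is an illustrative choice:

```python
import numpy as np

# Fabricated incomes with one extreme outlier
incomes = np.append(np.random.normal(27000, 15000, 10000), [1000000000])

def reject_outliers(data, num_sigmas=2.0):
    # Keep only points within a chosen number of standard deviations of the median
    median, sigma = np.median(data), np.std(data)
    return data[np.abs(data - median) < num_sigmas * sigma]

filtered = reject_outliers(incomes)
print(np.mean(incomes))    # distorted by the billionaire
print(np.mean(filtered))   # back near the true center of ~27,000
```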

Section 8: Apache Spark: Machine Learning on Big Data
07:02

We'll present an overview of the steps needed to install Apache Spark on your desktop in standalone mode, and get started by getting a Java Development Kit installed on your system.

13:29

We'll install Spark itself, along with all the associated environment variables and ancillary files and settings needed for it to function properly.

09:10

A high-level overview of Apache Spark, what it is, and how it works.

11:42

We'll go in more depth on the core of Spark - the RDD object, and what you can do with it.
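
As a taste of what working with an RDD looks like, here's a minimal sketch assuming a local standalone Spark installation (the data and app name are made up):

```python
from pyspark import SparkContext

sc = SparkContext('local', 'RDDExample')          # connect to a local Spark instance

numbers = sc.parallelize([1, 2, 3, 4, 5])         # distribute a list as an RDD
squares = numbers.map(lambda x: x * x)            # transformation: evaluated lazily
evens = squares.filter(lambda x: x % 2 == 0)      # another transformation

print(evens.collect())   # action: triggers the computation, returns [4, 16]
sc.stop()
```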

05:09

A quick overview of MLLib's capabilities, and the new data types it introduces to Spark.

16:00

We'll take the same problem from our earlier Decision Tree lecture - predicting hiring decisions for job candidates - but implement it using Spark and MLLib!

11:07

We'll take the same example of clustering people by age and income from our earlier K-Means lecture - but solve it in Spark!

06:44

We'll introduce the concept of TF-IDF (Term Frequency / Inverse Document Frequency) and how it applies to search problems, in preparation for using it with MLLib.

08:11

Let's use TF-IDF, Spark, and MLLib to create a rudimentary search engine for real Wikipedia pages!

07:57

Spark 2.0 introduced a new API for MLLib based on DataFrame objects; we'll look at an example of using this to create and use a linear regression model.

Section 9: Experimental Design
08:23

Running controlled experiments on your website usually involves a technique called the A/B test. We'll learn how A/B tests work.

05:59

How to determine the significance of an A/B test's results, and measure the probability of the results being just from random chance, using T-Tests, the T-statistic, and the P-value.

06:04

We'll fabricate A/B test data from several scenarios, and measure the T-statistic and P-Value for each using Python.
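
For one such scenario, a sketch of the measurement with SciPy; the order-value distributions here are made-up numbers for illustration:

```python
import numpy as np
from scipy import stats

# Fabricated A/B test data: group B's orders are slightly higher on average
A = np.random.normal(25.0, 5.0, 10000)
B = np.random.normal(26.0, 5.0, 10000)

t_statistic, p_value = stats.ttest_ind(A, B)
print(t_statistic)  # large magnitude suggests a real difference between the groups
print(p_value)      # small p-value: unlikely the difference is just random chance
```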

03:24

Some A/B tests just don't affect customer behavior one way or another. How do you know how long to let an experiment run for before giving up?

09:26

There are many limitations associated with running short-term A/B tests - novelty effects, seasonal effects, and more can lead you to the wrong decisions. We'll discuss the forces that may result in misleading A/B test results so you can watch out for them.

Section 10: You made it!
02:59

Where to go from here - recommendations for books, websites, and career advice to get you into the data science job you want.

Article

If you enjoyed this course, please leave a star rating for it!

Bonus Lecture: Discounts on my Spark and MapReduce courses!
01:28


Instructor Biography

Frank Kane, Data Miner and Software Engineer

Frank Kane spent 9 years at Amazon and IMDb, developing and managing the technology that automatically delivers product and movie recommendations to hundreds of millions of customers, all the time. Frank holds 17 issued patents in the fields of distributed computing, data mining, and machine learning. In 2012, Frank left to start his own successful company, Sundog Software, which focuses on virtual reality environment technology, and teaching others about big data analysis.

Ready to start learning?
Take This Course