Artificial Intelligence and Machine Learning Fundamentals

Learn to develop real-world applications powered by the latest advances in intelligent systems
4.5 (26 ratings)
Course Ratings are calculated from individual students’ ratings and a variety of other signals, like age of rating and reliability, to ensure that they reflect course quality fairly and accurately.
201 students enrolled
Created by Packt Publishing
Last updated 2/2020
English
English [Auto]
Current price: $139.99 Original price: $199.99 Discount: 30% off
30-Day Money-Back Guarantee
This course includes
  • 8 hours on-demand video
  • Full lifetime access
  • Access on mobile and TV
  • Certificate of Completion
What you'll learn
  • Understand the importance, principles, and fields of AI
  • Learn to implement basic artificial intelligence concepts with Python
  • Apply regression and classification concepts to real-world problems
  • Perform predictive analysis using decision trees and random forests
  • Perform clustering using the k-means and mean shift algorithms
  • Understand the fundamentals of deep learning via practical examples
Course content
53 lectures • 07:47:57 total length
+ Principles of Artificial Intelligence
12 lectures 01:45:32

Let's begin the course with an overview of the content we will cover.

Preview 10:40

Before you start this course, you will need to have Python 3.6 and Anaconda installed. You will find the steps to install them in the coming videos.

Installation and Setup
04:30

Let us begin with the first lesson and understand what we are going to cover in our learning journey.

Lesson Overview
03:21

Before discussing different AI techniques and algorithms, we will look at the fundamentals of artificial intelligence and machine learning and go over a few basic definitions. Let us learn more about it with the following topics:

· What Is Artificial Intelligence (AI)?

· Command-Line Shells

· Command-Line Terminology

Introduction to AI and Machine Learning
08:13

Let us look at the different ways in which AI solves real-world problems. Here are the topics that we will cover now:

· How Does AI Solve Real World Problems?

· Diversity of Disciplines

How Does AI Solve Real World Problems?
14:22

Now that we know what Artificial Intelligence is, let's move on and investigate different fields in which AI is applied. Let us learn more about it with the following topics:

· Simulation of Human Behavior

· Simulating Intelligence – The Turing Test

Fields and Applications of Artificial Intelligence
08:30

In the previous videos, we discovered the fundamentals of artificial intelligence. One of the core tasks for artificial intelligence is learning. Let us learn more about it with the following topics:

· Intelligent Agents

· Classification and Prediction

· Learning Models

AI Tools and Learning Models
06:45

In order to put the basic AI concepts into practice, we need a programming language that supports artificial intelligence. In this course, we have chosen Python. Let us learn more about it with the following topics:

· What Is Python?

· Why is Python Dominant in Machine Learning, Data Science, and AI?

· Anaconda in Python

· Python Libraries for Artificial Intelligence

The Role of Python in Artificial Intelligence
14:17

The NumPy library will play a major role in this course, so it is worth exploring it further. Here are the topics that we will cover now:

· A Brief Introduction to the NumPy Library

· Matrix Operations Using NumPy

A Brief Introduction to the NumPy Library
06:58

An AI game player is nothing but an intelligent agent with a clear goal: to win the game and defeat all other players. Artificial intelligence experiments have achieved surprising results when it comes to games. Today, no human can defeat the strongest chess AIs. Here are the topics that we will cover now:

· Intelligent Agents in Games

· Combinatoric Explosion: Chess

· Breadth First Search and Depth First Search

Python for Game AI
11:52

In AI search, the root of the tree is the starting state. We traverse from this state by generating successor nodes of the search tree. Search techniques differ regarding which order we visit these successor nodes in. Here are the topics that we will cover now:

· Breadth First Search and Depth First Search

· Exploring the State Space of a Game

· Estimating the Number of Possible States in Tic-Tac-Toe Game

· Creating an AI Randomly

Breadth First Search and Depth First Search
13:58
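To give a flavor of what this lecture covers, here is a minimal, illustrative sketch of breadth first and depth first traversal in pure Python. The graph and its node names are hypothetical examples, not the course's own code:

```python
from collections import deque

def bfs(graph, start):
    """Visit nodes breadth-first, returning them in visit order."""
    visited = [start]
    queue = deque([start])
    while queue:
        node = queue.popleft()
        for neighbor in graph.get(node, []):
            if neighbor not in visited:
                visited.append(neighbor)
                queue.append(neighbor)
    return visited

def dfs(graph, start, visited=None):
    """Visit nodes depth-first using recursion."""
    if visited is None:
        visited = []
    visited.append(start)
    for neighbor in graph.get(start, []):
        if neighbor not in visited:
            dfs(graph, neighbor, visited)
    return visited

# A small hypothetical state graph: each key lists its successor states
graph = {'A': ['B', 'C'], 'B': ['D'], 'C': ['D'], 'D': []}
```

The only difference between the two searches is the order in which successor nodes are expanded: BFS uses a queue, while DFS dives into the first unvisited successor.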

Summarize your learning from this lesson.

Lesson Summary
02:06

Take the assessment to test your understanding of this lesson.

Test Your Knowledge
8 questions
+ AI with Search Techniques and Games
8 lectures 01:20:15

Let us begin with the second lesson and understand what we are going to cover in our learning journey.

Lesson Overview
11:04

In this video, we will formalize informed search techniques by defining and applying heuristics to guide our search. Let us learn more about it with the following topics:

· Uninformed and Informed Search

· Creating Heuristics

· Creating Heuristics - Euclidean Distance

· Creating Heuristics - Manhattan Distance

· Admissible and Non-Admissible Heuristics

· Heuristic Evaluation

· Heuristic 1: Simple Evaluation of the Endgame

· Heuristic 2: Utility of a Move

Heuristics
12:49
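The two distance-based heuristics named above can be stated in a few lines each. This is an illustrative sketch, not the course's code:

```python
import math

def euclidean_distance(a, b):
    # Straight-line distance between two points on the plane
    return math.sqrt((a[0] - b[0]) ** 2 + (a[1] - b[1]) ** 2)

def manhattan_distance(a, b):
    # Grid distance: the sum of horizontal and vertical steps
    return abs(a[0] - b[0]) + abs(a[1] - b[1])
```

On a grid that only allows horizontal and vertical moves, the Manhattan distance never overestimates the true cost, which is what makes it an admissible heuristic there.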

In this video, we will perform static evaluation on the Tic-Tac-Toe game using heuristic function. Here are the topics that we will cover now:

· Using Heuristics for an Informed Search

· Types of Heuristics

Tic-Tac-Toe
10:04

In the first two lessons, we learned how to define an intelligent agent, and how to create a heuristic that guides the agent toward a desired state. We learned that this was not perfect, because at times we ignored a few winning states in favor of a few losing states. Let us learn more about it with the following topics:

· Pathfinding with the A* Algorithm

· Finding the Shortest Path to Reach a Goal

· Finding the Shortest Path Using BFS

Pathfinding with the A* Algorithm
07:35

A* is a complete and optimal heuristic search algorithm that finds the shortest possible path between the current game state and the winning state. Let us learn more about it with the following topics:

· Introducing the A* Algorithm

· A* Search in Practice Using the simpleai Library

Introducing the A* Algorithm
19:29
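As a taste of the algorithm this lecture introduces, here is a minimal A* pathfinder on a small grid, using the Manhattan distance as its heuristic. The grid, coordinates, and function names are hypothetical; the course itself uses the simpleai library:

```python
import heapq

def a_star(grid, start, goal):
    """Shortest path on a grid of 0 (free) and 1 (wall) cells,
    guided by the Manhattan distance heuristic."""
    rows, cols = len(grid), len(grid[0])

    def h(pos):
        # Admissible heuristic: grid distance to the goal
        return abs(pos[0] - goal[0]) + abs(pos[1] - goal[1])

    open_heap = [(h(start), 0, start, [start])]  # (f, g, position, path)
    best_cost = {start: 0}
    while open_heap:
        f, g, pos, path = heapq.heappop(open_heap)
        if pos == goal:
            return path
        for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            r, c = pos[0] + dr, pos[1] + dc
            if 0 <= r < rows and 0 <= c < cols and grid[r][c] == 0:
                if g + 1 < best_cost.get((r, c), float('inf')):
                    best_cost[(r, c)] = g + 1
                    heapq.heappush(
                        open_heap,
                        (g + 1 + h((r, c)), g + 1, (r, c), path + [(r, c)]),
                    )
    return None  # no path exists

# A 3x3 grid with a wall forcing a detour
grid = [[0, 0, 0],
        [1, 1, 0],
        [0, 0, 0]]
path = a_star(grid, (0, 0), (2, 0))
```

Because the heuristic is admissible, the first time the goal is popped from the priority queue, the path found is guaranteed to be the shortest one.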

In the first two topics, we saw how hard it was to create a winning strategy for a simple game such as Tic-Tac-Toe. The last topic introduced a few structures for solving search problems with the A* algorithm. We also saw that tools such as the simpleai library help us reduce the effort we put in to describe a task with code. We will use all of this knowledge to supercharge our game AI skills and solve more complex problems. Let us learn more about it with the following topics:

· Search Algorithms for Turn-Based Multiplayer Games

· The Minmax Algorithm

Game AI with the Minmax Algorithm
09:34
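The core of the Minmax algorithm fits in a few lines when the game tree is given explicitly. The tree below is a made-up two-ply example, not a real game:

```python
def minmax(node, maximizing):
    """Return the best achievable score for a game-tree node.
    Leaves are numeric scores; inner nodes are lists of children."""
    if isinstance(node, (int, float)):
        return node  # a leaf: the score of a finished game
    scores = [minmax(child, not maximizing) for child in node]
    return max(scores) if maximizing else min(scores)

# Hypothetical game tree: the maximizer moves first, then the
# minimizer picks the worst outcome within the chosen branch
tree = [[3, 5], [2, 9]]
```

The maximizer avoids the branch containing 9 because the minimizer would answer with 2 there; guaranteed value wins over tempting value.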

In this video, we will continue learning how to develop game AI, this time using alpha-beta pruning.

Game AI with Alpha-Beta Pruning
08:17

Summarize your learning from this lesson.

Lesson Summary
01:23
Test Your Knowledge
5 questions
+ Regression
7 lectures 01:04:45

Let us begin with the third lesson and understand what we are going to cover in our learning journey.

Lesson Overview
02:35

Regression helps us understand how the output variable changes when we keep all but one input variable fixed, and we change the remaining input variable. Let us learn more about it with the following topics:

· What Is Regression?

· Cartesian Coordinate System

· Features and Labels

· Feature Scaling

· Cross-Validation with Training and Test Data

Linear Regression with One Variable
13:15

We illustrate the process of regression on a toy example, where we only have one feature and very limited data. As we only have one feature, we have to format x_train by reshaping it with x_train.reshape(-1, 1) into a NumPy array containing a single feature column. Here are the topics that we will cover now:

· Fitting a Model on Data with scikit-learn

· Linear Regression Using NumPy Arrays

· Fitting a Model Using NumPy Polyfit

· Predicting Values with Linear Regression

Fitting a Model on Data with scikit-learn
13:51
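One of the topics above, fitting a model with NumPy polyfit, can be sketched in a few lines. The numbers are hypothetical training data, not the course's dataset:

```python
import numpy as np

# Hypothetical data: y is roughly 2*x + 1 with a little noise
x_train = np.array([1, 2, 3, 4, 5], dtype=float)
y_train = np.array([3.1, 4.9, 7.2, 9.0, 11.1])

# Fit a degree-1 polynomial (a line); polyfit returns [slope, intercept]
slope, intercept = np.polyfit(x_train, y_train, 1)

# Predict the label for a new input value
prediction = slope * 6 + intercept
```

The same fit-then-predict workflow carries over to scikit-learn's LinearRegression, where fitting and predicting are separate method calls on a model object.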

In the previous video, we dealt with linear regression with one variable. Now we will learn an extended version of linear regression, where we will use multiple input variables to predict the output.

We will rely on examples where we will load and predict stock prices. Therefore, we will experiment with the main libraries used for loading stock prices. Let us learn more about it with the following topics:

· Multiple Linear Regression

· The Process of Linear Regression

· Importing Data from Data Sources

· Loading Stock Prices with Yahoo Finance

· Loading Files with pandas

· Using Quandl to Load Stock Prices

Linear Regression with Multiple Variables
10:41

Before we perform regression, we must choose the features we are interested in, and we also have to figure out the data range on which we do the regression. Preparing the data for prediction is the second step in the regression process. Let us learn more about it with the following topics:

· Preparing Data for Prediction

· Performing and Validating Linear Regression

· Predicting the Future

Preparing Data for Prediction
09:15

When performing polynomial regression, the relationship between x and y (or, using their other names, features and labels) is not a linear equation but a polynomial equation. This means that instead of the y = a*x + b equation, we can have multiple coefficients and multiple powers of x in the equation.

To make matters even more complicated, we can perform polynomial regression using multiple variables, where each feature may have coefficients multiplying different powers of the feature.

Our task is to find a curve that best fits our dataset. Once polynomial regression is extended to multiple variables, we will learn the Support Vector Machines model to perform polynomial regression. Let us learn more about it with the following topics:

· Polynomial Regression with One Variable

· 1st, 2nd, and 3rd Degree Polynomial Regression

· Polynomial Regression with Multiple Variables

· Support Vector Regression

· Support Vector Machines with a 3rd Degree Polynomial Kernel

Polynomial and Support Vector Regression
13:37
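Polynomial regression with one variable can again be sketched with NumPy polyfit, simply by raising the degree. The data here is a made-up quadratic, not the course's dataset:

```python
import numpy as np

# Hypothetical data following y = x^2 exactly
x = np.array([-2, -1, 0, 1, 2], dtype=float)
y = x ** 2

# Fit a 2nd degree polynomial; returns coefficients [a, b, c]
# for the curve a*x^2 + b*x + c
coeffs = np.polyfit(x, y, 2)

# Evaluate the fitted polynomial at a new point, x = 3
y_pred = np.polyval(coeffs, 3)
```

Because the data is an exact quadratic, the fitted leading coefficient comes out as 1 and the prediction at x = 3 is 9, up to floating-point error.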

Summarize your learning from this lesson.

Lesson Summary
01:31

Take the assessment to test your understanding of this lesson.

Test Your Knowledge
5 questions
+ Classification
6 lectures 47:42

Let us begin with the fourth lesson and understand what we are going to cover in our learning journey.

Lesson Overview
00:53

While regression focuses on creating a model that best fits our data to predict the future, classification is all about creating a model that separates our data into separate classes. Let us learn more about it with the following topics:

· The Fundamentals of Classification

· CSV Format

· Loading Datasets

The Fundamentals of Classification Part 1
06:36

Before building a classifier, we are better off formatting our data, keeping the relevant data in the most suitable format for classification and removing all the data we are not interested in. Here are the topics that we will cover now:

· Data Pre-processing

· Minmax Scaling of the Goal Column

· Identifying Features and Labels

· Cross-Validation with scikit-learn

The Fundamentals of Classification Part 2
12:03

The goal of classification algorithms is to partition the data space into regions so that we can determine which region, and therefore which class, a data point belongs to. Suppose that a set of classified points is given; our task is to determine the class of a new data point. The k-nearest neighbor classifier receives training data points with given feature and label values, and classifies new points based on their feature coordinates. Let us learn more about it with the following topics:

· Introducing the K-Nearest Neighbor Algorithm

· Importing Data from Data Sources

· Distance Functions

· The Manhattan/Hamming Distance

· Illustrating the K-nearest Neighbor Classifier Algorithm

· k-nearest Neighbor Classification in scikit-learn

· Parameterization of the k-nearest neighbor Classifier in scikit-learn

The k-nearest neighbor Classifier
13:27
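The voting idea behind the k-nearest neighbor classifier can be illustrated in pure Python. This is a simplified sketch with made-up points and labels, not the scikit-learn implementation used in the course:

```python
import math
from collections import Counter

def knn_classify(training_points, labels, query, k=3):
    """Classify a query point by majority vote of its k nearest neighbors,
    measured with the Euclidean distance."""
    nearest = sorted(
        range(len(training_points)),
        key=lambda i: math.dist(training_points[i], query),
    )[:k]
    votes = Counter(labels[i] for i in nearest)
    return votes.most_common(1)[0][0]

# Two hypothetical, well-separated classes
points = [(0, 0), (0, 1), (1, 0), (5, 5), (5, 6), (6, 5)]
classes = ['a', 'a', 'a', 'b', 'b', 'b']
```

Swapping math.dist for a Manhattan distance function changes the notion of "nearest" without touching the voting logic, which is exactly the role distance functions play in this lecture.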

We first used support vector machines for regression in Lesson 3, Regression. In this topic, you will find out how to use support vector machines for classification. As always, we will use scikit-learn to run our examples in practice. Let us learn more about it with the following topics:

· What are Support Vector Machine Classifiers?

· Understanding Support Vector Machines

· Support Vector Machines in scikit-learn

· Parameters of the scikit-learn SVM

Classification with Support Vector Machines
13:08

Summarize your learning from this lesson.

Lesson Summary
01:35

Take the assessment to test your understanding of this lesson.

Test Your Knowledge
5 questions
+ Using Trees for Predictive Analysis
8 lectures 01:06:50

Let us begin with the fifth lesson and understand what we are going to cover in our learning journey.

Lesson Overview
01:18

In decision trees, we have input and corresponding output in the training data. A decision tree, like any tree, has leaves, branches, and nodes. Leaves are terminal nodes that hold an outcome, such as a yes or no, while internal nodes are where decisions are taken. A decision tree consists of rules that we use to formulate a decision on the prediction of a data point. Let us learn more about it with the following topics:

· Decision Trees

· Creating a Decision Tree

· Credit Worthiness – Rules and Observations

Introduction to Decision Trees
14:03

In information theory, entropy measures how randomly distributed the possible values of an attribute are. The higher the degree of randomness, the higher the entropy of the attribute. Here are the topics that we will cover now:

· Calculating the Entropy

· Information Gain

Entropy
07:30
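The entropy formula described above is short enough to sketch directly. This is an illustrative helper, not the course's code:

```python
import math

def entropy(probabilities):
    """Shannon entropy, in bits, of a probability distribution.
    Zero-probability outcomes contribute nothing."""
    return -sum(p * math.log2(p) for p in probabilities if p > 0)
```

A fair coin (two outcomes at probability 0.5 each) has the maximum entropy of 1 bit, while a certain outcome has entropy 0; splitting a decision tree node aims to reduce this quantity, which is the information gain.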

Instead of entropy, there is another widely used metric that can be used to measure the randomness of a distribution: Gini Impurity. Let us learn more about it with the following topics:

· Exit Condition

· Building Decision Tree Classifiers using scikit-learn

· Evaluating the Performance of Classifiers

Gini Impurity
11:08
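Gini impurity is even simpler to state than entropy. Again, an illustrative sketch rather than the course's code:

```python
def gini_impurity(probabilities):
    """Probability of misclassifying a randomly drawn sample if we
    label it randomly according to the class distribution."""
    return 1 - sum(p ** 2 for p in probabilities)
```

Like entropy, it is maximal for a uniform distribution and zero for a pure node, so either metric can drive the same tree-splitting procedure.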

In this video, we will learn about precision and recall. Let us learn more about it with the following topics:

· Precision and Recall

· Calculating the F1 Score

· Confusion Matrix

Precision and Recall
15:34
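The three metrics in this lecture follow directly from the counts in a confusion matrix. A minimal sketch, with hypothetical counts:

```python
def precision_recall_f1(tp, fp, fn):
    """Compute precision, recall, and the F1 score from
    true-positive, false-positive, and false-negative counts."""
    precision = tp / (tp + fp)   # of predicted positives, how many were right
    recall = tp / (tp + fn)      # of actual positives, how many were found
    f1 = 2 * precision * recall / (precision + recall)  # harmonic mean
    return precision, recall, f1

# Hypothetical confusion-matrix counts: 8 TP, 2 FP, 2 FN
p, r, f = precision_recall_f1(8, 2, 2)
```

The F1 score is the harmonic mean of precision and recall, so it only gets close to 1 when both are high.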

Random forest classification and regression are ensemble algorithms. The idea behind ensemble learning is that we aggregate the decisions of multiple models that potentially have different weaknesses. Let us learn more about it with the following topics:

· Constructing a Random Forest

· Bagging

Random Forest Classifier
09:04

The interface of scikit-learn makes it easy to handle the random forest classifier. Throughout the last three lessons, we have already gotten used to this way of calling a classifier or a regression model for prediction. Let us learn more about it with the following topics:

· Parameterization of the Random Forest Classifier

· Feature Importance

· Extremely Randomized Trees

Random Forest Classification Using scikit-learn
06:42

Summarize your learning from this lesson.

Lesson Summary
01:31

Take the assessment to test your understanding of this lesson.

Test Your Knowledge
5 questions
+ Clustering
5 lectures 40:57

Let us begin with the sixth lesson and understand what we are going to cover in our learning journey.

Lesson Overview
01:22

In the previous lessons, we dealt with supervised learning algorithms to perform classification and regression. We used training data to train our classification or regression model, and then we validated our model using testing data. In this lesson, we will perform unsupervised learning by using clustering algorithms. Let us learn more about it with the following topics:

· Clustering

· Defining the Clustering Problem

· Clustering Approaches

· Clustering Algorithms Supported by scikit-learn

Introduction to Clustering
11:28

The k-means algorithm is a flat clustering algorithm. Here are the topics that we will cover now:

· The k-means Algorithm

· Use Cases

· k-means in scikit-learn

· Parameterization of the k-means Algorithm in scikit-learn

· Retrieving the Center Points and the Labels

The k-means Algorithm
13:38
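The alternating structure of k-means, assign points to the nearest center, then move each center to the mean of its points, can be sketched in a few lines of NumPy. This is a deterministic toy version with hypothetical data, not the scikit-learn implementation used in the course:

```python
import numpy as np

def k_means(points, k, iterations=10):
    """A minimal k-means sketch. Real implementations use random
    initialization and restarts; here we seed with the first k
    points to keep the demo deterministic."""
    centers = points[:k].copy()
    for _ in range(iterations):
        # Assignment step: label each point with its nearest center
        distances = np.linalg.norm(points[:, None] - centers[None, :], axis=2)
        labels = distances.argmin(axis=1)
        # Update step: move each center to the mean of its points
        centers = np.array([points[labels == j].mean(axis=0)
                            for j in range(k)])
    return labels, centers

# Two well-separated hypothetical clusters
points = np.array([[0, 0], [0, 1], [1, 0],
                   [10, 10], [10, 11], [11, 10]], dtype=float)
labels, centers = k_means(points, 2)
```

On well-separated data like this, the two steps converge after a couple of iterations and each cluster's center lands on the mean of its three points.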

Mean shift is a centroid-based, mode-seeking clustering algorithm. Unlike the k-means algorithm, in mean shift, the clustering algorithm determines how many clusters are needed, and also performs the clustering. This is advantageous because we rarely know how many clusters we are looking for. Let us learn more about it with the following topics:

· Mean Shift Algorithm

· Illustrating Mean Shift in 2D

· Mean Shift Algorithm in scikit-learn

· Image Processing in Python

Mean Shift Algorithm
12:58
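The mode-seeking behavior of mean shift can be illustrated in one dimension: every point is repeatedly shifted toward the mean of its neighbors within a bandwidth, until points pile up on the cluster modes. A toy sketch with made-up data, not the scikit-learn implementation:

```python
import numpy as np

def mean_shift_1d(points, bandwidth=2.0, iterations=20):
    """Shift each point toward the mean of its neighbors (within
    the bandwidth) until the points collapse onto the modes."""
    shifted = np.array(points, dtype=float)
    for _ in range(iterations):
        for i, x in enumerate(shifted):
            neighbors = shifted[np.abs(shifted - x) <= bandwidth]
            shifted[i] = neighbors.mean()
    return shifted

# Two hypothetical groups of values, far apart
data = [1.0, 1.5, 2.0, 10.0, 10.5, 11.0]
modes = mean_shift_1d(data)
```

Note that the number of clusters, two here, falls out of the procedure itself: it is simply the number of distinct values the points collapse onto.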

Summarize your learning from this lesson.

Lesson Summary
01:31

Take the assessment to test your understanding of this lesson.

Test Your Knowledge
5 questions
+ Deep Learning with Neural Networks
7 lectures 01:01:56

Let us begin with the seventh lesson and understand what we are going to cover in our learning journey.

Lesson Overview
01:14

TensorFlow is one of the most important open-source machine learning libraries, and it is maintained by Google. The TensorFlow API is available in many languages, including Python, JavaScript, Java, and C. As TensorFlow supports supervised learning, we will use TensorFlow to build a graph model, and then use this model for prediction. Let us learn more about it with the following topics:

· TensorFlow for Python

· Installing TensorFlow in the Anaconda Navigator

· TensorFlow Operations

· Using Basic Operations and TensorFlow Constants

· Placeholders and Variables

· Global Variables Initializer

TensorFlow for Python
13:06

Neural networks are one of the most active branches of AI, and they are inspired by how the human brain works. They were originally invented in the 1940s by Warren McCulloch and Walter Pitts as a mathematical model describing how the human brain can solve problems. Here are the topics that we will cover now:

· Use Cases

· Biases

· Use Cases for Artificial Neural Networks

· Activation Functions

Introduction to Neural Networks
15:39

As artificial neural networks provide a supervised learning technique, we have to train our model using training data. Training the network is the process of finding the weights that connect the neurons. Weight optimization consists of the repeated execution of two steps, forward propagation and backward propagation, whose names describe the direction in which information flows through the network. Let us learn more about it with the following topics:

· Forward and Backward Propagation

· Configuring a Neural Network

· Importing the TensorFlow Digit Dataset

· Modeling Features and Labels

· TensorFlow Modeling for Multiple Labels

· Optimizing the Variables

Forward and Backward Propagation
14:11
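The arithmetic behind a forward pass is just matrix products and activation functions. Here is a minimal NumPy sketch of one hidden layer; the weights are arbitrary made-up numbers, and this illustrates only the math, not the TensorFlow code used in the course:

```python
import numpy as np

def sigmoid(x):
    # Squash any real number into the (0, 1) range
    return 1 / (1 + np.exp(-x))

# Hypothetical fixed weights and biases for a 2-2-1 network
x = np.array([1.0, 0.5])                   # input features
W1 = np.array([[0.2, -0.4], [0.6, 0.1]])   # input -> hidden weights
b1 = np.array([0.0, 0.1])                  # hidden-layer biases
W2 = np.array([0.5, -0.3])                 # hidden -> output weights
b2 = 0.2                                   # output bias

hidden = sigmoid(x @ W1 + b1)              # forward propagation, layer 1
output = sigmoid(hidden @ W2 + b2)         # forward propagation, layer 2
```

Backward propagation then runs the same graph in reverse, computing how much each weight contributed to the prediction error so that the weights can be nudged in the right direction.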

In this video, we will learn how to create a TensorFlow session and run the model. Let us learn more about it with the following topics:

· Training the TensorFlow Model

· Using the Model for Prediction

· Testing the Model

· Randomizing the Sample Size

Training the TensorFlow Model
09:02

In this video, we will increase the number of layers of the neural network. You may remember that we can add hidden layers to our graph. We will target improving the accuracy of our model by experimenting with hidden layers. Let us learn more about it with the following topics:

· Adding Layers

· Convolutional Neural Networks

Deep Learning
05:35

Summarize your learning from this lesson.

Lesson Summary
03:09

Take the assessment to test your understanding of this lesson.

Test Your Knowledge
4 questions
Requirements
  • You do not need any prior experience in AI.
  • We recommend that you have knowledge of high school level mathematics and at least one programming language, preferably Python.
Description

Machine learning and neural networks are fast becoming pillars on which you can build intelligent applications. The course will begin by introducing you to Python and to AI search algorithms. You will learn math-heavy topics, such as regression and classification, illustrated by Python examples.

You will then progress to advanced AI techniques and concepts, and work on real-life datasets to form decision trees and clusters. You will be introduced to neural networks, a powerful tool that benefits from Moore's law applied to 21st-century computing power. By the end of this course, you will feel confident about building your own AI applications with your newly acquired skills!

About the Author

Zsolt Nagy is an engineering manager in an ad tech company heavy on data science. After acquiring his MSc in inference on ontologies, he used AI mainly for analyzing online poker strategies to aid professional poker players in decision making. After the poker boom ended, he put extra effort into building a T-shaped profile in leadership and software engineering.

Who this course is for:
  • This course is ideal for software developers and data scientists who want to enrich their projects with machine learning.