# Artificial Intelligence and Machine Learning Fundamentals

- 8 hours on-demand video
- Full lifetime access
- Access on mobile and TV
- Certificate of Completion

- Understand the importance, principles, and fields of AI
- Learn to implement basic artificial intelligence concepts with Python
- Apply regression and classification concepts to real-world problems
- Perform predictive analysis using decision trees and random forests
- Perform clustering using the k-means and mean shift algorithms
- Understand the fundamentals of deep learning via practical examples

Let’s begin with an overview of the course content.

Before discussing different AI techniques and algorithms, we will look at the fundamentals of artificial intelligence and machine learning and go over a few basic definitions. Let us learn more about it with the following topics:

- What Is Artificial Intelligence (AI)?
- Command-Line Shells
- Command-Line Terminology

In order to put the basic AI concepts into practice, we need a programming language that supports artificial intelligence. In this course, we have chosen Python. Let us learn more about it with the following topics:

- What Is Python?
- Why Is Python Dominant in Machine Learning, Data Science, and AI?
- Anaconda in Python
- Python Libraries for Artificial Intelligence

An AI game player is simply an intelligent agent with a clear goal: to win the game and defeat all other players. AI experiments have achieved surprising results when it comes to games; today, no human can defeat a strong chess AI. Here are the topics that we will cover now:

- Intelligent Agents in Games
- Combinatorial Explosion: Chess
- Breadth First Search and Depth First Search

In AI search, the root of the tree is the starting state. We traverse from this state by generating successor nodes of the search tree. Search techniques differ in the order in which they visit these successor nodes. Here are the topics that we will cover now:

- Breadth First Search and Depth First Search
- Exploring the State Space of a Game
- Estimating the Number of Possible States in the Tic-Tac-Toe Game
- Creating an AI Randomly
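
To make the difference in visiting order concrete, here is a minimal sketch (not taken from the course materials) of both traversals over a small hand-built state graph; the node names and the `successors` dictionary are illustrative assumptions:

```python
from collections import deque

# A tiny hand-built search graph: each state maps to its successor states.
successors = {
    "A": ["B", "C"],
    "B": ["D", "E"],
    "C": ["F"],
    "D": [], "E": [], "F": [],
}

def bfs(start):
    """Visit states level by level using a FIFO queue."""
    order, frontier, seen = [], deque([start]), {start}
    while frontier:
        state = frontier.popleft()
        order.append(state)
        for nxt in successors[state]:
            if nxt not in seen:
                seen.add(nxt)
                frontier.append(nxt)
    return order

def dfs(start):
    """Follow one branch as deep as possible using a LIFO stack."""
    order, frontier, seen = [], [start], {start}
    while frontier:
        state = frontier.pop()
        order.append(state)
        for nxt in reversed(successors[state]):
            if nxt not in seen:
                seen.add(nxt)
                frontier.append(nxt)
    return order

print(bfs("A"))  # ['A', 'B', 'C', 'D', 'E', 'F']
print(dfs("A"))  # ['A', 'B', 'D', 'E', 'C', 'F']
```

The only difference between the two functions is the frontier data structure: a queue gives breadth first order, a stack gives depth first order.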

In this video, we will formalize informed search techniques by defining and applying heuristics to guide our search. Let us learn more about it with the following topics:

- Uninformed and Informed Search
- Creating Heuristics
- Creating Heuristics - Euclidean Distance
- Creating Heuristics - Manhattan Distance
- Admissible and Non-Admissible Heuristics
- Heuristic Evaluation
- Heuristic 1: Simple Evaluation of the Endgame
- Heuristic 2: Utility of a Move
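
As a quick illustration of the two distance-based heuristics listed above, here is a minimal sketch (the points are arbitrary examples, not course data):

```python
import math

def euclidean_distance(p, q):
    """Straight-line distance between two points."""
    return math.sqrt(sum((a - b) ** 2 for a, b in zip(p, q)))

def manhattan_distance(p, q):
    """Grid distance: sum of absolute coordinate differences."""
    return sum(abs(a - b) for a, b in zip(p, q))

print(euclidean_distance((0, 0), (3, 4)))  # 5.0
print(manhattan_distance((0, 0), (3, 4)))  # 7
```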

In the first two lessons, we learned how to define an intelligent agent and how to create a heuristic that guides the agent toward a desired state. We also saw that this was not perfect, because at times the heuristic discarded winning states in favor of losing ones. Let us learn more about it with the following topics:

- Pathfinding with the A* Algorithm
- Finding the Shortest Path to Reach a Goal
- Finding the Shortest Path Using BFS

A* is a complete heuristic search algorithm that, given an admissible heuristic, finds the shortest possible path between the current game state and the winning state. Let us learn more about it with the following topics:

- Introducing the A* Algorithm
- A* Search in Practice Using the simpleai Library
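
To show the idea without any library, here is a minimal A* sketch on a small grid, using Manhattan distance as the admissible heuristic (the grid and coordinates are illustrative assumptions, not the course's simpleai example):

```python
import heapq

def a_star(grid, start, goal):
    """A* over a 2D grid of 0 (free) and 1 (wall); returns the shortest path length."""
    def h(pos):  # Manhattan distance: admissible on a 4-connected grid
        return abs(pos[0] - goal[0]) + abs(pos[1] - goal[1])

    open_heap = [(h(start), 0, start)]  # entries are (f = g + h, g, position)
    best_g = {start: 0}
    while open_heap:
        f, g, pos = heapq.heappop(open_heap)
        if pos == goal:
            return g
        for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            r, c = pos[0] + dr, pos[1] + dc
            if 0 <= r < len(grid) and 0 <= c < len(grid[0]) and grid[r][c] == 0:
                ng = g + 1
                if ng < best_g.get((r, c), float("inf")):
                    best_g[(r, c)] = ng
                    heapq.heappush(open_heap, (ng + h((r, c)), ng, (r, c)))
    return None  # goal unreachable

grid = [
    [0, 0, 0],
    [1, 1, 0],
    [0, 0, 0],
]
print(a_star(grid, (0, 0), (2, 0)))  # 6: the path must detour around the wall
```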

In the first two topics, we saw how hard it was to create a winning strategy for a simple game such as Tic-Tac-Toe. The last topic introduced a few structures for solving search problems with the A* algorithm. We also saw that tools such as the simpleai library help us reduce the effort we put into describing a task with code. We will use all of this knowledge to supercharge our game AI skills and solve more complex problems. Let us learn more about it with the following topics:

- Search Algorithms for Turn-Based Multiplayer Games
- The Minimax Algorithm
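
A minimal minimax sketch (over a made-up game tree rather than the course's example): leaves hold scores from the maximizing player's point of view, and the two players alternate turns:

```python
def minimax(node, maximizing):
    """Return the best achievable score for a game tree of nested lists.

    A leaf is an int (the score from the maximizer's point of view);
    an internal node is a list of child subtrees.
    """
    if isinstance(node, int):
        return node
    scores = [minimax(child, not maximizing) for child in node]
    return max(scores) if maximizing else min(scores)

# Maximizer moves first; each inner list is the minimizer's reply options.
tree = [[3, 5], [2, 9]]
print(minimax(tree, True))  # 3: the maximizer picks the branch whose worst case is best
```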

Regression helps us understand how the output variable changes when we keep all but one input variable fixed and vary the remaining one. Let us learn more about it with the following topics:

- What Is Regression?
- Cartesian Coordinate System
- Features and Labels
- Feature Scaling
- Cross-Validation with Training and Test Data

We illustrate the process of regression on a dummy example, where we only have one feature and very limited data. As scikit-learn expects a two-dimensional feature matrix, we have to format x_train by reshaping it with `x_train.reshape(-1, 1)` into a NumPy array with one column containing the feature. Here are the topics that we will cover now:

- Fitting a Model on Data with scikit-learn
- Linear Regression Using NumPy Arrays
- Fitting a Model Using NumPy Polyfit
- Predicting Values with Linear Regression
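
A minimal sketch of the steps above (the numbers are fabricated so the example is self-contained; a real lesson would use a larger dataset):

```python
import numpy as np
from sklearn.linear_model import LinearRegression

# One feature, a handful of points lying near the line y = 2x + 1.
x_train = np.array([1, 2, 3, 4, 5])
y_train = np.array([3.1, 4.9, 7.2, 9.0, 10.8])

# scikit-learn expects a 2D feature matrix: one row per sample, one column per feature.
model = LinearRegression()
model.fit(x_train.reshape(-1, 1), y_train)

print(model.coef_[0], model.intercept_)   # slope ≈ 1.95, intercept ≈ 1.15
print(model.predict(np.array([[6.0]])))   # ≈ [12.85]
```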

In the previous video, we dealt with linear regression with one variable. Now we will learn an extended version of linear regression, where we will use multiple input variables to predict the output.

We will rely on examples where we will load and predict stock prices. Therefore, we will experiment with the main libraries used for loading stock prices. Let us learn more about it with the following topics:

- Multiple Linear Regression
- The Process of Linear Regression
- Importing Data from Data Sources
- Loading Stock Prices with Yahoo Finance
- Loading Files with pandas
- Using Quandl to Load Stock Prices

Before we perform regression, we must choose the features we are interested in, and we also have to figure out the data range on which to perform the regression. Preparing the data for prediction is the second step in the regression process. Let us learn more about it with the following topics:

- Preparing Data for Prediction
- Performing and Validating Linear Regression
- Predicting the Future

When performing polynomial regression, the relationship between x and y (or, by their other names, features and labels) is not a linear equation but a polynomial equation. This means that instead of the `y = a*x + b` equation, we can have multiple coefficients and multiple powers of x in the equation.

To make matters even more complicated, we can perform polynomial regression with multiple variables, where each feature may have coefficients multiplying different powers of that feature.

Our task is to find a curve that best fits our dataset. Once polynomial regression is extended to multiple variables, we will learn to use the Support Vector Machines model to perform polynomial regression. Let us learn more about it with the following topics:

- Polynomial Regression with One Variable
- 1st, 2nd, and 3rd Degree Polynomial Regression
- Polynomial Regression with Multiple Variables
- Support Vector Regression
- Support Vector Machines with a 3rd Degree Polynomial Kernel
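
For the one-variable case, `numpy.polyfit` is enough to sketch the idea (the sample points are fabricated to lie exactly on y = x² + 1):

```python
import numpy as np

# Points lying exactly on the parabola y = x^2 + 1.
x = np.array([-2.0, -1.0, 0.0, 1.0, 2.0])
y = x ** 2 + 1

# Fit a 2nd degree polynomial: returns coefficients [a, b, c] of a*x^2 + b*x + c.
coeffs = np.polyfit(x, y, deg=2)
print(np.round(coeffs, 6))        # [1. 0. 1.]

# Evaluate the fitted polynomial at a new point.
print(np.polyval(coeffs, 3.0))    # ≈ 10.0
```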

While regression focuses on creating a model that best fits our data to predict the future, classification is all about creating a model that separates our data into separate classes. Let us learn more about it with the following topics:

- The Fundamentals of Classification
- CSV Format
- Loading Datasets

Before building a classifier, we are better off formatting our data so that we keep relevant data in the most suitable format for classification and remove all the data we are not interested in. Here are the topics that we will cover now:

- Data Pre-processing
- Min-Max Scaling of the Goal Column
- Identifying Features and Labels
- Cross-Validation with scikit-learn
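
A minimal pre-processing sketch with scikit-learn (the tiny array stands in for a real dataset):

```python
import numpy as np
from sklearn.preprocessing import MinMaxScaler
from sklearn.model_selection import train_test_split

# Two features on very different scales.
features = np.array([[1.0, 100.0], [2.0, 300.0], [3.0, 500.0], [4.0, 900.0]])
labels = np.array([0, 0, 1, 1])

# Min-max scaling maps each column into the [0, 1] range.
scaled = MinMaxScaler().fit_transform(features)
print(scaled.min(axis=0), scaled.max(axis=0))  # [0. 0.] [1. 1.]

# Hold out part of the data for validation.
x_train, x_test, y_train, y_test = train_test_split(
    scaled, labels, test_size=0.25, random_state=0)
print(len(x_train), len(x_test))  # 3 1
```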

The goal of classification algorithms is to partition the data space so that we can determine which region, and therefore which class, a data point belongs to. Suppose that a set of classified points is given and our task is to determine which class a new data point belongs to. The k-nearest neighbor classifier receives data points with given feature and label values, and classifies a new point according to the classes of the points closest to it in feature space. Let us learn more about it with the following topics:

- Introducing the k-Nearest Neighbor Algorithm
- Importing Data from Data Sources
- Distance Functions
- The Manhattan/Hamming Distance
- Illustrating the k-Nearest Neighbor Classifier Algorithm
- k-Nearest Neighbor Classification in scikit-learn
- Parameterization of the k-Nearest Neighbor Classifier in scikit-learn
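
A minimal k-nearest neighbor sketch in scikit-learn (the two clusters of points are made up for illustration):

```python
import numpy as np
from sklearn.neighbors import KNeighborsClassifier

# Two clusters of 2D points: class 0 near the origin, class 1 near (5, 5).
features = np.array([[0, 0], [1, 0], [0, 1], [5, 5], [6, 5], [5, 6]])
labels = np.array([0, 0, 0, 1, 1, 1])

# n_neighbors and the distance metric are the main parameters.
knn = KNeighborsClassifier(n_neighbors=3, metric="manhattan")
knn.fit(features, labels)

print(knn.predict([[0.5, 0.5], [5.5, 5.5]]))  # [0 1]
```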

We first used support vector machines for regression in Lesson 3, Regression. In this topic, you will find out how to use support vector machines for classification. As always, we will use scikit-learn to run our examples in practice. Let us learn more about it with the following topics:

- What Are Support Vector Machine Classifiers?
- Understanding Support Vector Machines
- Support Vector Machines in scikit-learn
- Parameters of the scikit-learn SVM
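
A minimal support vector classification sketch (again on fabricated, well-separated points):

```python
import numpy as np
from sklearn.svm import SVC

# Two well-separated classes of 2D points.
features = np.array([[0, 0], [1, 1], [1, 0], [4, 4], [5, 5], [4, 5]])
labels = np.array([0, 0, 0, 1, 1, 1])

# kernel, C, and gamma are the parameters most often tuned.
classifier = SVC(kernel="linear", C=1.0)
classifier.fit(features, labels)

print(classifier.predict([[0.5, 0.5], [4.5, 4.5]]))  # [0 1]
```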

In decision trees, the training data contains inputs and their corresponding outputs. A decision tree, like any tree, has leaves, branches, and nodes. Leaves are terminal nodes that carry a final answer, such as yes or no; internal nodes are where decisions are made. A decision tree consists of rules that we use to formulate a decision on the prediction for a data point. Let us learn more about it with the following topics:

- Decision Trees
- Creating a Decision Tree
- Credit Worthiness – Rules and Observations

Besides entropy, there is another widely used metric for measuring the randomness of a distribution: Gini impurity. Let us learn more about it with the following topics:

- Exit Condition
- Building Decision Tree Classifiers Using scikit-learn
- Evaluating the Performance of Classifiers
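
As a quick sketch: Gini impurity for a label distribution is 1 − Σpᵢ², and it is the default split criterion of scikit-learn's decision tree (the toy creditworthiness dataset below is fabricated):

```python
import numpy as np
from sklearn.tree import DecisionTreeClassifier

def gini_impurity(labels):
    """Gini impurity: 1 minus the sum of squared class probabilities."""
    _, counts = np.unique(labels, return_counts=True)
    probabilities = counts / counts.sum()
    return 1.0 - np.sum(probabilities ** 2)

print(gini_impurity([0, 0, 0, 0]))  # 0.0  (pure node)
print(gini_impurity([0, 0, 1, 1]))  # 0.5  (maximally mixed, two classes)

# criterion="gini" is the default; "entropy" is the alternative.
features = [[25, 0], [35, 1], [45, 1], [20, 0]]  # e.g. [age, has_steady_income]
labels = [0, 1, 1, 0]                            # creditworthy or not
tree = DecisionTreeClassifier(criterion="gini", random_state=0)
tree.fit(features, labels)
print(tree.predict([[40, 1]]))  # [1]
```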

Random forest classification and regression are ensemble algorithms. The idea behind ensemble learning is that we aggregate the decisions of multiple models that potentially have different weaknesses. Let us learn more about it with the following topics:

- Constructing a Random Forest
- Bagging

The interface of scikit-learn makes it easy to handle the random forest classifier. Throughout the last three lessons, we have already gotten used to this way of calling a classifier or a regression model for prediction. Let us learn more about it with the following topics:

- Parameterization of the Random Forest Classifier
- Feature Importance
- Extremely Randomized Trees
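
A minimal sketch of parameterizing the random forest and reading feature importances (the dataset is synthetic: one informative feature, one pure-noise feature):

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(0)
# First feature determines the label; second is pure noise.
informative = rng.normal(size=200)
noise = rng.normal(size=200)
features = np.column_stack([informative, noise])
labels = (informative > 0).astype(int)

# n_estimators (number of trees) and max_depth are the most common knobs.
forest = RandomForestClassifier(n_estimators=50, max_depth=3, random_state=0)
forest.fit(features, labels)

print(forest.feature_importances_)  # the first importance should dominate
```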

In the previous lessons, we dealt with supervised learning algorithms to perform classification and regression. We used training data to train our classification or regression model, and then we validated our model using testing data. In this lesson, we will perform unsupervised learning by using clustering algorithms. Let us learn more about it with the following topics:

- Clustering
- Defining the Clustering Problem
- Clustering Approaches
- Clustering Algorithms Supported by scikit-learn
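
Before moving on, a minimal k-means sketch in scikit-learn shows the baseline clustering approach, where we must pick the number of clusters ourselves (the points are made up):

```python
import numpy as np
from sklearn.cluster import KMeans

# Two obvious groups of 2D points.
points = np.array([[1, 1], [1.5, 2], [1, 0], [8, 8], [8, 9], [9, 8]])

# With k-means we must specify the number of clusters up front.
kmeans = KMeans(n_clusters=2, n_init=10, random_state=0)
assignments = kmeans.fit_predict(points)
print(assignments)  # e.g. [0 0 0 1 1 1] (cluster ids may be swapped)
```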

Mean shift is a mode-seeking clustering algorithm. Unlike the k-means algorithm, in mean shift the algorithm itself determines how many clusters are needed while performing the clustering. This is advantageous because we rarely know in advance how many clusters we are looking for. Let us learn more about it with the following topics:

- Mean Shift Algorithm
- Illustrating Mean Shift in 2D
- Mean Shift Algorithm in scikit-learn
- Image Processing in Python
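
A minimal mean shift sketch in scikit-learn (the points and the `bandwidth` value are illustrative assumptions):

```python
import numpy as np
from sklearn.cluster import MeanShift

# Two tight groups of 2D points, far apart.
points = np.array([[1.0, 1.0], [1.2, 0.8], [0.9, 1.1],
                   [8.0, 8.0], [8.1, 7.9], [7.9, 8.2]])

# No cluster count is given: mean shift discovers it from the data.
mean_shift = MeanShift(bandwidth=2.0)
assignments = mean_shift.fit_predict(points)
print(len(mean_shift.cluster_centers_))  # 2 clusters found
```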

TensorFlow is one of the most important open source machine learning libraries, maintained by Google. The TensorFlow API is available in many languages, including Python, JavaScript, Java, and C. As TensorFlow supports supervised learning, we will use it to build a graph model, and then use this model for prediction. Let us learn more about it with the following topics:

- TensorFlow for Python
- Installing TensorFlow in the Anaconda Navigator
- TensorFlow Operations
- Using Basic Operations and TensorFlow Constants
- Placeholders and Variables
- Global Variables Initializer

Neural networks are one of the newest branches of AI. They are inspired by how the human brain works, and were originally proposed in the 1940s by Warren McCulloch and Walter Pitts as a mathematical model of how the human brain solves problems. Here are the topics that we will cover now:

- Use Cases
- Biases
- Use Cases for Artificial Neural Networks
- Activation Functions
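
As a small illustration of activation functions and a bias term, here is a single artificial neuron in plain NumPy (the weights and inputs are made up; the course itself uses TensorFlow):

```python
import numpy as np

def sigmoid(z):
    """Squashes any real number into (0, 1)."""
    return 1.0 / (1.0 + np.exp(-z))

def relu(z):
    """Passes positive values through, zeroes out negatives."""
    return np.maximum(0.0, z)

# A single artificial neuron: weighted sum of inputs plus a bias.
inputs = np.array([0.5, -1.0])
weights = np.array([2.0, 1.0])
bias = 0.5
z = np.dot(inputs, weights) + bias  # 0.5*2 - 1.0*1 + 0.5 = 0.5
print(sigmoid(z))  # ≈ 0.622
print(relu(z))     # 0.5
```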

As artificial neural networks are a supervised learning technique, we have to train our model using training data. Training the network is the process of finding the weights that belong to each connection in the network. Weight optimization consists of the repeated execution of two steps: forward propagation and backward propagation, and their names suggest how these techniques work. Let us learn more about it with the following topics:

- Forward and Backward Propagation
- Configuring a Neural Network
- Importing the TensorFlow Digit Dataset
- Modeling Features and Labels
- TensorFlow Modeling for Multiple Labels
- Optimizing the Variables
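
The two steps can be sketched on a single sigmoid neuron with plain NumPy (a deliberately tiny, made-up example; the course performs the real optimization in TensorFlow):

```python
import numpy as np

# One training example for a single sigmoid neuron.
x = np.array([1.0, 2.0])   # inputs
y = 1.0                    # target label
w = np.array([0.1, -0.1])  # initial weights
b = 0.0                    # initial bias
learning_rate = 0.5

for step in range(100):
    # Forward propagation: compute the prediction and the squared error.
    z = np.dot(w, x) + b
    prediction = 1.0 / (1.0 + np.exp(-z))
    loss = (prediction - y) ** 2

    # Backward propagation: the chain rule gives the gradient of the loss,
    # which we use to update the weights and the bias.
    d_prediction = 2.0 * (prediction - y)
    d_z = d_prediction * prediction * (1.0 - prediction)
    w -= learning_rate * d_z * x
    b -= learning_rate * d_z

print(round(float(prediction), 3))  # close to the target 1.0
```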

In this video, we will increase the number of layers of the neural network. You may remember that we can add hidden layers to our graph. We aim to improve the accuracy of our model by experimenting with hidden layers. Let us learn more about it with the following topics:

- Adding Layers
- Convolutional Neural Networks

- You do not need any prior experience in AI.
- We recommend that you have knowledge of high school level mathematics and at least one programming language, preferably Python.

Machine learning and neural networks are fast becoming pillars on which you can build intelligent applications. The course begins by introducing you to Python and AI search algorithms. You will learn math-heavy topics, such as regression and classification, illustrated by Python examples.

You will then progress to advanced AI techniques and concepts, and work on real-life datasets to form decision trees and clusters. You will be introduced to neural networks, a powerful tool that benefits from Moore's law and 21st-century computing power. By the end of this course, you will feel confident building your own AI applications with your newly acquired skills!

**About the Author**

**Zsolt Nagy** is an engineering manager in an ad tech company heavy on data science. After acquiring his MSc in inference on ontologies, he used AI mainly for analyzing online poker strategies to aid professional poker players in decision making. After the poker boom ended, he put extra effort into building a T-shaped profile in leadership and software engineering.

- This course is ideal for software developers and data scientists who want to enrich their projects with machine learning.