The Ultimate 2019 Deep Learning & Machine Learning Bootcamp
4.1 (48 ratings)
3,733 students enrolled

Use Tensorflow, Keras & Python to build Feedforward, Convolutional, Recurrent, Generative Adversarial Networks & more...
Last updated 11/2019
English
English [Auto]
30-Day Money-Back Guarantee
This course includes
  • 3 hours on-demand video
  • 4 downloadable resources
  • Full lifetime access
  • Access on mobile and TV
  • Certificate of Completion
What you'll learn
  • Use Tensorflow and Keras with Python
  • Create Neural Network models, train them and check their accuracy
  • Choose the best Neural Network architecture for a given problem
Course content
37 lectures 03:04:25
+ Introduction
5 lectures 16:27

This is an introduction to the course, covering the course plan and its prerequisites.

The prerequisites are mainly maths and basic software installation skills.



Preview 02:30

Clarify the Terminologies: It is important at this stage to clarify the terms Artificial Intelligence, Machine Learning, Neural Networks and Deep Neural Networks, and how they are related. As a beginner, you will hear these terms used in different contexts, and that can be confusing.

Difference between traditional programming and machine learning: Machine Learning techniques involve giving the network input and output examples and optimizing the machine so that it can reproduce the output for a given input. The machine is not given rules; instead, the rules are inferred.
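As a toy illustration of this idea (a sketch I've added, not code from the course), here a single weight is inferred from input/output examples by gradient descent instead of being hard-coded. The hidden rule is y = 2x, and the machine discovers it:

```python
# Input/output examples generated by the hidden rule y = 2x.
xs = [1.0, 2.0, 3.0, 4.0]
ys = [2.0, 4.0, 6.0, 8.0]

w = 0.0    # the "machine": a single weight, initially wrong
lr = 0.01  # learning rate

for _ in range(500):
    # Gradient of the mean squared error with respect to w.
    grad = sum(2 * (w * x - y) * x for x, y in zip(xs, ys)) / len(xs)
    w -= lr * grad  # nudge w to reduce the error

print(round(w, 3))  # w ends up very close to 2.0
```

Nobody told the program to multiply by 2; that rule was inferred from the examples alone.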

We will also have a first look at how a Neural Network looks.

Definitions
02:46

In this lecture you will get a brief history of neural networks, focusing on the achievements of Deep Learning. It will give you a good idea of how Deep Learning progressed and of the breakthroughs along the way.

History of Neural Networks
03:16

In this lecture you will go through the typical uses of Deep Learning. There are different categories of use, and the main ones are explained here.

Uses of Neural Networks
05:53

This course starts with these four types of Neural Networks:

(1) Feedforward

(2) Convolutional

(3) Recurrent

(4) Generative Adversarial

A brief explanation will be given here and more details will be given in later sections, along with the practical implementations.


Types of Neural Networks
02:02
Introduction
2 questions
+ Getting Ready - Install Python, Jupyter Notebook and Tensorflow
4 lectures 19:38

Here you will be shown how to install Tensorflow and set up the environment to use it.

https://jupyter-notebook-beginner-guide.readthedocs.io/en/latest/what_is_jupyter.html

Install Tensorflow

pip install tensorflow==2.0.0-beta0

Create Tensorflow Environment

conda create -n tensorflow_env tensorflow

Activate Tensorflow Environment

conda activate tensorflow_env

Test if Tensorflow works

Installation and Setting up
03:52
Installation and Setting up
4 questions

This is a short lecture on how to find your way around Jupyter Notebook. We will use Jupyter Notebook for all the practicals, so it is a good idea to get used to the interface.

Jupyter Notebook
02:26

If you haven't used Python before, don't worry! This lecture will help you get started with Python. You will learn about:

Variables, Printing

Functions

If Else statement

Dictionaries, Lists, Arrays - what are the differences?
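A quick taste of the topics listed above (a standalone sketch, not the course notebook):

```python
# Variables and printing
name = "deep learning"
print("Hello,", name)

# Functions
def square(x):
    return x * x

# If/else statement
if square(3) > 5:
    print("9 is greater than 5")

# List: ordered, indexed by position
layers = [2, 3, 4, 1]
print(layers[0])         # first element: 2

# Dictionary: indexed by key, not by position
config = {"epochs": 5, "lr": 0.01}
print(config["epochs"])  # 5
```

Arrays (e.g. numpy arrays, used heavily in Deep Learning) differ from lists in that all elements share one numeric type, which makes elementwise maths fast.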

Basic Pythoning
10:09
Basic Pythoning
4 questions

This lecture guides you through your first project in Tensorflow, implementing a Neural Network. You will simply type the commands until you can run training and see learning happen, with the loss decreasing and the accuracy increasing.

Hello World project - Handwritten digits recognition
03:11
Installation
2 questions
+ Components of Deep Neural Networks
3 lectures 14:24

The main components of the Neural Networks are:

(1) Layers

Neural Networks are usually organized in layers. There is one Input and one Output layer; in between there is at least one hidden layer.

Note the terminology: a Deep Neural Network has at least 2 hidden layers, while a Neural Network has only one hidden layer. Both work on the same principle.

E.g. the layers of a neural network might have the following numbers of neurons:

2 - 3 - 4 - 1

That is, the number of neurons on Input layer = 2

number of neurons on first Hidden Layer = 3

number of neurons on second Hidden Layer = 4

number of neurons on Output Layer = 1

(2) Nodes

(3) Weights

Weights are multiplicative values attached to each interconnection.

(4) Interconnections

Neural Networks consist of interconnections between neurons.

(5) Activation functions

Activation functions are usually non-linear mathematical functions that simulate the firing actions of biological neurons. 

(6) Bias
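Putting the 2 - 3 - 4 - 1 example above together with weights and biases: in a fully connected network, every interconnection carries one weight and every non-input neuron has one bias, so the total parameter count can be worked out directly (a small sketch I've added, not course code):

```python
sizes = [2, 3, 4, 1]  # input, hidden 1, hidden 2, output

params = 0
for n_in, n_out in zip(sizes, sizes[1:]):
    params += n_in * n_out  # one weight per interconnection
    params += n_out         # one bias per neuron in the next layer

print(params)  # (2*3 + 3) + (3*4 + 4) + (4*1 + 1) = 30
```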

Preview 04:16
Layers and Nodes
3 questions
Weights
2 questions

The activation functions used in Deep Learning mimic the firing of neurons. Several mathematical functions have been suggested to model this, with varying degrees of success. Examples of Activation Functions:

1. ReLU

2. Sigmoid

3. Softmax

4. Leaky ReLU


Read : https://medium.com/the-theory-of-everything/understanding-activation-functions-in-neural-networks-9491262884e0
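The four functions listed above can be written in a few lines of plain Python (a sketch for intuition; in practice Tensorflow provides optimized versions of all of them):

```python
import math

def relu(x):
    # Passes positive values through, zeroes out negatives.
    return max(0.0, x)

def leaky_relu(x, alpha=0.01):
    # Like ReLU, but keeps a small slope for negative inputs.
    return x if x > 0 else alpha * x

def sigmoid(x):
    # Squashes any real number into (0, 1).
    return 1.0 / (1.0 + math.exp(-x))

def softmax(xs):
    # Turns a list of scores into probabilities that sum to 1.
    exps = [math.exp(x) for x in xs]
    total = sum(exps)
    return [e / total for e in exps]

print(relu(-2.0), relu(3.0))    # 0.0 3.0
print(round(sigmoid(0.0), 2))   # 0.5
print(sum(softmax([1.0, 2.0, 3.0])))  # sums to 1 (up to floating point)
```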


Activation Functions
04:28
Activation Function
3 questions

How do you practically calculate a feedforward computation?

An example is given with numbers, showing how the calculations are made, from input values to output values.
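This kind of calculation can be reproduced in a few lines: each neuron computes a weighted sum of its inputs plus a bias, then applies the activation function. The numbers below are illustrative and not the lecture's (a sketch I've added):

```python
import math

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

def layer(inputs, weights, biases):
    # weights[j][i] multiplies input i for neuron j
    return [
        sigmoid(sum(w * x for w, x in zip(ws, inputs)) + b)
        for ws, b in zip(weights, biases)
    ]

x = [1.0, 0.5]                                        # input layer (2 values)
h = layer(x, [[0.4, -0.2], [0.3, 0.8]], [0.1, -0.1])  # hidden layer (2 neurons)
y = layer(h, [[0.6, -0.5]], [0.2])                    # output layer (1 neuron)
print(round(y[0], 3))  # a single output between 0 and 1
```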

Feedforward calculation example
05:40
+ Training Neural Networks
5 lectures 20:28

This lecture introduces the training of neural networks. The concept of training of neural networks will be explained.

What is training in Neural Networks?
04:33

Loss functions are an important part of training because they are used to compute the error. The error value is used to update the weights in the network; this is how learning happens.
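Two commonly used loss functions can be sketched in plain Python (illustrative, not the lecture's exact code): mean squared error, typical for regression, and cross-entropy, typical for classification.

```python
import math

def mean_squared_error(targets, predictions):
    # Average of the squared differences.
    return sum((t - p) ** 2 for t, p in zip(targets, predictions)) / len(targets)

def cross_entropy(target_probs, predicted_probs):
    # Penalizes confident wrong predictions heavily.
    return -sum(t * math.log(p) for t, p in zip(target_probs, predicted_probs))

print(mean_squared_error([1.0, 0.0], [0.9, 0.2]))           # 0.025
print(round(cross_entropy([0, 1, 0], [0.1, 0.8, 0.1]), 3))  # 0.223
```

The smaller these values, the closer the predictions are to the targets, which is exactly what training tries to achieve.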

Loss Functions
04:29

Training is basically an optimization exercise: the goal of training can be summarized as minimizing the error function. This is explained in this lecture.

Minimizing Error
05:27

There is a parameter in the optimization function called the learning rate.

The learning rate is a parameter chosen by the programmer. A high learning rate means that bigger steps are taken in the weight updates, so the model may take less time to converge on an optimal set of weights.

However, a learning rate that is too high results in jumps that are too large and not precise enough to reach the optimal point.
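A quick numerical illustration of this trade-off (a sketch I've added, not from the lecture): minimize the error function E(w) = (w - 3)^2 by gradient descent with two different learning rates.

```python
def gradient_descent(lr, steps=20, w=0.0):
    for _ in range(steps):
        grad = 2 * (w - 3)  # derivative of (w - 3)**2
        w -= lr * grad
    return w

print(round(gradient_descent(0.1), 3))  # settles near the optimum w = 3
print(round(gradient_descent(1.1), 1))  # overshoots: |w| grows with every jump
```

With lr = 0.1 each step shrinks the distance to the optimum; with lr = 1.1 each step jumps past the optimum and lands farther away than before, so the error diverges.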

Learning Rate
03:15

In this lecture we will talk about initialisation and normalisation. Although these two techniques seem unrelated to training and learning, research shows that the choice of initialisation and normalisation methods can affect training.

Initialization and Normalization
02:44
Training
2 questions
+ Tensorflow Libraries for Deep Learning
6 lectures 29:30

The Design flow for a typical Deep Learning project on Tensorflow will be explained here. This is the introduction to this section. The rest of the section will be about detailing each stage in the flow.

The Design flow
04:37

This lecture will show how to access datasets such as MNIST and Fashion MNIST. You will learn how to load them and get the data in variables for use.

Datasets
07:53

This stage in the design flow is where you build the model and define which layers it will contain.

Build Models with Layers
07:12
  • Loss function — This measures how accurate the model is during training. We want to minimize this function to "steer" the model in the right direction.

  • Optimizer — This is how the model is updated based on the data it sees and its loss function.

  • Metrics — Used to monitor the training and testing steps. The following example uses accuracy, the fraction of the images that are correctly classified.

model.compile(optimizer='adam',
              loss='sparse_categorical_crossentropy',
              metrics=['accuracy'])

Compile Models
04:35

model.fit(train_images, train_labels, epochs=5)

This command is executed to train the model; this is called the 'Fitting' stage. It is an optimization process.

Fitting
03:12

Evaluation is when you use test data to evaluate the model, calculating the loss and, typically, accuracy as the metric. Prediction is about querying the model with new data and seeing what output label it gives.

Evaluate and Predict
02:01
Design Flow
2 questions
+ Convolutional Neural Networks
5 lectures 33:30

This is an introduction to Convolutional Neural Networks. You will get a high-level explanation of this kind of network and where it is used.

What are CNN?
06:44

Convolution is a processing step performed in the layers of CNNs, and this lecture is about how convolution works.
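The core operation can be sketched in plain Python: slide a small filter over the image and, at each position, sum the elementwise products. This is a simplified "valid" convolution, really cross-correlation as Deep Learning frameworks implement it (a sketch I've added, not course code):

```python
def convolve2d(image, kernel):
    kh, kw = len(kernel), len(kernel[0])
    out_h = len(image) - kh + 1
    out_w = len(image[0]) - kw + 1
    return [
        [
            # Sum of elementwise products of the kernel with the
            # patch of the image under its current position.
            sum(
                image[i + di][j + dj] * kernel[di][dj]
                for di in range(kh)
                for dj in range(kw)
            )
            for j in range(out_w)
        ]
        for i in range(out_h)
    ]

image = [
    [1, 2, 0],
    [0, 1, 3],
    [4, 0, 1],
]
edge_kernel = [[1, -1],
               [1, -1]]  # responds to vertical edges
print(convolve2d(image, edge_kernel))  # [[-2, 0], [3, -3]]
```

In a CNN the kernel values are not hand-picked like this; they are weights learned during training.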

Convolution
06:29

Pooling is another main processing step in CNNs, and it is explained in detail in this lecture.
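For intuition, 2x2 max pooling can be sketched like this: the feature map is cut into 2x2 blocks and each block is replaced by its largest value, halving both dimensions (a plain-Python sketch I've added, not course code):

```python
def max_pool_2x2(feature_map):
    # Assumes height and width are even.
    return [
        [
            max(feature_map[i][j], feature_map[i][j + 1],
                feature_map[i + 1][j], feature_map[i + 1][j + 1])
            for j in range(0, len(feature_map[0]), 2)
        ]
        for i in range(0, len(feature_map), 2)
    ]

fm = [
    [1, 3, 2, 0],
    [4, 2, 1, 1],
    [0, 1, 5, 2],
    [2, 2, 0, 3],
]
print(max_pool_2x2(fm))  # [[4, 2], [2, 5]]
```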

Pooling
02:05

This is the first part of the project. The project that you will do in this section is to classify Fashion MNIST images into the correct label. The project will be executed on Jupyter.

Project : Fashion MNIST using CNN - Part 1
07:43

This is part two of the project where you will train the model and predict labels using a small function.

Project: Fashion MNIST using CNN - Part 2
10:29
Convolutional Neural Networks
2 questions
+ Recurrent Neural Networks
6 lectures 34:45

This is an introduction to Recurrent Neural Networks. You will learn why they are used, what types of problems they solve, and the basic idea of how they work.

Introduction
05:51

In this lecture you will learn about the basic structure of RNNs and how they are unrolled for analysis.
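The recurrence can be sketched in a few lines: the same cell is applied at every time step, combining the current input with the hidden state carried over from the previous step. This is a scalar toy example I've added, not course code:

```python
import math

def rnn(inputs, w_x=0.5, w_h=0.8, b=0.0):
    h = 0.0      # hidden state, carried across time steps
    states = []
    for x in inputs:
        # The same weights are reused at every step; "unrolling"
        # means drawing one copy of this cell per time step.
        h = math.tanh(w_x * x + w_h * h + b)
        states.append(h)
    return states

states = rnn([1.0, 0.0, 0.0])
print([round(h, 3) for h in states])
```

Note how the first input keeps influencing the later hidden states through h, even though the later inputs are zero; this memory is what distinguishes RNNs from Feedforward networks.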

Structure
04:33

This short lecture shows concrete examples of how RNNs are used. This will give you a better understanding of practical use of RNNs.

Examples
02:49

This lecture explains how RNNs are trained. Since they are different from Feedforward Neural Networks, the concept of training in relation to RNNs is detailed here.

Training
06:41

The problem of long-term dependencies is solved by modifications of RNNs. Two architectures are used: Long Short-Term Memory and Gated Recurrent Units. These two variations of RNNs are explained at a high level; we won't go into too much detail.

LSTM (Long Short-Term Memory) and GRU (Gated Recurrent Unit)
02:52

This project is about using a Gated Recurrent Unit and training it on the Shakespeare text. At the end, the neural network is queried with a single word and run until it creates a text of 1000 characters. The interesting part of this project is that it creates a text similar to Shakespeare's.

Project - Shakespeare text imitation using GRU
11:59
Recurrent Neural Networks
2 questions
+ Generative Adversarial Networks
3 lectures 15:43

This is an introduction to Generative Adversarial Networks. You will learn the basic concepts behind GANs and get an introduction to their structure.

Introduction
04:42

This lecture will give more details on how GANs work.

Preview 04:24

This is an interesting project. It is about training a GAN with MNIST data to get a network that is able to generate handwritten digits similar to those in MNIST.

Project - Generate images of handwritten digits
06:37
Generative Adversarial Networks
2 questions
Requirements
  • Basic programming concepts
  • High school Maths
  • Basic level software installation skills
Description

This course was designed to bring anyone up to speed on Machine Learning & Deep Learning in the shortest time.

This particular field of computer engineering has seen exponential growth in interest worldwide, following major progress in recent years.


The course starts by building the foundational concepts relating to Neural Networks. Then it goes over the Tensorflow libraries and the Python language to get students ready to build practical projects.

The course will go through four types of neural networks:

1. The simple feedforward

2. Convolutional

3. Recurrent

4. Generative Adversarial

You will build a practical Tensorflow project for each of the above Neural Networks. You will be shown exactly how to write the code for the models, and how to train and evaluate them.

Here is a list of projects the students will implement:

1. Build a Simple Feedforward Network for MNIST dataset, a dataset of handwritten digits

2. Build a Convolutional Network to classify Fashion items, from the Fashion MNIST dataset

3. Build a Recurrent Network to generate a text similar to Shakespeare text

4. Build a Generative Adversarial Network to generate images similar to MNIST dataset


Who this course is for:
  • Those seeking entry level roles in AI/Machine Learning
  • Web Developers who want to implement Machine Learning for their clients
  • Students in Computer science
  • Researchers who are looking for a kickstart in Deep Learning
  • Software project managers who plan to use ML in clients' projects