The Ultimate 2019 Deep Learning & Machine Learning Bootcamp
- 3 hours on-demand video
- 4 downloadable resources
- Full lifetime access
- Access on mobile and TV
- Certificate of Completion
- Use Tensorflow and Keras with Python
- Create Neural Network models, train them and check their accuracy
- Choose the best Neural Network architecture for a given problem
This is an introduction to the course, where we will discuss the course plan and its prerequisites.
The prerequisites will mainly be maths and software installation skills.
Clarify the Terminologies: It is important at this stage to clarify the terms Artificial Intelligence, Machine Learning, Neural Networks, and Deep Neural Networks, and how they are related. As a beginner, you will hear these terms used in different contexts, which can seem confusing.
Difference between traditional programming and machine learning: Machine Learning techniques involve giving the network input and output examples and optimizing the model so that it can reproduce the correct output for a given input. The machine is not given explicit rules; instead, the rules are inferred from the data.
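A minimal sketch of this idea (illustrative only, not the course's code): instead of programming the rule y = 2x - 1, we hand Keras input/output pairs and let it infer the rule. The layer size, optimizer, and epoch count are assumptions on my part.

```python
import numpy as np
import tensorflow as tf

# Input/output examples that follow the hidden rule y = 2x - 1
xs = np.array([-1.0, 0.0, 1.0, 2.0, 3.0, 4.0], dtype=np.float32).reshape(-1, 1)
ys = np.array([-3.0, -1.0, 1.0, 3.0, 5.0, 7.0], dtype=np.float32).reshape(-1, 1)

# A single-neuron network: no rules are programmed in
model = tf.keras.Sequential([tf.keras.layers.Dense(1, input_shape=(1,))])
model.compile(optimizer="sgd", loss="mean_squared_error")

history = model.fit(xs, ys, epochs=200, verbose=0)

# The network has inferred something close to y = 2x - 1
print(model.predict(np.array([[10.0]], dtype=np.float32)))
```

The prediction for x = 10 lands near 19, even though the rule was never written down anywhere in the code.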
We will also have a first look at how a Neural Network looks.
Here you will be shown how to install and set up the environment to use Tensorflow.
pip install tensorflow==2.0.0-beta0
Create Tensorflow Environment
conda create -n tensorflow_env tensorflow
Activate Tensorflow Environment
conda activate tensorflow_env
Test if Tensorflow works
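A quick sanity check might look like this (a minimal sketch; the version string printed will depend on what you installed):

```python
import tensorflow as tf

# Print the installed version and run a trivial op to confirm TensorFlow works
print(tf.__version__)
print(tf.constant("Hello, TensorFlow").numpy())
```

If both lines print without errors, the environment is ready.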
This lecture guides you through your first project in Tensorflow, implementing a Neural Network. You will simply type in the commands until you can run training and see learning happening, with the loss decreasing and the accuracy increasing.
The main components of the Neural Networks are:
They are usually organized in layers. There is one Input layer and one Output layer, and between them there is at least one hidden layer.
Note the terminology: a Deep Neural Network has at least 2 hidden layers, while a plain Neural Network has only one hidden layer. Both work on the same principle.
E.g. the numbers of neurons in a neural network may have the following structure:
2 - 3 - 4 - 1
That is, the number of neurons on Input layer = 2
number of neurons on first Hidden Layer = 3
number of neurons on second Hidden Layer = 4
number of neurons on Output Layer = 1
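The 2 - 3 - 4 - 1 structure above can be sketched in Keras like so (the activation choices are illustrative assumptions, not prescribed by the course):

```python
import tensorflow as tf

# 2 - 3 - 4 - 1: two input features, hidden layers of 3 and 4 neurons, one output
model = tf.keras.Sequential([
    tf.keras.layers.Dense(3, activation="relu", input_shape=(2,)),  # first hidden layer
    tf.keras.layers.Dense(4, activation="relu"),                    # second hidden layer
    tf.keras.layers.Dense(1),                                       # output layer
])
model.summary()
```

The summary lists each layer with its weight count: (2x3 + 3) + (3x4 + 4) + (4x1 + 1) = 30 trainable parameters.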
The Neural Network consists of interconnected neurons. Weights are the multiplier values on each interconnection.
(5) Activation functions
Activation functions are usually non-linear mathematical functions that simulate the firing of biological neurons. To model this firing behaviour, several mathematical functions have been proposed, with varying degrees of success. Examples of activation functions:
1. Sigmoid
2. Tanh
3. ReLU
4. Leaky ReLU
Read : https://medium.com/the-theory-of-everything/understanding-activation-functions-in-neural-networks-9491262884e0
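A minimal sketch of some common activation functions in plain Python (for illustration only; in practice Keras provides these):

```python
import math

def sigmoid(x):
    # Squashes any input into the range (0, 1)
    return 1.0 / (1.0 + math.exp(-x))

def tanh(x):
    # Squashes any input into the range (-1, 1)
    return math.tanh(x)

def relu(x):
    # Passes positive inputs through; zeroes out negative inputs
    return max(0.0, x)

def leaky_relu(x, alpha=0.01):
    # Like ReLU, but keeps a small slope for negative inputs
    return x if x > 0 else alpha * x

print(sigmoid(0.0), relu(-2.0), leaky_relu(-2.0))  # 0.5 0.0 -0.02
```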
There is a parameter in the optimization function called the learning rate.
The learning rate is a parameter chosen by the programmer. A high learning rate means that bigger steps are taken in the weight updates, so the model may take less time to converge on an optimal set of weights.
However, a learning rate that is too high can cause updates that overshoot and never settle precisely on the optimal point.
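This effect can be sketched with plain gradient descent on f(x) = x², whose minimum is at x = 0 (the learning-rate values here are illustrative, not from the course):

```python
def gradient_descent(lr, steps=20, x=1.0):
    # Minimize f(x) = x^2; its gradient is f'(x) = 2x
    for _ in range(steps):
        x = x - lr * 2 * x
    return x

print(gradient_descent(0.1))   # small steps: converges toward 0
print(gradient_descent(1.1))   # too high: each step overshoots, |x| keeps growing
```

With lr = 0.1 each update shrinks x by a factor of 0.8, while with lr = 1.1 each update multiplies x by -1.2, so the iterate diverges instead of converging.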
Loss function: measures how accurate the model is during training. We want to minimize this function to "steer" the model in the right direction.
Optimizer: determines how the model is updated based on the data it sees and its loss function.
Metrics: used to monitor the training and testing steps. The following example uses accuracy, the fraction of the images that are correctly classified.
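These three pieces come together in `model.compile`. A sketch, assuming an MNIST-style classifier (the layer sizes and learning-rate value are illustrative choices, not the course's exact code):

```python
import tensorflow as tf

model = tf.keras.Sequential([
    tf.keras.layers.Flatten(input_shape=(28, 28)),    # MNIST images are 28x28
    tf.keras.layers.Dense(128, activation="relu"),
    tf.keras.layers.Dense(10, activation="softmax"),  # one output per digit class
])

model.compile(
    optimizer=tf.keras.optimizers.Adam(learning_rate=0.001),  # optimizer + learning rate
    loss="sparse_categorical_crossentropy",                   # loss function to minimize
    metrics=["accuracy"],                                     # metric to monitor
)
```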
The problem of long-term dependencies is solved by modifications of the basic RNN. Two architectures are used: Long Short-Term Memory (LSTM) and Gated Recurrent Units (GRU). These two variations of RNNs are explained at a high level; we won't get into too much detail.
This project uses a Gated Recurrent Unit trained on the Shakespeare text. At the end, the neural network is seeded with a single word and run until it generates a text of 1000 characters. The interesting part of this project is that it creates text similar to Shakespeare's.
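The model behind such a character generator might be sketched as follows (an assumption on my part, not the course's exact code; the vocabulary size, embedding size, and number of GRU units are illustrative):

```python
import tensorflow as tf

vocab_size = 65      # distinct characters in the Shakespeare text (assumed)
embedding_dim = 256
gru_units = 1024

# Characters in, logits over the next character out at every time step
model = tf.keras.Sequential([
    tf.keras.layers.Embedding(vocab_size, embedding_dim),
    tf.keras.layers.GRU(gru_units, return_sequences=True),
    tf.keras.layers.Dense(vocab_size),  # one logit per possible next character
])
```

Generation then works by sampling a character from the output distribution, feeding it back in, and repeating until 1000 characters have been produced.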
- Basic programming concepts
- High school Maths
- Basic level software installation skills
This course was designed to bring anyone up to speed on Machine Learning & Deep Learning in the shortest time.
This particular field of computer engineering has seen exponential growth in interest worldwide, following major recent progress.
The course starts by building up the foundational concepts of Neural Networks. It then goes over the Tensorflow library and the Python language to get students ready to build practical projects.
The course will go through four types of neural networks:
1. The simple Feedforward network
2. The Convolutional network
3. The Recurrent network
4. The Generative Adversarial network
You will build a practical Tensorflow project for each of the above Neural Networks. You will be shown exactly how to write the code for the models, and how to train and evaluate them.
Here is a list of projects the students will implement:
1. Build a Simple Feedforward Network for MNIST dataset, a dataset of handwritten digits
2. Build a Convolutional Network to classify Fashion items, from the Fashion MNIST dataset
3. Build a Recurrent Network to generate text similar to Shakespeare's
4. Build a Generative Adversarial Network to generate images similar to MNIST dataset
- Those seeking entry level roles in AI/Machine Learning
- Web Developers who want to implement Machine Learning for their clients
- Students in Computer science
- Researchers who are looking for a kickstart in Deep Learning
- Software project managers who plan to use ML in clients' projects