
Deep Learning with TensorFlow 2.0 [2020]

Build Deep Learning Algorithms with TensorFlow 2.0, Dive into Neural Networks and Apply Your Skills in a Business Case
4.4 (1,778 ratings)
16,916 students enrolled
Last updated 1/2020
English
English [Auto-generated], Italian [Auto-generated], and 3 more:
  • Polish [Auto-generated]
  • Portuguese [Auto-generated]
  • Spanish [Auto-generated]
30-Day Money-Back Guarantee
This course includes
  • 6 hours on-demand video
  • 18 articles
  • 20 downloadable resources
  • Full lifetime access
  • Access on mobile and TV
  • Certificate of Completion
What you'll learn
  • Gain a Strong Understanding of TensorFlow - Google’s Cutting-Edge Deep Learning Framework
  • Build Deep Learning Algorithms from Scratch in Python Using NumPy and TensorFlow
  • Set Yourself Apart with Hands-on Deep and Machine Learning Experience
  • Grasp the Mathematics Behind Deep Learning Algorithms
  • Understand Backpropagation, Stochastic Gradient Descent, Batching, Momentum, and Learning Rate Schedules
  • Know the Ins and Outs of Underfitting, Overfitting, Training, Validation, Testing, Early Stopping, and Initialization
  • Competently Carry Out Pre-Processing, Standardization, Normalization, and One-Hot Encoding
Course content
111 lectures 05:55:20
+ Welcome! Course introduction
3 lectures 11:23

In this lesson we introduce ourselves and look into why machine learning is important and what some of its most common applications are. Starting from the origin of the term ‘machine learning’, we discuss different applications from natural language processing to self-driving cars. 

Preview 06:54

The focus of this course is deep learning, and deep neural networks in particular. It is of utmost importance to us that we provide you with in-depth preparation. In this lesson we provide a quick overview of what will follow in the course.

Preview 04:14
What does the course cover? - Quiz
4 questions
Download All Resources and Important FAQ
00:15
+ Introduction to neural networks
13 lectures 42:56

To get into deep learning, one must learn the basics of machine learning. There are four major components: data, model, objective function, and the optimization algorithm. We introduce them and lay the groundwork for the section to come.

Preview 04:09
Introduction to neural networks - Quiz
1 question

Before we begin explaining, we must cover two key concepts in machine learning: training and learning. We explore them through the example of a coffee machine.

Training the model
02:54
Training the model - Quiz
3 questions

Machine learning can be split into three major types: supervised, unsupervised, and reinforcement learning. This course focuses on supervised learning with TensorFlow; thus, we further differentiate between the two types of supervised learning: classification and regression.

Types of machine learning
03:43
Types of machine learning - Quiz
4 questions

The linear model is the basis of neural networks. In this lesson, we introduce xw+b as the basic linear model on which we later build the deep net.

The linear model
03:08
The linear model - Quiz
3 questions
Need Help with Linear Algebra?
00:18
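The basic linear model above can be sketched in a couple of lines of Python. The weight and bias values here are illustrative, not taken from the course:

```python
# A minimal sketch of the linear model y = x*w + b for a single input.
# The weight w and bias b below are illustrative values.
def linear_model(x, w, b):
    return x * w + b

y = linear_model(2.0, w=3.0, b=1.0)  # 2*3 + 1 = 7.0
```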

Building on the basic linear model, we extend it to its matrix form for multiple inputs. 

The linear model. Multiple inputs
02:25
The linear model. Multiple inputs - Quiz
2 questions

In this lesson, we further extend the linear model to multiple inputs and multiple outputs. This is also its most general form, which we will use later in the TensorFlow framework.

The linear model. Multiple inputs and multiple outputs
04:25
The linear model. Multiple inputs and multiple outputs - Quiz
3 questions
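As a rough sketch of this most general form, here is Y = XW + B in NumPy for 3 samples, 2 inputs, and 2 outputs (all numbers are illustrative):

```python
import numpy as np

# The general linear model Y = XW + B in matrix form.
X = np.array([[1., 2.],
              [3., 4.],
              [5., 6.]])       # shape (3, 2): samples x inputs
W = np.array([[0.5, -1.],
              [1.0,  2.]])     # shape (2, 2): inputs x outputs
b = np.array([0.1, -0.1])      # shape (2,): one bias per output

Y = X @ W + b                  # shape (3, 2): samples x outputs
```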

A picture is worth a thousand words. We explore graphical representations of a regression and a classification through the linear model, and note why we must also employ non-linearities.

Graphical representation
01:47
Graphical representation - Quiz
1 question

The third main building block of a deep learning algorithm is the objective function. We explain the concept and introduce the two most commonly used objective functions: the L2-norm loss and the cross-entropy loss.

The objective function
01:27
The objective function - Quiz
2 questions

For regression problems, we often employ the L2-norm loss. The L2-norm loss is equivalent to ordinary least squares (OLS) in statistics and closely related to the Euclidean distance in mathematics.

L2-norm loss
02:04
L2-norm loss - Quiz
3 questions
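A minimal NumPy sketch of the L2-norm loss, using the common convention of halving the sum of squares (the course may define it slightly differently):

```python
import numpy as np

# L2-norm loss: (half the) sum of squared differences
# between predictions and targets.
def l2_loss(predictions, targets):
    return 0.5 * np.sum((predictions - targets) ** 2)

loss = l2_loss(np.array([1., 2.]), np.array([1., 4.]))  # 0.5 * (0 + 4) = 2.0
```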

For classification problems, one of the most common objective functions is the cross-entropy loss. We look into an image classification example that helps us picture the classification problem better.

Cross-entropy loss
03:55
Cross-entropy loss - Quiz
4 questions
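A minimal sketch of the cross-entropy loss for a single observation with one-hot targets (the class probabilities below are made up for illustration):

```python
import numpy as np

# Cross-entropy loss for one observation: targets are one-hot encoded,
# outputs are the predicted class probabilities.
def cross_entropy(targets, outputs):
    return -np.sum(targets * np.log(outputs))

# A 3-class example where the true class is the second one.
t = np.array([0., 1., 0.])
p = np.array([0.1, 0.7, 0.2])
loss = cross_entropy(t, p)   # only the true class's probability matters: -log(0.7)
```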

The final ingredient of a machine learning algorithm is the optimization algorithm. The most basic method is gradient descent. We start with the one-parameter case to get a good grasp of the methodology.

One parameter gradient descent
06:33
One parameter gradient descent - Quiz
4 questions
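The one-parameter gradient descent logic can be sketched on the simple function f(x) = x², whose derivative is 2x (the learning rate here is an illustrative choice):

```python
# One-parameter gradient descent on f(x) = x**2.
# Repeatedly step against the gradient until we settle near the minimum.
def gradient_descent(x, learning_rate=0.1, n_steps=100):
    for _ in range(n_steps):
        gradient = 2 * x              # derivative of x**2
        x = x - learning_rate * gradient
    return x

minimum = gradient_descent(5.0)       # converges toward the minimum at x = 0
```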

Building on the 1-parameter gradient descent, we reach the n-parameter gradient descent, which is also the basic method of optimization used in machine learning. 

N-parameter gradient descent
06:08
N-parameter gradient descent - Quiz
3 questions
+ Setting up the working environment
9 lectures 21:51

In this lesson we introduce the tools we will need in this course. They are the programming language Python, the most popular Python data science platform (Anaconda), the Jupyter notebook as the main place where we will program, and, finally, the machine learning framework on which we focus: TensorFlow.

Setting up the environment - An introduction - Do not skip, please!
00:50

We describe our choice of programming language and environment. 

Why Python and why Jupyter?
04:53
Why Python and why Jupyter? - Quiz
2 questions
Installing Anaconda
03:03

In this short lecture, we show how to download and install the Anaconda platform.

The Jupyter dashboard - part 1
02:27

This lesson is about basic things you can do with Jupyter. We also show useful shortcuts for faster coding.

The Jupyter dashboard - part 2
05:14

In this article you can find a PDF file with all Jupyter shortcuts. Enjoy!

Jupyter Shortcuts
00:09
The Jupyter dashboard - Quiz
3 questions

To use the TensorFlow package, we must first install it. This is a very important lecture in which we show how that’s done.

‘pip install TensorFlow’ and ‘conda install TensorFlow’ are the two ways you can achieve that in Anaconda.

Installing TensorFlow 2
05:02
Installing packages - exercise
00:06
Installing packages - solution
00:07
+ Minimal example - your first machine learning algorithm
5 lectures 20:31

The series of lectures in this section involve what is essentially the first machine learning algorithm many students see. It is extremely important for understanding the process. We don’t employ TensorFlow yet. This first lecture is the outline of the model. 

Preview 03:06

In this lesson we generate data on which we will later train. This step is not part of the machine learning algorithm. Essentially, we will create fake data with a linear relationship. That’s the approach we have taken to prove that the machine learning methodology we showed so far is working.

Minimal example - part 2
04:58
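A sketch of how such fake data with a known linear relationship might be generated in NumPy. The coefficients and sizes below are our illustrative choices, not necessarily the course's exact numbers:

```python
import numpy as np

# Generate fake data with a known linear relationship plus a little noise,
# so we can later check whether training recovers the chosen weights and bias.
np.random.seed(42)
observations = 1000
xs = np.random.uniform(-10, 10, (observations, 1))
zs = np.random.uniform(-10, 10, (observations, 1))
inputs = np.column_stack((xs, zs))

noise = np.random.uniform(-1, 1, (observations, 1))
targets = 2 * xs - 3 * zs + 5 + noise   # true relationship: y = 2x - 3z + 5
```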

An important step in the machine learning algorithm we have not discussed is the initialization of variables. This lecture tackles that problem in practice. Later in the course, we will feature a separate section on the issue.

Minimal example - part 3
03:25

The final part of this minimal machine learning example is the actual training of the model. That is not deep learning just yet, but the logic follows that of neural networks, which we will later unfold into a deep net.

Minimal example - part 4
08:15
Minimal example - Exercises
00:47
+ TensorFlow - An introduction
8 lectures 23:10

This lecture is an introduction to TensorFlow. We explain our choice of framework and compare TensorFlow and sklearn.

TensorFlow outline
03:28

TensorFlow 2.0 is called 2 for a reason: there is still a version 1 of TensorFlow. In this lecture we compare the two releases and highlight the improvements that come with TF2.

TensorFlow 2 intro
02:33

Why it makes sense to code in TF and what to expect!

A Note on Coding in TensorFlow
00:58

TensorFlow works with tensors; thus, it requires the data to be organized in a TensorFlow-friendly way. One solution to the problem is the NumPy file format *.npz.

Types of file formats in TensorFlow and data handling
02:34
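A quick sketch of the *.npz workflow: save named NumPy arrays into a single file and read them back by name (the file name and arrays are illustrative):

```python
import numpy as np

# Save several named arrays into one .npz file...
inputs = np.array([[1., 2.], [3., 4.]])
targets = np.array([[5.], [6.]])
np.savez('tf_intro_data.npz', inputs=inputs, targets=targets)

# ...then load them back; arrays are retrieved by the names given above.
data = np.load('tf_intro_data.npz')
loaded_inputs = data['inputs']
loaded_targets = data['targets']
```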

In TensorFlow, the model is programmed in a different way. We take the same minimal example but show it in the context of the TensorFlow framework. Essentially, we expect the same result.

Like the model, the objective function and the optimization algorithm are implemented in a different way in TensorFlow. 

Model layout - inputs, outputs, targets, weights, biases, optimizer and loss
05:48
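As a hedged sketch of what such a minimal TensorFlow 2 layout can look like: a single Dense layer implements xw + b, SGD is the optimizer, and the L2-norm loss corresponds to Keras's mean squared error. All shapes and hyperparameters below are illustrative, not the course's exact code:

```python
import numpy as np
import tensorflow as tf

# Illustrative linear data: y = 2x - 3z + 5.
inputs = np.random.uniform(-10, 10, (100, 2))
targets = 2 * inputs[:, 0:1] - 3 * inputs[:, 1:2] + 5

# One Dense layer with a single output unit is exactly the linear model xw + b.
model = tf.keras.Sequential([tf.keras.layers.Dense(1)])
model.compile(optimizer=tf.keras.optimizers.SGD(learning_rate=0.02),
              loss='mean_squared_error')
model.fit(inputs, targets, epochs=20, verbose=0)

# Extract the learned weights and bias from the layer.
weights, biases = model.layers[0].get_weights()
```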

We interpret the result of our training and check the weights and biases.

Interpreting the result and extracting the weights and bias
04:09

TensorFlow is very flexible in terms of customization. In fact that's what deep learning is all about. In this lecture we peek into the different ways in which we can customize our model.

Customizing your model
02:51
Minimal example with TensorFlow - Exercises
00:49
+ Going deeper: Introduction to deep neural networks
8 lectures 25:23

Once we are familiar with the machine learning logic and somewhat familiar with the TensorFlow framework, we are ready to carry on with deep learning. The building block of the deep neural network is the neuron layer.

Preview 01:53

Deep learning implies deep neural networks or deep nets. But what is a deep net?

Preview 02:18

To have deep learning and deep nets, we require (at least) several layers. Crucially, they must be stacked with the help of non-linearities.

Understanding deep nets in depth
04:58

Stacking layers produces a deep net. But why do we need non-linearities?

Why do we need non-linearities?
02:59

In a machine learning context, non-linearities are also called activation functions. Henceforth, that’s how we will refer to them. In this lesson, we explain the basic rationale behind an activation function.

Activation functions
03:37
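Three of the most common activation functions, sketched in NumPy (applied elementwise, as they would be inside a layer):

```python
import numpy as np

# sigmoid squashes its input into (0, 1).
def sigmoid(a):
    return 1 / (1 + np.exp(-a))

# tanh squashes its input into (-1, 1).
def tanh(a):
    return np.tanh(a)

# ReLU zeroes out the negatives and keeps the positives unchanged.
def relu(a):
    return np.maximum(0, a)

a = np.array([-2., 0., 2.])
```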

After looking into the most common activation functions, we notice that one of them is special. In this lecture we explore softmax activation and explain why it is used as the activation of the output layer in deep learning.

Softmax activation
03:24
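A sketch of the softmax activation: exponentiate, then normalize, so the outputs are positive and sum to 1 and can therefore be read as probabilities:

```python
import numpy as np

# Softmax turns an arbitrary vector into a probability distribution.
def softmax(a):
    exp_a = np.exp(a - np.max(a))   # subtract the max for numerical stability
    return exp_a / np.sum(exp_a)

probs = softmax(np.array([2.0, 1.0, 0.1]))
```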

The most intuitive lesson, yet the hardest to grasp in mathematical terms. In this lesson and the next, we explore the intuition behind backpropagation, while in a later section, we look into the mathematics of it.

Backpropagation
03:12

Now that we know what backpropagation is, let’s explore the intuition behind it through an example. We take a machine learning diagram of a neural network with a single hidden layer and backpropagate through it.

Backpropagation - visual representation
03:02
+ Backpropagation. A peek into the Mathematics of Optimization
1 lecture 00:20

The mathematics behind backpropagation should be quite interesting to anyone with a quantitative background. Here it is!

Backpropagation. A peek into the Mathematics of Optimization
00:20
+ Overfitting
6 lectures 19:36

One of the most commonly asked questions in data science interviews is about overfitting. Two interrelated concepts come into play: underfitting and overfitting. We look into both, and into why each of them yields a suboptimal machine learning algorithm, through a regression example.

Underfitting and overfitting
03:51

Once we know the difference between underfitting and overfitting, we look into a classification example.

Underfitting and overfitting - classification
01:52

Machine learning practitioners approach the overfitting issue by dividing the initial dataset into three parts: training, validation, and test. In this lesson we explain why this is the case.

Training and validation
03:22

Machine learning practitioners approach the overfitting issue by dividing the initial dataset into three parts: training, validation, and test. In this lesson we look into the test dataset.

Training, validation, and test
02:30

Sometimes there is not enough data to split your dataset into training, validation, and test. N-fold cross-validation is one solution to the problem.

N-fold cross validation
03:07

Training, validation, and test datasets cannot prevent overfitting on their own. They should be implemented together with an early stopping mechanism.

Early stopping
04:54
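The early stopping logic can be sketched as a simple rule: stop as soon as the validation loss starts rising. The loss values below are made up for illustration:

```python
# A minimal early stopping sketch: train until the validation loss
# increases, which signals the onset of overfitting.
def train_with_early_stopping(validation_losses):
    previous = float('inf')
    for epoch, loss in enumerate(validation_losses):
        if loss > previous:          # validation loss started rising: stop here
            return epoch
        previous = loss
    return len(validation_losses)    # never triggered: trained to the end

stopped_at = train_with_early_stopping([0.9, 0.5, 0.3, 0.2, 0.25, 0.1])
```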
+ Initialization
3 lectures 08:04

When you use clumsy or inappropriate methods, even the fastest computer in the world can't help your machine learning. As they say, the devil is in the details, and that is as true as it gets for initialization.

Initialization - Introduction
02:32

There are two simple types of initialization: random uniform and random normal initialization.

Types of simple initializations
02:47

Simple initialization methods have major drawbacks for a machine learning algorithm. A state-of-the-art solution to that problem is the Xavier (Glorot) initializer.

Xavier initialization
02:45
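For illustration, here is the Xavier (Glorot) uniform rule sketched in NumPy: weights are drawn from a uniform range that shrinks as the layer gets wider, keeping the variance of the outputs roughly in line with the variance of the inputs. The layer sizes are illustrative:

```python
import numpy as np

# Xavier (Glorot) uniform initialization:
# draw from U(-limit, limit) with limit = sqrt(6 / (fan_in + fan_out)).
def xavier_uniform(fan_in, fan_out, seed=0):
    rng = np.random.default_rng(seed)
    limit = np.sqrt(6 / (fan_in + fan_out))
    return rng.uniform(-limit, limit, (fan_in, fan_out))

W = xavier_uniform(784, 50)   # e.g. a layer from 784 inputs to 50 units
```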
+ Gradient descent and learning rates
7 lectures 20:40

So far we have seen only gradient descent, so it is time to discuss improvements that lead to enhanced machine learning algorithms. Stochastic gradient descent and batching as a whole are great improvements over the status quo.

Stochastic gradient descent
03:24

In real life, loss functions are not as regular as we imagine them to be. Some issues are worth discussing when we use the gradient descent.

Gradient descent pitfalls
02:02

There are ways to take care of the local minima pitfalls, and momentum is one of them.

Preview 02:30

Choosing the learning rate of a machine learning algorithm is no easy task. But why choose one, when you can have a learning rate schedule?

Learning rate schedules
04:25
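As a sketch, an exponential learning rate schedule in NumPy: start high to move fast, then decay each epoch to settle into the minimum. The constants are illustrative:

```python
import numpy as np

# Exponential decay: the learning rate shrinks smoothly with the epoch number.
def exponential_schedule(initial_rate, decay, epoch):
    return initial_rate * np.exp(-decay * epoch)

rates = [exponential_schedule(0.1, 0.05, e) for e in range(5)]
```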

Simply talking about the learning rate without seeing it doesn’t inform us as well as seeing a graph does. As they say, a picture is worth a thousand words.

Learning rate schedules. A picture
01:32

Two leading adaptive learning rate schedules are AdaGrad and RMSprop. They are also the basis for more advanced optimizers.

Adaptive learning rate schedules
04:08

Adam, or adaptive moment estimation, is a state-of-the-art optimization method. We introduce it so we can employ it later in our TensorFlow algorithms.

Adaptive moment estimation
02:39
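A hedged NumPy sketch of the Adam update for a single parameter, assuming the standard default decay rates. This is a simplified illustration, not the course's implementation: Adam keeps running estimates of the gradient's mean (momentum) and variance (the adaptive part):

```python
import numpy as np

# One Adam update step for a single parameter theta.
def adam_step(theta, grad, m, v, t, lr=0.001, beta1=0.9, beta2=0.999, eps=1e-8):
    m = beta1 * m + (1 - beta1) * grad        # first moment (mean) estimate
    v = beta2 * v + (1 - beta2) * grad ** 2   # second moment (variance) estimate
    m_hat = m / (1 - beta1 ** t)              # bias correction for early steps
    v_hat = v / (1 - beta2 ** t)
    theta = theta - lr * m_hat / (np.sqrt(v_hat) + eps)
    return theta, m, v

# Minimize f(theta) = theta**2 (gradient 2*theta) for a few hundred steps.
theta, m, v = 1.0, 0.0, 0.0
for t in range(1, 501):
    theta, m, v = adam_step(theta, 2 * theta, m, v, t, lr=0.01)
```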
Requirements
  • Some basic Python programming skills
  • You’ll need to install Anaconda. We will show you how to do it in one of the first lectures of the course.
  • All software and data used in the course are free.
Description

Data scientists, machine learning engineers, and AI researchers all have their own skillsets. But what is that one special thing they have in common?

They are all masters of deep learning.

We often hear about AI, or self-driving cars, or the ‘algorithmic magic’ at Google, Facebook, and Amazon. But it is not magic - it is deep learning. And more specifically, it is usually deep neural networks – the one algorithm to rule them all.

Cool, that sounds like a really important skill; how do I become a Master of Deep Learning?

There are two routes you can take:

The unguided route – This route will get you where you want to go, eventually, but expect to get lost a few times. If you are looking at this course you’ve maybe been there.

The 365 route – Consider our route the guided tour. We will take you to all the places you need, using the paths only the most experienced tour guides know about. We have extra knowledge you won't get from reading those information boards, and we share it in fun and easy-to-digest ways to make sure it really sticks.

Clearly, you can talk the talk, but can you walk the walk? – What exactly will I get out of this course that I can’t get anywhere else?

Good question! We know how interesting Deep Learning is and we love it! However, we know that the goal here is career progression, that’s why our course is business focused and gives you real world practice on how to use Deep Learning to optimize business performance.

We don’t just scratch the surface either – It’s not called ‘Skin-Deep’ Learning after all. We fully explain the theory from the mathematics behind the algorithms to the state-of-the-art initialization methods, plus so much more.

Theory is no good without putting it into practice, is it? That’s why we give you plenty of opportunities to put this theory to use. Implement cutting edge optimizations, get hands on with TensorFlow and even build your very own algorithm and put it through training!

Wow, that’s going to look great on your resume!

Speaking of resumes, you also get a certificate upon completion, which employers can use to verify that you have successfully finished a prestigious 365 Careers course – and one of our best at that!

Now, I can see you're bragging a little, but I admit you have piqued my interest. What else does your course offer that will make my resume shine?

Trust us, after this course you’ll be able to fill your resume with skills and have plenty left over to show off at the interview.

  • Of course, you'll get fully acquainted with Google's TensorFlow and NumPy, two tools essential for creating and understanding Deep Learning algorithms.

  • Explore layers, their building blocks and activations – sigmoid, tanh, ReLu, softmax, etc.

  • Understand the backpropagation process, intuitively and mathematically.

  • You’ll be able to spot and prevent overfitting – one of the biggest issues in machine and deep learning

  • Get to know the state-of-the-art initialization methods. Don’t know what initialization is? We explain that, too

  • Learn how to build deep neural networks using real data, implemented by real companies in the real world. TEMPLATES included!

  • Also, I don’t know if we’ve mentioned this, but you will have created your very own Deep Learning Algorithm after only 1 hour of the course.

  • It’s this hands-on experience that will really make your resume stand out

This all sounds great, but I am a little overwhelmed, I’m afraid I may not have enough experience.

We admit, you will need at least a little understanding of Python programming but nothing to worry about. We start with the basics and take you step by step toward building your very first (or second, or third etc.) Deep Learning algorithm – we program everything in Python and explain each line of code.

We do this early on and it will give you the confidence to carry on to the more complex topics we cover.

All the sophisticated concepts we teach are explained intuitively. Our beautifully animated videos and step by step approach ensures the course is a fun and engaging experience for all levels.

We want everyone to get the most out of our course, and the best way to do that is to keep our students motivated. So, we worked hard to ensure that students with varying skills are challenged without being overwhelmed. Each lecture builds upon the last and practical exercises mean that you can practice what you’ve learned before moving on to the next step.

And of course, we are available to answer any queries you have. In fact, we aim to answer any and all questions within 1 business day. We don't just chuck you in the pool then head to the bar and let you fend for yourself.

Remember, we don’t just want you to enrol – we want you to complete the course and become a Master of Deep Learning.

OK, awesome! I feel much better about my level of experience now, but we haven’t discussed yours! How do I know you can teach me to become a Master of Deep Learning?

That’s an understandable worry, but it’s one we have no problem removing.

We are 365 Careers and we’ve been creating online courses for ages. We have over 220,000 students and enjoy high ratings for all our Udemy courses. We are a team of experts who are all, at heart, teachers. We believe knowledge should be shared and not just through boring text books but in engaging and fun ways.

We are well aware how difficult it is to build your knowledge and skills in the data science field, it’s so new and has grown so fast that the education sector has struggled to keep up and offer any substantial methods of teaching these topic areas. We wanted to change things – to rock the boat – so we developed our unique teaching style, one that countless students have enjoyed and thrived with.

And between us, we think this course is one of our favourites, so if this is your first time with us, you’re in for a treat. If it’s not and you’ve taken one of our courses before, then, you’re still in for a treat!

I’ve been hurt before though, how can I be sure you won’t let me down?

Easy, with Udemy’s 30-day money back guarantee. We strive for the best and believe that our courses are the best out there. But you know what, everyone is different, and we understand that. So, we have no problem offering this guarantee, we want students who will complete and get the most out of this course. If you are one of the few who finds this course not what you wanted or expected then, get your money back. No questions, no risk, no problem.

Great, that takes a load of my shoulders. What next?

Click on the ‘Buy now’ button and take that first step toward a satisfying data science career and becoming a Master of Deep Learning.

Who this course is for:
  • Aspiring data scientists
  • People interested in Machine Learning, Deep Learning, Business, and Artificial Intelligence
  • Anyone who wants to learn how to code and build machine and deep learning algorithms from scratch