Neural Networks with TensorFlow and PyTorch

Unleash the power of TensorFlow and PyTorch to build and train Neural Networks effectively
4.0 (3 ratings)
60 students enrolled
Created by Packt Publishing
Last updated 3/2019
English
English [Auto-generated]
This course includes
  • 13 hours on-demand video
  • 1 downloadable resource
  • Full lifetime access
  • Access on mobile and TV
  • Certificate of Completion
What you'll learn
  • Get hands-on and understand Neural Networks with TensorFlow and PyTorch
  • Understand how and when to apply autoencoders
  • Develop an autonomous agent in an Atari environment with OpenAI Gym
  • Apply NLP and sentiment analysis to your data
  • Develop a multilayer perceptron neural network to predict fraud and hospital patient readmission
  • Build a convolutional neural network classifier to automatically identify a photograph
  • Learn how to build a recurrent neural network to forecast time series and stock market data
  • Know how to build a Long Short-Term Memory (LSTM) model to classify movie reviews as positive or negative using Natural Language Processing (NLP)
  • Get familiar with PyTorch fundamentals and code a deep neural network
  • Perform image captioning and grammar parsing using Natural Language Processing
Requirements
  • Basic knowledge of Python is required. Familiarity with TensorFlow and PyTorch will be beneficial.
Description

TensorFlow is quickly becoming the technology of choice for deep learning and machine learning, because it makes it easy to develop powerful neural networks and intelligent machine learning applications. Like TensorFlow, PyTorch has a clean and simple API, which makes building neural networks faster and easier. It's also modular, and that makes debugging your code a breeze. If you're someone who wants to get hands-on with Deep Learning by building and training Neural Networks, then this course is for you.

This course takes a step-by-step approach in which every topic is explained with the help of real-world examples. You will begin by learning some of the Deep Learning algorithms with TensorFlow, such as Convolutional Neural Networks, and Deep Reinforcement Learning algorithms such as Deep Q-Networks and Asynchronous Advantage Actor-Critic. You will then explore Deep Reinforcement Learning algorithms in depth with real-world datasets to get a hands-on understanding of neural network programming and autoencoder applications. You will also learn how to program a machine to identify a human face, predict stock market prices, and process text as part of Natural Language Processing (NLP). Next, you will explore the imperative side of PyTorch for dynamic neural network programming. Finally, you will build two mini-projects: the first applies dynamic neural networks to image recognition, and the second tackles an NLP-oriented problem (grammar parsing).

By the end of this course, you will have a complete understanding of the essential ML libraries TensorFlow and PyTorch for developing and training neural networks of varying complexities, without any hassle.

Meet Your Expert(s):

We have brought together the best work of the following esteemed authors to ensure that your learning journey is smooth:

  • Roland Meertens is currently developing computer vision algorithms for self-driving cars. Previously, he worked as a research engineer in a translation department. Examples of things he has made are a Neural Machine Translation implementation, a post-editor, and a tool that estimates the quality of a translated sentence. Last year, he worked at the Micro Aerial Vehicle Laboratory at the University of Delft on indoor localization (SLAM) and obstacle-avoidance behaviors for a drone that delivers food inside a restaurant. He also worked on detecting and following people using onboard computer vision algorithms on a stereo camera. For his Master's thesis, he did an internship at a company called SpirOps, where he worked on the development of a dialogue manager for project Romeo. During his Artificial Intelligence studies, he specialized in cognitive artificial intelligence and brain-computer interfacing.

  • Harveen Singh Chadha is an experienced researcher in Deep Learning and is currently working as a Self Driving Car Engineer. He is currently focused on creating an ADAS (Advanced Driver Assistance Systems) platform. His passion is to help people who currently want to enter into the Data Science Universe.

  • Anastasia Yanina is a Senior Data Scientist with around 5 years of experience. She is an expert in Deep Learning and Natural Language Processing and constantly develops her skills. She is passionate about human-to-machine interactions and believes that bridging the gap may become possible with deep neural network architectures.

Who this course is for:
  • This course is for machine learning developers, engineers, and data science professionals who want to work with neural networks and deep learning using the powerful Python libraries TensorFlow and PyTorch.
Course content
101 lectures 13:01:31
+ Learning Neural Networks with TensorFlow
25 lectures 03:34:26

This video provides an overview of the entire course.

Preview 04:26

People often discuss whether being good at deep learning requires a lot of knowledge, like a scientist, or a lot of practice, like an artist. You need a combination of both to build state-of-the-art models.

  • Understand that deep learning requires knowledge

  • Understand that deep learning requires practice

  • Understand that you need both to build state-of-the-art models

Solving Public Datasets
02:02

In this course, viewers only need to install one thing: Docker. With this tool, we put a "virtual operating system" on the viewer's computer that has all the dependencies needed for this course.

  • Understand what Docker is

  • Learn about Downloading Docker

  • Install Docker on your system

Why We Use Docker and Installation Instructions
02:52

Learn how to download the source code for this course and build the Docker image. The author will show what commands to enter, and how viewers can open the Jupyter Notebook. Finally, he will show viewers what a Jupyter Notebook is and how it works.

  • Build the Docker image

  • Open the Jupyter Notebook for this section

  • Understand how to use a Jupyter Notebook

Our Code, in a Jupyter Notebook
05:07

Take a look at the TensorFlow software and understand what it is. The author will build some graphs, explain what they do, and show how to evaluate them in a session. Some TensorFlow functions are compared to their NumPy equivalents.

  • Understand what TensorFlow is

  • Build a graph

  • Evaluate this graph in a session

Understanding TensorFlow
14:05
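
As a rough sketch of the graph-and-session workflow the lecture above covers (assuming the TensorFlow 1.x API the course was recorded with; the values here are illustrative):

    import tensorflow as tf

    # Building a graph only describes the computation; nothing runs yet.
    a = tf.constant(3.0)
    b = tf.constant(4.0)
    total = a * b  # a node in the default graph, comparable to np.multiply

    # Evaluating the graph requires a session (tf.compat.v1.Session in TF 2.x).
    with tf.Session() as sess:
        print(sess.run(total))  # 12.0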

Get the Iris dataset and inspect it. Find insights into how to recognize the flowers.

  • Get the data from the SKLearn library

  • View how the data is represented

  • Plot the data to see how we can classify it

The Iris Dataset
06:05

Look at the human brain for inspiration on how computers can learn something and learn how to manually design a Neural Network.

  • Know how the human brain works

  • Learn how we can formalize this with math

  • Program the forward pass with NumPy

The Human Brain and How to Formalize It
11:46

Determine the error that the network made, and how we can optimize the network to reduce this error.

  • Put our Neural Network in TensorFlow

  • Determine the error and choose an optimizer

  • Train our network on our data

Backpropagation
12:04

Although during training it may look as if our neural network has learned to classify everything, it's possible that it does not generalize to the whole dataset. To see how well our network performs, we have to split our data into a training set and a test set.

  • Know why you want to split your data

  • Learn how to split data with SKLearn

  • Evaluate network performance with part of the data

Overfitting — Why We Split Our Train and Test Data
09:34
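
A minimal sketch of the kind of split discussed above, using scikit-learn's train_test_split (the 30% test size is an illustrative choice, not necessarily the one used in the video):

    from sklearn.datasets import load_iris
    from sklearn.model_selection import train_test_split

    X, y = load_iris(return_X_y=True)
    # Hold out part of the data so we measure generalization, not memorization.
    X_train, X_test, y_train, y_test = train_test_split(
        X, y, test_size=0.3, random_state=42)
    print(X_train.shape, X_test.shape)  # (105, 4) (45, 4)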

Download the data from Kaggle and see what is in the dataset.

  • Download the data from Kaggle

  • View how the data is represented

  • Plot the data to see what we have to solve

Ground State Energies of 16,242 Molecules
07:31

Replicate the Neural Network made in the previous section to see how well this works.

  • Look at how we first built a network

  • Take a look at the Keras API

  • Program a Neural Network using very little code

First Approach – Easy Layer Building
10:17

Learn how preprocessing data can give big performance boosts to Neural Networks.

  • Build functions to easily compare our testing results

  • Preprocess our data with the sklearn StandardScaler

  • Compare the results on scaled and unscaled datasets

Preprocessing Data
10:03
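
A small sketch of the StandardScaler step mentioned above (the toy data is hypothetical):

    import numpy as np
    from sklearn.preprocessing import StandardScaler

    X_train = np.array([[1.0, 200.0], [2.0, 300.0], [3.0, 400.0]])
    scaler = StandardScaler().fit(X_train)  # learn mean/std on training data only
    X_scaled = scaler.transform(X_train)    # zero mean, unit variance per feature
    print(X_scaled.mean(axis=0), X_scaled.std(axis=0))  # ~[0. 0.] [1. 1.]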

In this video, we will explore other activation functions and look at how well the network performs when the ReLU function is used as the activation function.

  • Define a variable that contains the activation function

  • Change the activation function to ReLU

  • Visualize several activation functions

Understanding the Activation Function
10:21
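
A short sketch of how activation functions like those in the video can be visualized (which functions the author actually plots is not specified here; these three are common choices):

    import numpy as np
    import matplotlib.pyplot as plt

    x = np.linspace(-5, 5, 200)
    activations = {
        "sigmoid": 1 / (1 + np.exp(-x)),
        "tanh": np.tanh(x),
        "ReLU": np.maximum(0, x),
    }
    for name, y in activations.items():
        plt.plot(x, y, label=name)
    plt.legend()
    plt.show()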

There are several methods to estimate good hyperparameters; we will try a grid search over the learning rate to improve our performance.

  • Learn what hyperparameters are

  • Learn about methods to tweak hyperparameters

  • Do a grid-search over the learning rate variable

The Importance of Hyperparameters
08:25

We want to download a dataset with images of written digits and save these digits to our datasets folder. We will visualize them with Matplotlib after reshaping them.

  • Load the MNIST data with TensorFlow

  • Reshape the vectors to represent an image

  • Visualize the images with Matplotlib

Images of Written Digits
06:22
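
A sketch of loading and visualizing MNIST; the course predates TF 2.x, but tf.keras.datasets offers the same data with less setup (the index 0 is arbitrary):

    import matplotlib.pyplot as plt
    import tensorflow as tf

    (x_train, y_train), _ = tf.keras.datasets.mnist.load_data()
    flat = x_train.reshape(-1, 784)  # each image as a 784-value vector
    img = flat[0].reshape(28, 28)    # reshape a vector back into an image
    plt.imshow(img, cmap="gray")
    plt.title("label: %d" % y_train[0])
    plt.show()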

We will apply what we learned in the previous section on these images and build a deep Neural Network with fully connected layers. We will also write an evaluation function that determines the accuracy of a Neural Network.

  • Build a deep Neural Network with dense layers

  • Determine the accuracy of our solution

  • Compare our result to the state of the art

Dense Layer Approach
06:54

If you shift an image, it's still easy for humans to recognize it. With our dense layers, however, the network has to "learn" every position a character can appear at. We will introduce convolutional layers and pooling layers to counter this problem.

  • Understand what convolutional and max pooling layers are

  • Apply convolutional and max pooling layers in our network

  • See that we improved our accuracy

Convolution and Pooling Layers
11:35

This is the continuation video on convolution and pooling layers.

  • See that we improved our accuracy

Convolution and Pooling Layers (Continued)
07:26

Knowing the output activations of a neural network is great, but often you want to see a "probability" per output class. To do this we introduce the softmax function.

  • Understand what the softmax function does

  • Add the softmax function to our Neural Network

  • Inspect the output of the softmax function and linear weighting

From Activations to Probabilities – the Softmax Function
04:55
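
The softmax function itself is short enough to sketch in NumPy (the logits below are made up):

    import numpy as np

    def softmax(logits):
        e = np.exp(logits - np.max(logits))  # subtract max for numerical stability
        return e / e.sum()                   # outputs are positive and sum to 1

    print(softmax(np.array([2.0, 1.0, 0.1])))  # ~[0.66 0.24 0.10]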

So far we have used the mean squared error loss function and plain gradient descent. The softmax cross-entropy loss performs better for classification problems. We will also look at the momentum and Adam optimizers, which often perform better.

  • Understand what the softmax cross-entropy function does

  • Understand what the momentum and Adam optimizers do

  • Compare our performance with our previous performance

Optimization and Loss Functions
10:26

To analyze the faces of celebrities, we need a lot of data. The CelebA dataset contains more than 200,000 images of celebrities.

  • Find the data

  • Download and unzip the data

  • Load the labels and filenames

Large-Scale CelebFaces Attributes (CelebA) Dataset
08:10

As there are a lot of images, loading them all at once would require a lot of memory. Instead, we will build a pipeline in TensorFlow that reads the images only when we need them.

  • Partition the filenames into train and test partition vectors

  • Build an input queue with filenames and labels

  • Preprocess images inside the TensorFlow graph

Building an Input Pipeline in TensorFlow
11:20
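
The video builds the pipeline with TensorFlow's queue-based input machinery; as a hedged sketch, the same idea in the newer tf.data API looks roughly like this (file names and image size are hypothetical):

    import tensorflow as tf

    filenames = ["celeba/000001.jpg", "celeba/000002.jpg"]  # hypothetical paths
    labels = [0, 1]

    def load_and_preprocess(path, label):
        image = tf.image.decode_jpeg(tf.read_file(path), channels=3)
        image = tf.image.resize_images(image, [64, 64]) / 255.0
        return image, label

    dataset = (tf.data.Dataset.from_tensor_slices((filenames, labels))
               .map(load_and_preprocess)  # images are read only when needed
               .batch(32))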

We loaded our data and preprocessed our images. Now it's time to see how well our well-known approach of convolutional layers works on this dataset.

  • Build a convolutional Neural Network

  • Select the loss and optimizers

  • Train the network and plot the loss

Building a Convolutional Neural Network
09:01

Each layer learns to respond to the output of the previous layer during backpropagation. A trick to speed up this process and get better results is called batch normalization. We will add it to the layers in our network.

  • Understand why batch normalization works

  • Add batch normalization layers to our network

  • Compare the output of non-batch normalization and batch normalization

Batch Normalization
07:42
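
A hedged sketch of inserting batch normalization between a convolution and its activation, in the TensorFlow 1.x layers API (layer sizes are illustrative):

    import tensorflow as tf

    x = tf.placeholder(tf.float32, [None, 64, 64, 3])
    training = tf.placeholder(tf.bool)  # batch norm behaves differently at test time

    conv = tf.layers.conv2d(x, filters=32, kernel_size=3)
    bn = tf.layers.batch_normalization(conv, training=training)
    out = tf.nn.relu(bn)
    # Note: in TF 1.x the moving-average updates live in tf.GraphKeys.UPDATE_OPS
    # and must be run alongside the training op.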

Neural networks sometimes learn something you don't expect. Looking at activations can be an important tool to verify your network is learning something that makes sense. We will also evaluate the performance of our network by drawing a ROC curve.

  • Visualize the output of a convolutional layer

  • Explain what the ROC curve is

  • Draw the ROC curve for our Neural Network

Understanding What Your Network Learned – Visualizing Activations
15:57
Test Your Knowledge
3 questions
+ Advanced Neural Networks with TensorFlow
25 lectures 03:33:25

This video provides an overview of the entire course.

Preview 03:51

There are many different problems in machine learning that you can approach with neural networks. In this section, we are going to learn about autoencoders, Siamese Neural Networks, and reinforcement learning.

  • Discuss what knowledge you should have

  • Explain what we will learn

The Approach of This Course
02:51

During this course, we will download one program with all dependencies using Docker. This video will show you what to download to set up your machine learning workspace.

  • Install Docker

  • Download the code for this course

  • Build and start the Docker container

Installing Docker and Downloading the Source Code for This Course
08:06

In this video, viewers are shown how they can design a Neural Network that can recognize written digits with TensorFlow.

  • Download the MNIST dataset

  • Build a simple neural network with convolutional layers

  • Evaluate the network

Understanding Jupyter Notebooks and TensorFlow
12:56

We will take a look at what TensorBoard is and how to start it. Luckily, TensorBoard is already included in the Dockerfile you are running.

  • Start TensorBoard

  • Write your graph to a file

  • Plot the graph so we can inspect it

Visualizing Your Graph
07:07

Now that we are able to get our graph into TensorBoard, it's time to add something more interesting: the loss. With TensorBoard we can plot multiple lines in the same graph.

  • Add summaries to your graph

  • Write to the summary

  • Compare results between runs

Adding Summaries
08:58

Sometimes you want to take a look at the value of your weights. We will compare two runs, one of which uses a wrong learning rate.

  • Learn how to visualize your weights

  • Run two sessions, with different learning rates

  • Look at the size of the weights in TensorBoard

Plotting the Weights in a Histogram
09:41

Sometimes the problem with your Neural Network is not in the network, but in the data you put into it (or get out of it). Luckily, we can inspect both with TensorBoard.

  • Write inputs to your network to TensorBoard

  • Check whether the input is correct

  • Merge all summaries so you can write them all at once

Inspecting Input and Output
06:16

We will build our first autoencoder that is able to represent MNIST characters in only 10 values. We will evaluate if our neural network can learn something.

  • Build an autoencoder with TensorFlow

  • Feed the network MNIST characters

  • See if the loss goes down during training

Encoding MNIST Characters
12:40

In this video, we will take a look at the result of our decoder. We will also look at one practical application: denoising your input.

  • Inspect the result of our decoder

  • Add noise to input images

  • Inspect result of our decoder on noisy input

Practical Application – Denoising
06:21

It's often difficult to work with noisy input data. Neural Networks tend to "overfit" on certain patterns, which are disturbed by this noise. Dropout is an effective way to reduce your testing error.

  • Understand what dropout does

  • Add dropout to our autoencoder

  • Compare the results to a network without dropout

The Dropout Layer
08:41
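
A minimal sketch of adding dropout in the TensorFlow 1.x layers API (the rate of 0.5 and the layer sizes are illustrative):

    import tensorflow as tf

    x = tf.placeholder(tf.float32, [None, 256])
    training = tf.placeholder(tf.bool)

    hidden = tf.layers.dense(x, 128, activation=tf.nn.relu)
    # Randomly zero half the activations, but only while training.
    dropped = tf.layers.dropout(hidden, rate=0.5, training=training)
    out = tf.layers.dense(dropped, 10)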

By setting the variables in the latent layer to random values, we could generate "new" images of characters. As we don't know in what range we could pick these values, we add an extra loss to our autoencoder that specifies the range we want.

  • Understand what a variational autoencoder is

  • Add an extra factor to our loss function

  • Generate new images of characters

Variational Autoencoders
11:32
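
The "extra loss" mentioned above is the KL-divergence term that keeps the latent values close to a standard normal; a sketch in TensorFlow 1.x (the tensor names are hypothetical):

    import tensorflow as tf

    latent_dim = 10
    mu = tf.placeholder(tf.float32, [None, latent_dim])       # encoder output: mean
    log_var = tf.placeholder(tf.float32, [None, latent_dim])  # encoder output: log-variance

    # KL divergence between N(mu, var) and N(0, 1), summed over latent dimensions.
    kl = -0.5 * tf.reduce_sum(
        1.0 + log_var - tf.square(mu) - tf.exp(log_var), axis=1)
    kl_loss = tf.reduce_mean(kl)  # added to the reconstruction loss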

We will look at the Omniglot dataset. This dataset contains many classes and 20 samples per class.

  • Learn what's in the Omniglot dataset

  • Download the dataset

  • Define functions to load our data

The Omniglot Dataset
09:13

Siamese Neural Networks map input to an output vector. The idea is that this output is similar for similar characters. In this video, we will create such a network with TensorFlow.

  • Create a Neural Network with shared layers for the input

  • Determine the distance between the output of your layers

  • Train the network to create the correct distance

What Is a Siamese Neural Network?
06:55

We defined our network in the previous video. Now it's time to train and test it. We use the data load functions from the first video to train and evaluate our performance.

  • Train the network using our training data

  • Evaluate the network using our testing data

  • Determine the accuracy on top-5 and top-20 test cases

Training and Testing a Siamese Neural Network
05:59

In this video, we will explore two different loss functions: a cross-entropy error on the last layer, and a contrastive loss function that works like a "spring".

  • Try a cross-entropy error on our last layer

  • Try a contrastive loss error on our last layer

  • Compare the performance of all three loss functions

Alternative Loss Functions
06:39

We previously built a lot of large neural networks, and will continue to do so in the next section. In this video, we will analyze what factors influence the speed with which we train our Neural Network.

  • Look at the influence of batch sizes

  • Look at the influence of the size of dense layers

  • Look at the influence of the size of convolutional layers

Speed of Your Network
13:28

We will install the OpenAI gym environment and explore the problem of balancing a stick on a cart.

  • Load dependencies for the OpenAI gym

  • Control the agent with random actions

  • Inspect possible inputs and outputs

Getting Started with the OpenAI Gym
09:30

You could try to solve this environment with one simple matrix multiplication of the input. This essentially gives you a single-layer Neural Network, but no way to optimize it with gradient descent.

  • Frame the problem as a single-layer neural network

  • Generate random matrices and evaluate how good they are

  • Keep the best matrix until we find one that solves the problem

Random Search
08:41
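
A sketch of the random-search idea, assuming the classic Gym API of the course's era (the episode count and weight range are arbitrary):

    import gym
    import numpy as np

    env = gym.make("CartPole-v0")
    best_reward, best_w = 0.0, None

    for _ in range(100):                      # try 100 random matrices
        w = np.random.uniform(-1, 1, 4)       # one weight per observation value
        obs, total, done = env.reset(), 0.0, False
        while not done:
            action = int(np.dot(w, obs) > 0)  # a single matrix multiplication
            obs, reward, done, _ = env.step(action)
            total += reward
        if total > best_reward:
            best_reward, best_w = total, w

    print("best episode reward:", best_reward)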

We will take a look at reinforcement learning, a technique where an autonomous agent learns by getting rewards whenever it does something good! We will discuss what Q-learning is and how deep Q-networks work.

  • Learn what deep Q-networks are

  • Implement deep Q-networks in TensorFlow

Reinforcement Learning Explained
06:34

This is the continuation of the discussion on reinforcement learning.

  • Train and evaluate our agent

Reinforcement Learning Explained (Continued)
11:16

In the previous video, our neural network did not fully solve the environment. In this video, we will look at two tricks that will improve the performance of our Neural Network: epsilon annealing, and limiting the replay memory.

  • Limit the replay memory

  • Anneal epsilon during training

  • Evaluate our newly trained agent

Reinforcement Learning Tricks
08:02

In this video, we will use Deep Learning to play Atari games.

  • Explore techniques necessary to train an agent that plays Atari games

Playing Atari Games
10:45

In this video, we will define our Neural Network.

  • Discuss what Huber loss is

Defining Our Network
06:36

In this video, we will start a session to set the variables of two networks to the same value.

  • Look at what the target and network values are

  • Explore how we can copy weights

Starting and Training a Session
10:47
Test Your Knowledge
3 questions
+ Hands-On Neural Network Programming with TensorFlow
25 lectures 02:47:12

This video provides an overview of the entire course.

Preview 03:06

This video aims to explain and introduce neural networks.

  • Introduction to neural networks

  • Understand deep learning

  • The importance of deep learning

Introduction To Neural Networks
05:23

This video aims to assist you in setting up your environment in five basic steps.

  • Download and install Miniconda. Then validate Miniconda and the Python installation.

  • Create a new environment

  • Install Jupyter Notebook. Then validate the installation.

Setting Up Environment
05:16

This video aims to explain and introduce TensorFlow.

  • Introduction to TensorFlow

  • Know why to use TensorFlow

  • Learn more about TensorFlow

Introduction To TensorFlow
04:57

This video aims to demonstrate TensorFlow installation.

  • Activate the environment

  • Install the CPU version of TensorFlow

  • Use the conda install tensorflow command

TensorFlow Installation
01:52

This video aims to explain the multilayer perceptron neural network.

  • Introduction to the perceptron

  • Understand how a perceptron works

  • Learn about the multilayer perceptron neural network

Multilayer Perceptron Neural Network
02:51

This video aims to explain and introduce forward propagation and loss functions.

  • Brief introduction to forward propagation

  • Understand how forward propagation works

  • Learn about activation functions and loss functions

Forward Propagation & Loss Functions
05:05

This video aims to explain and introduce backpropagation.

  • Introduction to backpropagation

  • Understand backpropagation using a picture

  • Learn about gradient descent

Backpropagation
03:45

This video aims to demonstrate how to create and train a neural network model to predict fraud.

  • Introduction to the training set and the need to divide the data into three sets

  • Learn about Overfitting

  • Understand underfitting

Creating First Neural Network to Predict Fraud
15:52

This video aims to demonstrate testing a neural network model that predicts fraud.

  • Test the model on the test set

  • Make predictions on the test set

  • Evaluate the model

Testing Neural Network to Predict Fraud
02:56

This video aims to explain Convolutional Neural Network.

  • Introduction to Convolutional Neural Networks

  • Understand the need for Convolutional Neural Networks

  • General guidelines

Introduction To Convolutional Neural Networks
11:24

This video aims to demonstrate how to train a model to identify faces.

  • Download and import the "faces in the wild" data

  • Understand how forward propagation works

  • Perform a train/test split and model the data using a CNN

Training a Convolutional Neural Network
16:11

This video aims to demonstrate how to test the model that identifies faces.

  • Test the trained model

  • Load the model

  • Test the model

Testing a Convolutional Neural Network
02:18

This video aims to explain RNNs.

  • Introduction

  • Understand the need for RNNs

  • Types of RNNs

Introduction To Recurrent Neural Networks
05:04

This video aims to demonstrate how to train a model to detect a user's sentiment.

  • Import and preprocess data

  • Use the embedding matrix approach to model data

  • Save the model

Training a Recurrent Neural Network
08:18

This video aims to demonstrate how to test a model that detects a user's sentiment.

  • Test the trained model

  • Load the model

  • Test the model

Testing a Recurrent Neural Network
02:28

This video aims to explain the Long Short-Term Memory Network.

  • Introduction to LSTM Networks

  • Understand the need for LSTM Networks

  • Know more about LSTM Networks

Introduction To Long Short-Term Memory Network
04:37

This video aims to demonstrate how to train a model to detect a user's sentiment.

  • Import data

  • Perform a train/test split

  • Prepare the dataset for training and model the data

Training an LSTM Network
12:49

This video aims to demonstrate how to test a model that detects a user's sentiment.

  • Test the trained model

  • Load the model

  • Test the model

Testing a Long Short-Term Memory Network
03:14

This video aims to explain generative models.

  • Introduction to generative models and what they can do

  • Learn about unsupervised learning

  • Know about different types of generative models

Introduction To Generative models
05:03

This video aims to demonstrate how to use generative models to create artwork from existing paintings using neural style transfer.

  • Introduction to neural style transfer

  • Demonstrate using an example

  • Understand cost function

Neural Style Transfer: Basics
10:35

This video aims to demonstrate the cost function in neural style transfer.

  • Content loss

  • Style loss

  • Total Loss

Results: Neural Style Transfer
07:56

This video aims to explain Autoencoders.

  • Introduction to Autoencoders

  • Learn about Autoencoders: Encoders and Decoders

  • Know about different types of Autoencoders

Introduction To Autoencoders
05:54

This video aims to demonstrate how to use an Autoencoder to convert a grayscale image to an RGB image.

  • Know about the CIFAR-10 dataset

  • Define encoder and decoder architecture

  • Define Autoencoder architecture

Autoencoder in TensorFlow
15:04

This video aims to demonstrate how to train the model and see how it is performing.

  • Demonstrate steps for training the model

  • Demonstrate steps for testing the model

Training & Testing an Autoencoder
05:14
Test Your Knowledge
4 questions
+ Dynamic Neural Network Programming with PyTorch
26 lectures 03:06:28

This video gives a glimpse of the entire course.

Preview 03:10

Setting up the environment and installing the necessary libraries is an important step. We provide instructions for Windows, Linux, and Mac users.

  • Install Python

  • Install pip and NumPy

  • Install PyTorch

Installation Checklist
03:33

Recall the main PyTorch concepts, recapping tensors, variables, and automatic differentiation.

  • Understand how to work with tensors

  • Understand the concept of variables

  • Get to know how to use automatic differentiation

Tensors, Autograd, and Backprop
03:46
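
A minimal sketch of the autograd mechanics this video recaps:

    import torch

    x = torch.tensor([2.0, 3.0], requires_grad=True)
    y = (x ** 2).sum()  # y = x1^2 + x2^2
    y.backward()        # automatic differentiation through the graph
    print(x.grad)       # tensor([4., 6.]) == dy/dx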

Continue recalling the main PyTorch concepts. It is extremely important to remember the basic principles before proceeding to more advanced architectures.

  • Recall backpropagation

  • Learn about loss functions

  • Define a simple neural network

Backprop, Loss Functions, and Neural Networks
06:24

Although every neural network can be trained using just a CPU, this may be very time-consuming. That's why learning how to work with PyTorch on a GPU is important.

  • Learn about the .to(device) method

  • Explore the presented code

  • Test the performance on CPU and GPU

PyTorch on GPU: First Steps
03:13
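
A sketch of the device-moving pattern this video covers (the layer sizes are arbitrary):

    import torch

    device = torch.device("cuda" if torch.cuda.is_available() else "cpu")

    model = torch.nn.Linear(10, 2).to(device)  # move the parameters
    x = torch.randn(4, 10).to(device)          # move the data too
    print(model(x).device)                     # computation runs on `device`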

Imperative and dataflow programming allow you to solve different tasks. It's important to choose the best style for the task at hand.

  • Get to know about computational graphs

  • Get to know about imperative and dataflow programming

  • Compare imperative and dataflow programming

Imperative Programming Architectures
02:46

Dataflow and imperative programming require different tools, and dynamic graphs are what enable the imperative paradigm. Learning about the key features of dynamic graphs, and how they differ from static ones, is important for writing effective, easy-to-read PyTorch code.

  • Compare PyTorch and TensorFlow to see the differences in their graph definitions

  • Compare static and dynamic graphs, and their pros and cons

  • Learn about dynamic graph applications

Static Graphs versus Dynamic Graphs
04:42

Finding bugs in code can be very time-consuming. For effective debugging, PyTorch offers several tips and tricks.

  • Learn about PyTorch debugging tools

  • Learn about TensorFlow debugging tools

  • Compare them and decide which is simpler

Neural Network Debugging: Why Imperative Philosophy Helps
02:02

Knowing the main building blocks is extremely important. In PyTorch, knowing how to implement the popular architectures helps a lot.

  • Get to know how to load data

  • Implement a feedforward network

  • Implement a recurrent neural network

Feedforward and Recurrent Neural Networks
13:07

As far as Computer Vision is concerned, the convolutional neural network is the main tool for almost every task. PyTorch allows us to implement it in a very easy way.

  • Implement CNN class

  • Learn how to train your network

  • Evaluate the results

Convolutional Neural Networks
19:36

An autoencoder is a good way to show how encoder-decoder architectures work. We will get to know them by implementing a linear autoencoder.

  • Implement encoder with just one linear layer

  • Implement decoder with just one linear layer

  • Train the network and evaluate the results

Autoencoders
11:47
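
A hedged sketch of a linear autoencoder like the one in this video (the dimensions are illustrative and the class name is ours, not the course's):

    import torch
    import torch.nn as nn

    class LinearAutoencoder(nn.Module):
        def __init__(self, in_dim=784, latent_dim=32):
            super().__init__()
            self.encoder = nn.Linear(in_dim, latent_dim)  # one linear layer each
            self.decoder = nn.Linear(latent_dim, in_dim)

        def forward(self, x):
            return self.decoder(self.encoder(x))

    model = LinearAutoencoder()
    x = torch.randn(8, 784)           # a dummy batch of flattened images
    loss = nn.MSELoss()(model(x), x)  # reconstruction error
    loss.backward()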

Sometimes there is a need to use functions from the NumPy library. This can give you more freedom and save you from implementing some math yourself. We will explore parameter-less NumPy extensions in this video.

  • Understand why NumPy extensions are useful

  • Explore the presented code

  • Run it on your own

Extensions with Numpy – Part 1
05:10

This video is devoted to parameterized extensions. In many cases you need to pass parameters to the extension, so learning about parameterized extensions is important.

  • Learn the syntax for writing parameterized extensions

  • Explore the code

  • Run the code from the example

Extensions with Numpy – Part 2
05:19

By exploring the LLTM unit code, we will build an intuition for the cases in which C++ extensions are very helpful.

  • Explore initialization step

  • Explore forward pass function

  • Run code from the example

Custom C++ and CUDA Extensions: Motivation
04:17

C++ extensions come in two flavors: they can be built ahead of time with Setuptools, or "just in time". Setuptools gives you more freedom, but also requires more advanced skills. We will learn how to write C++ extensions using the LLTM example from the official tutorial.

  • Explore script that uses Setuptools to compile C++ code

  • Learn about the ATen library and pybind11

  • Define simple C++ code for forward and backward passes

Custom C++ and CUDA Extensions: Setuptools
04:31

After completing the C++ code, we need to bind it to Python. We will use pybind11 to bind our C++ functions into Python.

  • Explore binding code

  • Run script to build and install your extension

  • Import your extension to Python and test it

Custom C++ and CUDA Extensions: Binding to Python
03:20

The JIT compilation mechanism provides you with a way of compiling and loading your extensions on the fly by calling a simple function in PyTorch's API. This way is very simple, but it is appropriate only for trivial cases.

  • Learn about torch.utils.cpp_extension.load()

  • Learn about an ability to write your own build file

  • Try to compile and load your extension on the fly

Custom C++ and CUDA Extensions: JIT Compilation
03:21

In this video, we talk about the image captioning task and try to build an intuition for it.

  • Look at the problem statement

  • See image captioning task types

  • Overview of the section

Image Captioning: First Steps
02:18

PyTorch provides very easy ways to load and preprocess data. Getting to know them helps you write code faster and get rid of long, unnecessary, self-made loading functions.

  • Learn about the Dataset module

  • Learn about transformations

  • Learn about the DataLoader module

PyTorch DataLoaders
09:06
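
A sketch of the Dataset/DataLoader pair this video introduces (the toy dataset is hypothetical):

    import torch
    from torch.utils.data import Dataset, DataLoader

    class SquaresDataset(Dataset):  # hypothetical toy dataset
        def __len__(self):
            return 100

        def __getitem__(self, i):
            x = torch.tensor([float(i)])
            return x, x ** 2

    loader = DataLoader(SquaresDataset(), batch_size=16, shuffle=True)
    for xb, yb in loader:          # batching and shuffling come for free
        print(xb.shape, yb.shape)  # torch.Size([16, 1]) twice
        break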

Before proceeding to the implementation, we will learn more about the architecture used for image captioning tasks.

  • Recap encoder-decoder architectures

  • See Encoder: recall CNN

  • Explore Decoder: learn about LSTM

Image Captioning: Theory
09:48

We will learn how to use a pretrained neural network to generate image captions and try to fine-tune it on the Flickr 8k dataset.

  • Download Flickr 8k dataset, pretrained model weights and vocabulary

  • Import encoder and decoder from model.py, implement evaluation function

  • Fine-tune your network on the Flickr 8k dataset

Image Captioning: Practice
11:12

This part is optional and is devoted to datasets that may be used to train image captioning networks.

  • Learn about Flickr 8k and Flickr 30k datasets

  • Learn about COCO dataset

  • Learn about PASCAL dataset

Honor Track: Image Captioning Datasets
02:57

We’ll quickly go through the section plan and discuss the main tasks covered in the section.

  • Discuss main NLP tasks

  • Look at the section plan

  • Explore instruments and tools

Motivation and Section Overview
01:51

In most cases your features are words, so we need a good way to represent them in a machine-readable format while paying attention to their semantics.

  • Learn about nn.Embedding

  • Learn how to load pretrained word vectors

  • Create a neural network with an embedding layer (GloVe pretrained vectors)

Word Embeddings
12:49
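
A sketch of nn.Embedding as discussed above (the vocabulary size, dimensions, and token ids are made up; loading GloVe weights is shown only as a comment):

    import torch
    import torch.nn as nn

    vocab_size, emb_dim = 10000, 100
    embedding = nn.Embedding(vocab_size, emb_dim)

    token_ids = torch.tensor([[1, 42, 7]])  # a batch with one 3-token sentence
    vectors = embedding(token_ids)
    print(vectors.shape)                    # torch.Size([1, 3, 100])

    # Pretrained vectors (e.g. GloVe) can be copied into the weight matrix:
    # embedding.weight.data.copy_(pretrained)  # `pretrained` is hypothetical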

Understanding a sentence's sentiment can be extremely useful in many other NLP tasks. With PyTorch you can build a neural network that detects whether a sentence is positive or negative.

  • Prepare the data using TorchText

  • Build a vocabulary

  • Implement the full neural network

Sentiment Analysis with PyTorch
15:49

Generative models form the basis of machine translation, image captioning, question answering, and more. We will learn how to build a model that generates new poems after being trained on Shakespeare's poems.

  • Download the texts

  • Implement encoder-decoder architecture

  • Train the network and evaluate the results

Char-Level RNN for Text Generation
20:34
Test Your Knowledge
5 questions