How Do GANs Work? (Step 2)

Hadelin de Ponteves
A free video tutorial from Hadelin de Ponteves
AI Entrepreneur
4.5 instructor rating • 84 courses • 1,229,144 students

Learn more from the full course

Deep Learning and Computer Vision A-Z™: OpenCV, SSD & GANs

Become a Wizard of all the latest Computer Vision tools that exist out there. Detect anything and create powerful apps.

10:58:47 of on-demand video • Updated November 2020

  • Have a toolbox of the most powerful Computer Vision models
  • Understand the theory behind Computer Vision
  • Master OpenCV
  • Master Object Detection
  • Master Facial Recognition
  • Create powerful Computer Vision applications
Step two is basically the same thing, but now the networks have been trained a bit more, and we're going to run through the whole process again to understand it better. So, step two: noise goes into the generator, and the generator generates images which don't look as random anymore; they're a bit clearer, and we'll understand why in just a second, once we finish up with step two. Basically, through the backpropagation of the error, the generator understood where it was making mistakes, adjusted its weights, and is now generating images that look a little more like dogs.

Now we're going to train the discriminator again, and for that we need some images of dogs, a new batch. We put the generated images into the discriminator along with the images of real dogs, and it outputs some values. Now we compare those values to the labels we actually know: the top ones are not dogs, the bottom ones are dogs. We tell that to the discriminator; the discriminator calculates the error, backpropagates the error through its network, learns from it, and its weights are updated. So next time it will do a better job at discriminating generated dogs versus real dogs. It's learning; it's now, metaphorically speaking, at the next level.

And now we want to train the generator. We take the same generated images and put them through the network of the discriminator, and it outputs some values. As you can see, these values are lower: if I go back, you'll see the values were higher before, and now they've dropped, because the discriminator has learned that these don't really look like dogs. But as you can see, they didn't drop all the way.
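The discriminator update just described can be sketched in a few lines. To be clear, this is not the course's code: it's a minimal NumPy toy where the "discriminator" is a single logistic unit and the "images" are random 16-pixel vectors standing in for real dog photos and generator outputs.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy "images": flattened 16-pixel vectors. Names, sizes, and the single
# logistic unit are illustrative assumptions, not the course's model.
real_images = rng.normal(loc=1.0, scale=0.1, size=(8, 16))  # pretend "dogs"
fake_images = rng.normal(loc=0.0, scale=0.1, size=(8, 16))  # generator output

# Discriminator: one logistic unit (a stand-in for a deep network).
w = rng.normal(scale=0.1, size=16)
b = 0.0

def discriminate(x, w, b):
    """Probability that each image in the batch is a real dog."""
    return 1.0 / (1.0 + np.exp(-(x @ w + b)))

# Labels: real batch -> 1 ("dog"), generated batch -> 0 ("not a dog").
x = np.vstack([real_images, fake_images])
y = np.concatenate([np.ones(8), np.zeros(8)])

lr = 0.5
for _ in range(200):
    p = discriminate(x, w, b)
    # Gradient of binary cross-entropy w.r.t. the logits is (p - y);
    # backpropagating it is what updates the discriminator's weights.
    grad_logits = (p - y) / len(y)
    w -= lr * (x.T @ grad_logits)
    b -= lr * grad_logits.sum()
```

This is exactly the "compare the values to the labels, calculate the error, backpropagate, update the weights" loop from the narration: after a few hundred steps the unit scores the real batch near 1 and the generated batch near 0.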
They didn't drop as low as a complete zero; we'll talk about that later, further down, when we're doing step three. But for now, just take note of it. What we want to discuss now is: how does the generator get better? Let's think about that intuitively. Again, we take these values and calculate the errors, because we want these values to be ones; that's what the generator wants, to trick the discriminator. These errors are backpropagated through the discriminator to the generator, and the generator's weights are updated.

The way to think of it intuitively is that these networks are huge. What we have drawn here are very small representations of those networks, just images to show that this is indeed a network, but in reality those networks are much, much bigger. And the way to think about it is that there is a communication happening between the generator and the discriminator. The generator creates these images and says: "Hey, discriminator, I have some images here. Do you think these are images of dogs or not? What do you think the probabilities are?" And the discriminator looks at them and says: "Well, you know, those don't really look like dogs to me. I'll give them about a 20 percent probability, a 10 percent probability, a 50 percent probability." And then the generator is like: "OK, you're totally right, I was trying to trick you. These are not dogs. But can you tell me what I did wrong? Where did I go wrong?" And the discriminator might look at the images, and this is the intuitive understanding of the backpropagation and gradient descent process, and say something like: "Look, I checked your images of dogs, you know, the images you sent me, and I really have seen images of dogs. So I know that dogs normally, for instance, have eyes."
"Your dogs have some little black dots which don't really look like eyes to me, so you're missing out on eyes. Or your dogs don't have enough paws, or they don't have tails, so they don't look like dogs to me. Or they don't look three-dimensional enough; they look very flat, and dogs should look three-dimensional." That's this process of backpropagation. That's what the error is, kind of like in human language, telling the generator. But in reality, of course, it's just telling the generator: "Hey, this part of your image doesn't feel right, this part of the image doesn't feel right," basically dealing with individual pixels and how they should be updated for these images to better resemble dogs. And then the weights of the generator's neural network are updated, so the next time the generator will do a better job. So, step three.
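That generator step, the error flowing back through a frozen discriminator into the generator's own weights, can be sketched the same way. Again, a toy NumPy illustration under stated assumptions, not the course's code: the frozen discriminator is a single logistic unit that scores bright images as "dog", and the generator is a linear map from noise to pixels.

```python
import numpy as np

rng = np.random.default_rng(1)

# Frozen discriminator: one logistic unit that likes bright images.
# (A stand-in for a trained deep network; these weights are illustrative.)
w_d = np.full(16, 0.5)
b_d = -4.0                       # an all-zero image scores near 0: "not a dog"

def discriminate(imgs):
    """Probability that each generated image is a real dog."""
    return 1.0 / (1.0 + np.exp(-(imgs @ w_d + b_d)))

# Generator: a linear map from 4-dimensional noise to a 16-pixel image.
w_g = rng.normal(scale=0.1, size=(4, 16))
noise = rng.normal(size=(4, 4))  # a fixed batch of noise vectors

p_start = discriminate(noise @ w_g).mean()

lr = 0.1
for _ in range(500):
    imgs = noise @ w_g                      # generator output
    p = discriminate(imgs)                  # discriminator's verdict
    # The generator wants p == 1 ("these ARE dogs"), so the error is (p - 1).
    # It flows back THROUGH the frozen discriminator into the generator:
    grad_logits = (p - 1.0) / len(p)        # dLoss/dlogit (BCE, target = 1)
    grad_imgs = np.outer(grad_logits, w_d)  # ...through the discriminator...
    w_g -= lr * (noise.T @ grad_imgs)       # ...and only w_g is updated

p_end = discriminate(noise @ w_g).mean()
```

Note that only `w_g` moves; `w_d` and `b_d` stay frozen. The gradient with respect to the pixels (`grad_imgs`) is the discriminator saying "this part of your image doesn't feel right", and the generator uses it to update its own weights, so `p_end` comes out higher than `p_start`.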