Introduction: What are Artificial Neural Networks and how do they learn?

Sundog Education by Frank Kane
A free video tutorial from Sundog Education by Frank Kane
Founder, Sundog Education. Machine Learning Pro

Learn more from the full course

Autonomous Cars: Deep Learning and Computer Vision in Python

Learn OpenCV, Keras, object and lane detection, and traffic sign classification for self-driving cars

12:44:34 of on-demand video • Updated May 2020

  • Automatically detect lane markings in images
  • Detect cars and pedestrians using a trained SVM classifier
  • Classify traffic signs using Convolutional Neural Networks
  • Identify other vehicles in images using template matching
  • Build deep neural networks with TensorFlow and Keras
  • Analyze and visualize data with NumPy, Pandas, Matplotlib, and Seaborn
  • Process image data using OpenCV
  • Calibrate cameras in Python, correcting for distortion
  • Sharpen and blur images with convolution
  • Detect edges in images with Sobel, Laplace, and Canny
  • Transform images through translation, rotation, resizing, and perspective transform
  • Extract image features with HOG
  • Detect object corners with Harris
  • Classify data with machine learning techniques including regression, decision trees, Naive Bayes, and SVM
  • Classify data with artificial neural networks and deep learning
Hello everyone, and welcome to this section. I'm really excited to talk about artificial neural networks and to explore their power in this section.

So the first question is: what are artificial neural networks? Artificial neural networks are information processing models inspired by the human brain. Put simply, we look at how the human brain works and try to imitate it mathematically. Let's take a look. What you see here is a bunch of neurons in the human brain, and these neurons firing is how we actually learn and acquire new pieces of information as we go. Here is an example of a single neuron. Each neuron has a set of dendrites, which are the branches feeding into the nucleus. The nucleus takes all of these incoming signals, processes them somehow, and sends a signal out along the axon, which carries the output of the nucleus.

So, in a nutshell, what scientists did was look at how a biological neuron works and model it mathematically. In a very simple form (we will cover all of this in much more detail later; this is just an introductory lecture on artificial neural networks), we say: that's how a neuron works, so let's model it mathematically. As you can see here, we have a bunch of inputs feeding into a summation function, each input has a weight, and we then apply an activation function that generates an output y. That's pretty much how we model a neuron in mathematical form. Once we have that mathematical form, we can run it on a microprocessor and effectively model our brain, which is really powerful, because artificial neural networks can be used for many things: face detection, face recognition, and, as we're going to learn, traffic sign classification, and so on.

Our brain has over 100 billion neurons communicating with each other through electrical and chemical signals, and these neurons help us see things, generate ideas, and so on. When a human learns, what effectively changes is the weights, that is, the strengths of the signals sent along the axons. So how does the human brain learn? It learns by creating and adjusting connections among these neurons. That's how we model a single neuron, in a nutshell.
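To make the weighted-sum-plus-activation model described above concrete, here is a minimal sketch of a single artificial neuron in Python with NumPy. The sigmoid activation and the specific input, weight, and bias values are illustrative assumptions, not something fixed by the lecture.

```python
import numpy as np

def neuron(x, w, b):
    """A single artificial neuron: weighted sum of the inputs, then an activation function."""
    z = np.dot(w, x) + b              # summation of weighted inputs plus a bias term
    return 1.0 / (1.0 + np.exp(-z))   # sigmoid activation squashes the result into (0, 1)

# Illustrative numbers: three inputs, three weights, one bias
x = np.array([0.5, -1.2, 3.0])
w = np.array([0.8, 0.1, -0.4])
print(neuron(x, w, b=0.2))            # the neuron's output y
```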
To build what we call an artificial neural network, or a multilayer neural network, we don't use just one neuron; we use several neurons, as you can see here. We arrange them so that we have a couple of neurons in an input layer, a couple of neurons in a hidden layer, and a couple of neurons in an output layer, and that's how we structure our artificial neural network. Again, the whole idea is that we want to build a kind of mini brain, and then we can teach that brain to do whatever we want: classify pedestrians, classify traffic signs, or whatever application we need. In this section we're going to learn how to build artificial neural networks. We're going to start from the basics: we'll build a mini brain with one neuron, then build it up a little to two neurons, then build it up a little more into a multilayer network of neurons, and we're going to use it to solve practical applications. That's the quick introduction to artificial neural networks.

Now let's take a look at how, in general, a human learns, because the idea is that we want the network to learn in the same fashion humans do. Let's assume we want to teach a human, for example a baby: when you see this image, when you see this car, we attach a label and call it a car. We're teaching a kid: whenever you see this image, it means a car. That is what we call a training data set: these are the inputs, and this is what we call the desired, or correct, output. In the beginning, babies don't know, so sometimes they mess up and generate what we'll call a deviated output; they look at this image and say, maybe that's a horse, for example. Through experience, or through learning, we subtract the deviated (wrong) output from the desired, or correct, output and calculate an error signal. That error signal then goes back and updates the strengths of the connections between these neurons, and that's how we actually learn as humans. Through experience, the deviated output you see here becomes the desired output, at which point the error turns out to be zero. That means that when a kid turns, say, four or five years old, he knows for sure that this image is an image of a car, and that's it. That's how we learn.

We're going to do the exact same thing when we train our artificial neural network, using what we call supervised training. We show the network a bunch of images: these are a bunch of images of a car, and whenever you see one of these, it's a car, a car, a car, and so on. We keep doing this, and as we go, instead of changing the strengths of the connections between biological neurons, we change the values of the weights. That's pretty much how artificial neural networks work, and it's really fascinating, because you're creating your own mini brain, so to speak, and you can teach it to do wonderful things, as we're going to see in this section.
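Here is a minimal sketch of that error-driven loop, using the classic perceptron learning rule: compute the current (possibly deviated) output, subtract it from the desired output to get an error signal, and nudge the weights accordingly. The learning rate, the number of epochs, and the toy AND data set are illustrative assumptions.

```python
import numpy as np

def train_perceptron(inputs, targets, lr=0.1, epochs=20):
    """Error-driven learning: adjust the weights whenever the output deviates from the desired label."""
    w = np.zeros(inputs.shape[1])
    b = 0.0
    for _ in range(epochs):
        for x, desired in zip(inputs, targets):
            predicted = 1 if np.dot(w, x) + b > 0 else 0   # current (possibly deviated) output
            error = desired - predicted                     # error signal = desired output - actual output
            w += lr * error * x                             # strengthen or weaken the connections
            b += lr * error
    return w, b

# Toy example: learn a logical AND from labelled examples
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]])
y = np.array([0, 0, 0, 1])
w, b = train_perceptron(X, y)
print(w, b)   # after training, the error on these four examples is zero
```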
Great. So what are the advantages of artificial neural networks? One of the key advantages is what we call generalization capability. If you look at a self-driving car like this one from the front, from the back, or from the side, you can tell right away, without even asking, that it is an autonomous vehicle. Why is that? Even though you might have seen only the front, and maybe the back, your brain generalizes: if you are shown an image of this car from the side, you can still recognize it. That's the power of our brain, and it's also the power of artificial neural networks: we can teach a network to generalize, and we can work to improve the generalization capability of these networks.

Another key advantage of artificial neural networks is that they are capable of modeling essentially any relationship between a set of inputs and outputs. One of the common statements in the research literature is that a neural network with a sufficient number of neurons can approximate any continuous function to an acceptable degree of accuracy. That means you can build such a network, and I'm going to show you how, in Keras and TensorFlow. It's very simple: with maybe ten lines of code or so, you can build your own mini brain, which is really easy and really fascinating. That, again, is the power of artificial neural networks: if you have training data, you can build a network in a couple of minutes, and that network can then learn whatever you have just taught it.

What about the limitations of artificial neural networks? The first limitation, or drawback, is what we call the black box structure. Again, when we train these networks, we have a bunch of inputs and a couple of outputs, and we train the network using training data: if you see this image of a car, for example, the label is a car, and we keep showing all of these images as we go. The problem is that as we train the network and change the values of the connections, or weights, between the neurons (in future sections we'll show how to build this network from scratch; this is just an introductory overview), what happens inside becomes a black box. We don't really know the physical significance of, say, the weight connecting the first neuron to the second neuron; it's very difficult to identify, and that's why this is one of the limitations, the black box structure. Compare this with physics-based modeling, where you know what you are modeling: say you're modeling a mass-spring-damper system, and you have a mass, a spring coefficient, and a damping coefficient; the parameters within the model are well defined and correspond to something physical in real-world applications. Here, it's just a black box: a bunch of weights that you train somehow, and that's how you train your network. So even though the network can operate properly, the internal structure of the network, which is again the distribution of the weights, often has no physical significance and is very difficult to interpret.
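As a hedged sketch of the "roughly ten lines of code" claim above, here is what a minimal Keras/TensorFlow network might look like. The layer sizes, the optimizer, and the randomly generated placeholder data are illustrative assumptions; the actual models are built later in the course.

```python
import numpy as np
from tensorflow import keras

# Placeholder training data: 100 samples with 4 features each, binary labels
X = np.random.rand(100, 4)
y = np.random.randint(0, 2, size=100)

model = keras.Sequential([
    keras.Input(shape=(4,)),                        # four input features
    keras.layers.Dense(16, activation="relu"),      # hidden layer
    keras.layers.Dense(1, activation="sigmoid"),    # output layer
])
model.compile(optimizer="adam", loss="binary_crossentropy", metrics=["accuracy"])
model.fit(X, y, epochs=5, verbose=0)                # supervised training on the labelled data
```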
Now let's take a look at a couple of applications, the first of which is, of course, self-driving cars. We're going to see how we can teach a network to classify different traffic signs: if it sees, for example, a 30 kilometers per hour sign, it can detect that this sign means 30 kilometers per hour; if it sees a yield sign, it can detect a yield sign, and so on. We can use neural networks for image classification: when you see this, it means a car; when you see this, it means a pedestrian, and so on. We can use them for fault detection and isolation. We can use them for face and speech recognition; Facebook's auto-tagging, for example, is simply an algorithm that looks at a face and determines which specific person it belongs to. We can use them for system identification, when you want to obtain a model of a system, and for control system applications as well. We can use them for fingerprint recognition and for handwriting and character recognition, along with a lot of business applications, such as stock market prediction and bank failure prognosis. There are a lot of applications; the opportunities are endless, and I firmly believe AI will be empowering a lot of our daily lives in the near future. Actually, we are seeing it happen today, with self-driving cars, face and speech recognition, and so on.

So what's the plan of attack? In this module we're going to start by building what we call a single neuron, or perceptron: just one simple neuron that represents how one neuron in our brain works, and we'll use it as a building block. It's a very simple piece of code, and we'll see how to build it first. Moving forward, instead of having only one neuron, we'll have two neuron models, as you can see here, and then we're going to build what we call a multilayer perceptron network, connecting, say, a hundred neurons together. After that, we'll build what we call a deep neural network, or a convolutional neural network, which is the type of network we'll actually be using: it contains a lot of hidden layers that can detect a lot of features, and we can use it to classify traffic signs (see the code sketch after this section's closing for a preview of what such a network might look like). That's all I have for this section. I hope you enjoyed it, and I'll see you in the next section.
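As a preview of that final step, here is a hedged sketch of a small convolutional network for traffic-sign classification in Keras. The 32x32 RGB input size, the 43-class output (as in the German Traffic Sign benchmark), and the layer sizes are illustrative assumptions, not the exact architecture built later in the course.

```python
from tensorflow import keras

# Assumed input: 32x32 RGB traffic-sign images, 43 sign classes (illustrative numbers)
model = keras.Sequential([
    keras.Input(shape=(32, 32, 3)),
    keras.layers.Conv2D(32, (3, 3), activation="relu"),   # learn low-level image features
    keras.layers.MaxPooling2D((2, 2)),
    keras.layers.Conv2D(64, (3, 3), activation="relu"),   # learn higher-level features
    keras.layers.MaxPooling2D((2, 2)),
    keras.layers.Flatten(),
    keras.layers.Dense(128, activation="relu"),            # fully connected hidden layer
    keras.layers.Dense(43, activation="softmax"),          # one output per traffic-sign class
])
model.compile(optimizer="adam", loss="sparse_categorical_crossentropy", metrics=["accuracy"])
model.summary()   # prints the layer structure and parameter counts
```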