Face Detection with OpenCV
A free video tutorial from Jose Portilla
Head of Data Science, Pierian Data Inc.
4.6 instructor rating • 32 courses • 2,258,695 students
Learn more from the full course: Python for Computer Vision with OpenCV and Deep Learning
Learn the latest techniques in computer vision with Python, OpenCV, and Deep Learning!
14:03:33 of on-demand video • Updated March 2021
- Understand basics of NumPy
- Manipulate and open Images with NumPy
- Use OpenCV to work with image files
- Use Python and OpenCV to draw shapes on images and videos
- Perform image manipulation with OpenCV, including smoothing, blurring, thresholding, and morphological operations
- Create Color Histograms with OpenCV
- Open and Stream video with Python and OpenCV
- Detect Objects, including corner, edge, and grid detection techniques with OpenCV and Python
- Create Face Detection Software
- Segment Images with the Watershed Algorithm
- Track Objects in Video
- Use Python and Deep Learning to build image classifiers
- Work with TensorFlow, Keras, and Python to train on your own custom images
Welcome back. In this lecture we're actually going to implement the theory and ideas we discussed in the previous lecture concerning face detection, and I'll show you how to implement them with OpenCV and Python. Let's get started. Here I am in the Jupyter notebook. Go ahead and do the imports of cv2 and matplotlib if you haven't done so already. What we're going to do is load a couple of images that I've downloaded for you into the DATA folder. Two of the images are portraits of Nobel Peace Prize winners, Denis and Nadia, the most recent laureates as of this filming. Keep in mind that the Denis image is a more artistic portrait. It is a photograph of him, but you can see he has a kind of glow around his face, and the corners look darker than they naturally would, because a photographer has edited this image. It's a really great photo of him, but that sort of editing will actually cause some issues later on when we try to detect his eyes. We'll have no problem detecting his face (it's a great photo for that), but his eyes will give us a little bit of trouble because of how dark the whites of his eyes are due to the editing in this photograph. Nadia's photo is just a natural photo of her, so it will work out nicely. And then we'll also detect faces in a group photo. We actually won't be using that specific image; instead we'll be using an image from the famous Solvay Conference of 1927. This is a really famous conference, and we'll be using this photo right here and detecting all the faces in it. You'll notice some of the attendees aren't exactly looking directly at the camera, but we'll still be able to detect them. So let's get started. We're going to load all those images, and we're going to load them in grayscale. I'll say nadia is equal to cv2.imread, then the DATA folder, forward slash.
And if you just start typing the name with a capital letter you should see it appear, and then pass in zero there for grayscale. Then Denis: we'll also load him, cv2.imread on the DATA folder, and again it's grayscale. And then finally the Solvay Conference photo: cv2.imread, and inside the DATA folder there should be solvay_conference.jpg, so load that. Let's take a quick look at each of these images. So this is Nadia; let's make that cmap grayscale. There we go. Then here's the Denis image, and here is the Solvay image. All right. What we need to do first is actually create the classifier and pass in the XML classifier file. Luckily, OpenCV comes with these pre-trained cascade files, and we've already placed the XML files for you in your DATA folder. So in that same DATA folder, if you take a look by expanding it, we have these two cascade folders, and under haarcascades we actually have quite a few cascade files that you can play around with later on; we'll be using the license-plate one for your project. So the first step, once you have that pre-trained XML file, is to call cv2.CascadeClassifier, and we're going to point it at the haarcascades folder inside DATA. The one we're going to be using is the Haar cascade for frontal faces, so in there, once you start typing "frontal", the one we're looking for is haarcascade_frontalface_default.xml. We just need to pass that in and then assign it to an object, so we'll say face_cascade is equal to this cascade classifier. This is essentially a series of around 6,000 features that get passed over the image to see whether a region fits all of them, which would be an indication that there should be a face there. Let's create a quick function to actually visualize the way this face cascade works.
And then we can actually draw a rectangle. I'll define detect_face, pass in the image, and then we'll say face_img is equal to a copy of the image. What happens is that face_cascade has a detectMultiScale method on it; you just pass in the image you want to detect a face in, and it returns an object you can then use for drawing rectangles. So we'll say face_rects, face rectangles, whatever you want to call it, and each entry is essentially the x and y position plus the width and height of a rectangle. So we say for x, y, w, h (the x and y of the top-left corner plus the width and height) in face_rects, go ahead and create the rectangle: cv2.rectangle, pass in face_img, then the top-left point (x, y), and the bottom-right point, which is going to be (x + w, y + h). Since we're dealing with grayscale, we're just going to make the rectangle white, so 255 for all the color channels; or you could just pass in 255, since there aren't really color channels here. Give it a thickness of 10, and then we return the updated face_img. So we have detect_face; let's go ahead and check out the result. We'll run detect_face on Denis, then plt.imshow that result with a gray colormap, and here we were able to successfully detect his face. Let's run this again with Nadia: run that, and we detect her face. Now let's try running this on the Solvay Conference photo. The Solvay Conference, remember, has multiple faces, in fact a lot of them, and some of them aren't even looking directly at the camera. So we'll run detect_face on the Solvay image and view that result. You'll notice right off the bat that some of the things it's detecting aren't faces.
And actually one of them may be a gargoyle or something on that building, so that one may be wrong, and we can see it's also detecting double faces. So what we're going to do is add a few parameters, specifically scaleFactor and minNeighbors. Let me copy and paste the detect_face function, call it adjusted_detect_face, and add a few parameters to detectMultiScale. If you Shift+Tab here, you can see we have this scaleFactor and this minNeighbors that we can adjust. scaleFactor is a parameter specifying how much the image size is reduced at each image scale, and minNeighbors is a parameter specifying how many neighbors each candidate rectangle should have in order to retain it. What happens is that as the classifier runs, you end up having multiple rectangles detecting a face, and if multiple rectangles are near the same area, meeting that minimum number of neighbors, then we decide that's where the face is. So you can go ahead and play with these parameters; I'm just using a couple of defaults that worked well for me, but a lot of the time it's just experimentation. We're going to say the scaleFactor here is equal to 1.2 and the minNeighbors is equal to 5, and then we're going to run the adjusted function. So we'll say result is equal to adjusted_detect_face on the Solvay image, and we'll view that result with cmap equal to gray. When you run this, you see you now get a much better and cleaner detection of the actual faces. The only caveat is that the person who was looking sideways no longer has his face detected. So it's a trade-off between detecting too many faces.
Like this little thing right here, or detecting a face twice on this man, versus missing out on some faces that aren't looking directly at the camera. You can begin to adjust the scaleFactor and minNeighbors to see if you can find a better balance between the two. Just keep in mind that on an image with lots of faces and people looking in different directions, an XML file that's specifically made for frontal faces may not work very well. Now let's check out the eye cascade file. I'll say eye_cascade is equal to cv2.CascadeClassifier again, and inside the DATA folder, under haarcascades, we're going to look for haarcascade_eye. Then, in a really similar fashion, I'm just going to copy and paste the original detect_face function, except I'll call it detect_eyes. Everything else can stay the same, except that these are eye rectangles now instead of face rectangles, and instead of calling face_cascade we call eye_cascade. What's going to happen is that this will return two rectangles if it finds a set of eyes. We'll run that and check the results; we'll first do it on Nadia and show the result with cmap equal to gray. Run that, and we were able to detect the eyes, but notice it actually detected her nostril as well, thinking it was an eye. So you may have to adjust detectMultiScale here too, the same way we adjusted it before with scaleFactor and minNeighbors. What you could do is just try copying and pasting those same values and see if that helps. So copy and paste the scaleFactor and minNeighbors, run that, and you'll notice it actually cleans up the nostril it found.
It could have thought there was another set of eyes somewhere in the image, maybe that there were two people, but these cascades actually aren't sophisticated enough to return back only pairs of eyes, because maybe someone's winking or something. OK, so we were able to fix that issue. But now let's take a look at what happens when we search Denis's face for eyes. We'll say result is equal to detect_eyes on Denis, and then plt.imshow the result with cmap equal to gray. And what happens, if we spell "result" right, is that you don't get any rectangles around his eyes. That's because, if we look at the original photograph here where I got this from, you'll notice that the whites of his eyes are actually really dark. In fact, they're almost the same color as some parts of his skin, and some parts of his skin, because of the editing the photographer did on this photo, are actually lighter than the whites of his eyes. Compare that with Nadia, where the whites of her eyes are really quite distinct from the rest of her face; that's one of the main features this cascade is looking for. So we would have to find an unedited photo of Denis in order to actually detect his eyes correctly; this is just a particular photo that it won't work with, and it's really because of the editing done on it. Some contrast was added here to make it look a little darker and tint his eye color, and you can also see his pupils are quite large, quite dilated, in this photo. So there's a lot going on here that caused us not to detect his eyes, even though as humans we can clearly see where his eyes are. That's one of the tricky parts of working with these sorts of cascade files. Now you're probably wondering how you can actually do this with video. Well, with video it's pretty straightforward: all we need to do is start capturing video with cv2.VideoCapture.
And I'm going to capture this straight from a camera, so we'll say cv2.VideoCapture(0). Then we're going to say while True, ret, frame is equal to cap.read(), and then frame is equal to detect_face run on the frame. So essentially, anything you can do on a single image you can do on a single frame. Then we'll say cv2.imshow, with the window title "Video Face Detect", and show that frame, and we'll add a wait key here: k is equal to cv2.waitKey(1). And then if k is equal to 27, go ahead and break. So if I hit Escape, it stops recording. Then, as always, you want to release the camera and say cv2.destroyAllWindows(). So let's run this, and let me bring up the actual stream. There I am, and now if I look more directly into the camera, you can see that it's able to find me. If I pick up the camera here and put my hand above it, you can see it's now no longer working that well; in fact, it thinks my hand is a face. Maybe I can trick it... now, that didn't work, but if I clearly look at the camera, frontal face, it should have no problem detecting my face even if I move it. That's really what it's designed to do: quick face detection for someone looking more or less within about 45 degrees of frontal. Now, if I turn too far to the side, you'll notice that it no longer finds me. OK, so that's it for object detection with cascade files. What we're going to do next is expand on this and have you perform a project that uses a prebuilt cascade file in order to blur license plates. Okay, thanks, and we'll see you at the next lecture.