Introduction to Color Processing

A free video tutorial from Augmented Startups
M(Eng) AI Instructor 100k+ Subs on YouTube & 60k+ students
Instructor rating: 3.8 out of 5
30 courses
57,057 students

Lecture description

In this lecture we are going to discuss color processing, as well as give an introduction to image color spaces. You will learn the difference between the RGB color space and the HSV color space, and why both are important. We also briefly discuss the two color-processing apps that you will be building.

Learn more from the full course

Learn Computer Vision and Image Processing in LabVIEW

Learn Computer Vision and Image Processing From Scratch in LabVIEW and build 9 Vision-based Apps

02:43:28 of on-demand video • Updated November 2019

Develop 9 Vision Based Apps in LabVIEW
Understand the fundamentals of Image Processing
The difference between computer and machine vision as well as their applications
Theory behind each image processing algorithm
How to apply the image processing algorithms for real life purposes
Transcript (English, auto-generated)
Hi guys, and welcome to this lecture. This is the introduction to color processing. Although color sounds like a relatively straightforward concept, different representations of color are useful in different contexts. In the LabVIEW vision framework, the colors of individual pixels are processed with the functions in the Color library of the vision system. Let's take a look at what we're going to be dealing with.

We all know what RGB is; for those that don't, it is red, green and blue. This is our full image; this is what an image is composed of. But it can be divided into three components: the red component, the green component and the blue component. If you combine these three components, you get back the full image. As you may know, your TV is comprised of red, green and blue pixels; newer ones also come with a luminance pixel, or white pixel. Here is another example of RGB: we have the original image, and it can be separated into its R, G and B components.

Now let's look at another way to represent color, called HSV. One criticism of RGB is that it does not explicitly model luminance, yet luminance, or brightness, is one of the most common properties to manipulate. In theory, the luminance is a relationship of the R, G and B values; in practice, however, it is sometimes more convenient to separate the color values from the luminance. That is the solution HSV provides. HSV stands for Hue, Saturation and Value: hue identifies the type of color, saturation describes how pure the color is, and value is the measure of luminance or brightness. The HSV color space is essentially just a transformation of the RGB color space, because every color in the RGB color space has a corresponding unique color in the HSV space and vice versa. Let's take a look at an HSV example.
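The RGB-to-HSV transformation described above can be sketched in a few lines. The course itself uses LabVIEW's graphical VIs, so purely as an illustration of the concept, here is the same idea in Python using the standard-library `colorsys` module:

```python
import colorsys

# Pure red in RGB (channels normalised to the 0..1 range).
r, g, b = 1.0, 0.0, 0.0

# Convert to HSV: hue encodes the colour type, saturation its purity,
# and value its brightness -- brightness is now a single channel,
# separated from the colour information.
h, s, v = colorsys.rgb_to_hsv(r, g, b)
print(h, s, v)  # → 0.0 1.0 1.0  (hue 0 = red, fully saturated, full brightness)

# The transformation is invertible: every RGB colour has a unique HSV
# counterpart and vice versa.
assert colorsys.hsv_to_rgb(h, s, v) == (r, g, b)
```

In LabVIEW the equivalent step is done graphically with the color-handling VIs rather than in code, but the mapping between the two spaces is the same.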
So we take the original image and separate it into hue, which is the different colors; saturation, which is the colorfulness, or how saturated it is; and value, which is the brightness or intensity. Here is another example: hue gives you the different colors, saturation how saturated they are, and value the intensity.

OK, let's look at RGB in relation to HSV. The HSV color space is often used because it corresponds better to how people experience color than the RGB color space does: hue is a representation of how a color looks, saturation is how saturated it is, and value is how dark or light it is. In LabVIEW, we use color space, or color plane, extraction, which is used to extract either the RGB or HSV information from an image, and which we can access using the Color palette. From there we use Color Plane Extraction, and we can choose either the red, green or blue plane, or the hue, saturation or luminance plane.

The HSV color space is particularly useful when dealing with an object that has a lot of specular highlights, or reflections. In the HSV color space, specular reflections will have a higher luminance (value) component and a lower saturation component. The hue component may get noisy depending on how bright the reflection is, but an object of solid color will have largely the same hue even under variable lighting.

Let's take a look at color segmentation. Segmentation is the process of dividing an image into areas of related content. It is based on subtracting away the pixels that are far from a target color while preserving the pixels of similar color. In LabVIEW there is a function that computes the distance between every pixel in an image and a given color. This function takes as an argument the RGB value of a color and returns another image representing each pixel's distance from the specified color.
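The color-distance idea at the end of that passage can be sketched directly. The course uses LabVIEW's built-in vision function for this, so the following is only an illustrative Python sketch of the concept: for every pixel, compute the Euclidean distance in RGB space to a target color, producing a greyscale "distance image" that is small where the pixel is close to the target color and large where it is far away.

```python
def color_distance_image(image, target):
    """image: list of rows of (r, g, b) tuples; target: an (r, g, b) tuple.

    Returns a same-sized grid of Euclidean distances in RGB space.
    """
    tr, tg, tb = target
    return [
        [((r - tr) ** 2 + (g - tg) ** 2 + (b - tb) ** 2) ** 0.5
         for (r, g, b) in row]
        for row in image
    ]

# A tiny 1x2 "image": one pure-yellow pixel, one pure-blue pixel.
img = [[(255, 255, 0), (0, 0, 255)]]
dist = color_distance_image(img, target=(255, 255, 0))
print(dist)  # the yellow pixel has distance 0.0; blue is far away (~441.7)
```

A real implementation would operate on array data for speed, but the per-pixel logic is exactly this.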
Looking at a segmentation example: we take the image of this scene and segment only the yellow part of it. We segment by thresholding, so each pixel gets either a 1 or a 0, with the yellow region being 1. We then create a mask from this, overlay the yellow over it, and we get only the segmented image. This is particularly useful in object tracking, or color tracking, or you can use it for taking measurements. Doing that on a color image is possible, but it is much easier to do with a binary image.

OK, so in the next lectures we're going to be building some vision apps. Vision app number one is to count M&Ms. It's basically a simple app, but we will be able to count how many green M&Ms there are. It's a bit tricky, because in some cases one color in our image may look similar to another, such as red; but you should remember that in image processing you may not get a 100 percent detection rate, and that is why computer vision is an ongoing field of study.

Let's look at vision app number two: we're going to be doing color tracking. If you have an orange ball, a blue ball, or a ball of any color that is different from your background, you'll be able to track that ball through your space as you move it in and out of the camera's view. I'm looking forward to doing that with you. These are your first vision-based apps, and this is your first step into image processing and into creating cool apps that you can use for future applications. OK, so I'll see you in the next lecture.
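The threshold-to-binary-mask step described above can also be sketched. Again, the course does this with LabVIEW VIs; this is a hedged, minimal Python sketch of the concept: each pixel's distance to the target color is compared against a threshold, producing a binary (0/1) mask that marks the "yellow" region.

```python
def segment(image, target, threshold):
    """Binary segmentation by colour distance.

    image: list of rows of (r, g, b) tuples; target: an (r, g, b) tuple.
    Returns a mask of 1s (pixel within `threshold` of the target colour)
    and 0s (pixel far from it).
    """
    tr, tg, tb = target
    return [
        [1 if ((r - tr) ** 2 + (g - tg) ** 2 + (b - tb) ** 2) ** 0.5 <= threshold
         else 0
         for (r, g, b) in row]
        for row in image
    ]

# A 1x3 "image": near-yellow, pure blue, pure yellow.
img = [[(250, 250, 10), (0, 0, 255), (255, 255, 0)]]
mask = segment(img, target=(255, 255, 0), threshold=50)
print(mask)  # → [[1, 0, 1]]  only the (near-)yellow pixels survive
```

Once you have a binary mask like this, counting connected blobs (the M&M counter) or tracking the centroid of the 1-region (the ball tracker) becomes straightforward, which is exactly why the binary image makes those apps easy.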