# Principal Component Analysis (PCA) for Reduction and Whitening

A free video tutorial from Ahmed Fathy, MSc

MSc, Senior Deep Learning Engineer @ Affectiva & Instructor

6 courses

43,137 students

## Learn more from the full course

### College Level Advanced Linear Algebra! Theory & Programming!

Linear Algebra (matlab - python) & Matrix Calculus For Machine Learning, Robotics, Computer Graphics, Control, & more!

35:05:52 of on-demand video • Updated November 2020

Gain Deep Understanding Of Linear Algebra Theoretically, Conceptually & Practically.

Obtain A Very Robust Mathematical Foundation For Machine & Deep Learning, Computer Graphics, And Control Systems.

Learn How To Use Both Python And Matlab For Solving & Visualizing Linear Algebra Problems.

[Matrix Calculus] Learn How To Differentiate & Optimize Complex Equations Involving Matrices.

Learn A Lot About Data Science, Covariance Matrices, And PCA.

Learn About Linear Regression, The Normal Equation, And The Projection Matrix.

Learn About Singular Value Decompositions Formally & Conceptually.

Learn About Inverses And Pseudo Inverses.

Learn About Determinants And Positive Definite Matrices.

Learn How To Solve Systems Of Linear, Difference, & Differential Equations Both By Hand And Software.

Learn About Lagrange Multipliers & Taylor Expansion.

Learn About The Hessian Matrix And Its Importance In Multi-variable Calculus & Optimizations.

Learn About Complex Transformation Matrices Like The Matrix To Perform Rotation Around An Arbitrary Axis In 3D.

And Much More! This is a 34+ hour course!

Hello, it's Ahmed Fathy, and in this video I'd like to show you some Python programming and some visualizations for the whole section in this Python notebook. We have the following: dimensionality reduction and whitening. Let's see what I have done here. Here I am importing NumPy, Matplotlib, and the math libraries. Then I am defining the mean and covariance matrix of my original data. So the variance in the x direction is 40, the variance in the y direction is 20, and the covariance is 10. Here I am getting the eigendecomposition of this covariance matrix into two matrices: the eigenvalues matrix and the eigenvectors matrix. Then I am generating some random data from a multivariate normal distribution, which is a Gaussian distribution, using this mean and this covariance matrix, and I am generating ten thousand data points.

Here I am plotting the original data: I plot the eigenvectors of the covariance matrix, and I also plot the original data points. With this plot, I just want you to see how the original data points are oriented, and I want to show graphically that the eigenvectors are actually the directions of the maximum spread of the data. So here is the first plot right here. This is the data plotted, this is the first eigenvector, and this is the second eigenvector. And indeed, the first eigenvector is in the direction of the maximum spread of the data.

After that, we do the following: I want to perform the PCA rotation. So I multiply my original data points by E transpose; E transpose is the rotation matrix. I also multiply my axes by the same rotation matrix, because I want to rotate the data points and the principal axes at the same time. After that, I replot my axes and my data after rotation. So when I come here, I find that the plot after rotation looks like this. This shows that E transpose is indeed the rotation matrix that we should multiply by in order to remove the covariance in our data points.
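The steps described so far (define the covariance matrix, eigendecompose it, sample ten thousand Gaussian points, and rotate by E transpose) can be sketched in NumPy roughly as follows. The variable names and the random seed are my own assumptions, not the notebook's:

```python
import numpy as np

# Assumed setup, reconstructed from the values quoted in the video:
# variance 40 in x, variance 20 in y, covariance 10.
mean = np.zeros(2)
cov = np.array([[40.0, 10.0],
                [10.0, 20.0]])

# Eigendecomposition of the covariance matrix: cov = E @ D @ E.T
eigvals, E = np.linalg.eigh(cov)        # eigh: for symmetric matrices
order = np.argsort(eigvals)[::-1]       # sort so e1 has the largest variance
eigvals, E = eigvals[order], E[:, order]
D = np.diag(eigvals)

# Draw 10,000 points from the Gaussian with this mean and covariance
rng = np.random.default_rng(0)
X = rng.multivariate_normal(mean, cov, size=10_000)   # shape (10000, 2)

# PCA rotation: y = E.T @ x for each column vector x; with points
# stored as rows this is X @ E. The rotated data is decorrelated.
X_rot = X @ E
```

After the rotation, the off-diagonal entries of the sample covariance of `X_rot` are close to zero, which is the graphical effect the video points out.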
After that, I want to perform projection, or dimensionality reduction. So I come here and, after this rotation, I make a copy of my rotated data into this variable, and then I remove the second component from all the data points. I set the second component of all the data points to zero, which is equivalent to removing the second component of all the data points. Then I print the rotated data and the projected rotated data, and I see the following for the projected data: I have those data points right here, and after performing the projection, the second components turned out to be zero, just as desired.

Now, after that, I plot the data after this projection, the plot after projection in the rotated space. So here, instead of the rotated data, I plot the projected rotated data, and we see the following result. Here is the result after projection, and it is indeed projected onto the x axis, which originally was the e1 axis, the first eigenvector axis.

After that, I want to show the projected data points in the original space, in this space right here. I want to show the projection of those data points onto this first eigenvector. To do this, I go back using the matrix E: here I multiply E by the projected rotated data to get the reconstructed data in the original space. Then I plot the reconstructed data in the original space, and I see the following. Here is the result: the data projected onto the first principal axis, as desired. With this, we have performed all the steps of a dimensionality reduction.

Finally, let's perform whitening. We have seen before that the whitening matrix is equal to D to the power negative one half multiplied by E transpose. So I define this whitening matrix right here: I take the square root of the matrix D, then I get the inverse of that to obtain D to the power negative one half, and then I multiply this by E transpose.
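The projection and reconstruction steps just described might look like this in NumPy; the names are illustrative, not the notebook's:

```python
import numpy as np

# Same assumed setup as before: variances 40 and 20, covariance 10;
# eigenvectors sorted so e1 carries the largest variance.
cov = np.array([[40.0, 10.0],
                [10.0, 20.0]])
eigvals, E = np.linalg.eigh(cov)
order = np.argsort(eigvals)[::-1]
eigvals, E = eigvals[order], E[:, order]

rng = np.random.default_rng(0)
X = rng.multivariate_normal(np.zeros(2), cov, size=10_000)
X_rot = X @ E                     # rotate into the principal-axis frame

# Dimensionality reduction: copy the rotated data, then set the second
# component of every point to zero (projection onto the first axis).
X_proj_rot = X_rot.copy()
X_proj_rot[:, 1] = 0.0

# Reconstruction: multiply by E to map the projected points back into
# the original space; they land on the line spanned by e1.
X_rec = X_proj_rot @ E.T
```

Every row of `X_rec` is a scalar multiple of the first eigenvector, which is why the reconstructed plot shows the points lying on the first principal axis.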
After that, I multiply the whitening matrix W by the original data in order to get the whitened data. Then I plot the whitened data in the original space, and I see that this is the result. And indeed, this data is whitened, just as desired. See you in the next video.
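The whitening step can be sketched as follows, under the same assumed setup; after multiplying by W, the sample covariance of the data is approximately the identity matrix:

```python
import numpy as np

# Same assumed covariance as before; names are illustrative.
cov = np.array([[40.0, 10.0],
                [10.0, 20.0]])
eigvals, E = np.linalg.eigh(cov)

# Whitening matrix W = D^(-1/2) @ E.T: take the square root of the
# eigenvalue matrix D, invert it, then multiply by E transpose.
D_inv_sqrt = np.diag(1.0 / np.sqrt(eigvals))
W = D_inv_sqrt @ E.T

rng = np.random.default_rng(0)
X = rng.multivariate_normal(np.zeros(2), cov, size=10_000)

# Whiten the data: y = W @ x per point, i.e. X @ W.T for row-stacked X.
X_white = X @ W.T

# The sample covariance of the whitened data is close to the identity.
print(np.round(np.cov(X_white, rowvar=False), 1))
```

Rotation alone (E transpose) only decorrelates the components; the extra factor of D to the power negative one half also rescales each principal direction to unit variance, which is what makes the whitened cloud look isotropic.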