# Algorithm complexity

A free video tutorial from Andrei Margeloiu

Google HashCode World finalist, 3x Gold medalist

1 course

134,781 students

## Lecture description

The bigger the complexity, the slower the algorithm. You will be able to classify algorithms based on Big O complexity notation!

I am sure this lecture will give you the 'aha, I got it now' sensation.

## Learn more from the full course

### Introduction to Algorithms and Data structures in C++

A step-by-step guide with solved problems. I'm teaching visually with lots of examples.

03:13:19 of on-demand video • Updated August 2018

- Deeply understand basic Algorithms & Data Structures concepts
- Apply Algorithms & Data Structures to new problems
- Analyse algorithms' efficiency
- Find efficient algorithms
- Solve algorithmic problems!
- Learn more advanced topics

Hi! In this lesson we are going to find out how to analyze the time complexity of an algorithm.

First, the running time of your program or algorithm depends on multiple factors: whether you have a single core or multiple cores, the reading and writing speed of your computer's internal memory, and the software architecture you are using, whether your software runs on 32 bits or 64 bits. And finally, it depends on the input size. When we analyze the time complexity of an algorithm, we don't bother about the processor, the memory, or the software; we just care about how big the input is, how many things we have to process in order to give the correct output.

Let's define a model machine: a computer which takes one unit of time for each arithmetical and logical operation, and also consumes one unit of time for each assignment and return statement.

Let's take a simple example. Here is a function that returns the sum of a plus b. How many units of time do you think it takes? The plus operation takes one unit of time, and the return statement also takes one unit of time, so the total cost of this function is two units of time. And because this cost doesn't depend on the input size, no matter how big or small the variables inside it are, we call this constant time.

Let's see another example: a function that calculates the total sum of the elements of an array. Let's analyze this function row by row. The first row is an assignment, so it has a cost of one unit of time and it happens just once. In the second row we see a loop, and in this loop the variable i goes from 1 to n. The total cost of this line is two, because one unit of time is for the comparison and one unit of time is for the incrementation. How many times do you think this loop runs? It runs n times, once for each value of i between 1 and n.
In the next row there is an assignment and a plus operation, so the total cost of this row is two. And it runs n times, because for each value of the variable i the total is increased by the value of a[i]. The last row is the return statement, so it costs one unit of time and it happens just once. The total cost of this function is the cost of each row multiplied by the number of times it runs: 1×1 + 2×n + 2×n + 1×1. So this function takes 4n + 2 units of time.

Let's take another example. Here we want to calculate the total sum of a matrix. We'll again calculate the cost of each row individually and then add them together. The first assignment has cost one and happens once. The first loop has a cost of two, because of the comparison and the incrementation, and it happens n times. The second loop, which is a nested loop, also has cost two, and it happens n² times, because for each i, j takes every value from 1 to n, increased one at a time, so a total of n² times. The assignment operation on the total has cost two and it also happens n² times, because it is inside the nested loop. The return statement has cost one. So this function consumes 4n² + 2n + 2 units of time on our model machine.

When we want to analyze the complexity of an algorithm, we tend to analyze it for the worst-case scenario and for very large input sizes. For example, if an algorithm consumes n³ + 5n² + 9n + 7 units of time, then when n tends to infinity, or the input size is very big, this function tends to be equal to n³, because 5n² + 9n + 7 becomes insignificant in comparison with n³.

Let's take another example. The first algorithm consumes 5n² + 3n + 7 units of time.
And when n tends to infinity, 3n + 7 becomes insignificant in comparison with 5n². It's the same in the case of the second algorithm, and also in the third algorithm's case: 998 becomes insignificant in comparison with 2ⁿ when n is very big.

Let's see another example, and this time we are going to use the Big O notation. How do we use it? Well, we have two very easy rules. First, we drop the lower-order terms. Second, we drop any constant multipliers. For example, this algorithm takes 5n + 7 operations. First we apply the first rule: we drop the lower-order term 7, because n has a bigger order, and we remain with 5n. Then we apply the second rule: we drop the 5, because it is a constant multiplier. So the Big O notation of the complexity of this algorithm is O(n). In the second case, we again have to drop the lower-order terms, which are 3n³, n², and this big constant number; they become small when n tends to infinity. So we remain with O(n⁴).

Let's define another easy rule: the running time of an algorithm is the sum of the running times of all the code fragments inside that algorithm. Let's take an example. On the left side we define simple statements, and because they are simple and they don't depend on the input size, they take one unit of time, so the complexity is O(1). In the middle we have a simple loop with simple statements inside; it takes n operations, so its complexity is O(n). On the right side there is a nested loop which takes n² operations, so it has complexity O(n²).

Here is a function in which I have added all the previous fragments of code. It's a single function, so we have to add all the different fragments' complexities to make the final Big O complexity notation. The first part has O(1) complexity. The second part is a simple loop.
So we have O(n), and the last fragment has O(n²) complexity. This function takes O(1) + O(n) + O(n²) units of time, but because we have to drop the lower-order complexities, we drop the O(1) and the O(n). So the complexity of this function is O(n²).

Until the next lesson, take care!