Timing Our Code

Colt Steele
A free video tutorial from Colt Steele
Developer and Bootcamp Instructor
4.7 instructor rating • 11 courses • 1,042,523 students

Learn more from the full course

JavaScript Algorithms and Data Structures Masterclass

The Missing Computer Science and Coding Interview Bootcamp

21:44:09 of on-demand video • Updated November 2018

  • Learn everything you need to ace difficult coding interviews
  • Master dozens of popular algorithms, including 6 sorting algorithms!
  • Implement 10+ data structures from scratch
  • Improve your problem solving skills and become a stronger developer
All right, so let's take a look at a more concrete example where we compare two solutions to the same problem. Here's our problem: suppose you want to write a function that calculates the sum of all numbers from one up to and including some number n. So if we plug three in, we should get one plus two plus three, which is six. The most common, or easiest-to-come-up-with, solution (the one that comes to my mind first) is to create a total variable, an accumulator, and loop through all those numbers, adding them in one at a time, starting at one and going all the way up to n. So that's what I've done here: we have a for loop, the total variable starts at zero, we start the loop at one and go up until n, each time through total plus equals i, and at the end we return total. And I have it open as a snippet here. If you're not familiar with snippets: rather than copying and pasting code into the Chrome console and dealing with losing it and rerunning it, you can go to the Sources tab, click on the Snippets tab, and create new snippets. I have a couple of them here, and I'm going to use them quite frequently in this course as an easy way to save code and execute it. Just to show you how it works, here's the same add up to function, and at the bottom I'm printing the result of adding up to six, so that should be six plus five plus four plus three plus two plus one. To execute it, I can either click over here, or use the shortcut it suggests, which on a Mac is Command-Enter; it will show the correct shortcut on a PC, I think it's Control-Enter. So if I do that now, I get 21. If I do it with three: there we go, I get six. If I do it with 100, I get 5050. OK, so that just demonstrates that it works. Next, there's a second solution.
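The loop-based solution described above can be sketched like this (the function name `addUpTo` is a label for illustration; the snippet in the video may differ slightly):

```javascript
// Sum all numbers from 1 up to and including n, one at a time.
function addUpTo(n) {
  let total = 0;
  for (let i = 1; i <= n; i++) {
    total += i;
  }
  return total;
}

console.log(addUpTo(3));   // 6
console.log(addUpTo(6));   // 21
console.log(addUpTo(100)); // 5050
```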
There's more than these two solutions, but these are the two I'm going to use because they illustrate my point. This one is definitely less intuitive. As you can see, it's a lot shorter, a single line, which doesn't necessarily make it better, but it's very different: there's no loop involved. It's more of a mathematical formula: we take n and multiply it by n plus one, then divide by two. Where this formula is derived from, I'm not going to go into; I do have a slide that talks about how we arrive at it. If you want to check it out, you can, but it's not the focus of this video, and I don't want to distract from the actual focus, which is evaluating and comparing these pieces of code, not diving into how we write them. Just to show you that it does work, I have another snippet over here. I can run the same thing; let's do it with six again, and I'll do a Command-Enter to execute, and I get 21. To walk through what happens: it's the equivalent of taking six plus one first, which gives us seven, then multiplying that by six, which gives us 42, then dividing by two, which gives us 21. So it does work; it gives us the same answer in the end. Again, how it works we'll leave be.

All right, so we've established there are two working solutions. Which one is better? Of course, the first thing we should ask is what "better" actually means. Does it mean the faster code in seconds or milliseconds? Does it mean the code that's fastest when we're adding a small number versus a large number? Say we benchmark adding up all the numbers between zero and one billion: is that a good test? Or is it about how much memory is used, the number of variables that are created, the data that is stored each time the function is called? Or what about something like readability or legibility? How important is that?
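The formula-based version walked through above can be sketched as follows (again, `addUpTo` is just an illustrative name):

```javascript
// Same result via the closed-form formula n * (n + 1) / 2 -- no loop.
function addUpTo(n) {
  return n * (n + 1) / 2;
}

console.log(addUpTo(6));   // (6 + 1) = 7, times 6 = 42, divided by 2 = 21
console.log(addUpTo(100)); // 5050
```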
Or what about brevity? That's not on here, but a lot of people care about it; they like to minimize the number of characters in their code. Not my style personally, but definitely valid. All of these are valid concerns, and it really comes down to the situation. But I think most people would agree that the first two, speed and memory usage, can often be more important than something like readability. For now we're going to focus on speed; we'll come back to memory usage in a bit. Unfortunately, speed and memory efficiency often come at the expense of readability, and that's the balancing act of writing good code: writing efficient code that doesn't use a ton of memory but is still readable and doesn't look like complete gibberish. We'll talk about how all of these play together throughout the entire course; it will be a recurring theme. But we're going to start by evaluating speed: how long code takes to execute.

So how do we do that? Well, the simplest way would be to use built-in timing functions, something like this, where we have our first add up to, and then I use a method called performance.now(), which in the browser just tells me the number of milliseconds since the document was created, basically since I opened the window. I save that to a variable before I call add up to, then I call add up to with, I think it was a billion, a large number, and after that finishes, I check performance.now() again, which should have incremented by a bunch of milliseconds. So I have two numbers; I subtract the first from the second, and I divide by 1000 because it's in milliseconds and I want to work in seconds. We don't have to do that last part. Then I print it out.
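The timing setup just described can be sketched like this. In the browser, `performance` is a global; if you run this in Node.js instead, `performance` is also available globally in recent versions (this sketch assumes one of those environments):

```javascript
// Time the loop-based version with performance.now(), which returns
// a high-resolution timestamp in milliseconds.
function addUpTo(n) {
  let total = 0;
  for (let i = 1; i <= n; i++) total += i;
  return total;
}

const t1 = performance.now();       // timestamp before the call
addUpTo(1000000000);                // one billion
const t2 = performance.now();       // timestamp after the call
console.log(`Time elapsed: ${(t2 - t1) / 1000} seconds.`);
```

Dividing by 1000 converts milliseconds to seconds, which, as noted above, is optional.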
So this should work, and I actually have a snippet right here, the same thing: add up to, our same one from before, we're calling it with one billion, and we're going to print out how long elapsed. Let me clear my console first and execute it. There we go, 1.2575 seconds or so. Let me do it one more time. And what you actually see (I don't think it executed; there we go) is that it varies. It changes even on my own computer. Nothing has really changed: I haven't added code, I haven't changed the numbers; it's the exact same code, but we're getting a different output.

Before I forget, let's go over to the second solution. The exact same number; let me just make sure I didn't leave off or add a zero. Yes, they are the same, and I'm doing the same thing: taking performance.now() at the beginning and performance.now() at the end. I should show what that looks like. You can see it's kind of a large number at this point, because I've had this page open for a number of minutes. If I refresh the page, which I just did, now it's at 2000 milliseconds, which is two seconds. And if I check it again, we're at 6.6 seconds, and if we subtract the two, it's roughly four seconds in between. Anyway, we're doing the same thing here.

OK, so now if I run this, you can see we get a way smaller number, and it actually doesn't look like it's changing here; the differences are too small to really capture. But my point is that this one is significantly shorter in duration with the same data: 1.24 seconds compared to basically zero seconds. So that seems like a good sign for this solution; it seems much more efficient, and that's great. But this process of manually timing things before and after and comparing one function to the other is not the most reliable, and it's not that easy for us to talk about.
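The side-by-side comparison being run here can be sketched as one snippet (the function and helper names are mine, not from the video):

```javascript
// Two implementations of the same sum, timed on the same input.
function addUpToLoop(n) {
  let total = 0;
  for (let i = 1; i <= n; i++) total += i;
  return total;
}

function addUpToFormula(n) {
  return n * (n + 1) / 2;
}

// Helper: return how many seconds fn(n) takes to run.
function timeIt(fn, n) {
  const start = performance.now();
  fn(n);
  return (performance.now() - start) / 1000;
}

const n = 1000000000; // one billion
console.log(`Loop:    ${timeIt(addUpToLoop, n)} seconds`);
console.log(`Formula: ${timeIt(addUpToFormula, n)} seconds`);
```

On a typical machine the loop version takes on the order of a second while the formula version is near zero, but as the lecture goes on to stress, the exact numbers vary from run to run and machine to machine.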
How would I actually write that down? How would I give a label to how efficient this one is compared to that one? Is it based on a percentage of speed? Do I subtract the number of seconds or milliseconds? It gets a little blurry. This brings us to the problem with time, which I'm reserving for my end-of-life memoir title: The Problem with Time. Sounds very deep. So the first issue is that different machines will record different times, so it's not reliable. Depending on the specifications of a machine, what's currently happening on that machine, and what code is running, the results you get will be different. That doesn't mean we'll get the opposite results, where the first solution is suddenly faster than the second one, but it means the margins could change; the actual measurements are almost guaranteed to be different. And second, as we saw, the same machine will record different times. In my browser, I just ran the exact same code, that first example, and it differed each time by a little bit, which isn't really a problem, but it still shows that it's not precise and that we can't rely on it. And second of all, third of all I guess, is that for fast algorithms, things happening on a really, really fast scale, speed measurements might not be precise enough. Say we have two or three or four algorithms and they're all super fast. There still is one that is going to be fastest or most efficient, but if our timing functions can't figure it out, because the smallest interval of time that can be measured isn't fine enough, then it doesn't really help us. So how do we walk through our code and actually talk in general terms about which code is better, without having to go and time it? I want to be clear: I'm not saying that timing your code is a bad idea.
I do it all the time. What I'm saying is that it would be cool if there were another way that didn't involve setting up a new file and going through the process of timing our code. What if our code takes an hour, something massive, and I'm comparing it to another version that takes four hours? I don't want to have to run a test to figure out which one is faster. We want to be able to assign a value, to talk in general terms about how code compares to other code, without having to go through all of this. And that's where Big O notation comes in. That's coming up next. Cliffhanger, sorry.