# Constant Time - O(1)

A free video tutorial from Karoly Nyisztor • Professional Software Architect
Senior Software Engineer, Author, Inventor
4.4 instructor rating • 4 courses • 18,915 students

## Lecture description

In this lecture, we are going to talk about the Constant Time complexity. We'll also implement Swift functions that execute in constant time.

Introduction to Algorithms and Data Structures in Swift 5

Lay the foundations of algorithms and data structures in Swift 5. Level up your Swift 5 coding skills.

02:05:40 of on-demand video • Updated April 2021

• Learn how to write elegant and efficient code from a leading expert
• Cut down on your development time
• Get the companion eBook for FREE! (sells for \$28.80 on Amazon)
• Gain a working knowledge of algorithmic thinking
• Learn how to improve the performance of existing code
• Get ready for technical job interviews
## Transcript

Constant time describes an algorithm that requires the same amount of time to execute regardless of the input size. Here's what we get if we visualize the running time of an algorithm that executes in constant time: the input size doesn't affect the execution time. In other words, the execution time remains constant. An algorithm with constant time complexity runs for the same amount of time whether it operates on one entry or several thousand or a million.

In the following demo, we're going to implement some examples that execute in constant time. Note that we're going to rely on Swift playgrounds throughout this course. We'll implement a utility class for measuring performance, and we'll rely on this utility class in the current demo and in several upcoming projects. To illustrate constant time complexity, we'll create a function that performs a check on an array. The second example relies on hash map lookup: we're going to measure the time required to retrieve values by key from dictionaries of various sizes. If implemented correctly, a hash map lookup should happen in constant time.

All right. Now, let's switch to Xcode. If you want to follow along with me, download the repository from GitHub. Open the "Big O" playground from the "big-o-src" folder. You can find the source code for this demo on the "Constant Time" playground page.

First, we implement the benchmarking utility. The "BenchTimer" class has a single type method called "measureBlock()", which takes a closure argument. The measured code gets executed ten times, and we store the individual run times in an array. We rely on the QuartzCore framework's CACurrentMediaTime() function to retrieve the current absolute time. Unlike NSDate or CFAbsoluteTimeGetCurrent(), CACurrentMediaTime() is reliable because it's not subject to changes in the external time reference.
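A minimal sketch of what such a utility might look like. The transcript names only "BenchTimer" and "measureBlock()", so everything else here (the run count constant, the `#if canImport` fallback for non-Apple platforms) is an assumption, not the course's exact code:

```swift
#if canImport(QuartzCore)
import QuartzCore // provides CACurrentMediaTime()
#else
import Dispatch
// Fallback for non-Apple platforms, where QuartzCore is unavailable:
// monotonic uptime in seconds, comparable to CACurrentMediaTime().
func CACurrentMediaTime() -> Double {
    Double(DispatchTime.now().uptimeNanoseconds) / 1_000_000_000
}
#endif

final class BenchTimer {
    /// Runs `block` ten times and returns the average execution time in seconds.
    static func measureBlock(block: () -> Void) -> Double {
        let runCount = 10
        var executionTimes = [Double](repeating: 0, count: runCount)

        for i in 0..<runCount {
            let startTime = CACurrentMediaTime()
            block()
            let endTime = CACurrentMediaTime()
            executionTimes[i] = endTime - startTime
        }
        // reduce(0, +) sums all run times; dividing by the run count
        // yields the average execution time.
        return executionTimes.reduce(0, +) / Double(runCount)
    }
}
```

A timer like this averages out one-off scheduling hiccups, which is why the course measures ten runs rather than a single one.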
The measured block gets executed between creating the "startTime" and "endTime," and we store the execution time in the "executionTimes" array. After ten iterations, we calculate the average execution time. The "reduce()" array function calls the closure sequentially on all the array elements, which in our case sums up all the items in the array. Finally, we divide the result by the number of iterations, which gives us the average execution time.

Next, we implement a function that takes an array and checks whether the first element is 0. This simple function executes in constant time. That is, the execution time should be the same regardless of the size of the input array. Now, let's prove our theory. We'll invoke the "startsWithZero()" function with three arrays. The first array has only three elements. The second one has 10,000 entries. And the third array is huge, with one million items. Our benchmark shows that the size of the input array doesn't affect the execution time. There are only negligible differences, on the order of microseconds.

Another algorithm that runs in constant time is the hash map lookup. We rely on the "generateDict()" helper function to create custom-sized dictionaries. generateDict() takes one input argument called "size," which determines the size of the generated dictionary. The function returns a dictionary with keys of type "String" and values of type "Int." First, we create an empty dictionary. Next, we validate the "size" input parameter using a guard statement: if the size is not greater than zero, we return the empty dictionary. Otherwise, we generate the keys and the values. We use a for-in loop to iterate from 0 up to, but not including, "size." That is, we iterate from 0 to (size - 1). The key is the string representation of the index; then we set the value for the given key, which is the index itself. In the end, we get a dictionary filled with key-value pairs of the form [String: Int].
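The array check described above might be sketched like this. The transcript doesn't show the function body or the arrays' contents, so the signature, the guard for an empty array, and the sample arrays are assumptions:

```swift
/// Returns true if the array's first element is 0.
/// Only the first element is ever inspected, so this runs in O(1)
/// no matter how large the array is.
func startsWithZero(array: [Int]) -> Bool {
    guard let first = array.first else {
        return false // assumption: an empty array doesn't start with zero
    }
    return first == 0
}

// Inputs of very different sizes, mirroring the transcript's benchmark:
let tinyArray = [0, 1, 2]                              // 3 elements
let mediumArray = Array(repeating: 0, count: 10_000)   // 10,000 elements
let hugeArray = Array(repeating: 0, count: 1_000_000)  // one million elements
```

Timing `startsWithZero(array:)` on all three inputs (for instance, with the BenchTimer described earlier) should produce essentially identical results, since the function never looks past index 0.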
Now, let's create dictionaries of different sizes. The first one, called "smallDictionary," contains only three key-value pairs. "mediumDict" has five hundred, and "hugeDict" contains one hundred thousand keys and values. We rely on the "BenchTimer.measureBlock()" method to find out the execution time of the dictionary lookup. After running the demo, we indeed see that the time it takes to find an element doesn't depend on the size of the dictionary. Our test shows that dictionary lookup has constant time complexity; the differences in execution times are below the microsecond threshold.

Constant time algorithms are ideal because they are not affected by the size of the input. Unfortunately, it's not always possible to come up with a solution that runs in constant time.
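A self-contained sketch of the helper and the lookup measurement. The transcript uses BenchTimer.measureBlock(); to keep this snippet standalone, a single DispatchTime-based timing stands in for it, and the looked-up key is an assumption:

```swift
import Dispatch

/// Builds a [String: Int] dictionary with `size` entries:
/// "0" -> 0, "1" -> 1, ..., "\(size - 1)" -> size - 1.
func generateDict(size: Int) -> [String: Int] {
    var result = [String: Int]()
    // Guard against zero or negative sizes: return the empty dictionary.
    guard size > 0 else { return result }
    for i in 0..<size {
        result[String(i)] = i // key: string form of the index, value: the index
    }
    return result
}

let smallDictionary = generateDict(size: 3)
let mediumDict = generateDict(size: 500)
let hugeDict = generateDict(size: 100_000)

// Time one lookup per dictionary. (The course's BenchTimer averages
// ten runs, which gives more stable numbers than this single-shot timing.)
for dict in [smallDictionary, mediumDict, hugeDict] {
    let start = DispatchTime.now().uptimeNanoseconds
    _ = dict["0"]
    let end = DispatchTime.now().uptimeNanoseconds
    print("Lookup in \(dict.count)-entry dictionary: \(end - start) ns")
}
```

Because Swift's `Dictionary` is hash-based, each lookup hashes the key and jumps to its bucket, so the measured times should stay in the same ballpark whether the dictionary holds three entries or a hundred thousand.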