Time and Space Complexity: Tech Interview Questions

Raquel Fraktas
3 min read · Jan 28, 2022

So you’re new to interviewing, and the above title sounds like you’re reading an article for NASA.

Basically, time complexity is a function describing the amount of time an algorithm takes to complete in terms of the size of its input. Space complexity is a function describing the amount of memory (space) an algorithm uses in terms of the size of its input.

Time and space are very valuable, and you want to write code that takes the least amount of time and space to complete. A question a technical interviewer could ask is, “What is the Big-O of your answer?”

Big-O complexity chart: https://www.bigocheatsheet.com/

Big O notation is how we express the analysis of an algorithm’s time and space complexity cost. It is written like the entries in the chart above; O(n²) is pronounced “O of n squared,” where “n” represents the input size, and the expression inside the parentheses tells us how the algorithm’s cost grows in relation to its input size.

O(1) or O(log n) is the ideal scenario you want to hit, since it takes the least amount of space and time to execute the function. We don’t care about O(2), O(3), etc.; any constant amount of work simplifies down to O(1), which is to say the operation is a flat line in terms of scalability. These operations take the same amount of work no matter how large the input is. O(1) is constant time.
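As a quick illustration, here is a minimal sketch in Python (the function name and the `items` list are just placeholders for this example):

```python
def get_first(items):
    # Accessing a list element by index is a single operation,
    # no matter how long the list is: O(1) time and O(1) extra space.
    return items[0]
```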

For O(n), loops usually take on this value: the code runs once for every value in the input, so the number of operations increases linearly with the input size. This is linear time.
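For instance, a single loop over a hypothetical `items` list looks like this:

```python
def print_all(items):
    # The loop body runs once per element, so the work grows
    # linearly with the input size: O(n) time.
    for item in items:
        print(item)
```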

A good rule of thumb for O(n²) is that if you see a nested loop, you can use multiplication to figure out the notation. The outer loop on its own is O(n), like the one described above, but for each of those iterations the inner loop runs over the dataset again, so you multiply and get O(n * n), which is n squared. This is known as quadratic time.
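A sketch of that nested-loop case, again using a hypothetical `items` list:

```python
def print_all_pairs(items):
    # For each of the n elements, the inner loop runs n more times,
    # so the total work is n * n, which is O(n²) time.
    for a in items:
        for b in items:
            print(a, b)
```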

How to calculate Big O

As a brief overview: break the function down into smaller chunks and calculate the Big O of each operation.

Then add up the Big O of each operation, drop the constants, and identify the highest order term.

What I mean by dropping the constants is this: if you have two loops in a function (not nested), your calculation could come out to O(n + n), or O(2n). The constant 2 doesn’t matter as the input grows, so we simplify it to O(n).
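A sketch of that situation, with two back-to-back loops over a hypothetical `items` list:

```python
def print_twice(items):
    # First loop: O(n)
    for item in items:
        print(item)
    # Second loop (not nested): another O(n)
    for item in items:
        print(item)
    # Total: O(n + n) = O(2n); the constant 2 is dropped, leaving O(n).
```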

Similarly, O(1 + 1 + 1 + 1), or O(4), simplifies to just O(1).
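Putting it all together, here is a rough worked example (the function name and `items` are placeholders) where the highest order term wins:

```python
def summarize(items):
    print(len(items))          # O(1): constant work
    for item in items:         # O(n): one pass over the input
        print(item)
    for a in items:            # O(n²): nested loops over the input
        for b in items:
            print(a, b)
    # Total: O(1 + n + n²); the highest order term wins, so this is O(n²).
```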
