# Understanding Time and Space Complexity: A Comprehensive Guide to Big O Notation


## Introduction

Understanding time and space complexity is crucial for any software engineer aiming to write efficient code. In this article, we will delve into the intricacies of Big O notation and how it helps us evaluate the performance of algorithms based on their resource consumption. We’ll illustrate these concepts using straightforward examples and provide a deeper understanding of complexity analysis.

## What is Big O Notation?

Big O notation is a mathematical representation that describes the upper limit of an algorithm's running time (time complexity) or space requirement (space complexity) relative to the input size, denoted as **n**. It's a tool for comparing the efficiencies of different algorithms, especially for large inputs.

### The Importance of Time Complexity

The time complexity of an algorithm signifies how the execution time increases as the size of the input grows. It tells us about the computational efficiency of an algorithm. Common classifications include:

- **O(1)** - Constant time
- **O(n)** - Linear time
- **O(n^2)** - Quadratic time
- **O(log n)** - Logarithmic time
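
The examples later in this article cover O(1), O(n), and O(n^2); for O(log n), binary search over a sorted array is the classic illustration. Here is a minimal sketch (the function name and interface are illustrative, not taken from the article's examples):

```
# Binary search: a classic O(log n) algorithm.
# Each comparison halves the remaining search range,
# so a sorted array of n elements needs about log2(n) steps.
def binary_search(array, target):
    lo, hi = 0, len(array) - 1
    while lo <= hi:
        mid = (lo + hi) // 2
        if array[mid] == target:
            return mid  # found the target
        elif array[mid] < target:
            lo = mid + 1  # discard the left half
        else:
            hi = mid - 1  # discard the right half
    return -1  # target not present
```

Doubling the input adds only one extra comparison, which is why the logarithmic curve flattens out as **n** grows.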

### The Importance of Space Complexity

Space complexity measures the amount of working storage an algorithm needs. It counts the auxiliary space the algorithm allocates beyond the input data itself. Just like time complexity, space complexity is expressed in Big O notation.

## Analyzing Time Complexity with Examples

### Example 1: Squaring Numbers

Let’s consider an example where we have an array of integers, and our task is to return an array of their squares. For instance, given an input array **A = [1, 2, 3, 4, 5]**, we want to compute the array **B = [1, 4, 9, 16, 25]**.

```
# Define a function to square numbers in an array
# Time Complexity: O(n)
def square(array):
    return [x ** 2 for x in array]  # List comprehension
```

In this example, we traverse each element in the array once, resulting in a time complexity of **O(n)**. This is because we perform a single operation (squaring) per input element.

### Example 2: Finding All Pairs

Now let's look at a more complex problem: finding all unique pairs in an array. Using the same input
array **A**, we want to find pairs (1,2), (1,3), (1,4), etc.

- For **n = 5** elements, there are ten unique pairs, more than five. As we fix one element, we traverse the rest of the array, and we repeat this for nearly every element.
- This results in a time complexity of **O(n^2)**, since we loop through **n** elements and, within that loop, iterate through another **n** elements.
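
A direct implementation of this nested traversal might look like the following sketch (the `all_pairs` name is illustrative, not from the original text):

```
# Generate all unique pairs from an array.
# Time Complexity: O(n^2) -- for each fixed element, we loop
# over every element that comes after it.
def all_pairs(array):
    pairs = []
    for i in range(len(array)):
        for j in range(i + 1, len(array)):
            pairs.append((array[i], array[j]))
    return pairs
```

Starting the inner loop at `i + 1` avoids duplicates like (2,1), which is the same pair as (1,2). For **A = [1, 2, 3, 4, 5]** this produces 10 pairs.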

### Example 3: Constant Time Complexity

Let's consider an example in which we want to retrieve the first number in the array:

```
# Function to get the first element of an array
# Time Complexity: O(1)
def get_first(array):
    return array[0]
```

Here, irrespective of the size of array **A**, retrieving the first element takes a constant amount of time, leading to a time complexity of **O(1)**.

## Understanding Space Complexity

Space complexity often reflects the additional memory allocations that your algorithm requires beyond the input size. Let's analyze our previous examples:

### Squaring Numbers Function Space Complexity

For the squaring example, we create a new array to store results. Therefore, if the input size is **n**, the space complexity is also **O(n)**.

### Modifying Original Array Space Complexity

However, if we overwrite the original input:

```
# Function to square elements in place
# Space Complexity: O(1)
def square_in_place(array):
    for i in range(len(array)):
        array[i] = array[i] ** 2
```

In this case, we do not allocate any additional memory apart from the input array, leading to a space complexity of **O(1)** - constant space.

## The Big O Chart

Here’s a simple reference chart of commonly used complexities:

| Big O Notation | Name | Description |
|----------------|--------------|----------------------------------------------|
| O(1) | Constant | Execution time does not depend on input size |
| O(log n) | Logarithmic | Grows slowly as input increases |
| O(n) | Linear | Directly proportional to input size |
| O(n log n) | Linearithmic | Common in efficient sorting algorithms |
| O(n^2) | Quadratic | Nested loops through the input |
| O(2^n) | Exponential | Grows extremely fast |
| O(n!) | Factorial | Fastest-growing of all |
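
To make the chart concrete, here is a small sketch that counts the operations performed by the linear (squaring) and quadratic (all-pairs) approaches from the earlier examples; the helper name is our own:

```
# Compare operation counts for the linear and quadratic examples.
def count_ops(n):
    linear = n                     # one squaring per element
    quadratic = n * (n - 1) // 2   # number of unique pairs
    return linear, quadratic

for n in [10, 100, 1000]:
    lin, quad = count_ops(n)
    print(f"n={n}: linear={lin}, quadratic={quad}")
```

At n = 10 the gap is modest (10 vs. 45 operations), but at n = 1000 the quadratic approach does roughly 500 times more work, which is exactly the divergence the chart describes.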

## Summary

In this article, we explored the fundamental concepts of time and space complexity, specifically addressing Big O notation. We worked through several practical examples demonstrating how algorithm performance is measured, in terms of both time and space. Understanding these principles is crucial for developing efficient algorithms that scale well with increasing input sizes. Remember to consider both time and space complexity, and to compare multiple approaches, when evaluating algorithms.

We hope this guide has clarified your understanding of Big O notation and how it applies to algorithmic efficiency. If you have any further questions or inquiries, feel free to reach out!

hey guys Greg here let's talk about time and space complexity as well as Big O notation now when we talk about these

things we generally have some sort of input so we'll say we have an array of just the first five numbers now this is

a particular example but in general you could have n many values in your array so you could be given one value you

could be given two three maybe even a billion so let's say we were given an array of numbers which I will just call

capital A and our task was to return a new array which had the square of all the numbers so we'd want to return we'll

call it B which is going to be 1 squared which is 1, 2 squared which is 4, 3 squared which is 9, this would be 16 and 25 now in general

this would be written as some sort of a function in Python you define a function with def and you'd give it a name say

like Square so we'll just call it the square function and it's going to take an array so we'll just say arr now I'm

not actually going to write this function but you could picture probably what it does the function would make a

new array and it's going to create that by looping over all of the elements in A and squaring those we have to visit all

of the numbers in A so again if A has n many numbers then our squaring function here it has to loop over all of these n

things so in general if your array has n many elements then our squaring function will see that it actually has a time

complexity of Big O of n so basically saying that we have to loop over all of the n things now let's say we had a

different problem with the same array A here instead of getting the squares of all the numbers what if we

wanted all of the pairs of different numbers so that would be one with two one with three one with four and one

with five those are all the combinations with one and you don't want the duplicates like 2,1 because those are

actually the same pair of numbers here you could then start with two so it's going to be two with three two could

be with four two could be with five so those are all the ones with two all the ones starting with three are just

going to be three with four as well as three with five and you could also have four with five is going to be your last

pair there these are all the different pairs of numbers in the array now in this example we have five numbers so n

is equal to 5 we have way more than five things in the output here so you could think that this is probably going to be

a time complexity that's actually much more than just o of n it's not going to be o of n because we're not just doing

five things we're doing way more than that and we'll see why let's think about this as two colors we could have green as

this one and blue is going to be set over here now basically what you would do is have all of them starting with one

so you'd have green here but then blue moves over so we have 1 and 2, 1 and 3, 1 and 4, 1 and 5 notice

we basically just went through the entire array already even with just this green thing fixed here then this green

would move over one and our blue would reset to over here so we'd basically have to go through the array again we'd

have to go through most of the array again we'd have 2,3 2,4 and 2,5 then green moves over only one position and

we go through a lot of the array again we'd have 3,4 3,5 and then ultimately we'd have 4,5 now this math won't

be exact but roughly if we have n positions for the green one because we basically have to start with everything

and then for each of those n things we're basically sliding the blue one across most of the array we're

essentially getting an n times an n which is actually going to be written as an n squared so that's why the time

complexity of this problem is actually going to be written as Big O of n^2 basically n * n so that's some of the

intuition but let's talk exactly about what Big O notation actually is well basically what you'd have to do is for

some problem that you have you would draw n on the x-axis so that's say like the size of the array that you're given

it could have one element it could have two it could have all the way up until like a trillion elements we'll just

write this as a million but it could be infinite really and on the y-axis you would write the time that it takes and

don't think about this as milliseconds in fact most people would actually say this is more so just the number of

operations that you have to do the number of different things that your computer is actually doing your program

could do just one operation it could do two three again all the way up until we'll just leave it as a million

different things but again it's basically infinite now let's give this a little Legend in green we're going to

draw something for the square problem where we are just trying to square all of the numbers in the array we saw that

that had a time complexity of Big O of n now let's talk about what that means well basically if your array has

one thing to square then you just need to do one operation you need to square it if your array has two things well you

need to square both of those so we do two things if you had three then you would do three if you had four then you

would do four and so on basically you can see here it is creating a line every single time we have one more element we

have one more thing to do you would draw this as a straight line and in statistics you'd say this is directly

proportional the more elements you have to square well the more operations you'd have to perform now this is opposed to

our second example which I'll draw in blue here this was where we looked at basically all pairs of numbers and we

said that that had a time complexity of Big O of n^2 or n * n now this is what we call a quadratic function or a

parabola in math and it's very very steep it basically looks something like this I'm not going to draw the math

precisely 'cause it's a little complicated but it would curve up very sharply like this and very very quickly it starts to

get extremely steep so getting all of the pairs of numbers that is drastically more time-consuming than it

is to just get the square of all of the numbers okay so we would say that this algorithm here is Big O of n^2 and we'd

say that this one is Big O of n and in other terms we would say that this is a linear function because it draws a line

and the blue one is what we'd call a quadratic function it's way slower and way more timec consuming now there's so

many different types of functions you could have we could have another one here which is like all triples of

numbers so instead of just a pair it would be all basically three pieces which would be a b and c for each of the

numbers you could have that and that would have a big O of n cubed that's basically n * n * n it would look maybe

something like this where it's like even sharper than that okay but let's get rid of most of our functions here let's get

rid of the red and the blue one and let's just look at this green one here let's now learn what this big O actually

means here these are just functions now we need to say exactly what the Big O part of this means now what the big o

means is essentially can I draw a line that's worse AKA more operations than yours so can I draw a line that's always

going to be worse than yours well yes I actually can I just did that and because I could do that that means that it's

true that your function is Big O of n now what also counts as a worse line is something like this which is a straight

line but then after a point it's always going to be worse so after this point here my blue line here that one's always

going to be higher than yours so even though it's less high in this region over here after some point AKA

after some n value here after a particular threshold of the size of your array my line I drew here is always

going to be worse than yours and because I could do that that's what it means for your function to be Big O of n now why

is this actually useful well if we bring back our Parabola function which I'll draw in red this time this is again

going to be all different pairs of numbers and as we said this is going to be a big O of n s time complexity so if

we were to draw that maybe it looked something like this so here's a quadratic function well there is no line

in the world that you could draw that's going to what we call bound this thing AKA that's higher than it if you were to

draw literally any line that ever existed maybe like this one well after a point your red line is going to be

higher than it and even if we drew a super steep line like this it's increasing at such a faster rate than my

line that after a point your red line is always going to be worse so my blue line can bound our green line I can draw a

line worse than that one but I can't draw a line that's worse than a parabola that's just impossible so that means

that this all pairs function is definitely not Big O of n but just like we could draw a line that's going to be

steeper than the green one after a point we can draw a parabola that's going to be steeper than our other one after a

point here so in this region here our red one is higher but after some point here after some value of n our blue

parabola is always going to be higher it's always going to take more operations and therefore that is why we

can say that our red function is Big O of n^2 we could draw another parabola that bounded it now there we were

generally talking about time complexity with Big O notation but you can also talk about space complexity so if you

were again writing a function that had an array we'll call it a if you wanted to return an array that had the squares

of all the numbers well then basically you would have to create a new array called B which is going to be the

squares so 1 4 9 16 25 just like we said before and this thing takes up space we are actually

storing this in the computer so if your array had in general n many things well then we're basically taking up n

much space if you have five things and we need to square them and store those five things well that's going to take up

you know five much space it's going to take in general n space so you'd say that this algorithm has a big O of n

space complexity because we're taking up n much space however what's really interesting is that you actually don't

have to create a new array that's just what makes sense to us you can actually just manipulate the original array that

we were given so we were given this array that has n things and you wanted an array that had the squared numbers

well you can actually just modify the original input so you could do this as one this is going to be replaced by 4 to

be 9 16 and 25 so notice it's the same exact block of memory we were given we didn't use up any new space so this is

actually what we call a constant space solution basically meaning that we're not using any extra space or we're only

using constant space we'll talk about that in a moment you'd actually say that this takes up o of one space so big O of

one that is constant space now there's also certain algorithms that might have a constant time complexity this one does

not because you have to loop over the n things so it would have a time of Big O of n because we have to loop over

the n things but the space complexity is constant because you're not using any extra space now let's see an example of

what could have a constant time solution so again we'll just use the exact same example here 1 2 3 4 5 but instead of

trying to square all of the numbers what if we just wanted a function that returned the first number in the array

generally arrays are zero indexed so these are going to be the array positions and if you haven't done much

with arrays don't worry we'll get to them but a cool feature of arrays is that you can write something like a at

zero that's generally how you'd write it in a programming language and that would tell you what the first element of the

array was if you did this it would tell you that the number is one and so if your problem was to know what is the

first number in the array well that would actually have a time complexity of Big O of one a constant time complexity

because regardless of how big the array is if you had five things 10 things a million things in general if you had N

Things in the array that's not part of your big O expression here n is not anywhere in here we're just writing Big

O of one to say this has nothing to do with n it is constant with respect to n now a common confusion that people might

have is thinking that o of one means you do one thing it doesn't necessarily mean you do one thing if you change this

problem to say what are the first three numbers in the array so you'd want to have 1 2 and three well this is still

going to be a constant time solution and you can still write that as big O of one we'll see why in a few minutes but

basically what you would do is you'd get A at 0, A at 1 and A at 2 so these are going to be the first three numbers

in the array AKA these three right here if you wanted to know that we could just do that it's going to take three

operations to do that and so that still has nothing to do with the size of n whether n had a 100 a million or a

billion things you're just doing three things AKA a constant number of things so this is still going to be constant

time and you can still write that as bigle of one time and to bring back our previous graph here if you had n on the

xaxis and the number of operations which I'll just write time but it's not really in milliseconds it's like the number of

things that you do well a big O of n solution would look something like this so this is a line a big O of n^2

solution might look something like this so this is going to be a parabola which is Big O of n^2 a constant time solution

would be written as a horizontal line basically meaning as n is increasing this means that n gets bigger the time

that it takes is not increasing at all so it's fixed at this point right here regardless of how big the array is it's

not going to take any more time and we'll see lots of examples of O of one things for example hashing is going to

come up hashing is generally an O of one thing as we saw array indexing is going to be o of one so you can very quickly

get the element that's at a particular part of an array and in general just lots of things are o of one if you

wanted to print a number so if you wanted to print something like five well you'd say that's o of one because you

just have a number and you print it if you had to print n many numbers so if you had like n numbers to print and you

had to print all of those well that's going to be Big O of n because you have to loop through all of them and then

print it that many times now there's some important mathematical laws to understand about Big O notation which is

basically you can usually write them in a very simple way if you discovered that your function was something like o of

n^2 plus maybe 2n + 1 well why would that come up well maybe your function did all pairs of numbers so that's going

to be an n squared thing to do 2n would come up if you did two separate loops so if you maybe printed all of the numbers

so if you printed all of them well you'd have n things to print and then if you also say squared all of the numbers so

if you had to square each of the numbers well then that's going to be an n thing to do as well so you have one loop

to print all of them that's o of n and then you have another loop to square all of them that's also o of n so basically

that would be a 2n or n plus n thing to do and then maybe you just did like one more thing like you just printed some

random number then you might get something like n^2 + 2n + 1 but the main reason why Big O is used so popularly is

because this actually just reduces to Big O of n squared and even if this was like a 5 n squared like you got all the

pairs of numbers five times for some reason that's still going to just be written as big O of n^2 now there's

basically two main reasons for that which is say you had 5 n^2 here that reduces to that that would basically

mean you had a really sharp parabola like this and so it's going to be five times worse than you know a normal n

squared but the laws of Big O say can you draw a worse one well yeah you could just draw something like 6 n^2 so that's

just going to be like ramps up a little bit faster and so it's still going to be written as n^2 and for the same reason

if your function is like a complicated parabola this is just a complicated one where you had 5n^2 + 2n + 1 there's

some other weaker components as well easily the worst thing that you're doing here is the n squared part or the 5 n squared part

this stuff like basically doesn't even matter you have a line piece over here and then you have a really steep

parabola over here so you'd really only care about the strongest component which in this case is going to be the n squared if

you had something like an n cubed as well or maybe a 3n cubed as well well then that's going to be the worst piece

so that would actually reduce to just Big O of n cubed because you only care about that strongest component and those

numbers out in front here that's just saying how strong your curve is but you can always draw one that's going to be

stronger than that and for the same reason if you had something that had to do with the alphabet so if you say

looked at all of the lowercase English letters well there's a finite or constant number of those things there's

particularly 26 of those things and so that would basically mean if you were drawing a line here this would be at 26

operations and as n got bigger in an array well it's still just going to be 26 letters so there's really nothing to

do there and so for the same reason you can actually just equate this to be Big O of one they mean the same thing

because if this is some constant number well can you draw basically a worse horizontal line well yeah you can just

draw one that's basically 27 here and so for that same reason you would just remove those constants out in front

instead of 26 it would just basically be one so you could reduce any known constant number to just be one because

that's the smallest number that we have now I'm just going to finish things off with this chart here which is just taken

from LeetCode basically this is the order of notations that you generally prefer here so at the very bottom we

have o of one which is constant generally that's pretty fast then log n we'll see that when we do binary search

and it's very very fast from there it's linear which is O of n and then this one comes up if you're often sorting things

so this is going to be o of n log n to sort o of n squared is in the horrible region because it is way way worse than n log n

and way worse than o of n it's very very Steep and so generally that's going to be like The Brute Force algorithm if you

do something that basically just means the slowest way possible and O of n factorial and 2 to the n these are

insanely slow and you only want to do those if that's actually the fastest thing possible okay I hope this was