

Time Complexity and Space Complexity

Definition

The time an algorithm takes to solve a given problem is called its time complexity, and the memory it requires is called its space complexity.

Example

Asymptotic notation is a very useful way to express these. Let's look at some examples of time complexity.

Constant Time $O(1)$

Algorithms that finish in the same amount of time regardless of $n$, essentially taking no time at all. For example, finding the third element of $\mathbf{x} = [4,3,8,-1,-9,0,5,7,2,6]$ simply returns $8$ without caring what the rest of $\mathbf{x}$ looks like.
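As a minimal Python sketch (the function name `third_element` is just for illustration, not from the text), a fixed index lookup never touches the rest of the array:

```python
def third_element(x):
    """Return the third element: a single index lookup, so O(1)."""
    return x[2]

x = [4, 3, 8, -1, -9, 0, 5, 7, 2, 6]
print(third_element(x))  # 8 -- no other element is ever inspected
```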

Linear Time $O(n)$

Takes time proportional to $n$. For example, when finding the maximum value of $\mathbf{x} = [4,3,8,-1,-9,0,5,7,2,6]$, the maximum $8$ already appears by the third element, but to be sure it really is the maximum, every remaining element must still be checked. Put that way it may sound slow, but in practice linear-time algorithms are considered quite fast.
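A rough Python sketch of such a linear scan (the names here are illustrative, not from the original):

```python
def find_max(x):
    """Compare every element once against the best seen so far: O(n)."""
    best = x[0]
    for value in x[1:]:       # each remaining element is checked exactly once
        if value > best:
            best = value
    return best

x = [4, 3, 8, -1, -9, 0, 5, 7, 2, 6]
print(find_max(x))  # 8 -- found early on, but the rest still had to be checked
```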

Quadratic Time $O(n^2)$

Takes time proportional to $n^2$. For example, to sort $\mathbf{x} = [4,3,8,-1,-9,0,5,7,2,6]$ in order of size, find the maximum value, place it at the end, and repeat on the remaining array. Since finding the maximum each time takes $n, n-1, \cdots , 1$ steps, the total is $\displaystyle \sum_{k=1}^{n} k = {{ n (n+1)} \over {2}} = O(n^2)$ by the formula for the sum of an arithmetic series. A smarter method may well exist, but at least there is no need to consider anything slower.
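The procedure described above is essentially selection sort; here is a short Python sketch of it (the helper names are only illustrative):

```python
def selection_sort(x):
    """Repeatedly find the maximum of the unsorted prefix and move it to the end.
    The searches take n, n-1, ..., 1 comparisons, which sum to O(n^2)."""
    x = list(x)                       # work on a copy
    for end in range(len(x), 1, -1):  # the unsorted part is x[:end]
        i_max = max(range(end), key=lambda i: x[i])
        x[i_max], x[end - 1] = x[end - 1], x[i_max]  # place the maximum at the end
    return x

print(selection_sort([4, 3, 8, -1, -9, 0, 5, 7, 2, 6]))
# [-9, -1, 0, 2, 3, 4, 5, 6, 7, 8]
```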

Cubic Time $O(n^3)$

Takes time proportional to $n^3$. For example, consider multiplying $n \times n$ matrices with $n = 2^{k}$, that is, computing $C = AB$ for $A, B \in \mathbb{R}^{n \times n}$. Partitioning each matrix into blocks, the product reduces to the following eight multiplications of ${{n} \over {2}} \times {{n} \over {2}}$ matrices. $$ AB= \begin{bmatrix} A_{1} & A_{2} \\ A_{3} & A_{4} \end{bmatrix} \begin{bmatrix} B_{1} & B_{2} \\ B_{3} & B_{4} \end{bmatrix} = \begin{bmatrix} C_{1} & C_{2} \\ C_{3} & C_{4} \end{bmatrix} = C $$

$$ C_{1} = A_{1}B_{1} + A_{2} B_{3} $$

$$ C_{2} = A_{1}B_{2} + A_{2} B_{4} $$

$$ C_{3} = A_{3}B_{1} + A_{4} B_{3} $$

$$ C_{4} = A_{3}B_{2} + A_{4} B_{4} $$ Continuing this computation, if $T(n)$ is the time to multiply two $n \times n$ matrices and $c$ is the running time outside the recursion, then $\displaystyle T(n) = 8 T \left( {{n} \over {2}} \right) + c$, so $$ \begin{align*} T(n) =& 8 T \left( {{n} \over {2}} \right) + c \\ =& 8 \left[ 8 T \left( {{n} \over {4}} \right) + c \right] + c \\ =& 8 \left[ 64 T \left( {{n} \over {8}} \right) + 8 c + c \right] + c \\ =& 8^3 T \left( {{n} \over {8}} \right) + (1+8+8^2)c \\ =& 8^3 T \left( {{n} \over {8}} \right) + {{8^3 - 1} \over {8 - 1}}c \\ & \vdots & \\ \approx& 8^{\log_{2} n} ( T(1) + c ) \\ =& n^{\log_{2} 8} ( T(1) + c ) \\ =& n^{3} ( T(1) + c ) \\ =& \Theta ( n^3 ) \end{align*} $$ Matrix multiplication appears almost without exception in applied fields that involve mathematics, and it is performed constantly. However, $n^3$ is rather large: even $n=100$ already gives a whopping $n^3 = 10^6$. Can the computation be reduced further? It seems there is nothing left to trim when no unnecessary calculations are being done in the first place. Nevertheless, an amazing method called the Strassen algorithm reduces it even further. This is the beauty of algorithms.
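A straightforward Python sketch of this block-recursive multiplication (assuming $n$ is a power of $2$; all names are illustrative, and this is the plain eight-multiplication recursion, not Strassen's method):

```python
def block_multiply(A, B):
    """Multiply two n x n matrices (n a power of 2, given as lists of lists)
    by splitting each into four (n/2) x (n/2) blocks: eight recursive
    multiplications give T(n) = 8 T(n/2) + c, i.e. Theta(n^3)."""
    n = len(A)
    if n == 1:
        return [[A[0][0] * B[0][0]]]
    h = n // 2

    def split(M):
        # the four quadrants of M
        return ([row[:h] for row in M[:h]], [row[h:] for row in M[:h]],
                [row[:h] for row in M[h:]], [row[h:] for row in M[h:]])

    def add(X, Y):
        return [[x + y for x, y in zip(rx, ry)] for rx, ry in zip(X, Y)]

    A1, A2, A3, A4 = split(A)
    B1, B2, B3, B4 = split(B)
    C1 = add(block_multiply(A1, B1), block_multiply(A2, B3))
    C2 = add(block_multiply(A1, B2), block_multiply(A2, B4))
    C3 = add(block_multiply(A3, B1), block_multiply(A4, B3))
    C4 = add(block_multiply(A3, B2), block_multiply(A4, B4))
    # reassemble the four quadrants into the full product
    return [r1 + r2 for r1, r2 in zip(C1, C2)] + [r3 + r4 for r3, r4 in zip(C3, C4)]

A = [[1, 2], [3, 4]]
B = [[5, 6], [7, 8]]
print(block_multiply(A, B))  # [[19, 22], [43, 50]]
```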

Logarithmic Time $O \left( \log (n) \right)$

This means incredibly fast. Suppose we want to find the position of $0$ in the sorted array $\text{sort} (\mathbf{x}) = [-9,-1,0,2,3,4,5,6,7,8]$. An intuitive method is to pick the middle element, check whether it is greater or less than $0$, discard the right half if it is greater and the left half if it is less, and keep narrowing the array down until $0$ is found. This is called binary search. If $T(n)$ is the time for an array of length $n$ and $c$ is the time for one comparison, then $$ \begin{align*} T(n) =& T \left( {{n} \over {2}} \right) + c \\ =& \left[ T \left( {{n} \over {4}} \right) + c \right] + c \\ =& T \left( {{n} \over {4}} \right) + 2 c \\ =& T \left( {{n} \over {8}} \right) + 3 c \\ & \vdots & \\ \approx& T \left( 1 \right) + c \log_{2} n \\ =& O \left( \log_{2} (n) \right) \end{align*} $$ By the change-of-base formula for logarithms it is fine to simply write $O( \log (n) )$; in computer science, when the base of a logarithm is not specified, it is usually taken to be $2$ rather than $e = 2.7182\ldots$, so there is no need to fuss over the notation.
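A short Python sketch of binary search on the sorted array above (the variable names are mine, not from the post):

```python
def binary_search(sorted_x, target):
    """Halve the search range with every comparison, so O(log n)."""
    lo, hi = 0, len(sorted_x) - 1
    while lo <= hi:
        mid = (lo + hi) // 2
        if sorted_x[mid] == target:
            return mid
        elif sorted_x[mid] < target:
            lo = mid + 1   # discard the left half
        else:
            hi = mid - 1   # discard the right half
    return -1              # target is not in the array

sorted_x = [-9, -1, 0, 2, 3, 4, 5, 6, 7, 8]
print(binary_search(sorted_x, 0))  # 2 -- the position of 0
```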

Exponential Time $O(2^n)$

This means it takes an incredibly long time. Computing the Fibonacci sequence by plain recursion is very inefficient: if $T(n)$ is the time to compute the $n$-th Fibonacci number $a_{n}$ and $c$ is the time to add the two previous terms, then for the golden ratio $\phi = 1.618\ldots$, $$ \begin{align*} T(n) =& T(n-1) + T(n-2) + c \\ =& [ T(n-2) + T(n-3) + c ] + T(n-2) + c \\ =& 2 T(n-2) + T(n-3) + (1+1) c \\ =& 3 T(n-3) + 2 T(n-4) + (1+1+2) c \\ =& 5 T(n-4) + 3 T(n-5) + (1+1+2+3) c \\ =& 8 T(n-5) + 5 T(n-6) + (1+1+2+3+5) c \\ & \vdots & \\ =& c \sum_{k=1}^{n} a_{k} \\ \approx& {{\phi^{n-1}-1} \over {\phi-1}} c \\ =& \Omega ( \phi^n ) \end{align*} $$ The last step is approximated using the formula for the sum of a geometric series, since $\displaystyle \phi \approx {{a_{n}} \over {a_{n-1}}}$. Such algorithms are practically unusable, so one should look for remedies such as dynamic programming; conversely, time complexities like this can be exploited in cryptography.
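For contrast, a small Python sketch of both versions (with the usual convention $a_1 = a_2 = 1$; the memoized variant stands in for the dynamic-programming remedy mentioned above):

```python
from functools import lru_cache

def fib_naive(n):
    """Plain recursion: T(n) = T(n-1) + T(n-2) + c, which grows roughly like phi^n."""
    if n <= 2:
        return 1
    return fib_naive(n - 1) + fib_naive(n - 2)

@lru_cache(maxsize=None)
def fib_memo(n):
    """Memoization (a form of dynamic programming): each n is computed once, so O(n)."""
    if n <= 2:
        return 1
    return fib_memo(n - 1) + fib_memo(n - 2)

print(fib_naive(30), fib_memo(30))  # 832040 832040 -- same value, vastly different cost
```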