

Time Complexity and Space Complexity

Definition

The time it takes to solve a given problem is called the time complexity, and the amount of memory required is called the space complexity.

Example

Asymptotic notation is a very useful way to express these. Let's look at some examples of time complexity.

Constant Time $O(1)$

Algorithms that finish in the same amount of time regardless of $n$, essentially taking no time at all. For example, finding the third element of $\mathbf{x} = [4,3,8,-1,-9,0,5,7,2,6]$ just returns $8$ without caring what the rest of $\mathbf{x}$ looks like.
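To make this concrete, here is a minimal Python sketch (the function name third_element is an illustrative assumption, not from the original):

```python
def third_element(x):
    # Indexing a list is a single operation: the running time does not
    # depend on how long x is, so this is O(1).
    return x[2]

x = [4, 3, 8, -1, -9, 0, 5, 7, 2, 6]
print(third_element(x))  # 8
```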

Linear Time $O(n)$

Takes time proportional to $n$. For example, when finding the maximum value of $\mathbf{x} = [4,3,8,-1,-9,0,5,7,2,6]$, the maximum $8$ turns up after only a few comparisons, but to be sure that it really is the maximum, the rest of the elements must be checked as well. Although this may sound slow, linear-time algorithms are actually considered quite fast in practice.
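A minimal Python sketch of such a scan (the function name maximum is an assumption):

```python
def maximum(x):
    # Every element must be examined once to be sure nothing larger was
    # missed, so the running time grows linearly with len(x): O(n).
    best = x[0]
    for value in x[1:]:
        if value > best:
            best = value
    return best

x = [4, 3, 8, -1, -9, 0, 5, 7, 2, 6]
print(maximum(x))  # 8
```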

Quadratic Time $O(n^2)$

Takes time proportional to $n^2$. For example, to sort $\mathbf{x} = [4,3,8,-1,-9,0,5,7,2,6]$ by size, find the maximum value, place it at the end, and repeat on the remaining array. Since finding the maximum each time takes $n, n-1, \cdots , 1$ steps, their sum is $\displaystyle \sum_{k=1}^{n} k = {{ n (n+1)} \over {2}} = O(n^2)$ by the formula for the sum of an arithmetic series. A smarter method may well exist, but there is no need to contemplate a more foolish one.
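A rough Python sketch of the procedure just described, written as a straightforward selection-style sort (the function name is an assumption):

```python
def selection_sort(x):
    # Repeatedly find the maximum of the unsorted prefix and swap it to
    # the end of that prefix.  Each pass scans n, n-1, ..., 1 elements,
    # so the total work is n(n+1)/2 = O(n^2).
    x = list(x)  # work on a copy
    for end in range(len(x) - 1, 0, -1):
        largest = 0
        for i in range(1, end + 1):
            if x[i] > x[largest]:
                largest = i
        x[largest], x[end] = x[end], x[largest]
    return x

print(selection_sort([4, 3, 8, -1, -9, 0, 5, 7, 2, 6]))
# [-9, -1, 0, 2, 3, 4, 5, 6, 7, 8]
```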

Cubic Time $O(n^3)$

Takes time proportional to $n^3$. As an example, consider multiplying $n \times n$ matrices with $n = 2^{k}$: given $A, B \in \mathbb{R}^{n \times n}$, we want to compute $C = AB$. Writing the matrices in block form, the product reduces to eight multiplications of ${{n} \over {2}} \times {{n} \over {2}}$ matrices:

$$ AB= \begin{bmatrix} A_{1} & A_{2} \\ A_{3} & A_{4} \end{bmatrix} \begin{bmatrix} B_{1} & B_{2} \\ B_{3} & B_{4} \end{bmatrix} = \begin{bmatrix} C_{1} & C_{2} \\ C_{3} & C_{4} \end{bmatrix} = C $$

$$ C_{1} = A_{1}B_{1} + A_{2} B_{3} $$

$$ C_{2} = A_{1}B_{2} + A_{2} B_{4} $$

$$ C_{3} = A_{3}B_{1} + A_{4} B_{3} $$

$$ C_{4} = A_{3}B_{2} + A_{4} B_{4} $$

Continuing this computation recursively, if $T(n)$ is the time for one multiplication of size $n$ and $c$ is the running time outside the recursion, then $\displaystyle T(n) = 8 T \left( {{n} \over {2}} \right) + c$, so

$$ \begin{align*} T(n) =& 8 T \left( {{n} \over {2}} \right) + c \\ =& 8 \left[ 8 T \left( {{n} \over {4}} \right) + c \right] + c \\ =& 8 \left[ 64 T \left( {{n} \over {8}} \right) + 8 c + c \right] + c \\ =& 8^3 T \left( {{n} \over {8}} \right) + (1+8+8^2)c \\ =& 8^3 T \left( {{n} \over {8}} \right) + {{8^3 - 1} \over {8 - 1}}c \\ & \vdots \\ \approx& 8^{\log_{2} n} ( T(1) + c ) \\ =& n^{\log_{2} 8} ( T(1) + c ) \\ =& n^{3} ( T(1) + c ) \\ =& \Theta ( n^3 ) \end{align*} $$

Matrix multiplication appears almost without exception in applied fields that involve mathematics, and it is performed constantly. However, $n^3$ is rather large: even $n=100$ already gives a whopping $n^3 = 10^6$. Can the computation be reduced further? It seems there is nothing left to cut when no unnecessary calculation is being done, yet an amazing method called the Strassen algorithm reduces it even further. This is the beauty of algorithms.
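For reference, a minimal Python sketch of plain cubic-time multiplication, written here as the direct triple loop rather than the block-recursive formulation above (the function name matmul is an assumption):

```python
def matmul(A, B):
    # Straightforward matrix multiplication: for each of the n*n entries
    # of C we compute an inner product of length n, so the total work is
    # n * n * n = O(n^3).
    n = len(A)
    C = [[0] * n for _ in range(n)]
    for i in range(n):
        for j in range(n):
            for k in range(n):
                C[i][j] += A[i][k] * B[k][j]
    return C

A = [[1, 2], [3, 4]]
B = [[5, 6], [7, 8]]
print(matmul(A, B))  # [[19, 22], [43, 50]]
```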

Logarithmic Time $O \left( \log (n) \right)$

This means the algorithm is incredibly fast. Suppose we want to find the position of $0$ in the sorted array $\text{sort} (\mathbf{x}) = [-9,-1,0,2,3,4,5,6,7,8]$. An intuitive method is to pick the middle element, check whether it is greater or less than $0$, and discard the right half if it is greater or the left half if it is less, narrowing the search range until $0$ is found. This is called binary search. If $T(n)$ is the time for one step and $c$ is the time for one comparison, then

$$ \begin{align*} T(n) =& T \left( {{n} \over {2}} \right) + c \\ =& \left[ T \left( {{n} \over {4}} \right) + c \right] + c \\ =& T \left( {{n} \over {4}} \right) + 2 c \\ =& T \left( {{n} \over {8}} \right) + 3 c \\ & \vdots \\ \approx& T \left( 1 \right) + c \log_{2} n \\ =& O \left( \log_{2} (n) \right) \end{align*} $$

By the change-of-base formula it is fine to simply write $O( \log (n))$; in any case, when no base is specified for a logarithm in computer science, it usually means $2$ rather than $e = 2.7182\dots$, so there is no need to worry about the notation.
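A minimal Python sketch of the binary search described above (the function name and the choice of returning -1 when the target is absent are assumptions):

```python
def binary_search(sorted_x, target):
    # Each comparison halves the remaining search range, so at most
    # about log2(n) comparisons are needed: O(log n).
    lo, hi = 0, len(sorted_x) - 1
    while lo <= hi:
        mid = (lo + hi) // 2
        if sorted_x[mid] == target:
            return mid
        elif sorted_x[mid] > target:
            hi = mid - 1          # discard the right half
        else:
            lo = mid + 1          # discard the left half
    return -1                     # not found

print(binary_search([-9, -1, 0, 2, 3, 4, 5, 6, 7, 8], 0))  # 2
```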

Exponential Time $O(2^n)$

This means it takes an incredibly long time. Using naive recursion to compute the Fibonacci sequence is very inefficient: if $T(n)$ is the time to compute the $n$th Fibonacci number $a_{n}$ and $c$ is the time to add the two previous terms, then for the golden ratio $\phi = 1.618\dots$,

$$ \begin{align*} T(n) =& T(n-1) + T(n-2) + c \\ =& [ T(n-2) + T(n-3) + c ] + T(n-2) + c \\ =& 2 T(n-2) + T(n-3) + (1+1) c \\ =& 3 T(n-3) + 2 T(n-4) + (1+1+2) c \\ =& 5 T(n-4) + 3 T(n-5) + (1+1+2+3) c \\ =& 8 T(n-5) + 5 T(n-6) + (1+1+2+3+5) c \\ & \vdots \\ \approx& c \sum_{k=1}^{n} a_{k} \\ \approx& {{\phi^{n-1}-1} \over {\phi-1}} c \\ =& \Omega ( \phi^n ) \end{align*} $$

The last step is approximated using the formula for the sum of a geometric series, since $\displaystyle \phi \approx {{a_{n}} \over {a_{n-1}}}$. Such algorithms are practically impossible to use, so one should look for alternatives such as dynamic programming; conversely, such time complexities can be exploited in applications like cryptography.
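A minimal Python sketch of the naive recursion described above, alongside a memoized (dynamic programming) version for contrast; the function names and the convention $a_1 = a_2 = 1$ are assumptions:

```python
from functools import lru_cache

def fib_naive(n):
    # Recomputes the same subproblems over and over; the call tree has on
    # the order of phi^n nodes, so this takes exponential time.
    if n <= 2:
        return 1
    return fib_naive(n - 1) + fib_naive(n - 2)

@lru_cache(maxsize=None)
def fib_memo(n):
    # Dynamic programming: each subproblem is solved only once, so this is O(n).
    if n <= 2:
        return 1
    return fib_memo(n - 1) + fib_memo(n - 2)

print(fib_naive(10), fib_memo(10))  # 55 55
```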