
Dynamic Programming

Buildup

A problem is said to have an Optimal Substructure when the solution to the larger problem is built from the solutions to smaller subproblems. An easy example is computing the Fibonacci numbers: the $n$-th Fibonacci number is given by $a_{n} = a_{n-1} + a_{n-2}$, so the larger problem $a_{n}$ contains the smaller problems $a_{n-1}$ and $a_{n-2}$.

A simple way to solve such a problem is with a recursive function. Most problems with an optimal substructure do look solvable by recursion, but actually implementing it reveals that it is immensely inefficient, because the same subproblems are computed over and over in duplicated calls.

Definition

Dynamic Programming is a technique used to circumvent these difficulties, advisable under the following conditions:

  • (i): The given problem has an optimal substructure.
  • (ii): Implementing it with a recursive function is inefficient.

Example

The concept of dynamic programming is simple: save the solutions to the small problems so that duplicate calls are skipped. For instance, to calculate the 6th Fibonacci number $a_{6}$, instead of recomputing $$ a_{1} = 1 \\ a_{2} = 1 \\ a_{3} = 2 \\ a_{4} = 3 \\ a_{5} = 5 $$ over and over, these values are stored in memory, and only $a_{4}$ and $a_{5}$ are looked up to compute $a_{6} = a_{5} + a_{4} = 8$. This is called Memoization1.
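As a sketch of top-down memoization (the function name `fib` and the use of `functools.lru_cache` are my choices, not part of the original post), the recursive form can be kept while the cache eliminates the duplicate calls:

```python
from functools import lru_cache

@lru_cache(maxsize=None)
def fib(n):
    # Each distinct n is computed only once; repeated calls hit the cache.
    if n == 1 or n == 2:
        return 1
    return fib(n - 1) + fib(n - 2)

print(fib(6))   # 8
```

This keeps the recursive structure of the definition while gaining the efficiency of dynamic programming: the cache plays the role of the table.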

Despite how it is explained, nothing about it seems particularly Dynamic. The term originally comes from the field of control, where solutions are progressively saved in a table. However, Bellman, the designer of dynamic programming, stated directly in his writings that he chose the word dynamic simply because it sounded cool and was favorable for obtaining funding.

Below is a gif comparing the calculation of $a_{36}=14930352$ by both methods along with their execution times. Dynamic programming took just over 0.01 seconds, while recursion took more than 5 seconds, and the gap only widens as $n$ increases. A simple argument shows that, for a given $n$, recursion takes exponential time while dynamic programming takes linear time.
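One quick way to see the exponential blow-up (a sketch; the function name `naive_calls` is mine): the naive recursion makes $T(n) = T(n-1) + T(n-2) + 1$ calls with $T(1) = T(2) = 1$, which works out to $T(n) = 2a_{n} - 1$, so the call count itself grows like the Fibonacci numbers. The loop-based version, by contrast, performs only $n-2$ additions.

```python
def naive_calls(n):
    # Counts the function calls the naive recursion makes for fib(n):
    # T(n) = T(n-1) + T(n-2) + 1, with T(1) = T(2) = 1.
    if n == 1 or n == 2:
        return 1
    return naive_calls(n - 1) + naive_calls(n - 2) + 1

for n in (10, 20, 30):
    print(n, naive_calls(n))  # 109, 13529, 1664079 calls respectively
```

Since $T(n) = 2a_{n} - 1$ and the Fibonacci numbers grow geometrically, the call count roughly multiplies by the golden ratio each time $n$ increases by one.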

[gif: Honeycam2019-08-2614-03-10.gif — side-by-side comparison of the two methods' execution times]

Code


import time

# Naive recursion: recomputes the same subproblems over and over
def fibo1(n) :
    if n==1 or n==2 :
        return 1
    else :
        return fibo1(n-1) + fibo1(n-2)

# Dynamic programming: build solutions bottom-up and reuse them
def fibo2(n) :
    memoization = [1,1]
    for i in range(n-2) :
        memoization.append(memoization[-1]+memoization[-2])
    return memoization[-1]

start = time.time()
print(fibo2(36))
print("Time taken with dynamic programming: %5.3f" % (time.time() - start))

start = time.time()
print(fibo1(36))
print("Time taken with recursion: %5.3f" % (time.time() - start))

  1. The word derives from memo; it is Memoization, not Memorization. ↩︎