Dynamic Programming Paradigm

The key to solving challenging problems efficiently

Shelly Mohanty
3 min read · Mar 24, 2021

How long would it take to calculate the Fibonacci of 100? And how long would it take if the Fibonacci of 98 and 99 were already known?


I am sure that at some point, every coder has come across the term "dynamic programming" and puzzled over it. Dynamic programming often seems intimidating, yet it remains essential to solving loads of complex problems.

But what exactly is this bewildering paradigm?

Dynamic Programming breaks an optimization problem down into simpler sub-problems, solves each sub-problem only once, and stores its solution so it never has to be recomputed.

The technique is especially useful for optimization problems: problems that try to find the maximum or minimum solution under given constraints. Dynamic Programming works through all possible sub-problems but never recomputes the solution to any of them, trading a little memory for a big gain in speed.

Take the classic example of the Fibonacci series. Most people I know would write the code as a plain recursive function. While this code would serve its purpose, it comes at a high cost.
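A minimal sketch of that recursive approach in Python (the original post shows the code as images; the function name here is my own):

```python
def fib(n):
    # Naive recursion: the same sub-problems get recomputed over and over.
    if n <= 1:
        return n
    return fib(n - 1) + fib(n - 2)
```

The number of calls grows exponentially with n, and that is exactly the cost mentioned above.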

Recursive Fibonacci and its recursion tree (images from GeeksforGeeks and StackOverflow)

If we draw a tree representing each computation required to find the Fibonacci value for n = 5, we see that the sub-problem for n = 2 is solved thrice. Even for a relatively small number, there’s a lot of repeated computation.

Imagine what would happen if I were to calculate the Fibonacci of 10,000 by this method!

A Dynamic Programming approach can avoid this problem. Instead of calculating the Fibonacci for n = 2 three times, we can use an algorithm that calculates it once, stores its value, and accesses this stored value for every subsequent occurrence.

Storing the solutions of sub-problems (in this case, the Fibonacci of 2) so that we do not have to solve them again is what we call Memoization.
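A minimal memoized sketch in Python (the dictionary plays the role of the stored table; the names are my own):

```python
def fib(n, memo=None):
    # Memoized recursion: each sub-problem is solved exactly once
    # and looked up from the memo table on every later occurrence.
    if memo is None:
        memo = {}
    if n in memo:
        return memo[n]
    if n <= 1:
        return n
    memo[n] = fib(n - 1, memo) + fib(n - 2, memo)
    return memo[n]
```

With the memo in place, fib(100) returns instantly: only 101 distinct sub-problems are ever solved.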

Dynamic programming is quite powerful: it yields the optimal solutions to the sub-problems along the way as well as the overall optimal solution to the problem.
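The same idea can also be run bottom-up, filling in answers from the smallest sub-problems upward (often called tabulation). A minimal sketch, which for huge inputs like the Fibonacci of 10,000 also sidesteps Python's recursion-depth limit:

```python
def fib(n):
    # Bottom-up DP: only the last two sub-problem solutions are needed,
    # so the full table can be collapsed into two variables.
    if n <= 1:
        return n
    prev, curr = 0, 1
    for _ in range(2, n + 1):
        prev, curr = curr, prev + curr
    return curr
```

This runs in linear time and constant extra space.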


But where else can this concept be used?

There are two key attributes that a problem should have for dynamic programming to be applicable: optimal substructure and overlapping sub-problems.

Optimal substructure means that the overall optimal solution can be obtained by combining optimal solutions to the sub-problems.

Overlapping sub-problems means that the space of sub-problems is small, i.e., the same sub-problems come up again and again. Only then does it make sense to store their solutions in memory!
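Fibonacci has both properties: F(n) = F(n-1) + F(n-2) builds the answer for n directly out of the answers for smaller inputs (optimal substructure), and, as the recursion tree above showed, the naive algorithm keeps revisiting the same values (overlapping sub-problems).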

Some popular algorithms where we see the use of Dynamic Programming are:

  • Knapsack problem (a sketch follows this list)
  • Longest common subsequence
  • All-pairs shortest paths by Floyd-Warshall
  • Single-source shortest paths by Bellman-Ford
  • Weighted interval (process) scheduling
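
As promised above, here is a minimal bottom-up sketch of the 0/1 knapsack problem (the function name and the example values are my own):

```python
def knapsack(values, weights, capacity):
    # best[c] = maximum total value achievable with capacity c.
    best = [0] * (capacity + 1)
    for value, weight in zip(values, weights):
        # Go through capacities downward so each item is taken at most once.
        for c in range(capacity, weight - 1, -1):
            best[c] = max(best[c], best[c - weight] + value)
    return best[capacity]

# Example: items worth 60, 100, 120 weighing 10, 20, 30, with capacity 50.
print(knapsack([60, 100, 120], [10, 20, 30], 50))  # -> 220
```

Note how it fits the template: the sub-problem is the best value for a given remaining capacity, each sub-problem is solved once, and the stored answers combine into the overall optimum.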

To conclude, most Dynamic Programming problems you'll encounter already exist in some form and can be solved along the same lines as the algorithms above.

Hope this blog was useful.
Thanks for reading!
