## Dynamic Programming (Overview)

You all might have heard a lot about Dynamic Programming and its awesomeness. Today, I will explain to you why exactly it is as awesome as we say it is. So let me start with the basic definition of Dynamic Programming:

It is a method to solve complex problems by breaking them down into simpler sub-problems which, once solved, do not need to be solved again in the future. This can be done with problems where an optimal substructure can be found, i.e. where an optimal solution can be built from optimal solutions to its sub-problems. The solutions to these sub-problems are then combined to form the solution to the bigger problem. There are two kinds of Dynamic Programming. As Wikipedia defines them –

Top-down dynamic programming simply means storing the results of certain calculations, which are later used again since the completed calculation is a sub-problem of a larger calculation. Bottom-up dynamic programming involves formulating a complex calculation as a recursive series of simpler calculations. In other words, top-down starts from the full problem and caches sub-results as it recurses (this caching is often called memoization), while bottom-up solves the smallest sub-problems first and builds upward from them (often called tabulation).
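To make the two styles concrete, here is a small sketch using the Fibonacci numbers (my example, not from the definition above) written both ways:

```python
from functools import lru_cache

# Top-down: recurse from the full problem; lru_cache stores each
# sub-result so it is computed only once.
@lru_cache(maxsize=None)
def fib_top_down(n: int) -> int:
    if n < 2:
        return n
    return fib_top_down(n - 1) + fib_top_down(n - 2)

# Bottom-up: start from the smallest sub-problems (fib(0), fib(1))
# and build upward, keeping only the last two values.
def fib_bottom_up(n: int) -> int:
    if n < 2:
        return n
    prev, curr = 0, 1
    for _ in range(2, n + 1):
        prev, curr = curr, prev + curr
    return curr
```

Both compute the same values; the difference is only in the order in which the sub-problems get solved.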

For example, if you have an array and the problem given to you is one of dynamic programming, you will most probably be calculating a value at each index based on the values you calculated at previous indices. Please note, doing this requires an optimal substructure to be found. (Some of you might find that task a pretty daunting one!)
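As a sketch of this index-by-index pattern, consider the classic "climbing stairs" count (my example, not from the post): each entry of the `dp` array depends only on the one or two entries before it.

```python
def climb_ways(n: int) -> int:
    """Number of ways to climb n steps, taking 1 or 2 steps at a time."""
    dp = [0] * (n + 1)
    dp[0] = 1  # one way to be at the bottom: take no steps
    for i in range(1, n + 1):
        # the value at index i depends only on earlier indices
        dp[i] = dp[i - 1] + (dp[i - 2] if i >= 2 else 0)
    return dp[n]
```

The optimal substructure here is that any way of reaching step `i` must arrive from step `i - 1` or step `i - 2`.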

Some common examples of problems where Dynamic Programming can be useful are finding the longest increasing subsequence (not necessarily contiguous) of an array, finding the contiguous subarray of an array with the maximum sum, computing the longest common subsequence of two given sequences, machine scheduling, etc.
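To give a flavour of the first two problems on that list, here are minimal sketches (one straightforward O(n²) approach for the longest increasing subsequence, and Kadane's algorithm for the maximum-sum contiguous subarray):

```python
def longest_increasing_subsequence(a: list[int]) -> int:
    """Length of the longest (not necessarily contiguous) increasing subsequence."""
    if not a:
        return 0
    n = len(a)
    # dp[i] = length of the longest increasing subsequence ending at index i
    dp = [1] * n
    for i in range(1, n):
        for j in range(i):
            if a[j] < a[i]:
                dp[i] = max(dp[i], dp[j] + 1)
    return max(dp)

def max_subarray_sum(a: list[int]) -> int:
    """Maximum sum of a contiguous subarray (Kadane's algorithm)."""
    best = curr = a[0]
    for x in a[1:]:
        # best sum of a run ending here: extend the previous run or start anew
        curr = max(x, curr + x)
        best = max(best, curr)
    return best
```

Both follow the same pattern as before: define what the answer "ending at index i" means, then compute it from earlier indices.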

I will discuss such problems in the upcoming posts. 🙂

*PS: A problem discussed earlier, relating to two finite sequences and their intersections, is somewhat an example of Dynamic Programming as well.*