Introduction to Dynamic Programming

Sweet Codey
A free video tutorial from Sweet Codey
4.3 instructor rating • 2 courses • 9,100 students

Learn more from the full course

Dynamic Programming - I

Master the art of solving Dynamic Programming problems and acing the Coding Interviews

02:28:32 of on-demand video • Updated February 2019

  • Be able to visualize and understand most Dynamic Programming problems.
  • Develop a strong intuition for any kind of Dynamic Programming problem when approaching new problems.
  • Understand what kind of questions are asked in Coding Interviews.
  • Gain Confidence for the Coding Interviews.
What is dynamic programming? The definition says it is a technique for solving a complex problem by first breaking it into a collection of similar sub-problems. Along with that, we also remember to store the computed solutions of the sub-problems to avoid repetitive computations. The thing to remember here is basically this: break our problem into similar sub-problems, and then solve each sub-problem just once by storing its result in a separate space. It's perfectly fine if you didn't get all of it, because every definition is incomplete without an example. So I would like to explain all of this using a very interesting series of numbers called the Fibonacci numbers.

The Fibonacci series is as follows: 0, 1, 1, 2, 3, 5, and it goes on. Are the numbers in this series just randomly written? No! In fact, they can be calculated using the formula given here. The 0th and the 1st terms are fixed at 0 and 1 respectively. For the rest of them we follow this relation, which says the n'th term of the Fibonacci series is simply equal to the sum of the previous two Fibonacci numbers. For example, F[2] is equal to F[1] + F[0], or 1 + 0, or 1. Similarly, F[3] is equal to F[2] + F[1], or 1 + 1, or 2, and so on.

Now suppose we are told to calculate F[20]. How would we do it? It's simple: we just use the formula for the n'th term of the Fibonacci series. Using this we have F[20] = F[19] + F[18]. And to calculate F[19] we can again write F[19] = F[18] + F[17], and similarly for F[18], F[17], and all the others. Now remember, we said in the definition that in dynamic programming we try to write the solution to a problem in terms of the solutions to similar problems. That's exactly what we are doing here: we are writing the solution to F[20] in terms of F[19] and F[18], which are nothing but similar sub-problems.
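The recurrence above translates directly into code. As a minimal Python sketch (the function name `fib` is just for illustration), the naive recursive version mirrors the formula exactly, and it is this version whose call tree recomputes the same sub-problems:

```python
def fib(n):
    # Base cases: F[0] = 0 and F[1] = 1 are fixed.
    if n < 2:
        return n
    # Recurrence: F[n] = F[n-1] + F[n-2].
    # Each call spawns two more calls, so the same F[k]
    # is recomputed many times in the recursion tree.
    return fib(n - 1) + fib(n - 2)

print(fib(20))  # 6765
```

This is correct but slow: the number of calls grows roughly like the Fibonacci numbers themselves, which is the wastage the next paragraph describes.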
If you look at the definition again, we also said that we are very particular about solving each sub-problem just once; we don't want to solve it multiple times, because that would mean unnecessary computations. But here, if you take a closer look at our tree, we can see that the sub-problems are being calculated multiple times. For instance, F[18] is being calculated here and here. Similarly, F[17] is being calculated multiple times in this tree. So although this technique is very beneficial, because it solves a problem in terms of similar sub-problems, we need to be cautious while using it: if we are not careful about storing the results we have computed once, it can lead to a wastage of resources. It's basically like saying: Hey! I want to calculate F[18], and F[18] can be calculated using this tree. Then here we require F[18] again, and if we go and calculate F[18] all over again as we did before, it will definitely lead to a tremendous wastage of resources and cause poor performance. So we somehow need to solve this problem. Can you think of a solution?

There actually exists a very simple and intuitive solution, which was explicitly written in the second part of the definition: why don't we save every solution the first time we compute it? For example, when we are asked to calculate F[20] we need F[19] and F[18]; for F[19] we need F[18] and F[17]; and further, for F[18] we need F[17] and F[16]. Now suppose we have calculated F[17] and F[16] by going down like this, and we have saved them in an array over here. Using these we can calculate F[18] too, and we save it in this array as well. Now when we are asked to calculate F[19], we need F[18], which we have just calculated and saved, and F[17], which we calculated and saved earlier. So we don't need to calculate it again; we just plug in the earlier computed value here.
F[19] is also calculated like this and again saved in this array. Finally, for F[20] we need F[19] and F[18], both of which were calculated earlier and saved in this array. In this manner we get our answer for F[20]. So, summarizing again, all we do in dynamic programming is these two steps. First, we try to write the solution of our problem in terms of the solutions of similar problems; this step requires keen observation and careful examination of the problem. Second, we avoid repetitive computations by storing the results, or remembering them, as we have done in this case. Essentially, this is how we approach a dynamic programming problem.
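The array-filling process described above can also be written bottom-up: instead of recursing downward, we compute F[0] through F[n] in order, so every sub-problem is solved exactly once before it is needed. A hypothetical Python sketch of this tabulated version:

```python
def fib_table(n):
    # Base cases handled directly.
    if n < 2:
        return n
    # f[i] holds the i'th Fibonacci number, filled left to right,
    # just like the array in the explanation above.
    f = [0] * (n + 1)
    f[1] = 1
    for i in range(2, n + 1):
        f[i] = f[i - 1] + f[i - 2]  # F[i] = F[i-1] + F[i-2]
    return f[n]

print(fib_table(20))  # 6765
```

Both styles embody the same two steps from the summary: express the answer via similar sub-problems, and store each sub-problem's result so it is computed only once.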