
Dynamic Programming and Its Examples

By BYJU'S Exam Prep

Updated on: September 25th, 2023

Dynamic Programming is generally used to solve problems in which overlapping subproblems are present. It is most commonly applied to problems with a recursive structure: dynamic programming is used for optimization, and most optimization problems can be expressed recursively. However, not every recursive problem can be solved using dynamic programming.

Unless the subproblems overlap, as they do in the Fibonacci sequence problem, a recursive solution is simply a divide-and-conquer strategy. Here we will look at dynamic programming in detail, covering how it works, its applications, and a comparison with the Greedy algorithm to understand the concept better.


What is Dynamic Programming?

Dynamic programming breaks a difficult problem down into several smaller, simpler subproblems, solves each of those subproblems just once, and stores the results. To produce the optimal solution for the given problem, a dynamic programming algorithm then looks up the previously solved subproblems and combines their solutions.

It is employed when a solution can be described recursively in terms of solutions to subproblems (optimal substructure). In dynamic programming, the problem is typically solved in a bottom-up manner by building a table of solved subproblems that are then used to solve larger ones.
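
For illustration, here is a minimal bottom-up sketch in Python for the Fibonacci numbers (the function name fib_bottom_up is just an illustrative choice); it fills a table of subproblem results and only then reads off the final answer:

def fib_bottom_up(n):
    # table[i] will hold the i-th Fibonacci number (the i-th subproblem result)
    if n < 2:
        return n
    table = [0] * (n + 1)
    table[1] = 1
    for i in range(2, n + 1):
        # each entry is built from two already-solved subproblems
        table[i] = table[i - 1] + table[i - 2]
    return table[n]

print(fib_bottom_up(10))  # 55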

Working of Dynamic Programming

The basic requirements for dynamic programming are listed below. Dynamic programming applies to problems with the following characteristics:

  • The subproblems overlap and the problem has optimal substructure. In this context, optimal substructure means that an optimization problem can be solved by combining the best solutions to its constituent subproblems. The overlap is illustrated in the small sketch after this list.
  • Because the intermediate results are saved, dynamic programming increases space complexity but reduces time complexity.
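
A rough way to see the overlap, sketched in Python with an illustrative call counter, is to run the plain recursive Fibonacci and count how many times the recursion is entered compared with how many distinct subproblems there really are:

calls = 0

def fib_naive(n):
    # counts how often the recursion is entered; the same n is reached many times
    global calls
    calls += 1
    if n < 2:
        return n
    return fib_naive(n - 1) + fib_naive(n - 2)

fib_naive(20)
print(calls)  # 21891 calls for only 21 distinct subproblems (n = 0 .. 20)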

Dynamic Programming Algorithm

Dynamic programming is an algorithmic technique used to solve optimization problems by breaking them into overlapping subproblems, solving each subproblem only once, and storing the solutions for efficient reuse. It avoids redundant computations and improves efficiency by leveraging the principle of optimality.

The steps that dynamic programming takes are as follows:

  • It divides the complex problem into simpler subproblems.
  • It solves each of these subproblems optimally.
  • It keeps the outcomes of the smaller problems (memoization). Memoization is the process of storing the answers to subproblems.
  • The same sub-problem is never calculated more than once, because its stored answer is reused.
  • Finally, it combines the stored subproblem results to compute the answer to the complex problem (see the short sketch after this list).
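
As a small sketch of these steps, here is a top-down Python version of the 0/1 knapsack problem (names such as knapsack and best are illustrative); the problem is split by item index and remaining capacity, every subproblem's answer is memoized, and stored answers are reused rather than recomputed:

from functools import lru_cache

def knapsack(weights, values, capacity):
    @lru_cache(maxsize=None)
    def best(i, cap):
        # subproblem: best value using items i.. with remaining capacity cap
        if i == len(weights) or cap == 0:
            return 0
        skip = best(i + 1, cap)
        take = 0
        if weights[i] <= cap:
            take = values[i] + best(i + 1, cap - weights[i])
        return max(skip, take)  # combine the subproblem answers
    return best(0, capacity)

print(knapsack((2, 3, 4), (3, 4, 5), 5))  # 7, by taking the items of weight 2 and 3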

Dynamic Programming vs Greedy Algorithms

Dynamic programming and Greedy algorithms are both techniques for optimization. However, Greedy algorithms make a sequence of locally optimal choices in the hope of reaching a global optimum. A Greedy algorithm may therefore make a choice that looks best at the time but turns out to be costly later, and it does not guarantee a globally optimal solution.

On the other hand, dynamic programming determines the best way to solve each subproblem and only then selects the overall optimal solution, as the sketch below illustrates. Check out the difference between Dynamic Programming and Greedy algorithm in detail.
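
A small illustrative sketch of the contrast, assuming coin denominations 1, 3, and 4 and a target of 6: the greedy choice of always taking the largest coin uses three coins, while the dynamic programming table finds the optimal two (the function names are illustrative):

def greedy_coins(coins, target):
    # repeatedly take the largest coin that still fits (locally optimal choice)
    count = 0
    for c in sorted(coins, reverse=True):
        while target >= c:
            target -= c
            count += 1
    return count

def dp_coins(coins, target):
    # best[t] = fewest coins needed to make the amount t
    best = [0] + [float("inf")] * target
    for t in range(1, target + 1):
        for c in coins:
            if c <= t:
                best[t] = min(best[t], best[t - c] + 1)
    return best[target]

print(greedy_coins([1, 3, 4], 6))  # 3 coins: 4 + 1 + 1
print(dp_coins([1, 3, 4], 6))      # 2 coins: 3 + 3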

Dynamic Programming in DAA

Dynamic programming is a powerful algorithmic technique used in the field of Design and Analysis of Algorithms (DAA). It is particularly useful for solving optimization problems by breaking them down into smaller overlapping subproblems and efficiently solving each subproblem only once.

Dynamic programming is a fundamental technique in the field of DAA that enables efficient and optimized solutions to optimization problems with optimal substructure and overlapping subproblems.

Application of Dynamic Programming

Several classic problems, along with the time complexity of their dynamic programming solutions, are listed below:

  • Fibonacci number series (O(n))
  • 0/1 Knapsack problem (O(mn))
  • All-pairs shortest paths by Floyd-Warshall (O(n³))
  • Longest common subsequence (O(mn)) (see the sketch after this list)
  • Matrix chain multiplication (O(n³))
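
As a concrete instance from this list, here is a minimal O(mn) Python sketch of the longest common subsequence (the name lcs_length is illustrative); entry (i, j) of the table covers the first i characters of one string and the first j characters of the other:

def lcs_length(a, b):
    m, n = len(a), len(b)
    # dp[i][j] = length of the LCS of a[:i] and b[:j]
    dp = [[0] * (n + 1) for _ in range(m + 1)]
    for i in range(1, m + 1):
        for j in range(1, n + 1):
            if a[i - 1] == b[j - 1]:
                dp[i][j] = dp[i - 1][j - 1] + 1
            else:
                dp[i][j] = max(dp[i - 1][j], dp[i][j - 1])
    return dp[m][n]

print(lcs_length("AGGTAB", "GXTXAYB"))  # 4 ("GTAB")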

Dynamic Programming Examples

Here are a few examples of the problems that can be solved using dynamic programming techniques:

  • Computing the nth Fibonacci number, where each number is the sum of the two preceding ones (F(n) = F(n-1) + F(n-2)).
  • Finding the longest subsequence that is common to two or more sequences.
  • Determining the most valuable combination of items to place into a knapsack with a limited weight capacity (the 0/1 Knapsack Problem).
  • Calculating the minimum number of operations (insertion, deletion, substitution) required to transform one string into another (see the sketch after this list).
  • Determining the minimum number of coins needed to make change for a given target value using a given set of coin denominations.
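
For example, a brief Python sketch of the edit distance computation described above (the name edit_distance is illustrative), assuming unit cost for each insertion, deletion, and substitution:

def edit_distance(a, b):
    m, n = len(a), len(b)
    # dp[i][j] = minimum operations to turn a[:i] into b[:j]
    dp = [[0] * (n + 1) for _ in range(m + 1)]
    for i in range(m + 1):
        dp[i][0] = i  # delete all i characters
    for j in range(n + 1):
        dp[0][j] = j  # insert all j characters
    for i in range(1, m + 1):
        for j in range(1, n + 1):
            if a[i - 1] == b[j - 1]:
                dp[i][j] = dp[i - 1][j - 1]
            else:
                dp[i][j] = 1 + min(dp[i - 1][j],      # deletion
                                   dp[i][j - 1],      # insertion
                                   dp[i - 1][j - 1])  # substitution
    return dp[m][n]

print(edit_distance("kitten", "sitting"))  # 3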