20

I have been working on dynamic programming for some time. The canonical way to evaluate a dynamic programming recursion is to create a table of all necessary values and fill it row by row. See, for example, Cormen, Leiserson et al., "Introduction to Algorithms" for an introduction.

I focus on the table-based computation scheme in two dimensions (row-by-row filling) and investigate the structure of cell dependencies, i.e. which cells have to be computed before a given cell can be computed. We denote by $\Gamma(\mathbf{i})$ the set of indices of cells the cell $\mathbf{i}$ depends on. Note that $\Gamma$ needs to be cycle-free.

I abstract from the actual function that is computed and concentrate on its recursive structure. Formally, I consider a recurrence $d$ to be a dynamic programming recurrence if it has the form

$\qquad d(\mathbf{i}) = f(\mathbf{i}, \widetilde{\Gamma}_d(\mathbf{i}))$

with $\mathbf{i} \in [0\dots m] \times [0\dots n]$, $\widetilde{\Gamma}_d(\mathbf{i}) = \{(\mathbf{j},d(\mathbf{j})) \mid \mathbf{j} \in \Gamma_d(\mathbf{i}) \}$ and $f$ some (computable) function that does not use $d$ other than via $\widetilde{\Gamma}_d$.
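
As a sketch of this scheme in Python (`gamma` and `f` are hypothetical stand-ins for $\Gamma_d$ and $f$; I assume the dependency structure admits row-major evaluation order):

```python
def evaluate(m, n, gamma, f):
    """Fill the table for d row by row.  Assumes gamma(i) only refers
    to cells that come earlier in row-major order, so every dependency
    is already available when cell i is reached."""
    d = {}
    for row in range(m + 1):
        for col in range(n + 1):
            i = (row, col)
            # The set Gamma~_d(i): dependency indices paired with their values.
            deps = [(j, d[j]) for j in gamma(i)]
            d[i] = f(i, deps)
    return d
```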

When restricting the granularity of $\Gamma_d$ to rough areas (to the left, top-left, top, top-right, ... of the current cell), one observes that there are essentially three cases (up to symmetries and rotation) of valid dynamic programming recursions that determine how the table can be filled:

[Figure: the three cases of dynamic programming cell dependencies]

The red areas denote (overapproximations of) $\Gamma$. Cases one and two admit subsets; case three is the worst case (up to index transformation). Note that it is not strictly required that the whole red areas be covered by $\Gamma$; some cells in every red part of the table are enough to paint it red. White areas are explicitly required to contain no required cells.

Examples of case one are edit distance and longest common subsequence; case two applies to Bellman–Ford and CYK. Less obvious examples include recurrences that work on the diagonals rather than rows (or columns), as these can be rotated to fit the proposed cases; see Joe's answer for an example.
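
To make case one concrete, here is a sketch of edit distance in Python; cell $(i,j)$ only looks at its left, top and top-left neighbours, so row-by-row filling is valid:

```python
def edit_distance(a, b):
    """Levenshtein distance; a case-one recurrence, since cell (i, j)
    depends only on (i-1, j), (i, j-1) and (i-1, j-1)."""
    m, n = len(a), len(b)
    d = [[0] * (n + 1) for _ in range(m + 1)]
    for i in range(m + 1):
        for j in range(n + 1):
            if i == 0:
                d[i][j] = j                  # insert j characters
            elif j == 0:
                d[i][j] = i                  # delete i characters
            else:
                d[i][j] = min(
                    d[i - 1][j] + 1,                           # deletion
                    d[i][j - 1] + 1,                           # insertion
                    d[i - 1][j - 1] + (a[i - 1] != b[j - 1]),  # substitution
                )
    return d[m][n]

print(edit_distance("kitten", "sitting"))  # -> 3
```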

I have no (natural) example for case three, though! So my question is: What are examples for case three dynamic programming recursions/problems?

Raphael

5 Answers

15

There are plenty of other examples of dynamic programming algorithms that don't fit your pattern at all.

  • The longest increasing subsequence problem requires only a one-dimensional table.

  • There are several natural dynamic programming algorithms whose tables require three or even more dimensions. For example: Find the maximum-area white rectangle in a bitmap. The natural dynamic programming algorithm uses a three-dimensional table.

  • But most importantly, dynamic programming isn't about tables; it's about unwinding recursion. There are lots of natural dynamic programming algorithms where the data structure used to store intermediate results is not an array, because the recurrence being unwound isn't over a range of integers. Two easy examples are finding the largest independent set of vertices in a tree, and finding the largest common subtree of two trees. A more complex example is the $(1+\epsilon)$-approximation algorithm for the Euclidean traveling salesman problem by Arora and Mitchell.
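
Two of these as Python sketches (my own naming; assumptions noted in the comments): the first uses the one-dimensional LIS table, the second memoises over tree vertices rather than array indices, illustrating that the "table" need not be an array at all.

```python
from functools import lru_cache

def lis_length(a):
    """Longest increasing subsequence: best[i] is the length of the
    longest increasing subsequence ending at index i -- a purely
    one-dimensional table."""
    best = []
    for i, x in enumerate(a):
        best.append(1 + max((best[j] for j in range(i) if a[j] < x), default=0))
    return max(best, default=0)

def max_independent_set(children, root):
    """Size of a largest independent set in a rooted tree.  `children`
    maps every vertex (leaves included) to a list of its children; the
    memoisation key is a (vertex, excluded?) pair, not an array index."""
    @lru_cache(maxsize=None)
    def best(v, excluded):
        skip = sum(best(c, False) for c in children[v])     # leave v out
        if excluded:
            return skip
        take = 1 + sum(best(c, True) for c in children[v])  # put v in
        return max(skip, take)
    return best(root, False)

print(lis_length([3, 1, 4, 1, 5, 9, 2, 6]))                         # -> 4
print(max_independent_set({'a': ['b'], 'b': ['c'], 'c': []}, 'a'))  # -> 2
```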

JeffE
3

Computing the Ackermann function is in this spirit. To compute $A(m,n)$ you need to know $A(m,n-1)$ and $A(m-1,k)$ for some large $k$. Either the second coordinate decreases, or the first decreases and the second potentially increases.

This does not fit the requirements perfectly, since the number of columns is infinite and the computation is usually done top-down with memoization, but I think it is worth mentioning.
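
A Python sketch of that top-down computation (the "table" is the memoisation cache, indexed by pairs $(m,n)$ with unbounded second coordinate):

```python
import sys
from functools import lru_cache

sys.setrecursionlimit(100_000)  # the recursion gets deep even for small inputs

@lru_cache(maxsize=None)
def ackermann(m, n):
    """A(m, n) needs A(m, n-1) first, then A(m-1, k) for a potentially
    huge k: one step left in the row, or up and arbitrarily far right."""
    if m == 0:
        return n + 1
    if n == 0:
        return ackermann(m - 1, 1)
    return ackermann(m - 1, ackermann(m, n - 1))

print(ackermann(2, 3))  # -> 9
```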

sdcvvc
2

This doesn't fit case 3 exactly, but I don't know if any of your cases capture a very common problem used to teach dynamic programming: Matrix Chain Multiplication. To solve this problem (and many others; this is just the canonical one), we fill the table diagonal by diagonal instead of row by row.

So the rule is something like this:

[Figure: diagonal-by-diagonal filling order]
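
A Python sketch of this diagonal order (the standard matrix chain recurrence; cell $(i,j)$ needs entries to its left in row $i$ and below in column $j$, so each diagonal only depends on earlier diagonals):

```python
def matrix_chain_order(dims):
    """Minimum number of scalar multiplications to compute a product of
    k matrices, where matrix i has shape dims[i] x dims[i+1].
    cost[i][j] covers the chain from matrix i to matrix j inclusive."""
    k = len(dims) - 1
    cost = [[0] * k for _ in range(k)]
    for span in range(1, k):            # span = j - i: one diagonal at a time
        for i in range(k - span):
            j = i + span
            cost[i][j] = min(
                cost[i][t] + cost[t + 1][j] + dims[i] * dims[t + 1] * dims[j + 1]
                for t in range(i, j)
            )
    return cost[0][k - 1]

print(matrix_chain_order([10, 30, 5, 60]))  # -> 4500: (A1 A2) A3 is cheapest
```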

Joe
0

I know it's a silly example, but I think a simple iterative problem like

Find the sum of the numbers in a square matrix

might qualify. The traditional "for each row, for each column" loop kind of looks like your case 3.
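
Spelled out as a recurrence (a Python sketch; the partial-sum table is my own framing, not part of the original problem), each cell's running total depends on the cell visited just before it, which for the first column is the end of the previous row:

```python
def matrix_sum(a):
    """Row-major running totals: s[i][j] depends on the left neighbour,
    or on the last cell of the previous row when j == 0."""
    m, n = len(a), len(a[0])
    s = [[0] * n for _ in range(m)]
    for i in range(m):
        for j in range(n):
            prev = (s[i][j - 1] if j > 0             # left neighbour
                    else s[i - 1][n - 1] if i > 0    # end of previous row
                    else 0)                          # very first cell
            s[i][j] = prev + a[i][j]
    return s[m - 1][n - 1]

print(matrix_sum([[1, 2], [3, 4]]))  # -> 10
```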

hugomg
-1

This is not exactly the case you are looking for, but I have an idea off the top of my head that might be of help.

Problem:

Given an $n \times n$ matrix $M$ of distinct integers in which the entries of each row (from left to right) and of each column (from top to bottom) are sorted in increasing order, give an efficient algorithm to find the position of an integer $x$ in $M$ (or report that the integer is not present in the matrix).

Answer

This can be solved in the following recursive way:

We have an $n \times n$ matrix. Let $k = \lceil\frac{1+n}{2}\rceil$. Now compare $x$ with $m_{k,k}$. If $x < m_{k,k}$, we can discard every element $m_{i,j}$ with $k \leq i \leq n$ and $k \leq j \leq n$, i.e., the search space is reduced by $1/4$. Similarly, when $x > m_{k,k}$, the search space is reduced by $1/4$. So after the first comparison, the size of the search space becomes $\frac{3}{4} n^2$. You can continue recursively: we make three comparisons, one of $x$ with the middle element of each of the three remaining quadrants, and the size of the remaining search space then becomes $\left(\frac{3}{4}\right)^2 n^2$, and so on.
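
A Python sketch of this recursion (my own quadrant bookkeeping; returns the position or `None`):

```python
def search(M, x):
    """Find x in an n x n matrix whose rows and columns are sorted
    increasingly: compare x with the middle element of the current
    submatrix, discard one quadrant, recurse into the other three."""
    def go(top, left, bottom, right):   # inclusive bounds of a submatrix
        if top > bottom or left > right:
            return None
        r, c = (top + bottom) // 2, (left + right) // 2
        if M[r][c] == x:
            return (r, c)
        if M[r][c] < x:
            # Top-left quadrant is entirely <= M[r][c] < x: discard it.
            return (go(top, c + 1, r, right)             # top-right
                    or go(r + 1, left, bottom, c)        # bottom-left
                    or go(r + 1, c + 1, bottom, right))  # bottom-right
        # M[r][c] > x: bottom-right quadrant is entirely > x: discard it.
        return (go(top, left, r - 1, c - 1)              # top-left
                or go(top, c, r - 1, right)              # top strip, right part
                or go(r, left, bottom, c - 1))           # bottom-left block

    n = len(M)
    return go(0, 0, n - 1, n - 1)

M = [[1, 4, 7],
     [2, 5, 8],
     [3, 6, 9]]
print(search(M, 6))   # -> (2, 1)
print(search(M, 10))  # -> None
```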

0x0