I have been working on dynamic programming for some time. The canonical way to evaluate a dynamic programming recursion is to build a table of all needed values and fill it row by row. See, for instance, Cormen, Leiserson et al.: "Introduction to Algorithms" for an introduction.
I focus on the table-based, two-dimensional computation scheme (row-by-row filling) and investigate the structure of cell dependencies, i.e. which cells have to be computed before another cell can be evaluated. Denote by D(i,j) the set of indices of cells on which cell (i,j) depends. Note that D has to be cycle-free.
I abstract away from the actual function that is computed and concentrate on its recursive structure. Formally, I consider a recurrence to be dynamic programming if it has the form

  v(i,j) = f( v(i',j') : (i',j') ∈ D(i,j) )

with D(i,j) the set of cells on which (i,j) depends, and f some (computable) function that does not use i and j other than via D(i,j).
When restricting the granularity of the dependency sets D(i,j) to rough areas (to the left, top-left, top, top-right, ... of the current cell), one observes that there are essentially three cases (up to symmetry and rotation) of valid dynamic programming recursions that inform how the table can be filled:
The red areas denote (overapproximations of) the dependency sets D(i,j). Cases one and two admit subsets; case three is the worst case (up to index transformations). Note that it is not strictly required that the whole red area be covered by D(i,j); some cells in every red part of the table are sufficient to paint it red. White areas are explicitly required not to contain any required cells.
Examples of case one are edit distance and longest common subsequence; case two applies to Bellman-Ford and CYK. Less obvious examples include recurrences that work on the diagonals rather than on rows (or columns), as they can be rotated to fit the proposed cases; see Joe's answer for an example.
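To make the case-one shape concrete, here is a minimal Python sketch (names are my own) of the edit-distance recurrence; cell (i, j) reads only its left, top and top-left neighbours, so its dependency set stays within the first case:

```python
def edit_distance(a: str, b: str) -> int:
    """Levenshtein distance via row-by-row table filling.

    Cell (i, j) depends only on (i-1, j), (i, j-1) and (i-1, j-1),
    i.e. its dependency set lies to the left/top-left/top (case one).
    """
    n, m = len(a), len(b)
    # d[i][j] = distance between the prefixes a[:i] and b[:j]
    d = [[0] * (m + 1) for _ in range(n + 1)]
    for i in range(n + 1):
        d[i][0] = i
    for j in range(m + 1):
        d[0][j] = j
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            cost = 0 if a[i - 1] == b[j - 1] else 1
            d[i][j] = min(d[i - 1][j] + 1,        # deletion
                          d[i][j - 1] + 1,        # insertion
                          d[i - 1][j - 1] + cost) # substitution / match
    return d[n][m]
```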
I have no (natural) example for case three, though! So my question is: What are examples for case three dynamic programming recursions/problems?
Answers:
There are plenty of other examples of dynamic programming algorithms that don't fit your pattern at all.
The longest increasing subsequence problem requires only a one-dimensional table.
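A sketch of that one-dimensional table, assuming the standard quadratic formulation (names are mine): L[i] holds the length of the longest increasing subsequence ending at position i.

```python
def lis_length(xs):
    """Longest increasing subsequence length with a one-dimensional table:
    L[i] = length of the longest increasing subsequence ending at xs[i]."""
    if not xs:
        return 0
    L = [1] * len(xs)
    for i in range(1, len(xs)):
        for j in range(i):
            if xs[j] < xs[i]:       # xs[i] can extend the sequence ending at j
                L[i] = max(L[i], L[j] + 1)
    return max(L)
```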
There are several natural dynamic programming algorithms whose tables require three or even more dimensions. For example: Find the maximum-area white rectangle in a bitmap. The natural dynamic programming algorithm uses a three-dimensional table.
But most importantly, dynamic programming isn't about tables; it's about unwinding recursion. There are lots of natural dynamic programming algorithms where the data structure used to store intermediate results is not an array, because the recurrence being unwound isn't over a range of integers. Two easy examples are finding the largest independent set of vertices in a tree, and finding the largest common subtree of two trees. A more complex example is the (1+ϵ)-approximation algorithm for the Euclidean traveling salesman problem by Arora and Mitchell.
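To illustrate the tree case, here is a hedged sketch of the largest-independent-set recurrence (the adjacency-dict representation and all names are my own choice, not from the answer); the "table" is indexed by tree vertices rather than integers:

```python
def max_independent_set(tree, root):
    """Size of the largest independent set of vertices in a rooted tree.

    tree: adjacency dict {vertex: [children]} rooted at `root`.
    For each vertex v we compute two values bottom-up:
      inc[v] = best set size in v's subtree that includes v
      exc[v] = best set size in v's subtree that excludes v
    """
    inc, exc = {}, {}

    def visit(v):  # post-order traversal unwinds the recurrence
        inc[v], exc[v] = 1, 0
        for c in tree.get(v, []):
            visit(c)
            inc[v] += exc[c]               # v taken -> children must be excluded
            exc[v] += max(inc[c], exc[c])  # v not taken -> children are free

    visit(root)
    return max(inc[root], exc[root])
```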
Computing the Ackermann function is in this spirit. To compute A(m,n) you need to know A(m,n−1) and A(m−1,k) for some large k. Either the second coordinate decreases, or the first decreases and the second potentially increases.
This does not fit the requirements exactly, since the number of columns is infinite and the computation is usually done top-down with memoization, but I think it is worth mentioning.
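A direct sketch of that top-down evaluation with memoization (the recursion-limit bump is just a practical workaround for CPython's default stack limit, not part of the algorithm):

```python
import sys
from functools import lru_cache

sys.setrecursionlimit(100_000)  # Ackermann recursion gets deep quickly

@lru_cache(maxsize=None)
def ackermann(m: int, n: int) -> int:
    """Top-down evaluation with memoization: A(m, n) needs A(m, n-1)
    first, and then A(m-1, k) for the (possibly huge) k = A(m, n-1)."""
    if m == 0:
        return n + 1
    if n == 0:
        return ackermann(m - 1, 1)
    return ackermann(m - 1, ackermann(m, n - 1))
```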
This doesn't fit case 3 exactly, but I don't know if any of your cases capture a very common problem used to teach dynamic programming: matrix chain multiplication. To solve this problem (and many others; this is just the canonical one) we fill the table diagonal by diagonal instead of row by row.
So the rule is something like this: each cell on a diagonal depends on cells from all shorter diagonals, one pair for every way of splitting the chain.
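A sketch of the usual diagonal-by-diagonal fill (indexing follows the common textbook formulation; the names are mine): m[i][j] is the cheapest way to multiply the chain Ai..Aj, and computing it needs entries in the same row to the left and the same column below, so no single row order works.

```python
def matrix_chain_order(p):
    """Minimum number of scalar multiplications to compute A1 * ... * An,
    where matrix Ai has dimensions p[i-1] x p[i].

    The table is filled diagonal by diagonal: the diagonal index is the
    chain length, and each cell depends on cells from shorter diagonals.
    """
    n = len(p) - 1  # number of matrices in the chain
    m = [[0] * (n + 1) for _ in range(n + 1)]
    for length in range(2, n + 1):          # chain length = diagonal index
        for i in range(1, n - length + 2):
            j = i + length - 1
            m[i][j] = float("inf")
            for k in range(i, j):           # split into Ai..Ak and A(k+1)..Aj
                cost = m[i][k] + m[k + 1][j] + p[i - 1] * p[k] * p[j]
                m[i][j] = min(m[i][j], cost)
    return m[1][n]
```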
I know it's a silly example, but I think a simple iterative problem might qualify. The traditional "for each row, for each column" evaluation order kind of looks like your case 3.
This is not exactly what you are looking for, but I have an idea off the top of my head that might be of help.
Problem :
Given an×n matrix say, M with distinct integers in which the entries of each row (from left to right) and each column (from top to bottom) are both sorted in increasing order and the entries in each column are in increasing order . Give an efficient algorithm to find the position of an integer x in M (or say the integer is not present in the matrix).
Answer
This can be solved in the following recursive way:
We have an n×n matrix. Let k = ⌈(1+n)/2⌉. Now compare x with m_{k,k}. If x < m_{k,k}, we can discard every element m_{i,j} with k ≤ i ≤ n and k ≤ j ≤ n, i.e., the search space is reduced by 1/4. Similarly, when x > m_{k,k}, the search space is also reduced by 1/4. So after the first iteration, the size of the search space becomes (3/4)n². You can continue recursively: make 3 comparisons, x against the middle element of each of the three remaining quadrants, after which the size of the remaining search space becomes (3/4)²n², and so on.
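A possible Python sketch of this quadrant-discarding recursion (the helper names and the submatrix bookkeeping are mine, not from the answer); each comparison with the middle element rules out a quarter of the current search space, and the three remaining quadrants are searched recursively:

```python
def search_sorted_matrix(M, x):
    """Search for x in an n x n matrix whose rows (left to right) and
    columns (top to bottom) are sorted increasingly.
    Returns the position (i, j) of x, or None if x is not present."""

    def rec(r1, c1, r2, c2):
        # search the submatrix M[r1..r2][c1..c2] (inclusive bounds)
        if r1 > r2 or c1 > c2:
            return None
        rm, cm = (r1 + r2) // 2, (c1 + c2) // 2
        if M[rm][cm] == x:
            return (rm, cm)
        if M[rm][cm] > x:
            # discard the bottom-right quadrant (all entries >= M[rm][cm])
            return (rec(r1, c1, rm - 1, c2)        # top strip
                    or rec(rm, c1, r2, cm - 1))    # bottom-left part
        # M[rm][cm] < x: discard the top-left quadrant (entries <= M[rm][cm])
        return (rec(r1, cm + 1, rm, c2)            # top-right part
                or rec(rm + 1, c1, r2, c2))        # bottom strip

    n = len(M)
    return rec(0, 0, n - 1, n - 1)
```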