A case distinction on dynamic programming: example needed!


I have been working on dynamic programming for a while. The canonical way to evaluate a dynamic programming recurrence is to create a table of all required values and fill it row by row. See, for example, Cormen, Leiserson et al.: "Introduction to Algorithms" for an introduction.
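For concreteness, here is a minimal sketch of this canonical scheme, instantiated for the edit-distance recurrence (any table-based algorithm of this shape would do):

```python
def edit_distance(a: str, b: str) -> int:
    """Canonical table-based DP: fill an (m+1) x (n+1) table row by row.

    Every cell needs only its left, top, and top-left neighbours, so by
    the time a cell is computed, all its dependencies are ready."""
    m, n = len(a), len(b)
    d = [[0] * (n + 1) for _ in range(m + 1)]
    for i in range(m + 1):
        for j in range(n + 1):
            if i == 0:
                d[i][j] = j                # insert all of b[:j]
            elif j == 0:
                d[i][j] = i                # delete all of a[:i]
            else:
                d[i][j] = min(
                    d[i - 1][j] + 1,       # deletion
                    d[i][j - 1] + 1,       # insertion
                    d[i - 1][j - 1] + (a[i - 1] != b[j - 1]),  # substitution
                )
    return d[m][n]
```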

I focus on the two-dimensional, table-based computation scheme (filling row by row) and investigate the structure of cell dependencies, i.e., which cells have to be computed before another cell can be. We denote by $\Gamma(i)$ the set of indices of cells that cell $i$ depends on. Note that $\Gamma$ has to be cycle-free.

I abstract away from the actual function that is computed and concentrate on its recursive structure. Formally, I consider a recurrence $d$ to be dynamic programming if it has the form

$$d(i) = f(i, \tilde{\Gamma}_d(i))$$

with $i \in [0..m] \times [0..n]$, $\tilde{\Gamma}_d(i) = \{ (j, d(j)) \mid j \in \Gamma_d(i) \}$ and $f$ some (computable) function that does not use $d$ other than via $\tilde{\Gamma}_d$.
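Operationally, such a recurrence is evaluated by a generic table-filling loop. A minimal sketch, with `gamma` and `f` standing for $\Gamma_d$ and $f$, under the assumption that every dependency precedes its cell in row-major order:

```python
def fill_table(m, n, gamma, f):
    """Evaluate d(i) = f(i, {(j, d(j)) : j in gamma(i)}) on [0..m] x [0..n].

    Assumes every j in gamma(i) precedes i in row-major order, so the
    row-by-row fill never reads an uncomputed cell."""
    d = {}
    for r in range(m + 1):
        for c in range(n + 1):
            i = (r, c)
            d[i] = f(i, {j: d[j] for j in gamma(i)})
    return d
```

With `gamma` returning the left, top, and top-left neighbours and `f` taking the appropriate minimum, this reproduces edit distance, for example.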

When restricting the granularity of $\Gamma_d$ to rough areas (to the left, top-left, top, top-right, ... of the current cell), one observes that there are essentially three cases (up to symmetries and rotation) of valid dynamic programming recursions that inform how the table can be filled:

[Figure: the three cases of dynamic programming cell dependencies]

The red areas denote (overapproximations of) $\Gamma$. Cases one and two admit subsets, case three is the worst case (up to index transformation). Note that it is not strictly required that the whole red areas are covered by $\Gamma$; some cells in every red part of the table are sufficient to paint it red. White areas are explicitly required to not contain any required cells.

Examples for case one are edit distance and longest common subsequence; case two applies to Bellman-Ford and CYK. Less obvious examples include those that work on the diagonals rather than rows (or columns), as they can be rotated to fit the proposed cases; see Joe's answer for an example.
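To make the case-two pattern concrete, here is a rough sketch of Bellman-Ford cast into a two-dimensional table (one possible way of doing the casting): cell $(k, v)$ depends on arbitrary cells of row $k-1$, possibly to its upper right.

```python
def bellman_ford(n, edges, source):
    """dist[k][v] = weight of a shortest source-v path with at most k
    edges; edges is a list of (u, v, weight) triples.  Cell (k, v) reads
    arbitrary cells of row k-1: a case-two dependency structure."""
    INF = float("inf")
    dist = [[INF] * n for _ in range(n)]
    dist[0][source] = 0
    for k in range(1, n):
        for v in range(n):
            best = dist[k - 1][v]              # paths with fewer edges
            for (u, w, weight) in edges:
                if w == v:
                    best = min(best, dist[k - 1][u] + weight)
            dist[k][v] = best
    return dist[n - 1]
```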

I have no (natural) example for case three, though! So my question is: What are examples for case three dynamic programming recursions/problems?

Raphael
Case 3 subsumes cases 1 and 2.
JeffE
No, it does not, despite appearances. For instance, a case 1 instance cannot have a required cell in the upper-left area, while a case 3 instance has to have a required cell in the upper-left area. I edited the explanation to clarify.
Raphael

Answers:


There are plenty of other examples of dynamic programming algorithms that don't fit your pattern at all.

  • The longest increasing subsequence problem requires only a one-dimensional table (see the first sketch after this list).

  • There are several natural dynamic programming algorithms whose tables require three or even more dimensions. For example: Find the maximum-area white rectangle in a bitmap. The natural dynamic programming algorithm uses a three-dimensional table.

  • But most importantly, dynamic programming isn't about tables; it's about unwinding recursion. There are lots of natural dynamic programming algorithms where the data structure used to store intermediate results is not an array, because the recurrence being unwound isn't over a range of integers. Two easy examples are finding the largest independent set of vertices in a tree (see the second sketch below) and finding the largest common subtree of two trees. A more complex example is the $(1+\epsilon)$-approximation algorithm for the Euclidean traveling salesman problem by Arora and Mitchell.
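For the first point, a minimal sketch of the quadratic-time algorithm and its one-dimensional table:

```python
def longest_increasing_subsequence(a):
    """lis[i] = length of the longest increasing subsequence ending at
    position i; the table is one-dimensional."""
    lis = [1] * len(a)
    for i in range(len(a)):
        for j in range(i):
            if a[j] < a[i]:
                lis[i] = max(lis[i], lis[j] + 1)
    return max(lis, default=0)
```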
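And for the tree example, a sketch of the independent-set recurrence; the "table" here is a pair of values per tree node rather than an array (the child-dictionary representation is just one convenient choice):

```python
def max_independent_set(children, root):
    """Largest independent set in a tree, given as a dict mapping each
    node to the list of its children."""
    def solve(v):
        take, skip = 1, 0
        for c in children.get(v, []):
            t, s = solve(c)
            take += s            # if v is in the set, its children are not
            skip += max(t, s)    # otherwise each child may go either way
        return take, skip
    return max(solve(root))
```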

JeffE
Thanks for your answer, but I explicitly restricted the question to two-dimensional problems and the canonical, table-based computation scheme (edited to make that point even clearer). I am aware of the more general framework but not interested in it at this point.
Raphael
Okay, but I really think you're missing the point.
JeffE
As there are many upvotes, I thought I should make this clear: this post does not answer the question; in fact, it does not even attempt to.
Raphael
@Raphael is correct. My "answer" is not an answer but a criticism of the question, but it was too long for a comment.
JeffE

Computing the Ackermann function is in this spirit. To compute $A(m,n)$ you need to know $A(m,n-1)$ and $A(m-1,k)$ for some large $k$. Either the second coordinate decreases, or the first decreases and the second potentially increases.

This does not fit the requirements perfectly, since the number of columns is infinite and the computation is usually done top-down with memoisation, but I think it is worth mentioning.
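A sketch of that top-down computation, with a dictionary as the cache since no table size is known in advance:

```python
import sys
sys.setrecursionlimit(1 << 20)   # the recursion gets deep very quickly

cache = {}

def ackermann(m, n):
    """Top-down evaluation with memoisation; a dictionary replaces the
    table because no bound on its size is known a priori."""
    if (m, n) not in cache:
        if m == 0:
            cache[(m, n)] = n + 1
        elif n == 0:
            cache[(m, n)] = ackermann(m - 1, 1)
        else:
            cache[(m, n)] = ackermann(m - 1, ackermann(m, n - 1))
    return cache[(m, n)]
```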

sdcvvc
No, the nesting as in $A(m-1, A(m,n-1))$ does not really lend itself to dynamic programming. Hehe, this is so odd I'll have to check that my definitions exclude such cases somehow. Non-primitive-recursive DP, oh my...
Raphael
Not sure why this answer was downvoted, as it is a good answer. The Ackermann function lends itself extremely well to dynamic programming. In general, any recursively defined function that is repeatedly computed for the same arguments lends itself to dynamic programming. To see this, one only has to implement it and compare the running times: what takes years to compute with the ordinary Ackermann function can take seconds with the dynamic programming one.
Jules
@Jules: The problem for the canonical table scheme is that you do not know a (primitive recursive) bound on the table size a priori. Of course you can do memoisation, but not quite in the usual way. So yes, it may be viable for DP but it does not fit the class of problems my question is concerned with.
Raphael
I don't think it's a requirement for DP that you have an a priori bound on the table size? In fact as JeffE mentions, the cache doesn't have to be a table at all. It can be any associative data structure. DP is really a very very simple idea: you want to compute a recursively defined function, but this function gets repeatedly called on the same arguments. DP is the optimization where you introduce a cache to make sure you compute each case only once. There are plenty of functions that fit into neither of your cases, even if they are functions of two bounded integers.
Jules

This doesn't fit case 3 exactly, but I don't know if any of your cases capture a very common problem used to teach dynamic programming: Matrix Chain Multiplication. To solve this problem (and many others; this is just the canonical one), we fill the table diagonal by diagonal instead of row by row.

So the rule is something like this:

[Figure: diagonal-by-diagonal fill order of the DP table]
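A minimal sketch of that fill order, following the usual textbook formulation (matrix $i$ has shape `dims[i-1] x dims[i]`):

```python
def matrix_chain_order(dims):
    """cost[i][j] = minimal scalar-multiplication cost for the chain of
    matrices i..j.  The table is filled diagonal by diagonal: cell (i, j)
    depends on cells (i, k) and (k+1, j), which lie on shorter diagonals."""
    n = len(dims) - 1
    cost = [[0] * (n + 1) for _ in range(n + 1)]
    for length in range(2, n + 1):           # diagonal = chain length
        for i in range(1, n - length + 2):
            j = i + length - 1
            cost[i][j] = min(
                cost[i][k] + cost[k + 1][j] + dims[i - 1] * dims[k] * dims[j]
                for k in range(i, j)
            )
    return cost[1][n]
```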

Joe
Written like this, it indeed does not fit any case. However, if you rotate clockwise by 45 degrees, you get case 2 (and all of the implied properties). This is true for other examples that work from the diagonal towards a corner, too. But thanks for mentioning it!
Raphael
@Raphael it's not immediately obvious that these are equivalent; you might want to mention that in your question.
Joe

I know it's a silly example, but I think a simple iterative problem like

Find the sum of the numbers in a square matrix

might qualify. The traditional "for each row, for each column" loop kinda looks like your case 3.
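One way to make this concrete (just one possible reading): a running sum in row-major order, where each cell depends on its row-major predecessor, i.e., the cell to its left or the last cell of the previous row:

```python
def running_sums(M):
    """s[i][j] = sum of all entries up to (i, j) in row-major order."""
    n = len(M)
    s = [[0] * n for _ in range(n)]
    for i in range(n):
        for j in range(n):
            if i == 0 and j == 0:
                prev = 0
            elif j == 0:
                prev = s[i - 1][n - 1]   # last cell of the previous row
            else:
                prev = s[i][j - 1]       # left neighbour
            s[i][j] = prev + M[i][j]
    return s
```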

hugomg

This may not be exactly the search space you are looking for, but I have an idea off the top of my head that might be of help.

Problem:

Given an $n \times n$ matrix $M$ with distinct integers, in which the entries of each row (from left to right) and of each column (from top to bottom) are sorted in increasing order, give an efficient algorithm to find the position of an integer $x$ in $M$ (or report that the integer is not present in the matrix).

Answer

This can be solved recursively as follows:

We have an $n \times n$ matrix. Let $k = \lceil (1+n)/2 \rceil$. Now compare $x$ with $m_{k,k}$. If $x < m_{k,k}$, we can discard every element $m_{i,j}$ with $k \le i \le n$ and $k \le j \le n$, i.e., the search space is reduced by $1/4$. Similarly, when $x > m_{k,k}$, the search space is also reduced by $1/4$. So after the first iteration, the size of the search space becomes $\frac{3}{4}n^2$. You can continue recursively: we make three comparisons, of $x$ with the middle element of each of the three remaining quadrants, and the size of the remaining search space then becomes $\left(\frac{3}{4}\right)^2 n^2$, and so on.
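A sketch of this quadrant-elimination scheme (as the comment below notes, it is divide and conquer rather than table-based dynamic programming; the index arithmetic is one way of realising the recursion):

```python
def search(M, x):
    """Search a row- and column-sorted matrix for x by repeatedly
    comparing x with the middle element of the current sub-matrix and
    discarding roughly a quarter of the remaining search space."""
    def go(r1, r2, c1, c2):
        if r1 > r2 or c1 > c2:
            return None                      # empty sub-matrix
        rm, cm = (r1 + r2) // 2, (c1 + c2) // 2
        if M[rm][cm] == x:
            return (rm, cm)
        if x < M[rm][cm]:
            # the bottom-right quadrant holds only larger entries
            return go(r1, rm - 1, c1, c2) or go(rm, r2, c1, cm - 1)
        # the top-left quadrant holds only smaller entries
        return go(r1, rm, cm + 1, c2) or go(rm + 1, r2, c1, c2)

    return go(0, len(M) - 1, 0, len(M[0]) - 1)
```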

0x0
This is not an instance of dynamic programming, is it?
Raphael