How is the complexity of algorithms modeled for functional languages?

38

Algorithm complexity is designed to be independent of lower level details, but it is based on an imperative model: for example, accessing an array element or modifying a node in a tree is assumed to take O(1) time. This is not the case in pure functional languages. A Haskell list takes linear time to access. Modifying a node in a tree involves creating a new copy of that tree.
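
For concreteness, here is roughly what those two costs look like in plain Haskell (a small illustrative sketch; the function names are made up):

    -- Indexing an immutable linked list walks i cells: O(i), not O(1).
    nth :: Int -> [a] -> a
    nth _ []     = error "index out of range"
    nth 0 (x:_)  = x
    nth i (_:xs) = nth (i - 1) xs

    -- A purely functional tree update copies the path from the root to the
    -- modified node; the untouched subtrees are shared, not copied.
    data Tree a = Leaf | Node (Tree a) a (Tree a)

    updateLeftmost :: a -> Tree a -> Tree a
    updateLeftmost _ Leaf            = Leaf
    updateLeftmost v (Node Leaf _ r) = Node Leaf v r
    updateLeftmost v (Node l x r)    = Node (updateLeftmost v l) x r  -- O(depth) new nodes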

Should there then be a different way of modeling algorithm complexity for functional languages?

wsaleem
3
This might be what you are looking for.
Aristu
1
Your question may be answered at cs.stackexchange.com/q/18262/755. In particular, the time complexity in a purely functional language differs from the time complexity in an imperative language by at most a factor of O(log n), under suitable assumptions about the capabilities of both languages.
DW
3
GHC Haskell supports mutable arrays and trees, among other things, letting you access arrays and modify tree nodes in O(1) time, using "state threads" (the ST monad).
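For illustration, a minimal sketch of that idea with the standard Data.Array.ST API (the function name squares is made up):

    import Control.Monad (forM_)
    import Data.Array.ST (newArray, writeArray, runSTUArray)
    import Data.Array.Unboxed (UArray)

    -- Build an array with O(1) in-place writes inside ST, then freeze it.
    squares :: Int -> UArray Int Int
    squares n = runSTUArray $ do
        arr <- newArray (0, n - 1) 0          -- mutable unboxed array
        forM_ [0 .. n - 1] $ \i ->
            writeArray arr i (i * i)          -- O(1) update per element
        return arr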
Tanner Swett
1
@BobJarvis It depends. Is a list an abstract data type to you, or are you specifically thinking of linked lists?
Raphaël
1
What is the goal of modeling algorithmic complexity here? Are you looking for something mathematically pure, or something practical? For practical value, it is worth paying attention to things such as whether or not you have memoization at your disposal; but from a purely mathematical point of view, the capabilities of the implementation should not matter.
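For example, one standard Haskell idiom where memoization comes essentially for free via lazy sharing (a sketch added for illustration):

    -- Each Fibonacci number is computed once and then shared by later calls.
    fibs :: [Integer]
    fibs = 0 : 1 : zipWith (+) fibs (tail fibs)

    fib :: Int -> Integer
    fib n = fibs !! n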
Cort Ammon - Reinstate Monica

Answers:

34

If the functional language is modeled on the λ-calculus, the most natural measure of time is the number of β-reduction steps, where a β-step rewrites (λx.M)N to M[N/x].
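
To make the count concrete, here is a minimal sketch that counts β-steps under weak call-by-name reduction of closed terms (one particular strategy), using de Bruijn indices and closures so that no substitution machinery is needed:

    data Term = Var Int | Lam Term | App Term Term   -- de Bruijn indices
    data Clo  = Clo Term [Clo]                       -- a term paired with its environment

    -- Reduce a closed term to weak head normal form, counting β-steps.
    whnf :: Term -> (Clo, Int)
    whnf t = go t [] [] 0
      where
        go (Var i)   env stack n = let Clo t' env' = env !! i in go t' env' stack n
        go (App f a) env stack n = go f env (Clo a env : stack) n
        go (Lam b)   env []          n = (Clo (Lam b) env, n)
        go (Lam b)   env (c : stack) n = go b (c : env) stack (n + 1)  -- one β-step

For example, whnf (App (Lam (Var 0)) (Lam (Var 0))) reports exactly one β-step. Of course, which steps get counted depends on the chosen strategy, which is exactly the issue discussed below.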

But is this a good measure of complexity?

Roughly, one would like a translation tr(·) from the λ-calculus to Turing machines and a polynomial p such that a λ-term M of size |M| reduces to a value in p(|M|) β-reduction steps exactly when tr(M) reduces to a value in p(|tr(M)|) steps of a Turing machine.

For a long time, it was unclear whether this could be achieved in the λ-calculus. The main problems are the following.

  • There are terms that produce normal forms (in a polynomial number of steps) that are of exponential size. Even writing down the normal forms takes exponential time. (A concrete family is sketched just after this list.)
  • The chosen reduction strategy plays an important role. For example, there exists a family of terms which reduces in a polynomial number of parallel β-steps (in the sense of optimal λ-reduction), but whose complexity is non-elementary (meaning worse than exponential).

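To make the first point concrete, here is a standard size-explosion family (sketched here for illustration; reduce innermost-first): let c := λx.λz. z x x and t_n := c (c (… (c y)…)) with n copies of c. Each c-redex duplicates an already-normal argument, c N → λz. z N N, so t_n has size O(n) and normalizes in exactly n β-steps, yet its normal form has size Θ(2^n).
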
The paper "Beta Reduction is Invariant, Indeed" by B. Accattoli and U. Dal Lago clarifies the issue by showing a 'reasonable' encoding that preserves the complexity class P of polynomial time functions, assuming leftmost-outermost call-by-name reductions. The key insight is the exponential blow-up can only happen for 'uninteresting' reasons which can be defeated by proper sharing. In other words, the class P is the same whether you define it counting Turing machine steps or (leftmost-outermost) β-reductions.

I'm not sure what the situation is for other evaluation strategies. I'm not aware that a similar programme has been carried out for space complexity.

Martin Berger
23

Algorithm complexity is designed to be independent of lower level details.

No, not really. We always count elementary operations in some machine model:

  • Steps for Turing machines.
  • Basic operations on RAMs.

You were probably thinking of the whole Ω/Θ/O-business. While it's true that you can abstract away some implementation details with Landau asymptotics, you do not get rid of the impact of the machine model. Algorithms have very different running times on, say, TMs and RAMs -- even if you consider only Θ-classes!

Therefore, your question has a simple answer: fix a machine model and which "operations" to count. This will give you a measure. If you want results to be comparable to non-functional algorithms, you'd be best served to compile your programs to RAM (for algorithm analysis) or TM (for complexity theory), and analyze the result. Transfer theorems may exist to ease this process.
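
As a toy illustration of "fix which operations to count", here is a sketch that charges one unit for every list cell allocated by append; under this measure, the naive reverse below costs Θ(n²) on a list of length n:

    -- Charge one unit per list cell that append allocates.
    appendC :: [a] -> [a] -> ([a], Int)
    appendC []     ys = (ys, 0)
    appendC (x:xs) ys = let (r, c) = appendC xs ys in (x : r, c + 1)

    -- Naive reverse: 0 + 1 + ... + (n - 1) = Θ(n²) charged operations.
    reverseC :: [a] -> ([a], Int)
    reverseC []     = ([], 0)
    reverseC (x:xs) = (r', c + c')
      where
        (r,  c)  = reverseC xs
        (r', c') = appendC r [x]

Choosing a different machine model, or a different set of charged operations, gives a different but equally well-defined measure.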

Raphael
Agreed. Side note: people frequently make mistakes about which operations are "constant", e.g. assuming a + b is O(1) when it is really O(log ab).
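(For instance, on a Turing machine, adding a and b written in binary takes time roughly proportional to their lengths, i.e. Θ(log a + log b) = Θ(log(ab)) bit operations.)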
Paul Draper
3
@PaulDraper That's a different assumption, not necessarily a mistake. We can model what we want -- the question is if it answers interesting questions. See also here.
Raphael
That sounds an awful lot like "get rid of the machine model".
Paul Draper
@PaulDraper Depends on what kind of sentiments you attach to the word "machine". See also this discussion. FWIW, the unit-cost RAM model -- arguably the standard model in algorithm analysis! -- is useful, otherwise it wouldn't have been used for decades now. All the familiar bounds for sorting, search trees, etc. are based on that model. It makes sense because it models real computers well as long as the numbers fit in registers.
Raphael
1

Instead of formulating your complexity measure in terms of some underlying abstract machine, you can bake cost into the language definition itself - this is called a Cost Dynamics. One attaches a cost to every evaluation rule in the language, in a compositional manner - that is, the cost of an operation is a function of the cost of its sub-expressions. This approach is most natural for functional languages, but it may be used for any well-defined programming language (of course, most programming languages are unfortunately not well-defined).
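
A minimal sketch of the idea for a tiny expression language (the syntax and the particular cost assignments are invented for illustration; a real cost dynamics would be stated as evaluation judgements annotated with costs):

    data Expr = Lit Int
              | Var String
              | Add Expr Expr
              | Let String Expr Expr

    type Env  = [(String, Int)]
    type Cost = Int

    -- The cost of each form is a function of the costs of its sub-expressions.
    eval :: Env -> Expr -> (Int, Cost)
    eval _   (Lit n)     = (n, 0)
    eval env (Var x)     = case lookup x env of
      Just v  -> (v, 1)                           -- charge 1 per variable lookup
      Nothing -> error "unbound variable"
    eval env (Add e1 e2) = (v1 + v2, c1 + c2 + 1)  -- charge 1 per addition
      where
        (v1, c1) = eval env e1
        (v2, c2) = eval env e2
    eval env (Let x e1 e2) = (v2, c1 + c2)
      where
        (v1, c1) = eval env e1
        (v2, c2) = eval ((x, v1) : env) e2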

gardenhead