Algorithmic complexity is designed to be independent of lower-level details, but it is based on an imperative model: for example, array access and modifying a node in a tree take O(1) time. This is not the case in pure functional languages. Accessing an element of a Haskell list takes linear time, and modifying a node in a tree involves creating a new copy of that tree.
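To make those two costs concrete, here is a small Haskell sketch (my own illustration, not part of the original question): indexing into a list walks past k cons cells, and "updating" an immutable search tree rebuilds the nodes on the path from the root while sharing every untouched subtree.

    -- Immutable binary search tree.
    data Tree a = Leaf | Node (Tree a) a (Tree a)

    -- "Modifying" the tree, here by inserting a key, rebuilds only the nodes
    -- on the path from the root to the insertion point (O(depth) allocations);
    -- the untouched subtrees are shared with the old tree, which stays valid.
    insert :: Ord a => a -> Tree a -> Tree a
    insert x Leaf = Node Leaf x Leaf
    insert x t@(Node l y r)
      | x < y     = Node (insert x l) y r   -- new node here, r shared as-is
      | x > y     = Node l y (insert x r)   -- new node here, l shared as-is
      | otherwise = t                       -- key already present

    -- List indexing walks k cons cells, so it costs O(k) rather than the O(1)
    -- of imperative array indexing; this is essentially what (!!) does.
    nth :: [a] -> Int -> a
    nth (x:_)  0 = x
    nth (_:xs) k = nth xs (k - 1)
    nth []     _ = error "index out of range"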
Should there, then, be a different way of modelling the complexity of algorithms for functional languages? (In Haskell, imperative-style O(1) updates can be recovered, for example with ST monads.)

Answers:
A natural cost measure for functional programs is the number of β-reduction steps in the λ-calculus. But is this a good measure of complexity? To be 'reasonable', such a measure should be polynomially related to the cost on standard machine models such as Turing machines. For a long time, it was unclear whether this could be achieved in the λ-calculus. The main problem is that a β-reduction step can duplicate arbitrarily large subterms, so the size of intermediate terms can be exponential in the number of reduction steps.
The paper "Beta Reduction is Invariant, Indeed" by B. Accattoli and U. Dal Lago clarifies the issue by showing a 'reasonable' encoding that preserves the complexity class P of polynomial time functions, assuming leftmost-outermost call-by-name reductions. The key insight is the exponential blow-up can only happen for 'uninteresting' reasons which can be defeated by proper sharing. In other words, the class P is the same whether you define it counting Turing machine steps or (leftmost-outermost)β -reductions.
I'm not sure what the situation is for other evaluation strategies. I'm not aware that a similar programme has been carried out for space complexity.
No, not really. We always count elementary operations in some machine model: for instance, steps on Turing machines or basic operations on RAMs.
You were probably thinking of the whole Ω/Θ/O business. While it's true that you can abstract away some implementation details with Landau asymptotics, you do not get rid of the impact of the machine model. Algorithms have very different running times on, say, TMs and RAMs -- even if you consider only Θ-classes!
Therefore, your question has a simple answer: fix a machine model and decide which "operations" to count. This will give you a measure. If you want results to be comparable to non-functional algorithms, you'd be best served to compile your programs to RAM (for algorithm analysis) or TM (for complexity theory), and analyze the result. Transfer theorems may exist to ease this process.
(Note that even what counts as an elementary operation is a modelling choice: for instance, an arithmetic operation on numbers a and b is often counted as O(1) when it is really O(log ab) at the bit level.)
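As a small illustration of fixing which "operations" to count, here is a sketch of doing the accounting at the source level (my own example; the names insertC and isortC are made up): pick comparisons as the elementary operation and let insertion sort report how many it performed.

    -- Insert into a sorted list, also returning the number of comparisons used.
    insertC :: Ord a => a -> [a] -> ([a], Int)
    insertC x []     = ([x], 0)
    insertC x (y:ys)
      | x <= y    = (x : y : ys, 1)                -- one comparison, done
      | otherwise = let (zs, c) = insertC x ys
                    in (y : zs, c + 1)             -- one comparison, then recurse

    -- Insertion sort returning the sorted list and the total comparison count.
    isortC :: Ord a => [a] -> ([a], Int)
    isortC []     = ([], 0)
    isortC (x:xs) = let (sorted, c1) = isortC xs
                        (result, c2) = insertC x sorted
                    in (result, c1 + c2)

    main :: IO ()
    main = do
      print (isortC [5, 1, 4, 2, 3 :: Int])      -- ([1,2,3,4,5],8)
      print (snd (isortC [1 .. 10 :: Int]))      -- 9: already sorted, n-1 comparisons
      print (snd (isortC [10, 9 .. 1 :: Int]))   -- 45: reversed, n(n-1)/2 comparisons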
Instead of formulating your complexity measure in terms of some underlying abstract machine, you can bake cost into the language definition itself; this is called a cost dynamics. One attaches a cost to every evaluation rule of the language, in a compositional manner: the cost of an operation is a function of the costs of its sub-expressions. This approach is most natural for functional languages, but it can be used for any well-defined programming language (of course, most programming languages are unfortunately not well-defined).
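A minimal sketch of what such a cost dynamics can look like, for a toy expression language of my own (the particular charges are arbitrary choices, not taken from a specific reference): each evaluation rule returns a value together with a cost, and the cost of a compound expression is assembled from the costs of its sub-expressions.

    data Expr
      = Lit Int
      | Var String
      | Add Expr Expr
      | Let String Expr Expr

    type Env = [(String, Int)]

    -- eval returns (value, cost). Each rule adds its own charge to the costs
    -- of its sub-evaluations, so the cost measure is compositional.
    eval :: Env -> Expr -> (Int, Int)
    eval _   (Lit n)     = (n, 0)                      -- literals are free
    eval env (Var x)     = case lookup x env of
                             Just v  -> (v, 1)         -- charge 1 per variable lookup
                             Nothing -> error ("unbound variable: " ++ x)
    eval env (Add e1 e2) = let (v1, c1) = eval env e1
                               (v2, c2) = eval env e2
                           in (v1 + v2, c1 + c2 + 1)   -- charge 1 per addition
    eval env (Let x e b) = let (v1, c1) = eval env e
                               (v2, c2) = eval ((x, v1) : env) b
                           in (v2, c1 + c2 + 1)        -- charge 1 per binding

    -- let x = 1 + 2 in x + x  evaluates to 6 at cost 5
    -- (one addition, one binding, two lookups, one more addition).
    main :: IO ()
    main = print (eval [] (Let "x" (Add (Lit 1) (Lit 2))
                                   (Add (Var "x") (Var "x"))))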