
As I understand it, evaluating something like the following under normal order evaluation is inefficient because of duplicated work:

double x = (x,x)
main     = double some_hard_computation

I remember someone telling me

  • That there are two ways to avoid this: strictness analysis (recognising that double is strict in its single argument, and then deviating from normal order evaluation by evaluating the argument first) and graph rewriting (storing a pointer to some_hard_computation so that both components of the tuple point to the same computation, and evaluating one component automatically evaluates the other as well); sketches of both ideas follow below this list.

  • That neither method is sufficient on its own to avoid all cases where duplicate work can be introduced, and that it is best to apply both when writing a compiler.
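
For concreteness, here is a minimal sketch of the graph-rewriting (sharing) idea, assuming GHC; someHardComputation and the trace message are illustrative names of my own. Because GHC evaluates by graph reduction, the trace message fires only once, whereas a pure call-by-name evaluator that substitutes the unevaluated expression textually would do the work twice.

import Debug.Trace (trace)

-- Illustrative stand-in for some_hard_computation; the trace call lets us
-- observe how often the expression is actually evaluated.
someHardComputation :: Int
someHardComputation = trace "evaluating some_hard_computation" (sum [1 .. 1000000])

double :: a -> (a, a)
double x = (x, x)

main :: IO ()
main = do
  let (a, b) = double someHardComputation
  -- Both tuple components are pointers to the same thunk; after the first
  -- forcing, the thunk is overwritten with its value, so the trace message
  -- is printed only once.
  print (a + b)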

I can imagine that strictness analysis can be hard, but in what concrete case can duplicate work be introduced when utilising graph rewriting? Or is either of the two bullet points above incorrect?

Note: while strictness analysis is done statically (as far as I know), graph rewriting is an evaluation strategy, not a compiler optimisation. It attempts to solve the same problem as the strictness analyser, but on a different level (runtime vs. compile time).
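
For the strictness side, here is a second, self-contained sketch (again with illustrative names, and bearing in mind that GHC shares the thunk anyway): if the analysis can establish that the argument of double will certainly be demanded, the compiler may evaluate it before the call, which is mimicked here with seq. In an evaluator without thunk sharing, that change of evaluation order is what prevents the hard work from being copied as an unevaluated expression.

double :: a -> (a, a)
double x = (x, x)

someHardComputation :: Int
someHardComputation = sum [1 .. 1000000]  -- illustrative stand-in

main :: IO ()
main =
  -- Force the argument to (weak head) normal form before the call,
  -- mimicking the call-by-value order that strictness information
  -- would allow a compiler to choose.
  let v = someHardComputation
  in v `seq` print (double v)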

1 Answer


Consider this program:

f (m : Nat) x y = (x, if H(m,m) then x else y)

my_f = f my_number
my_f hard harder

where H(m,m) returns True if Turing machine number m halts on input m, and False otherwise. Detecting duplicate work in every case would therefore mean solving the halting problem, no matter whether you want to do the analysis statically or dynamically.

Hence, no method can be complete.
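
A hedged rendering of this shape in Haskell, only to make it concrete: the halts stub below is a fake placeholder (the real predicate is uncomputable, which is the whole point), and the names are mine.

-- Placeholder only: the real H(m,m) is not computable; any total
-- definition written here is a stand-in.
halts :: Integer -> Integer -> Bool
halts _ _ = True

f :: Integer -> a -> a -> (a, a)
f m x y = (x, if halts m m then x else y)

-- Mirrors my_f = f my_number; 42 stands in for my_number, purely
-- illustratively.
myF :: a -> a -> (a, a)
myF = f 42

-- Whether x is used once or twice in the result depends on halts m m,
-- so predicting exactly where duplicate work could arise is as hard as
-- the halting problem.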

Raphael