As I understand it, evaluating something like the following under normal order evaluation is inefficient because it duplicates work:
```haskell
double x = (x, x)
main = double some_hard_computation
```
I remember someone telling me:

- That there are two ways to avoid this: strictness analysis (recognise that `double` is strict in its one argument, then deviate from normal order evaluation and evaluate the argument first) and graph rewriting (store pointers to `some_hard_computation` so that both elements of the tuple point to the same node; evaluating one side of the tuple then automatically evaluates the other side as well, see the sketch after this list).
- That neither of the two is sufficient on its own to avoid all cases where duplicate work can be introduced, and that it is therefore best to apply both when writing a compiler.
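For reference, here is a minimal sketch of what I understand the sharing in graph rewriting to mean, assuming GHC-style lazy evaluation; `some_hard_computation` below is just a made-up stand-in for an expensive computation:

```haskell
module Main where

-- Hypothetical stand-in for an expensive computation.
some_hard_computation :: Integer
some_hard_computation = sum [1 .. 10000000]

double :: a -> (a, a)
double x = (x, x)

-- Under graph reduction, `double some_hard_computation` yields a pair whose
-- components are two pointers to the same heap node, so forcing both of them
-- still evaluates the expensive thunk only once.
main :: IO ()
main =
  let (a, b) = double some_hard_computation
  in print (a + b)
```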
I can imagine that strictness analysis can be hard, but in what concrete case can duplicate work be introduced when utilising graph rewriting? Or is either of the two bullet points above incorrect?
Note: strictness analysis is done statically (as far as I know), whereas graph rewriting is an evaluation strategy, not a compiler optimisation. It attempts to solve the same problem as the strictness analyser, but at a different level (runtime vs. compile time).
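To make the compile-time side of that distinction concrete: if the analyser decides an argument is definitely demanded, the compiler can evaluate it before the call instead of passing a thunk. A hand-written approximation of that transformation, reusing the definitions from the sketch above (only an illustration of the idea, not what any particular compiler emits):

```haskell
-- Force the argument to weak head normal form with `seq` before the call,
-- instead of handing `double` an unevaluated thunk.
mainStrict :: IO ()
mainStrict =
  let arg = some_hard_computation
  in arg `seq` print (double arg)
```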