I'm trying to understand NbE (Normalisation by Evaluation). One thing I don't get is why it uses two different representations of programs: a syntactic and a semantic one.
All the implementations of NbE I've found do this. They all look roughly like this:
data Term = ... -- syntactic representation
data Val = ... -- semantic representation
eval :: Env -> Term -> Val -- interpret syntax into the semantic domain
quote :: Ctx -> Val -> Term -- read a value back into syntax
nbe :: Term -> Term
nbe = quote newCtx . eval newEnv
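To make the pattern concrete, here is a minimal instance of that skeleton for the untyped lambda calculus, distilled by me from the implementations I've seen (not taken from any particular article). It makes one common set of choices: de Bruijn indices in Term, de Bruijn levels for stuck variables in Val, and it only handles closed terms:

data Term = Var Int | Lam Term | App Term Term   -- syntax, de Bruijn indices

data Val                                         -- semantics
  = VLam Env Term                                -- closure: body plus captured environment
  | VNeutral Int [Val]                           -- stuck variable (a de Bruijn level) applied to arguments

type Env = [Val]
type Ctx = Int                                   -- number of binders in scope

eval :: Env -> Term -> Val
eval env (Var i)   = env !! i
eval env (Lam b)   = VLam env b
eval env (App f a) = apply (eval env f) (eval env a)

apply :: Val -> Val -> Val
apply (VLam env b)    a = eval (a : env) b       -- beta happens here, in the semantics
apply (VNeutral x as) a = VNeutral x (as ++ [a]) -- stuck: just record the argument

quote :: Ctx -> Val -> Term
quote n (VLam env b)    = Lam (quote (n + 1) (eval (VNeutral n [] : env) b))
quote n (VNeutral x as) = foldl App (Var (n - x - 1)) (map (quote n) as)

nbe :: Term -> Term
nbe = quote 0 . eval []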
I would understand it if they did it this way simply because it's convenient for whatever algorithm/approach they chose. However, all the articles about NbE describe it as a necessity: it sounds like swapping representations is a fundamental part of NbE.
Why is it necessary to model NbE this way? Shouldn't it be possible to normalize "in place", using whatever representation the program is already in?
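By "in place" I mean something like the following single-representation normalizer. This is my own naive sketch, reusing the Term type from above; subst and shift are the usual capture-avoiding de Bruijn substitution and shifting:

-- Naive rewriting normalizer: one representation, repeated beta steps.
normalize :: Term -> Term
normalize (Var i)   = Var i
normalize (Lam b)   = Lam (normalize b)
normalize (App f a) =
  case normalize f of
    Lam b -> normalize (subst 0 a b)             -- beta-reduce and keep going
    f'    -> App f' (normalize a)                -- stuck application

-- Substitute s for index j, lowering the free indices above j.
subst :: Int -> Term -> Term -> Term
subst j s (Var i)
  | i == j    = s
  | i > j     = Var (i - 1)
  | otherwise = Var i
subst j s (Lam b)   = Lam (subst (j + 1) (shift 1 0 s) b)
subst j s (App f a) = App (subst j s f) (subst j s a)

-- Shift free indices (>= cutoff c) by d.
shift :: Int -> Int -> Term -> Term
shift d c (Var i)   = Var (if i >= c then i + d else i)
shift d c (Lam b)   = Lam (shift d (c + 1) b)
shift d c (App f a) = App (shift d c f) (shift d c a)

As far as I can tell, this computes beta-normal forms whenever they exist, so I don't see what the detour through Val buys.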
A related question: why do they switch back to the syntactic representation instead of just continuing with the semantic one?
I spent quite some time looking for an answer. Along the way I read that normalizing "in place" might reduce terms to a non-unique form, or that it could have trouble handling η-conversion. However, I don't understand why this would be the case, and would love to see examples.
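For instance, for the η part (reusing the hypothetical Term type from above, with f a free variable at index 0), I'd expect an η-aware normalizer to identify these two terms, and I don't see what stops a purely syntactic rewriter from applying an η rule directly:

etaExpanded :: Term
etaExpanded = Lam (App (Var 1) (Var 0))   -- \x. f x  (f is Var 1 under the binder)

etaReduced :: Term
etaReduced = Var 0                        -- f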