I wrote a brief introduction to the normalization-by-evaluation algorithm in OCaml, which is the standard way to efficiently normalize lambda calculus terms to their beta normal forms. I hope you find this introduction helpful!
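For readers who want the flavor before clicking through, here is a heavily condensed sketch of the kind of evaluator/read-back pair the introduction is about. It is not the article's exact code, and the names (VClosure, VNeutral, denote, quote) are only illustrative:

type term =
  | Var of int                      (* de Bruijn index *)
  | Lam of term
  | App of term * term

type value =
  | VClosure of value list * term   (* captured environment and body *)
  | VNeutral of neutral             (* a stuck term *)
and neutral =
  | NVar of int                     (* de Bruijn level *)
  | NApp of neutral * value

let rec eval (env : value list) (t : term) : value =
  match t with
  | Var i -> List.nth env i
  | Lam body -> VClosure (env, body)
  | App (m, n) -> denote (eval env m) (eval env n)

(* apply one value to another *)
and denote (m_val : value) (n_val : value) : value =
  match m_val with
  | VClosure (env, body) -> eval (n_val :: env) body
  | VNeutral n -> VNeutral (NApp (n, n_val))

(* read a value back into a beta-normal term, at binder depth [depth] *)
let rec quote (depth : int) (v : value) : term =
  match v with
  | VClosure _ as clo ->
      (* go under the binder by applying the closure to a fresh neutral *)
      Lam (quote (depth + 1) (denote clo (VNeutral (NVar depth))))
  | VNeutral n -> quote_neutral depth n
and quote_neutral (depth : int) (n : neutral) : term =
  match n with
  | NVar lvl -> Var (depth - lvl - 1)
  | NApp (n, v) -> App (quote_neutral depth n, quote depth v)

let normalize (t : term) : term = quote 0 (eval [] t)

The key move is in quote: to read a closure back, it is applied to a fresh "neutral" (stuck) value, which is what lets normalization proceed under binders.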
Normalization-by-evaluation is a standard technique for computing β-normal forms of arbitrary lambda calculus terms.
The text is missing an explanation of why the default implementation of an evaluator (as in Sestoft’s paper) can’t do the same.
Here are two comments. First, the part about going back to the original terms is a bit obscure, due to the use of de Bruijn indices and the way they are computed. Using a different representation, e.g., locally nameless, would make the whole thing much more readable (though a bit slower).
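For concreteness, a locally nameless syntax could look like this (just a sketch of the suggestion, not code from the article): bound variables stay as de Bruijn indices, while free variables carry names, so the read-back phase can work with fresh names and avoid the index computations that make the de Bruijn version obscure.

(* locally nameless syntax: a sketch of the suggested alternative *)
type term =
  | BVar of int          (* bound variable: de Bruijn index *)
  | FVar of string       (* free variable: carries a name *)
  | Lam of term
  | App of term * term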
Second, a very important message is left hidden: the whole evaluation process does not need to know about the lambda calculus! It can be whatever you want. (Some machine code compiled with an intricate compiler and running on an exotic processor? Check!) In particular, as far as normalization is concerned, the payload of VClosure will never be explored; it will only ever be the target of an application. In other words, you can give it the type VClosure of value -> value. Conversely, as far as evaluation is concerned, the payload of VNeutral will never be explored either; it too will only ever be the target of an application. In other words, you can give it the type VNeutral of value -> value. At this point the code of denote is now just:
match m_val with
| VClosure f -> f n_val
| VNeutral f -> f n_val
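Spelled out, the type and the function become (a sketch of just the shape; how the VNeutral payload gets built so that read-back still works is a separate question):

type value =
  | VClosure of (value -> value)
  | VNeutral of (value -> value)

let denote (m_val : value) (n_val : value) : value =
  match m_val with
  | VClosure f -> f n_val
  | VNeutral f -> f n_val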
As the two branches are now indistinguishable, the difference between VClosure and VNeutral is no longer needed. You can just use type value = value -> value and perform the application m_val n_val directly in denote. (As this definition of value is recursive, it is a bit tedious to use in OCaml; at this point, you might just as well opt for an untyped language.) Notice how denote has now become completely unaware of the existence of neutral values. It is just a pure, off-the-shelf evaluator for a programming language with closures.
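Concretely, the usual OCaml workarounds for the recursive equation are the -rectypes flag or a single guarding constructor. A minimal sketch of the latter:

(* guard the recursion in  value = value -> value  behind a constructor *)
type value = Fn of (value -> value)

(* denote no longer distinguishes anything: it just applies *)
let denote (Fn f : value) (n_val : value) : value = f n_val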
I think this is the most striking feature of normalization-by-evaluation. Contrary to the way you present it, the evaluation process does not need to know that it is part of a normalization procedure for the lambda calculus. For example, you can make a trivial translation of a lambda term to a JavaScript term (resp. Lisp, resp. whatever), feed it to your favorite JavaScript compiler or interpreter (resp. Lisp, resp. whatever), and still be able to recover a normal form from it.
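If it helps, here is one way to make the "recover a normal form afterwards" step concrete in plain typed OCaml. It is only a sketch of the idea, not the scheme a real compiled implementation would use, and it cheats slightly: each value is paired with its own read-back function, which eval builds when it meets a binder. Still, the application path never inspects anything; it only ever applies values.

(* syntax to read normal forms back into *)
type term =
  | Var of int               (* de Bruijn index *)
  | Lam of term
  | App of term * term

(* a value only knows how to be applied and how to be read back
   ("quoted") at a given binder depth *)
type value = {
  apply : value -> value;
  quote : int -> term;
}

let rec eval (env : value list) (t : term) : value =
  match t with
  | Var i -> List.nth env i
  | App (m, n) -> (eval env m).apply (eval env n)
  | Lam body ->
      let f v = eval (v :: env) body in
      { apply = f;
        quote =
          (fun depth ->
             (* to go under the binder, apply f to a fresh stuck variable *)
             Lam ((f (var depth)).quote (depth + 1))) }

(* a stuck application: it only accumulates arguments for read-back *)
and neutral (q : int -> term) : value =
  { quote = q;
    apply = (fun arg -> neutral (fun depth -> App (q depth, arg.quote depth))) }

(* a stuck variable, known only by its de Bruijn level *)
and var (lvl : int) : value =
  neutral (fun depth -> Var (depth - lvl - 1))

(* normal form of a closed term *)
let normalize (t : term) : term = (eval [] t).quote 0

For example, normalize (App (Lam (Var 0), Lam (Lam (Var 1)))) reduces the redex and reads back Lam (Lam (Var 1)).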
(I am not sure where this idea of agnostic evaluation originated, but it was after reading Maxime Dénès’ PhD thesis that I became aware of it.)