Learn Lambda Calculus in 10 minutes with OCaml

Hi everyone :wave:

I wrote a short blog post about teaching the fundamentals of Lambda Calculus with some OCaml code:

Nothing really new if you’re already familiar with the subject, but it’s a nice reminder of how easy and elegant Lambda Calculus code can be in an FP language :slightly_smiling_face:


Nice introductory blog post!

You may wish to fix a mistake, though: The term (λx. x x) (λx. x x) is not called Y. It is usually called Ω (and λx. x x is usually called Δ).
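For readers following along in OCaml, here is a minimal sketch of Δ and Ω in a typical term encoding; the constructor names (`Var`, `Abs`, `App`) are my own guesses and may differ from what the post uses.

```ocaml
(* Minimal term encoding; constructor names are an assumption, not the post's. *)
type term =
  | Var of string          (* variable *)
  | Abs of string * term   (* λx. body *)
  | App of term * term     (* application *)

(* Δ = λx. x x *)
let delta = Abs ("x", App (Var "x", Var "x"))

(* Ω = Δ Δ = (λx. x x) (λx. x x); it β-reduces to itself, so it never terminates *)
let omega = App (delta, delta)
```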

The name Y is traditionally reserved for what are called “fixpoint combinators”, i.e., terms such that for any term f, the two terms Y f and f (Y f) have the same behavior.

Many fixpoint combinators exist. They can be used to define recursive functions.
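For what it’s worth, the defining equation Y f = f (Y f) can be sketched directly in OCaml (the names `fix` and `fact` below are just illustrative). Note that this ties the knot with OCaml’s own `let rec`, so it is not a genuine lambda-term Y combinator, and the eta-expansion keeps it from diverging under strict evaluation.

```ocaml
(* A fixpoint combinator written directly in OCaml: `fix f` behaves
   like `f (fix f)`, which is the defining property of Y.
   The extra argument `x` (eta-expansion) avoids divergence under
   OCaml's strict evaluation. *)
let rec fix f x = f (fix f) x

(* Defining a recursive function without writing `let rec` for it: *)
let fact = fix (fun self n -> if n = 0 then 1 else n * self (n - 1))

let () = assert (fact 5 = 120)
```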


@esope Thanks for fact-checking! I guess my memory failed me a bit here :sweat_smile:

I’ll fix it as soon as I can get back to it.

UPD: Fixed

Hey, congrats. Just a small comment: the use of => for reduction made me read it for a second as a JS arrow function; maybe a better notation to avoid this would be |->.

Very nice, but a few nits:

Variables are not strings. They’re symbols.

For your abstraction example you gave lambda x. f x. But what is f?

Semantics: there is no semantics in the lambda calculus. That’s why it’s called a calculus and not a theory.

Beta reduction as simple string search: very wrong. Just consider λx. λx. … Variable binding in a world of contexts and renaming is a major pain. LC syntax could not be simpler, but LC transformations are not. That’s why Schönfinkel and Curry invented combinatory logic: to get rid of variables and the substitution problems that come with them.
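To illustrate the point in OCaml (since that’s the post’s language), here is a hedged sketch of capture-avoiding substitution, which is exactly where the “simple string search” view breaks down. All the names below (`free_vars`, `fresh`, `subst`, `beta_step`) are mine, not the post’s.

```ocaml
(* Why β-reduction is more than textual replacement: substitution must
   avoid capturing free variables, which forces renaming (α-conversion).
   All names here are illustrative. *)
type term =
  | Var of string
  | Abs of string * term
  | App of term * term

(* Free variables of a term *)
let rec free_vars = function
  | Var x -> [x]
  | Abs (x, body) -> List.filter (fun y -> y <> x) (free_vars body)
  | App (t1, t2) -> free_vars t1 @ free_vars t2

(* A crude fresh-name generator, enough for the example *)
let counter = ref 0
let fresh x = incr counter; x ^ "_" ^ string_of_int !counter

(* subst t x s computes t[x := s], renaming bound variables when a
   naive replacement would capture a free variable of s *)
let rec subst t x s =
  match t with
  | Var y -> if y = x then s else t
  | App (t1, t2) -> App (subst t1 x s, subst t2 x s)
  | Abs (y, body) ->
      if y = x then t                                  (* x is shadowed *)
      else if List.mem y (free_vars s) then
        let y' = fresh y in                            (* rename the binder *)
        Abs (y', subst (subst body y (Var y')) x s)
      else Abs (y, subst body x s)

(* One β-step on a redex: (λy. body) arg ↦ body[y := arg] *)
let beta_step = function
  | App (Abs (y, body), arg) -> Some (subst body y arg)
  | _ -> None
```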

I mostly agree with your comment: variable bindings ought to be addressed with more care in this post.

But I’m not sure I follow you on the semantics part. Scott’s domain theory (1969) provides a denotational semantics for the untyped LC…

Also, combinatory logic was created before LC; AFAIK it was indeed meant to address issues with variables, but not ones coming specifically from LC.

Yes, but that’s the point: LC does not provide any semantics. alpha-conversion and beta-reduction are syntactic operations that have nothing to do with semantics.

True.

Sorry, I still don’t understand what you mean by “provide”. Do you mean to say that LC was created without a formal denotational semantics?

I’m not a specialist so perhaps you have something in mind I don’t see, but the semantics of LC precisely provides a sound interpretation relative to beta-(eta)-equivalence, doesn’t it?

Absolutely. Of course, Church had a semantic domain (computable functions) in mind when he invented it in the 1930s, but it was completely informal. Coming up with formal semantics turned out to be a hard problem that wasn’t solved until the early 1970s when Scott came up with Domain Theory etc.

For a terrific and very readable account of LC and formal semantics see https://www.amazon.com/Denotational-Semantics-Scott-Strachey-Approach-Programming/dp/0262690764. Having defined LC syntax, he says: “We would very much like to be able to think of it as describing the behavior of functions…We are as yet unable to do so, however, for we have not found a class of functions which it describes. So far we cannot regard it as describing anything: it is a system without semantics… The lack of any [non-trivial] semantic model is the reason why the LC had fallen into disrepute among some logicians.” The rest of the book shows how Domain Theory provides the missing stuff.

Not quite. It’s Domain Theory, not the LC itself, that provides the interpretation.

It’s the lack of semantics that makes the LC computable. If LC expressions depended in any substantial way on semantic notions, then machines would be unable to do anything with them, since computing machines are purely syntactic.

I think we have a completely different “philosophical” vision of the situation, which is why I was asking. To me, LC evidently does have a formal denotational semantics (and of course an operational one, given by the reduction rules): witness Scott’s or Plotkin’s models. The fact that it took decades to find a satisfying semantics doesn’t change anything. Knowledge is cumulative: once we know a formal semantics, we can’t say LC doesn’t have one because it didn’t have one when it was created. Granted, I admit such a semantics should not only satisfy certain formal properties but also provide a sufficiently intuitive explanation, which I think the said semantics do.

Finally, I don’t see in any way how the supposed lack of semantics makes LC computable. The fact that the reduction rules don’t depend on the denotational semantics to perform their computations is not a “lack of semantics”. Quite the contrary: we expect these rules (a form of semantics too, by the way: operational semantics) to match, in a certain sense, the denotational semantics.