Discussion of a sentence on the OCaml Wikipedia page

Milner repeatedly ran into the issue that the theorem provers would attempt to claim a proof was valid by putting non-proofs together.[10] As a result, he went on to develop the meta language for his Logic for Computable Functions, a language that would only allow the writer to construct valid proofs with its polymorphic type system.

The LCF page says

Theorems in the system are terms of a special “theorem” abstract data type. The general mechanism of abstract data types of ML ensures that theorems are derived using only the inference rules given by the operations of the theorem abstract type. Users can write arbitrarily complex ML programs to compute theorems; the validity of theorems does not depend on the complexity of such programs, but follows from the soundness of the abstract data type implementation and the correctness of the ML compiler.

Is this literally true? Can we actually encode logics in the ML type system so that the ML type system is itself enough to guarantee that the rules of the logic are followed? It seems a little implausible to me that the ML type system on its own is enough to guarantee the correctness of a proof.

The modern ML module system wasn’t developed until about a decade later, so I don’t even know whether they are referring to the use of signatures here. Is there a simple architectural sketch of how this works?

A little Googling gives me https://www.cl.cam.ac.uk/~jrh13/slides/manchester-12sep01/slides.pdf and an example of LCF-style programming (I guess the actual syntax has changed):

module type Birkhoff = sig
  type thm
  val axiom : formula -> thm
  val inst : (string, term) func -> thm -> thm
  val refl : term -> thm
  val sym : thm -> thm
  val trans : thm -> thm -> thm
  val cong : string -> thm list -> thm
  val dest_thm : thm -> formula list * formula
end;;

module Proven : Birkhoff = struct
  type thm = formula list * formula
  let axiom p =
    match p with
    | Atom("=", [s; t]) -> ([p], p)
    | _ -> failwith "axiom: not an equation"
  let inst i (asm, p) = (asm, formsubst i p)
  let refl t = ([], Atom("=", [t; t]))
  let sym (asm, Atom("=", [s; t])) = (asm, Atom("=", [t; s]))
  let trans (asm1, Atom("=", [s; t])) (asm2, Atom("=", [t'; u])) =
    if t' = t then (List.append asm1 asm2, Atom("=", [s; u]))
    else failwith "trans: theorems don't match up"
  let cong f ths =
    let asms, eqs = List.split (List.map (fun (asm, Atom("=", [s; t])) -> (asm, (s, t))) ths) in
    let ls, rs = List.split eqs in
    (List.flatten asms, Atom("=", [Fn(f, ls); Fn(f, rs)]))
  let dest_thm th = th
end;;

We have the idea: a theorem is an opaque type (which prevents building 0 = 1 from scratch), with the inference rules given by the functions of the module. These functions have to be correct too… but I guess they are easier to audit than the whole ML compiler.
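To make this concrete, here is a cut-down, self-contained variant of the same idea (with minimal `term`/`formula` types invented for illustration; Harrison's actual definitions are richer, and I've dropped `axiom`, `inst` and `cong` for brevity):

```ocaml
(* Minimal term/formula types, invented here for illustration. *)
type term = Var of string | Fn of string * term list
type formula = Atom of string * term list

module type BIRKHOFF = sig
  type thm
  val refl : term -> thm
  val sym : thm -> thm
  val trans : thm -> thm -> thm
  val dest_thm : thm -> formula list * formula
end

module Proven : BIRKHOFF = struct
  type thm = formula list * formula
  let refl t = ([], Atom ("=", [t; t]))
  let sym (asm, f) =
    match f with
    | Atom ("=", [s; t]) -> (asm, Atom ("=", [t; s]))
    | _ -> failwith "sym: not an equation"
  let trans (asm1, f1) (asm2, f2) =
    match f1, f2 with
    | Atom ("=", [s; t]), Atom ("=", [t'; u]) when t = t' ->
        (asm1 @ asm2, Atom ("=", [s; u]))
    | _ -> failwith "trans: theorems don't match up"
  let dest_thm th = th
end

(* Outside the module, the only way to obtain a Proven.thm is through
   the rules; writing ([], Atom ("=", ...)) where a thm is expected
   is rejected by the type checker. *)
let th = Proven.trans (Proven.refl (Var "x")) (Proven.sym (Proven.refl (Var "x")))
(* Proven.dest_thm th = ([], Atom ("=", [Var "x"; Var "x"])) *)
```

The signature ascription `Proven : BIRKHOFF` is what hides the representation; without it, any pair of type `formula list * formula` would count as a theorem.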

Then, instead of:

Users can write arbitrarily complex ML programs to compute theorems; the validity of theorems does not depend on the complexity of such programs, but follows from the soundness of the abstract data type implementation and the correctness of the ML compiler.

We should have read:

Users can write arbitrarily complex ML programs to compute theorems; the validity of theorems does not depend on the complexity of such programs, but follows from the soundness of the abstract data type implementation, the correctness of the ML compiler, and the inference rules associated with the data type.

(Perhaps the author considered the inference rules part of the data type implementation.)


That certainly looks like it. I cannot think of anything else the expression “soundness of the abstract data type implementation” would refer to. (If you exclude all the inference rules from the “implementation”, there is absolutely no code left, so talking about soundness would not even make sense.)

As implausible as it may seem, checking the correctness of a proof and checking the well-typedness of a term are one and the same problem. This is a well-known result: the Curry–Howard correspondence. As far as I understand, LCF is not grounded on this principle, but some proof assistants (Rocq, Agda…) are. And it is also the case for languages such as OCaml or Haskell.

Suppose that I want to prove that A or B if and only if B or A; I can do this in OCaml. To do so, I identify the proposition A or B with the type ('a, 'b) Either.t. Then I can trivially prove that if A or B then B or A with this OCaml term (which is a proof term):

let thm : ('a, 'b) Either.t -> ('b, 'a) Either.t = function
  | Either.Left v -> Either.Right v
  | Either.Right v -> Either.Left v

In the same way, the type of function application is the rule of modus ponens: if P then Q, but P, therefore Q.

let modus_ponens (major : 'p -> 'q) (minor : 'p) : 'q = major minor

Where you can see that applying a theorem to its hypotheses is the same as applying a function to its arguments.
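For instance (a toy example of my own; `or_intro` is a hypothetical name for the disjunction-introduction "theorem" P → P or Q):

```ocaml
(* Modus ponens is just function application. *)
let modus_ponens (major : 'p -> 'q) (minor : 'p) : 'q = major minor

(* Disjunction introduction, P -> P or Q, as an OCaml term. *)
let or_intro (p : 'p) : ('p, 'q) Either.t = Either.Left p

(* Applying the "theorem" to its hypothesis yields a proof of P or Q. *)
let result : (int, string) Either.t = modus_ponens or_intro 42
(* result = Either.Left 42 *)
```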

And if you want to show that A and (B or C) is equivalent to (A and B) or (A and C), you can use this OCaml term:

let prop : 'a * ('b, 'c) Either.t -> ('a * 'b, 'a * 'c) Either.t = function
  | x, Either.Left y -> Either.Left (x, y)
  | x, Either.Right y -> Either.Right (x, y)

The typing rules used by the OCaml type checker in the above examples are exactly the same as the deduction rules used by a logician to justify the correctness of the proofs.


I appreciate your enthusiasm but

As far as I understand LCF is not grounded on this principle

I would really like to discuss LCF-style systems in this thread, where there is an abstract data type of proofs. I have even heard it suggested that in an LCF-style system the concrete implementation of the proof type could be unit: as long as the module system keeps the type abstract, so that the valid proof rules are the only way to construct it, soundness is still guaranteed. I think this claim is made in an old blog post by Lawrence Paulson, but I cannot recall which.
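A hedged sketch of that idea (a toy Hilbert-style kernel of my own devising, not Paulson's code): the proof evidence carried by a theorem is literally `()`. The conclusion still has to be stored so it can be read back, but nothing about *why* it holds is recorded; soundness rests entirely on the abstraction barrier and the rules.

```ocaml
type form = Atomic of string | Imp of form * form

module type KERNEL = sig
  type thm
  val axiom_k : form -> form -> thm   (* |- a -> (b -> a) *)
  val mp : thm -> thm -> thm          (* from |- a -> b and |- a, get |- b *)
  val concl : thm -> form
end

module Kernel : KERNEL = struct
  (* The "proof" component is unit: only the conclusion is kept. *)
  type thm = form * unit
  let axiom_k a b = (Imp (a, Imp (b, a)), ())
  let mp (imp, ()) (ant, ()) =
    match imp with
    | Imp (a, b) when a = ant -> (b, ())
    | _ -> failwith "mp: theorems don't match up"
  let concl (f, ()) = f
end

let a = Atomic "a" and b = Atomic "b"
let th = Kernel.axiom_k a b                              (* |- a -> (b -> a) *)
let th2 = Kernel.mp (Kernel.axiom_k (Kernel.concl th) b) th
(* Kernel.concl th2 = Imp (b, Imp (a, Imp (b, a))) *)
```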

The example given by @Frederic_Loyer discusses equational logic, I am wondering a little bit how far this approach goes. For example, could this method be used together with higher-order abstract syntax to deal with variable binding? What does a realistic abstract data type look like for first-order logic?

Furthermore, as a history question, how evolved was the ML module system by 1984 when MacQueen wrote his paper “Modules for Standard ML”? I didn’t realize that proper abstract data types (not algebraic data types/records/variants) were already possible to define in the original ML language. Or was this built into the compiler?

Lastly, it seems easy for a reader to take away some misconceptions from your post. The OCaml and Haskell type systems are inconsistent under Curry-Howard, because every type is inhabited.

let pf_false () : 'fals = 
  let rec f x : 'fals = f x 
  in f 0 ;;
val pf_false : unit -> 'fals

Then the OCaml type checker accepts that pf_false () is of type 'fals for any type 'fals.

I would say this is true for any language supporting general recursion, or a general fixpoint operator such as the one found in the lambda-mu calculus (no relation to the lambda-mu calculus of Parigot). More broadly, I would say that any Turing-complete type theory is inconsistent, but I don’t know how to make that claim precise.

Sure it is. A consistent deductive system (and the corresponding type theory) is not Turing complete. That’s why you can write this OCaml term.

I just wanted to state that proof checking and type checking are not two unrelated questions but one and the same problem. You asked this question:

Can we actually encode logics in the ML type system so that the ML type system is itself enough to guarantee that the rules of the logic are followed

and the answer is trivially yes, if we restrict ourselves to second-order propositional logic. We can for sure also prove absurd claims, but the deduction rules of propositional logic are the type-checking rules of the OCaml type checker (ML and OCaml are not proof assistants).

I can’t discuss LCF systems; I don’t think that’s the way to follow, and there is no future for these principles.

That’s how the OCaml type checker is implemented in OCaml itself, and how the Rocq type checker is implemented.

What’s wrong with LCF? It’s not the only way, but it powers multiple
proof assistants that have use in the real world (HOL light,
Isabelle/HOL, etc.). What does it mean that it has no future?

You don’t need a full module calculus to define abstract types with operations. LCF ML had an abstype declaration for this purpose, which looks very similar to what was introduced in CLU around the same time.

Here is an example of an abstract type of rational numbers with operations, taken from the LCF book:

absrat is the conversion from the representation type int#int (i.e. int * int) to the abstract type rat, and reprat is the reverse conversion. Both conversions are usable only inside the implementations of the functions declared in the abstype block.
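The book's code isn't reproduced here, but the same idea in modern OCaml (my sketch, not the LCF ML original) is an abstract type whose smart constructor normalizes the representation:

```ocaml
(* A sketch of an abstract rational type; LCF ML's abstype
   declaration played the role of this signature ascription. *)
module Rat : sig
  type rat
  val mk_rat : int -> int -> rat      (* absrat's role: int * int -> rat *)
  val dest_rat : rat -> int * int     (* reprat's role *)
  val add : rat -> rat -> rat
end = struct
  (* Invariant: positive denominator, fraction in lowest terms. *)
  type rat = int * int
  let rec gcd a b = if b = 0 then a else gcd b (a mod b)
  let mk_rat n d =
    if d = 0 then failwith "mk_rat: zero denominator"
    else
      let s = if d < 0 then -1 else 1 in
      let g = gcd (abs n) (abs d) in
      (s * n / g, s * d / g)
  let dest_rat r = r
  let add (a, b) (c, d) = mk_rat ((a * d) + (c * b)) (b * d)
end

let half = Rat.mk_rat 1 2
let third = Rat.mk_rat 1 3
(* Rat.dest_rat (Rat.add half third) = (5, 6) *)
```

Because the representation is hidden, no client can build a `rat` that violates the invariant, exactly as no client of the theorem type can build an underived theorem.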


Sure. This was actually shown in @Frederic_Loyer’s example. Notice how values of type thm carry only the statement of the theorem, not its proof.

If by the LCF approach, you mean the ability to isolate a formal system from the rest of the world by an abstraction of the metalanguage, you can go as far as you want. Below is what the calculus of constructions (without universes for the sake of readability) could look like. The critical part is the Const constructor, which bridges the gap between transparent terms and opaque proofs.

type 't term =
  Const of 't | Var of int | Type |
  Abs of 't term | Forall of 't term |
  Lam of 't term * 't term

module type COC = sig
  type thm
  val qed : thm term -> thm term -> thm
  val print : thm -> thm term
end

module Kernel: COC = struct
  type thm = { stmt: thm term }
  let check stmt typ = ...
  let qed stmt typ =
    check stmt typ;
    { stmt }
  let print t = t.stmt
end

Obviously, due to the conversion rule of COC, this is a lot more coarse-grained than in the case of, e.g., propositional logic. You cannot pass the proof bit by bit to the kernel; you need to build the whole term before turning it into an opaque theorem.

By the way, if you want to experience the LCF approach at its finest, I strongly recommend reading the code of the kernel of HOL Light. This is illuminating. (The code is sufficiently short so that it is actually feasible.)
