Dependent Types in the Purest Form

OCaml has generalized algebraic data types (GADTs), which can be seen as a restricted form of dependent types. Fully dependent types are found in programming languages like Coq, Agda, Idris, etc.
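As a rough illustration of why GADTs count as a limited form of dependency (my example, not one from the paper): the type index below lets the result type of eval depend on which constructor a value was built with, although the index can only be another type, never a run-time value.

(* The index of [_ expr] records what the expression evaluates to, so the
   return type of [eval] varies with the constructor that was used. *)
type _ expr =
  | Int  : int -> int expr
  | Bool : bool -> bool expr
  | Add  : int expr * int expr -> int expr
  | If   : bool expr * 'a expr * 'a expr -> 'a expr

let rec eval : type a. a expr -> a = function
  | Int n -> n
  | Bool b -> b
  | Add (x, y) -> eval x + eval y
  | If (c, t, e) -> if eval c then eval t else eval e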

For those who want to understand the theory behind dependent types, I have written a paper which describes, in a textbook style, dependent types in their purest form as encountered in the Calculus of Constructions.

It is a paper for mathematically oriented readers.


Nice exposition! I always enjoy reading about type theory. I have some nitpicks:

Alonzo Church added types to his lambda calculus. But his main intention was not to avoid silly expressions. He wanted typed lambda calculus to be a model for logic. This laid the basis for the Curry-Howard isomorphism. The Curry-Howard isomorphism connects two seemingly unrelated formalisms: computations and proof systems.

It’s my understanding that Church’s intended logic of the lambda calculus was different from the Curry-Howard correspondence. This article explains Church’s type theory: Church’s Type Theory (Stanford Encyclopedia of Philosophy). The type o is the type of Booleans, and Church’s logic manipulates terms of this type. In this sense, Church’s simple type theory as a logic is actually closer to the way one commonly uses true and false at runtime to express logic, than proving a theorem by inhabiting a type.

I’m also not sure if the definitions of System F and System F omega are correct. System F is the calculus that has polymorphic functions (what you describe System F omega to be). System F omega is the calculus that permits higher-kinded types, like Haskell: System F - Wikipedia

Good paper though! I’m still not good enough to read type theory proofs, maybe I’ll come back to it later…


You’re right. Church uses lambda-terms to represent propositions, and has no explicit representation for proofs. The Curry-Howard correspondence (not really an isomorphism) uses terms to represent proofs and types to represent propositions.

The two views converge in system F-omega: F-omega types are the terms (= the propositions) of Church’s simply-typed lambda-calculus; F-omega kinds are Church’s types; and F-omega terms correspond (by Curry-Howard) to proofs of propositions.

7 Likes

@alan: Interesting comments.

  • I have not been very precise with respect to Church’s typed lambda calculus. You are absolutely right: Church’s typed lambda calculus as a logic did not yet have anything to do with the Curry-Howard correspondence. I just wanted to express that the idea of using typed lambda calculus for logic eventually led to the discovery of the Curry-Howard isomorphism. I have reworked this paragraph to make it clearer.

  • I have made an error in describing system F and system F omega. Flip the two paragraphs and it is correct. System F adds polymorphic functions, i.e. functions receiving type arguments. System F omega adds on top of that type-level functions, i.e. functions which can compute types. It does not make a lot of sense (but it is possible and contained in the lambda cube) to add type-level functions before polymorphic functions. A rough OCaml analogy is sketched below.
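A rough OCaml analogy for the corrected ordering (my sketch, not the paper's): explicitly polymorphic values play the role of System F's functions receiving type arguments, while type constructors play the role of System F omega's functions computing types.

(* System F flavour: a term parametric in a type; the annotation makes the
   quantification over 'a explicit. *)
let id : 'a. 'a -> 'a = fun x -> x

(* System F omega flavour: a function at the level of types, mapping a type
   'a to the type 'a * 'a.  OCaml has no general type-level lambdas; type
   constructors (and functors) are the closest analogues. *)
type 'a pair = 'a * 'a

let swap ((x, y) : int pair) : int pair = (y, x)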

I have put a new version on my homepage which corrects these points and hopefully makes things clearer.

Thanks for your comments.

Regards
Helmut

Are there variations on COC that have the same core rules and constructions, but change, say:

  • no P at the bottom, just a hierarchy of Type_i?
    That can make sense if one is not interested in the Curry-Howard
    correspondence but just in the dependent types aspect.
  • beta/eta-reduction not being baked into the conversion relation, but
    still valid as explicit proof rules (|- (\x. t) u = t[x:=u]).

Are such variants studied at all, or known not to work, or just
dismissed?

@c-cube: Many variations and extensions of the calculus of constructions are possible. I would mention the following:

  • Martin-Löf’s type theory (MLTT): He introduced the cumulative hierarchy of sorts Type 0, Type 1, Type 2, .... As opposed to the calculus of constructions there is no impredicative sort P. Without impredicativity it is not possible to express recursion directly (see the sketch after this list); you need other means like recursors or inductive constructions.

  • Zhaohui Luo’s extended calculus of constructions (ECC): The extended calculus of constructions has an impredicative sort P and a hierarchy of predicative sorts Type 0, Type 1, Type 2, .... The ECC combines the possibilities of CC and MLTT. The proof of strong normalization, or of consistency as a logic, is significantly more complex than the corresponding proof for CC.
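To make the remark about impredicativity concrete, here is the standard impredicative (Church-style) encoding of the natural numbers in CC, written with P for the impredicative sort; this is a textbook example, not one taken from the paper:

  N    :=  ∀ X : P . X → (X → X) → X
  zero :=  λ X z s . z
  succ :=  λ n X z s . s (n X z s)

The quantification over X ranges over the very sort in which N itself lives, which is exactly what a predicative hierarchy Type 0, Type 1, ... rules out; that is why MLTT needs primitive inductive types or recursors instead.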

I don’t understand your second point. Could you elaborate it more?

Regards
Helmut

Thank you for your answer!

Indeed I will look at MLTT, thank you. I heard of it before but didn’t realize it was that close to COC.

Regarding (2), I think my phrasing is a bit confusing because I still want inference rules that are not based on well-typedness, but closer to the HOL systems (or any LCF-like system really): you build terms, and you have certain inference rules to deduce further terms from there. For example HOL Light has a built-in rule for β-reduction, but does not work modulo β: you cannot use the cut rule (a |- b), (|- a) ===> (|- b) if the terms are only β-equivalent as opposed to syntactically equal.

Similarly, one can imagine a calculus close to COC (or MLTT?) that only has a trivial (syntactic) conversion relation, and deals with β-reduction by other means. Does that make sense?

I have never used HOL or LCF; the only proof assistant I have used is Coq (and to some extent “OCaml”, since I use it as a more convenient and ergonomic version of system F-omega with Curry-Howard in mind), but it seems that LCF proofs and inference rules are based on well-typedness. At least that’s what I understand from this article on LCF history:

It’s not based on well-typedness 🙂 (well, only on the good behavior of the meta-language, i.e. OCaml or SML). Let’s assume it’s OCaml.

The rules are OCaml functions manipulating the types term and thm, where thm is an opaque type. The only way to construct values of type thm (without Obj.magic!) is to use a set of operations that ensure correctness by construction, roughly:

val assume  : term -> thm          (*  t |- t                                             *)
val refl    : term -> thm          (*  |- t = t                                           *)
val eq_bool : thm -> thm -> thm    (*  (G1 |- t = u), (G2 |- t)     ==> (G1, G2 |- u)         *)
val congr   : thm -> thm -> thm    (*  (G1 |- f = g), (G2 |- t = u) ==> (G1, G2 |- f t = g u) *)
val cut     : thm -> thm -> thm    (*  (G1 |- a), (G2, a |- b)      ==> (G1, G2 |- b)         *)
val subst   : subst -> thm -> thm  (*  instantiate variables in a theorem                 *)

and a few others to introduce boolean equality and beta-reduction. This is a subset of what HOL Light builds upon, and the trusted kernel is the implementation of these functions.
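To make the correctness-by-construction point concrete, here is a rough usage sketch; the term constructors mk_var and mk_eq are hypothetical (term construction isn't shown above), and only the thm-producing functions form the kernel:

(* Hypothetical term constructors, assumed for the sake of the sketch:
   mk_var : string -> term   and   mk_eq : term -> term -> term *)
let f = mk_var "f"
let g = mk_var "g"
let t = mk_var "t"

let th1 = assume (mk_eq f g)   (*  f = g |- f = g      *)
let th2 = refl t               (*        |- t = t      *)
let th3 = congr th1 th2        (*  f = g |- f t = g t  *)

The only way to obtain a value such as th3 is to go through these kernel functions, so every thm in scope is a theorem by construction.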

This makes the system a bit pointless when it comes to dependent types (which HOL Light does not have, which is why it does not matter there). Indeed, when you instantiate forall P, ... -> P x -> ... with some value \y. e, you don’t want to get stuck with a type ... -> (\y. e) x -> ..., as you won’t be able to apply the corresponding function to a term of type e[y := x]. So, beta-conversion is a must-have when it comes to dependent types.

But beta-reduction is only one of the many rules that a system like Coq implements during conversion. So, your question might make sense for some of these other rules. For example, what about the unfolding of global symbols, that is, should conversion automatically unfold definitions when beta-conversion fails? Or should cast operators be added to explicitly tell which definitions should be unfolded?

Nice work, but it has a major error with respect to the nature of proof. You write “A proof of A ∧ (A ⇒ B) ⇒ B the modus ponens rule is nearly trivial in this computation analogy. It is a function taking two arguments.” But modus ponens is not a function, it’s a rule of inference. Or more accurately, it is not a proposition. One way to write it is “A->B, A |= B”. The comma is critical; it is not logical conjunction.

The functional analog is function application, not a binary function. You can write it down as a rule of the calculus, but not a function in the calculus. (I.e. it’s meta).

Logically modus ponens is not the kind of thing one proves, any more than function application is something to be proven.

Recognition of the analogy (isomorphism?) between modus ponens (a logical operation) and function application (mathematical operation) is the cornerstone of Curry-Howard. At least it looks that way to me. Logically, you can prove a material implication (e.g. A → B), but you cannot use it to draw an inference without the rule of modus ponens. (Which I guess you can treat as the elimination rule for ->; it depends on the logical formalism you want to use.) Mathematically, you can define a function A->B, but you cannot use it to compute anything without the “rule” of application.
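For what it’s worth, here is how the two readings look in OCaml (my rendering, not the paper’s): the proposition A ∧ (A ⇒ B) ⇒ B is inhabited by an ordinary function, while the application f a inside its body is the spot where the meta-level rule of application (modus ponens) is actually used.

(* Curry-Howard reading: 'a and 'b stand for propositions, a pair is a
   conjunction, and the arrow is implication.  The function proves
   A ∧ (A ⇒ B) ⇒ B; the application f a uses the meta-level rule. *)
let mp ((a, f) : 'a * ('a -> 'b)) : 'b = f a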

More generally I’m not sure Church intended the typed lambda calculus as a “model for logic”. I suspect he was mostly concerned to make it work better as a functional calculus, but I could be wrong about that. Logic on the one hand and computable functions on the other were very different things in those days. If Church had been thinking of any functional calculus as a model of logic, that would mean he already had the Curry-Howard correspondence in mind, which seems unlikely to me at least. (There’s a reason it’s called Curry-Howard and not Church-Howard).

Also: p. 8: “In order to express the predicate is prime we need functions which map a number to a proposition i.e. a type.” Shouldn’t that be “functions which map a number to a proof of a proposition”? Or maybe you mean “in order to express the type of the predicate ‘is prime’”? Here I think “the predicate” is the problem: do you mean a function type or an implementation? In my experience this kind of language (specifically regarding dependent types) needs to be very, very precise in order to avoid misunderstanding, exactly because dependent types mix (so to speak) values and types (and type families…)

HTH,

Gregg

“A proof of a theorem of the system is a finite sequence of formulas, the last of which is a theorem…”

“By a proof of a formula B on the assumption of the formulas Ao1, Ao2… we shall mean a finite sequence of formulas, the last of which is B…”

  • A Formulation of the Simple Theory of Types (Church 1940)

Which is interesting because (to my knowledge) it was Gentzen who introduced the notion of “assumption” as a basic notion, in Natural Deduction, but he was in Germany, in the 30s; I wonder if Church’s use of the concept is traceable to him. He’s not cited in Church’s paper.

Evidently the lambda calculus played almost no role in Howard’s thinking until after he had grasped the isomorphism. See Howard on Curry-Howard

Maybe look into Nuprl? It’s based on Computational Type Theory, a variant of Martin-Löf Type Theory which starts with an untyped language and puts the types on top. To my understanding, it has untyped reduction rules that the user manually unfolds in proofs: Rules. I think the reason why Nuprl does this is that it allows nontermination. Another aspect is that it uses extensional definitions of equality (e.g. for functions), but instead of having a typechecker decide whether two functions are extensionally equal (which is not decidable), it has the user construct the equality proof and then use the “equality reflection” rule.

Church uses Hilbert-style formulations for his logics based on lambda-calculus (Church 1932, 1933, 1940). That was the standard approach in mathematical logic at the time. I don’t know whether Church was familiar with Gentzen’s work, but it’s clear that Church is not using natural deduction nor sequent calculus in these papers.


There’s something that I must be missing somewhere. With these primitives you are constructing proofs of theorems, but where is the statement of the theorem you prove? At that point, it seems more natural to me to consider the statement as a proposition, which in turn is a type, and the proof term as a value that is well-typed. But that’s the heart of Curry-Howard… and you seem not to be interested in it. Maybe I’m too biased and have some difficulty seeing the interest of a proof assistant not grounded in dependent types and Curry-Howard.

The second (or third?) most popular proof assistant, Isabelle/HOL, doesn’t use the CH isomorphism.

Why couldn’t one build the same thing in a dependent calculus? Formulas would be of type Bool (which is a regular type). In classical logic this type has only two values. The upside is that you get all the conveniences of classical logic, including automatic theorem provers for quite large fragments; SMT solvers; and more decision procedures since a lot of them are based on proof by refutation.

Personally I think in classical logic, so CH is of little use to me. Could that work in a dependent calculus?

Yes, thank you. I reckon the vast majority of working programmers don’t care, but as a matter of intellectual history this stuff is fascinating. I mean the very fluid boundaries between logic, mathematics, proof theory, etc. in past 100 or so years.

Sometimes the obvious is very hard to even notice. Until I read your response, I had completely ignored the nature of the logic used to express the lambda calculus. It’s so obvious, like water. It’s a calculus of functions, but it is expressed in a logic that is truth-conditional, which is not computational, and even non-logical, insofar as truth tables do not involve inference (they just state facts). Church’s language is intriguing: he’s got a section on “rules of inference”, which he parenthesizes as “rules of procedure”, but there is an enormous difference (intuitively) between inference and procedure! To this day we lack a generally accepted formalization of inference (consequence).

(Or put it this way: we had an intuitive concept of “effective procedure” (for calculating a function) long before we got Turing’s model. I’m not sure we even have a (primitive) concept of “effective inference” or “effective reasoning”.)

R. Soare wrote a wonderful series of articles arguing (among other things) that Turing’s concept of computation is more fundamental than the lambda calculus. (Unfortunately his webpage has a severe case of bitrot so it might take some effort to find them. Try this: http://www.people.cs.uchicago.edu/~soare/History/) One of his pieces of evidence is that Gödel never accepted the lambda calculus as an adequate model of the intuitive notion of “effective procedure”. I’ve never really understood why, but now it occurs to me that it may be at least partly related to the fact that the logical language used to express the lambda calculus is truth-conditional, not inherently computational. Or maybe “not inferential” is a better way of saying it. Reducing terms in the lambda calculus does not really involve inference. Substitution-by-rule is not really inference (indeed inference cannot be by rule, just ask Wittgenstein). AFAIK only Gentzen made inference fundamental (in some sense): intro and elimination rules in Natural Deduction, and then with the sequent calculus he shifted from inference from propositions to propositions to inference from inferences to inferences. Often treated as a merely technical innovation, but to me it looks like a profound and substantial shift.

What does this have to do with OCaml? Hell I don’t know, but I do know that it’s a hella lot easier to connect OCaml to fundamental concepts of computation/math/logic than just about any other language, which is one reason I like it.

Cheers,
Gregg


@mobileink You know a lot about the history of logic! Are you aware of Nuprl and Computational Type Theory and its difference from the version of type theory used by Coq, Agda, and Lean?

Your distinction between a procedure, a notion captured by a Turing machine, and an inference, is similar to the distinction between the type theory of Nuprl and Coq. In Coq, one constructs a constructive proof term and the typechecker is a decision procedure that determines whether the proof term is valid. In Nuprl, the user constructs a proof by composing rules of inference (some of which do not contain computational content), and then an executable “witness” is extracted, but this witness by itself may not contain enough information for a procedure to decide whether it proves a proposition.

The distinction is apparent in the treatment of equality. Judgemental equality is a metalogical notion and propositional equality is an “internalization” of judgemental equality as a proposition. However, in the Coq/Agda/Lean style, while one can take arbitrary steps to prove a propositional equality, judgemental equality is checked by the typechecker via evaluation of both sides. On the other hand, Nuprl has the “equality reflection” inference rule that lets the equality judgement be derived from an arbitrary proof of an equality proposition. This underlines the distinction between decision procedure (how Coq and Agda’s system works) and inference (how Nuprl’s system works). In essence, in Coq, a typechecker decides typehood, and in Nuprl, the user proves typehood with arbitrary steps.

Here is an article about Nuprl: Innovations in computational type theory using Nuprl - ScienceDirect

This paper refers to the distinction between computation and inference a la Wittgenstein that you make in your post, and the difference between Nuprl and Coq, as the distinction between synthetic and analytic judgement.

(I accidentally posted this as a main reply to the thread instead of a reply to your post, but when I tried to delete this and repost it as a reply, Discourse wouldn’t let me…)


My personal experience, having no formal training in any of this stuff, is that real understanding only comes with understanding how things came to be (I’m firmly in the camp of “things have a history”, not “things have an essence”). Unfortunately that is not so easy with most of this stuff. It’s all so new! (100 years is nothing, historically speaking.)

I have looked into Nuprl but only lightly, and it’s been a while. Time to revisit it, I guess.

In general I rely on HoTT for my thinking about types. Most of that book is completely over my head, but the first few chapters are terrific, I highly recommend them for anybody interested in the basics of type theory (not that they are without flaws).

Equality is a major hairball, which I have not yet been able to untangle. Which is pretty odd, considering that most everybody (normal) would consider it trivially obvious. Personally, I think the standard terminology (terminologies) is (are) regrettable. Making a distinction between judgmental (definitional?) and propositional equality makes it sound like we’re talking about variants of a single thing. I’m not sure that’s the case. But that’s a deep question, like asking whether things like computation and inference are one or many.

Thanks for the refs to the papers. I’ll take a close look at them. But I will register one reservation. In my view Martin-Löf’s philosophical ruminations about “judgments” are unconvincing, and the second paper you mentioned just restates them. That becomes pretty obvious when you ask hard questions; e.g. “To know X is to know …” is trivially vacuous. I don’t mean that the substance of Martin-Löf’s work is wrong, but that his philosophical explanations (which he addresses at length in a paper whose title I do not have at hand) are just bad. Computer scientists love the stuff, but try to find a single logician or philosopher who pays the slightest attention to his doctrine of “judgments”. It’s just uninteresting in the context of the fascinating work in the same area by real philosophers (Prawitz, Dummett, etc., especially Robert Brandom and Huw Price).

I think this is a classic example of the costs of specialization. Philosophers, logicians, and computer scientists seem to mostly if not completely ignore each others’ works.

Ok, so a relevant OCaml-specific question. Isn’t sig ... end a dependent type constructor?
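For what it’s worth, a hedged sketch of why one might read sig ... end that way: a later component’s type can mention an earlier abstract type component, so a module of such a type behaves like a dependent pair (a Σ-type), although the dependency stays at the level of types and modules rather than arbitrary values.

(* The type of [x] mentions the earlier abstract type [t], so a module of
   type POINTED is roughly a pair whose second component's type depends on
   the first component: a Σ-type flavour. *)
module type POINTED = sig
  type t
  val x : t
end

module Int_pointed : POINTED = struct
  type t = int
  let x = 0
end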