OCaml has generalized algebraic data types (GADTs). GADTs are a special case of dependent types. Full dependent types are used in programming languages like Coq, Agda, Idris, etc.
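
To make the connection concrete for readers who have not used them, here is a minimal GADT sketch of my own (not from the paper): a tiny typed expression language in which each constructor pins down the type parameter, so `eval` can return an int or a bool depending on the index.

```ocaml
(* Illustrative only: a GADT indexes each expression by its result type,
   so the evaluator needs no runtime tags and no catch-all cases. *)
type _ expr =
  | Int  : int -> int expr
  | Bool : bool -> bool expr
  | Add  : int expr * int expr -> int expr
  | If   : bool expr * 'a expr * 'a expr -> 'a expr

(* The locally abstract type [type a.] is required for GADT pattern
   matching: each branch refines [a] to int or bool as appropriate. *)
let rec eval : type a. a expr -> a = function
  | Int n -> n
  | Bool b -> b
  | Add (x, y) -> eval x + eval y
  | If (c, t, e) -> if eval c then eval t else eval e

let () = assert (eval (If (Bool true, Add (Int 1, Int 2), Int 0)) = 3)
```

This is "a special case of dependent types" in the sense that the type of the result depends on the value of the constructor, though OCaml types can never depend on arbitrary runtime values.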

For those who want to understand the theory behind dependent types, I have written a paper which describes, in textbook style, dependent types in their purest form as encountered in the Calculus of Constructions.

It is a paper for mathematically oriented readers.

Nice exposition! I always enjoy reading about type theory. I have some nitpicks:

Alonzo Church added types to his lambda calculus. But his main intention was not to avoid silly expressions. He wanted typed lambda calculus to be a model for logic. This laid the basis for the Curry-Howard isomorphism. The Curry-Howard isomorphism connects two seemingly unrelated formalisms: computations and proof systems.

It's my understanding that Church's intended logic of the lambda calculus was different from the Curry-Howard correspondence. This article explains Church's type theory: Church's Type Theory (Stanford Encyclopedia of Philosophy). The type o is the type of Booleans, and Church's logic manipulates terms of this type. In this sense, Church's simple type theory as a logic is actually closer to the way one commonly uses true and false at runtime to express logic, than proving a theorem by inhabiting a type.

I'm also not sure if the definitions of System F and System F omega are correct. System F is the calculus that has polymorphic functions (what you describe System F omega to be). System F omega is the calculus that permits higher-kinded types, like Haskell: System F - Wikipedia

Good paper though! I'm still not good enough to read type theory proofs, maybe I'll come back to it later…

You're right. Church uses lambda-terms to represent propositions, and has no explicit representation for proofs. The Curry-Howard correspondence (not really an isomorphism) uses terms to represent proofs and types to represent propositions.

The two views converge in system F-omega: F-omega types are the terms (= the propositions) of Church's simply-typed lambda-calculus; F-omega kinds are Church's types; and F-omega terms correspond (by Curry-Howard) to proofs of propositions.

I have not been very precise with respect to Church's typed lambda calculus. You are absolutely right. Church's typed logic calculus did not yet have anything to do with the Curry-Howard correspondence. I just wanted to express that the idea of using typed lambda calculus for logic finally led to the discovery of the Curry-Howard isomorphism. I have reworked this paragraph to make it clearer.

I have made an error in describing system F and system F omega. Flip the two paragraphs and it is correct. System F adds polymorphic functions i.e. functions receiving type arguments. System F omega adds on top of that polymorphic types i.e. functions which can compute types. It does not make a lot of sense (but it is possible and contained in the lambda cube) to add polymorphic types before polymorphic functions.

I have put a new version on my homepage which corrects these things and hopefully makes it clearer.

Are there variations on COC, that have the same core rules and
constructions, but change, say:

no P at the bottom, just a hierarchy of Type_i ?
That can make sense if one is not interested in the Curry-Howard
correspondence but just in the dependent types aspect.

beta/eta-reduction not being baked in the conversion relation, but
still valid as explicit proof rules (|- (\x. t) u = t[x:=u]).

Are such variants studied at all, or known not to work, or just
dismissed?

@c-cube: There are many variations and extensions of the calculus of constructions possible. I would mention the following:

Martin-Löf's type theory (MLTT): He introduced the cumulative hierarchy of sorts Type 0, Type 1, Type 2, .... As opposed to the calculus of constructions there is no impredicative sort P. Without impredicativity it is not possible to express recursion directly. You need other means like recursors or inductive constructions.

Zhaohui Luo's extended calculus of constructions (ECC): The extended calculus of constructions has an impredicative sort P and a hierarchy of predicative sorts Type 0, Type 1, Type 2, .... The ECC has all possibilities of CC and MLTT. The proof of strong normalization or consistency as a logic is significantly more complex than the same proof in CC.
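
To illustrate the remark that, without direct recursion, one works through recursors: here is a hypothetical OCaml transcription of the natural-number recursor and of addition defined through it. (OCaml itself has general recursion, so `nat_rec` is defined recursively here; in MLTT the recursor is a primitive of the theory.)

```ocaml
(* Illustrative sketch: Peano naturals and their recursor.
   nat_rec z s n folds s over the structure of n, starting from z. *)
type nat = Zero | Succ of nat

let rec nat_rec (z : 'a) (s : nat -> 'a -> 'a) (n : nat) : 'a =
  match n with
  | Zero -> z
  | Succ m -> s m (nat_rec z s m)

(* Addition defined via the recursor rather than by direct recursion. *)
let add m n = nat_rec n (fun _ acc -> Succ acc) m

let () =
  assert (add (Succ (Succ Zero)) (Succ Zero) = Succ (Succ (Succ Zero)))
```

Every structurally recursive function on nat can be phrased this way, which is what makes the recursor an adequate substitute for direct recursion in a predicative theory.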

I don't understand your second point. Could you elaborate on it a bit more?

Indeed I will look at MLTT, thank you. I heard of it before but didn't realize it was that close to COC.

Regarding (2), I think my phrasing is a bit confusing because I still want inference rules that are not based on well-typedness, but closer to the HOL systems (or any LCF-like system really): you build terms, and you have certain inference rules to deduce further terms from there. For example, HOL Light has a builtin rule for β-reduction, but does not work modulo β: you cannot use the cut rule (a |- b), (|- a) ===> (|- b) if the terms are only β-equivalent as opposed to syntactically equal.

Similarly, one can imagine a calculus close to COC (or MLTT?) that has only a trivial (syntactic) conversion relation, and deals with β-reduction by other means. Does that make sense?

I have never used HOL or LCF; the only proof assistant I have used is Coq (and to some extent OCaml, since I use it as a more convenient and ergonomic version of system F-omega with Curry-Howard in mind), but it seems that LCF proofs and inference rules are based on well-typedness. At least that's what I understand from this article on LCF history:

It's not based on well-typedness (well, only on the good behavior of the meta-language, i.e. OCaml or SML). Let's assume it's OCaml.

The rules are OCaml functions manipulating the types term and thm, where thm is an opaque type. The only way to construct values of type thm (without Obj.magic!) is to use a set of operations that ensure correctness by construction, roughly:

val assume : term -> thm (* t |- t *)
val refl : term -> thm (* |- t=t *)
val eq_bool : thm -> thm -> thm (* (G1 |- t=u), (G2 |- t) ==> (G1, G2 |- u) *)
val congr : thm -> thm -> thm (* (G1 |- f=g), (G2 |- t=u) ==> (G1, G2 |- f t = g u) *)
val cut : thm -> thm -> thm (* (G1 |- a), (G2, a |- b) => (G1,G2 |- b) *)
val subst : subst -> thm -> thm

and a few others to introduce boolean equality and beta-reduction. This is a subset of what HOL light builds upon, and the trusted kernel is the implementation of these functions.
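
For illustration only, here is a hypothetical, heavily simplified OCaml sketch of this correctness-by-construction idea. It is not HOL Light's actual kernel: I substitute a toy implication fragment (assume/intro/mp) for the equational rules listed above. The point is only that thm is abstract, so clients can build theorems exclusively through the exported rules.

```ocaml
(* Hypothetical LCF-style kernel sketch. The signature hides the
   representation of thm, so every value of type thm is, by
   construction, a theorem derived by the rules below. *)
module type KERNEL = sig
  type term = Var of string | Imp of term * term
  type thm                        (* abstract: hypotheses |- conclusion *)
  val assume : term -> thm        (* t |- t *)
  val intro : term -> thm -> thm  (* from G |- b, derive G - {a} |- a -> b *)
  val mp : thm -> thm -> thm      (* (G1 |- a -> b), (G2 |- a) ==> (G1, G2 |- b) *)
  val dest : thm -> term list * term  (* inspect, but not construct *)
end

module Kernel : KERNEL = struct
  type term = Var of string | Imp of term * term
  type thm = term list * term     (* hypotheses, conclusion *)
  let assume t = ([ t ], t)
  let intro a (hyps, c) = (List.filter (( <> ) a) hyps, Imp (a, c))
  let mp (h1, c1) (h2, c2) =
    match c1 with
    | Imp (a, b) when a = c2 -> (h1 @ h2, b)
    | _ -> invalid_arg "mp: not an implication"
  let dest th = th
end

let () =
  let open Kernel in
  (* prove |- a -> a by discharging the assumption a |- a *)
  let a = Var "a" in
  let th = intro a (assume a) in
  assert (dest th = ([], Imp (a, a)))
```

Outside the Kernel module, the constructor of thm is invisible, so (short of Obj.magic) the OCaml type system guarantees that no ill-derived theorem can exist.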

This makes the system a bit pointless when it comes to dependent types (which HOL Light does not have, hence why it does not matter there). Indeed, when you instantiate forall P, ... -> P x -> ... with some value \y. e, you don't want to get stuck with a type ... -> (\y. e) x -> ..., as you won't be able to apply the corresponding function to a term of type e[y := x]. So, beta-conversion is a must-have when it comes to dependent types.

But beta-reduction is only one of the many rules that a system like Coq implements during conversion. So, your question might make sense for some of these other rules. For example, what about the unfolding of global symbols, that is, should conversion automatically unfold definitions when beta-conversion fails? Or should cast operators be added to explicitly tell which definitions should be unfolded?

Nice work, but it has a major error with respect to the nature of proof. You write "A proof of A ∧ (A → B) → B, the modus ponens rule, is nearly trivial in this computation analogy. It is a function taking two arguments." But modus ponens is not a function, it's a rule of inference. Or more accurately, it is not a proposition. One way to write it is "A -> B, A |= B". The comma is critical; it is not logical conjunction.

The functional analog is function application, not a binary function. You can write it down as a rule of the calculus, but not as a function in the calculus. (I.e., it's meta.)

Logically modus ponens is not the kind of thing one proves, any more than function application is something to be proven.

Recognition of the analogy (isomorphism?) between modus ponens (a logical operation) and function application (a mathematical operation) is the cornerstone of Curry-Howard. At least it looks that way to me. Logically, you can prove a material implication (e.g. A → B), but you cannot use it to draw an inference without the rule of modus ponens. (Which I guess you can treat as the elimination rule for ->; it depends on the logical formalism you want to use.) Mathematically, you can define a function A -> B, but you cannot use it to compute anything without the "rule" of application.
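
To put this point in OCaml terms (my own sketch): one can internalize modus ponens as an ordinary function, but application itself remains a rule of the calculus, something you use rather than a term you define.

```ocaml
(* Under Curry-Howard, a value of type 'a -> 'b is a proof of an
   implication, and a value of type 'a is a proof of its antecedent.
   The internalized modus ponens is just application, eta-expanded: *)
let modus_ponens (f : 'a -> 'b) (x : 'a) : 'b = f x

(* A "proof" that int implies string, applied to a "proof" of int. *)
let () = assert (modus_ponens string_of_int 42 = "42")
```

Of course the body of modus_ponens is itself an application, which is exactly the distinction made above: the application rule lives at the meta level, in the semantics of the calculus, and the function merely internalizes one use of it.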

More generally, I'm not sure Church intended the typed lambda calculus as a "model for logic". I suspect he was mostly concerned to make it work better as a functional calculus, but I could be wrong about that. Logic on the one hand and computable functions on the other were very different things in those days. If Church had been thinking of any functional calculus as a model of logic, that would mean he already had the Curry-Howard correspondence in mind, which seems unlikely to me at least. (There's a reason it's called Curry-Howard and not Church-Howard.)

Also: p. 8: "In order to express the predicate is prime we need functions which map a number to a proposition i.e. a type." Shouldn't that be "functions which map a number to a proof of a proposition"? Or maybe you mean "in order to express the type of the predicate 'is prime'"? Here I think "the predicate" is the problem: do you mean a function type or an implementation? In my experience this kind of language (specifically regarding dependent types) needs to be very, very precise in order to avoid misunderstanding, exactly because dependent types mix (so to speak) values and types (and type families…)

"A proof of a theorem of the system is a finite sequence of formulas, the last of which is a theorem…"

"By a proof of a formula B on the assumption of the formulas Ao1, Ao2… we shall mean a finite sequence of formulas, the last of which is B…"

A Formulation of the Simple Theory of Types (Church 1940)

Which is interesting because (to my knowledge) it was Gentzen who introduced the notion of "assumption" as a basic notion, in Natural Deduction, but he was in Germany, in the 30s; I wonder if Church's use of the concept is traceable to him. He's not cited in Church's paper.

Maybe look into Nuprl? It's based on Computational Type Theory, a variant of Martin-Löf Type Theory which starts with an untyped language and puts the types on top. To my understanding, it has untyped reduction rules that the user manually unfolds in proofs: Rules. I think the reason why Nuprl does this is that it has nontermination. Another aspect is that it uses extensional definitions of equality (e.g. for functions), but instead of having a typechecker decide whether two functions are extensionally equal (which is not decidable), it has the user construct the equality proof, and then uses the "equality reflection" rule.

Church uses Hilbert-style formulations for his logics based on lambda-calculus (Church 1932, 1933, 1940). That was the standard approach in mathematical logic at the time. I don't know whether Church was familiar with Gentzen's work, but it's clear that Church is using neither natural deduction nor sequent calculus in these papers.

There's something I must be missing somewhere. With these primitives you are constructing proofs of theorems, but where is the statement of the theorem you prove? At this point, it seems more natural to me to consider the statement as a proposition, which in turn is a type, and the proof term as a value that is well-typed. But it's the heart of Curry-Howard… and you seem to not be interested in it. Maybe I'm too biased and have some difficulty seeing the interest of a proof assistant not grounded in dependent types and Curry-Howard.

The second (or third?) most popular proof assistant, Isabelle/HOL, doesn't use the CH isomorphism.

Why couldn't one build the same thing in a dependent calculus? Formulas would be of type Bool (which is a regular type). In classical logic this type has only two values. The upside is that you get all the conveniences of classical logic, including automatic theorem provers for quite large fragments; SMT solvers; and more decision procedures, since a lot of them are based on proof by refutation.

Personally I think in classical logic, so CH is of little use to me. Could that work in a dependent calculus?

Yes, thank you. I reckon the vast majority of working programmers don't care, but as a matter of intellectual history this stuff is fascinating. I mean the very fluid boundaries between logic, mathematics, proof theory, etc. in the past 100 or so years.

Sometimes the obvious is very hard to even notice. Until I read your response, I had completely ignored the nature of the logic used to express the lambda calculus. It's so obvious, like water. It's a calculus of functions, but it is expressed in a logic that is truth-conditional, which is not computational, and even non-logical, insofar as truth-tables do not involve inference (they just state facts). Church's language is intriguing - he's got a section on "rules of inference", which he parenthesizes as "rules of procedure" - but there is an enormous difference (intuitively) between inference and procedure! To this day we lack a generally accepted formalization of inference (consequence).

(Or put it this way: we had an intuitive concept of "effective procedure" (for calculating a function) long before we got Turing's model. I'm not sure we even have a (primitive) concept of "effective inference" or "effective reasoning".)

R. Soare wrote a wonderful series of articles arguing (among other things) that Turing's concept of computation is more fundamental than the lambda calculus. (Unfortunately his webpage has a severe case of bitrot so it might take some effort to find them. Try this: http://www.people.cs.uchicago.edu/~soare/History/) One of his pieces of evidence is that Gödel never accepted the lambda calculus as an adequate model of the intuitive notion of "effective procedure". I've never really understood why, but now it occurs to me that it may be at least partly related to the fact that the logical language used to express the lambda calculus is truth-conditional, not inherently computational. Or maybe "not inferential" is a better way of saying it. Reducing stuff in the lambda calculus does not really involve inference. Substitution-by-rule is not really inference (indeed inference cannot be by rule, just ask Wittgenstein). AFAIK only Gentzen made inference fundamental (in some sense): intro and elimination rules in Natural Deduction, and then with the sequent calculus he shifted from inference from propositions to propositions, to inference from inferences to inferences. Often treated as a merely technical innovation, but to me it looks like a profound and substantial shift.

What does this have to do with OCaml? Hell, I don't know, but I do know that it's a hella lot easier to connect OCaml to fundamental concepts of computation/math/logic than just about any other language, which is one reason I like it.

@mobileink You know a lot about the history of logic! Are you aware of Nuprl and Computational Type Theory and its difference from the version of type theory used by Coq, Agda, and Lean?

Your distinction between a procedure, a notion captured by a Turing machine, and an inference, is similar to the distinction between the type theory of Nuprl and that of Coq. In Coq, one constructs a constructive proof term and the typechecker is a decision procedure that determines whether the proof term is valid. In Nuprl, the user constructs a proof by composing rules of inference (some of which do not contain computational content), and then an executable "witness" is extracted, but this witness by itself may not contain enough information for a procedure to decide whether it proves a proposition.

The distinction is apparent in the treatment of equality. Judgemental equality is a metalogical notion and propositional equality is an "internalization" of judgemental equality as a proposition. However, in the Coq/Agda/Lean style, while one can take arbitrary steps to prove a propositional equality, judgemental equality is checked by the typechecker via evaluation of both sides. On the other hand, Nuprl has the "equality reflection" inference rule that lets the equality judgement be derived from an arbitrary proof of an equality proposition. This underlines the distinction between decision procedure (how Coq and Agda's systems work) and inference (how Nuprl's system works). In essence, in Coq a typechecker decides typehood, and in Nuprl the user proves typehood with arbitrary steps.

This paper refers to the distinction between computation and inference a la Wittgenstein that you make in your post, and the difference between Nuprl and Coq, as the distinction between synthetic and analytic judgement.

(I accidentally posted this as a main reply to the thread instead of a reply to your post, but when I tried to delete this and repost it as a reply, Discourse wouldn't let me…)

My personal experience, having no formal training in any of this stuff, is that real understanding only comes with understanding how things came to be (I'm firmly in the camp of "things have a history", not "things have an essence"). Unfortunately that is not so easy with most of this stuff. It's all so new! (100 years is nothing, historically speaking.)

I have looked into Nuprl, but only lightly and it's been a while. Time to revisit it, I guess.

In general I rely on HoTT for my thinking about types. Most of that book is completely over my head, but the first few chapters are terrific, I highly recommend them for anybody interested in the basics of type theory (not that they are without flaws).

Equality is a major hairball, which I have not yet been able to untangle. Which is pretty odd, considering that most everybody (normal) would consider it trivially obvious. Personally, I think the standard terminology (terminologies) is (are) regrettable. Making a distinction between judgmental (definitional?) and propositional equality makes it sound like we're talking about variants of a single thing. I'm not sure that's the case. But that's a deep question, like asking whether things like computation and inference are one or many.

Thanks for the refs to the papers. I'll take a close look at them. But I will register one reservation. In my view Martin-Löf's philosophical ruminations about "judgments" are unconvincing. The second paper you mentioned just restates them. Pretty obvious when you ask hard questions; e.g. "To know X is to know …" is trivially vacuous. I don't mean that the substance of Martin-Löf's work is wrong, but that his philosophical explanations (which he addresses at length in a paper whose title I do not have at hand) are just bad. Computer scientists love the stuff, but try to find a single logician or philosopher who pays the slightest attention to his doctrine of "judgments". It's just uninteresting in the context of the fascinating work in the same area by real philosophers (Prawitz, Dummett, etc., especially Robert Brandom and Huw Price).

I think this is a classic example of the costs of specialization. Philosophers, logicians, and computer scientists seem to mostly if not completely ignore each others' works.

Ok, so a relevant OCaml-specific question. Isnât sig ... end a dependent type constructor?