[slightly edited for a number of typos and clarity]

Obviously we can differ on this, but … I don’t see how there’s a difference here. I’ll give the example of Church numerals: the type is:

`churchnat = (A)(A -> A) -> (A -> A)`

there are two inhabitants [oops, I’m lapsing into constructivist lingo – two basic typed terms from which all others can be constructed]: `fun f x -> x` and `fun n f x -> f (n f x)`. We abbreviate these two terms `O` and `S`. The ADT would be `type nat = O | S of nat`.

How do we compute over `nat`? I’ll do it two ways, once with the ADT, and then with the Church-numeral formulation.

## The ADT way

If it were an ADT, we would compute by match+letrec. But that’s not the primitive induction combinator that is derived from the inductively defined type (e.g. what Coq would derive automatically). The latter is something like [again, making this up from memory]

```
(P: nat -> Type) ((n: nat) P n -> P (S n)) -> (P O) -> (n: nat) (P n)
```

And when you extract the realizability term in most constructive logics, you get the match+letrec, of the form:

```
let induction_combinator = fun indstep basecase ->
  let rec indrec n = match n with
    | O -> basecase
    | S n' -> indstep (indrec n')
  in indrec
```

[hope I got that right]
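For concreteness, here is that extracted combinator as a complete, runnable OCaml sketch, with a couple of illustrative uses (`to_int` and `add` are my own names, not from the original):

```ocaml
type nat = O | S of nat

(* the realizer: iteration (simple primitive recursion) over nat *)
let induction_combinator indstep basecase =
  let rec indrec n = match n with
    | O -> basecase
    | S n' -> indstep (indrec n')
  in
  indrec

(* illustrative uses of the combinator *)
let to_int = induction_combinator (fun k -> k + 1) 0
let add m = induction_combinator (fun k -> S k) m

let () =
  assert (to_int (S (S (S O))) = 3);
  assert (to_int (add (S O) (S (S O))) = 3)
```

Note that this realizer is pure iteration: `indstep` only sees the result of the recursive call, not `n` itself, which matches the shape of the induction principle above.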

## The Church-encoding way

But that’s all assuming the datatype is defined as an ADT, yes? Let’s do it again with the Church-numeral formulation:

Recall that `O = fun f x -> x` and `S = fun n f x -> f (n f x)`. And the type was `churchnat = (A) (A -> A) -> (A -> A)`.

How do we express the equivalent of above? We just take the expressions for the basecase and the indstep, and pass them as arguments to the churchnat, viz.

`n indstep basecase`

Notice how, if the churchnat is really `O`, then you get `basecase`, and if it’s (let’s say) `S (S (S O))`, then you’ll get `indstep (indstep (indstep basecase))`. Notice how, just as with tagless-final (as it was described), you cannot do direct pattern-matching on the ADT (b/c there isn’t any – you can only access the result of the inductive computation on the subsidiary values in the tree, or well-founded order generally).
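As a concrete sketch of that (in OCaml, with my own illustrative names – the original uses `O`/`S` directly):

```ocaml
(* Church numerals: a number *is* its own iterator *)
let zero = fun _f x -> x
let succ n = fun f x -> f (n f x)

(* computing is just application: n indstep basecase *)
let to_int n = n (fun k -> k + 1) 0

let three = succ (succ (succ zero))

let () =
  assert (to_int zero = 0);
  assert (to_int three = 3)
```

There is no `match` anywhere on the Church side: `to_int` hands the basecase and indstep to the numeral, and the numeral does the “recursion” itself.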

## But the two ways are equivalent

If it’s not obvious where I’m going, I’ll add one more little bit: the two ways of doing this are completely interchangeable. There’s a transformation called “defunctionalization”, due to Reynolds (in his seminal paper “Definitional Interpreters for …” – not gonna look it up; that should be sufficient to get the search hit), that explains how you can start with (computations over) the `churchnat` and get (computations over) the `nat`. (He does it for a CPS semantics-based definitional interpreter of a language, progressively turning it into an abstract machine – pretty much the CEK machine, which is where Felleisen starts.)

And of course, the function `nat -> churchnat` is trivial.
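A sketch of that trivial direction – and of its inverse, which is just applying the Church numeral to the ADT’s own constructors (function names are mine):

```ocaml
type nat = O | S of nat

(* nat -> churchnat: fold the ADT into its own iterator *)
let rec church_of_nat n f x = match n with
  | O -> x
  | S n' -> f (church_of_nat n' f x)

(* churchnat -> nat: apply the iterator to the ADT constructors *)
let nat_of_church c = c (fun k -> S k) O

let () =
  let two = S (S O) in
  assert (nat_of_church (church_of_nat two) = two)
```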

We can pretend that we’re doing “dynamic selection based upon the constructor”, but if that’s hidden away in the automatically-generated primitive-recursion induction combinator, then just what are we eschewing? And when we can see that using the primitive-recursion induction combinator is strictly less expressive than just using match+letrec directly, again, what’s the gain?
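One standard illustration of that expressiveness gap (my example, not from the original): predecessor is a single constant-time `match`, but via the iteration combinator alone it needs the pair trick and takes linear time:

```ocaml
type nat = O | S of nat

(* with match: one constant-time case analysis *)
let pred_match n = match n with O -> O | S n' -> n'

(* iteration only: thread a (pred, self) pair through the fold;
   the standard pair trick, linear in n *)
let iter indstep basecase =
  let rec go n = match n with O -> basecase | S n' -> indstep (go n') in
  go

let pred_iter n = fst (iter (fun (_, self) -> (self, S self)) (O, O) n)

let () =
  let three = S (S (S O)) in
  assert (pred_match three = S (S O));
  assert (pred_iter three = S (S O))
```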

And if we’re not doing “dynamic selection based upon the constructor”, but it is happening at runtime, b/c the closures are implemented as records with tags, and those tags are used to select which code paths to take, then how is that not the same thing?