# Efficiency of a code using partial function evaluation

Hi,

I have a question about whether partial function application can speed up code in some neat manner.

Consider the following two pieces of code:

Code 1:

```ocaml
type me =
  | True
  | Candel of string

let rec trans a num = match a with
  | True -> num
  | Candel s -> let temp = trans True in (temp 1) + (temp 2)
```

Here, the `trans` function is partially applied once to `True`, and the resulting partial application is then applied to two integers, 1 and 2.

Compare with Code 2:

```ocaml
type me =
  | True
  | Candel of string

let rec trans a num = match a with
  | True -> num
  | Candel s -> (trans True 1) + (trans True 2)
```

Here, the full function is evaluated twice, with `(trans True 1)` and `(trans True 2)`.

My question is: suppose I have many more evaluations, like `(trans True 1) + (trans True 2) + (trans True 3) + (trans True 4)` versus `(temp 1) + (temp 2) + (temp 3) + (temp 4)`, etc. Will the two approaches have the same computational complexity, or will one perform better than the other, and why?

I am asking because if a recursive function calls itself twice in the recursive case, the complexity is typically exponential, whereas when it calls itself once it is linear, so I was wondering if partial evaluation helps out in some way.

Thank you,
Gokul

The second one will work better. One important thing is that your `trans` function actually looks like `(fun a num -> match ...)`. So, until the second argument has been applied, nothing happens. As a consequence, `let temp = trans True` is actually slower, because no computation is performed yet, but an allocation has to be performed so that the actual call can later remember that the first argument was `True`; moreover, that call is no longer a direct call to `trans`, but an indirect call through the closure containing both `trans` and `True`. So, your first variant introduces two sources of slowdown and no source of speedup.
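To see the allocation concretely, here is a tiny illustrative sketch (`add` and `inc` are made-up names, not from the thread): partially applying a curried function builds a closure that remembers the first argument, and the later call goes through that closure instead of calling the function directly.

```ocaml
(* A curried two-argument function; sugar for `fun a -> fun b -> a + b`. *)
let add a b = a + b

(* Partial application allocates a closure that captures `a = 1`;
   calling `inc` later is an indirect call through that closure. *)
let inc = add 1

let () = assert (inc 41 = 42)
```

In contrast, a saturated call like `add 1 41` compiles to a direct call with no intermediate closure.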

Here is a third variant of your code:

```ocaml
let rec trans a = match a with
  | True -> (fun num -> num)
  | Candel s -> fun num -> let temp = trans True in (temp 1) + (temp 2)
```

This time, the execution can actually progress as soon as `trans` receives its first argument. So, it is no longer obvious which of the second or third variant is the fastest one, as there is now a trade-off between slowdown (closure allocation and call) and speedup (precomputing with just one argument).
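For intuition about when the third shape pays off, here is a hypothetical example (`scale` is a made-up function, not from the thread) where real work happens after receiving only the first argument, so the precomputation runs once and every later application reuses it.

```ocaml
(* The (pretend-)expensive work on `factor` happens before `i` arrives:
   the lookup table is built once per partial application. *)
let scale factor =
  let table = Array.init 10 (fun i -> i * factor) in
  fun i -> table.(i)

let by3 = scale 3          (* table built here, once *)

let () = assert (by3 2 + by3 4 = 18)
```

If `scale` were instead written as `let scale factor i = ...`, the table would be rebuilt on every call, and partially applying it would buy nothing.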

In addition to @silene’s great explanation, I want to direct your attention to the fact that the theoretical time complexity of the algorithm does not really change between your two examples. The code is a bit contrived and your function is guaranteed to execute in O(1) as it is currently written, but assuming a general function of the shape

```
let rec f = ...
  | Base_case -> ...
  | Recursive_case -> ... (f ...) (f ...)
```

you can’t really escape its time complexity without introducing a transformation or data structure which “bookkeeps” (memoizes) computations.
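As a sketch of such a bookkeeping transformation, here is a memoized Fibonacci (an illustrative example, not the poster’s function): a `Hashtbl` caches each result, so the naive exponential double recursion becomes linear in `n`.

```ocaml
(* Naively, `go (n - 1) + go (n - 2)` is exponential; the cache makes
   each `n` computed at most once, giving linear time. *)
let fib =
  let cache = Hashtbl.create 16 in
  let rec go n =
    if n < 2 then n
    else
      match Hashtbl.find_opt cache n with
      | Some v -> v
      | None ->
          let v = go (n - 1) + go (n - 2) in
          Hashtbl.add cache n v;
          v
  in
  go

let () = assert (fib 30 = 832040)
```

Note that this changes the space complexity too: the cache retains one entry per distinct argument seen.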

Thanks for the detailed clarification.

Thanks for clarifying the time complexity as well.