How do I reduce the complexity of this algorithm that takes the first and last elements?

With 1 000 000 elements I got a stack overflow… so I decreased the list size.

As Knuth presumably said, “premature optimization is the root of all evil”. In other words, do not worry about the cost of individual operations until you have made sure your code is not doing a large number of useless operations. For example, you are allocating and reversing the complete input list, yet you will use only half of its elements. Here is a constant-space implementation (assuming the input list is not kept alive by someone else):

let arrange s =
  (* interleave u and v into acc, two elements per step, alternating which
     list contributes first *)
  let rec merge u v acc =
    match u, v with
    | [], [] -> acc
    | uh::ul, vh::vl -> merge vl ul (vh :: uh :: acc)
    | _, _ -> assert false in
  (* walk halfway through l (n decreases by 2 per element consumed), returning
     the remaining second half, the reversed first half, and the middle
     element if the length is odd *)
  let rec split_rev n l acc =
    match n, l with
    | 0, _ -> (l, acc, [])
    | 1, h::t -> (t, acc, [h])
    | _, [] -> assert false
    | _, h::t -> split_rev (n - 2) t (h :: acc) in
  let l = List.length s in
  let (u, v, w) = split_rev l s [] in
  (* swap the halves depending on the parity of the number of merge steps,
     so that the original first element ends up at the head of the result *)
  let (u, v) = if l land 2 = 0 then (v, u) else (u, v) in
  merge u v w
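For what it’s worth, a quick sanity check (my addition, not part of the original answer): merge, split_rev and List.length are all tail-recursive, so the list size that overflowed the stack earlier poses no problem:

(* 1 000 000 elements: no stack overflow expected, since every recursive
   call above is a tail call *)
let () =
  let big = List.init 1_000_000 (fun i -> i) in
  ignore (arrange big)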

@silene, I think yours is the best solution!
I had the same idea of splitting the list first, but on the other hand the website says “you should not mutate S”, perhaps indicating that the user wants to reuse it afterwards…
Even so, yours remains the best solution, because it is 2x faster than mine!

That’s very interesting; it seems that [@tail_mod_cons] is able to avoid a final List.rev? But I don’t understand how that is possible.

The final List.rev is due to accumulating in a list, hence putting the elements encountered first at the bottom of the accumulator. @tail_mod_cons rewrites the non-tail-recursive call (x :: y :: loop ...) into something similar to loop (acc @ [x; y]) but more efficient: it treats the list being built as if it were mutable and appends at the end, as you would do with a mutable linked list. (This is the high-level gist of it.) So you don’t need to reverse at the end.
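To make the difference concrete, here is a minimal sketch of my own (not from the thread), using a map function as the example; [@tail_mod_cons] requires OCaml 4.14 or later:

(* Accumulator version: elements get consed in reverse order,
   so a final List.rev is needed. *)
let map_acc f l =
  let rec loop acc = function
    | [] -> List.rev acc
    | x :: tl -> loop (f x :: acc) tl
  in
  loop [] l

(* [@tail_mod_cons] version: written like the naive, non-tail-recursive map;
   the compiler turns the recursive call under the cons into a loop,
   so no reversal is needed. *)
let[@tail_mod_cons] rec map_tmc f = function
  | [] -> []
  | x :: tl -> f x :: map_tmc f tl

Both return the same thing, e.g. map_acc succ [1; 2; 3] = map_tmc succ [1; 2; 3] = [2; 3; 4].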


That’s good to know, thanks!
On the other hand it feels almost “bad”, because programming tail-recursive
loops has become “second nature” for me, and now I have to learn that in some cases it’s better to write the non-tail-recursive version and use the [@tail_mod_cons] annotation :grimacing:

I personally think of tail recursion modulo cons as a relaxed form of tail recursion. (Which it is, as the name implies.) We have learned to use tail recursion because smart implementations of functional languages perform it without unnecessary space cost. We should be able to learn to use tail_mod_cons for the same reason—although the trick is even smarter.


The idea of [@tail_mod_cons] is that, if we examine the generated assembly code, we get a plain loop.

Then a function like this one:

let[@tail_mod_cons] rec f l =
  let x, l' = ... in
  x :: f l'

would have a hidden pointer to the cell where the result should be stored. The expression x :: f l' is then interpreted as: (1) overwrite the pointed-to result cell with x :: tmp, (2) make the result pointer point to tmp, (3) loop. So, just like with tail-recursive functions, there is no “call/ret” pair in the generated assembly, simply a “jmp”.

In Scheme, this could be emulated using a set-cdr! procedure.
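Here is a rough sketch of that destination-passing idea (my own illustration, not the compiler’s actual output), written with an explicit mutable cell type and refs instead of OCaml’s built-in immutable lists, so the “fill the hole, then loop” step is visible:

(* Mutable cons cells, so the "hole" can be overwritten in place. *)
type 'a cell = Nil | Cons of 'a * 'a cell ref

(* map in destination-passing style: dst points to the hole where the rest
   of the result must be written; the recursive call is an ordinary tail
   call, so this compiles to a loop. *)
let map_dps f l =
  let rec loop dst l =
    match l with
    | Nil -> dst := Nil
    | Cons (x, tl) ->
        let hole = ref Nil in
        dst := Cons (f x, hole);  (* fill the current cell *)
        loop hole !tl             (* loop, passing the new hole *)
  in
  let result = ref Nil in
  loop result l;
  !result

The real transformation writes into the not-yet-filled field of a freshly allocated list cell, something ordinary OCaml code cannot do safely, which is why it has to be done by the compiler.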

The OCaml manual chapter “The ‘Tail Modulo Constructor’ program transformation” explains how [@tail_mod_cons] works. The dst.idx <- dst'; there is equivalent to a (set-cdr! dst dst') in Scheme.