Evaluate for a fixed amount of time or fail

I’d like to invoke a game-playing module’s “get_move” procedure to ask an AI player to make a move, but I don’t want to wait forever. The normal (no-timing) use would be something like this:

   let state2 = next_state state1 (get_move state1)

In my best of all worlds, the “timed” use would look like this:

let state2 = 
   match timed (fun () ->  get_move state1) 4 with
   | Timeout -> failwith "..."
   | Success m -> next_state state1 m

where “timed”'s second argument is the number of seconds I’m willing to wait to get an answer.

It appears that Async is probably the right thing to use here (although maybe it’s now Lwt), but it’s sufficiently complex and general that I’m overwhelmed. Quickly scanning the chapter in Real World OCaml suggests to me that the particular bit I need may involve Deferred.any, but that’s about as far as I got.

I’m assuming that the thing I’m looking for is some standard OCaml idiom; if it’s not, I can hack my way around it with some shell-scripting for my particular application. On the other hand, if it IS a simple idiom, my Google-fu isn’t strong enough to find it, and I’d appreciate any help I can get. By the way, this doesn’t need to be perfect in any sense. “About 4 seconds of clock time” is all that I need here: if the timeout occurred after 3.5 or 4.5 clock-seconds, or after 4 CPU seconds, or anything at all like that, I’d be totally happy and could adjust the time-limit to my needs.


By the way, it also appears that Clock.with_timeout might be the thing I need, but with documentation like this:

with_timeout span d does pretty much what one would expect.

it’s hard to know. Then again, I sort of like that documentation. One could use it for every procedure one writes. :slight_smile:

There’s a fundamental problem with using Async or Lwt for what you describe here. That’s because both of those are cooperative threading libraries, and what you want here is to time out if a given function runs for too long, which is considerably trickier.

What one might do here depends a fair bit on the context, so maybe you can say a bit more about what you’re trying to achieve? You could run the code in a separate thread, but terminating threads cleanly is tricky to do in a portable way (which is why Java retracted support for this feature long ago). A multi-process solution will give you cleaner termination semantics, but is a little more work to set up.

So understanding what this is for would help.


And the first sentence of the documentation is a little laughable, but it does have a little more than that.

[screenshot of the remainder of the Clock.with_timeout documentation]

Anyway, we can fix that comment easily enough…

OK — here’s a bit more context. Most of my students have individually written a “Game” module to represent the game “Connect 4” and have written an “AI player” that chooses a move to make, given the current state of the game. (Others have chosen some other game.) We have a “referee” that lets two players compete: human-human, human-ai, or ai-ai (where the student’s AI player competes with itself). The students have mostly managed to write AI players that can beat them (easily!) when playing on a 6 x 8 board, which is sort of fun.

I’d like to run a ‘tournament’ between different students’ AI players. With a little fooling around and bash-hacking, I can arrange to tie together any two students’ representations of the game with a revised “referee”, and have them play against each other, but if one student does a depth-12 game-tree search and another does depth-3, the first will likely win…but might take a very long time to do so. Thus I want to give them a computation bound, and say “you’ve gotta produce a move within 4 seconds or it’s a forfeit.” (Otherwise watching the tournament during class will be a lot like watching paint dry…)

Hence, you see my need for running a procedure for a bounded amount of time, but you can also see why precise bounds aren’t a big deal — there’s nothing more than a chocolate bar and a certain amount of bragging rights resting on the outcome.

Is that enough context to make clear my needs?

I think for that kind of use case, where you don’t really care about resources or whether the function keeps running in another thread, you can use a pair of worker threads. Make one thread wait until the timeout with Thread.delay, make the other thread run the AI code, and have the main thread wait on an Event.channel that both worker threads will write to when they’re done. Then your program can decide the winner and just exit, even though the AI code may still be running in another thread.
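A minimal sketch of that two-worker-thread scheme, assuming the stdlib’s threads library (compile with the threads.posix library in dune, or threads.cma directly). The Timeout/Success result type just mirrors the interface the original post asked for:

```ocaml
(* Result type matching the interface sketched in the original post. *)
type 'a timed_result = Timeout | Success of 'a

(* [timed f secs] runs [f ()] in a worker thread and returns [Success x]
   if it finishes within [secs] seconds, [Timeout] otherwise. *)
let timed (f : unit -> 'a) (secs : float) : 'a timed_result =
  let ch = Event.new_channel () in
  (* worker 1: run the AI code and send its answer *)
  ignore (Thread.create (fun () -> Event.sync (Event.send ch (Success (f ())))) ());
  (* worker 2: sleep until the deadline, then send Timeout *)
  ignore (Thread.create (fun () -> Thread.delay secs; Event.sync (Event.send ch Timeout)) ());
  (* main thread: whichever message arrives first wins *)
  Event.sync (Event.receive ch)
```

Note that the losing thread stays blocked on its send (or keeps computing) forever, so a long tournament leaks one thread per decided move; for a classroom referee that decides and exits, that’s probably acceptable.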


That sounds right. You can try using Unix signals, but I believe OCaml’s signal handlers won’t trigger if the code is caught in a loop that doesn’t allocate.

If you need better semantics than Leo’s approach allows, I think the best solution is to start up a separate process to run the AI. It can be relatively simple if you can pass in the state as an s-expression on the command line and read back the result from stdout.
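For illustration, a hedged sketch of the parent side of that design, assuming a hypothetical player executable that takes the state s-expression as its single command-line argument and prints its move on stdout (the executable name and one-line protocol are made up here):

```ocaml
(* [ask_player ~exe ~state_sexp ~secs] runs [exe state_sexp] as a child
   process and returns [Some move] if it prints an answer within [secs]
   seconds, killing it and returning [None] otherwise.  Unix-only. *)
let ask_player ~exe ~state_sexp ~secs : string option =
  let r, w = Unix.pipe () in
  (* child's stdout goes into our pipe *)
  let pid = Unix.create_process exe [| exe; state_sexp |] Unix.stdin w Unix.stderr in
  Unix.close w;
  (* wait for output to appear, but no longer than [secs] seconds *)
  let ready, _, _ = Unix.select [ r ] [] [] secs in
  let answer =
    match ready with
    | [] ->
        Unix.kill pid Sys.sigkill; (* too slow: forfeit *)
        None
    | _ ->
        let buf = Bytes.create 1024 in
        let n = Unix.read r buf 0 1024 in
        Some (String.trim (Bytes.sub_string buf 0 n))
  in
  ignore (Unix.waitpid [] pid);
  Unix.close r;
  answer
```

This is only a sketch: it assumes the whole answer arrives in one short read, and a player that exits without printing anything would come back as Some "".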



Btw, John de Goes recently gave an interesting talk ( https://www.youtube.com/watch?v=wi_vLNULh9Y ) about designing a new IO type for Scala where he covered exactly this scenario. It’s interesting to note that pretty much every async library is broken w.r.t. being able to cancel a running thread (his new IO type fixes this problem).


Actually, come to think of it, the threaded version fails in the same way as the sigalrm version, so maybe that’s just simpler.

open Core

let rec takes_too_long n =
  ignore [n;n]; (* ensure some allocation *)
  if 3 > 4 then "impossible" else takes_too_long (n + 1)

exception Timed_out

let () =
  printf "starting thing!\n%!";
  Signal.Expert.handle Signal.alrm (fun (_:Signal.t) -> 
     printf "Timed out"; exit 1);
  ignore (Unix.alarm 1 : int);
  print_endline (takes_too_long 0)

Thanks. I think that this’ll probably be enough to make things work for me. I’m pretty sure there’ll be allocation going on in the “next_move” procedure, too, in case that’s really needed.

I’ll be back tomorrow if things don’t work out. :slight_smile:

If it doesn’t work out, an alternate design would be to fork the AIs off as subprocesses that you then communicate with over a pipe. The Unix module is enough for all of this, with pipe to create a communications channel, select to wait for I/O with a timeout, and of course kill to punish tardiness.

Example: https://gist.github.com/techate/b7fcc92b84349959656d45f32bc65510
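A self-contained inline sketch of that fork/pipe/select/kill design, under the assumption that a move fits in one short write to the pipe:

```ocaml
(* [timed_in_subprocess f secs] runs [f ()] in a forked child process;
   the child writes its answer into a pipe, and the parent waits at most
   [secs] seconds before killing it.  Unix-only (requires fork). *)
let timed_in_subprocess (f : unit -> string) (secs : float) : string option =
  let r, w = Unix.pipe () in
  match Unix.fork () with
  | 0 ->
      (* child: compute, write the answer, and exit *)
      Unix.close r;
      let s = f () in
      ignore (Unix.write_substring w s 0 (String.length s));
      Unix.close w;
      exit 0
  | pid ->
      Unix.close w;
      (* parent: wait for the pipe to become readable, with a timeout *)
      let ready, _, _ = Unix.select [ r ] [] [] secs in
      let result =
        match ready with
        | [] ->
            Unix.kill pid Sys.sigkill; (* tardiness is a forfeit *)
            None
        | _ ->
            let buf = Bytes.create 1024 in
            let n = Unix.read r buf 0 1024 in
            Some (Bytes.sub_string buf 0 n)
      in
      ignore (Unix.waitpid [] pid);
      Unix.close r;
      result
```

Unlike the thread version, killing the child actually reclaims its CPU time, and the waitpid avoids leaving zombies behind.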

I had a similar use case some time back when I wanted to run a tail-recursive computation for some fixed time. The solution I’m currently using involves manually inserting possible breakpoints (Lwt uses cooperative threading):

let myfunc foo =
  let (>>=) = Lwt.Infix.(>>=) in
  (* [pause] takes a thunk so the recursive call isn't evaluated
     before the yield point is reached *)
  let pause g = Lwt_main.yield () >>= g in
  let b0 = ... in
  let rec f a b =
      match a with
      | 0 -> b
      | _ -> let a', b' = ... in pause (fun () -> f a' b')
  in
  f foo b0


let comp = Lwt.map (fun a -> Some a) (myfunc input) in
let timeout =
    let%lwt () = Lwt_unix.sleep 3.0 in
    Lwt.return None in
match Lwt_main.run (Lwt.pick [comp; timeout]) with
  | Some bar -> ...
  | None -> ... (* timeout *)

I realize that this isn’t quite what you’re looking for but it is a good one-off solution that works.
