Hi all, as part of a benchmark I’m trying to dispatch a large number of requests (~1M) whose dispatch times come from time-series data.
The constraint, unfortunately, is that I need the latency of each individual request (so I can analyse them afterwards), and I’m having some difficulty actually getting that.
I’ve tried a couple of different approaches (`Lwt_stream.iter_p`, a custom array-based iteration), but I keep hitting different stack overflows (mainly list-based ones).
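For context, the `Lwt_stream.iter_p` version was shaped roughly like this (a sketch rather than my exact code, using the `apply` dispatch function shown below; `requests` stands in for the stream of `(start, f)` pairs built from the data, and `results` for the accumulator):

```ocaml
(* collect one latency per request, running the handlers concurrently *)
let run_stream requests results =
  Lwt_stream.iter_p
    (fun (start, f) ->
      let%lwt latency = apply start f in
      results := latency :: !results;
      Lwt.return_unit)
    requests
```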
Is there a good standard approach to this kind of problem that I just don’t know about?
Each dispatch looks something like:

```ocaml
let apply start f =
  (* sleep until the scheduled dispatch time *)
  let%lwt () = Lwt_unix.sleep (start -. Unix.gettimeofday ()) in
  (* then time the request itself *)
  let start = Unix.gettimeofday () in
  let%lwt () = f () in
  Unix.gettimeofday () -. start |> Lwt.return
```
The aim is that each call to `f` occurs at a known time taken from the time-series input.
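To make the intended usage concrete, the driver needs to be shaped something like this (a sketch, assuming `schedule` is a list of `(start, f)` pairs):

```ocaml
(* naive driver: dispatch every (start, f) pair concurrently and keep
   every latency; this shape has to scale to ~1M entries *)
let run_all schedule =
  Lwt_list.map_p (fun (start, f) -> apply start f) schedule
```

and it’s accumulating ~1M results into a list like this that seems to be where the stack overflows come from.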
I’ve tried using `Lwt_list.iter_n ~max_concurrency`, but it seems to artificially limit the throughput.
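That attempt was roughly as follows (with `record` a stand-in for however the latency gets stored, and the concurrency cap arbitrary):

```ocaml
let run_capped schedule record =
  (* bound the number of in-flight requests; the cap itself appears to
     throttle the overall dispatch rate *)
  Lwt_list.iter_n ~max_concurrency:1000
    (fun (start, f) ->
      let%lwt latency = apply start f in
      record latency;
      Lwt.return_unit)
    schedule
```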
I’ve also tried using a single generator thread, but that implementation ended up being quite low-throughput as well… (though that may have been for other reasons, tbh)
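By a generator thread I mean something like the sketch below: one sequential walker over the schedule that sleeps until each start time and detaches the actual request with `Lwt.async`, so the walk never waits on a response (again, a rough reconstruction rather than the exact code):

```ocaml
let generate schedule record =
  Lwt_list.iter_s
    (fun (start, f) ->
      (* walk the schedule in order, sleeping until each dispatch time *)
      let%lwt () = Lwt_unix.sleep (start -. Unix.gettimeofday ()) in
      (* fire the request off asynchronously so the generator keeps moving *)
      Lwt.async (fun () ->
        let t0 = Unix.gettimeofday () in
        let%lwt () = f () in
        record (Unix.gettimeofday () -. t0);
        Lwt.return_unit);
      Lwt.return_unit)
    schedule
```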