[ANN] memprof-limits preview (and a guide to handle asynchronous exceptions)

Dear OCamlers, I am happy to pre-announce memprof-limits, an implementation of per-thread global memory limits, and per-thread allocation limits à la Haskell, compatible with systhreads.

Memprof-limits interrupts the execution by raising an asynchronous exception, an exception that can arise at almost any location in the code. I also announce a guide on how to recover from asynchronous exceptions and other unexpected exceptions, which you will find in the documentation. It summarises knowledge acquired in OCaml with the Coq proof assistant as well as in other programming languages. To my knowledge this has never been covered in OCaml textbooks, so I thought it might be of general interest to you. This research is part of a wider work aiming to regulate the use of asynchronous exceptions in OCaml, in coordination with the multicore language designers.
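
To give a flavour of the problem, consider a function that acquires a resource: if an asynchronous exception can arrive at almost any point, the release must happen whether the function returns normally or raises, and the exception must then be re-raised rather than swallowed. A minimal sketch, illustrative only and not a summary of the guide:

```ocaml
(* Illustrative sketch: release the resource whether [f] returns or raises.
   [Fun.protect] runs [finally] in both cases and re-raises the original
   exception, so an asynchronous interruption is not silently swallowed. *)
let with_file path f =
  let ic = open_in path in
  Fun.protect ~finally:(fun () -> close_in_noerr ic) (fun () -> f ic)
```

The harder part is making sure that any shared state the interrupted computation was mutating is either restored or discarded as a whole; this is the kind of question the guide addresses.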

Global memory limits let you bound the memory consumption inside specific parts of your program, measured in terms of the memory used by the whole program. They are inspired by this other post, but come in a form readily available for use with systhreads.
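
To give an idea of the intended use, here is a minimal sketch; the names and the unit of the limit below are indicative only, so please refer to the documentation for the exact interface:

```ocaml
(* Indicative sketch; see the documentation for the exact interface and
   units (the whole-program limit is assumed to be given in KiB here). *)
let () = Memprof_limits.start_memprof_limits ()
let () = Memprof_limits.set_global_memory_limit (1024 * 1024)  (* ~1 GiB *)

(* [handle_request] stands for your own processing code. *)
let handle_request (request : string) = String.uppercase_ascii request

let serve request =
  match Memprof_limits.limit_global_memory (fun () -> handle_request request) with
  | Ok response -> Some response
  | Error _ -> None  (* interrupted: the whole program went over the limit *)
```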

Allocation limits let you bound the execution of parts of the program, measured in number of allocations, analogous to the same feature in Haskell advocated in a nice post by Simon Marlow. Allocation limits count allocations but not deallocations; they are therefore a measure of the work done, which can be more suitable than execution time.
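
Likewise for allocation limits, another indicative sketch (the unit of the limit and the returned allocation count are assumptions; check the documentation):

```ocaml
(* Indicative sketch; the limit is assumed to be expressed in thousands of
   words, and the successful result is assumed to also carry an estimate of
   the words actually allocated. *)
let run_bounded (task : unit -> 'a) : 'a option =
  match Memprof_limits.limit_allocations ~limit:10_000L task with
  | Ok (result, allocated_kw) ->
    Printf.printf "done, about %Ld kw allocated\n" allocated_kw;
    Some result
  | Error _ -> None  (* the task did more work than allowed and was interrupted *)
```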

Memprof-limits, as the name suggests, uses the upcoming Memprof engine from OCaml 4.11, with a sampling rate low enough not to affect performance. A reimplementation of the Memprof interface that is compatible with memprof-limits running at the same time is provided for profiling needs.

Memprof-limits is available on the public opam repository, but it depends on OCaml 4.11, which at the moment is only available from the beta opam repository. It is experimental, for reasons explained in the manual.

FAQ

“Is it wise to rely on the statistical nature of Memprof? If I set an allocation limit of 100 KB, and run a function that allocates exactly 50 KB, then the function might fail, due to the random nature of Memprof.”

Memprof-limits is provided with a statistical analysis meant to help you choose appropriate values for the limit depending on a target safe allocation value. (Nice pictures omitted because Discuss does not support SVG.)
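
The model behind the analysis is easy to state (a sketch only; the documentation contains the actual analysis and recommended values). Memprof samples each allocated word independently with probability λ, the sampling rate, so after a task has allocated A words the number of samples N is approximately Poisson-distributed with mean λA, and a limit of L words triggers once N ≥ λL. For a task that stays within A ≤ L words, the false-positive probability is therefore the Poisson tail:

```latex
P(\text{false positive}) \approx P(N \ge \lambda L)
  = \sum_{k \ge \lceil \lambda L \rceil} e^{-\lambda A} \frac{(\lambda A)^k}{k!}
```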

Long story short, memprof-limits becomes accurate enough starting from a safe allocation value of around 100 KB with the default sampling rate (meaning a limit of 1 to 3 MB depending on the chosen precision), with the ratio between the maximal safe allocation and the limit dropping very quickly for higher values. Conversely, the analysis shows that limits under 500 KB are unreliable.

I have found that the statistical nature of Memprof makes it very easy to reason about its behaviour without having to factor in runtime implementation details. Nevertheless, Memprof is deterministic, which is both essential and useful for reproducing runs in test scenarios.

“But can we really program with memprof-limits, that is, not only write programs but also reason about them, given the probabilistic nature of the guarantees?”

Yes, if we make two additional hypotheses:

  1. Allocation limits (as in Haskell) are set by determining the peak reasonable allocation usage empirically and picking a limit with a comfortable margin above it, rather than by computing a precise memory bound to be used as the limit. In very controlled environments where the latter would be possible, there would probably be better solutions anyway, and the language this feature is inspired from makes it very hard to predict memory use in the first place.
  2. The programmer is fine with a very unlikely possibility of a false positive; indeed, the program is already designed to let true positives fail without bringing down mission-critical parts of the program. For instance, they may prefer to see a legitimate client have its connection closed once every 10ⁿ years, for an n of their choosing, if that is the price to pay for not being subject to denial of service through maliciously-crafted requests.

Under these hypotheses, the statistical limit is just as reliable as the precise limits à la Haskell.

“Is it possible to also implement local memory limits, to bound the memory consumption of a particular function?”

Yes, but read on.

Yang & Mazières (2014) advocates in favour of an allocator-pays model of cost attribution, and note its similarity with memory profiling. In this model, it is possible for instance to process untrusted user input under some memory limit, before the result is distributed to the rest of the program.

Implementing memory limits based on the allocator-pays model, by adapting allocation limits to take deallocations into account, would be very easy thanks to the facilities provided by Memprof. Moreover, the statistical analysis of allocation limits can be transposed, and guarantees similar accuracy at a low runtime cost for limits greater than 100 KB.
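
As an illustration of the idea (and not of memprof-limits' actual implementation), here is roughly what the allocator-pays accounting looks like with the OCaml 4.11 Gc.Memprof interface; per-thread attribution and the interruption itself are left out:

```ocaml
(* Sketch of allocator-pays accounting with Gc.Memprof (OCaml 4.11):
   add the samples of each tracked block when it is allocated, subtract
   them when it is deallocated, and compare the running total to a limit. *)
let sampling_rate = 1e-4
let live_samples = ref 0
let limit_in_samples = 100  (* roughly limit_in_words * sampling_rate *)

let check () =
  if !live_samples > limit_in_samples then
    (* memprof-limits would interrupt the thread owning the limit here;
       printing keeps the sketch self-contained *)
    prerr_endline "local memory limit exceeded"

let add n = live_samples := !live_samples + n; check ()
let remove n = live_samples := !live_samples - n

let tracker : (int, int) Gc.Memprof.tracker =
  Gc.Memprof.{
    alloc_minor = (fun a -> add a.n_samples; Some a.n_samples);
    alloc_major = (fun a -> add a.n_samples; Some a.n_samples);
    promote = (fun n -> Some n);  (* promoted blocks stay accounted for *)
    dealloc_minor = remove;
    dealloc_major = remove;
  }

let () = Gc.Memprof.start ~sampling_rate tracker
```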

There is one surprising difficulty, though, which has to do with the way the GC works. The GC has a space overhead: memory that is wasted because unreachable values are not collected immediately. This overhead has to be taken into account when choosing the limit. However, the overhead is non-local and depends on the total major heap size: one cannot just say “take double the desired limit”. Indeed, active threads will pay for memory that has been allocated in the past and kept alive. More experimentation is needed to provide guidance on how to take the space overhead into account.

“Can this be used to bound the consumption of lightweight threads in Lwt and Async?”

It is straightforward to make memprof-limits parametric in the notion of thread id used to track per-thread limits. However, to the best of my knowledge, Lwt and Async are not meant to play well with computations interrupted by asynchronous exceptions. If you have more information about this limitation, or are interested in experimenting, please get in touch.

Thanks

Thank you to Jacques-Henri Jourdan for his explanations about Memprof and to Stephen Dolan for his feedback.

(edit: better-working links)

What is the use case for a layman?