Back in the day there were various attempts to write a JIT compiler for OCaml. Amongst other things, this promises a much faster toplevel.
There was the ocaml-jit project, but that was x86-only and I have since moved to x64 and then AArch64 (M1). Then, 14 years ago, there was the ocamljit2 project, which targeted OCaml 3.12 and touted huge benefits.
What is the status of JIT compilation for OCaml today?
If I remember correctly, ocamljit2 was not maintained.
Currently there is an in-progress effort to revive natdynlink and make it easier for native applications to programmatically compile code fragments and link them into the currently running program: Fast native toplevel using JIT by jeremiedimino · Pull Request #15 · ocaml/RFCs · GitHub .
That’s great, thanks!
I was actually banking on being able to use natdynlink in a future project. Good to know it isn’t working!
natdynlink is working - it’s more that ocamlnat (the native toplevel for OCaml) has been receiving some considerable attention.
Just a small thing I’d like to add: I extended @dra27’s ocaml-jit to support dynlinking directly from OCaml source without a toplevel: Allow dynlink/eval usage by copy · Pull Request #5 · NathanReb/ocaml-jit · GitHub
The patch is pretty small, mostly moving usages of the toplevel API out of the library and exposing functions to add symbols that can be used by the evaluated code. An eval example looks like this: ocaml-jit/eval.ml at 57ef5c906681562186565ff28eeb49eb8e05527e · NathanReb/ocaml-jit · GitHub
I think it’s a much lighter alternative to ocaml_plugin, which embeds the OCaml compiler in a tar archive. Feedback is welcome.
Brilliant. That’s actually much closer to what I was hoping for. Thanks!
I am wondering how much effort it would take to JIT-compile an evaluation loop with some constant arguments?
I would expect it to be simpler and have fewer dependencies than the full OCaml compilation pipeline. Maybe it already exists off the shelf in the compiler.
Here is a basic example:
type op = A | B
let eval_a x = x + 1
let eval_b x = x + x
let rec eval x = function
  | [] -> x
  | A :: l -> eval (eval_a x) l
  | B :: l -> eval (eval_b x) l
let const_eval x = eval x [ B; A; B; A ]
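To make the intent concrete, here is a runnable version of the example above (with the empty-list case written out). Since the op list [ B; A; B; A ] is a constant, the interpreted call is equivalent to a fixed composition of eval_a and eval_b, which is exactly what the JIT should produce:

```ocaml
(* Runnable version of the interpreter from the post. *)
type op = A | B

let eval_a x = x + 1   (* A adds one *)
let eval_b x = x + x   (* B doubles *)

let rec eval x = function
  | [] -> x
  | A :: l -> eval (eval_a x) l
  | B :: l -> eval (eval_b x) l

let const_eval x = eval x [ B; A; B; A ]

(* Specializing away the constant list by hand gives the same function: *)
let const_eval' x = eval_a (eval_b (eval_a (eval_b x)))

let () = assert (const_eval 1 = const_eval' 1)   (* both give 7 *)
```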
With JIT, I would like to achieve what would have been the result of the compilation of the following:
let const_eval x =
  let x1 = eval_b x in
  let x2 = eval_a x1 in
  let x3 = eval_b x2 in
  eval_a x3
I think it would mainly require three primitives – lambda, apply, and return – in order to create custom JIT functions:
type 'a t
val lambda : ('a t -> 'b t) -> ('a -> 'b) t
val apply : ('a -> 'b) -> 'a t -> 'b t
val return : 'a t -> 'a
let comp_eval l : int -> int =
  let rec unroll l x = match l with
    | [] -> x
    | A :: l -> unroll l (apply eval_a x)
    | B :: l -> unroll l (apply eval_b x)
  in
  return (lambda (unroll l))
let const_eval = comp_eval [ B; A; B; A ]
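As a sanity check on the proposed interface, the three primitives can be given a trivial implementation with 'a t = 'a (no compilation at all; a real JIT would generate machine code inside lambda/return). This is only a sketch to confirm that the signature type-checks and that comp_eval agrees with the plain interpreter:

```ocaml
type op = A | B

let eval_a x = x + 1
let eval_b x = x + x

(* Identity implementation of the proposed primitives: no code is
   generated, [comp_eval] merely builds a chain of closures. *)
module Jit : sig
  type 'a t
  val lambda : ('a t -> 'b t) -> ('a -> 'b) t
  val apply : ('a -> 'b) -> 'a t -> 'b t
  val return : 'a t -> 'a
end = struct
  type 'a t = 'a
  let lambda f = f
  let apply f x = f x
  let return x = x
end

let comp_eval l : int -> int =
  let open Jit in
  let rec unroll l x = match l with
    | [] -> x
    | A :: l -> unroll l (apply eval_a x)
    | B :: l -> unroll l (apply eval_b x)
  in
  return (lambda (unroll l))

let const_eval = comp_eval [ B; A; B; A ]
let () = assert (const_eval 1 = 7)   (* B;A;B;A on 1: 2, 3, 6, 7 *)
```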
You should have a look at MetaOCaml, which is basically an implementation of this idea – staged programming, where you first explicitly construct program fragments and then you “run” them.
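For illustration, in BER MetaOCaml the unrolling above would look roughly like the following (an untested sketch; it requires the BER MetaOCaml compiler, where .< e >. builds a code value, .~ splices one in, and Runcode.run compiles a code value at runtime):

```ocaml
(* Rough BER MetaOCaml sketch, not standard OCaml. [unroll] builds a
   code value for the fully specialized body; [Runcode.run] then
   compiles it to native code at runtime. *)
let comp_eval l : int -> int =
  let rec unroll l x = match l with
    | [] -> x
    | A :: l -> unroll l .< eval_a .~x >.
    | B :: l -> unroll l .< eval_b .~x >.
  in
  Runcode.run .< fun x -> .~(unroll l .< x >.) >.

let const_eval = comp_eval [ B; A; B; A ]
(* generates roughly: fun x -> eval_a (eval_b (eval_a (eval_b x))) *)
```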
This is what I was looking for, thank you @gasche.
It would be awesome if it also exposed an API through a standalone library, but I will definitely start playing with it.
It would be awesome to hear about progress and plans regarding BER MetaOCaml.
I was just thinking the same thing! I started Googling MetaOCaml last night because I’m in need of some staged metaprogramming and, last I tried it, MetaOCaml was freaking awesome.
Crazy idea: what if there was a jit intrinsic that took a closure and gave you a JIT-compiled version with the body of the function partially specialized over the closure’s environment? Wouldn’t that offer more benefit with less complexity?