Fastest build + exec in watcher?

I am currently executing

dune exec bin/Ffi_Gen.exe -w

with bin/dune :

(executable
 (name Ffi_Gen)
 (preprocess (pps ppx_jane))
 (libraries core))

The goal here is: as I write code, the dune watcher tries to rebuild + execute on the fly.

Since this is in dev mode, I care mostly about rapid feedback.

Is the above setup fairly optimal, or are there config options to speed up the rebuild + rerun?

Thanks!

How are you “checking” if the executable is behaving how you expect? Depending on that you may be able to turn that part into an automated cram test.
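For example, a cram test could look roughly like this (a sketch, untested; it assumes the executable gets a (public_name ffi_gen), that cram is enabled via (cram enable) in dune-project, and that the generator writes foo.ml into the current directory):

test/
  ; dune
  (cram
   (deps %{bin:ffi_gen}))

  ; gen.t — the “$” lines are commands, the rest is the expected output
  $ ffi_gen
  $ cat foo.ml
  let f x = x

Then dune runtest (or dune build @runtest -w) re-checks the generated contents on every change.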

I did not explain the problem well at all. Let me try again.

Ffi_Gen, when executed, writes to:

/path1/foo.ml
/path2/bar.rs

foo.ml creates ML ↔ JS conversions for some records/structs defined by Ffi_Gen

bar.rs creates Rust ↔ JS conversions for the same records/structs

Ffi_Gen ensures that the ML/JS and Rust/JS bindings are compatible, i.e. we can do ML → JS → Rust and Rust → JS → ML.

XY problem: typed OCaml / Rust interop

EDIT: This is basically like poor man’s atdgen
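
To give a flavor, the generated foo.ml contains something along these lines (a hypothetical shape, with the JS side simplified to a JSON-ish value; not the real output):

(* simplified flavor of what Ffi_Gen emits into foo.ml *)
type point = { x : int; y : int }

(* machine-generated tag; bar.rs emits the same constant on the Rust side *)
let point_type_id = 7

let point_to_json { x; y } =
  `Assoc [ ("type_id", `Int point_type_id); ("x", `Int x); ("y", `Int y) ]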

If you are somehow asserting that foo.ml and bar.rs have the correct contents, I often use cram for this. If you just need to generate them and then use them… have you considered using a dune rule to do that?

No. I’m living dangerously by “if it compiles, it’s (probably) right”.

It is not clear what this buys me.

We have the following:

types.spec: contains the original specs
conv.ml: the program that converts the specs

out_ocaml.ml: OCaml output
out_rust.rs: Rust output

except that I intend to combine types.spec and conv.ml into one file, encoding the “types” of types.spec as values in conv.ml (it’s something I rarely modify, and the only thing I care about is machine-generated ints for the type_ids, so all the tags / layouts line up).
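
Concretely, I imagine something along these lines (an illustrative sketch, not the real code):

(* sketch: types.spec encoded as ordinary values inside conv.ml *)
type field = { f_name : string; f_ty : string }
type record_spec = { name : string; fields : field list }

let specs =
  [ { name = "point";
      fields = [ { f_name = "x"; f_ty = "int" }; { f_name = "y"; f_ty = "int" } ] } ]

(* type_ids are assigned mechanically, so the ML and Rust tags always line up *)
let specs_with_ids = List.mapi (fun type_id spec -> (type_id, spec)) specs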

If you combine the spec and the conv.ml into a single file, doesn’t that mean you won’t need to run the separate dune exec bin/ffi_gen.exe -w any more? The conv.ml will just be part of your normal dune build, right?

EDIT: or, if you are planning to keep conv.ml as a program that generates both OCaml and Rust code, then the benefit of using a dune rule is that the generated foo.ml file will be defined as part of your regular build: you can use the Foo module in your project, and dune will take care of regenerating both foo.ml and bar.rs if they get out of date. You won’t need to run dune exec conv.exe -w separately.

I am going to preface this by saying: I know Cargo; I know Makefiles. I do NOT know dune. With dune, I have a “nearest neighbor” understanding, not a “from first principles” understanding. So it is entirely possible I am misunderstanding what you are saying here.

Every time I modify types.spec (part of conv.ml), I need to execute conv.ml in order to have it generate /path1/out_ocaml.ml and /path2/out_rust.rs.

I do not see how this is done without a “dune exec”; nor do I see the benefit of making this a rule.

If you have time, could I trouble you for a dune + conv.ml example, where conv.ml just writes “hello ocaml” to /path1/ocaml.ml and “hello rust” to /path2/rust.rs?

Right now, I have no idea what this “alternative to dune exec” looks like.

Sorry for chiming in; my very rough understanding is that the binary a rule runs is a dependency of that rule, so if the source of the rule’s binary changes, dune will rebuild and re-execute the binary so that the result of the rule stays fresh with respect to the rule’s dependencies.

Sorry, I think this is where my lack of foundational dune understanding is showing:

Choice A: implement this as a rule in a dune file

Choice B: separate dune project, with a dune exec bin/Conv.exe -w

Besides saving a Linux watcher process, it is not obvious to me what Choice A offers over Choice B.

Here’s a simplified example.

conv/
  (* conv.ml *)
  let () =
    let ml = open_out_bin "foo.ml"
    and rs = open_out_bin "foo.rs" in
    output_string ml "let f x = x";
    output_string rs "pub fn f(x: i32) -> i32 { x }";
    close_out ml;
    close_out rs

  ; dune
  (executable
   (name conv))

ml/
  ; dune
  (executable
   (name ml))

  (rule
   (mode (promote (only foo.rs)))
   (targets foo.ml foo.rs)
   (action
    (run ../conv/conv.exe)))

  (* ml.ml *)
  let () = print_int (Foo.f 1)

This writes the foo.rs in the ml directory as well. But you can, I think without too much trouble, configure it to save in any directory you prefer. It will be produced and saved as a side effect of running the dune build.
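
For example, I believe promote takes an (into <dir>) option, so something like this should drop foo.rs elsewhere (untested; ../rust is a placeholder, and the destination has to live inside the workspace):

(rule
 (mode (promote (only foo.rs) (into ../rust)))
 (targets foo.ml foo.rs)
 (action
  (run ../conv/conv.exe)))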

  1. Thank you for the working example. I am now convinced the ‘rule’ approach is better.

  2. This is a bit “too” magical right now. How is dune inferring the dependencies?

When I change conv.ml, foo.rs gets updated. What is happening here?

2a. From the ‘rule’ line, dune has to infer that “foo.rs” depends on “../conv/conv.exe”

2b. Then, from the “../conv/conv.exe” part, dune has to somehow infer that this depends on “conv.ml” ← this part is a bit magical to me

2c. Is it making the inference in 2b, or is it that dune build automatically rebuilds conv.exe whenever conv.ml changes, and the updated conv.exe triggers the rule to run again?

Thanks!

So the chain of events is:

  1. Run dune exec ./ml/ml.exe
  2. Dune understands that ml.ml depends on foo.ml
  3. By the rule in the same directory, it knows to run ../conv/conv.exe to generate foo.ml
  4. By the executable stanza in conv/dune, it knows to build conv/conv.exe from conv/conv.ml.

In my opinion, dune is at heart not much different from make. It operates on a system of targets and dependencies. Usually we don’t think about it because dune already encodes the rules for building OCaml sources, but for codegen like this we make them explicit.
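
To make the analogy concrete, a generic dune rule corresponds almost line for line to a make rule (a schematic sketch, not tied to the example above):

; make would write:  out.txt: in.txt
;                        cp in.txt out.txt
(rule
 (targets out.txt)
 (deps in.txt)
 (action (copy in.txt out.txt)))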

I actually wasn’t running this step. I was running dune build -w, and every time I modified conv.ml, it apparently recompiled it, ran it, and put foo.rs into the right location (via the promote).

Anyway, thanks for sharing this example; this is really cool and motivates / demystifies rules quite a bit. Thanks!
