Custom tests template dune

Could anyone give a template for custom OCaml tests? There is no resource on any website for how to set them up. I basically need three components: a dune file, tests.ml, and tests.expected. The expected output is too long for inline tests, so I want to keep the expected output in a separate file, with the test execution script for functions from multiple .ml files collected in one big file.

Hi @dutingda – welcome to the community :))

First of all, I’m not an expert on expect tests, so someone may have a nicer workflow than what I suggest. Also, I’m trying not to assume too much prior knowledge, so if I seem verbose don’t take it the wrong way. But here are a few solutions that hopefully help.

Custom Expect-like Tests

As far as I can see, expect tests are essentially just diffing the results of functions. The PPX allows you to do this succinctly inline.
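
For reference, an inline expect test with ppx_expect looks something like this (a minimal sketch; it assumes your library’s dune stanza enables inline tests, i.e. has (inline_tests) and (preprocess (pps ppx_expect))):

(* The expected output lives right next to the code under test *)
let%expect_test "addition" =
  print_int (10 + 5);
  [%expect {| 15 |}]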

This workflow is from the custom tests section of the dune docs. It works by:

  • Hooking on to the runtest alias so that your dune rules run with dune runtest
  • Producing some kind of output from running a single executable (test.ml)
  • Storing that output somewhere and, on subsequent runs, diffing it with the previous version to make sure nothing has changed – or, if it has and you expected it to, promoting the changes to a file.

So first you can define the test execution script (test.ml):

(* Part of your library you are testing *)
module A = struct
  let add a b = a + b
end

(* Some other part of your library *)
module B = struct
  let mult a b = a * b
end

(* Wrap them up in printing functions *)
let a_test () = print_endline (string_of_int (A.add 10 5))

let b_test () = print_endline (string_of_int (B.mult 10 5))

let tests = [ a_test; b_test ]

(* Iterate over the tests and print *)
let () = List.iter (fun f -> f ()) tests

This is the test execution script. We compile it with the following dune stanza:

(test
 (name test))

print_endline prints to standard out, so the next thing we want to do is tell dune to capture that output into a file which we can then diff against. We do that with a rule:

(rule
 (with-stdout-to
  tests.output
  (run ./test.exe)))

This runs the test execution script and pipes standard out straight into a file called tests.output (which only ever exists in the build directory, never in the source tree). The last piece is to do the diff.

(rule
 (alias runtest)
 (action
  (diff tests.expected tests.output)))

You will have to create a blank tests.expected before running this. Now when you run dune runtest, your test execution script is compiled and run with standard out going to tests.output, and that file is diffed against your tests.expected. You should get:

+15
+50

Which you can dune promote into the file. (Note that the three dune stanzas described above all live in a single dune file.)
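
For reference, here are the three stanzas together in that single dune file:

(test
 (name test))

(rule
 (with-stdout-to
  tests.output
  (run ./test.exe)))

(rule
 (alias runtest)
 (action
  (diff tests.expected tests.output)))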

A More Structured Approach

Feel free to ignore this if it doesn’t apply; it’s hard to tell from the context exactly what you are testing. However, depending on what you want to test, there are some frameworks that can make this more structured.

CRAM

If you are testing the output of standalone executables (like a CLI), the cram test functionality in dune is very nice. The structure you put around the tests is much more explicit than the ad-hoc printing I showed above. Dune-release, a tool for easing opam releases of libraries, has a nice cram test-suite which may be of interest.
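
For a flavour of the format: a cram test is a .t file in which commands are prefixed with $ and followed by their expected output, and dune diffs/promotes it much like above (depending on your dune version you may need (cram enable) in dune-project). Here echo is just a stand-in for your own executable:

  $ echo "hello"
  hello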

Alcotest

Alcotest pulls together the “diffing” and the testing into “a lightweight and colourful test framework”. You are still writing expectation tests, but (in its simplest form) you provide a printing function and an equality function for your types, and you can combine them to describe more complex types. (Although if your printed output is very large, it can be hard to write that in an .ml file.) The reason I like it is that it makes it easier to know what failed: it uses a notion of equality instead of printing to decide what’s right and what’s wrong, but uses the printing and diffing to report the failing tests in an obvious, more human-readable way.
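
As a minimal sketch of what that looks like (reusing the A module from the test.ml above, and assuming alcotest is added to your (libraries ...) field):

let test_add () =
  Alcotest.(check int) "add 10 5 is 15" 15 (A.add 10 5)

(* Run the suite – the string names show up in the test report *)
let () =
  Alcotest.run "my-library"
    [ ("arithmetic", [ Alcotest.test_case "add" `Quick test_add ]) ]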

Hopefully some of this helps, good luck :))


Thank you so much for the very detailed answer. One more thing I would like to know: what is a possible way to show the stack trace when I run dune runtest?

If you want it really primitive, you can just assert in your test/parse_test.ml:

let test_parse0 () =
  let fn = "dumps/2020-12-02T110016.post" in
  (* Sanity-check the fixture: assert it has the expected size *)
  let st = Unix.stat fn in
  Assert2.equals_int "uhu" 235484 st.st_size;
  let ic = open_in_gen [ Open_rdonly; Open_binary ] 0o222 fn in
  match Lib.Rfc2388.process ic "/tmp/" with
  (* | Ok { Part.name = n; filename = None; mime = None } ->
     Assert2.equals_string "uhu" "title" n
  *)
  | _ -> assert true

let () =
  (* dune runs the test from inside _build, so step back into the
     source tree where the fixture files live *)
  Unix.chdir "../../../test/";
  test_parse0 ()

test/dune in the project is just:

(tests
 (names parse_test)
 (libraries Lib))


So there’s nothing that can get you into dependency hell when re-running it ten years from now.

Caveat: those asserts are not friendly for debugging or diving into failures. That’s maybe where $ dune utop excels. But for asserting stable results, they have unmatched simplicity.

Sorry, I don’t quite understand what you’re after. If it is just for debugging, and your code is raising an exception whose backtrace you wish to see, then:

OCAMLRUNPARAM=b dune runtest

should provide that functionality. Is this what you were after? If not, if you could provide a little more context that would be great.
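
As an aside, if you’d rather not depend on the environment variable, you can (assuming it suits your harness) enable backtrace recording from the code itself:

(* At the top of test.ml – has the same effect as OCAMLRUNPARAM=b *)
let () = Printexc.record_backtrace true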

The approach outlined by @patricoferris here is very nice, but I’d like to add that it can be simplified a bit more. When you use a test stanza like (test (name test)) and there’s a test.expected file in the source directory, the output and diff rules are set up by dune automatically.

So you just have to:

  • write a test.ml file that prints to stdout
  • just write (test (name test)) in a dune file (you’ll have to add a (libraries ...) field to link against the library under test)
  • touch test.expected

And that’s it! The first dune runtest will suggest a promotion for test.expected with the output from the initial run.
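
To make that concrete, here is a minimal sketch (mylib stands in for your own library’s name):

(test
 (name test)
 (libraries mylib))

and then, in the shell:

$ touch test.expected
$ dune runtest   ; shows the diff against the (empty) expected file
$ dune promote   ; accepts the output into test.expected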


Hi, thank you for your suggestions. I have tried the same thing and it works well. For the libraries part: if, for example, I use open ... in my test scripts, do I then need to add the library to (libraries ...)? I am new to this, so I do not quite understand when I need to add it.

Hi, thank you, @patricoferris. Yes, it is true: when I run it I could also encounter some bugs, and I would probably need to know where the bug was raised.

Yes, exactly! dune will create an executable for your test harness, so if you refer to external modules you’ll need to link against the corresponding library.
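
For example, a sketch with a hypothetical library mylib exposing a module Mylib (assuming Mylib defines add):

(* test.ml *)
open Mylib (* only resolves if the dune file links the library *)

let () = print_endline (string_of_int (add 10 5))

with the matching dune file:

(test
 (name test)
 (libraries mylib))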