[ANN] ortac-0.8 specification-driven testing with Domains

Hi! We at Tarides are very excited to announce the release of ortac 0.8.0 for specification-driven testing!

ortac is a tool in the Gospel ecosystem. The core idea is to translate a subset of the Gospel specification language into OCaml code and to use these translations to generate runtime checks.
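To give an idea of what such a specification looks like, here is a minimal sketch of a Gospel-annotated interface (the `push` function, its model, and its contract are hypothetical, not taken from a real library):

```ocaml
(* queue.mli -- hypothetical interface carrying a Gospel specification *)
type 'a t
(*@ mutable model contents : 'a list *)

val push : 'a -> 'a t -> unit
(*@ push v q
    modifies q.contents
    ensures q.contents = v :: old q.contents *)
```

From a specification of this shape, the wrapper plugin can generate code that checks the `ensures` clause at runtime, and the qcheck-stm plugin can use the model to derive QCheck-STM tests.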

ortac has a plugin architecture. You can install it via opam:

$ opam install ortac-wrapper    # install the wrapper plugin for runtime assertion checking, with all the other necessary parts (runtime and cli)
$ opam install ortac-qcheck-stm # install the qcheck-stm plugin that generates QCheck-STM tests, with all the other necessary parts (runtime and cli)
$ opam install ortac-dune       # install the dune plugin to generate dune boilerplate to use the two previous plugins

This release focuses on making Ortac/QCheck-STM take advantage of more features of the QCheck-STM test framework: namely, testing in a parallel context and flexible command generation.

You can take a look at the documentation that explains how it works.

Regarding testing in a parallel context: since the introduction of the bug-report feature in version 0.2 and the coverage of SUT-returning functions in version 0.4, it was no longer possible to easily generate QCheck-STM tests in a parallel context (which is quite easy in hand-written QCheck-STM tests). This is now fixed!

Information for the bug report is partly collected in the function doing the actual testing (comparing the results of the actual run with the results from the model). This means that implementing the bug-report feature for parallel testing required rewriting this function, which is the heart of QCheck-STM + Domains and what we trust when running the tests (the Trusted Testing Base, if you will). In order to check that we keep the same behaviour, i.e. that the generated tests have the same semantics as the corresponding hand-written QCheck-STM tests, we minimized the diff of the commit introducing the collection of this information, so that the preservation of the logic is apparent (at the end of the day, human review is what we trust).

Since version 0.4, SUT-returning functions are included in the tests: the newly created SUT is added to the store of SUTs that are picked as arguments for subsequent calls. The question here is what to do with them in a parallel context: we can't add a SUT created in a parallel branch to a global store, as it is not supposed to be shared between domains. We've chosen a simple design: we stop storing newly created SUTs once we are in a parallel context. No worries, these SUT-returning functions are still fully tested in the sequential part of the testing (in sequential mode and/or in the sequential prefix of the parallel mode).

One of the strengths of the QCheck-STM test framework is the flexibility of its command generator. This flexibility comes from the QCheck.Gen API itself and from the fact that the QCheck-STM command generator is parameterized over the current state of the model (think of generating a lookup command for a key-value store: you want a chance to generate a call that looks up a key that is actually bound to a value). An automatically generated command generator is bound to be a bit naive. In order to mitigate this naiveté, we allow the user to provide:

  • weights to be applied to the generation of specific commands, useful for example to disable the generation of the push command on a work-stealing queue on a domain that is not supposed to own the queue.
  • a complete command-generator implementation, when the user wants to take advantage of the command generator being parameterized over the current state of the model.
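To illustrate what this parameterization buys you, here is a sketch of a state-dependent command generator in hand-written QCheck-STM style (the `cmd` type, `show_cmd`, and the association-list model are hypothetical, not Ortac's generated code):

```ocaml
type cmd = Add of int * int | Lookup of int

let show_cmd = function
  | Add (k, v) -> Printf.sprintf "Add (%d, %d)" k v
  | Lookup k -> Printf.sprintf "Lookup %d" k

(* The model state: an association list from keys to values. *)
type state = (int * int) list

(* A generator parameterized over the model state: when the model
   already contains bindings, bias key generation towards existing
   keys so that Lookup has a real chance of hitting a bound key. *)
let arb_cmd (state : state) =
  let open QCheck in
  let key =
    match List.map fst state with
    | [] -> Gen.small_nat
    | keys -> Gen.(frequency [ (3, oneofl keys); (1, small_nat) ])
  in
  make ~print:show_cmd
    Gen.(frequency
           [ (1, map2 (fun k v -> Add (k, v)) key small_nat);
             (2, map (fun k -> Lookup k) key) ])
```

This is the kind of state-aware bias that a user-supplied generator can encode and that a fully automatic generator cannot easily guess.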

Happy testing!