Defining standard OCaml development lifecycle processes

This is an unfinished post, where I’m thinking about the friction between my creativity and OCaml tools, particularly as it compares to UX in other language ecosystems.
If you read on, know that it’s incomplete :).

Context for this post (warning: long, and unintentionally kind of rant-ish). I started another OCaml project yesterday. It had been a while since I had used OCaml. I tripped over every possible little thing and grew quite frustrated. My IDE not showing the right stuff, different error messages between shells, opam init scripts not loading my local switch, my local switch missing a dep I thought was already present, dune unable to locate a lib because of a rogue project file... just to name a few. There were many, many more. None of the individual problems are deal breakers, but I continue to experience OCaml tooling problems, and it's death by a thousand cuts. This seems to happen each time I pick OCaml back up. This time, I thought "this isn't worth it, do your project in a different language".

But OCaml is my favorite language to think with. So I’m persisting. Here’s a claim: OCaml tooling is far less productive than it could be.

I have been working in OCaml as a hobbyist for a short while. I often go weeks with no OCaml activity, followed by a week or so of moderate activity: hot & cold cycles, with long cold periods. I am still a user with less than one year of experience, so my opinions should be weighed as such. However, I do have over a decade of professional software development experience in a diverse set of languages, as well as a couple of engineering degrees, which perhaps should counterbalance my OCaml newb-ness.

Each time I return to OCaml, I struggle to define a grokkable, repeatable, productive workflow for managing my work. In studying public user-land projects, I don’t observe many universal processes either. Some common guidance is “build with dune” or “manage your env with opam switches”. That tooling guidance is settled. However, common behaviors do not seem to have well-known recipes. Recipes like

  • declaring and reproducing a development environment (switch, deps, etc)
  • segmenting production & dev-time only dependencies
  • project structuring
  • etc etc

do not appear to have established community expectations. This is possibly because the tooling doesn’t enforce such behaviors or perhaps merely because the publishing ecosystem is (relatively) small.

I feel embarrassed at sinking hours into tweaking, tuning, and searching over configuration docs/manpages just to do things that I perceive should be plainly obvious and resistant to user error. It oft frustrates me, and evokes a distaste towards OCaml, even when such emotion is unreasonable: I love OCaml! The people in the community and the excitement for effects as a first-class language feature keep me coming back.

The ideal artifact produced from this exercise would be a document (or video, perhaps) capturing idiomatic project management across the development lifecycle, starting with a simple project and evolving to a complicated, multi-lib multi-bin project. The goal is to get a developer from 0 => productive with minimal noise. Assume all parties have decent knowledge of the language itself, and instead focus only on the tools and processes executed before and after coding. Questions and answers refining the following hypotheticals are desired. How do we scaffold projects? What is their structure? When do we deviate? How can we configure as few things as possible, and have the rest just work? How can compilers, dependencies, and IDE systems snap into ready mode with minimal friction? How do we keep things that need to be synchronized in sync automatically? How can we minimize the number of steps between ideation and experimentation?

OCaml + opam + dune have lots of documentation, but there are holes between the required and implicit integrations. drom is an effort to unify these entities, which is noble. make is still used regularly in the community. There are hundreds of flags, knobs, tags, <whatever> in all of the tools. Project management is not simple in OCaml. I’ve been here long enough, though, to know that this all can be simpler, and we can grow adoption and expertise in our community iff we can ease the onboarding process.

So, with that said…

What common project management processes do OCaml developers execute, and which are normalized in the community?

Help me by brainstorming “processes I do as an OCaml library or executable author”.
Together, let us:

  • declare those processes
  • discuss if our community has best-known-methods (BKM) to fulfill those processes
  • compare status-quo user-experience (UX) to a (debatable) target UX
    • debate what we like about each others’ flows.

Here is a draft set of high frequency workflows observed in a repository’s development lifecycle:

| Process | OCaml BKM exists? |
| --- | --- |
| clone, install project deps | :x: |
| add, update, delete, & run tests | :checkered_flag: |
| run default executable | :checkered_flag: |
| update a dependency, track it | :x: |
| update a devDependency, track it | :x: |
| swap compiler versions, track it | :x: |
| create projects with multi-file library | :white_check_mark: |
| create projects with multiple libraries | :white_check_mark: |
| create projects with executables | :white_check_mark: |
| publish | :checkered_flag: |
| setup IDE | :white_check_mark: |

Key: :x: does not exist, :checkered_flag: partially exists, :white_check_mark: clearly exists

  • The following are only my perceptions & opinions.
  • I will compare ocaml/opam/dune to node/npm as a UX BKM reference, as the node community has excellent UX when it comes to minimizing friction between ideation and execution. I could use rust, python, or maybe java too. rust/cargo perhaps offers a more equitable comparison, since ocaml & rust both target native compilation and thus share additional concerns & constraints. However, UX is the metric I seek to improve together, so I will stick with node.

Process - clone, install


BKM not established :x:

  • git clone
  • maybe setup a switch?
  • then ??? options:
    • make dependencies?
    • opam switch import ./my-deps --switch .?
    • opam install all-of-my-packages-things?


  • [opam|dune] install mutates a switch env; it does not update any package-local lockfile
  • [opam|dune] do not have a unified approach to declaring dependency types
  • Development-time dependency and env prep is often first or second class in other languages that require explicit pkg declaration (as opposed to go or deno, which just use URLs to implicitly declare dependencies). In OCaml, we have a decent spec for production time env & deps realized via <name>.opam files, but we do not have a clear recipe for development.
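For reference, the production-time spec mentioned above is typically a small file along these lines (a sketch; the package name, bounds, and fields are all hypothetical):

```
# my-project.opam — a minimal, hypothetical example
opam-version: "2.0"
synopsis: "A hypothetical project"
maintainer: "you@example.com"
depends: [
  "ocaml" {>= "4.12"}
  "dune" {>= "2.8"}
]
build: [["dune" "build" "-p" name "-j" jobs]]
```

Note what is absent: beyond opam’s with-test/with-doc filters, nothing here distinguishes development-only tooling (formatters, test runners) from runtime deps.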


BKM established :white_check_mark:

  • git clone
  • nvm use (optional; akin to selecting a switch)
  • npm install
  • npm build (optional)

Most libraries or executables are now ready for all development activity. It’s not 100%, but the expectation is there in the community.

ux comparison

Tracking and reproducing envs & dependencies as a first class process in the ocaml tooling space is plain missing. Adding dependencies has a clear solution, but restoration does not have an idiomatic process in ocaml. esy tries to offer this to our community, but I suspect there is very little alignment to it.

Process - add, update, delete, & run tests


BKM partially established :checkered_flag:

  • tests are placed in an independent dune project that you configure to compile and execute your tests, or, tests are co-located with lib/bin source and authored inline
  • executed via dune test
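As a concrete sketch of the standalone flavor, the independent test directory usually carries a dune file like this (names are hypothetical; alcotest is just one common choice):

```
; test/dune — a hypothetical standalone test setup
(test
 (name test_main)
 (libraries alcotest))
```

dune test then builds and runs test_main.ml as the test executable.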


  • it is very common to co-locate your tests and source. you cannot do this easily with ocaml/dune without specifying (modules ...), which is tedious and unproductive. every file move needs to be accompanied by dune file updates.
  • dune treats common test writing patterns as “custom”, and focuses heavily on inline/expect and cram testing
    • this is not a problem per se. however, with dune being the de facto tool of choice, the reader is immediately pulled off the rails and implicitly encouraged, by means of documentation volume and hierarchy, to investigate expect & cram style tests, which (subjectivity warning, opinion warning) are more eccentric than your traditional integration/unit test.

Tools are generally in place for testing, but the tools and their associated docs do not guide users to a familiar recipe.
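For contrast, the “traditional” unit test style that the docs de-emphasize can be as simple as a stdlib-only script. Everything below (the add function, the test names) is a hypothetical sketch, not a real project:

```ocaml
(* test_foo.ml — a plain, framework-free unit test sketch *)
let add a b = a + b

(* print PASS/FAIL per case; exit non-zero on failure so CI notices *)
let check name cond =
  if cond then Printf.printf "PASS %s\n" name
  else (Printf.printf "FAIL %s\n" name; exit 1)

let () =
  check "add_small" (add 1 2 = 3);
  check "add_identity" (add 0 7 = 7)
```

Wiring this up today means either a (test ...) stanza or manual (modules ...) bookkeeping, which is exactly the friction described above.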


BKM established :white_check_mark:

  • tests land in <root>/tests or src/**__tests__
  • npm test

Sometimes, pre-build steps are needed. pretest hooks tracked in package.json often fulfill this need.

ux comparison

The OCaml recipe makes sense, but forcibly externalizing tests to their own dune project is inconvenient and feels foreign. Some dune sugar to declare tests/** modules as test-only, and include/exclude such contents from the release/test-only binaries would feel more natural, in the context of a single dune project.

Process - run


BKM partially established :checkered_flag:

  • dune exec <path/to/thing>.exe is generally a well established way to run project binaries
    • however, this approach lacks configuration and possible mandatory preprocessing
  • perhaps through some combination of rule and alias a default execution mode could be established
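One partial workaround available today is a hand-rolled alias. This is only a sketch, and main.exe is a hypothetical executable in the same directory:

```
; dune — wire a "run" alias to the project's default executable
(rule
 (alias run)
 (action (run ./main.exe)))
```

With this, dune build @run builds and executes the binary, though it still cannot accept arguments the way npm start -- <args> can.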

Some builtin aliases exist, but omit a default for “run the software”.


BKM established :white_check_mark:

  • npm start/npm dev are de facto entrypoints for executing the developed artifact, or, tooling to exercise the developed artifact(s).

ux comparison

This functionality is not critical in the big picture, but I include it for thoroughness. Ultimately we design software to run applications. Having a first class mechanism to run applications hosted in a project seems like an easy UX win. npm start, cargo run, etc exist for this purpose. dune exec path/to/thing.exe works, but has shortcomings:

  • an implicit (subjectively obscure, as the file never exists) mapping from .ml to .exe/.bc
  • a required knowledge of the executable path, even though dune has already parsed all of the projects and knows about all executable things anyway
  • a required knowledge and execution of any pre-requisites

npm package developers often configure all setup work (prestart scripts, ENV setting, etc) in their package.json, often minimizing the amount of work between git clone and npm start to get the user in motion as fast as possible. It’s akin to many projects just needing to git pull and make && make run.

This topic is definitely a nit, but dune run or dune start to execute a configured default executable, if present, would be cute.

Process - update a dependency, track it


BKM not established :x:

  • opam install my-dep@new-version is the current offering
  • The opam integration sort-of has ideas around this. However, as previously discussed, there’s no lockfile for our projects at development time. This could be opam's responsibility, but opam (perhaps intentionally) does not position itself as a project manager. It is purely an environment manager. This leaves a void that neither opam nor dune fill. drom and esy are both attempting to solve this. The community wants it–we should bake it into our default tooling as well, vs leaving it up to user space. User-space is great, but this creates fragmentation and productivity loss on an absolutely fundamental step in developing OCaml.
  • My hack is to do a manual opam switch export ./switch to dump my dependencies for reproducibility, and import them back in as needed.


BKM established :white_check_mark:

  • npm install --save my-dep@new-version, lockfile updated

For what it’s worth, npm's package-lock.json stinks. Other tools in the node ecosystem, like yarn & pnpm, execute on this much better than npm does, are generally drop-in replacements for npm, and behave similarly across all workflows.

ux comparison

node/npm changes explicitly update artifacts to capture your changes: package.json and your lockfile both update to improve downstream reproducibility by peers or CI systems. opam makes changes in your switch, but those changes are hidden from your project by default. You need to design your own workflow to track your switch changes, which is less desirable.

esy.lock does this now, unsure about drom. Our default workflows should track our changes.

Process - update a devDependency, track it


BKM not established :x:

  • See #update a dependency above
  • See #clone, install above


BKM established :white_check_mark:

  • package.json::dependencies get installed and may be loaded by the runtime at runtime
  • package.json::devDependencies are installed at development time, and never installed in production (even if the package requires a native build phase)

ux comparison

Runtime, build-time, & dev-time dependencies are important for supporting your software across its lifecycle. OCaml has some support here; it’s just opaque, with confusing UX. Further, it may span multiple tools and files. A simpler workflow for categorizing and bootstrapping dependency types is strongly desired. npm/cargo/esy/friends have a much clearer, more consolidated story on the topic. There are reasons npm/cargo/other-friends combine dependency management, compilation, and development execution: these topics are tightly coupled. OCaml's segmentation here fragments us into our own scripts, or toolkits that fulfill the same goal but with radically different ergonomics (compare esy and drom).

Process - swap compiler versions, track it, check it in


BKM not established :x:

There’s probably a way to do this. I don’t know how. I create a new switch and manually bring everything in.


BKM established :white_check_mark:

  • nvm use <version>, or just put a different node/npm version on your PATH
  • npm rebuild, recompile any native dependencies

ux comparison

OCaml intentionally blends the compiler package with other packages in the switch, with little to no discrimination except for a few key places in the opam CLI.
node makes your runtime explicit, as does rustup (err, kinda, rust). The OCaml approach may actually be superior, but I’m unclear on how to do it.

Process - create project with multi-file library


BKM established :white_check_mark:

  • freely add files
  • any files/modules that need to be exposed must be re-exported via entry module
  • can add N (library ...) entries, but this creates tedious and confusing scenarios where (modules ...) must be managed manually
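To make the (modules ...) pain concrete, here is a sketch of two libraries forced into one directory (all names hypothetical):

```
; dune — two libraries in one folder require manually partitioning modules
(library
 (name foo)
 (modules foo foo_util))

(library
 (name bar)
 (modules bar))
```

Every new file must then be appended to exactly one of those lists by hand.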


BKM established :white_check_mark:

  • freely add files
  • import/export from anywhere in any file. it’s the wild west.

ux comparison

Comparable, given their distinct domains. Node is clearly easier, but has less constraint.

Process - create project with multiple libraries


BKM established :white_check_mark:

  • discrete libraries get discrete folders get discrete dune configurations
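A sketch of that layout, with hypothetical names:

```
my-project/
├── dune-project
├── lib_a/
│   ├── dune        ; (library (name lib_a))
│   └── a.ml
└── lib_b/
    ├── dune        ; (library (name lib_b) (libraries lib_a))
    └── b.ml
```

dune links lib_a into lib_b automatically, with no extra monorepo tooling.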

I personally have not tried to multi-publish using such a scheme.


BKM partially established :checkered_flag:

  • lerna / nx and other monorepo tools are required for a multi-library scenario that also effectively links libraries together for concurrent development
  • import/export from anywhere in any library. it’s the wild west.

ux comparison

Comparable, given their distinct domains. OCaml is actually easier for multi-lib work with out-of-the box tooling. Node needs user-space tooling to support linked, concurrent lib/app development, most of the time.

Process - create project with executable


BKM established :white_check_mark:

  • a discrete executable gets a discrete folder with a discrete dune configuration

As with other dune things, you can mix libs & executables, but this adds easily avoidable complexity to your dune file. Dune’s recommendation is a small paragraph on their docs site. I kind of wish they just forbade co-location unless you explicitly opted out, recommending a separate project first.


BKM established :white_check_mark:

  • add a bin: <path> or bin: { [executable-key-name(s)]: path-to-executable } entry
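For example, a hypothetical package.json exposing one executable:

```
{
  "name": "my-tool",
  "version": "1.0.0",
  "bin": { "my-tool": "./bin/cli.js" }
}
```

On install, npm symlinks my-tool onto the PATH; no separate project or build config is needed.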

ux comparison

node/npm make adding executables to your project extremely easy and simple. OCaml’s module system and dune add a little bit of overhead, and do not immediately promote isolation of libs & bins. Coaching our users to separate their executables into independent dune projects from the start will make getting started easier.

Process - publish


BKM partially established :checkered_flag:

  • <package>.opam is established. However, its generation &/or synchronization has no ubiquitous mechanism associated with it. Do you use dune to generate it, make, manual editing, or drom? Do you use the custom .opam DSL, s-expressions, or JSON? Will your declaration be verified/linted by your tools naturally, just by using them?
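One of those options, letting dune generate the .opam file from dune-project, looks roughly like this (a sketch; names and version bounds are hypothetical):

```
; dune-project
(lang dune 2.8)
(generate_opam_files true)

(package
 (name my-project)
 (synopsis "A hypothetical package")
 (depends
  (ocaml (>= 4.12))
  (alcotest :with-test)))
```

This keeps my-project.opam synchronized on each build, but it is one of several competing conventions rather than the community default.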


BKM established :white_check_mark:

  • package.json drives all inputs for publishing, not including registry location, credentials, etc.

In rare cases, package.json is patched or generated, such as if using semantic-release.

ux comparison

npm does this well. npm's consolidated, one-stop shop for defining your project makes the cognitive burden of this concern low, and it adds other benefits. Consider that all of the node tools center around the package.json. The package.json, and its associated lockfile, are the heart of the project. The tools consume and act on it. In one place you can simulate a publish with npm publish --dry-run, which often runs a prebuild step. One tool, one build flow, one publish flow.
OCaml, on the other hand, has various project management files and tools that optimistically work together. This means more tools, more code, more configuration files, and thus a less unified process.
It is akin to the python community, with requirements.txt, Pipfile.lock, and <whatever poetry uses now>. We have scattered tools to fill the voids of a missing spec for dependency management. Because tools want to own different aspects of project management, the python community struggled (struggles?) for years to have a clear, unanimous story on dependency synchronization. There was a different dependency solution depending on your project’s usage intent, and if you had multiple intents (app + publishable lib), you could end up with three tools just to access and use the same general piece of software (e.g. pip, setuptools, venv, +maybe-more). It was terrible. Let’s not do what they did. Let’s do what node did. Let’s do what esy and drom are trying to do.

Process - setup IDE


BKM established :white_check_mark:

  • The VSCode OCaml Platform extension is
    • simple to install
    • simple to select a sandbox
  • The VSCode debugger via earlybird is funded, and has made a positive first impression.

Neither extension yet recovers particularly well from errors, or proposes corrective action when things go wrong.
They’re new. I’m grateful for the clear direction on these fronts from the OCaml Platform!


BKM established :white_check_mark:

VSCode internally ships salsa, which gives autocomplete for various JS-isms. Of
course JS is not statically typed, so support is limited. TypeScript support is baked in,
and the whole IDE is essentially oriented towards node & web development, in all aspects.

ux comparison

OCaml requires two installations (debugger + platform) to do “all of the OCaml” things,
which can be challenging for discovery. However, both are new, and are being signaled
as the official supported tools for the job (err, may not be true for debugger ;)).

IDE setup is satisfactory in OCaml. Merlin working out-of-the-box is a delight!


Hi, did you try ? It sounds like it solves a lot of issues that you’re having … wondering if you tried and didn’t like it?

EDIT: ah, you did mention Esy, OK. So, are you suggesting that opam should behave like Esy? Based on earlier conversations in opam’s GitHub issues, and the emergence of drom, I don’t think that’s likely to happen…

I’ve thought about this as well, and I wonder if reusing the style of cram tests for normal test would help things. Basically, one would have a naming convention for tests. E.g.

# everything in are considered as sources for a single test binary with
# or we can just stick to a single source file

To manage library dependencies, runtime dependencies, etc. we have a tests stanza that selects a subset of tests and applies desired properties:

 (tests
  (applies_to :whole_subtree) ;; apply to all tests in dir
  (libraries alcotest))

 (tests
  (applies_to ...) ;; apply to a single test
  (preprocessing (pps ppx_show)))

This would make it possible to add new tests without modifying dune files.

I think this is because dune has very little to say about traditional tests. You just write a tests stanza and move on. There’s not much else to describe.

I’m not sure what you mean by “mandatory preprocessing”, but I agree that there’s a void here. $ dune exec is useful, but one cannot save common invocations of it to a dune file. Yes, there’s aliases, but we cannot pass arguments to them. I agree that having something like npm run would be useful to add.

I always imagined something like this:

 (name foo)
 (action (./path-to-thing.exe %{args}))

Invoked by:

$ dune command foo -- bar baz

This is probably of no help, but I don’t use opam switches, dune or IDEs, so my process is mostly git init and copy-pasting some lines to a mkfile.

@cdaringe this is extremely nice work, thank you! I’ve contacted you directly about moving it into a more collaborative document to help refine it together across the various platform maintenance teams.

I just wanted to quickly clarify this:

The development process for opam is driven by a simple principle that is inspired by OCaml itself: don’t needlessly break backwards compatibility without good reason, and when it is necessary, justify it. Our tools are embedded in projects that have lifespans measured in the decades, and we take compatibility seriously. That’s why we take pains to provide migration paths from (e.g.) opam 1.2 to 2.0, from 2.0 to 2.1 that are as invisible as possible, and metadata is versioned in both opam and dune.

This is entirely compatible with the need for graceful evolution, since there is a clean separation of metadata (opam files) from the clients that operate over it (the opam executable). I’ve made it clear over the years that we welcome new experiments in the realm of clients that operate over metadata. Where I push back is when metadata is duplicated, as it’s extremely hard to reconcile metadata evolution in thousands of downstream projects. That’s why (for example) the new ocaml-ci is “zero configuration”, as it can operate sensibly purely by analysing the opam and dune metadata for a project. To enter the OCaml/opam ecosystem, your project therefore must at least have an opam metadata file, since that is what defines what it means to be in the opam-repository (the relationship to other packages, and instructions on where to get yours).

For clients, new workflows such as esy, drom and the VSCode plugin are the perfect experimental vehicle to inform future opam client development. For example, one lesson from VSCode is that a prime source of confusion for users is that “opam install” doesn’t edit the dune and opam metadata! Addressing this requires rethinking the entire model of the client: @dra27 and I’ve been sketching out an alternative opam client design that is more declarative than the current imperative one. Instead of having any global state, an invocation of nopam would simply get the local environment to the state defined in the local .opam file. In this scenario, to install a package a user just has to edit the .opam file to add the dependency, and then an invocation of nopam would result in it being available.

Many of the discussions on GitHub issues on the opam development trackers reflect our immediate priorities – to maintain a stable and evolutionary release process for the opam 2.x series with the minimum of drama for our users upgrading. Please do not confuse this for a lack of desire to innovate – we absolutely welcome such inputs, and will factor that into the opam 3.x development plans. The only thing we’re very careful of is needless backwards compatibility breakage, which is why changes to the opam metadata format are so carefully scrutinised.

I’m overall delighted with the level of innovation happening around opam these days; lots of plugins emerging, new solvers, more analysis and so forth. Keep it coming! :slight_smile:


Apologies for the tangent, but:

I can’t help but think that the current model of modifying the data and running some code on it is more “functional” in nature than a model where one runs some code and has it modify the backing data itself. :slight_smile:

Hang on, aren’t we in agreement? The current model is to run some code imperatively (opam install) and then modify the data to keep up (edit the opam file). A more functional approach would be to modify the data (‘edit the opam file’) and then run code to adjust the environment state (‘nopam’).

There’s ample room for both approaches though. Which one is optimal really depends on what the user of the tool is trying to achieve, which is different if you’re a distro maintainer, a library author, or a CI system or an end user.

Edit: To clarify my original message, what users ask for is that opam install modifies the opam metadata, but that would make the overall tool more complex. As maintainers, we’re trying to step back and solve their actual problem with a cohesive client design that’s more declarative. It’s tricky to modify an existing established CLI workflow without destroying existing good properties and usecases. Hence the motivation for new clients with new execution models that solve the user workflow problems.


Maybe, as usual, I’m just odd. I don’t run opam install and then edit foo.opam, I edit foo.opam and then run opam install ./foo.opam --deps-only. I didn’t realize that workflow is abnormal. :slight_smile:

I guess that a surface workflow that would work smoothly could use a command that edits the metadata to add a dependency (rather than using an editor and possibly getting the syntax of foo.opam wrong), and then runs opam install to sync up the state of the intended switch.


Hi, is drom meant to be an experimental tool, to be supplemented by nopam? And is nopam intended to become opam 3? Trying to understand the evolution.

I think this is what dryunit was supposed to provide.
I don’t think it gained momentum, and it’s still stuck in the jbuilder era. A shame really, because it’s a nice idea: a single dune configuration that can be generated based on conventions and enables all * to be picked up.


drom is a tool by @lefessan. nopam is a codename I just made up to illustrate the difference between current opam and a hypothetical new client.

None of these are opam 3. When we flush through our opam 2.x stack (notably Windows support and other feature specs), we’ll publish a roadmap for opam 3. My point was that we encourage experimentation outside the critical path of opam releases, and the opam dev team will gather and internalise all the data we have available when it comes to setting the direction for opam 3 and onwards. If you do experiment, and you do post here, your efforts will not be forgotten.

I take no position on your normality, @jjb :wink: The only problem with the “edit opam first” workflow is simply not having feedback on whether or not the solution of packages and dependencies actually works for you. For instance, if I edit the opam file to depend on a package that conflicts with a current one (or introduces a dependency cone I don’t like, or something else), then the solver needs to run to show that to me somehow. That works today since opam install shows you that action graph, but something else needs to be receiving these requests with an alternative client.

Many of our platform tools are adding RPC support at present due to this need for more interactive feedback with modern IDEs – dune, ocamlformat and merlin all have that now, and you can already observe the benefits with merlin directly talking to dune for example. It may make sense for ocaml-lsp-server to become the unified process behind which all the other tools sit, and for a CLI tool to also communicate with a daemonised process tied to a project (just as VSCode does today).


Just adding my 2 cents, I don’t have easy solutions or anything, but:

I also have the impression that current tooling is a very difficult pain point for newcomers to overcome. If I were to try another language today and had to learn about dune and opam (and their relatively intricate syntax and features, although dune does much better imho) and the decorrelation between modules names, file names, directory structures, etc. I’d probably ragequit quite quickly. On the other hand, rust, arguably a more difficult language to learn, has a very easy onboarding: cargo build (or cargo build --release) will take you 95% there, by fetching dependencies automatically, creating a (precise) lockfile, and building your project with minimal configuration centralized in 1 (one) file, Cargo.toml.
That’s with a workflow where you typically edit Cargo.toml by hand (adding one line per direct dependency), and run tools afterwards.

They’re discussing merging cargo-add into cargo (to not even have to edit manually) but clearly people are managing without that.

So I think cargo’s workflow is friendlier to newcomers and beginner/intermediate level rust users. In particular, it’s centered around lockfiles, per-project dependencies, and tools have good defaults. In opam a lot of this is doable (although the lockfiles are doomed from the start in the presence of a non immutable repository, imho), but the workflow for per-project dependencies is not easy nor intuitive (like, opam sw create . <the compiler version>? I have to look it up almost every time), and you need to fiddle with environment variables. Dune is better behaved but it’s still a different tool to learn.

It’s a bit ironic that I say rust is more friendly when merlin is better than rust-analyzer, and more stable; but the truth is, to get to the point where you write code, with merlin/ocamllsp enabled, and can build and run the code… a lot of beginners probably have quit already.

My dream here would be that dune would absorb the constraint solving capabilities of opam, and that dune-project would become the single file I have to edit to deal with dependencies.


I think there is a misconception that the opam repository is mutable. For the past few years, with increasing rigidity, the opam-repo maintainers reject patches that modify an existing version of a package (and instead bump an epoch, for example foo.1.0 becomes foo.1.0-1).

What we reserve the right to do is to modify the metadata of packages such that they can remain installable in the face of: installation issues (e.g. due to a new version of clang or macOS or whatever) and serious security issues (to make things uninstallable with the serious issue, but to provide a close version without the issue).

This actually makes lock files more robust, since there is enough versioning flexibility to give the solver a bit of wiggle room, but the broad sweep of changes that happen regularly that prevents software from compiling can be fixed. We may need some adjustments to how we generate lockfiles to really make this solid (e.g. use a >=1.0 & < 1.1~~ instead of =1.0) to allow for epochs, but that’s pretty much it.

This can already be the case if you want it, as dune can generate opam files. However, I think the root of your frustration is that (due to OCaml’s 24 year old history), we have multiple namespaces: compilation units, ocamlfind packages, and opam packages. Merging those is an effort in progress, but by its nature must be carefully and iteratively done with backwards compatibility in mind.

Meanwhile, @cdaringe’s approach of systematically listing BKMs and reflecting on alternative approaches in a structured way really resonates with me – it’s not enough to say “Rust does this” because… we’re not Rust. We have our own history and our own userbase, and we can’t just drop all the existing codebases and users we’ve made stability promises to. But putting our learnings from Rust (and Python, and Ruby, and Nix, and other ecosystems) side by side and cherry-picking the best bits for the future of OCaml – that will work!


Wasn’t this RFC (in the ocaml/RFCs repository on GitHub) meant to solve at least some of this issue? I wonder what happened to it? The thesis of the RFC seems quite sound to me.


Indeed. That RFC was updated, though. You can find an OCaml implementation of the RFC against 4.12 here; see the OCaml status in the RFC for details.


My goal was not to criticize anyone, only the state of things, which is an emergent property. I think a lot of choices made sense in the context where they were made. This is more about where to go next, I think.

What we reserve the right to do is modify the metadata of packages so that they remain installable in the face of installation issues (e.g. due to a new version of clang or macOS or whatever) and serious security issues (marking the affected version uninstallable while providing a close version without the issue).

Cargo has “yanked” packages for the security bit: the solver will never select these; the only way to use them is if they’re already in a lockfile. I know the repository isn’t too mutable, but I remember some changes to z3 last year that were painful for those whose workflows they broke.

We may need some adjustments to how we generate lockfiles to really make this solid (e.g. use >=1.0 & <1.1~~ instead of =1.0) to allow for epochs, but that’s pretty much it.

See, that’s not really a lockfile then :slightly_smiling_face: . I understand it’s still useful, but the advantage a cargo (or npm) workflow has here is that the lock is on a version plus its hash. That’s the most reproducible you can hope for, and it means you’re not at risk of solver failure or silent package updates (whatever the reason behind the update is).
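For comparison, a Cargo.lock entry pins both the version and a content hash; a sketch (package name and version are hypothetical, checksum elided):

```toml
[[package]]
name = "foo"
version = "1.0.0"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "<sha256 of the published crate archive>"
```

Because the checksum is over the published archive itself, even a republished 1.0.0 with different contents would be rejected at install time rather than silently picked up.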

This can already be the case if you want it as dune can generate opam files.

Yes! I already use that and it’s neat. The next logical step for a more integrated experience, imho, is that opam would become a library (for constraint solving) and dune would become the sole entry point for declaring dependencies and build targets, and also the one way to build a project – dune build @all could/should install dependencies in the project’s _build.

it’s not enough to say “Rust does this” because…we’re not Rust.

I know! But some changes that have already been made went against what old-time OCaml users would do; we’re not just stuck with the past. For example, dune forces a more rigid project structure on you (a good thing, imho), where previously one could have a library spread over many directories. Esy also showed that a nicer workflow (as in, closer to npm/cargo) is possible, although the hack of rewriting paths in binaries seems a bit distasteful.

My point is that we can’t just drop everything and use cargo-ml, of course. But tools could go in this direction and propose new solutions that are more cargo-like (like drom). After all, switching to dune was a big breakage for existing projects, yet tons of people migrated anyway in their own time, showing that providing new workflows can drive adoption.


Yeah @yawaramin, I really admire what the esy team is attempting to do. I don’t mean to advocate that opam should do X or Y, but I certainly mean to advocate that the default OCaml experience should have clear solutions for common development processes. esy has answers, and that’s rad. Whatever our default tools are, they should have unambiguous answers to fundamental, universal development problems as well.


Isn’t this an overly optimistic view of the JS ecosystem? Most projects use some combination of npm, nvm, and/or yarn. They might have a package.json file, might have a yarn.lock file, and if you’re lucky a file that tells you which node version you need and which magic spell is required to set you up.
What’s more, the dependencies in the package.json might be really liberal, so it never works on your laptop. yarn install probably doesn’t work, and yarn build gives compilation errors because your TypeScript setup is different from the author’s.
Also, they change their mind about the BKM every few years, so it all depends on how old the library/project is. I’m not saying you should abandon all hope, but JS is not the state of the art.
(and typescript has a lying type system :wink: )

@toolslive, definitely. All valid points. I’d still assert that the norms exist and are actively practiced, even if there is fragmentation. There are de facto processes, even if adoption is not universal.



Basically, one would have a naming convention for tests

Love it. I internalize your idea as

  • insert <some-standard-dune-test-expression>
  • (optional) tune your test libraries/ppxs as seen fit
  • write tests and never look back!
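As a sketch of step one, that standard dune test expression could be as small as this (test module name and test library are hypothetical):

```
; dune file: `dune test` discovers, builds, and runs test_foo.ml
(test
 (name test_foo)
 (libraries alcotest))
```

By convention, dune then expects a test_foo.ml next to this dune file, and the whole recipe reduces to writing assertions in it.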

I think this is because dune has very little to say about traditional tests

Definitely. It’s certainly not dune’s job to provide a formal recipe. Even so, it kind of does express an opinion about those other styles. Not complaining, just observing :slight_smile:

I’m not sure what you mean by “mandatory preprocessing”

Ya, thanks for calling that out. I kind of hand-waved that. I often do preprocessing for integration tests: things like starting a dummy/ephemeral database, creating a tempdir for isolated execution, setting SOME_ENV=test, etc.

I always imagined something like this: (command …)

While writing this segment, I was trying to replicate your exact example. I thought, “I bet I can cobble together an alias + rule to achieve this”, and failed to do so. Glad dune folks have been thinking about this too.
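For the record, here is a rough sketch of what I was attempting (script and executable names are hypothetical, and I never got a variant of this working end to end):

```
; Run integration tests behind a dedicated alias:
;   dune build @integration
(rule
 (alias integration)
 (deps setup_db.sh test_integration.exe)
 (action
  (setenv SOME_ENV test
   (progn
    (run ./setup_db.sh)
    (run ./test_integration.exe)))))
```

The idea is that the alias bundles the preprocessing (database setup, env vars) with the test run, so dune build @integration is the one command to remember.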