[ANN] Results of the OCaml User Survey 2022

Hi everyone,

On behalf of the OCSF, I’m pleased to announce that the report on the OCaml User Survey 2022 is now available. Unfortunately, the results are not as meaningful as we would have hoped, since this instance of the survey attracted far fewer respondents than the previous one. On the upside, this allowed us to discover that some members of the OCaml community are experts in the field; we hope to use their expertise to improve the whole process next time. The survey still allowed the OCSF to draw some preliminary conclusions and to highlight some pain points where the OCSF may be able to help improve the OCaml ecosystem as a whole.

Lastly, I would like to apologise, since it took me an inordinate amount of time to process the results and release the report. While I have already gotten in touch with some people regarding next year’s iteration, if anyone would like to help (from proofreading to publishing in a more modern way than producing a PDF with LaTeX), don’t hesitate to message me.



35.1% OCaml with js_of_ocaml vs 4.3%/2.9% Reason with/without BuckleScript is really good to know!

I’m not sure this statistic is reliable: the BuckleScript (now ReScript) users have their own forum.


Thanks for the results!

One thing I would find interesting is to try to collaborate with other language surveys to standardize some of the questions and see whether some ecosystem complaint patterns remain constant. I find it amusing that whenever I dabble outside OCaml, I almost always find the same kinds of problems and complaints that people think are specific to this ecosystem.


The responses to Q31 and Q32 highlight a bit of a tooling pain point, I think, even more so in the freeform answers (lots of “documentation”, “debugger”, “opam/dune”, etc.).
It may be useful to pair “how do you usually do X” questions with “how satisfied are you with that”, instead of lumping it all together under “how satisfied are you with tooling”. Doing it this way may reveal the areas where tooling is weak despite the generally high quality.

Also one of the things I’d love to see next year is normalizing freeform answers and putting the ones that repeat enough in the graphical results!
(or maybe even creating a keyword cloud next to freeform answers)
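The normalization suggested above could start as simply as tallying keyword occurrences across lowercased freeform answers. A minimal sketch in OCaml; the keyword list and sample answers here are purely hypothetical, not taken from the survey data:

```ocaml
(* Minimal sketch: count how often each hypothetical keyword occurs
   across freeform answers, using a naive substring search on the
   lowercased text. Keywords and answers are illustrative only. *)
let keywords = ["documentation"; "debugger"; "opam"; "dune"]

(* True if [kw] occurs as a substring of the lowercased answer [a]. *)
let mentions kw a =
  let a = String.lowercase_ascii a in
  let len_a = String.length a and len_k = String.length kw in
  let rec hit i =
    i + len_k <= len_a && (String.sub a i len_k = kw || hit (i + 1))
  in
  hit 0

(* Associate each keyword with its number of mentioning answers. *)
let tally answers =
  List.map
    (fun kw -> (kw, List.length (List.filter (mentions kw) answers)))
    keywords
```

Answers repeated often enough could then be promoted into the graphical results, or fed into a keyword cloud.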


A few notes while reading the results.

Q6: How did you learn OCaml (279)

A mistake in this question allowed the respondents to select multiple answers.

I wouldn’t call that a mistake. People can learn OCaml in multiple contexts: partly at work, partly self-taught, and so on.

Q19: Which of these language implementations are you using, as of January 1st 2022? (276)

Compared to the previous survey (respectively 91.6%, 30.8%, 16%, and 9.3%), Reason usage seems to have decreased amongst the respondents.

That’s pretty obviously because the combination that used to be called Reason+BuckleScript is now rebranded as ReScript. If ReScript is included as a choice I’m pretty sure it will get a large number of respondents.

Q21: What is the oldest version that you try to support in the software you develop? (273)

OCaml 4.08 was released in 2019, and was the first release to feature the let*, let+ and and+ operators.

This wording is a bit inaccurate: OCaml didn’t ‘feature’ these specific operators, but rather added the ability to define let-operators, of which these are examples. However, no actual definitions of such operators were shipped in the standard library, and they still haven’t been to date.
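To illustrate the distinction: since 4.08 the language lets you define binding operators yourself, but you have to write the definition; a minimal sketch for option (the function name add_opts is made up for the example):

```ocaml
(* Since OCaml 4.08, "let-operators" can be user-defined; the standard
   library does not ship any definitions of them. Here we define one
   for option, delegating to Option.bind. *)
let ( let* ) o f = Option.bind o f

(* Chain option-returning computations with the operator we defined. *)
let add_opts a b =
  let* x = a in
  let* y = b in
  Some (x + y)
```

Without the `let ( let* ) = ...` definition, the `let*` syntax in `add_opts` would not compile, which is the point the comment above makes.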

General note: it seems that some of the questions simulated multiple-choice answers by offering combinations pairing different choices? I think it would have been much simpler to make them ‘choose zero or more’ style.

Thanks for compiling the results this year.

The modality for answering each question is not always recalled in the report, but I think it is important for interpreting the results. (This was already the case in the previous survey.) For instance, it is worth keeping in mind that some multi-choice questions were limited to a fixed and small number of answers. This effectively turns some questions from “what is important” into “what are the N most important”, which radically changes how you read the answers. (The form is no longer available, so I cannot cross-check and tell for sure which details are missing.)

Having filled in the form myself, I found that some questions called for nuanced answers, of course, but the above limitation did not help. I also found the bias in some questions telling (e.g. Q49, favourite new language features).
