Will OCaml be a good choice for writing high-performance, parallelizable machine learning libraries?

I am learning little bits of OCaml and Haskell simultaneously. At the same time I’m working with machine learning libraries, most of which are written in Python and use C/C++ for the high-performance parts. As a result, any non-trivial code is hard to understand and prone to breaking at runtime.

I am thinking of re-writing small chunks of these libraries for fun and learning. IMHO the object-oriented features, support for mutation, strong typing and native compilation make OCaml a great choice for my project.

I’d like to start by implementing a computational graph engine like the ones in Torch or TensorFlow.
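
For concreteness, here is roughly the kind of thing I have in mind: a tiny scalar expression graph with reverse-mode differentiation. This is only a sketch (the node type, tape, and operator names are all mine); real engines add tensors, broadcasting, and graph optimization on top:

```ocaml
(* Minimal reverse-mode autodiff over an explicit graph ("tape"). *)
type op = Var | Add of node * node | Mul of node * node
and node = { id : int; op : op; value : float; mutable grad : float }

let tape = ref []      (* every node created, most recent first *)
let counter = ref 0

let mk op value =
  incr counter;
  let n = { id = !counter; op; value; grad = 0.0 } in
  tape := n :: !tape;
  n

let var v = mk Var v
let ( +! ) a b = mk (Add (a, b)) (a.value +. b.value)
let ( *! ) a b = mk (Mul (a, b)) (a.value *. b.value)

(* Backward pass: the tape head is the newest node, so walking the
   list visits nodes in reverse topological order. *)
let backward out =
  out.grad <- 1.0;
  List.iter
    (fun n ->
      match n.op with
      | Var -> ()
      | Add (a, b) ->
          a.grad <- a.grad +. n.grad;
          b.grad <- b.grad +. n.grad
      | Mul (a, b) ->
          a.grad <- a.grad +. n.grad *. b.value;
          b.grad <- b.grad +. n.grad *. a.value)
    !tape

(* usage: z = x*y + y at x=3, y=2 *)
let x = var 3.0
let y = var 2.0
let z = (x *! y) +! y

let () =
  backward z;
  Printf.printf "z = %g  dz/dx = %g  dz/dy = %g\n" z.value x.grad y.grad
  (* prints: z = 8  dz/dx = 2  dz/dy = 4 *)
```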

Is there anything I’m missing that might make OCaml a bad choice for such a project? Is there anything else I should look out for?


I believe that, right now, high-performance parallel numerical code is not a strong point of OCaml. Other functional language implementations do more aggressive unboxing (MLton comes to mind) or have production-ready parallel runtimes (GHC Haskell). OCaml currently has a runtime lock comparable to Python’s GIL, which forces you to do multiprocessing instead of multithreading (see netmulticore for shared-memory concurrency between processes). And furthermore I would expect any of those languages to be inferior, for high-performance numerics, to Fortran or OpenMP C++ code.
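
To illustrate what the multiprocessing style looks like in practice, here is a sketch of offloading a computation to a forked child process, sending the result back over a pipe with Marshal (this is the pattern that libraries such as parmap and netmulticore package up more robustly; compile with the unix library, e.g. `ocamlfind ocamlopt -package unix -linkpkg`):

```ocaml
(* Run [f x] in a forked child process and read the result back
   from a pipe. Each process has its own heap, so there is no
   shared-memory contention -- and no shared runtime lock. *)
let in_child (f : 'a -> 'b) (x : 'a) : 'b =
  let rd, wr = Unix.pipe () in
  match Unix.fork () with
  | 0 ->
      (* child: compute, marshal the result to the parent, exit *)
      Unix.close rd;
      let oc = Unix.out_channel_of_descr wr in
      Marshal.to_channel oc (f x) [];
      close_out oc;
      exit 0
  | pid ->
      (* parent: read the child's result, then reap the child *)
      Unix.close wr;
      let ic = Unix.in_channel_of_descr rd in
      let r : 'b = Marshal.from_channel ic in
      close_in ic;
      ignore (Unix.waitpid [] pid);
      r

let sum_squares n =
  let s = ref 0.0 in
  for i = 1 to n do s := !s +. float i *. float i done;
  !s

let () = Printf.printf "%g\n" (in_child sum_squares 1000)
```

A real library would fork one worker per core and split the index range between them; the communication pattern stays the same.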

Where you can hope to win with OCaml is when a lot of the current application logic is in Python, because writing everything in C/C++ is too fragile or painful, or because your developer base is not expert enough in those languages (common in scientific computing projects, for example). OCaml can serve as a middle ground: its performance is much closer to C++, while its conciseness and programming comfort are, in many respects, closer to Python. If you can use it to move more of your program to a faster language while retaining ease of development, that can be a big win.

You should have a look at the Owl library, which is shaping up as an interesting OCaml numerical programming library. My understanding is that Owl is not yet developed for pure performance (it has not been heavily optimized), but it is already competitive with NumPy and Julia for some applications – see its fairly early-stage performance reality check. Of course, whether you use Owl or not, the key to good numerics performance is to rely on high-performance kernels, so using a good BLAS library is essential – and it brings parallelism with it.
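
To make the kernel point concrete, here is a naive triple-loop matrix multiply over the standard-library `Bigarray` (which, like NumPy, stores float64 unboxed and contiguously). This is exactly the kind of inner loop you would normally delegate to a tuned, multithreaded BLAS rather than hand-write:

```ocaml
open Bigarray

type mat = (float, float64_elt, c_layout) Array2.t

(* c = a * b, the textbook way; a tuned dgemm is typically an order
   of magnitude faster thanks to blocking, SIMD and threads. *)
let matmul (a : mat) (b : mat) : mat =
  let n = Array2.dim1 a and k = Array2.dim2 a and m = Array2.dim2 b in
  assert (Array2.dim1 b = k);
  let c = Array2.create Float64 C_layout n m in
  Array2.fill c 0.0;
  for i = 0 to n - 1 do
    for p = 0 to k - 1 do
      let aip = a.{i, p} in
      for j = 0 to m - 1 do
        c.{i, j} <- c.{i, j} +. aip *. b.{p, j}
      done
    done
  done;
  c

let () =
  let a = Array2.of_array Float64 C_layout [| [| 1.; 2. |]; [| 3.; 4. |] |] in
  let b = Array2.of_array Float64 C_layout [| [| 0.; 1. |]; [| 1.; 0. |] |] in
  let c = matmul a b in
  Printf.printf "%g %g\n" c.{0, 0} c.{0, 1}   (* prints: 2 1 *)
```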


Thank you. I’ll start by replacing the Python parts first.

In terms of Python->OCaml migration, we have one well-written success story: the conversion of ZeroInstall from Python to OCaml by Thomas Leonard – his retrospective post has links to a series of blog posts written along the way. The qualities Thomas was interested in are closer to systems programming, so the strengths he saw may or may not apply to your project, but it is still quite a nice migration story.

(On the other hand, there are also examples of OCaml->Python migration – or rather of OCaml users giving up and using Python – for scripting applications that process data in formats with better library coverage in Python than in OCaml. That said, we have made progress since then, and our JSON ecosystem is quite good for example, so the situation may be different today – although I’m sure there are still many cases where Python has better library support, or makes it easier to write quick-and-dirty exploratory code.)


You may also be interested in lacaml, an OCaml library that provides bindings to BLAS and LAPACK.
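
If I remember the lacaml API correctly, a matrix product looks roughly like this (`Lacaml.D` is the float64 instantiation; its matrices use Fortran layout with 1-based indexing – do check the actual interface, as details may differ across versions):

```ocaml
open Lacaml.D

let () =
  let a = Mat.of_array [| [| 1.; 2. |]; [| 3.; 4. |] |] in
  let b = Mat.of_array [| [| 0.; 1. |]; [| 1.; 0. |] |] in
  (* gemm dispatches to the BLAS dgemm routine, so linking a
     multithreaded BLAS (e.g. OpenBLAS) parallelizes it for free *)
  let c = gemm a b in
  Printf.printf "%g %g\n" c.{1, 1} c.{1, 2}
```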


You said that OCaml may not be the best choice for shared-memory concurrency. What about distributed, heavily parallel simulations where people use C/Fortran+MPI?

OCaml has an MPI library (ocamlmpi), so you can write distributed programs in the same style as C/Fortran – and there are other, higher-level libraries for distributed computation, such as Functory.
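
From memory, an ocamlmpi program looks much like its C counterpart – each rank computes a partial result and rank 0 gathers them (the exact names and signatures of `Mpi.comm_rank`, `Mpi.send` and `Mpi.receive` are from my recollection of the library, so verify against its interface; launch with `mpirun -np 4 ./prog` as usual):

```ocaml
(* Sum 1..1_000_000 across ranks: rank r handles indices r+1, r+1+size, ... *)
let () =
  let rank = Mpi.comm_rank Mpi.comm_world in
  let size = Mpi.comm_size Mpi.comm_world in
  let partial = ref 0.0 in
  let i = ref (rank + 1) in
  while !i <= 1_000_000 do
    partial := !partial +. float !i;
    i := !i + size
  done;
  if rank = 0 then begin
    (* rank 0 collects the other ranks' partial sums *)
    let total = ref !partial in
    for src = 1 to size - 1 do
      total := !total +. (Mpi.receive src 0 Mpi.comm_world : float)
    done;
    Printf.printf "sum = %g\n" !total
  end else
    (* other ranks send their partial sum to rank 0 (tag 0) *)
    Mpi.send !partial 0 0 Mpi.comm_world
```

A nice property of this binding is that, unlike the C API, it can marshal arbitrary OCaml values, not just flat buffers.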