The monads library provides transformers for a number of well-known monads. All of them have a more or less standard implementation and offer the same performance as any other monadic library can: for example, there is no better way to implement the state monad than as a function. We have experimented a lot with different performance optimizations, such as boxing and unboxing the representation and inlining various operators, and we keep experimenting to get the maximum out of the current compiler. In BAP, we use the monads library heavily, first of all for our knowledge representation and reasoning engine, which is the foundation of all BAP analyses, and also for emulating binary programs. The rich interface is there to make our life easier and more comfortable when we use monads, and it comes essentially for free¹, since the number of functions doesn't affect the performance of the underlying monad.
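To make the "a function is the natural state monad" point concrete, here is a minimal sketch of a hand-rolled state monad (a generic illustration, not the monads library's actual code): the monad is just a function from the current state to a value paired with the updated state.

```ocaml
(* A minimal sketch of a state monad as a plain function.
   This is illustrative only and does not reproduce the monads
   library's interface. *)
module State = struct
  type ('a, 's) t = 's -> 'a * 's

  let return x : ('a, 's) t = fun s -> (x, s)

  let bind (m : ('a, 's) t) (f : 'a -> ('b, 's) t) : ('b, 's) t =
    fun s -> let x, s' = m s in f x s'

  let get : ('s, 's) t = fun s -> (s, s)
  let put s : (unit, 's) t = fun _ -> ((), s)
  let run (m : ('a, 's) t) ~init = m init
end

(* Usage: increment a counter stored in the state. *)
let tick : (unit, int) State.t =
  State.bind State.get (fun n -> State.put (n + 1))

let () =
  let (), n = State.run tick ~init:41 in
  Printf.printf "final state = %d\n" n
```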
But… there is always a but. Stacking monads using a transformer does have a price, even with the flambda compiler, which does an excellent job of unstacking them and eliminating the overhead of a chain of monads. Our latest experiments show that a custom-made monad (still built with the monads library) performs better under either variant of the compiler. We have rewritten our main monads that relied on transformers and got a 20% to 50% performance improvement. This is not to say that the monads library itself is slow or that we are abandoning it; it is to say that there are alternatives to transformers that may work better in some cases. See the linked PR if you want to learn the trick.
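The general idea behind such a custom monad (a hedged sketch of the common technique; the linked PR may differ in its details) is to fuse the effects that would otherwise be layered by transformers, e.g. state and error, into a single representation, so the whole computation is one function rather than a stack of wrapped closures.

```ocaml
(* Sketch: a fused state+error monad as one function, instead of a
   State transformer applied over an Error monad. Illustrative only. *)
module Fused = struct
  (* state is threaded through; errors short-circuit the computation *)
  type ('a, 's) t = 's -> ('a * 's, string) result

  let return x : ('a, 's) t = fun s -> Ok (x, s)

  let bind (m : ('a, 's) t) (f : 'a -> ('b, 's) t) : ('b, 's) t =
    fun s -> match m s with
      | Ok (x, s') -> f x s'
      | Error msg -> Error msg

  let get : ('s, 's) t = fun s -> Ok (s, s)
  let put s : (unit, 's) t = fun _ -> Ok ((), s)
  let fail msg : ('a, 's) t = fun _ -> Error msg
  let run (m : ('a, 's) t) ~init = m init
end

(* Usage: decrement a counter, failing if it would go negative. *)
let safe_dec : (unit, int) Fused.t =
  Fused.bind Fused.get (fun n ->
      if n = 0 then Fused.fail "underflow"
      else Fused.put (n - 1))

let () =
  match Fused.run safe_dec ~init:1 with
  | Ok ((), n) -> Printf.printf "state = %d\n" n
  | Error msg -> Printf.printf "error: %s\n" msg
```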
1) Provided that we ignore the size of the executable: e.g., linking the core_kernel library results in quite a large binary, which may increase the startup time. The effect is insignificant, but in some use cases it might be important.