Monadic Library for Eio Capabilities?

I would like to get into programming with OCaml 5 effects, but I’m not sure which scheduling library to use. From what I can tell, eio requires threading capabilities through the program, which seems like it could be painful, especially after hearing some feedback from community members. I wonder - can we use a monad to carry the capabilities around painlessly? Does anyone know of a monadic library that is able to do such a thing?

miou seems like it doesn’t require this, but doesn’t seem to be as popular yet. Then there’s moonpool as well, made by Container’s own @c-cube. But perhaps a monadic solution with eio is a good way to go?


eio itself doesn’t require this, and the main backends (eio_linux, eio_posix, eio_windows) all define a Low_level API that treats the network and filesystem as global variables, like Lwt does, and then add a capability layer on top of that.

However, you will likely find programming much easier without globals everywhere, so I would encourage giving it a try first! Every program I’ve converted from Lwt to Eio got quite a bit shorter, so whatever the pain is, it’s less than the pain of using Lwt.

The Lambda Capabilities blog explains why you usually don’t want to hide references, and gives some reasons why it shouldn’t be a problem (and how to do it if you still want to).


Are there some examples out there of real-world-ish codebases that were ported from Lwt to Eio? I think it would motivate a lot of people if we could see Eio in action.


So is this a viable thing? To put the capabilities in a global variable? There are no race conditions? Would that work for multiple domains as well? Does eio’s interface support just filing the capabilities away if you don’t care about them?

I’ve seen several people on Discord saying that they are thinking about going back to Lwt rather than keep using capabilities.

I have to try it myself, but my concern is that here was an opportunity to finally unify the community around a concurrency library, and instead something was added to the most heavily promoted library that is causing even more fracturing than we had before. From what I understand, even Mirage doesn’t use eio, and Mirage seems to me to be the most potentially useful application of capabilities in the ecosystem. Of the people I’ve talked to in the community, virtually all dislike it and find that the costs of capabilities outweigh what they consider fairly niche benefits. They’re also concerned that they made their concerns known and were ignored.


This message from a relative novice of Eio. I haven’t used it at scale. I welcome corrections.

It seems to me Eio was designed to take the place of both Lwt and Async in the ecosystem (to remove the community split). In doing so, it seems the decision was made to make a library which could do all that Lwt could do, all that Async could do, and more. But this is mostly a problem of opinionation. More specifically:

It’s actually a modular set of libraries sold as a whole package.

  • The eio.core library exposes the basic concurrency concepts (fibers, promises, switches, etc.) as well as their semantics (exception management, yielding, etc.).
  • As mentioned above the backends expose a Low_level module so you can interact more or less directly with the backend (e.g., Eio_linux.Low_level has basically all of the Unix functions albeit somewhat fancier with some labels and such).
  • The eio library does backend-agnostic, capability-enforced OS interaction.
  • The eio_main library can run your code regardless of which of the modular bits above you are using.
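For concreteness, a minimal program in this style (this is essentially the hello-world from Eio’s README: Eio_main.run selects a suitable backend and hands your function the environment, from which capabilities are extracted):

```ocaml
(* Eio_main.run picks a backend (eio_linux, eio_posix, ...) and passes
   the environment; stdout is obtained from it as a capability. *)
let () =
  Eio_main.run @@ fun env ->
  Eio.Flow.copy_string "Hello, world!\n" (Eio.Stdenv.stdout env)
```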

And so sure you can go capability-less, but

  • a module called Low_level located in a separate library is not very inviting,
  • you are writing code for one backend specifically, so your code won’t run on Windows,
  • you have to read through a large-ish amount of documentation before you learn that it is possible,
  • you still have the capabilities stuff in your cone of dependence.

Essentially, Eio is branded as an all-in-one, capability-full, platform-agnostic concurrency library. The code is also organised this way. And going capability-less is going against the grain of the library.

Conciseness and terseness are a bad measure of pleasantness.
E.g., ↑0 1↓↑+.×/5/⊂2 2⍴1 1 1 0


I thought I would do a minimal experiment to have a networking interface that doesn’t require a networking capability: GitHub - patricoferris/neio. This just uses the posix and linux low-level interfaces (I don’t have a Windows machine currently), wrapping them in what is mostly a small subset of the Eio.Net interface and making some different opinionated choices (e.g. no capabilities, removing some of the uses of subtyping with polymorphic variants and instead providing conversion functions to flows or fds, etc.).

Hopefully this is useful for imagining what Eio offers in terms of taking care of a lot of the low-level OS interactions in a safe way (structured concurrency, reference-counted file descriptors, etc.), while leaving you free to choose how to wrap that and present it to an end user :))


I don’t have a lot of experience with effects, but it seems to me that you could hack your way around this using the following technique: instead of passing capabilities as arguments, use a dynamically scoped variable (similar to React’s contexts; this can be expressed using effects) to pass them down the program without an explicit argument being passed.
Would that work?
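A minimal sketch of the idea, using OCaml 5 effect handlers as the dynamically scoped variable (all names here are made up for illustration; [net] stands in for a real capability such as Eio’s network):

```ocaml
open Effect
open Effect.Deep

(* Stand-in for a real capability such as Eio's network. *)
type net = { host : string }

type _ Effect.t += Get_net : net Effect.t

(* Any function, however deep in the call graph, can request the
   capability without it appearing in its signature. *)
let connect_to_redis () =
  let net = perform Get_net in
  Printf.sprintf "connecting via %s" net.host

(* The handler installs the capability for the dynamic extent of [f]. *)
let with_net net f =
  try_with f ()
    { effc = fun (type a) (eff : a Effect.t) ->
        match eff with
        | Get_net -> Some (fun (k : (a, _) continuation) -> continue k net)
        | _ -> None }

let () = print_endline (with_net { host = "localhost" } connect_to_redis)
```

One caveat: unlike an explicit argument, forgetting to install the handler only shows up at runtime (as an unhandled effect), not at compile time.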


I like this phrasing, and that’s what I thought Eio was aiming to be. But instead we get an opinionated monolithic library. It might be well designed and tasteful, but opinions are bound to create disagreements.

Packaging/maintaining the foundational libraries (fibers, low level IOs, set of effects, local storage) separately in libraries other people can build on should have been the way. It was not done this way and I’ve lost hope that it will.

OS-agnosticism seems a little optimistic to me in view of the Path module offered by eio, which seems very Unix-oriented (providing elements that don’t exist on Windows). I’m reminded of this article on Go which points out the supposed simplicity of its API. Personally, I prefer to expose the complexity of a system rather than hide it behind a “beautiful API”. Finally, let’s not even talk about unikernels, which may not have a file system (and where Path’s usefulness then becomes nil).


Here are a couple of mine:

The Wayland proxy went from 6661 lines of Lwt to 6527 lines of Eio (it’s a bit longer now as new features were added afterwards).

It’s easy to measure the cost of capabilities by starting from the main entry-point

  • Without caps, there would be no env, just () (saving one character, though we could just use a shorter name, of course).
  • It extracts fs and uses it to configure tracing.
  • It passes env to main, which extracts fs, net and proc_mgr.
    • fs is used to find the virtio-gpu device file in /dev.
    • proc_mgr is used to run the user’s application and to launch Xwayland.
    • net is used to connect to the host Wayland compositor, to listen for incoming Wayland connections, and to listen for incoming X11 connections.

Following all those uses through, I count 358 characters that wouldn’t have been needed without capabilities. wc **/*.ml **/*.mli says there are 231,949 characters in total, so I make that 0.15% extra code.
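(For readers unfamiliar with Eio, the entry-point shape being described looks roughly like this; main here is just a stub standing in for the proxy’s real entry function:)

```ocaml
(* Sketch of the entry point: env is received once, the three
   capabilities are extracted and passed explicitly to main. *)
let main ~fs:_ ~net:_ ~proc_mgr:_ = ()

let () =
  Eio_main.run @@ fun env ->
  main
    ~fs:(Eio.Stdenv.fs env)
    ~net:(Eio.Stdenv.net env)
    ~proc_mgr:(Eio.Stdenv.process_mgr env)
```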

On the other hand, you now have a good idea of every way that this process interacts with its environment, and anyone unfamiliar with this code could also have written the above summary in a few minutes.

(note: the tracing code there looks a bit of a mess; I think I wrote that when Eio was missing some file-system functionality)

The solver service went from 3247 lines of Lwt to 2792 lines of Eio. However, it’s still using some Lwt libraries (git-unix and capnp-rpc), so counting extra characters might not get the full cost. But so far I count 302/90,988 = 0.33% overhead.

I agree, but as far as I can tell this is the only thing people are complaining about. In terms of maintainability, security or formal verification, I don’t think anyone is disputing the advantages of capabilities?


I think people loudly complain about the maintainability, yes. Threading more stuff around is just a pain and a hindrance to composability, just like monads, but in a different way. If you now need networking in a part of the program that didn’t need it before (e.g. for caching via Redis or whatever), here you go: add capabilities to the whole path from main to that part of the program. Personally, I already have enough parameters as it is.

The benefit to security in a language that doesn’t actually enforce capabilities is, imho, dubious, but let’s give it the benefit of the doubt here. In terms of formal verification, I mean, I have no idea? Has anyone used Gospel with capabilities yet? There is research about verification based on capabilities, but I doubt it applies in an environment where capabilities are just a convention.


I don’t think you would need to thread the ‘net’ capability throughout your program up to the point where you want to use Redis, though. You would need the ‘net’ capability at or near the top of your application stack to open a connection to Redis; then you would pass that open connection (or a connection pool with automatic reconnection, etc.) down to the part of your app that needs to do caching. This would not be very different architecturally from what you would do without capabilities: you’d still need to pass the Redis client/connection to the part of your app that needed it, either way.
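A sketch of that shape, with a stand-in Redis module (none of these names are a real library API):

```ocaml
(* Stand-in for a real Redis client library. *)
module Redis = struct
  type conn = { host : string }
  let connect ~net:_ ~host = { host }
  let set conn key value =
    Printf.sprintf "SET %s %s via %s" key value conn.host
end

(* Deep in the app: needs a connection, knows nothing about [net]. *)
let cache_result conn key value = Redis.set conn key value

(* Near the top: the only place the capability is used. *)
let main ~net =
  let conn = Redis.connect ~net ~host:"localhost" in
  cache_result conn "answer" "42"

let () = print_endline (main ~net:())
```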


I don’t think people complaining about having to thread some arguments through a call graph is the same as people complaining about having more characters in their source code.

For example, say you want to add some logging to a function somewhere deep in your call graph and have to add some capability argument to the function and all the stack of functions above it, the cost is not in the characters, it’s more like

  • The cost of having to go through the whole graph, opening file after file, jumping around the source code, etc. You might get help from your IDE/lsp/editor, but the reason help for these tasks is available in your IDE/lsp/editor is precisely because it is a costly task.
  • The cost of having multiple merge-request conflicts because two small changes that would otherwise be very local now affect multiple files, adding arguments in a lot of places.
  • Etc.

That’s a design choice. You could also read a config file and open a Redis connection deep in the middle of the code and never thread anything.
No matter what we think of that approach, it’s not the place of a concurrency library to impose these choices on its users.
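A sketch of that style, with stand-in Config and Redis modules (not real APIs): a connection initialised lazily from config and used from anywhere, with nothing threaded through arguments:

```ocaml
(* Stand-ins for a real config reader and Redis client. *)
module Config = struct
  let read key = if key = "redis.host" then "localhost" else ""
end

module Redis = struct
  type conn = { host : string }
  let connect host = { host }
end

(* A global, lazily initialised connection: no argument threading.
   (Note: Lazy.force alone is not domain-safe; a multi-domain program
   would need to guard this.) *)
let conn = lazy (Redis.connect (Config.read "redis.host"))

(* Deep in the code, just use it. *)
let cache_something () = (Lazy.force conn).Redis.host

let () = print_endline (cache_something ())
```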


Sorry, but may I ask what the advantages are, exactly?
We’ve read on this thread that some consider it a hindrance to code maintenance.
The security can be bypassed by not using the eio API.
And the portion of software projects which care about formal verification is rather small.

A real-world concurrency library could definitely make a design choice to make resources available at the top layer and force users to inject them into lower layers. This is how Scala’s ZIO (and in fact even Erlang) does it. This helps developers of all backgrounds fall into the pit of success with using the library with correct concurrency and safety patterns. Of course they can always bypass it if they want (there is the low-level API after all).


That sounds like an excellent design choice for a high-level opinionated library for people who like this kind of thing, yes. Less so for something that tries to establish consensus and provide common abstractions to reduce fragmentation (ha!).

It sounds like this would be basically solved by splitting Eio into a separate low-level library (core effects and I/O bindings) and a high-level cross-platform capability-driven library in the same repo?

Absolutely, it would be good to have a clean separation so that other people can build their preferred abstractions on top of the lower-level library (which would ideally be (mostly) dependency-free). It’s a bit trickier for a foundation of fibers because it’s harder not to put opinions in there (e.g. structured concurrency and spawn are closely tied together), but I haven’t seen a lot of complaints about structured concurrency :slight_smile:.

Picos is a library designed explicitly to be a foundation other libraries can build on. I think it’s a very interesting effort, and it’s pushed by someone who has a lot of multicore experience from other ecosystems (something we should do more of, e.g. by porting Tokio from Rust?).


Interestingly, the blog post mentioned above uses this exact scenario (using Redis for caching) as an example of a case where you might think you’d need to pass net around but don’t (see Plumbing capabilities everywhere).

The blog post gives an example of reasoning about the safety of a web-server; see Web-server example.

There are two cases to consider: depending on a library or application that is actively malicious vs one that is merely buggy/confused. Without support from the compiler, we can only fix the second case. The web-server example above illustrates this with the case of a web-server that is tricked into revealing its private key.

It is possible to ask the OS to drop privileges soon after a program starts, and if the program is using only the high-level (recommended) Eio APIs then it should continue working just as before. The Filesystem Access section of Eio’s README shows an example of this (at the end), where using the stdlib’s open_in "/etc/passwd" is rejected with the exception Sys_error("/etc/passwd: Not permitted in capability mode").

However, even this won’t protect you from malicious code, as OCaml runs static initialisers in all dependencies before your application can drop privileges.

There’s no need to do that (the example programs I mentioned above both have logging). Modules writing log messages can’t use that to communicate with each other - logging is write-only from their point of view - and restricting logging isn’t useful from a security point of view, so we don’t bother to restrict it.

Notice that Eio provides Eio.traceln, which can write to stderr (by default) from anywhere. It does this using fiber-local storage. This is discussed in the blog post in the section Thread-local storage.

In the example I gave above, anyone can see how the Eio version of the wayland-proxy interacts with the outside world after studying the code for a couple of minutes. If you were asked to maintain or deploy this code, you’d probably find that pretty useful.

For example, the Lwt version of the solver service I linked above has a security vulnerability. Can you find it?

I think this is backwards. Rust made the opposite choice, not using capabilities in std. But this forced people wanting to write secure code to create the separate cap-std library, splitting the community. It’s much better to have a default library that everyone can use.

For example, someone made a Rust version of sudo using std and in sudo-rs’ first security audit they found a bug allowing users to delete arbitrary files (in rare cases). The highest-rated reply in that discussion says:

The cap-std library in Rust makes this easier than C/C++, so why not just do the right thing from the start if you’re doing greenfield software development?