Eio 0.1 - effects-based direct-style IO for OCaml 5

We often write code in a similar way when we need to pass in values for testing, though by no means always, and not when we can get away without doing so. The question is: why should this convention be imposed by a foundational library?

One may like this kind of approach for testing one's particular project (and I do, for some of my projects). But how does it follow that this approach should pervade the rest of the ecosystem?

On another axis, when I am testing my projects, I choose what I need to mock. Why should a foundational library pervasively force me to mock a larger set of entities than I need?

I can often get away with passing mock entities directly to the functions of an inner module under test, for its unit tests. This allows me to keep the outer modules of the project cleaner, if I don't need to mock for the kind of integration tests I am writing. Why should a foundational library force me to thread mock values all the way through the entire program and pass them in at the main function? If I do need to pass in values through the outer modules, that's fine, but why should I, and everyone else, be forced to?

Why should the extra cost of doing this be spread to all other users and projects, and, in my case, to more entities than actually need to be treated this way?

If I’ve adopted the convention that I’m going to pass some extra objects around in some of the functions of my project, why should some random other person working on something totally unrelated automatically have to do that as well?

For some context, the Eio README does state that it aims at becoming foundational, or at least standard, at the ecosystem level:

It would be very beneficial to use this opportunity to standardise a single concurrency API for OCaml, and we hope that Eio will be that API.

With that in mind, I'm happy to see discussions of the Eio interface as a candidate for the ecosystem's standard IO library. I think a library with this ambition needs to be subject to serious design and usability critique.

Now, I don't have much to contribute on the ocap (object-capability) side of things; others seem to have that discussion rolling already. But I'd like to contribute a couple of cents on a different topic.

The Eio README contains the following paragraph:

If you call a function without giving it access to a switch, then when the function returns you can be sure that any fibres it spawned have finished, and any files it opened have been closed. This works because Eio does not provide e.g. a way to open a file without attaching it to a switch. If a function doesn’t have a switch and wants to open a file, it must use Switch.run to create one. But then the function can’t return until Switch.run does, at which point the file is closed.

In essence, you apply a region-type discipline to open file descriptors. A switch is a region, the file-opening primitives take that region as a parameter, and all the files are closed when the region goes out of scope. From there, a couple of questions:

  • Are there other resources than fibres and file-descriptors which are handled this way?

  • Is it possible to attach other, user-defined resources to a switch? Can I have a stateful data structure (say a cache, a task queue, or a user dialog displayed on the screen) that I can set to automatically clear and close when the switch has finished running? If not, is this under consideration?

  • Is it possible to re-attach some resources to inner switches? Say I write `Switch.run (fun sw -> … let f = open sw "some-file" in … Switch.run ~transfer:[f] (fun innersw -> … (* f is closed here *)) …)`. This is somewhat equivalent to transferring ownership in Rust. If not, are there plans to support this kind of ownership transfer?

  • Conversely, is it possible to explicitly reattach a resource to an outer context? This is useful when you have some complex logic which you want to isolate in a switch (because you need some cancellation behaviour, because the library is made like this, etc.) but which returns a resource. Is this only possible by passing the parent switch around, as in `Switch.run (fun outersw -> … Switch.run (fun sw -> go_fetch_resource outersw sw) …)`?

  • And finally a more < waves hands > foundational </ waves hands > question: considering there is no type-level support for maintaining any invariant around switches, resources, and scope, should this be so tightly integrated in the API? What happens if you leak the switch itself (`let dead_switch = Switch.run (fun sw -> sw) in …`), or if you leak a file-descriptor (you just get some exceptions when using it, I guess)?

    An alternative (which might be seen as more compatible with the standard/foundational aspirations of Eio) is to provide the tools to manage resources without designating any specific resources as such. Specifically: provide a notion of resource switch, provide a facility to attach a clean-up function to it (`val at_switch_exit : switch -> (unit -> unit) -> unit`), and that's it. Then, on top of that foundation, you can build a wrapper (call it Eio_with_region_based_files or something snappier) providing the current API of Eio, with its runtime leak-proof file-descriptor management.

    I think that the current design (of the switch-resource system) is very systems-programming oriented. It's easy to explain in terms of the Rust borrow system, but with a runtime value rather than a compile-time type checker. And the Rust borrow system was designed based on best practices established in the systems-programming community at large. But how well does it fit into other areas of programming? Will it require passing parent switches around the call graph just to be able to return resources?
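A minimal sketch of what the proposed foundation could look like, using only the Stdlib. The `switch` type, `at_switch_exit`, and `run` below are hypothetical names from the paragraph above, not an existing API:

```ocaml
(* Hypothetical sketch of the minimal switch: just a mutable list of
   clean-up callbacks, run (most recently registered first) when the
   switch exits, whether normally or by exception. *)
type switch = { mutable cleanups : (unit -> unit) list }

let at_switch_exit sw fn = sw.cleanups <- fn :: sw.cleanups

let run body =
  let sw = { cleanups = [] } in
  Fun.protect
    ~finally:(fun () -> List.iter (fun fn -> fn ()) sw.cleanups)
    (fun () -> body sw)
```

Region-based file handling could then layer on top: a wrapper `open_file sw path` would register `close` with `at_switch_exit` before returning the channel.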


While toying a bit with a Fibre API I would like to use before going to sleep (don't do that, you'll get insomnia afterwards :–), I wasn't really convinced by Eio's design on these matters, that is, the switch and cancellation aspects.

It looks like a recipe for convoluted control and cancellation flow (e.g. the ability to switch cancellation context) and, I suspect, more complicated than I would like it to be. Besides, passing cancellation tokens (switch values) around explicitly will likely turn out to be error-prone.

We already have a notion of scope in OCaml, namely function scope, and we already have at least one combinator to deal with resources in the Stdlib: `Fun.protect`.

So I was rather converging on a simple design with enforced structured concurrency that would make good use of these pre-existing tools, rather than introduce new non-language-aware notions of scope.
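For reference, the pattern being alluded to, using nothing but function scope and `Fun.protect` from the Stdlib (a sketch, not Eio code):

```ocaml
(* The channel's lifetime is the function scope: close_in runs on
   normal return and on exception alike, via Stdlib.Fun.protect. *)
let with_file path f =
  let ic = open_in path in
  Fun.protect ~finally:(fun () -> close_in ic) (fun () -> f ic)
```

For example, `with_file "data.txt" input_line` reads the first line and is guaranteed to close the channel afterwards.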


In OCaml, “installing” a handler is what creates a new fibre in the first place. When you call Fibre.fork:

  1. It performs the Fork effect, suspending the current fibre.
  2. The effect gets handled in a backend’s scheduler, e.g. in `eio_luv.ml` (ocaml-multicore/eio at commit 9e1df71bd6d95e962b207f3b8c849215ac01d48a on GitHub).
    (note that new_fibre here is some metadata for the new fibre, not the fibre itself)
  3. The recursive call to fork then uses match_with to create a new fibre, with the same IO effect handlers.
  • How long it takes to handle an effect depends on how many effect handlers are on the fibre’s stack. Eio only installs one handler per fibre but if you add your own it will slow things down a bit.
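The steps above can be sketched with a toy handler. This is illustrative only, and not Eio's scheduler: it runs the forked thunk to completion before resuming the forker, whereas a real scheduler would queue the new fibre instead:

```ocaml
open Effect
open Effect.Deep

(* A toy Fork effect: performing it suspends the current "fibre"; the
   handler then creates a new one with match_with, mirroring steps 1-3. *)
type _ Effect.t += Fork : (unit -> unit) -> unit Effect.t

let rec fork thunk =
  match_with thunk ()
    { retc = (fun () -> ());
      exnc = raise;
      effc = (fun (type a) (eff : a Effect.t) ->
        match eff with
        | Fork f ->
            Some (fun (k : (a, _) continuation) ->
              fork f;          (* run the new fibre (naively, to completion) *)
              continue k ())   (* then resume the forking fibre *)
        | _ -> None) }
```

Under this naive scheduling, `fork (fun () -> perform (Fork child); rest)` runs `child` to completion before `rest`.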

Isn’t this a rather important limitation of effect handlers in OCaml? Handlers are supposed to make it easy to control effectful code in a fine-grained way. If their performance in real-world scenarios is not scalable, does this mean there is more work to be done here? Or is this specifically about Eio’s fibres?

This is about OCaml’s fibre implementation. An effect handler works a bit like an exception handler, with the effect being passed to the nearest enclosing handler, which may then re-perform it if it doesn’t want to handle it itself.

Eio is very conservative in its use of effects, doing the minimum needed to provide concurrency. There is one handler per fibre, so it is as fast as possible. Handlers are pretty quick, so a bit of nesting may be fine, but Eio doesn’t force that cost on you.

To be clear: this scales fine to any number of concurrent fibres. The slow-down only happens if you want to nest handlers.
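A small sketch of that forwarding behaviour: an inner handler whose `effc` returns `None` passes the effect on to the enclosing handler, much like an uncaught exception. The `Ask` effect and both handlers here are invented for illustration, nothing from Eio:

```ocaml
open Effect
open Effect.Deep

type _ Effect.t += Ask : int Effect.t

(* An inner handler that handles nothing: every effect is forwarded to
   the next enclosing handler, which is the per-perform cost of nesting. *)
let transparent body =
  match_with body ()
    { retc = Fun.id; exnc = raise;
      effc = (fun (type a) (_ : a Effect.t) -> None) }

let answer =
  match_with (fun () -> transparent (fun () -> perform Ask + 1)) ()
    { retc = Fun.id; exnc = raise;
      effc = (fun (type a) (eff : a Effect.t) ->
        match eff with
        | Ask -> Some (fun (k : (a, _) continuation) -> continue k 41)
        | _ -> None) }
```

The `Ask` performed inside `transparent` is handled by the outer handler; resuming the continuation resumes execution back inside the inner handler's scope.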

The only good use I've found for nested effects myself is in Angstrom, where a handler is used to provide the legacy callback API on top of the new effect-based core. That function shouldn't be needed in new code, though.


As Leo White touched upon here, there is an overlap between capabilities and typed effects. I'm wondering how big this overlap will be in practice, and if it is large, I don't think it's very elegant to have several ways to express the same thing; it becomes outright confusing for users: is my IO capability an effect or an argument?

To my understanding, the Stdlib will be rewritten to use typed effects, so when one opens a file within a function (at least when not using Eio), it will become part of its interface.

Daniel talks about how users shouldn't be using effects directly, but is the compile-time specification of allowed effects not exactly one of the most exciting qualities in the context of OCaml? This is at least what I'm most excited about: that I'll be able to restrict a big part of my codebase to be pure, and that it becomes obvious when this is not the case.

I love the prospect of making effectful dependencies explicit - but I don’t want two ways to do the same.


Indeed, what I meant by that is that one should strive to have one's abstractions be polymorphic over effects, i.e. let the client choose the ones it wants in order to satisfy a need (e.g. give me more bytes to process).

A good way of doing this is to let the client specify these needs via functions (which could also turn out to be pure!).

Once the functions are given to your abstraction, the concrete effects chosen by the client will of course propagate to uses of your abstraction whenever it calls them.

How could these functions be pure? I’m imagining them throwing an effect, and then some interpreter handling them - or do I misinterpret?

`let more_bytes () = None`

Most of the time yes.
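To make this concrete, here is a sketch of an abstraction that is agnostic about how input is produced; the client-supplied `more_bytes` could perform IO effects, or be entirely pure as above (all names here are hypothetical):

```ocaml
(* Counts all input bytes; knows nothing about where they come from. *)
let count_bytes ~more_bytes =
  let rec loop n =
    match more_bytes () with
    | None -> n
    | Some chunk -> loop (n + String.length chunk)
  in
  loop 0

(* A pure client: there is never any input. *)
let zero = count_bytes ~more_bytes:(fun () -> None)

(* A stateful (but still IO-free) client feeding two chunks. *)
let total =
  let chunks = ref [ "ab"; "cde" ] in
  count_bytes ~more_bytes:(fun () ->
    match !chunks with
    | [] -> None
    | c :: rest ->
        chunks := rest;
        Some c)
```

A backend-using client would look identical from `count_bytes`'s point of view; only the effects performed inside `more_bytes` would differ.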

Yes, I think this is a very interesting question. Is there a spec for typed-effects yet? The most recent information I have is from Leo’s talk, but that was ages ago.

One unexpected (to me) result of writing Eio was that despite being an effects-based IO library, the Eio API doesn’t actually define any IO effects! The four effects we ended up using were `Suspend` (waiting for promises, etc.), `Fork` (for `Fibre.fork`), `Trace` (for debugging) and `Get_context` (fibre-local context, used for cancellation). Anything that implements these four should work as an Eio backend.

Everything about IO is entirely contained in the various backends. For example, Eio_linux internally defines an ERead effect, and provides (via env) a file-system abstraction that performs this effect. But Eio_luv instead uses a generic Enter effect for this. This is nice and modular, but not how I originally expected it to work!


For the lazy-initialization pattern, do you really need a third-party library to be parametric in the notion of async IO engine you use? Is basic systhread IO not sufficient for this generic purpose?

Regarding thread-safe `Lazy` values: if it is important for this thread, then I would like to mention the design I posted at https://github.com/ocaml-multicore/ocaml-multicore/issues/750#issuecomment-978125441, which could still use feedback from programmers, and also volunteers to help (let me know).

Apologies in advance: I don't have much of value to contribute, but I would love it if someone could explain some of the basics or intermittently summarize the discussion for folks like me (or even point to reading material).

In my limited understanding:

  • effects are being implemented as one-shot delimited continuations
  • delimited continuations, like exceptions, should allow for non-local handling, i.e. from the arguments of a function, you have no clue what effects it may perform OR how they will eventually be handled
  • eventually the type-system will be used to track what effects a function is performing
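The one-shot part of the first bullet can be seen directly: resuming a captured continuation a second time raises `Effect.Continuation_already_resumed`. A minimal demonstration, with a toy `Yield` effect:

```ocaml
open Effect
open Effect.Deep

type _ Effect.t += Yield : unit Effect.t

let second_resume_failed = ref false

let () =
  let saved = ref (fun () -> ()) in
  match_with (fun () -> perform Yield) ()
    { retc = Fun.id;
      exnc = raise;
      effc = (fun (type a) (eff : a Effect.t) ->
        match eff with
        | Yield ->
            Some (fun (k : (a, _) continuation) ->
              saved := (fun () -> continue k ());
              continue k ())   (* first resume: fine *)
        | _ -> None) };
  (* second resume: the continuation has already been consumed *)
  try !saved () with
  | Continuation_already_resumed -> second_resume_failed := true
```

This one-shot restriction is what distinguishes OCaml's continuations from the multishot kind needed for e.g. nondeterminism.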

So, the questions in my mind are:

  1. What advantage is there to passing capabilities explicitly? They seem to have some overlap with effects - but shouldn’t they be determined by whatever handler is installed, i.e. non-locally, which is the intended use of effects? Is this just a temporary stop-gap / workaround for not having typed effects and forcing programmers to track effects/capabilities?
  2. Re:

Wouldn’t any mocking just be about whatever handler you install outside - so not passing capabilities explicitly would actually be perfect for mocking? i.e. install a mock-network handler, mock-filesystem-handler on the outside and test the code?
  3. Do effects need to be tracked in the same type system? I.e. do you need to unify the effects with the types of values in the rest of the language? Because honestly, this seems like too much cognitive overhead:

val run_webserver :
  unit -[www_root: Eio.Dir.t;
         certificates: Eio.Dir.t;
         network: Eio.Net.listening_socket] -> unit

Maybe it’s only a matter of how the type is printed/shown, but, more theoretically, can it not be a separate unification, i.e. one for types and one for effects (effects will always be unit in type)? Maybe resulting in two separate signatures for the function, like this:

val run_webserver : unit -> unit
eff run_webserver : www_root -> certificates -> network -> tty_io

So it lets you see AND track separately the type and the type of effects/capabilities the function has. You could even have compiler flags switching the “effect system” on and off, to choose between pure/impure language?


For the use case of limiting functions' side effects using typed effects, I agree that it would be an ugly interface to include a lot of effects in the functions' types, though I imagined a different solution.

Users that don't care about what effects their code has can just use the ML-effect "->" in their mli, or any other less general named set of effects. I guess this would be bad style, especially for libraries.

In the case where you want to be exact about what effects are possible - I was imagining a semantics like that of polymorphic variants, with structural subtyping. So one would name a set of effects, and include this set in other sets. The downside of this though, is that you need to go to definition of several sets of effects to see the full set - or let merlin list the set.

EDIT: Probably it won’t be bad style to mark all your functions with a common set of effects in a library, as long as all the effects are handled by the library. This would avoid breaking user code when the set of internally used effects got updated - but I guess this also depends on a feature to make a named set of effects abstract in a module?

Wouldn’t any mocking just be about whatever handler you install outside - so not passing capabilities explicitly would actually be perfect for mocking? i.e. install a mock-network handler, mock-filesystem-handler on the outside and test the code?

My understanding is that there isn't an effect defined for each system call (because you wouldn't want to pay that price? and different backends would not provide the same syscalls?). Rather, Eio uses effects only to simulate direct style and hide the callback hell that is async IO… but then, this implies that typed effects are going to be uninformative about what side effects Eio is performing.

Besides, even if we could intercept the syscalls with an effect handler, how would we mock opaque types like `file_descr` or `out_channel` in response to an `Open` effect? The mock effect handler will also need to check that effects like `Write (file_descr, ..)` are actually targeting the right thing, so you have to reimplement a dispatch that comes for free with a mock object.
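A sketch of what such a mock handler ends up looking like. The `Open`/`Read` effects below are invented for illustration, not Eio's: the handler has to mint descriptors and keep its own table mapping them to contents, i.e. reimplement exactly the dispatch being discussed:

```ocaml
open Effect
open Effect.Deep

type fd = int

type _ Effect.t +=
  | Open : string -> fd Effect.t
  | Read : fd -> string Effect.t

(* [files] maps paths to mock contents. The handler must track which fd
   refers to which file; a mock object would carry that state itself. *)
let with_mock_fs files body =
  let table : (fd, string) Hashtbl.t = Hashtbl.create 8 in
  let next = ref 0 in
  match_with body ()
    { retc = Fun.id;
      exnc = raise;
      effc = (fun (type a) (eff : a Effect.t) ->
        match eff with
        | Open path ->
            Some (fun (k : (a, _) continuation) ->
              incr next;
              Hashtbl.replace table !next (List.assoc path files);
              continue k !next)
        | Read fd ->
            Some (fun (k : (a, _) continuation) ->
              continue k (Hashtbl.find table fd))
        | _ -> None) }
```

Code under test then runs unchanged inside `with_mock_fs`, performing `Open` and `Read` as if talking to a real backend.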

I finally did my own experiments with effect handlers, only to discover that I had naïvely overlooked their semantics. It helped me realize that you can't set up local effect handlers and also expect to use fibres / green threads inside them.


Eio’s “fork” seems to be a misnomer; it’s more like spawning an on-the-side helper thread to run at scheduler scope. A true fork, in which both paths of execution inherit the handlers at the fork point, looks to me like the amb/nondeterminism effect, which requires multishot continuations.

Fun read, thanks for exploring

@art-w this is an excellent blog post about effect handlers, could you maybe consider posting it into its own thread?

@art-w related to your article, it’s worth pondering over the fact that effects are isomorphic to monads. Anything monads can do, effects can do too. The problem is that monads are fully type-safe, and until effects gain types in OCaml, they’re unsafe and therefore their proliferation can turn OCaml into an unsafe language. That’s why IMO we should really only use it for concurrency for now, and even then we should be careful.


It would be nice to see a small test repository that demonstrates the tradeoffs associated with alternative APIs. This would make arguing the pros and cons more fact-based and less contentious. In my experience it is generally possible to avoid the object system by using functors and first-class modules. I find modules easier to reason about and that typically compensates for any minor syntactic overhead. Extending or overriding functionality in libraries does not strike me as a frequent enough occurrence to worry about verbosity.
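For instance, a capability can be a first-class module rather than an object, which keeps mocking straightforward. A sketch with an invented `NET` signature, nothing from Eio:

```ocaml
(* The capability is an ordinary module value, passed explicitly. *)
module type NET = sig
  val fetch : string -> string
end

let run_client (module Net : NET) url = "got: " ^ Net.fetch url

(* A mock instance for tests; a real backend would do actual IO. *)
let mock_net : (module NET) =
  (module struct
    let fetch url = "mock body for " ^ url
  end)
```

Swapping in a production backend is just passing a different module value, and a functor could be used instead if a whole module should be instantiated within one capability environment.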

When it comes to reflecting capability-based security in the type system, have you considered using polymorphic variants together with phantom types? This would allow you to capture required capabilities in the type of a function, e.g. whether writes to the filesystem may occur.
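A sketch of that encoding, with a hypothetical `Dir` capability whose phantom parameter lists the allowed operations as a polymorphic-variant set (the in-memory table merely stands in for a real filesystem):

```ocaml
module Dir : sig
  type 'caps t
  val create : unit -> [ `Read | `Write ] t
  val read_only : [> `Read ] t -> [ `Read ] t
  val save : [> `Write ] t -> string -> string -> unit
  val load : [> `Read ] t -> string -> string
end = struct
  (* The parameter is a phantom: the representation ignores it. *)
  type 'caps t = (string, string) Hashtbl.t
  let create () = Hashtbl.create 16
  let read_only d = d
  let save d name data = Hashtbl.replace d name data
  let load d name = Hashtbl.find d name
end
```

With this, `Dir.save (Dir.read_only dir) "x" "y"` is a compile-time error, because the closed `[ `Read ]` row cannot provide the `` `Write`` capability that `save` requires.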

It may be a good idea to think about who the intended audience for Eio is. I’ll venture to say that it will never be something that beginners should have to learn before being able to write simple OCaml code. I see it as a tool for experienced and disciplined programmers who care about both safety and performance in large, complex applications. Having to manage capabilities or understanding advanced type system features doesn’t strike me as an overly large burden for this audience. Many of them are already using monadic implementations, which may be even more difficult or cumbersome. I’d personally be completely fine with, for example, using functors to instantiate a whole module within a certain capability environment to avoid having to pass capabilities along through functions.