Eio 0.1 - effects-based direct-style IO for OCaml 5

It’s not quite the same; reading a promise does not remove the item. If a stream has many readers then they each get a different item, but a promise gives the same value to everyone.

  • The documentation highlights that Promises are not Lazy values. Are there concurrent-Lazy-values somewhere, or how hard are they to build on top of what there is?

You can wrap a promise in a lazy, so that forcing the lazy starts resolving the promise:

let lazy_promise = lazy (Fibre.fork_promise ...) in
...
Promise.await (Lazy.force lazy_promise)

Using plain lazy values and blocking in the thunk works too, unless another fibre tries to force it while it’s blocked. Then you’d get CamlinternalLazy.Undefined.

  • I see Semaphore but no Mutex. Is the idea to push everyone to the more “reentrant” behavior of semaphores? (Should this be documented?) Is there a performance impact?

I think there should be an Eio.Mutex too.
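In the meantime, one way to sketch a mutex is as a one-permit semaphore. This is a hypothetical helper (not part of the released API), assuming Eio 0.1’s Eio.Semaphore with make/acquire/release:

```ocaml
(* Hypothetical sketch: a mutex as a one-permit semaphore.
   Assumes Eio 0.1's Eio.Semaphore (make/acquire/release). *)
module Mutex = struct
  type t = Eio.Semaphore.t

  let create () = Eio.Semaphore.make 1

  (* Run [fn] with the lock held; release it even if [fn] raises. *)
  let with_lock t fn =
    Eio.Semaphore.acquire t;
    Fun.protect ~finally:(fun () -> Eio.Semaphore.release t) fn
end
```

Note the Fun.protect: without it, an exception (including cancellation) raised inside fn would leave the lock held forever.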

  • Spawning a Fibre uses functions called fork; the name comes with some Unix baggage that isn’t relevant here. Maybe this could be pointed out.

Seems reasonable!

The fundamental reason why this is wrong is that

  1. Object capability style (as a systematic style) has major programming costs, and is mainly useful in a language and/or ecosystem that is specifically intended for it.
  2. There are no plans to make OCaml into such a language in the future.

Even if there were such plans, for example “in five years,” if EIO becomes foundational in one year, that is four years of everyone paying the costs of ocap without the benefit.

Meanwhile, we would also keep paying the costs (as we do now) of the other schemes we rely on, and will have to continue to rely on, to establish various properties of our programs and of how we run them, because object capability style will not actually be able to establish those properties (or contribute much to them).

That is why, so far, the only argument I can take seriously in favor of pervasive, imposed ocap in OCaml is that it is useful for Mirage, but that is not a good argument for making general users pay heavy and unnecessary costs.

7 Likes

You can wrap a promise in a lazy, so that forcing the lazy starts resolving the promise

let lazy_promise = lazy (Fibre.fork_promise ...) in
...
Promise.await (Lazy.force lazy_promise)

But Lazy values cannot be shared across domains without synchronization, so it would be invalid to do this if we want to share the lazy-promise.

1 Like

Only one fibre runs in a domain at a time, and control only switches when a fibre blocks, so they’re more like Lwt threads than goroutines. Is the goal very high-performance, or is it mostly waiting for things to happen?

Can fibers communicate across domains?

Yes - you can write to an Eio.Stream from one domain and read it from another. They work much like go channels.
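For illustration, a minimal sketch of that (the exact Domain_manager function name is an assumption on my part; the rest is the Eio 0.1 API shown elsewhere in this thread):

```ocaml
(* Sketch: one fibre produces into a stream while a second domain
   consumes from it. [Eio.Domain_manager.run] is assumed here; the
   exact name may differ in Eio 0.1. *)
open Eio.Std

let () =
  Eio_main.run @@ fun env ->
  let stream = Eio.Stream.create 1 in   (* capacity 1, like a Go channel *)
  Fibre.both
    (fun () ->
       (* Run the consumer in a separate domain. *)
       Eio.Domain_manager.run env#domain_mgr (fun () ->
           traceln "got %d" (Eio.Stream.take stream)))
    (fun () -> Eio.Stream.add stream 42)
```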

If a fiber crashes, how do I handle that?

You can use try ... with ... as usual. There is a tree of cancellation contexts, and if you don’t handle the exception then it will bubble upwards towards your program’s top-level, cancelling sibling fibres as it goes. For example:

# Eio_main.run @@ fun _env ->
  Fibre.both
    (fun () -> for x = 1 to 3 do traceln "x = %d" x; Fibre.yield () done)
    (fun () -> failwith "Simulated error");;
+x = 1
Exception: Failure "Simulated error".

What happened here was:

  1. Fibre.both created a new cancellation context for the child fibres.
  2. The first fibre ran, printed x = 1 and yielded.
  3. The second fibre raised an exception.
  4. Fibre.both caught the exception and cancelled the context.
  5. The first fibre’s yield then raised a Cancelled exception.
  6. Once both fibres had finished, Fibre.both re-raised the original exception.
4 Likes

Hello - firstly, thank you for this work, the performance results look very impressive!

I am wondering whether the concepts and terminology behind Fibre, Switch.t, and Promise.t are standard or well-known - I am not at all familiar with structured concurrency. I’m only familiar with Lwt, so I’m trying to mentally map the concepts to that.

I have some questions I’m hoping you can answer:

  1. Would a pool of lightweight threads be a semi-accurate way to think of a Switch.t?
  2. Is it accurate to say that creating a fiber using both, pair, all, first etc. runs the argument(s), and then there are 3 logical effects that can be performed during their execution - yield, cancel, resolve? The resolve is the only one performed on the function(s) returning, and the others can be performed explicitly by the programmer, e.g. raise Cancel.Cancelled or Fiber.yield ()?
  3. Must all of the above fibre creations happen within a surrounding effect-handler (like an Eio_main)? Otherwise it’ll behave like an unhandled exception?
  4. Are all the fibres created above associated with some kind of default Switch.t for the current domain? For the ones created with Fibre.fork* one must explicitly pass in a Switch.t, and you can create sub-switches in it too, but you have no access to the default/top-level switch for the domain?
  5. What is a Switch.hook? I could not understand the meaning of “A handle for removing a callback.” - an example of when it is useful would be nice in the documentation.
  6. Is it possible/desirable to disentangle the 3 modules? Fibre refers to both Switch.t and Promise.t in its interface, but a Switch.t is defined as a group of Fibres, and a Promise.t is defined as a way to communicate between Fibres - so conceptually they seem somewhat circularly dependent.

Many thanks!

Thanks! :slight_smile:

I am wondering whether the concepts and terminology behind Fibre, Switch.t, and Promise.t are standard or well-known - I am not at all familiar with structured concurrency. I’m only familiar with Lwt, so I’m trying to mentally map the concepts to that.

Promise is standard (and the same in Lwt). Fibre is what OCaml calls them. Switch is non-standard - it evolved from Lwt_switch but it’s not very similar. I’ve seen other systems use the terms “bundle” or “nursery”. It’s also a bit like a “wait group”. If someone can think of a better name, let me know!

  1. Would a pool of lightweight threads be a semi-accurate way to think of a Switch.t?

A pool (e.g. Lwt_pool) to me suggests resources just sitting around waiting to be used, whereas a switch is simply a way of grouping fibres and other resources. A switch allows all of its fibres to be cancelled, waits until they’re all complete, and frees any other resources before ending. The metaphor is turning off a switch to stop everything connected to it, but I don’t really like the term.

  2. Is it accurate to say that creating a fiber using both, pair, all, first etc. runs the argument(s), and then there are 3 logical effects that can be performed during their execution - yield, cancel, resolve? The resolve is the only one performed on the function(s) returning, and the others can be performed explicitly by the programmer, e.g. raise Cancel.Cancelled or Fiber.yield ()?

Yes, perhaps. The arguments can finish or raise an exception like any normal OCaml function, but I wouldn’t call that an “effect”. They can yield any number of times.

Note: you don’t normally raise Cancelled yourself; that happens when the switch wants a fibre to stop. It specifically indicates that you don’t want the exception reported (e.g. in a log warning or dialog box), because something else is dealing with that.

  3. Must all of the above fibre creations happen within a surrounding effect-handler (like an Eio_main)? Otherwise it’ll behave like an unhandled exception?

Correct.

  4. Are all the fibres created above associated with some kind of default Switch.t for the current domain?

There’s no default switch. Fibre.all is defined like this:

let all xs =
  Switch.run @@ fun sw ->
  List.iter (fork ~sw) xs

So it creates a switch, forks all the new fibres into it, and then the switch waits for them all to finish.

For the ones created with Fibre.fork* one must explicitly pass in a Switch.t, and you can create sub-switches in it too, but you have no access to the default/top-level switch for the domain?

Yes, because there isn’t one. Functions like Fibre.both that create fibres and wait for them can create a switch internally, but for e.g. Fibre.fork the fork function returns while the fibre is still running, so it needs to be given a switch from outside, because a switch can’t outlive the function that creates it.

  5. What is a Switch.hook? I could not understand the meaning of “A handle for removing a callback.” - an example of when it is useful would be nice in the documentation.

You can arrange for things to be cleaned up when the switch finishes, by using Switch.on_release. For example, when opening a file you would arrange for it to be closed (this happens automatically as part of the open operation, so you can’t forget).

But if the file is closed manually before the switch finishes, that cleanup isn’t needed any longer and you’ll want to get rid of it. For that, on_release_cancellable returns a hook that can be used to remove the callback. You probably don’t need to care about this unless you’re implementing a low-level operation to open a file or similar.
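For illustration, a hypothetical low-level wrapper might use the hook like this (open_raw and close_raw are made-up placeholders; only the two Switch functions are from Eio):

```ocaml
(* Hypothetical sketch of the hook pattern. [open_raw]/[close_raw] are
   made up; Switch.on_release_cancellable and Switch.remove_hook are
   the Eio 0.1 functions being described. *)
open Eio.Std

let open_managed ~sw path =
  let fd = open_raw path in
  (* Ensure the FD is closed when the switch finishes... *)
  let hook = Switch.on_release_cancellable sw (fun () -> close_raw fd) in
  let close () =
    (* ...but if the user closes it early, the cleanup is no longer
       needed, so remove the hook. *)
    close_raw fd;
    Switch.remove_hook hook
  in
  fd, close
```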

  6. Is it possible/desirable to disentangle the 3 modules? Fibre refers to both Switch.t and Promise.t in its interface, but a Switch.t is defined as a group of Fibres, and a Promise.t is defined as a way to communicate between Fibres - so conceptually they seem somewhat circularly dependent.

Interesting; I hadn’t thought of that. The cycles here are only in the documentation, though, not in the APIs themselves.

I think part of the confusion is that Eio.Fibre doesn’t define an OCaml fibre. The concept of a fibre is built into the language. Rather, Eio.Fibre provides Eio operations for managing OCaml fibres. So Switch’s documentation is referring to the OCaml concept of fibres, rather than the Eio.Fibre API.

Promise only mentions Fibre in an example showing how you might use a promise to do something.

Thanks for the questions - this is very useful!

1 Like

Maybe “Yarn” or “Strand”?
There is also “Warp” which are the longitudinal threads in woven cloth…

2 Likes

Out of interest, why do you say without the monadic part? Lwt’s current monadic interface covers promises which are intended to be fulfilled (or rejected) at some time in the future by a callback running in an event loop. This scheme could (I would have thought) equally well be implemented by having delimited continuations (suspended computations implemented by effects) restarted by a callback running in the event loop. To offload computations onto other cores there is already Lwt_domain.detach.

Delimited continuations do not require monads: they can have effect directly. But reusing Lwt’s monadic interface in this way would provide a type discipline and seem to require little additional cognitive effort for current users of the library. And it would not prevent the development of other libraries such as eio which use a different approach.

Aaah! Thank you for that clarification - this was indeed a source of confusion for me. I am now reading these slides and wish I’d paid more attention during the presentation. Is there a section in the manual talking about fibers (that is already rendered somewhere)?

Another thing I’d like to point out is the Fibre vs Fiber difference in spelling - not sure whether it was a deliberate choice. For what it’s worth, filenames in the compiler source code are fiber.h and fiber.c, and I’ve caught myself typing Fiber already a few times. I have no preference for one spelling over the other, but I would hope that if they refer to the same concept, they have consistent spellings.

1 Like

Is it possible to make the object capability style optional, e.g. part of an eio.cap library built on top of a core eio library? (they could still both live in the same repository to make things easier)
Then users who like/want to use capability style could use eio.cap which doesn’t expose any direct access and you need to provide a capability to each function, and those who don’t could use the slightly lower level eio?

I appreciate that Eio with capabilities may encourage a certain style of writing your programs which may be less error-prone, or at least force you to think about what your functions really need access to, but if you’re trying to integrate this with existing code sooner or later you might find that you need some escape hatches for compatibility and at that point it’ll be difficult to prove that your program is still safe.
However that should be implementable on top of a thin abstraction layer that just wraps Unix / uring / etc., and make it easier to gradually adopt multicore into a project.

E.g. when I attempted to implement a module to protect against file descriptor leaks, I ended up having to implement something similar to Rust’s move semantics - with runtime checks - because otherwise the file descriptors simply could not be used for common things like caches or resource pools, where the file descriptor would outlive its caller. I’m not happy with the resulting API (xen-api-libs-transitional/resources/unixfd.mli at master · xapi-project/xen-api-libs-transitional · GitHub and xen-api-libs-transitional/resources/safe.mli at master · xapi-project/xen-api-libs-transitional · GitHub): it is only safe as long as you follow a certain convention of not storing, or using more than once, the result of ‘dereferencing’ a safe file descriptor, and it is cumbersome to use. The first time someone else wrote a PR against that piece of code, they promptly used the escape hatches in the API to bypass the safety checks, because writing it the correct way was entirely non-obvious and difficult to figure out.
I’d love to explore how to implement such an API on top of EIO, but that’d also require move semantics (I think) to make it practical. However I wouldn’t want to make that pervasively part of the EIO API.

Some of the tension in the API design might come from trying to use OCaml’s type system to prove all sorts of safety properties for our programs. Although the type system is great for proving some properties, perhaps more complicated properties like linearity or capabilities would be better left to static analyzers, and all an API should provide are annotations to be processed by a static analyzer?

5 Likes

Out of interest, why do you say without the monadic part? Lwt’s current monadic interface covers promises which are intended to be fulfilled (or rejected) at some time in the future by a callback running in an event loop. This scheme could (I would have thought) equally well be implemented by having delimited continuations (suspended computations implemented by effects) restarted by a callback running in the event loop. To offload computations onto other cores there is already Lwt_domain.detach.

I didn’t mean to get rid of Lwt’s monads! My point is that there should be no need for a new IO library at all, if we can introduce new effect-based APIs in Lwt. My first thought is, again:

val await : 'a Lwt.t -> 'a
val suspend : (unit -> 'a) -> 'a Lwt.t

With that, you can keep your existing code, keep the informativeness of returning _ Lwt.t for non-blocking code, but also use direct style when convenient.

For example, if you wanted Lwt_list.iter_s but for arrays (or another
structure that provides iter), it’d become quite easy:

let array_iter_s (f: 'a -> unit Lwt.t) (arr: 'a array) : unit Lwt.t =
  suspend @@ fun () ->
  Array.iter (fun x -> await (f x)) arr

Process all events in a queue? same:

let process_all (q:task Queue.t) : unit Lwt.t =
  suspend @@ fun () ->
  while not (Queue.is_empty q) do
    let task = Queue.pop q in
    await (perform_the task)
  done
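For what it’s worth, here is one way these two functions could be sketched with OCaml 5 effect handlers (hypothetical, not part of Lwt; it assumes everything runs inside Lwt’s single-threaded event loop):

```ocaml
(* Hypothetical sketch of [await]/[suspend] on OCaml 5 effects.
   Not part of Lwt; assumes a single-threaded Lwt event loop. *)
open Effect
open Effect.Deep

type _ Effect.t += Await : 'a Lwt.t -> 'a Effect.t

let await p = perform (Await p)

let suspend (f : unit -> 'a) : 'a Lwt.t =
  let t, u = Lwt.wait () in
  match_with f ()
    { retc = (fun v -> Lwt.wakeup u v);
      exnc = (fun e -> Lwt.wakeup_exn u e);
      effc = (fun (type b) (eff : b Effect.t) ->
          match eff with
          | Await p ->
              Some (fun (k : (b, _) continuation) ->
                  (* Resume the suspended computation when [p] resolves,
                     or raise its failure at the [await] point. *)
                  Lwt.on_any p (continue k) (discontinue k))
          | _ -> None) };
  t
```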

Delimited continuations do not require monads: they can have effect directly. But reusing Lwt’s monadic interface in this way would provide a type discipline and seem to require little additional cognitive effort for current users of the library. And it would not prevent the development of other libraries such as eio which use a different approach.

Yep, it’s a gradual improvement that doesn’t reinvent the wheel. I don’t
think we should expect people to port or rewrite their software as 5.0 is
released; OCaml prides itself on its backward compatibility, and
building on existing core libraries such as Lwt is a good way of
achieving that.

In that world, I’d imagine Eio as an experimental IO library with a more
opinionated, experimental style, that demonstrates what the future of
Mirage is.

5 Likes

Indeed, that would be good!

FYI, the plan is for OCaml to eventually have typed effects. This means you get the best of both worlds: the typed protection of monads, but with the ease-of-use of non-monadic code.

1 Like

For the record, I feel that capabilities are given a bad rep. in this thread. (But then the people complaining about it, for example @antron, @lpw25 and @c-cube certainly have more experience writing systems-y code than I do, so I’m sure they are also onto something!)

In functional programming languages, “passing stuff around explicitly” is the default approach. What @talex5 calls the capability style is basically just this. Some other ways of doing things, in particular setting global mutable variables, are considered bad style.

Of course, there are some things that we don’t typically consider global mutable state (for example: how to read the current time) that one may want to use in a fine-grained way from within the functional core of the application. This can add friction, and I understand the worries (Haskell has a cottage industry of “effects done X or Y way”, with a lot of tough ergonomics issues to solve, because they are digging all the way down that rabbit hole).

There are other approaches to doing this, for example:

  • Functors… but most people find that too heavy; maybe a system similar to Backpack for big-functors would have helped (in-passing self-advertisement about old namespace proposals).
  • Global variables (“ambient capabilities”). This may be acceptable if the global variables preserve referential transparency for the rest of the application, to consider on a case-by-case basis. But then, as @talex5 pointed out, it’s easy to build a global-environment approach on top of an explicitly-passing-stuff around approach, more than the other way around.
  • Judicious use of effect handlers, as suggested by @lpw25. It’s exciting indeed to consider this, and it does correspond to the approach that EIO used for the “concurrency API” part. But it sounds a little premature to me to bet on this, when we don’t have dedicated language support for effect handlers right now, and no ETA on effect typing.

The Scala community has done a lot of nice work on capability-passing designs, and over time they grew some language support to make it nicer. True, we don’t have this language support right now, but they didn’t either when they started! I’m not an expert, but I also have the impression that their language support is not so invasive (it’s not a radically different flavour of types like a full-blown effect system, linear types etc.), and it’s still easy to write code in this style without it.

Then @antron makes the fair point that following an object-capability style does not actually provide strong security guarantees. Sure, but most of our programming practices, including many that are dear to the OCaml community, are this way. We don’t have a proof assistant to reason formally about OCaml programs yet; most guarantees are not, strictly speaking, enforced by tooling, but rather honored by conventions that make it easier to respect them and harder to break them. One example among many: it’s possible today to write OCaml code that segfaults (Array.unsafe_get [||] 0 ^ "foo"), yet we typically think of OCaml as a segfault-free language, without having invested in iron-clad tooling to completely rule out some unsafe idioms or track precisely who is transitively depending on them.

All this to say: I feel a little bad for @talex5 reading this discussion, because I think that the opinions around the table are all sensible, but his is given less merit than it deserves. Maybe we should avoid the notion of “Foundational” library and consider EIO as what it currently is, an experimental library that just had its first public release. (And sure, if you can think of easy ways to update Lwt or whatever to support judicious usage of parallelism, please go ahead and work on this as well!)

24 Likes

With my compiler hat on, some things I’m mildly worried about are as follows:

  • Currently the Multicore-enabled compiler only provides low-level primitives for parallelism, and basically nothing for concurrency. (In fact, we do have the Threads module that works surprisingly well with Multicore. It was built for backwards-compatibility only but currently it’s, ironically, a fairly sensible option to write concurrent code in Multicore.)
  • The idea from the start was to be able to write concurrency support (an event loop, etc.) in user land on top of fibers / effect handlers, to experiment with what works best for various problem domains, etc.
  • But then, if everyone does something different, there is a risk of fragmentation of the ecosystem where we end up with 5 different concurrency abstractions that can’t talk to each other, and OCaml beginners have to choose the camp they belong to from day 1. (That wouldn’t happen, right? We totally didn’t do that for cooperative concurrency libraries, build systems, package managers, etc.) Exploring many different approaches is great, but ending up with insular ecosystems is not great.

I think that the EIO people are hoping that their honest effort to come up with a good concurrency story on top of the low-level compiler primitives will win everyone’s heart, people will adapt existing libraries (Lwt, Async, what have you) on top of it or move to their new code, no fragmentation anymore, end of story. And given that there are steady contributions to EIO, with excellent people like @talex5 working on this, coming from highly-respected OCaml shops that have a lot of influence in the ecosystem, for tooling decisions, etc., this sounds like a plausible scenario!

If people are worried that EIO “is not Foundational enough” (as in: we’d like to do something different that is not on top of EIO, but on top of the low-level primitives), do they have plausible proposals for how to avoid ecosystem fragmentation? For example, we could consider some of the following ideas:

  • Hope that the various concurrency libraries will be able to collaborate with each other (integrate their event loops together somehow?). That sounds very hard, and to my knowledge we already didn’t manage to do this in the sequential world – but then people like @antron know much better than I do about this.

  • Design a sort of mid-level concurrency layer that is more high-level than just domains, and expressive and non-opinionated enough that people want to build on top of that, and cooperate through that mid-layer. But what would this be? Isn’t the “concurrency primitives” part of EIO something like that?

  • Your suggestion here.

In theory, we are not in a real hurry to solve this question: OCaml 5.00 is not out yet, it will not be a production-ready release that all projects jump onto from day 1, we can expect a stream of increasingly robust 5.x releases with only some parts of the ecosystem getting their foot wet at first. And it’s not reasonable to expect people to come up with full answers to questions about how to use an implementation that is not released yet! (Note: the Multicore people have a head-start here thanks to their experiments on top of the Multicore runtime; everyone could have participated, but few people had the time (and funding) to do this, it’s great that they did many experiments.)

In practice, there are some basic questions about OCaml programming that are hard to answer right now about OCaml 5.0, because we don’t know what to recommend to users willing to write multicore code. For example, Lazy values cannot be used concurrently, and the reason why is that we don’t know how to block code that forces a thunk that is already being forced by someone else. (We could do this with a domain mutex, but this would block the whole domain, whereas people want something that will only block the “current fibre” and yield to another concurrent computation on the same domain.) This is one example of the Stdlib, but I’m sure there will be many other examples once people start porting their projects to Multicore: whenever the natural answer to a question is “well you should block until X happens”, nobody knows what’s the good way to do this, because we don’t know what concurrency layer the user wants to use – we don’t know what blocking means for our end-users.

So I think that we urgently need some sort of standard for these questions. I don’t know if EIO is the right answer, but I haven’t seen proposals in this thread that sound like better answers. (Of course people were discussing different things, like “what’s the best way to adapt full programs to Multicore”.) My intuition is that if we think hard enough about this, we may find reasonably simple answers that can please everyone, because “how should a library block on an operation” sounds like an easier question than “what’s a good API design for all interactions with the OS?”. Well…

12 Likes

These programming practices are not imposed on us by a potentially foundational library. None of the examples you could easily think of involve potentially globally imposing a programming convention on all users. This type of argument again misses the point that is being made, which is a conjunction:

  • Object capability style is costly,
  • AND none of the benefits it has are as strong as claimed, in an OCaml context,
  • AND none of the benefits it has are strong compared to the benefits of what else we already do, and would have to keep doing,
  • AND a foundational library would try to specifically -impose- this style.

Addressing one conjunct (“branch”) of this conjunction doesn’t address the point and misunderstands the nature of the objection.

Indeed, as I’ve already said,

So a reply of the form

  1. It is already sometimes used.
  2. We sometimes use other things in the same direction (most of which, by the way, are not this costly).

Can you find an example where we are considering widely imposing an artificial syntactic convention, forcing users to partially construct (still inaccurate) proof terms for some property as they write their program? Because one part of the point is that everything else that OCaml does is quite “light” and fits together nicely, and eio with ocap is not light, it is costly.

Current OCaml has awkward syntax mainly for rarely used features or in cases that are widely considered to be limitations of the language. An ocap-based I/O library that isn’t rare would effectively become a limitation of the language (ecosystem).

It seems reasonable to consider EIO experimental. We just need to avoid drifting into this becoming actually foundational.

1 Like

I’m essentially ready to design and implement some kind of “lower level” (than EIO) multicore and effect I/O library, that EIO with ocap could be interpreted over, if desired. Given I worked on Lwt and Luv already, I don’t find such a project personally intimidating. I’ll probably ping people in the coming days, to see what the objections are and get a better idea of what people would want.

8 Likes

Can you find an example where we are considering widely imposing an artificial syntactic convention, forcing users to partially construct (still inaccurate) proof terms for some property as they write their program? Because one part of the point is that everything else that OCaml does is quite “light” and fits together nicely, and eio with ocap is not light, it is costly.

Could we see a concrete example of how “costly” the passing-around-stuff explicitly idioms become when relying on EIO? Do people here have a code snippet from an existing project that shows maintenance problems when using this approach at scale?

3 Likes

This is pretty much how it works. For example, Eio_linux.Low_level wraps the Linux io_uring API (no objects here), and likewise for Eio_luv.Low_level, etc. The main Eio API is a cross-platform abstraction over these low-level interfaces.

What to use here is less clear. For example, the original versions of Unix (see unix-6th - 2) provided an open call, like this:

int open(const char *path, int oflag, ...);

However, POSIX later added openat:

int openat(int fd, const char *path, int oflag, ...);

The purpose of the openat () function is to enable opening files in directories other than the current working directory without exposure to race conditions.
[ source ]

Lwt wrapped the original API, e.g.

Lwt_io.(with_file ~mode:input) "./README.md" (fun x -> ...)

Whereas Eio wraps the new one:

Eio.Dir.with_open_in cwd "README.md" (fun flow -> ...)

There is no “unopinionated” choice here. The choice is the old API (which will work on all systems, but prevents using features of newer ones) or the new one (which requires some best-efforts hacks to run on old systems).

On Linux, io_uring does not provide open, but you can pass -100 as the FD argument to openat to get the old behaviour. On FreeBSD, access to the old open call is blocked once you’ve called cap_enter, and only the new API will work.

Incidentally, the Eio.Dir API is named after Rust’s cap_std::fs::Dir, although at the moment it doesn’t have many functions, and I haven’t looked much at that API. The corresponding Mirage API is a bit different, and treats a path as a list of strings rather than as a single string as POSIX and Eio do.

If I want to capture the current time, I don’t want anything more complicated than Mtime_clock.now() or Ptime_clock.now() (even Eio.Clock.now ~clock:Global.clock () seems quite annoying if you add the initialization bit). I certainly don’t want the rest of the code to know I’m calling that (it might just be for tracing or debugging or logging - edit: @talex5 addressed that, sorry).

I think I’ll just add Eio_unix.sleep to match Lwt_unix.sleep. Adding sleeps to help debug race conditions seems like a good use for this, and we can think about this more later.

We’re writing OCaml, not Haskell, and side effects and impure functions are normal.

Yes, the Haskell choice of completely-pure or can-do-anything doesn’t seem that useful. I think “capability-safe” adds a nice middle ground: you can do side-effects, but bounded by the inputs you’re given.
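For example, a helper that can observe time only through the clock it is handed (time_it is a made-up name; Eio.Time.now is assumed from the Eio 0.1 API):

```ocaml
(* Sketch of "side-effects bounded by your inputs": this function can
   only read the clock it is given. [time_it] is hypothetical;
   Eio.Time.now is assumed to exist as in Eio 0.1. *)
open Eio.Std

let time_it ~clock fn =
  let t0 = Eio.Time.now clock in
  let result = fn () in
  traceln "took %.3fs" (Eio.Time.now clock -. t0);
  result
```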

Another question I’ve had for quite a while, is: why do we need a new library, instead of working effects into Lwt in a retrocompatible way? Something to get val await: 'a Lwt.t -> 'a would be incredibly helpful, and then more and more APIs could provide effect-based versions that eschew promises.

Yes, that’s what Eio_lwt.Promise.await_lwt does.

At the same time, that would allow the whole ecosystem of Lwt users to gradually migrate to effects without having to go through the pains of show-stopping migrations (remember python3?).

Yes. The Lwt to Eio migration guide walks you through the process of converting an Lwt program to Eio, bit by bit, keeping the whole program working during the transition. You can call Eio from Lwt and Lwt from Eio.

Incidentally, if you want your ambient access to sleep now, you can do it like this:

Lwt_eio.Promise.await_lwt (Lwt_unix.sleep 1.0);

I talked about Globals module before, and that’s how it works. You just start your program with:

Eio_main.run @@ fun env ->
Lwt_eio.with_event_loop ~clock:env#clock @@ fun () ->
  ...

Lwt_eio creates a top-level switch and within it sets the Lwt engine (a global), which also holds the clock. Then any Lwt code can call Lwt_unix.sleep as normal without needing a clock argument. The global switch also allows Lwt.async to bypass the normal structured concurrency:

open Lwt.Infix
open Eio.Std

let some_fn () =
  Lwt.async (fun () ->
      Lwt_unix.sleep 1.0 >|= fun () ->
      traceln "Surprise - I'm still running!"
  )

let () =
  Eio_main.run @@ fun env ->
  Lwt_eio.with_event_loop ~clock:env#clock @@ fun () ->
  some_fn ();
  traceln "Structured concurrency means everything is finished now!";
  Eio.Time.sleep env#clock 2.0
$ dune exec -- ./test.exe
+Structured concurrency means everything is finished now!
+Surprise - I'm still running!

This is why I think Eio is “unopinionated” - we can bypass the capabilities and the structured concurrency and run existing Lwt code within it unmodified.

8 Likes

I’m kind of repeating myself, but this is what I think we should aim for: composable primitives.

At that point in time, I will be suspicious of anything that wants to take over my main loop or invert control in invasive ways. I want a certain degree of freedom in how my application is structured, if only to experiment with the new ways effects could give me.

Also, one thing that I think is not stressed enough is that most code out there should actually not use concrete effects or effectful primitives at all.

I have been waiting for effects for the past 7 years precisely so that I don’t have to use them. They finally allow the separation of concerns I was seeking: most library codec code should not be concerned with how you move data in and out; that’s for the application to decide, using any kind of effects it finds desirable.

In that respect, developing and offering simple standard abstractions in the Stdlib, like I mentioned in this message - abstractions that are effect-friendly without mentioning any effects - is more important to me than deciding on concrete effects.

Basically I think there’s the need for infrastructure and patterns so that effects can be used in a compositional way and that using something like eio or not rather becomes a matter of application choice.

7 Likes