Hi folks, just released Riot v0.0.5 on opam with some quality-of-life improvements and a bunch of bug fixes:
You can now communicate with a process by name by registering its pid. Names are currently strings – this helps in situations where you need a globally known process but can’t thread the actual process id everywhere (see the sketch below these notes).
Timers should be working correctly now
I/O should behave more reasonably when reading/writing to closed fds – no more pesky SIGPIPEs killing your app without reason.
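Roughly, registration by name looks something like this (a minimal sketch – the exact names register and send_by_name and their signatures are from memory and may differ slightly):

open Riot

(* messages are constructors added to Riot's extensible Message.t type *)
type Message.t += Hello

let () =
  Riot.run @@ fun () ->
  let pid =
    spawn (fun () ->
        match receive () with
        | Hello -> print_endline "hello by name!"
        | _ -> ())
  in
  (* register the pid under a globally known string name... *)
  register "greeter" pid;
  (* ...and message it from anywhere, without threading the pid around *)
  send_by_name ~name:"greeter" Hello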
Riot is an actor-model multi-core scheduler for OCaml 5. It brings Erlang-style concurrency to the language, where lightweight processes communicate via message-passing.
Have you thought about adding support for Custom Runtime Events into Riot? That would enable a nice tracing capability with OCaml 5.x and it could integrate with tools like meio to show the live actor / supervision tree.
I have now! This looks great – I need to dig deeper into how meio expects these events/traces to look. Thanks for the pointer! Also, any guidance is appreciated.
What I was considering is building an observer-cli clone at some point, since there’s more info in Riot about each process (like mailbox sizes, current suspension status, whether a process has any timers associated with it, etc).
I’m intrigued/interested in the new runtime events, but I’ve found them a bit confusing so far. My main concerns are:
it looks like whatever listens to them must poll in a tight loop, with noticeable CPU time consumption?
it also seems like some events might be lost if the ring buffer is full. There are cases where that’s fine, but for a general-purpose tracing/logging facility I think there are also cases where it’s not.
For context I mostly rely on ocaml-trace for these needs, and I think it could have a runtime-events backend. But my attempt at hooking runtime events to get GC info was a failure.
I’m curious what you were trying to do with GC info which ended up in failure.
Relatedly, have you looked at Olly? GitHub - tarides/runtime_events_tools. It provides a good template to start from if you are hacking your own thing with runtime events.
I was trying to build spans (with timestamps) for GC entry/GC exit zones based on the corresponding events, and ended up with spans that would not close, spans that would overlap with other spans, etc. I don’t have the code at hand though. The docs could certainly use a few examples of how to collect events for “start of major GC”, “end of major GC”, and counters, for example!
edit: I found the main issue: timestamps. Runtime event timestamps are opaque and there’s no way to correlate them with anything else (e.g. Mtime.t). This means I can’t integrate GC major begin/end into a wider tracing system, because I can’t adjust the timestamps.
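For reference, the shape of what I was attempting looked roughly like this (a from-memory sketch against the stock Runtime_events API, not my actual code – it just prints EV_MAJOR begin/end instead of emitting proper spans):

(* needs (libraries runtime_events unix) in dune *)
let () =
  Runtime_events.start ();
  let cursor = Runtime_events.create_cursor None in
  let on_phase label _domain ts phase =
    match phase with
    | Runtime_events.EV_MAJOR ->
        (* the timestamp is opaque: to_int64 gives a raw nanosecond count on an
           unspecified clock, which is the correlation problem mentioned above *)
        Printf.printf "major GC %s at %Ld\n%!" label
          (Runtime_events.Timestamp.to_int64 ts)
    | _ -> ()
  in
  let callbacks =
    Runtime_events.Callbacks.create
      ~runtime_begin:(on_phase "begin")
      ~runtime_end:(on_phase "end")
      ()
  in
  (* the consumer has to poll; a loop like this is where the CPU cost shows up *)
  while true do
    ignore (Runtime_events.read_poll cursor callbacks None);
    Unix.sleepf 0.05
  done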
This is a great alternative to having to wait on the great unification of async/lwt/eio (not that it’s ever going to happen nicely). With a scheduler in place, I realized that I can now write synchronous, blocking code in each process, à la Go and Elixir.
Just worked my way through the 5 provided examples and Riot seems super nifty! Eagerly looking forward to the next few example sections!
PS. I saw this in the 4th example and I’m curious how this works:
In fact, we are strategically placing yields all through the standard library to make it as seamless as possible to write Riot programs without thinking about scheduler starvation.
Thank you! This is pure motivation for me to write them, haha – I appreciate the kind words.
The Riot scheduler relies on cooperative scheduling, which means that processes need to suspend themselves in one way or another. To suspend itself, a process can call Riot.yield (). This function is part of Riot and performs an algebraic effect that stops the function, saves the current state of the process, and starts running another process instead.
Eventually, this other process will suspend too, and the first one will run again.
The classic example is to make a process that loops infinitely:
let rec loop () = loop () in
let pid = spawn (fun () -> loop ()) in
(* ... rest of the program ... *)
Unfortunately this process also starves a scheduler. The loop function has no room for suspension. We can fix it with a call to yield ():
let rec loop () =
  yield (); (* <- this line is new! *)
  loop ()
in
let pid = spawn (fun () -> loop ()) in
(* ... rest of the program ... *)
Done! Now this process runs forever, but also leaves room for other processes to execute.
The problem here is that sprinkling yield () through application code just sucks – so instead of forcing you to do that, we include calls to yield () in strategic places.
The most natural place to suspend is actually the receive () call: if your mailbox is empty, you’ll just be suspended until a message arrives.
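A tiny sketch of that (simplified, and the exact signatures may not match the released API word for word): an echo process that sits suspended in receive () until someone messages it, and a parent that in turn suspends waiting for the reply:

open Riot

type Message.t += Hello of Pid.t | Reply

let () =
  Riot.run @@ fun () ->
  let echo =
    spawn (fun () ->
        (* empty mailbox: this receive suspends the process, so the
           scheduler is free to run other processes in the meantime *)
        match receive () with
        | Hello sender -> send sender Reply
        | _ -> ())
  in
  send echo (Hello (self ()));
  (* same thing here: we suspend until the reply shows up *)
  match receive () with
  | Reply -> print_endline "echo replied!"
  | _ -> ()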
But there are other examples. If you are going to read from the network with Net.Socket.receive, that operation would typically block your application, so Net.Socket.receive actually uses an effect to suspend the process until the socket is ready to be read.
In a similar way, future operations like File.read will be “blocking” from the point of view of the current process, but they will be suspension points under the hood.
Can you try putting a deliberate error in that example, like a syntax error, then running the build command? Just to check if it’s actually building and succeeding with no output.
Yes, it should be in _build/default/... somewhere. You don’t need to know its path, you can just run dune exec ./main.exe to get dune to run it for you.
Oh, got it, I thought the executable would end up in riot/examples/1-hello-world and wasn’t seeing it. I found it in: riot/_build/default/examples/1-hello-world/main.exe