Tutorial: Roguelike with effect handlers

Nah it would scale just fine ^^ . The endings are sketchy because the game state isn’t given the proper attention it deserves. I wish I had explained how to remove the impure mutations on the world by using an effect End_of_turn : world -> world and other variations to help with the characters’ public state.
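For readers following along: the thread predates OCaml 5’s final API, but here is a minimal sketch (using today’s Effect module, with a made-up `world` record) of how an `End_of_turn : world -> world` effect can thread the world functionally through an AI, with the handler standing in for the scheduler:

```ocaml
open Effect
open Effect.Deep

(* Hypothetical world state for illustration only *)
type world = { turn : int; log : string list }

(* The AI hands its proposed world to the scheduler at end of turn and
   receives the (possibly externally modified) world for the next turn. *)
type _ Effect.t += End_of_turn : world -> world Effect.t

(* Run one AI functionally: the handler threads the world, no mutation. *)
let run_ai ai world =
  match_with ai world
    { retc = (fun w -> w);
      exnc = raise;
      effc = (fun (type a) (eff : a Effect.t) ->
        match eff with
        | End_of_turn w ->
          Some (fun (k : (a, _) continuation) ->
            (* the scheduler could merge other entities' changes here *)
            let w' = { w with turn = w.turn + 1 } in
            continue k w')
        | _ -> None) }

let ai w =
  let w = perform (End_of_turn { w with log = "moved" :: w.log }) in
  perform (End_of_turn { w with log = "attacked" :: w.log })

let final = run_ai ai { turn = 0; log = [] }
```

Nothing in the AI mutates the world; the handler alone decides what the next world looks like, which is the point.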

Anyway you would have to solve the same issues with The Elm Architecture, because TEA is actually very similar if you ignore the obfuscation by defunctionalization. In your flappy game, you have this “big match” function on all the events:

type event =
  [ `WingFlap ...
  | `Frame ...
  | `PauseToggle
  | ... ]

val update : event -> (model -> model)

This update is giving an interpretation of event as a function to update the model… But since that’s the only use of event, you could re-functionalize your code:

type event = model -> model

let update event model = event model (* no more match! *)

let wing_flap (player, direction) model =
  when_not_paused model @@
    ... { model with birds; feathers }

let handle_keypresses ev = match ev##.keyCode with
  | 87 (*w*) -> GameEvent.sink_eupd (wing_flap (0, `Up)) 

This gets rid of the big match on all the events. If you keep pushing in that direction to support the sequencing required by character’s AI, you can quickly end up with a monad for event – at which point it’s pretty much the same as effect handlers :slight_smile:
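To illustrate that last step, here is a minimal sketch of such an event monad, assuming a made-up `model` record: `update` stays trivial and the sequencing the AI needs comes from `bind`:

```ocaml
(* Pushing `type event = model -> model` further: once sequencing is
   needed, event naturally becomes a state monad over the model. *)
type model = { score : int; paused : bool }

module Event = struct
  type 'a t = model -> 'a * model
  let return x m = (x, m)
  let bind (e : 'a t) (f : 'a -> 'b t) : 'b t =
    fun m -> let (x, m') = e m in f x m'
  let get : model t = fun m -> (m, m)
  let modify f : unit t = fun m -> ((), f m)
  let ( let* ) = bind
end

open Event

(* A two-step "AI" expressed as one sequenced event *)
let step : unit t =
  let* m = get in
  if m.paused then return ()
  else modify (fun m -> { m with score = m.score + 1 })

let ((), final) = step { score = 0; paused = false }
```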

In the roguelike tutorial, I mentioned the explicit state machines solution, which would yield a similar outcome to the defunctionalization of TEA. I guess it would be interesting to implement the elephant and spider behavior with TEA if you are not convinced!

(Thanks for sharing your game btw!)


This doesn’t address the specific critique I mentioned. I’ll try to describe the problem in other terms. What seems elegant in your code to begin with is that you jump back/forward between the scheduler and the current branch in the decision-tree of the AI.

The problem that arises is that you suddenly want to let other entities influence your state - and as your decision-tree is dependent on your state, you suddenly need to be able to jump back to the root of the decision-tree under certain conditions. So the earlier advantage of continuing the AI from where it left off now raises the complexity of the logic. What would your solution be to this? I’m guessing you would need to throw something like a Reset_decision_tree exception/effect, caught at the top of the AI loop.

Your suggestions for my game’s structure don’t seem advantageous to me. What I like about the current structure, which seems lost after your suggestions, is:

  • you know where to look to see all updates to the game-model
  • a function to update an entity is never given the whole model, as I want to be explicit about the dependencies of all functions
  • the game-logic is fully separated from the imperative IO

I believe the most ergonomic way is to pass the event affecting the entity via continue, in addition to the remaining sleep time or None if nothing interrupted the sleep. We then also need an event effect that resumes another entity immediately i.e.

effect Sleep : int -> (event * int) option
effect Event : (cell * event) -> unit

If, for example, the elephant is sleeping for 20 turns but is hit after 15, it would be resumed immediately with Some (HitBy Spider, 5), which it could then handle by:

  • Ignoring if irrelevant
  • Handling and going back to sleep
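A toy sketch of this idea in OCaml 5’s Effect module (the scheduler, event type, and entity here are all simplified stand-ins; a real scheduler would keep a queue of continuations rather than interrupting a single entity):

```ocaml
open Effect
open Effect.Deep

type event = Hit_by of string

(* Sleep n suspends the entity; it is resumed with Some (ev, remaining)
   if interrupted before the n turns ran out, or None otherwise. *)
type _ Effect.t += Sleep : int -> (event * int) option Effect.t

(* Toy scheduler: interrupts the sleep after [hit_after] turns with [ev]. *)
let run entity ~hit_after ~ev =
  try_with entity ()
    { effc = fun (type a) (eff : a Effect.t) ->
        match eff with
        | Sleep n ->
          Some (fun (k : (a, _) continuation) ->
            if hit_after < n
            then continue k (Some (ev, n - hit_after)) (* interrupted *)
            else continue k None)                      (* slept through *)
        | _ -> None }

let log = ref []

let elephant () =
  match perform (Sleep 20) with
  | Some (Hit_by who, remaining) ->
    log := Printf.sprintf "hit by %s, %d turns left" who remaining :: !log;
    (* handle it, then go back to sleep for the rest of the turns *)
    ignore (perform (Sleep remaining))
  | None -> log := "woke up normally" :: !log

let () = run elephant ~hit_after:15 ~ev:(Hit_by "spider")
```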

That would also allow chain reactions (if intended). For example a knockback event could result in:
Elephant acts and knocks Camel back. Camel wakes up and attempts to move in the intended direction, but is blocked by a spider. It then performs a knockback event on the spider (returning to the scheduler, which continues the spider with the appropriate event). The spider moves out of the way and sleeps again. Camel has succeeded with its event and moves too, etc.

We could also discontinue the Sleep continuation with an exception. The result is basically identical to the Reset_decision_tree exception.

This is what I mean - you would need to handle this after each performance of an effect. And as there is choice here - depending on the branch of the decision tree - you get more complex code than necessary. The alternative is that you go through your decision tree, based on your updated state, from the root on each frame - then there is only one place where you choose which decisions to prioritize.

I don’t think there is much difference there because this is inherent, unavoidable complexity. You always need a function akin to f : internal_state -> event -> action * internal_state, because at every point you need to decide how you react to external events. It shouldn’t be that complex because you just have to wrap each sleep in an appropriate handler (e.g. vulnerable (sleep 10), burrowing (sleep 1), defending (sleep 1), etc.).
Whether you always start at the root of your decision tree and traverse it according to the internal state and the event or you continue at the previous node, handle the event and then continue boils down to personal preference.

I think:

  • Always starting at the root prefers shallow decision trees and complex event handlers as we must explicitly encode in internal state at what node of the decision tree we were but events are free to manipulate the state in any way they want.
  • Continuing from the previous node prefers complex decision trees and simple event handlers (specifically: event handlers that do not interrupt the behavior in complex ways by e.g. continuing along a different path) as jumping to a sibling node in the decision tree is difficult.

I believe in general the second approach is preferable because we don’t have to manage a large internal state encoding all the necessary information and instead can handle each subbehavior in its own function.
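For contrast, the root-first style from the first bullet can be sketched as a single explicit step function of type f : internal_state -> event -> action * internal_state (all names here are illustrative): the decision-tree position lives entirely in internal_state, and events are free to jump anywhere.

```ocaml
(* Root-first style: the position in the decision tree is encoded in
   internal_state, and one step function covers every (state, event) pair. *)
type internal_state = Sleeping of int | Fleeing | Grazing
type event = Tick | Hit
type action = Wait | Run | Eat

let step : internal_state -> event -> action * internal_state = fun s ev ->
  match s, ev with
  | Sleeping 0, Tick -> Eat, Grazing
  | Sleeping n, Tick -> Wait, Sleeping (n - 1)
  | Sleeping _, Hit -> Run, Fleeing   (* an event may jump anywhere *)
  | Fleeing, Tick -> Run, Grazing
  | Fleeing, Hit -> Run, Fleeing
  | Grazing, Tick -> Eat, Grazing
  | Grazing, Hit -> Run, Fleeing

(* Driven from the root each frame: *)
let _, s1 = step (Sleeping 2) Tick  (* decrement the sleep counter *)
let _, s2 = step s1 Hit             (* interrupted: jump to Fleeing *)
```

The exhaustive match guarantees every transition is considered, at the cost of the state type growing with every subbehavior.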


I would need to test the different structures in practice to be sure, but it seems like several game-features would become more complex to implement via the effect-based one, as the game-state is distributed between the decision-tree and a game-state-value of the entity. Also, as the state of the decision-tree is represented via continuations, it can’t be manipulated directly as a result of incoming events.

Another observation: it seems like there is a tendency of the effects-based structure to distribute state and handling to separate places (like OO), whereas FP commonly handles state-transitions in the same place.

I’m currently experimenting with a UI framework using effect handlers and have made similar observations. Some features become more complex (in my case modifying the current state based on certain, rare events) but it also makes most of the code easier to reason about.

The structure encouraged by effects seems to be very similar to the actor model of Erlang with a lot of objects/actors/threads communicating via effects. While “resumable exceptions” are certainly an interesting feature, it seems that, whenever an effect handler invokes continuations of previous effects, the real power of effects is seen. For example a scheduler managing multiple threads, actors communicating via (a-)synchronous message passing etc. In some sense, resuming continuations later is all about distributing state and handling in a functional way I guess.


Hm so, I don’t think the two styles are that different, as there is a mechanical way of rewriting one into the other and vice-versa [1]. But yes they do favor different situations: Effect handlers are very good at sequencing “do this and then do that”, while TEA is all about “how do you react to this or that”. The good news is that you don’t have to pick a side, you can mix and match the two depending on the situation. For the roguelike, it made sense to use sequencing for the AI [2] – but when handling external events after an End_of_turn, you would naturally opt for a TEA update on the character state.

Anyway, state management in FP is always going to be a variation on model -> model. Whether it’s better to “keep everything in the same place” or “distribute in separate places” is not about FP vs OO, it only depends on what’s important to understand the logic. I mean, you are going to split your code as soon as it doesn’t fit in one file/function, but you are going to choose which stuff to keep together or separate. (For example in flappy, it could make sense to separate the menu/pause logic and the gameplay logic… but that refactoring wouldn’t make it less FP)

Ah also, closures already give you “internal private state”, effect handlers don’t bring anything new here. This internal state is not a bad thing if it gives you confidence that no one else is touching it (just like abstract types), but yes you should also have “public state” in your model for the rest :slight_smile:
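A minimal illustration of that point, with no effects involved at all: the counter below keeps private state no one else can touch, while exposing only a controlled interface.

```ocaml
(* Closures already provide "internal private state". *)
let make_counter () =
  let n = ref 0 in          (* private: invisible outside this closure *)
  fun () -> incr n; !n      (* public interface: unit -> int *)

let tick = make_counter ()
let a = tick ()
let b = tick ()
```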

  1. You don’t even need to rewrite your code to integrate it with the other style: if you give me a TEA framework, I can still use it to write my code with the effect handlers style. And if I gave you my effect-based scheduler, you could also do an inversion of control and write your code as TEA recommends. ↩︎

  2. Keep in mind that my example AIs for the roguelike are tailor-made for demonstrating effect handlers! ↩︎


I don’t think that argument is relevant - we are talking about the complexity as seen from the programmer - who’s not just mechanical.

Yes ofc. you can, but what I bring up is how practical it is to extend the game logic once you have committed yourself to a certain structure. Specifically, I comment on the scaling of complexity when representing part of the game state as continuations instead of as data. The reason I comment on this is that the decision tree as continuations was essential to how you elegantly factored the game loop into a scheduler.

This doesn’t have anything to do with it. As long as the auxiliary functions are called from the same place, you know where to look for the code updating the state. With the effect-based structure it seems like it becomes much less obvious who’s updating what state when.

I would not have chosen to represent gamestate as closures.

Would be interesting to see examples of this (including the UI framework) (:

I think we have a disagreement on closures because we are looking at them from a different point of view: for me they are the solution to managing the scaling of complexity, but for you they increase it? I’m speculating, but I’m looking at the code complexity (where implicit closures/continuations help) while I believe you are looking at the datatypes? (where seeing a closure is a red flag: “this could do anything, good luck figuring it out!”)

To reuse the flappy example:

module Actions : sig
  type event
  val update : event -> model -> model
  val wing_flap : direction -> event
  val toggle_pause : event
end = struct ... end

You already said that you prefer to implement this module with type event = Wing_flap of direction | Toggle_pause, a complex “big match” update and trivial constructors for the operations.
While for me it’s much more direct to define type event = model -> model, because then update is trivial and all my update code is organized in a standard way by OCaml’s built-in let. The explicit sum type would add an indirection in that logic[1], so I would only do that extra work if I needed to serialize those values (which is a valid point for not having closures in your game state!)
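A sketch of how the closure-based implementation of that signature could look (the model fields here are invented for the example):

```ocaml
type model = { altitude : int; paused : bool }
type direction = [ `Up | `Down ]

module Actions : sig
  type event
  val update : event -> model -> model
  val wing_flap : direction -> event
  val toggle_pause : event
end = struct
  (* Events are just model updates; update is the identity application. *)
  type event = model -> model
  let update e m = e m
  let wing_flap dir m =
    if m.paused then m
    else
      { m with
        altitude = m.altitude + (match dir with `Up -> 1 | `Down -> -1) }
  let toggle_pause m = { m with paused = not m.paused }
end

let m = Actions.(update (wing_flap `Up)) { altitude = 0; paused = false }
```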

Anyway so to answer your question, my solution to handling the complexity of closures is that the model should contain only public stuff: then it’s much less dangerous to have arbitrary closures since they can only do so much. But I get to hide the private state in the closures, so they can still be smart, and I can add new closures/perform at arbitrary points without much effort (which is nice for exploratory programming, a big part of game dev.)

Without hidden internal state, the model must contain both the public and private state… and from my perspective, this is unmanageable: it’s less clear what is really public with all the noisy private state in there. It’s more fragile, since adding/removing private state for some operation requires a model type update which may break other functions. In fact in Elm, they grossly cheat when it comes to HTML widgets as they are allowed to have internal private state that isn’t reflected in your model (but it’s a convenience hack, so this feature isn’t available to Elm programmers).

Either way, there is always going to be a magical convention that helps structure our understanding of the code. With enough practice in one style, we learn to ignore the organizational aspects that don’t contribute to the task at hand.

  1. The code is the same, I just feel like I’m doing the compiler work if I can’t use closures! ↩︎

Sorry about the late reply, I was busy actually verifying that my concept works out. Thankfully it does :smile:

The UI framework is inspired by Concur, which means that every widget listens for some set of events and suspends computation until one of these events occurs. Once it does, it continues execution until it encounters the next await, at which point it suspends once more. Once a widget has fulfilled its purpose it terminates with some return value (e.g. text input is confirmed with enter → return with a string).
Complex UIs are then built by composing simpler widgets. A more detailed explanation can be found in the link above.

I’ve implemented this concept using an await function that takes a list of triggers and a handler for each possible event:

effect Await : Event.t list -> Event.t
let await triggers handler =
  handler (EffectHandlers.perform (Await triggers))

let rec check_box checked  =
  (* display check box *)
  await [Mouse_press; Key_press] (function
  | Mouse_press ->
    print_endline "I've been (un-)checked!";
    check_box (not checked)
  | Key_press -> (* Terminate if any key is pressed *) checked)

Every widget can then be implemented as a function which displays the widget and performs an Await triggers which is resumed by passing an event from triggers, for example the check box above.

The most complex widget I’ve implemented so far is a single line text input. It can be clicked or selected with tab. Moving the mouse while holding the button down changes the selection. As an automaton:
(automaton diagram omitted)
Obviously, this is not a directed acyclic graph and therefore not a perfect fit for the implicit state stored in the continuation. Specifically, Pressed has an edge to one of its multiple parents. We can extract the Pressed state into its own function and therefore avoid this issue by ‘duplicating’ this state. Now Pressed no longer has multiple parents:
(automaton diagram omitted)
Some cycles remain and we can’t remove them because they are essential to the functionality. Instead we throw an exception Repeat that returns us to a parent node (explicitly shown for Focused → Pressed → Released → Focused). To do that we modify await:

let rec await triggers handler =
  try handler (EffectHandlers.perform (Await triggers)) with
  | Repeat -> await triggers handler

In the end this results in this main method for the text input, with only minor simplifications:

method execute =
  (* Represent the Pressed state.
     We await the Mouse_release and handle Mouse_motion while we wait. *)
  let pressed (x, _) =
    selection <- Point x;
    await [`Mouse_release; `Mouse_motion] @@ function
    | `Mouse_release (_, LMB) ->
      () (* Released: leave the Pressed state *)
    | `Mouse_motion (x, _) ->
      self#select x;
      raise Repeat (* This restarts the await function *)
    | _ ->
      raise Repeat
  in
  (* We start in the Unfocused state *)
  await [`Mouse_press; `Key_press] @@ (function
    | `Mouse_press (pos, LMB) ->
      (* We have registered the press, but only when it is released
         will we be focused. *)
      pressed pos
    | `Key_press Tab ->
      selection <- Area (0, List.length keys)
    | _ -> raise Repeat);
  (* We move into the Focused state *)
  await [`Codepoint; `Key_press; `Mouse_press] @@ (function
    | `Key_press Tab | `Key_press Return ->
      () (* The only path without raising Repeat.
            Therefore we only leave this await when a tab or return occurs *)
    | `Mouse_press (pos, LMB) ->
      pressed pos;
      raise Repeat
    | `Key_press c ->
      self#insert c;
      raise Repeat
    | _ -> raise Repeat)
  (* We have reached the finished state. We can now return the entered text. *)

I think that this method captures the automaton above quite nicely and can be relatively easily understood (hopefully even when one is unfamiliar with the framework and accepts that some magic is happening in the background (: ). Implementing automatons in terms of effect handlers seems to work quite well, at least for games and UIs. What these automatons have in common is that they can be thought of as flows, starting at some state and ending at one of multiple final states and only have few edges that don’t fit this scheme, turning them into ‘directed almost acyclic graphs’.

There is obviously a lot more necessary for a UI framework (e.g. resizing the window/widgets, delegating the events to the correct widget, composing widgets, drawing on the screen etc.) and I plan to write about it at some point in the future. But for that I will first need to actually solve these problems as right now their implementation is quite barebones. The code can be found here for those interested (still very early in development!): GitHub - Willenbrink/bogue: GUI library for ocaml based on SDL2


If you want to be able to animate several widgets at the same time - do you intend to handle concurrency in the widget-scheduler or in a wrapping concurrency-scheduler? Would be nice to see composition of schedulers.

Though maybe one wants to handle concurrency in the widget-scheduler in the end, e.g. to control render-framerates of widgets based on some heuristic. Could this be implemented by handling effects intended for the wrapping scheduler, and filtering them before reperforming them?

But I guess that you often want a widget to continue existing, and return intermediate values too?

Very much like FRP, though you can use fixpoint combinators to make recursion over the whole FRP-graph. Do you avoid cycles to let the code be simple, or are there other reasons?

I’ve not given much thought to animations yet, but the simplest way would be to add a Clock or FrameRate event that is triggered at a fixed interval (basically an FRP signal). Then a simple delay could be:

let duration = ref 100 in
await [`Clock] @@ function
| `Clock delta_t ->
  duration := !duration - delta_t;
  if !duration > 0
  then raise Repeat
  else ()

Regarding the scheduler, currently there is no one singular scheduler. Each widget (except the root widget) is contained in another widget. This parent widget is then responsible for scheduling+drawing its children.
In the layout window (row [button; label]), the awaits of button and label are handled by row, and their continuations are also stored there. row then performs its own await with a union of all the awaited events of its children. That gives row a lot of fine-grained control over how it delegates the events (e.g. redirecting the click events on both the button and the label to the button, thereby allowing one to click on the label and also affect the button).

If I understand you correctly, this is handled by the distributed schedulers.

Yes, yes I do. Currently I can’t get that to work unfortunately. Ideally we would solve this by simply extending the Await effect: effect Await : Event.t list * 'a -> Event.t. This has the unfortunate consequence that (perhaps due to the lack of an effect system?) the parent cannot determine that the child performed an Await with the correct type. In theory it should be possible, as every widget is represented as a parametric object and the children and the parent have an identical type:

class virtual ['a] widget = object
  (* An effect system should determine that execute only raises Awaits with type 'a *)
  method virtual execute : 'a
end

class ['a] row = object
  val child : 'a widget (* For simplicity only one child *)

  method execute =
    try child#execute with
    | effect Await (events, state) k -> ... (* state has a locally abstract type and NOT 'a *)

  method execute_alt =
    (* Does not typecheck as 'a is not bound (although it should be by the
       surrounding class ['a] row). I might miss an obvious solution here. *)
    let effect Await_alt : Event.t list * 'a -> Event.t in
    let await x = perform (Await_alt x) in
    try child#execute_alt await with
    | effect Await_alt (events, state) k -> ...
end

Perhaps objects were not the right choice and I should have used first-class modules instead. Although objects seem to perfectly fit the needs of a GUI.

I’ve never really looked into FRP and am therefore unfamiliar with the fixpoint combinators. I also don’t really see the parallel here to FRP. The window emits events on user input and a signal representing frame draws / elapsed time and the widgets react to events. This is similar to FRP. But what I was talking about there is the implicit state stored in the continuation. Can you elaborate? As I understand it:

await [A; C] @@ function
| A -> (* state A *)
  begin await [B; C] @@ function
  | B -> ... (* state B *)
  | C ->
    (* How would I move from here to state C below? That would basically be
       a goto. My solution is to duplicate state C by extracting it into a
       function called both here and below. *)
    handle_state_C ()
    (* Alternatively we can do that with an exception: raise State_C *)
  end
| C | exception State_C -> handle_state_C ()

Regarding the parametric await, would this work? (I’m not sure that you can do this directly with the ['a] class type variables)

class ['a] row = object
  val child : 'a widget
  method execute_alt _ =
    (fun (type a) (child : a widget) ->
      let effect Await_alt : Event.t list * a -> Event.t in
      let await x = perform (Await_alt x) in
      try child#execute_alt await with
      | effect Await_alt (events, state) k -> ...)
      child
end

Thanks a lot, that does work. Although the typing with all these continuations, objects, modules etc. is making my head spin. Especially the continuations ^^

What I meant was that I would more commonly want to control FPS from the outside, and not from within a widget. My idea was that the concurrency-yields could be filtered by the wrapping handler, thereby limiting the FPS of the inner widget.

Sorry, my parallel was not a direct one, and on further thought might not be too interesting.

Yes, this is actually a good example of one of the reasons why I prefer to represent state as data. In FRP I usually implement these small “state-machines” as simple E.folds, which accumulate the current state over a stream of events.

The simplicity and power of this, mixed with the guarantees of pattern-matching, makes for a nice design pattern. The pattern-match over the tuple of the accumulated state and the input event guarantees that I’ve taken every state-transition into consideration.
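That E.fold pattern can be sketched with a plain List.fold_left (toy state and event types; a real FRP library would fold over a live event stream instead of a list):

```ocaml
(* Accumulate the current state over a stream of events; the match over
   (state, event) makes every transition explicit. *)
type state = Idle | Pressed | Focused
type event = Mouse_down | Mouse_up | Key_press

let transition state event =
  match state, event with
  | Idle, Mouse_down -> Pressed
  | Pressed, Mouse_up -> Focused
  | Focused, Key_press -> Idle
  | s, _ -> s  (* every other combination: stay put *)

let final = List.fold_left transition Idle [Mouse_down; Mouse_up]
```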

That is something I’m still considering. Right now FPS = Events/s as we only redraw when an event, that some widget listens for, occurs (plus some external events like window resize). The question is how well this interacts with realtime widgets, animations and how often events can occur. It’s also not clear to me how tightly the logic/interactivity of the UI should be coupled with the display. In theory we could have two separate functions state -> event -> state and state -> display (similar to the functional vision presented in the blog post by @Gopiandcode) with potentially the same solution to animation problems.

A fair point. I previously had a nice trick with polymorphic variants working (the events listened to are a subset of all events and the match must cover only those events listened to). Unfortunately that doesn’t work right now and I will need to read up on polymorphic variants and GADTs a bit more to find a nice solution. Potentially a ppx could also solve this problem by extracting the awaited events from the match. Then the pattern matching could again guarantee that all transitions are covered.

One general problem of your approach is that you must cover significantly more transitions for sparse automatons (e.g. the sequence A → B → C → D is absolutely trivial with the effectful approach and requires no state at all).
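A sketch of why such a sequence is trivial effectfully (OCaml 5 Effect module; the driver below is a toy that immediately answers each await with the first awaited event):

```ocaml
open Effect
open Effect.Deep

(* The sparse automaton A -> B -> C -> D needs no explicit state: it is
   just four awaits in sequence, the position living in the continuation. *)
type event = A | B | C | D
type _ Effect.t += Await : event list -> event Effect.t

let visited = ref []

let widget () =
  List.iter
    (fun ev -> ignore (perform (Await [ev])); visited := ev :: !visited)
    [A; B; C; D]

(* Toy driver answering every await with the first event it asked for. *)
let () =
  try_with widget ()
    { effc = fun (type a) (eff : a Effect.t) ->
        match eff with
        | Await evs ->
          Some (fun (k : (a, _) continuation) -> continue k (List.hd evs))
        | _ -> None }
```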

To some extent both approaches can be mixed and matched (although I haven’t done that yet) to get the best (or worst?) of both worlds. The effectful approach has a lot of potential and I’m interested to see where this leads to. Only time will tell how the tradeoffs scale to larger programs and whether the benefits outweigh the disadvantages.

Ah, this wouldn’t work - as the continuation would then be lost. But delaying the yield might? Though if the logic depended on order of execution between widgets, this wouldn’t work either.
Edit: And if delaying - the simulation would be delayed, which isn’t wanted.

Yes, I used this separation of simulation and rendering in the FRP code of niseq as well, with each running at an independent update rate. This structure is the same as in Elm and MVC.

Though a problem with this structure in modern UIs is that the display can choose where to render an interactive widget - which makes the event dependent on the display. In FRP you solve this with a fixpoint combinator around the whole program, and in Elm this recursion is hidden from you. How do you think you would solve this?

A continuation should never be lost. Maybe raise another effect that stores the continuation of the widget and disables the listener for some delay? I still don’t see the problem either way. If events occur less than 60/s then no limit is necessary at all as the widgets are updated rarely. If 3 events occur within a single frame, why should I limit the update rate? If this was intentional (e.g. OS hanging) all 3 events should be handled within that frame and the result displayed only once they have been handled.

Currently the whole state necessary to display a widget is stored in a widget object that the event loop also has access to. So display can modify the object and the event loop reacts to that. That would be a practical solution on a small scale but that kind of mutability will certainly be hard to reason about.

I can’t really think of a practical example of display influencing event besides animations. Even then I’m not convinced. Generally I would handle anything that might influence event within event. If a button is animated for 100ms before becoming clickable that should IMHO be part of event, not display. (display might still be stateful in addition, event should not be concerned with e.g. keeping track of the percentage of a loading bar for a delay as this does not influence the interaction).
For e.g. resizing events and widgets becoming hidden, I would simply have them wait. If a button awaits a click but is hidden, a click can never land on it, so it simply waits. Reacting to a resize should also be possible without handling it only in the display loop.

Lastly it should be said that creating the most advanced and beautiful UI is not a goal for this project. Instead I aim to create a framework that is primarily easy to use as a developer and only secondarily offers additional features. I simply think it’s unfeasible for a single person to create a both powerful and simple UI framework. If some rare feature does not work nicely, so be it. As long as I actually finish the framework (: