Tutorial: Roguelike with effect handlers

Regarding the parametric await: would this work? (I’m not sure you can do this directly with the ['a] class type variables.)

class ['a] row =
  object
    val child : 'a widget
    method execute_alt _ =
      (fun (type a) (child : a widget) ->
        (* a local module names an effect tied to the locally abstract
           type a, which the class type variable 'a cannot do directly *)
        let module M = struct
          type _ Effect.t += Await_alt : Event.t list * a -> Event.t Effect.t
        end in
        let await x = Effect.perform (M.Await_alt x) in
        (* effect patterns are only allowed in match, not try (OCaml 5.3) *)
        match child#execute_alt await with
        | result -> result
        | effect (M.Await_alt (events, state)), k -> ...)
        child
  end

Thanks a lot, that does work! The typing with all these continuations, objects, modules, etc. makes my head spin, though. Especially the continuations ^^

What I meant was that I would more commonly want to control FPS from the outside, and not from within a widget. My idea was that the concurrency-yields could be filtered by the wrapping handler, thereby limiting the FPS of the inner widget.

Sorry, my parallel was not a direct one, and on further thought might not be too interesting.

Yes, this is actually a good example of one of the reasons why I prefer to represent state as data. In FRP I usually implement these small “state machines” as simple E.folds, which accumulate the current state over a stream of events.

The simplicity and power of this, combined with the guarantees of pattern matching, makes for a nice design pattern, I feel. The exhaustive match over the tuple of the accumulated state and the input event guarantees that I’ve taken every state transition into consideration.
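The E.fold pattern described above can be sketched without an FRP library (the types and transition names here are illustrative, not from the actual project): the state machine is a pure step function with an exhaustive match, folded over a stream of events, here stood in for by a list.

```ocaml
(* A minimal sketch of the E.fold-style state machine:
   the match over (state, event) must cover every transition. *)
type state = Idle | Armed | Fired
type event = Press | Release

let step state event =
  match state, event with
  | Idle,  Press   -> Armed
  | Idle,  Release -> Idle
  | Armed, Release -> Fired
  | Armed, Press   -> Armed
  | Fired, _       -> Fired

(* With an FRP library this would be something like E.fold step Idle events;
   here a plain fold over a list stands in for the event stream. *)
let run events = List.fold_left step Idle events
```

Leaving out any `(state, event)` pair makes the match non-exhaustive, so the compiler points at the missing transition.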

That is something I’m still considering. Right now FPS = events/s, as we only redraw when an event that some widget listens for occurs (plus some external events like window resize). The question is how well this interacts with real-time widgets and animations, and how often events can occur. It’s also not clear to me how tightly the logic/interactivity of the UI should be coupled with the display. In theory we could have two separate functions, state -> event -> state and state -> display (similar to the functional vision presented in the blog post by @Gopiandcode), with potentially the same solution to animation problems.
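The two-function split mentioned above could look roughly like this (a sketch with hypothetical types and names, not the framework’s actual API): `update` advances the state on each event, `view` derives a display from the state, and each side can in principle run at its own rate.

```ocaml
(* Hypothetical model for illustration only. *)
type model = { count : int }
type event = Incr | Reset

(* state -> event -> state: the only function the event loop calls *)
let update (m : model) = function
  | Incr  -> { count = m.count + 1 }
  | Reset -> { count = 0 }

(* state -> display: the only function the render loop calls;
   a string stands in for an actual display tree here *)
let view (m : model) = Printf.sprintf "count: %d" m.count

let () =
  let m = List.fold_left update { count = 0 } [Incr; Incr; Incr] in
  print_endline (view m)
```

Because `view` never mutates the model, redrawing more or less often than events arrive cannot change the logic, which is what decouples FPS from events/s.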

A fair point. I previously had a nice trick with polymorphic variants working (the events listened to are a subset of all events, and the match must cover only those events listened to). Unfortunately that doesn’t work right now, and I will need to read up on polymorphic variants and GADTs a bit more to find a nice solution. A ppx could potentially also solve this problem by extracting the awaited events from the match. Then the pattern matching could again guarantee that all transitions are covered.
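The polymorphic-variant trick mentioned above might be sketched like this (event constructors are made up for illustration): a widget’s handler is typed against only the subset of events it listens to, so its match need only cover that subset, while the subset still embeds into the full event type via variance coercion.

```ocaml
(* The full event type of the framework (hypothetical constructors). *)
type all = [ `Click of int * int | `Key of char | `Resize of int * int ]

(* A button only listens for clicks and key presses: the match is
   exhaustive over this subset, and `Resize need not be covered. *)
let button_step (ev : [ `Click of int * int | `Key of char ]) =
  match ev with
  | `Click (x, y) -> Printf.sprintf "click at %d,%d" x y
  | `Key c        -> Printf.sprintf "key %c" c

(* The subset coerces into the full event type when needed. *)
let widen (e : [ `Click of int * int | `Key of char ]) : all = (e :> all)
```

The difficulty alluded to is presumably in combining such subset types with the effect-based await, where the awaited set would have to appear in the effect’s type.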

One general problem with your approach is that you have to cover significantly more transitions for sparse automata (e.g. the sequence A → B → C → D is absolutely trivial with the effectful approach and requires no explicit state at all).
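To make the A → B → C → D point concrete, here is a sketch assuming OCaml 5’s `Effect` module (the event type, `Await` effect, and driver are invented for this example): the sequential program just awaits four times, with the “current position” living implicitly in the continuation rather than in an explicit state value.

```ocaml
type event = A | B | C | D

type _ Effect.t += Await : event Effect.t

let await () = Effect.perform Await

(* The widget logic: no state type, no transition table,
   just four awaits in sequence. *)
let sequence () =
  let expect e = if await () <> e then failwith "unexpected event" in
  expect A; expect B; expect C; expect D; "done"

(* A toy driver: a list of events stands in for the event loop. *)
let run events f =
  let events = ref events in
  Effect.Deep.try_with f ()
    { Effect.Deep.effc = fun (type b) (eff : b Effect.t) ->
        match eff with
        | Await -> Some (fun (k : (b, _) Effect.Deep.continuation) ->
            match !events with
            | e :: rest -> events := rest; Effect.Deep.continue k e
            | [] -> failwith "no more events")
        | _ -> None }

let () = print_endline (run [A; B; C; D] sequence)
```

Encoding the same automaton as data would need a four-constructor state type and a step function, even though only one transition is legal from each state.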

To some extent both approaches can be mixed and matched (although I haven’t done that yet) to get the best (or worst?) of both worlds. The effectful approach has a lot of potential and I’m interested to see where this leads to. Only time will tell how the tradeoffs scale to larger programs and whether the benefits outweigh the disadvantages.

Ah, this wouldn’t work, as the continuation would then be lost. But delaying the yield might? Though if the logic depended on the order of execution between widgets, this wouldn’t work either.
Edit: And if delaying, the simulation would be delayed too, which isn’t wanted.

Yes, I used this separation of simulation and rendering in the FRP code of niseq too, with each running at an independent update rate. This structure is the same as in Elm and MVC.

Though a problem with this structure in modern UIs is that the display can choose where to render an interactive widget, which makes the events dependent on the display. In FRP you solve this with a fixpoint combinator around the whole program, and in Elm this recursion is hidden from you. How do you think you would solve this?

A continuation should never be lost. Maybe raise another effect that stores the continuation of the widget and disables the listener for some delay? I still don’t see the problem either way: if events occur at less than 60/s, no limit is necessary at all, as the widgets are updated rarely. And if 3 events occur within a single frame, why should I limit the update rate? If this was intentional (e.g. the OS hanging), all 3 events should be handled within that frame and the result displayed only once they have been handled.

Currently the whole state necessary to display a widget is stored in a widget object that the event loop also has access to, so the display can modify the object and the event loop reacts to that. That is a practical solution on a small scale, but that kind of mutability will certainly be hard to reason about.
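The shared-object arrangement described above might look like this in miniature (class name, geometry representation, and methods are all illustrative): the display pass writes the widget’s placement into the object, and the event loop later reads it back for hit-testing, so the event side depends on what the display decided.

```ocaml
(* A widget object mutated by display and read by the event loop. *)
class button = object
  val mutable rect = (0, 0, 0, 0)        (* x, y, width, height *)
  method set_rect r = rect <- r          (* called by the display pass *)
  method hit (px, py) =                  (* called by the event loop *)
    let (x, y, w, h) = rect in
    px >= x && px < x + w && py >= y && py < y + h
end

let () =
  let b = new button in
  b#set_rect (10, 10, 100, 20);          (* display decides placement *)
  assert (b#hit (50, 15));               (* a click here reaches the button *)
  assert (not (b#hit (0, 0)))
```

This makes the display-to-event dependency explicit: any reasoning about which widget receives a click has to account for when `set_rect` last ran.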

I can’t really think of a practical example of display influencing event besides animations, and even then I’m not convinced. Generally I would handle anything that might influence event within event. If a button is animated for 100 ms before becoming clickable, that should IMHO be part of event, not display. (display might still be stateful in addition; event should not be concerned with e.g. keeping track of the percentage of a loading bar for a delay, as this does not influence the interaction.)
For e.g. resizing events and widgets becoming hidden, I would simply have them wait. If a button awaits a click but is hidden, no click can ever land on it, ergo it simply waits. Reacting to a resize should also be possible without handling it only in the display loop.

Lastly, it should be said that creating the most advanced and beautiful UI is not a goal for this project. Instead I aim to create a framework that is primarily easy to use as a developer and only secondarily offers additional features. I simply think it’s infeasible for a single person to create a UI framework that is both powerful and simple. If some rare feature does not work nicely, so be it. As long as I actually finish the framework (: