'Rails on OCaml'

I’d like to discuss @jasim 's post a bit. It’s a great conversation starter but it got me thinking that we need to actually look at how such a framework would look and feel from the user’s (i.e. the web dev’s) perspective. And whenever I think of web dev, the pattern I keep coming back to is the tried-and-true MVC. And I really like how Phoenix has done it, so I keep wanting to borrow some ideas from there. So to me, here’s what a nice project layout might look like (this is native so all module names are disambiguated by dune):

dune-project
README.md
src/
  dune
  model/
    dune
    User.re
  service/
    dune
    User.re
  web/
    controller/
      dune
      User.re
    dune
    middleware/
      ...
    Server.re
    template/
      not-found.mustache.html
      not-found.mustache.json

And here are a couple of example usages from this imaginary project: https://gist.github.com/yawaramin/f0a24f1b01b193dd6d251e5e43be65e9

So in the above snippet, the main app is set up in src/web/Server.re. What I wanted here was a really minimalistic but also extensible way to set up routes and middlewares. And to me the way that makes the most sense to handle routing in OCaml is pattern-matching (kudos to the ReasonReact team for coming up with that simple and elegant design). So the Server module is the heart of the app, but this being OCaml, there's no magic involved: the route handlers are just functions from routes and requests to (promises of) responses.
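
To make this concrete, here is a minimal sketch of what pattern-matching routing could look like. The request/response types, the respond helper, and the use of Lwt are all assumptions of mine, not an existing API:

type request = { body : string }
type response = { status : int; body : string }

let respond ?(status = 200) body = Lwt.return { status; body }

(* A route handler is just a function from (method, path segments, request)
   to a promise of a response; routing is ordinary pattern matching. *)
let server meth path (req : request) =
  match meth, path with
  | `GET, ["users"; id] -> respond ("Hello, user " ^ id)
  | `POST, ["users"] -> respond ~status:201 req.body
  | _ -> respond ~status:404 "Not found"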

Requests should contain things like the headers, query params, cookies, session, flash, body, etc.

Responses should be constructed programmatically (using ReWeb.Response.{make,makeAsync}) or from ocaml-mustache templates (see template/ subdirectory). The response constructor function should take optional arguments for status, headers, cookies, etc. with appropriate defaults.

Middleware should be functions that take a request and return a promise of request (which might be a cancelled request); in this way they are composable (much like Elixir's plugs).
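
Under those assumptions, a rough sketch of the middleware shape (nothing here comes from an existing library):

(* A request carries its data plus a cancelled flag that lets a middleware
   short-circuit the rest of the pipeline. *)
type request = { path : string; user : string option; cancelled : bool }

(* A middleware is a function from request to a promise of request. *)
type middleware = request -> request Lwt.t

let log : middleware = fun req ->
  print_endline ("-> " ^ req.path);
  Lwt.return req

let require_auth : middleware = fun req ->
  match req.user with
  | Some _ -> Lwt.return req
  | None -> Lwt.return { req with cancelled = true }

(* Composition is just promise bind, much like chaining Elixir plugs. *)
let pipeline req = Lwt.bind (log req) require_auth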

I haven’t touched on data access here; to be honest I’m not very familiar with the OCaml database libraries at the moment, but I am leaning towards Caqti since it seems to be active, and supports the popular OSS DBs as well as Lwt.
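
For reference, basic Caqti usage with Lwt looks roughly like this; I'm assuming the Caqti 1.x-style API (Caqti_request/Caqti_type), and the exact names may differ between versions:

(* A prepared query: int parameter in, optional string out. *)
let user_name =
  Caqti_request.find_opt Caqti_type.int Caqti_type.string
    "SELECT name FROM users WHERE id = ?"

(* Run it against a connection; Db is a first-class module. *)
let find_name (module Db : Caqti_lwt.CONNECTION) id =
  Db.find_opt user_name id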

Scaffolding would be great to have, ideally supported by dune or opam directly (e.g. opam new my-app --scaffold reweb), but in the meantime we can probably get away with maintaining example starter projects which people can copy to start with.

Lots of this is off the top of my head but what I’m trying to do here is uncover an ergonomic design for a web framework, based on learnings from recent successful frameworks. I think a large part of it would be just giving users a ‘blessed stack’ of libraries to start with, so they don’t have to go hunting throughout the ecosystem and don’t have to figure out and set up a project structure from scratch (where does the router go? where do the controllers go? where do I put the tests?). And so on.

P.S. the pattern-matching routers look really nice in OCaml syntax.

4 Likes

Is it the only pattern?
In fact, what are all the possible patterns?

1 Like

Well, for example Eliom has a service-handler pattern for creating endpoints: https://ocsigen.org/eliom/6.6/manual/server-services

There is ocaml-webmachine which treats endpoints as resources that are exposed according to a state machine.

There’s Cohttp which treats the server as a single function that gets run by the event loop. It’s up to users to define abstractions on top of that.
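
For example, the essence of a cohttp-lwt-unix server is roughly the following (adapted from memory of its README, so labels may have shifted between versions):

(* The whole server is one callback from request to promise of response. *)
let callback _conn _req _body =
  Cohttp_lwt_unix.Server.respond_string ~status:`OK ~body:"Hello, world" ()

let () =
  Cohttp_lwt_unix.Server.create ~mode:(`TCP (`Port 8000))
    (Cohttp_lwt_unix.Server.make ~callback ())
  |> Lwt_main.run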

So I think there is no shortage of approaches to web servers in OCaml.

1 Like

Thanks @yawaramin for framing this nicely and continuing the conversation.

In my mind I wanted to design an ideal framework, yet somehow avoid the second-system effect by basing the design on actual application needs. And so in service to this idealism I have wanted to build a web application in Haskell and borrow the best ideas it has to offer. That might never come to pass, so if there are people who know how the Haskell ecosystem approaches this, it'd be great to hear from them.

Since we have the opportunity to do something from scratch, it might be worth thinking about what a futuristic framework should look like. Otherwise we'll have to play catch-up with mature alternatives, repeating their mistakes and having nothing new and radically better to offer the world. Working on that kind of project might not inspire the passion it takes to bring something like this into production.

I think dhh's approach to Rails is worth emulating - a framework must be extracted, not envisioned. So what kind of web applications is Rails ideally suited for, and what kinds does it make difficult and unwieldy? And how can we have the best of all worlds?

Rails is built as a vehicle for Basecamp, which is project management software. The major operations are create, read, update, and delete of database records. It is filled with forms, and a simple request-response cycle works well for most of them. The closer the application you're building is to a Basecamp, the more affordances Rails gives you. But if you're building, say, a WhatsApp clone with Rails, you're going against the grain of the system.

ORM

Since most operations are CRUD, Rails gives significant developer comfort when it comes to database operations, and that is thanks to its ORM. It gives a lazy interface to construct queries - they are first-class values - and they get executed only at the last possible moment. Rails follows the ActiveRecord pattern and I prefer it over having to construct SQL statements for mundane stuff. We can always drop down to raw SQL when needed, but for simple reads and updates, I've found ActiveRecord to be more ergonomic.
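
To illustrate the pattern in OCaml terms, here is a toy sketch of queries as first-class values that only turn into SQL (and would only hit the database) when explicitly run; all names are made up:

(* A query is just data until it is run. *)
type query = { table : string; conditions : string list }

let from table = { table; conditions = [] }
let where cond q = { q with conditions = cond :: q.conditions }

(* Only at this point would anything be sent to the database. *)
let to_sql { table; conditions } =
  let where_clause =
    match conditions with
    | [] -> ""
    | cs -> " WHERE " ^ String.concat " AND " (List.rev cs)
  in
  "SELECT * FROM " ^ table ^ where_clause

let adults = from "users" |> where "age >= 18" |> where "active"
(* to_sql adults = "SELECT * FROM users WHERE age >= 18 AND active" *)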

But would a simple layering like mirage/orm be fine, or are typed ORMs like pgocaml, which extract database schema info at compile time and generate types from it, worth looking into? Haskell has multiple such implementations and I hope they have converged on some do's and don'ts there.

Database Migrations

I think Phoenix, Rails, Django, and even Java with Flyway have all converged on essentially the same pattern for database migrations, so there is no wheel to be reinvented here.
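
That converged pattern is basically timestamped, ordered migration scripts with an up and a down step, tracked in a table in the database. A sketch of that shape (names are illustrative, not from any existing OCaml tool):

(* 20240101120000_create_users.ml - one migration, identified by its timestamp. *)
let version = "20240101120000"
let up = "CREATE TABLE users (id SERIAL PRIMARY KEY, name TEXT NOT NULL)"
let down = "DROP TABLE users"

(* A runner applies, in order, every up whose version is not yet recorded in a
   schema_migrations table, and records it afterwards. *)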

Streaming

Rails supports streaming by treating the response object as an enumerator. This allows us to build pages that can immediately start sending data back to the client without having to wait until the whole page is built.
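
In Lwt terms the equivalent might be a response whose body is a stream that the server flushes chunk by chunk; a rough sketch (the shape is my assumption, only Lwt_stream is real):

(* The body is a stream of chunks; the server can send each chunk to the
   client as soon as it is pushed, before the rest of the page exists. *)
let streaming_body () =
  let stream, push = Lwt_stream.create () in
  push (Some "<html><body><h1>Report</h1>");
  (* ... push more chunks as rows are fetched ... *)
  push (Some "</body></html>");
  push None;  (* close the stream *)
  stream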

Concurrency, Evented vs Threaded vs Multi-process

My understanding is quite hazy on this. What is it going to be? I know I don't want every single one of my database calls to be a callback, like we're forced to do in Node, which is now papered over by async/await. But is it possible to build non-evented servers on which something like WhatsApp can be built?
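
For what it's worth, with Lwt (or Async) the calls are promise-based rather than callback-based, so the code reads sequentially even on an evented runtime. A small sketch with made-up functions:

open Lwt.Infix

(* Stand-ins for a DB call and a template render, both returning promises. *)
let fetch_user id = Lwt.return ("user-" ^ string_of_int id)
let render_page user = Lwt.return ("<p>" ^ user ^ "</p>")

(* Reads top to bottom; no pyramid of callbacks. *)
let show_user id =
  fetch_user id >>= fun user ->
  render_page user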

Background Jobs

Do we serialize background jobs outside the application - like in Sidekiq, which stores everything in Redis? Or is it possible to spawn off new processes like in Erlang? How do we scale processes or jobs across multiple servers? How about observability?

Templating

Conventional string-interpolation with partials is much less ergonomic than writing reusable components and dealing with them as HTML tags like in JSX. It should be possible to use the same code to render pages both on the server and the client – React+JSX server-side rendering does this really well.
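
In OCaml the JSX-ish approach could amount to components as plain functions returning a typed markup value (TyXML works along these lines). A toy sketch with a made-up html type:

(* A "component" is just a function that returns markup. *)
type html =
  | Element of string * (string * string) list * html list
  | Text of string

let user_card ~name ~avatar_url =
  Element ("div", [ ("class", "user-card") ], [
    Element ("img", [ ("src", avatar_url) ], []);
    Element ("span", [], [ Text name ]);
  ])

(* Components compose as ordinary values, and the same code can render to a
   string on the server or drive the DOM on the client. *)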

Client-side code

Ocsigen seems to allow writing functions that execute on both the client and the server transparently, while allowing context-dependent behavior when necessary. It takes care of maintaining and transferring state between server and client, eliminating the need to write custom endpoints. I've tried perusing the documentation and found it difficult to understand; happy to be corrected.

In my experience the majority of client-side code needs to be present only in the browser, and treating the code paths separately is clearer to understand. What we need, however, is a better way to pass data around, which the next section addresses.

Client-side data

Rich web applications require hundreds of JSON endpoints with varying levels of granularity. GraphQL attempts to solve this by creating a declarative API, which expresses the schema of the data needed by each client request. Interesting waters there.


These are a set of questions I had in my mind when I wrote that post. They are not terribly well thought out, but I’m hoping those with more clarity can chime in.

5 Likes

Note that Eliom contains several pieces:

  • The web framework indeed promotes this service-handler pattern, although it is fairly lightweight and flexible (and you can basically treat it like cohttp and add your own abstraction on top, if you feel like it).
  • The language, on the other hand, is almost completely model-agnostic; it only cares about making sure data transfers between client and server are well-typed, sound, and as easy to do as possible.

So far, the language and the framework are not as separate as I would like, but it’s certainly on my todo-list to make all this more modular.

2 Likes