And yet, this is precisely how monads were discovered and applied to PL semantics and implementation. We can go to (nearly) the source: Wadler’s original paper where he proposes monads for Haskell, “The Essence of Functional Programming.” There, in section “5.2 The Past”, he writes:
> Finally, something should be said about the origin of these ideas.
>
> The notion of monad comes from category theory [Mac71, LS86]. It first arose in the area of homological algebra, but later was recognised (due to the work of Kleisli and of Eilenberg and Moore) to have much wider applications. Its importance emerged slowly: in early days, it was not even given a proper name, but called simply a “standard construction” or a “triple”. The formulation used here is due to Kleisli.
>
> Eugenio Moggi proposed that monads provide a useful structuring tool for denotational semantics [Mog89a, Mog89b]. He showed how lambda calculus could be given call-by-value and call-by-name semantics in an arbitrary monad, and how monads could encapsulate a wide variety of programming language features such as state, exception handling, and continuations.
In that paper, Wadler shows how you can rederive the call-by-value CPS interpreter from a generic call-by-value monadic interpreter, by instantiating it with the continuation monad. He mentions that this is a generic trick, and that you can get other interpreters (e.g. for lambda+Error) by instantiating other monads. Given that denotational interpreters for languages like lambda+Error, lambda+state, and lambda+continuations had existed for well over ten years by the time Moggi came to write his paper, it is probably reasonable to assume that he arrived at his monad ideas by observing the similarities and differences among various denotational interpreters. The similarities became a monadic interpreter (the “call-by-value semantics in an arbitrary monad”) and the differences became various monads.
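Wadler works in Haskell; purely as an illustration, here is a minimal sketch of the same idea in OCaml (my own sketch, not Wadler’s code): the interpreter is written once against a MONAD signature, and plugging in different monads yields the different interpreters.

```ocaml
(* A minimal sketch (mine, not Wadler's code): a call-by-value
   interpreter written once against a MONAD signature.  Plugging in
   different monads yields different languages/interpreters. *)
module type MONAD = sig
  type 'a t
  val return : 'a -> 'a t
  val bind : 'a t -> ('a -> 'b t) -> 'b t
end

module Interp (M : MONAD) = struct
  type term =
    | Var of string
    | Lam of string * term
    | App of term * term

  type value = Closure of (value -> value M.t)

  let ( let* ) = M.bind

  (* Call-by-value evaluation; the monad M supplies whatever "effects"
     (errors, state, continuations, ...) the language has. *)
  let rec eval env = function
    | Var x -> M.return (List.assoc x env)  (* unbound vars not handled here *)
    | Lam (x, body) ->
        M.return (Closure (fun v -> eval ((x, v) :: env) body))
    | App (f, a) ->
        let* fv = eval env f in
        let* av = eval env a in
        (match fv with Closure k -> k av)
end

(* The identity monad gives back the plain interpreter; a result/error
   monad gives lambda+Error; a state monad gives lambda+state; etc. *)
module Id = struct
  type 'a t = 'a
  let return x = x
  let bind m f = f m
end

module Plain = Interp (Id)
```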
All of this is pretty anodyne, and very obvious from Wadler’s paper. Now let me illustrate the problem that a beginner has with monads.
1. the beginner wants to write a program that will raise and catch exceptions.
2. the beginner’s PL wizard friend tells them “use the result monad instead, it’s great!”
3. and sure, you can write lambda, variables, constants, application, and even raise with the result monad (see the sketch just after this list).
4. but nowhere is it written down how to do try...catch.
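To make step 3 concrete, here is a sketch of those pieces in terms of OCaml’s built-in ('a, 'e) result type; raise_error, safe_div, and computation are illustrative names of mine, not from any library.

```ocaml
(* return and bind for the result monad, written out directly over
   OCaml's built-in ('a, 'e) result type *)
let return x = Ok x
let ( let* ) m f = match m with Ok v -> f v | Error e -> Error e

(* "raise" is just Error *)
let raise_error e = Error e

(* constants, application and sequencing all work fine: *)
let safe_div x y =
  if y = 0 then raise_error "division by zero" else return (x / y)

let computation =
  let* a = safe_div 10 2 in
  let* b = safe_div a 0 in          (* this step "raises" *)
  return (a + b)
(* computation = Error "division by zero" -- but how do we *catch* it? *)
```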
Is our beginner to just figure it out for themself? I mean, I learned this stuff in 1986: so going on 37 years ago. But our beginner has to figure it out without the benefit of a graduate course in programming languages from Prakash Panangaden: it’s not so easy for them.
For the record, it’s straightforward (in the language of the result monad as we find it in OCaml) to implement try-catch:
```ocaml
let trycatch _M _FN = match _M with Ok v -> Ok v | Error e -> _FN e ;;
val trycatch :
  ('a, 'b) result ->
  ('b -> ('a, 'c) result) -> ('a, 'c) result = <fun>
```
The type tells us that this probably does what we expect. And we use it thus:
```ocaml
(* assumes bos is loaded (e.g. #require "bos";; open Bos;;) and a
   result-monad let*, e.g. let ( let* ) = Result.bind ;; *)
let readit f =
  let open Fpath in
  let f = v f in
  let* contents = OS.File.read f in
  Ok contents ;;
```
```ocaml
# trycatch (readit "/etc/motd") (function `Msg s -> Ok s);;
- : (string, 'a) Rresult.result =
Ok ".... contents of /etc/motd ...."
# trycatch (readit "/etc/nonexistent") (function `Msg s -> Ok s);;
- : (string, 'a) Rresult.result =
Ok "/etc/nonexistent: No such file or directory"
#
```
I’m not claiming that this is the best way to implement a try-catch combinator. But some try-catch combinator should be supplied. It shouldn’t be up to the user to just dream one up. Notice also that in this example, we don’t leave the result monad in order to do a try-catch: we stay in the little language. That’s important, too: it means that your beginner doesn’t have to jump back-and-forth between the “language above the monad” and the “language below the monad” (or, in semantics terms, between the left-hand side and the right-hand side of the semantic interpreter).
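As a small illustration of that last point: trycatch itself returns a result, so it composes with let* like anything else in the monad. This is only a sketch; motd_or_default is an illustrative name of mine, with readit and trycatch as defined above.

```ocaml
(* Sketch: recover from the failure and keep computing, all inside the
   result monad (readit and trycatch as defined above). *)
let motd_or_default () =
  let* contents =
    trycatch (readit "/etc/motd") (function `Msg _ -> Ok "no motd today")
  in
  Ok (String.uppercase_ascii contents)
```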