Can monads help me refactor my code for an enhanced data structure?

Consider me to be a relatively advanced, or transitional, member of the “I don’t get monads” population. After reading more blog posts than I can remember, parts one or two of Wadler’s papers (until I got stuck), sections of my old Haskell textbooks, and answers here, I kind of get the idea, and kind of see why you might want to use monads in some contexts outside of Haskell. (Oh yeah, and I studied category theory a little bit a couple of years ago.) But I have never seen a need for monads in my own code. (Exceptions? OK with me. Lists are monads? Great! I already have them.) Now, to my surprise, I think I might have a problem that would benefit from monads. Or maybe not.

I’ve written a moderate amount of sometimes complicated code to process sequences of data. In particular, these are OCaml lazy lists (Batteries.LazyList) of lists of Owl matrices. I’ve decided that the lists of matrices that are the elements of the lazy list should have timestamps in the form of integers. I can define pairs, or better yet records with one field to represent the time and one field to represent a list of matrices:

type dists_at_t = { t : int; dists : Owl.Mat.mat list; }

Then the lazy lists will contain dists_at_t's. However, that means that I have to go through my code altering it to pack and unpack the dists_at_t records in the lazy lists being processed, since most of the processing only involves lists of matrices.

Since one of the things monads can do is pack and unpack data structures, I thought that they might help, but I’m not sure what my next step is. Maybe I have the same amount of work whether I write my code monad-style or not. Does it seem likely that some existing library might help? Maybe the new BAP monads library? (Have I not told you enough about the details of the code to answer?) Thanks.

1 Like

Maybe you just need to use the map operation? Define your computation function f which operates on your type dists_at_t and map it over the lazy list? This will give you a lazy list of results.
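
For instance, a minimal sketch of that approach, assuming the dists_at_t type above (process and step are just placeholder names):

open Batteries

(* a stand-in for whatever per-step processing you already do on lists of matrices *)
let process (ms : Owl.Mat.mat list) : Owl.Mat.mat list = ms

(* unpack the record, process the matrices, repack with the same timestamp *)
let step (d : dists_at_t) : dists_at_t = { d with dists = process d.dists }

(* map the step over the whole lazy list *)
let run (data : dists_at_t LazyList.t) : dists_at_t LazyList.t =
  LazyList.map step data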

If there is some context that you need to carry as you process the list, you might want to use a lazy right fold. Perhaps there is a simpler way to do the same using monads, but this is one way to do it without them.

Maybe you’re doing something more complicated: say each Owl matrix gives a new list of matrices; then maybe you want to use monadic bind on lists:

let bind lst f =
  List.concat (List.map f lst)

Maybe you want a time-keeping monad, to get a record of running times?

type 'a timer = { t : int; dists : 'a }

let return x = { t = 0; dists = x }

(* f returns a plain value here, so this bind has type
   'a timer -> ('a -> 'b) -> 'b timer *)
let bind mx f =
  let init = mx.t in
  let start = int_of_float (Unix.time ()) in
  let tmp = f mx.dists in
  let delta_t = int_of_float (Unix.time ()) - start in
  { t = init + delta_t; dists = tmp }
(* wrap up this code in a module if desired... *)

let computation (m : Owl.Mat.mat) = (...)
let data = (...) (* define lazy list of lists of matrices *)
let results = 
  LazyList.map (return %> fun mx -> bind mx (List.map computation)) data in
(...)

I’m not entirely sure what exactly you’re looking for so I’ve just listed some possibilities I could think of…

4 Likes

Thanks very much @theindigamer. This is helping me think through my (poorly described) issues.

Wow, getting the types right for Batteries.LazyList.lazy_right_fold is non-trivial (for me).

I may be back with a concrete question if I can figure out what to ask.

Monads are definitely not about packing and unpacking data structures. Monads are about representing computations as values. A computation is a procedure (a command, a set of instructions) bound to an execution environment, producing values and affecting that environment. A monad, in this case, is a design pattern, along with an abstraction, that helps to deal with this.

Concerning your problem, monads may or may not be of help. What you want is to make your code generic with respect to the data representation; what you need is a proper abstraction. In your case, you don’t have one. A type in OCaml defines a data representation, not an abstraction. Your functions depend on a particular representation, and by exposing the representation you break the abstraction. Abstractions are defined by module types (signatures) and are implemented by modules. So what you need is a module Dist that abstracts the representation of a dist. Your code will then depend on the Dist.t abstraction, and will not break when you change its representation.
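
For example, a minimal sketch of such an abstraction (the exact interface is up to you; these names are just illustrative):

module Dist : sig
  type t
  val create : time:int -> Owl.Mat.mat list -> t
  val time   : t -> int
  val dists  : t -> Owl.Mat.mat list
  (* lets client code keep working on lists of matrices
     without knowing how they are packed *)
  val map    : (Owl.Mat.mat list -> Owl.Mat.mat list) -> t -> t
end = struct
  type t = { t : int; dists : Owl.Mat.mat list }
  let create ~time dists = { t = time; dists }
  let time d = d.t
  let dists d = d.dists
  let map f d = { d with dists = f d.dists }
end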

Now, when a monad can help:

  1. You need to pass a state to many functions. You can pass it explicitly as a parameter to all of them, i.e., many of your functions will have a type ending in state -> state; you can use a mutable global variable; or you can use OO. Or you can use a state monad. In fact, under the hood, a state monad is a function that takes an 'a value and returns a state -> state function. So, since you have lots of functions that end with the state -> state type, the State monad just captures this idea in an abstraction of state transformation (a minimal sketch follows this list). There are also two refinements, Reader and Writer, that do the transformation in only one direction.

  2. You need to deal with computations that may diverge. You may use exceptions, but they hide the effect of divergence from the type system. Thus you can use any partial-computation monad (the Result monad, the Option (Maybe) monad, etc.) to work with diverging computations explicitly.

  3. You would like to work with non-deterministic computations, i.e., functions that return several results. And you would like to express your algorithms in terms of deterministic functions. Then you can use the List monad.

  4. You need co-routines, generators, gotos, and arbitrary control flow, or you just want to confuse people who read your code, or you just want to show off. Use the continuation monad :slight_smile: Well, in fact, it has its own applications; for example, in BAP we are using the continuation monad to implement non-deterministic Turing Machines, so that we can jump between different computations in arbitrary order. It allows us to express algorithms in terms of a deterministic machine, while the monad takes care of extending it to the non-deterministic case.
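
Here is the minimal hand-rolled state monad sketch promised in point 1 (the Monads library provides a full-featured one):

module State = struct
  type ('a, 's) t = 's -> 'a * 's
  let return x = fun s -> (x, s)
  let bind m f = fun s -> let x, s' = m s in f x s'
  let get = fun s -> (s, s)
  let put s' = fun _ -> ((), s')
  let run m ~init = m init
end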

15 Likes

@ivg, that is one of the clearest things I’ve read about monads, ever. Thank you.

Some of what you wrote is not unfamiliar, but still, like many people, I find that understanding stuff about monads is a lot different from grokking them. The context you gave about abstraction, and about what I have now, was extremely helpful for the rest.

One thing that confuses me in what you wrote is

Monads are about representing computations as values. A computation is a procedure (a command, a set of instructions) bound to an execution environment, producing values and affecting that environment.

This also sounds like things I’ve read (hmm, maybe when I skimmed the BAP monads intro?), and what confuses me when I see it is that it sounds like a closure, or maybe a thunk carrying a closure, maybe with references in the carried environment. That is, it seems like nothing new that would require the mysterious … monad. :slight_smile:

So let’s remove the mystery from the monad: let’s break the abstraction and look at the representations.

The stateful monads:

  type ('a,'s) reader = Reader of ('s -> 'a)
  type ('a,'s) writer = Writer of ('a * 's)
  type ('a,'s) state = State of ('s -> 'a * 's)

The Continuation monad:

type ('a,'r) cont = Cont of (('a -> 'r) -> 'r)

A partial monad:

type ('a,'e) partial = Ok of 'a | Error of 'e

A non-deterministic monad:

type 'a nondeterm = Nil | This of 'a * 'a nondeterm

As you can see, many monads are indeed closures under the hood. This shouldn’t surprise us, as a monad is a functor and a closure is a versatile tool; in fact, you can implement all data structures, like list, option, map, etc., using only closures. You can even go further and implement natural numbers as functions.

So why do we need a mysterious monad instead of closures? All the data types above can be circumscribed by the concept of a monad. As you may see, the representations are rather different, so we can’t generalize over them using some ordinary mechanism of the language, e.g., a parametric type, a functor, or anything like that. However, people noticed that in all cases we use the same pattern: we have a return function of type 'a -> 'a t and a bind function of type ('a -> 'b t) -> ('a t -> 'b t) (alternatively, we may find a join operation more natural, i.e., 'a t t -> 'a t).
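
For example, here is the pattern instantiated for the list type (a minimal sketch; bind and join are interdefinable):

let return x = [x]
let join mm = List.concat mm
let bind m f = join (List.map f m)   (* and join mm = bind mm (fun m -> m) *)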

Thus a monad is just a software design pattern that may have different implementations: closures, trees, lists, whatever. Unlike most design patterns, which you usually implement from scratch, Monad already has a few prebuilt implementations that you may tailor to your needs, so it’s quite rare that you really need to build your own monad from scratch. The Monads library provides a set of parametrized monads that should cover something like 99% of needs, though it will never be complete because the concept is very generic.

In other words, a monad is an abstract algebraic structure, like a group. A group is a set equipped with one operation that has certain properties (associativity, an identity element, inverses). A group has an infinite number of implementations, as simple as the integers with addition and as complex as Galois and Lie groups. The same goes for the monad structure: it is a functor T on a category C (the category of values) equipped with two operations, a unit operation that brings a value into the monad, of type C -> T(C), and a join operation that collapses the monad, of type T(T(C)) -> T(C), together with rules that require associativity and that the unit act as an identity. This gives us a very general and abstract definition, and many actual theories fit nicely into this abstraction and have these properties. A way to understand this abstract definition is to think of monads as computations that yield values of type C. The join operation defines the reduction rules, and the unit operation embeds values into pure computations (pure is yet another synonym for return, as is unit). This idea fits nicely into a substitution monad (one that is not in the Monads library), which defines a small-step (reduction) semantics of some expression language.
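
Spelled out with return and bind, these rules are the standard monad laws:

bind (return x) f  =  f x                                (left identity)
bind m return      =  m                                  (right identity)
bind (bind m f) g  =  bind m (fun x -> bind (f x) g)     (associativity)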

To summarize, a monad is an abstract structure. Recognizing that some algebra is a monad allows us to express the properties of that structure concisely. More important, every abstraction opens an avenue for generalization. We can take an algorithm and generalize it with a monad, without even knowing what this monad will be, except that it will be a structure that obeys certain rules and has the specified signature. This makes our algorithm applicable and reusable in many very different contexts.

7 Likes

I probably went too far in trying to explain what a monad is, and finally caught myself in the trap of all monad tutorials that try to explain what monads are, instead of what monads are for. In the Monads library tutorial, I was trying to shift this perspective and was focusing on the Ding für uns notion instead of Ding an sich; I’m referring here to Immanuel Kant’s Critique of Pure Reason, which criticized attempts to reason about things apart from their applications, and argued that the understanding of a thing should be derived from its usage. That’s why in the Monads library tutorial I describe the idea of a monad through its usage. And we can extend this even more by saying that instead of focusing on abstractions, we should focus on their usage, i.e., on generalization. What a monad gives us in this context is that we generalize our algorithm by allowing a user to specify the behavior of the semicolon operator and of variable bindings. This semicolon-plus-binding structure is precisely a monad. The benefit of using the monad abstraction is that our generalized algorithm lets a user insert their code in between the semicolons, where each semicolon is an extension point of our algorithm. Basically, at each step of the algorithm we are asking the user what we should do next.
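
To illustrate the programmable-semicolon idea, here is a tiny sketch using OCaml’s binding operators (available since 4.08) with the option monad; safe_div is just a made-up example:

(* the user-supplied bind decides what happens "between the semicolons" *)
let ( let* ) m f = match m with None -> None | Some x -> f x

let safe_div x y = if y = 0 then None else Some (x / y)

let example a b c =
  let* q = safe_div a b in   (* each step may abort the whole computation *)
  let* r = safe_div q c in
  Some (q + r)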

9 Likes

First time I’ve seen Kant mentioned in an explanation of monads. :stuck_out_tongue:

1 Like

Thanks again @ivg. All of this is helpful. I had looked at the tutorial, too, but some of the things that you said above were more helpful for me.

I think I understand nearly everything that you wrote in all of the posts, and in the tutorial, and in everything else that I’ve read about monads in the last year or so. My suspicion is that for many people (me, at least), the only thing that will really give a feel for the value of monads would be working through an extensive project, either as a series of exercises, or reading through and playing with someone else’s code. The problem as I see it is that (a) monads are a very general abstraction over things that one could do without monads, and (b) they impose an additional conceptual cost, and a coding cost, in that everything has to be reformulated in monad form, rethought in monad form, and some code would have to be rewritten. It sounds to me like the real benefit of monads (outside of a language like Haskell) comes only in projects where there are moderately complex general algorithms that need to apply to different types. So one (or at least some of us) needs a project like that to make it worth learning to think monadically, but also, one can’t really learn to think monadically without doing a project like that! And there’s little reason to incur the costs otherwise. For me, I don’t yet feel that the benefit would be worth the costs. (I had hoped that my current little mini-project would provide a way to start. Your feedback helped me to see that I don’t want to go the monad route yet, but that was very helpful.)

It should be clear that this isn’t a criticism of monads or of what you’ve written @ivg. It’s actually partly based on what you wrote. What I wrote is more of an attempt at a diagnosis of what makes monads “difficult” in some sense (even though they’re also simple in another sense) for many people.

1 Like

I totally agree with you. In fact, though I’m kind of a monad expert, and I do use monads in real life, I’m definitely not a proponent of using the monadic style everywhere, and I believe that in most settings there are solutions that fit better. Monads introduce significant performance and readability costs and sometimes make your code over-generalized. You should also understand that monads are useful in libraries, not in application code. Suppose you’re designing some complex recursive data structure, such as a tree, a graph, or maybe just a list. The first thing is that you will find yourself implementing different sorts of iterators over this data structure, e.g., fold, fold_until, iter, find, find_map, etc. All these iterators apply a user function (computation) recursively to your data structure. They differ only in how the user state is provided and passed, and in how the recursion is applied. And it happens that this is exactly what the Monad concept captures, so we can provide just one function that will fit all these iterators, e.g., for the list data type:

open Monads.Std

module Visitor(Monad : Monad.S2) = struct
  open Monad.Syntax
  let rec run xs ~f = match xs with
    | [] -> Monad.return ()
    | x :: xs -> f x >>= fun () -> run xs ~f
end

And now we can parametrize this generic visitor with arbitrary monads to get different behaviors. For instance, the Identity monad will give us a plain iter; the State + Maybe monad will give us find, find_map, and fold_until; and plain State will give us an ordinary fold. For example, this is how we can derive fold from the above definition:

module Fold = Visitor(Monad.State)

let fold xs ~init ~f =
  let open Monad.State.Syntax in
  Monad.State.exec (Fold.run xs ~f:(fun x ->
         Monad.State.update (fun s -> f s x))) init

And yes, it looks overcomplicated for such a simple case as the list data structure. But if you are implementing more complex data structures, like program representations or graphs, it starts to pay off very soon. Especially since a monad allows an even bigger set of behaviors, like backtracking, IO, error handling, and logging (don’t want to decide how to handle errors, or which logging facility to use? just wrap everything into a monad and let the user choose).

To summarize, monads are good in generic libraries, especially for complex abstract data types, as you may define the structure of the datatype and let a user define the semantics.

9 Likes

Thanks @ivg ! That makes a lot of sense.

I like it, this book is like a Bible to me :slight_smile:

I’ve read someone who said (can’t remember who :wink: ) something like: “I also have a strong belief that programming is a prodigal son of mathematics, and will be happy to see the Return of the Prod”; and I can say that every Kantian is waiting for the time when every mathematician will throw away most of what Frege said about mathematics and logic, and come back home to the Kantian philosophy of mathematics.

Nevertheless, I’m not sure that the concept of Ding an sich, the thing in itself, or chose en soi as we say in French (which is a fundamental notion in Kant’s philosophy), is of any interest in this situation. I find it more useful for physicists (most of them, like most human beings, think that reality is a thing in itself) than for computer scientists.

I think that the chapter on conceptions in Kant’s logic could be more useful (I already used it in a French forum to explain to people more familiar with OOP why, in OCaml, subclasses are not necessarily subtypes), especially the part on genus and species and the one on “the use of conceptions in the abstract and in the concrete”:

For instance, int option is used concretely relative to 'a option, and 'a option may be used concretely relative to the conception of a monad. The general definition of a monad is this signature:

module type Monad = sig
  type +'a t
  val return : 'a -> 'a t
  val bind : 'a t -> ('a -> 'b t) -> 'b t
end

which we can instantiate concretely with the option type:

module Opt : Monad with type +'a t = 'a option = struct
  type 'a t = 'a option
  let return x = Some x
  let bind m f = match m with
    | None -> None
    | Some x -> f x
end
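
And a small usage example (safe_hd and positive_hd are just illustrative names):

let safe_hd = function [] -> None | x :: _ -> Some x

(* Some x when the list has a positive head, None otherwise *)
let positive_hd xs =
  Opt.bind (safe_hd xs) (fun x -> if x > 0 then Opt.return x else None)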

And as you illustrate with your visitor pattern code, we can apply the conception of monad “to many things but there is less contained in it”.

@mars0i: you may be interested in reading the article “Functors, Applicatives, And Monads In Pictures”, or the more detailed chapters in the book “Learn You a Haskell for Great Good!”.

An off-topic point: since we are speaking about Kant, the Critique of Pure Reason, and the conception of a monad, I would like to mention Kant’s refutation of Leibniz’s monadology.

Here the distinction between phenomena and things in themselves is of fundamental importance, and we find it in the distinction between physical and structural equality in OCaml:

type 'a leibniz = L of 'a

L 1 == L 1;; (* Kant's phenomenal point of view *)
- : bool = false

L 1 = L 1;; (* Leibniz non-discernibility point of view *)
- : bool = true
6 Likes

I was actually thinking more about the process of cognition of the Monad concept (the teaching and understanding framework), not about the Monad itself, and I was trying to say that we should shift our attention to a posteriori knowledge, as conceiving the noumenon of Monad is difficult, if not impossible, if it is described a priori and analytically. For example, when I was a young student, I was taught mathematics in a very abstract and axiomatic way (at the time when I was studying, the Russian mathematical school was severely influenced by the Bourbaki group). It was really hard for me to get a real understanding or working knowledge of totally abstract ideas expressed as a set of axioms (i.e., analytically). I basically started to understand the concepts many years later, when I started to use them in real life as a working engineer. So, I agree that I extended the notion of Ding an sich a little bit beyond the original Kantian definition. Though to myself, I envision all these dichotomies, like “thing in itself” vs. “thing for us”, “analytic vs. synthetic”, “abstract vs. concrete”, as categories connected with adjoint functors, which actually means that even reasoning about a monad is a monad. But seriously, we indeed can’t prefer abstract to concrete, or concrete to abstract, as neither is better than the other; though what I’m seeing, especially in modern mathematics, is a severe imbalance in the direction of abstraction, while the dual side doesn’t get equal attention.

With all that said, I totally agree that the idea of conceptions can and should be used to illustrate the difference between a monad as an abstraction and a monad as an implementation. However, abstraction has another dual, generalization, and I’m focusing on this dichotomy.

Concerning Leibniz equality, it’s a funny coincidence that I’m currently working on a hashconsing library that preserves the discernibility of structurally equal values and tries to remove any notion of physical equality. The technical problem with physical equality is that L 1 == L 1 is not always false; it is implementation specific, and can easily become true under flambda or another backend, such as BuckleScript (for example, as a result of a hashconsing optimization). That means that we can’t rely on physical equality to distinguish between two drops of water, i.e., observably identical objects in different places, as the compiler never took on the responsibility of respecting an object’s personal space.

3 Likes

I guess I should out myself here: I am a philosophy professor. (It wasn’t a secret–only a few clicks away.) I’ve been enjoying the philosophical analogies in this discussion.

(If anyone was interested, I could start a new thread on how Bertrand Russell’s early metaphysics shows what’s wrong with a common argument for OOP. :slight_smile: )

5 Likes

Indeed.

I’ll be delighted to hear what you have to say about this. I don’t think that the ideas behind OOP are wrong, but the problem is that most OOP languages don’t handle the distinction between subclasses and subtypes well, because they don’t take care of variance problems (covariance, contravariance, and invariance) with type parameters in type constructors. The main structural principle of the subtyping relation is this one:

M <: P    S <: M
-----------------
    S <: P

which is only a translation, in Gentzen style, of the first figure Barbara in Aristotelian syllogistic:

  • All men are mortal. (M <: P)
  • All Greeks are men. (S <: M)
  • All Greeks are mortal. (S <: P)

Maybe I will have to argue that what is wrong is the predicate calculus (that’s what I was referring to when I said to throw away what Frege said about logic).

Note that there is only one metaphysic: the Kantian one! :stuck_out_tongue:

4 Likes

I’m not familiar with these distinctions. (Pointers?)

Well, all right. This thread has gone far away from its origin anyway.

This is not about sub(classes,types) but about attributes and methods. It’s a simple point. And you don’t need Russell to see it, but I like the connection.

Sometimes people have argued that mainstream OOP (Java, C++, Smalltalk, not CLOS) is a good way to organize code that models processes in the world, because things have properties, and so it’s good to have objects with attributes and methods attached to them. Things have sizes, genders, amounts of money, and collections of employees. That’s how the world is, so let’s reflect that in our code. I think I read an argument like this in Booch’s Object-Oriented Analysis and Design with Applications back in its bible days. (Or is it still a bible?)

In The Problems of Philosophy, Bertrand Russell argued, against some earlier philosophers, that there are not only monadic properties in the world (attributes) but also relations, such as taller-than, is-standing-between, etc. The above argument for mainstream OOP ignores this fact and makes you represent relations as attributes and methods, which is an unnatural way to model things and can sometimes complicate your code. You don’t need a philosopher to see this point, though. I think it’s obvious the first time you see something like

myBigDecimal.equals(yourBigDecimal)

But there you have it: Russell on OOP.

2 Likes

No problem, it’s not so difficult to understand. The subtyping relation lies in this question: I have a function which expects something of type A; can I give it something of type B? If so, we say that B is a subtype of A. For instance, human being is a subtype of animal: every human being is an animal. This is the way we organize our thought: we have a hierarchy of concepts (or types), not a hierarchy of objects (in the sense of OOP). The subtyping relation is what Kant called a categorical judgment (a relation between a subject and a predicate).

Now, take a look at type constructors (like option in OCaml): if a is a subtype of b (a <: b), is a option a subtype of b option? The answer is yes; we say that the option type constructor preserves the subtyping relation: it is covariant in its parameter. On the other hand, when a type constructor reverses the subtyping relation, we say that it is contravariant in its parameter. An example of such a type constructor is 'a -> int. Indeed, suppose you have a function which returns an int if you pass it an animal (animal -> int); then it is also a function which returns an int if you pass it a cat (cat -> int); therefore animal -> int <: cat -> int (but cat <: animal). Finally, when a type constructor is neither covariant nor contravariant, we say it is invariant (like the array type constructor in OCaml).
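
A couple of small examples with explicit coercions may help; I use polymorphic variants, where [ `A ] is a subtype of [ `A | `B ]:

(* covariance: the coercion goes through the option constructor *)
let x : [ `A ] option = Some `A
let y = (x :> [ `A | `B ] option)

(* contravariance: a function accepting the supertype can be used
   where a function accepting the subtype is expected *)
let f : [ `A | `B ] -> int = function `A -> 0 | `B -> 1
let g = (f :> [ `A ] -> int)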

In OCaml, we can put variance annotations in the definition of a type constructor (when I wrote type +'a t in the definition of a monad, the + means that the constructor has to be covariant in its parameter). Examples:

type -'a contra = 'a -> int
type +'a  cova = int -> 'a
type (-'a, +'b) app = 'a -> 'b

As you can see, the function type constructor (->) is contravariant in its first parameter, but covariant in its second parameter. Therefore, if we set 'a = 'b, the type 'a -> 'a is invariant in its parameter. And there lies the problem with inheritance between classes when you have binary methods (like the equals of your example :wink: ).

Let’s see that 'a -> 'a is invariant:

type +'a inv = 'a -> 'a;;
Error: In this definition, expected parameter variances are not satisfied. 
       The 1st type parameter was expected to be covariant, 
       but it is injective invariant.

type 'a inv = 'a -> 'a;;
type 'a inv = 'a -> 'a

And now, let’s see the problem with inheritance and subtyping when you have a binary method. I will consider vectors in 1 or 2 dimensions with an add method:

class virtual vect = object ( _ : 'a)
  method virtual add : 'a -> 'a
end

class vect1d x_init = object
  inherit vect
  val x = x_init
  method get_x = x
  method add v = {< x = x + v#get_x >}
end

class vect2d x_init y_init = object
  inherit vect1d x_init
  val y = y_init
  method get_y = y
  method add v = {< x = x + v#get_x ; y = y + v#get_y >}
end

The class vect2d inherits from the class vect1d but is not a subtype of it, because the type of the add method is invariant in its parameter. In OCaml we can coerce a value v : a to a supertype b of a with the notation (v :> b).

let v = new vect2d 1 3;;
val v : vect2d = <obj>

(v :> vect1d);;
Error: Type vect2d = < add : vect2d -> vect2d; get_x : int; get_y : int > 
       is not a subtype of vect1d = < add : vect1d -> vect1d; get_x : int >
       Type vect1d = < add : vect1d -> vect1d; get_x : int >                    
       is not a subtype of vect2d = < add : vect2d -> vect2d; get_x : int; get_y : int >

To be a subtype of vect1d, the type of method add of vect2d must be a subtype of the type of method add of vect1d. But the former has type vect2d -> vect2d, and the latter has type vect1d -> vect1d. Therefore, we should have vect2d <: vect1d and vect1d <: vect2d, since the function type constructor (->) is covariant in its second argument and contravariant in its first. And so, we should have vect1d = vect2d, which is obviously not the case.

If you want more details about the object system of OCaml, @lpw25 wrote a great chapter on this subject in RWO (especially the section on subtyping). This topic is also treated formally in Coq, in two chapters of the book Software Foundations.

Hope this explanation could be useful.

3 Likes

@kantian, thank you!

Still finding time to digest it all. Seems very clear. Might have questions later.

Why? I’m not yet seeing the rationale for the last claim. (Thanks again.)