Law of Demeter on module signatures?

When OCaml-ing, I’ve often ended up with a lot of chains of modules that look like this:

module Foo : sig
  type t
  val is_quux : t -> bool
end

module Bar : sig
  type t
  val foo : t -> Foo.t
end

module Baz : sig
  type t
  val bar : t -> Bar.t
end

Usually this means that Foo is an abstract data type, Bar is a bigger type that contains one instance of Foo inside it, and similarly for Baz. Often, in these cases, I’ll get a Baz.t and want to know if its inner Foo.t is_quux, so:

  baz |> Baz.bar |> Bar.foo |> Foo.is_quux

This works, but my inner object-oriented programmer is dissatisfied because this set-up means that I have to couple the concept of getting is_quux on a Baz.t to the existence of modules Baz, Bar, and Foo, and the implicit knowledge that each Baz has a Bar and each Bar has a Foo. So, I might be tempted to add these to my modules (and corresponding signatures):

(* in module Bar *)
let is_quux (t : t) : bool = t |> foo |> Foo.is_quux
(* in module Baz *)
let is_quux (t : t) : bool = t |> bar |> Bar.is_quux

This is basically applying Law of Demeter reasoning to OCaml. I might even go further and make a functor that, given the appropriate projection from bigger module to smaller module, builds a set of forwarding functions of this sort.
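To illustrate, here's a minimal sketch of what such a forwarding functor could look like. All the names here (Quuxable, Forward_quux, project) are hypothetical, and the toy Foo/Bar at the bottom just back the type with a bool so the whole thing compiles:

```ocaml
(* The capability we want to forward through layers of wrapping. *)
module type Quuxable = sig
  type t
  val is_quux : t -> bool
end

(* Given an inner Quuxable and a projection into it, build the
   forwarding function on the outer type. *)
module Forward_quux
    (Inner : Quuxable)
    (P : sig type t val project : t -> Inner.t end)
  : Quuxable with type t = P.t = struct
  type t = P.t
  let is_quux t = Inner.is_quux (P.project t)
end

(* Toy instantiation: Foo is just a bool, Bar wraps a Foo. *)
module Foo = struct
  type t = bool
  let is_quux (b : t) = b
end

module Bar = struct
  type t = { foo : Foo.t }
  let foo x = x.foo
end

module Bar_proj = struct
  type t = Bar.t
  let project = Bar.foo
end

module Bar_quux = Forward_quux (Foo) (Bar_proj)
```

With this, `Bar_quux.is_quux` does the drilling once, and a `Baz`-level version would be one more application of the functor.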

My question is this: in the context of OCaml, is this a good idea? It feels like I’m overstraining the definition of a module interface by pushing things into it that technically belong to another module’s interface. The signatures I’m generating are therefore very large and unwieldy. However, it feels like stringing huge chains of ‘get the bar, then the foo’ through my program makes it hopelessly brittle to changes in the module structure later on.

With your specific example, I don’t quite understand why those functions are needed. If you want to express type equalities across modules, you can do so directly.

module Foo : sig
    type t
    val is_quux : t -> bool
end = struct ... end

module FooExt : sig
    type t = Foo.t
    val is_boop : t -> bool 
end = struct ... end

let f (x : FooExt.t) : bool = FooExt.is_boop x || Foo.is_quux x

I’m guessing maybe the problem is arising due to more complicated interactions between the modules? Or I’m totally missing the point somehow. If you could provide a bigger/more realistic example, that might help in providing better guidance.

Ah, it wasn’t particularly obvious in that example, but the three modules don’t have equal types; the situation I was thinking of was something like:

module Foo = struct
  type t = { maybe_quux: Quux.t option }
  let is_quux x = Option.is_some x.maybe_quux
end

module Bar = struct
  type t = { foo: Foo.t; more_stuff: Blah.t }
  let foo x = x.foo
end

(* etc *)

Each module’s type is basically nesting another module’s type with some additional information. As such, my question is whether it makes sense to export bits of the inner types’ signatures in the outer types, to avoid having to manually drill down the composition to find the specific subcomponent to which the bit belongs.

(I’ll try to haul out a concrete example once I’m near a proper keyboard 🙂)

Ah, got it! Now I understand the problem much better. I’m guessing the problem is arising due to a combination of the following factors:

  • All the modules Foo, Bar, Baz are public.
  • All of their corresponding t types are abstract, so you have the projection functions.

And the question is, should you add corresponding functions for Bar and Baz so the consumer doesn’t have to think about what projections to use…

So, there are three options, as I see it:

  1. Provide forwarding functions but not provide projection functions.
  2. Provide projection functions but not forwarding functions.
  3. Provide both forwarding and projection functions.

Option 1 gives you more flexibility as the API designer, because you can swap out the implementation. Option 2 gives more flexibility to the API consumer, because they can use every function available for Foo directly (if you provide a projection function, Foo must necessarily be available to actually do something useful). Option 3 seems like the worst option, as it creates redundancy in the API for potentially little gain, and leaves you with less flexibility as the API designer.
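To make the trade-off concrete, here’s a sketch of options 1 and 2 side by side. Everything here is a stand-in (a bool-backed Foo, invented `make` constructors), not the poster’s real types:

```ocaml
module Foo = struct
  type t = bool
  let is_quux (b : t) = b
end

(* Option 1: forwarding only. Bar's use of Foo stays an implementation
   detail, so the designer can swap it out later. *)
module Bar_opt1 : sig
  type t
  val make : Foo.t -> t
  val is_quux : t -> bool
end = struct
  type t = { foo : Foo.t }
  let make foo = { foo }
  let is_quux x = Foo.is_quux x.foo
end

(* Option 2: projection only. The consumer composes with Foo themselves,
   and gets every Foo function for free. *)
module Bar_opt2 : sig
  type t
  val make : Foo.t -> t
  val foo : t -> Foo.t
end = struct
  type t = { foo : Foo.t }
  let make foo = { foo }
  let foo x = x.foo
end
```

Note that in option 1 nothing in `Bar_opt1`’s signature mentions Foo at all, which is exactly the flexibility being bought.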


It feels like I’m overstraining the definition of a module interface by pushing things into it that technically belong to another module’s interface. The signatures I’m generating are therefore very large and unwieldy.

Makes sense. Generally, you don’t want to re-export another module’s functions (or wrap them in something else and re-export them), unless (a) you have a top-level module which is only present to re-export several modules together, or (b) the particular combination is going to be used a lot.
This creates a maintenance burden for potentially little gain. Adding things in the future is much easier than removing things.

However, it feels like stringing huge chains of ‘get the bar, then the foo’ through my program makes it hopelessly brittle to changes in the module structure later on.

If a particular combination is used often, the API consumer can always define a short function for this in their own library. This is plain DRY, so if things change in the future, it isn’t a big deal.
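In other words, the chain lives in exactly one place in the consumer’s code. A sketch, with toy stand-ins for the Foo/Bar/Baz modules from the question:

```ocaml
(* Toy stand-ins: Foo is a bare bool, Bar and Baz each wrap one level. *)
module Foo = struct
  type t = bool
  let is_quux (b : t) = b
end

module Bar = struct
  type t = { foo : Foo.t }
  let foo x = x.foo
end

module Baz = struct
  type t = { bar : Bar.t }
  let bar x = x.bar
end

(* Consumer-side helper: if the module structure changes later,
   this is the one definition that needs fixing. *)
let baz_is_quux baz = baz |> Baz.bar |> Bar.foo |> Foo.is_quux
```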

It’s interesting you mention this. Golang has a concept, struct embedding, that addresses this:

type T struct {
    a int
}

type U struct {
    T
    b int
}

Now, for u of type U, “u.a” makes sense, and does what you expect. It is internally rewritten to “u.T.a”. Similarly with methods on T. It is a (uh) halfhearted attempt at inheritance (of a sort). Because other than that, Golang doesn’t have any. And no virtual base classes or virtual methods.

OK, enough of the digression. I could be wrong about this (maybe very wrong), but I feel that ML-like languages don’t try hard to be DRY, nor do they work hard on obeying the Law of Demeter (heh, never knew about this law). Instead, they try to help the programmer produce such strict typings that, when something changes in one place, the compiler naturally leads the programmer to all the other places where related things need to be fixed. Though maybe not: I hear that Haskell’s “Scrap Your Boilerplate” is all about being DRY. Then again, Haskell’s typeclasses are a bit O-O, aren’t they? [not a Haskell programmer, can’t comment]

I guess what I’m suggesting is, this pattern of a type Foo, which is included as a member in type Bar, which is included as a member in type Baz … in type Argle, seems to be a particularly O-O kind of thing. Um … I don’t think it comes up all that often in ML settings.

OK. That’s not very helpful, is it? What I’m saying is, I doubt ML can help you much unless you reach for OCaml’s objects. Or, maybe find a way to restructure your types to eliminate this pattern (again, that’s not “ML helping you”).

I think your question here is essentially impossible to answer because Foo, Bar and Baz are placeholder names without any inherent semantics. Whether you should have is_quux in Baz depends on what Baz and is_quux actually are.

For instance, it is easy to create an example where you probably should not provide such a function:

module Name : sig type t val is_capitalised : t -> bool end
module Class : sig type t val name : t -> Name.t end

Here it makes no sense to ask “is this class capitalised?” – it only makes sense to ask “is this class’s name capitalised?” – so there should not be an is_capitalised function in the Class module.

On the other hand, something like this probably should have the function:

module TypeDesc : sig
  type t =
    | Tvar of string
    | Tuple of t list
    | Int
  val is_tvar : t -> bool
end
module Type : sig
  type t
  val desc : t -> TypeDesc.t
  val loc : t -> Location.t
  val is_tvar : t -> bool
end

Here the splitting of Desc apart from Type is a detail that most uses probably don’t want to think about, so it makes sense to provide functions like is_tvar in the Type module as well. In particular, it makes loads of sense to ask the question “is this type a type variable?” so the function’s existence makes perfect sense.
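For illustration, one plausible way the Type module above could implement that forwarding (the `make` constructor and the bare-bones Location record are inventions for the sake of a self-contained example):

```ocaml
(* Minimal stand-in for a source-location type. *)
module Location = struct
  type t = { line : int; col : int }
end

module TypeDesc = struct
  type t =
    | Tvar of string
    | Tuple of t list
    | Int
  let is_tvar = function Tvar _ -> true | _ -> false
end

module Type : sig
  type t
  val make : TypeDesc.t -> Location.t -> t
  val desc : t -> TypeDesc.t
  val loc : t -> Location.t
  val is_tvar : t -> bool
end = struct
  type t = { desc : TypeDesc.t; loc : Location.t }
  let make desc loc = { desc; loc }
  let desc t = t.desc
  let loc t = t.loc
  (* Forward to the inner module, so callers needn't know that the
     descriptor lives in a separate type. *)
  let is_tvar t = TypeDesc.is_tvar t.desc
end
```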

Ah, fair point. Here’s a slightly paraphrased but more concrete example (which, in hindsight, I should’ve given straight-up to avoid possible XY problems):

(** A bundle of information that describes how to interact with a compiler. *)
module type Compiler_spec = sig
  type t
  val prog : t -> string (** Name of the compiler program *)
  val argv : t -> string list (** Arguments always supplied to this compiler *)
  val emitted_arch : t -> string (** Name of the architecture this compiler emits eg. x86 *)
end

(** A combination of a compiler specification and an ID used to refer to it in configuration files etc. *)
module type Compiler_spec_with_id = sig
  type t
  val spec : t -> Compiler_spec.t (** The spec proper. *)
  val id : t -> string (** The ID of the compiler. *)
end

In this case, the question is whether I should add something like include Compiler_spec with type t := t to Compiler_spec_with_id, and possibly remove the projections. Here, the semantics of Compiler_spec_with_id is that it’s exactly just a Compiler_spec with more information tagged on, and most of the code will want to access that information and not the ID.
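For reference, here’s a minimal sketch of what that include-based variant could look like; the `Spec_with_id` struct at the bottom is a stub implementation I’m inventing just to check the signatures compile:

```ocaml
module type Compiler_spec = sig
  type t
  val prog : t -> string
  val argv : t -> string list
  val emitted_arch : t -> string
end

module type Compiler_spec_with_id = sig
  type t
  (* Pull in prog/argv/emitted_arch directly, substituting our own t. *)
  include Compiler_spec with type t := t
  val id : t -> string
end

(* Stub implementation, for illustration only. *)
module Spec_with_id = struct
  type t =
    { prog : string
    ; argv : string list
    ; emitted_arch : string
    ; id : string
    }
  let prog t = t.prog
  let argv t = t.argv
  let emitted_arch t = t.emitted_arch
  let id t = t.id
end

(* Compile-time check that the stub matches the signature. *)
module _ : Compiler_spec_with_id with type t = Spec_with_id.t = Spec_with_id
```

With this shape, there are no projections at all: the consumer calls `prog`, `argv`, etc. directly on the combined type.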

(In hindsight, this particular example is an artefact from when I was first starting out with OCaml, and, given that the ‘attach ID to item’ pattern has appeared multiple times in the same project, I’d now do something like:

module With_id : sig
  type 'a t
  val data : 'a t -> 'a
  val id : _ t -> string
end

and take the trade-off of increased indirection over exposing the right abstraction. Of course, I could then keep Compiler_spec_with_id but set type t = Compiler_spec.t With_id.t and include all the forwards anyway, but I suspect that’d be overkill.)
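For completeness, that possibly overkill combination could be sketched like this. The record implementations and the `make` constructor are invented stubs, not the real project code:

```ocaml
(* Generic ID-tagging wrapper. *)
module With_id = struct
  type 'a t = { data : 'a; id : string }
  let make ~id data = { data; id }
  let data t = t.data
  let id t = t.id
end

(* Stubbed-out spec with just the three accessors from the question. *)
module Compiler_spec = struct
  type t = { prog : string; argv : string list; emitted_arch : string }
  let prog t = t.prog
  let argv t = t.argv
  let emitted_arch t = t.emitted_arch
end

module Compiler_spec_with_id = struct
  type t = Compiler_spec.t With_id.t
  let id = With_id.id
  (* The forwards: each Compiler_spec accessor composed with the
     projection out of With_id. *)
  let prog t = Compiler_spec.prog (With_id.data t)
  let argv t = Compiler_spec.argv (With_id.data t)
  let emitted_arch t = Compiler_spec.emitted_arch (With_id.data t)
end
```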