Modular Implicits

Given that there is a working prototype of modular implicits, is the problem simply proving principality in the general case, or have actual problematic examples been found?

The problem isn’t really with proving things. The prototype is just that – a prototype – not a full implementation. Plenty of parts are not yet implemented, and most of those parts are not yet precisely specified. Producing that precise specification, and making sure it has predictable inference, is the work that remains. Obviously the most reassuring way to “make sure that it has predictable inference” is to prove it – but we’re not at that point yet.


I’m curious as to whether there’s any news on Modular Implicits…


We (a subgroup of OCaml developers including Leo and myself) met in November to discuss many things, including modular implicits. Leo’s plan is fairly ambitious: it basically involves making the module system of OCaml as powerful as general dependent types. (This has been done for module systems before, and it is not such a jump from the current system, but it still requires quite some work to implement unification correctly.) There are also some orthogonal implementation questions that are still open for discussion. So more work is required, and since November everyone has been fairly busy with plenty of other things, so I don’t think much progress has been made.

(If you know someone interested in both the metatheory of module systems and implementation questions, willing to do an internship or some other form of short-term contract paid on research salaries, I would wildly guess that there is space for a year of work on these questions.)


Is it possible that a more limited implementation that was not so ambitious but was compatible with a future more general version would be feasible? It would be a shame if it took many years before the feature appeared, though I would understand concerns about not knowing if a simpler version would be syntactically and semantically compatible with a more ambitious version.


Is it possible that a more limited implementation that was not so ambitious but was compatible with a future more general version would be feasible?

Much of the work for a limited version would need to be redone from scratch for the more complete version. Given our very limited manpower, doubling the amount of work we have to do is probably not a good idea. The more limited version would still be a huge amount of work, so it is not as if it would appear any time soon anyway.

My plan is certainly ambitious, but the potential payoffs are pretty large. I think I can give some idea of them with a relatively short description…

The key mechanism of modular implicits is the ability to search for a module from a list of possible modules based on a module type. For example, given:

module type PP = sig
  type t
  val print : Format.formatter -> t -> unit
end

let print {P : PP} ppf x = P.print ppf x
let () = print Format.std_formatter [1; 2; 3]

we will be searching the implicit modules for a module with type:

sig
  type t = int list
  val print : Format.formatter -> t -> unit
end

Now OCaml already knows how to check if a module has a particular module type. So if we just had a finite list of possible modules, it would be easy to check each of them against this module type. However, we also have implicit functors like:

implicit module Print_list {X : PP} = struct
  type t = X.t list
  let print ppf x = Format.pp_print_list X.print ppf x
end

which means that there is an infinite number of possible implicit modules. We require that our solution be unique to avoid ambiguity – so we need to know that all but one module in this infinite collection fails to have the required module type.
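For intuition, here is roughly what a successful search elaborates to, written in today’s OCaml with explicit first-class modules in place of implicit resolution (the module name PP_int and the explicit separator are my own illustrative choices, not part of the proposal):

```ocaml
module type PP = sig
  type t
  val print : Format.formatter -> t -> unit
end

module PP_int : PP with type t = int = struct
  type t = int
  let print = Format.pp_print_int
end

module Print_list (X : PP) : PP with type t = X.t list = struct
  type t = X.t list
  let print ppf xs =
    Format.pp_print_list
      ~pp_sep:(fun ppf () -> Format.pp_print_string ppf "; ")
      X.print ppf xs
end

(* with modular implicits, the module argument below would be
   found automatically by searching the implicit modules *)
let print (type a) (module P : PP with type t = a) ppf (x : a) =
  P.print ppf x

let () =
  print (module Print_list (PP_int) : PP with type t = int list)
    Format.std_formatter [1; 2; 3]
```

This elaborated form is exactly what the search has to discover: that applying Print_list to PP_int yields a module of the required type.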

This requires us to answer questions like: “Is there any module X such that Print_list(X) would have the required module type?”. That is really a unification problem at the module level: essentially, can we unify = Print_list(?X) with sig type t = int list val print : ... end, where ?X is a unification variable and = ... is a singleton type/module alias?

So implementing modular implicits requires us to implement a unification algorithm at the module level. Now, the prototype uses a very naive algorithm which is very weak and quite unpredictable. My more ambitious plan is to implement higher-order pattern unification, of the sort found in languages like Agda, and to do so for the full module language. My plan also includes strengthening the module language with additional equalities – in particular module aliases on functor applications and eta-equality on structures – in order for unification to have some more desirable properties.
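To give a feel for the shape of the problem (though this is nothing like the higher-order pattern unification described above – it is a deliberately naive first-order toy over an invented miniature module-expression syntax):

```ocaml
(* a toy module-expression language: unification variables,
   named modules, and functor applications *)
type mexpr =
  | Var of string            (* a unification variable like ?X *)
  | Path of string           (* a named module like PP_int *)
  | Apply of string * mexpr  (* a functor application F(M) *)

(* apply the current solution once (no chasing of binding chains) *)
let subst env = function
  | Var v -> (try List.assoc v env with Not_found -> Var v)
  | m -> m

(* naive first-order unification: no occurs check, no module-level
   equalities – the real algorithm must handle far more than this *)
let rec unify env a b =
  match subst env a, subst env b with
  | Var v, m | m, Var v -> Some ((v, m) :: env)
  | Path p, Path q -> if p = q then Some env else None
  | Apply (f, m), Apply (g, n) when f = g -> unify env m n
  | _ -> None

(* solving Print_list(?X) against Print_list(PP_int) yields X := PP_int *)
let solution =
  unify [] (Apply ("Print_list", Var "X")) (Apply ("Print_list", Path "PP_int"))
```

A real implementation unifies against signatures rather than syntactic applications, which is where the strengthened equalities (aliases on functor applications, eta for structures) earn their keep.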

In addition to giving a more expressive and predictable implementation of modular implicits, having a powerful unification algorithm for the module system opens up a number of possibilities:

  1. As well as implicit module arguments, we can support inferred module arguments, where the required module is found entirely by unification – allowing parametric polymorphism over a module to go with the ad-hoc polymorphism provided by implicit arguments.
  2. Inferred module arguments, combined with eta-equalities at the module level, allow us to have the equivalent of polymorphic type class instances in Haskell.
  3. Unification is apparently the hardest part of implementing dependent pattern matching. This opens up the possibility of supporting inductive families and dependent matching in the module language.
  4. Similarly, we should be able to support ordinary types parameterised and indexed by modules. Combined with the above, this would allow e.g. a vector type, indexed by a length of module type Nat, defined as you would expect – no more silly games pretending that the types z and 'a s are the natural numbers.
  5. It should allow a full and predictable solution to the problem of “double vision” in recursive modules.
  6. It should also allow the inference of definitions in an .ml file based on the contents of the .mli – so you could just write type t = _ in the .ml and avoid having to write the definitions twice.
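For contrast, point 4’s “silly games” refers to what we do today: uninhabited phantom types z and 'a s impersonate the naturals so that a GADT can track length. A minimal sketch in current OCaml:

```ocaml
(* today's encoding: the uninhabited types z and 'n s stand in
   for the natural numbers at the type level *)
type z
type 'n s

type ('a, 'n) vect =
  | Nil : ('a, z) vect
  | Cons : 'a * ('a, 'n) vect -> ('a, 'n s) vect

let rec length : type a n. (a, n) vect -> int = function
  | Nil -> 0
  | Cons (_, tl) -> 1 + length tl

(* the type alone guarantees this only accepts 3-element vectors *)
let third : type a. (a, z s s s) vect -> a = function
  | Cons (_, Cons (_, Cons (x, Nil))) -> x
```

With a genuine Nat in the module language, the indices would be actual values of a sum type rather than these stand-in type constructors.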

Of course all of those are large amounts of work, and they are not going to be included as part of the modular implicits project, but I think that they definitely help to justify having a more complete solution to the unification issue in modular implicits.

The curious can see a formal description of a module system that includes the “additional equalities” I mentioned here.

As Gabriel said, everyone is very busy at the moment, so not much has been done since November. Personally, I’ve been using my free time on the algebraic effects work, but I hope to get back to looking at implicits in the second half of the year.


Honestly, this sounds like a completely different language (though compatible with OCaml). This new language should get a completely new standard library and will enforce different paradigms of programming, i.e., using lots of abstractions and making everything generic. Probably it shouldn’t be named OCaml anymore – just a very powerful language that is compatible with OCaml.


It would be difficult to find a name for it. It would be a mix between a camel, a lion (leo in Latin) and a rooster (coq in French). A very strange animal, isn’t it? :smiley:


@lpw25 This sounds amazing and totally awesome, but wouldn’t a language that allowed only searching finite lists of modules, and not implicit functors, still be a significant win over what we have now? You don’t need unification for that, so maybe it wouldn’t require an enormous amount of work on unification that would later be thrown away? It would still allow for a lot of interesting day-to-day programming and make everyone’s life easier.

I’m not suggesting that the full plan not be executed, just that it is easier to eat an elephant one piece at a time. Also, the presence of a mechanism to compete with Haskell’s type classes would bring more labor to the OCaml ecosystem, and might make it easier to execute on the full program. If you do it one piece at a time, then each time you make the language richer you’ll have made it more useful to people, you’ll bring more people into the community, and that will make it that much easier to do the next piece, because there’s a richer ecosystem and labor pool.

My experience in other environments is that if it takes too many years to get a simple but very useful version of a feature because the totally awesome version of it requires more work, sometimes this means that you never get the community strong enough to sustain the totally awesome vision. It’s always nice to look for a way that allows incremental steps.


wouldn’t a language that allowed only searching finite lists of modules and not implicit functors still be a significant win over what we have now?

I don’t think so. It rules out using modular implicits with type constructors, and I’m not aware of any use case where that restriction wouldn’t make them unusable.
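Concretely: with only a finite list of implicit modules, every instantiation of a type constructor needs its own hand-written module (all names below are invented for illustration):

```ocaml
module type PP = sig
  type t
  val print : Format.formatter -> t -> unit
end

(* one implicit module per concrete type... *)
module PP_int = struct
  type t = int
  let print = Format.pp_print_int
end

(* ...and then one more for every instantiation of list *)
module PP_int_list = struct
  type t = int list
  let print = Format.pp_print_list PP_int.print
end

module PP_int_list_list = struct
  type t = int list list
  let print = Format.pp_print_list PP_int_list.print
end
(* and so on for float list, string list, int list list list, ... –
   whereas a single implicit functor Print_list covers all of them at once *)
```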


Is there any other possible intermediate goal that could be attempted that would permit incremental progress?

I don’t think there is anything that easily allows incremental progress. You could make a weaker version of the unification – but then you are really creating work that must be done twice. You could try to leave out some convenience features – but they mostly don’t take much extra effort and leaving them out risks damaging people’s initial impression of what implicits are like to use.

One way to think of the problem is that we are, in a way, adding a single new operation to the language. It is a lot of work because all the cases in the language must be handled by the operation, but that work can’t really be subdivided into smaller blocks because the operation needs to be completely defined before it works at all.


Could you provide a signature for a list with its length as a type parameter? How would it look with modular implicits, and how close would it be to the dependent types we have in F* or Coq?


Well, I’m just making up syntax as I go here – as I said these are possibilities opened up by having a full unification algorithm, not concrete plans within the modular implicits project – but I imagine something like:

module type Nat =
  | Zero
  | Succ of Nat

type ('a, 'L : Nat) vect =
  | Nil : ('a, Zero) vect
  | Cons : 'N : Nat. 'a * ('a, 'N) vect -> ('a, Succ('N)) vect

module rec Plus (X : Nat) (Y : Nat) <: Nat =
  match X with
  | Zero -> Y
  | Succ(X) -> Succ(Plus(X)(Y))

val append : <'N : Nat> -> <'M : Nat> -> ('a, 'N) vect -> ('a, 'M) vect -> ('a, Plus('N)('M)) vect

There are all kinds of design decisions around syntax that would need to be resolved, but conceptually the above should work.

How close would it be to the dependent types we have in F* or Coq?

It would probably be pretty close to the core of those languages, although they obviously have all kinds of additional features. It’s worth noting that the module language is already a dependently typed language, just a crippled and extremely inconvenient one. For example, you can already construct Girard’s paradox in it.


Beautiful !

About the first part of your comment, I have a question: isn’t it strange to call Zero a module (or any other module of type Nat)? There is no value (nor type) defined in it. Such modules are like the empty module but with another module type.


There is no value (nor type) defined in it

I think here we need to distinguish “modules” from “structures”. For me “module” just means a value in the module language. Currently the module language only contains structures and functions (called functors for historical reasons), but there is no reason it cannot have other sorts of values in it.

For me the only fundamental differences between the module language and the core language are predicativity – everything in the core language is “small” (in the sense of cumulative hierarchies of universes) – and inference – the core language strives to have powerful global type inference, while the module system has no global inference. In an ideal world (see 1ML) the core and module languages would be separated only in their types, rather than syntactically, but that ship has sailed for OCaml.


OK, I understand better now. It is as if you said: currently in the module language we only have product and exponential types with their values – records (or structures) and functors – but we can also add sum types and their values (like the module type Nat of your example). Hence, since there is subtyping in the module language, do you plan something similar to Stephen Dolan’s biunification in your unification algorithm?

do you plan something similar to Stephen Dolan’s biunification in your unification algorithm?

Unfortunately, biunification does not work well with the kind of subtyping in the module system. The module system’s subtyping coercions actually exist at runtime, and biunification doesn’t handle inferring when/where those should happen. Mixing that approach with higher-rank types, higher-kinded types, etc. is also still a work in progress. So the unification will be quite weak around subtyping – although I don’t expect that to cause many problems in practice. It may encourage us to also add ordinary non-subtyping record types to the module system, as well as sum types, but I suspect they would be useful anyway.
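The runtime nature of module coercions can be seen already in today’s OCaml: constraining a structure to a smaller signature builds a new runtime structure, so inference has to decide exactly where such coercions are inserted (a generic sketch, not specific to the implicits design):

```ocaml
module Full = struct
  let x = 1
  let y = 2
  let sum () = x + y
end

module type Small = sig
  val y : int
end

(* this coercion exists at runtime: it builds a new structure
   containing only the field y, dropping x and sum *)
module Coerced = (Full : Small)

let () = assert (Coerced.y = 2)
```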

Probably it shouldn’t be named OCaml anymore – just a very powerful language that is compatible with OCaml.

Except that OCaml is currently somewhat deficient, and could really use having the module language made smarter and more intuitive. This isn’t obvious when you evaluate OCaml in a vacuum, but compared to its state-of-the-art competitors Haskell and Rust, it’s clear that the ease of use isn’t quite where it should be. The reason people have been wishing for modular implicits is that they’ve used type classes and seen how powerful and easy type-based dispatch is (another alternative is OOP mixins as implemented in Scala). If someone offered to let me ditch OCaml’s current, complex module system and use type classes instead, I would instantly take that deal, and I suspect most users would, too.

And 2 months later, you would cry as you realize you have lost type abstraction, proper (fast) separate compilation and half the ecosystem because type classes really can’t do the same jobs as functors. :slight_smile: