Critique of implicits


I have been assuming that implicits would not leak out of libraries unless that was part of the design of the library. Is that incorrect? Or is the worry that people will build implicits into the interfaces of libraries, and not give users a choice? That doesn’t seem like a good practice, off the top of my head.


Passing three modules as arguments is quite rare in the OCaml code I was used to.
So, I remembered this example.


I agree that the proposal for modular implicits is much saner than Scala’s implicits, but I don’t think that should be the scale on which to evaluate whether a feature is fit for OCaml. OCaml has a big advantage in its current form: simplicity and explicitness, while still being powerful.

I don’t think it’s so much about specific people, it’s about people in general, in the sense that we all have limitations when confronted with complexity. Modular implicits will add yet another dimension along which to evaluate the correctness of your program.

I’m not only suggesting that code will be harder to read - I’m suggesting that it will be harder to keep the code correct. At least when you make the code-base and the team bigger, and the code is refactored over time.

I think that there is an important difference with Haskell’s typeclasses, in that they are canonical.

Let’s say I write a block of code in one scope, and then want to move it to another scope, whether in another file or within the same file. To be sure that the semantics of the code hasn’t changed in the new scope, I would need to know all the functions in the block of code that take implicit parameters, and then check that the available instances of the implicit dependencies match in both scopes.

This gets even more complicated to check, as implicits are composed together implicitly - so for one function taking one implicit parameter, there can be several relevant implicit instances to check for.

This would not be a problem with type classes, as there could not be other instances in other scopes. So one could say that type classes have a one-to-one relation between a textual piece of code and its possible semantics, whereas implicits have a one-to-many relation.

This brings more complexity to OCaml, and makes a piece of code much less explicit. I would like the dependencies that can change the semantics of my code to remain visible in the code itself.

One could argue that a function I use in the block of code, one not taking implicit parameters, could also have different definitions with differing semantics in each scope. But I can see that the function is called, so it is more obvious to check its definition when evaluating the semantics.


@rand, I’m going to disagree. I think that there are places (like getting rid of polymorphic comparison, or providing a uniform printing interface for data structures) where ad hoc polymorphism is absolutely what you want. I’m looking forward to seeing Modular Implicits if and when it appears.


Will there be a compilation flag to allow or deny them, once they land?


I hope not. That would imply keeping things like polymorphic compare forever.
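For concreteness, here is a plain-OCaml sketch (standard OCaml, no implicits) of the kind of problems polymorphic comparison causes: it inspects runtime representation rather than abstractions, and it raises at runtime on functional values.

```ocaml
(* Polymorphic comparison looks at runtime representation, not abstraction. *)
module IS = Set.Make (Int)

let s1 = IS.of_list [1; 2; 3]
let s2 = IS.add 3 (IS.add 2 (IS.add 1 IS.empty))

(* The set's own equality is the right notion; structural (=) depends on
   internal AVL tree shape and cannot be trusted for abstract types. *)
let () = assert (IS.equal s1 s2)

(* Polymorphic compare raises at runtime on (distinct) closures: *)
let raises_on_functions =
  let f x = x + 1 and g x = x + 2 in
  match compare f g with
  | _ -> false
  | exception Invalid_argument _ -> true

let () = assert raises_on_functions
```

An ad hoc printing or comparison interface would catch both problems at compile time instead.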


You’re essentially saying that, when a programmer copies List.iter print_endline ["Hello"; "World"]

from one context into another, they verify the correctness of the code by checking the new context to ensure that List, print_endline, :: and [] are bound to the same things as they were in the previous context. If that is indeed how people work, then they can accomplish the same thing with implicit arguments using an additional button press – you just ask Merlin to display the implicit arguments and then check those identifiers along with the explicit ones.

I would suggest that this is not actually how people work. I think that people know they can move that code because they diligently manage their context. They do not bind common names like List unless they have to and then only very carefully. The same diligence will have to be applied when using implicit modules. They need to be careful when creating contexts containing non-standard implicit implementations of common implicit parameters.

I’m not saying it doesn’t add one more thing to think about. It clearly does and that clearly has a cost. I’m just saying that it is not a whole new kind of thing that you need to deal with – it is an extension to an existing aspect of reasoning about your code. It can be handled using broadly the same approach that people already take to managing their context. Basically, yes there is more rope to hang yourself, but it is not an unmanageable amount of rope.
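The context-management discipline described above can be sketched in today’s OCaml. The `Shadowing` module below is hypothetical, but it shows how rebinding a common name like List already changes what copied code means, implicits or not:

```ocaml
module Shadowing = struct
  module List = struct
    (* hypothetical: a "List" whose length counts only even elements *)
    let length l =
      Stdlib.List.length (Stdlib.List.filter (fun x -> x mod 2 = 0) l)
  end

  let n = List.length [1; 2; 3; 4]   (* 2: the shadowed List *)
end

let n' = List.length [1; 2; 3; 4]    (* 4: the standard List *)

let () =
  assert (Shadowing.n = 2);
  assert (n' = 4)
```

Moving `List.length [1; 2; 3; 4]` between these two contexts silently changes its result, which is exactly why careful programmers avoid rebinding common names.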

I think it is also worth pointing out that different choices of interface can make this problem easier or harder. For example, an interface like:

val empty : {O : Ord} -> unit -> Set.Make(O).t
val add : {O : Ord} -> Set.Make(O).t -> O.t -> Set.Make(O).t

is very safe to use because the set type completely dictates the choice of implicit argument. You can’t select a different implicit argument without changing the types of things.

Whereas an interface like:

val add : {N : Num} -> N.t -> N.t -> N.t

might be riskier. You would probably want to be very careful about adding a non-standard instance of Num where t = int, for example.
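To make the risk concrete in today’s OCaml, here is a sketch using explicit first-class modules, with hypothetical `Int_sum` and `Int_max` instances. With an implicit argument, the two calls below would be textually identical and distinguished only by what is in scope:

```ocaml
module type Num = sig
  type t
  val add : t -> t -> t
end

module Int_sum : Num with type t = int = struct
  type t = int
  let add = ( + )
end

module Int_max : Num with type t = int = struct
  type t = int
  let add = max   (* a non-standard "addition" on int *)
end

(* Explicit version: the instance is visible at the call site. *)
let add (type a) (module N : Num with type t = a) (x : a) (y : a) = N.add x y

let () =
  assert (add (module Int_sum) 2 3 = 5);
  assert (add (module Int_max) 2 3 = 3)
```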


We should never assume how people will use language features based on our own experience and that of those around us, because this always results in a very limited view that does not generalize to the rest of the community.

As per @rand’s experience report with implicits, which should be taken very seriously: if a feature can have bad effects on code quality and maintenance, then be sure that many programmers will use it to shoot themselves in the foot and make their lives, and ours, very difficult when understanding and maintaining the code.

Being explicit and clear is one of the best features of any programming language: it makes code easy to understand and keeps the mind focused on the tasks to be done.

I really do not think that adding more and more features, especially risky ones like this, is the way to go. That’s why Go’s designers keep their language simple, against the desire of the many asking for features missing from other languages.

There are other, more important and useful features to put OCaml’s efforts into, like supporting multicore, WebAssembly, and full-stack development with only OCaml. And better documentation, too!

I have been using Elixir’s flavor of macros and experiencing the creative freedom they provide; a good macro system is something that would perhaps be a good addition to OCaml.

Elixir also seems a good example of explicitness.

If it is the desire of the OCaml community to compete with other languages, then I suggest:

  • compete with Erlang and Haskell on multicore
  • and compete with Elixir, Julia and Racket on metaprogramming and macros

Best wishes for OCaml.


Critiquing modular implicits for the potential of people writing obscure
code with them, and in the next breath suggesting to add macros, is…
mystifying. Let’s keep in mind that bad Fortran can be written in any
language. It is important to enable people to write clear idiomatic
code, it is impossible to prevent people from being able to write
incomprehensible code. I can totally write impenetrable messes of
multi-barrel CPS spaghetti using just fun and higher-order functions,
but that is no reason to remove or not add them.


He has no experience with modular implicits. He’s used a different feature in a different language that doesn’t have the same semantics but has a few common features. No one has yet had serious experience with modular implicits as there’s been no work beyond a prototype. This is not a small point. Scala is not OCaml.


What I mean is that we have the code-text tell us what the dependencies are, and some dependencies are of course much more likely to change from context to context than others. Implicits in Scala are actually often intentionally used to configure the code, i.e. change its semantics - as this will be easily possible with modular implicits too, it will primarily be the ideals of the programmer, and of the designers of libraries, that control the usage of implicits for this. Personally, I think that I would rather not want to configure any of my code with implicits. To me it feels a bit like the ‘framework vs library’ problem, where I like to use libraries rather than frameworks.

Different teams work in different ways - if programmers primarily work on their own code, write tests, and there is code review, then I think implicits are fine. But if your team has a more fast-paced style, then it will get harder to ensure correctness than under the status quo.

As you are saying, Merlin can help you - but I don’t like the idea of being dependent on a tool like Merlin to see one’s dependencies. If I’m forced to take my context into consideration when evaluating the correctness of some code, then I would like to be able to search textually for implicit opens in the file. Will this always be possible? E.g.:

  • In section 4.2 of the paper, a functor defines an implicit module inside it - will the user need to write open implicit F(M) to bring the implicit into scope?
  • What happens if you write List.(expr) - does expr see the implicit instances inside List?
  • What about code that includes a module containing implicit instances?

I like that modular implicits make it possible to express further restrictions on the types - but can you suggest a restricted version of e.g. sort?

I am actually excited for a feature like type-classes or a well designed modular implicits, so take my critique as food for thought on what might make the design better.


I also have a recollection of writing good Java code at university - because I was the only programmer, and didn’t use a lot of libraries that imposed their semantics on me.

Edit: My point is that there are reasons one prefers some languages over others - possibilities and limitations influence the quality of the code written.


Will this always be possible?

As the current design stands, implicits can be introduced by open, open implicit, include, include implicit and toplevel implicit statements. This part of the design is still somewhat up in the air, though, so we could still make it more restrictive.

but can you suggest a restricted version of e.g. sort ?

Well, it would certainly be possible to define a type that was “list sorted by order O”, and then you would get that kind of safety. However, I suspect most people would probably use a sort function that operated on ordinary lists, eschewing the safety for some convenience. In that case you would want to be careful bringing non-standard orders into your scope. Of course, that is all still much safer than what a lot of code does right now: sort using the polymorphic comparison functions.
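As a rough sketch of the “list sorted by order O” idea in current OCaml (the `Sorted` functor and its names are made up for illustration): the abstract type can only be produced by its own sort, so a sorted list stays tied to the ordering that sorted it.

```ocaml
module type Ord = sig
  type t
  val compare : t -> t -> int
end

module Sorted (O : Ord) : sig
  type t                        (* a list known to be sorted by O *)
  val sort : O.t list -> t
  val to_list : t -> O.t list
end = struct
  type t = O.t list
  let sort l = List.sort O.compare l
  let to_list t = t
end

module S = Sorted (Int)

let () = assert (S.to_list (S.sort [3; 1; 2]) = [1; 2; 3])
```

A `Sorted(Other_order).t` would be a distinct type, so mixing up orderings becomes a type error rather than a silent semantic change.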


Did you actually try playing with the prototype? Here are the answers to your questions.

module type Show = sig
  type t
  val show : t -> string
end

let show {S : Show} v = S.show v

(* the code from section 4.2 *)
module F (X : Show) = struct
  implicit module S = X
end

implicit module Show_int = struct
  type t = int
  let show = string_of_int
end

module M = struct
  type t = int
  let show _ = "An int"
end

module N = F(M)

and the tests:

(* the implicit module in N is not visible *)
show 1;;
- : string = "1"

(* with a local open it is visible, hence an ambiguity *)
N.(show 1);;
Error: Ambiguous implicit S: N.S and Show_int
are both solutions.

(* idem with an include *)
include N;;
module S = N.S

show 1;;
Error: Ambiguous implicit S: S and Show_int
are both solutions.

Do you really have problematic examples with the current proposal for modular implicits, or do you only have problems with implicits in Scala?

I’ll just quote section 7.2 of the article:


Here is an example of the pattern that I would not like to see in OCaml, which Li Haoyi calls “implicit contexts”.

And some form of the pattern with modular implicits:

(* Example 1: dependencies not visible in the block of code, and not directly text-searchable using the 'implicit' keyword *)

module Unknown = struct
    (* Some code unknown to the programmer, in a separate file, e.g. a library *)
    module type T = sig
      type t
      val unknown : t -> t
    end

    module Dsl = struct
      let unknown {U : T} v = U.unknown v
    end

    module Conf1 = struct
      implicit module Conf1_std = struct
        type t = int
        let unknown v = -v
      end
    end

    module Conf2 = struct
      implicit module Conf2_std = struct
        type t = int
        let unknown v = v * 2
      end
    end
end


module Scope1 = struct
    (* Top of scope, not visible to the programmer *)
    open Unknown.Conf1

    (* ... lower down in the scope - in view of the programmer *)
    let foo = Unknown.Dsl.(
      1 + unknown 2
    )
end


module Scope2 = struct
    (* Top of scope, not visible to the programmer *)
    open Unknown.Conf2

    (* ... lower down in the scope - in view of the programmer *)
    let foo = Unknown.Dsl.(
      1 + unknown 2
    )
end


… here my argument is that, since this is possible, the programmer will need to keep this pattern in mind when looking at the definition of any expression. Without this pattern’s existence, I would expect Scope1.foo = Scope2.foo, but that is not the case here. I see this as losing trust in what I read.

This pattern is not only a major part of the design of a lot of libraries (even Scala’s Futures), but it also seems to happen unintentionally, as e.g. a user can be required to implement implicit instances for all of his types, and these instances can be used to configure behaviour (and there are no other places to configure the behaviour…).

If one were required to write open implicit Unknown.Conf{n}, I would be happier - but I still don’t like it.


Your Unknown library (I imagine this module represents a possible library) is not well designed for modular implicits. I expect that nobody will write such code. The implicit arguments should always stay under the control of the user (the caller). When there is more than one possible implementation for the implicit arguments, the library author should not define them as implicit, and should write the library this way:

module Unknown = struct
    (* Some code unknown to the programmer, in a separate file, e.g. a library *)
    module type T = sig
      type t
      val unknown : t -> t
    end

    module Dsl = struct
      let unknown {U : T} v = U.unknown v
    end

    (* the two implementations are not defined as implicit *)
    module Conf1_std = struct
      type t = int
      let unknown v = -v
    end

    module Conf2_std = struct
      type t = int
      let unknown v = v * 2
    end
end

And then library users can choose to make any of them implicit in their own code, depending on which aspect of the structure they are interested in.

module Use_case1 = struct
  (* in the following we're interested in Conf1_std *)
  implicit module Conf = Unknown.Conf1_std

  let foo = Unknown.Dsl.(1 + unknown 2)
end

module Use_case2 = struct
  (* in the following we're interested in Conf2_std *)
  implicit module Conf = Unknown.Conf2_std

  let foo = Unknown.Dsl.(1 + unknown 2)
end

This is similar to the use case studied in section 3.8 (local instances) of the article, or when you read in text book phrases like “until the end of the section, we consider the natural numbers ordered under their reverse natural ordering”.

Edit: or the user can choose to always be explicit

module Use_case3 = struct
  open Unknown

  let foo = Dsl.(1 + unknown {Conf1_std} 2)

  let bar = Dsl.(1 + unknown {Conf2_std} 2)
end


Seems like we agree on how the code should be written :slight_smile: - but that is not what my critique is concerned with; it’s about what’s possible, and therefore what the programmer needs to think about.

It seems like you missed my point about the Unknown module being unknown to the human programmer in question. The important part is that the only thing the programmer can see on his screen, in the middle of the file, is the following (also imagine that this is a page-long function instead of one line):

let foo = Unknown.Dsl.(1 + unknown 2)

This doesn’t inform the programmer that there are any semantic dependencies other than the functions and values in Unknown.Dsl and the values in the expression. So if he is ‘naive’ and moves the code to another scope with another implicit instance, the semantics will be wrong.

This pattern can be generalized to countless examples. Same problem with sort : {O : Ord} -> O.t list -> O.t list.
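For comparison, here is roughly what the explicit version of that sort looks like in current OCaml with first-class modules (`Asc` and `Desc` are hypothetical orderings). The point is that the ordering is visible at each call site, whereas the implicit versions of the two calls would be textually identical:

```ocaml
module type Ord = sig
  type t
  val compare : t -> t -> int
end

module Asc : Ord with type t = int = struct
  type t = int
  let compare = Int.compare
end

module Desc : Ord with type t = int = struct
  type t = int
  let compare a b = Int.compare b a   (* reversed ordering *)
end

(* The ordering is an explicit, visible argument. *)
let sort (type a) (module O : Ord with type t = a) (l : a list) =
  List.sort O.compare l

let () =
  assert (sort (module Asc) [3; 1; 2] = [1; 2; 3]);
  assert (sort (module Desc) [3; 1; 2] = [3; 2; 1])
```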

Found a talk by Edward Kmett, where funnily enough, this problem is the main theme. He seems to be ranting a bit at the end, but I think his arguments are valid.


Sure, this pattern is equivalent to implicit parameters in Haskell.

You’d have the same issue if Unknown.Conf1 or Unknown.Conf2 defined their own + function. Your argument is “but I can see that this code uses + so I know that if the opens include + then they may affect the code”, but it is just as true to say “I know that any code might use implicit arguments so I know that if the opens include some then they may affect the code”. Either way you need to look at the module types of Unknown.Conf1 and Unknown.Conf2 to know what is going on with this code. It is as much a criticism of open as it is a criticism of modular implicits.
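The + scenario can be sketched without implicits at all (`Conf` and `unknown` here are hypothetical stand-ins): a local open that happens to rebind + already changes the meaning of the same textual expression.

```ocaml
module Conf = struct
  let ( + ) = max   (* a non-standard (+) an open could smuggle in *)
end

let unknown v = v * 2

let with_stdlib = 1 + unknown 2        (* 1 + 4 = 5 *)
let with_conf = Conf.(1 + unknown 2)   (* max 1 4 = 4 *)

let () =
  assert (with_stdlib = 5);
  assert (with_conf = 4)
```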

The only additional burden that implicits are adding is that once you know that opening Unknown.Conf1 or Unknown.Conf2 might change the behaviour, you need to ask Merlin to show you the implicit arguments in order to check (or work it out yourself from the types). Although in more complex examples you already need to ask Merlin or do some type checking to work out what is in scope, so this isn’t an entirely new phenomenon.

Again, I’m not saying it doesn’t increase the burden for reasoning about your code, just that the increase is not particularly large.

It is also worth comparing implicits with one of the main things they replace: ppx extensions like [@@deriving sexp] and [%sexp: ...]. These extensions are just as susceptible to unexpected items in opens: if an open includes something like sexp_of_string, then it can change the behaviour of code using [@@deriving sexp] or [%sexp: ...], even though there is no mention of sexp_of_string in the code.


Sure, and I still don’t see it. Who is this strange programmer who uses a module without knowing anything about it? :open_mouth:


Yes - this made me think that my problem with implicits, in contrast to type classes, is the dynamicity - as with dynamic scoping vs lexical scoping. The ease with which dynamic scoping can be expressed with implicits, I think, says something about the nature of the feature:

module type IntDynamicV = sig
  val v : int ref
end

let dynamically_scoped {I : IntDynamicV} v = I.v := v + 10

let foo =
  let implicit module MyDynamic = struct let v = ref 10 end in
  dynamically_scoped 23

OCaml feels very lexically scoped, and that is also one of the things I really like about the language.
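The dynamic-scoping flavour can be imitated in plain OCaml with a mutable “ambient” value (the names here are hypothetical). The analogy is loose, but it shows the kind of at-a-distance dependency being discussed, versus a dependency passed explicitly:

```ocaml
let ambient = ref 10

(* Depends on whichever caller last set ambient - "dynamic" flavour. *)
let uses_ambient v = !ambient + v

(* The dependency is visible in every call - lexical, explicit style. *)
let explicit base v = base + v

let () =
  assert (uses_ambient 5 = 15);
  ambient := 100;                      (* a distant caller changes it *)
  assert (uses_ambient 5 = 105);      (* same call, new meaning *)
  assert (explicit 10 5 = 15)
```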

I think the most important differences between open and implicits are that:

  1. Values that come from opening a module don’t compose automatically and implicitly with other opened modules. This means it’s fine for the programmer to always locally open the modules needed in a given scope. With implicits, by contrast, you want the implicits to ‘mate’, so there may be several modules you need to open locally to make them work together, in the case of more nested types. And as programmers are lazy and want elegant code, they will open the implicits at the top of the file - in contrast with current OCaml conventions for where one opens modules that contain values.
  2. Today, libraries are not able to inspect what values a user of the library has in scope, unless the user explicitly passes the values or configures the library, e.g. by applying a functor explicitly. With implicits, libraries can use the implicit scope of their users - so library authors can begin to require users to put instances into their scopes. Users wanting very locally controlled scopes would need to bring all the needed, potentially many, instances into each local scope - and who wants to do that…

I like that implicits will make OCaml better in a range of ways - but wouldn’t it be nice if we could come up with a design for implicits that didn’t just replace one kind of complexity with another.

I imagine that one can see type classes as implicits where all implicit parameters are placed on the program itself, instead of on functions, and where instances live in global/program scope (including libraries). Could one maybe support a compiler flag that makes all implicit instances global? This might also force library authors to support this usage of their libraries.