Critique of implicits


I really don’t understand why I have not seen any resistance to implicits in the discussions I’ve read. I also just saw Jane Street’s latest video, where Yaron Minsky said that everybody in the OCaml community wants the feature.

My impression of OCaml as a language, as it is now, is that one of its strong suits is exactly its explicitness. I agree that when implicits are used in a less-than-screen-long example they seem ‘nice’ - but this is not what the code is going to look like when

  • it is used on more complex problems,
  • there are more than 5 implicit modules in scope, and
  • the implicits are used so often that they get opened at the top of the file instead of locally.

I’ve been working for a couple of years now with Scala in my job, and implicits have, in my experience, not been a win at any point - if you want readable code. Of course, in Scala it gets many times worse, as library authors intermix implicits with reflection, macros and subtyping. I have to remind myself that this is probably also a cultural problem within the Scala community.

I have an ideal for functional code that I see is potentially getting undermined by a feature like implicits:

Functional code can be really easy to jump into, as there is only input and output - the only things one has to know to understand the code are the functions used locally and the types of the values you have.

The problems that arise are:

  • When implicits are opened in scope somewhere, one can’t reason locally about which functions are getting called.
  • When a library is written with heavy usage of implicits, one has to know the whole library to understand exactly what a big block of code using it is doing.
  • When many datatypes have the same functions available in the same scope, and you have a longer chain of functions mapping from datatype to datatype, you have to keep state in your head about which function is called at which point in the chain, or you can get the wrong semantics when you refactor. (This problem is probably amplified in Scala because of subtyping.)
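To make the first point concrete, here is a sketch in the syntax proposed for modular implicits (not valid in current OCaml; the `Show`/`Show_int` names are the proposal’s standard example):

```ocaml
(* Proposed modular-implicits syntax, not valid in current OCaml. *)
module type Show = sig
  type t
  val show : t -> string
end

(* The implicit argument {S : Show} is filled in by the compiler. *)
let show {S : Show} (x : S.t) = S.show x

implicit module Show_int = struct
  type t = int
  let show = string_of_int
end

(* Nothing at this call site names Show_int; the reader must know
   which implicit modules are in scope to know what runs here. *)
let s = show 5
```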

What happens is that without “implicits” you have an explicit program, plus a bit of implicit knowledge of the datatypes and algorithms in your head. Then come “implicits”, and you get an implicit program, where you now need to keep track of the previously explicit knowledge in your head too.

Keeping state in your head is exactly what I want to minimize, and I have not seen this feature help with that.


Most people expect developers to “only do sane things”. Even though this is the same justification used to defend the C memory model, I don’t think it’s a huge risk.

All your points only apply if people go all in and write heavily implicit code, and the same argument applies to both:

  • infix operators
  • functors. Even though functor applications are “explicit”, heavily functorized code can be hard to reason about locally when a functor takes 3 modules as arguments that are themselves constructed by functor application…
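The functor point can be illustrated with ordinary OCaml (the `ORDERED`/`Pair` names are made up for the example):

```ocaml
(* Hypothetical example of nested functor application: to know what
   M.compare does, the reader must unfold several layers of functors. *)
module type ORDERED = sig
  type t
  val compare : t -> t -> int
end

module Pair (A : ORDERED) (B : ORDERED) :
  ORDERED with type t = A.t * B.t = struct
  type t = A.t * B.t
  let compare (a1, b1) (a2, b2) =
    match A.compare a1 a2 with
    | 0 -> B.compare b1 b2
    | c -> c
end

module Int_ord = struct type t = int let compare = Stdlib.compare end
module Str_ord = struct type t = string let compare = Stdlib.compare end

(* The first argument is itself a functor application: *)
module M = Pair (Pair (Int_ord) (Str_ord)) (Int_ord)
```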

I think in practice most developers realize it can be a problem and are careful to use language features as tools and not booby traps.

Three counterpoints:

  1. tooling, like Merlin, is very good at resolving any ambiguity in the programmer’s head, thanks to canonicality.
  2. languages with “implicit”-like features have very sane code written in the wild.
  3. With the design of modular implicits, one can always use implicits explicitly.
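On the third point, the proposal (as I understand it) also lets an implicit argument be supplied explicitly, in which case no resolution takes place. A sketch in the proposed syntax (not valid in current OCaml; assumes a `Show` signature and an implicit `Show_int` module as in the proposal):

```ocaml
(* Proposed modular-implicits syntax, not valid in current OCaml. *)
let show {S : Show} (x : S.t) = S.show x

let a = show 5              (* implicit: S resolved from scope *)
let b = show {Show_int} 5   (* explicit: the module is named *)
```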


Why do you not think it is a huge risk? The reasons I can see for it to potentially not be a huge risk are primarily:

  • affordance of the tool
  • culture

Concerning culture, you might be right; but as for affordance, the implicits proposal seems to me much easier to use than the module system by itself. This means people will tend to use it more often, as the friction is smaller.

No, I don’t think the points only apply to heavily implicit code - just to the kind of code one expects to see in a bigger codebase.

Concerning your comparisons to other features:

  • Infix operators: these are used explicitly, not passed implicitly. Also, implicits are implicitly composed in arbitrary ways (and are supposed to be), to an arbitrary depth of composition - not the case with infix operators. Infix operators will also usually be a small set, opened globally in a file or locally; implicits can provide operators for everything globally in a file.
  • Functors: they are heavy to use, so their affordance pushes towards less usage. They are still explicit, so the abstract dependencies are visible in the code itself.

The last point touches on another critique I didn’t include: implicit arguments hide the dependencies of code. One of the nice things about pure functions is that they are explicit about their dependencies.
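The locality of infix operators can be shown in plain OCaml today: with a local open, the operator’s origin is visible right at the use site (`Vec` is an illustrative name):

```ocaml
(* A module defining its own + for 2D float vectors. *)
module Vec = struct
  type t = float * float
  let ( + ) (x1, y1) (x2, y2) = (x1 +. x2, y1 +. y2)
end

(* The local open Vec.( ... ) tells the reader exactly which +
   is being used; no implicit resolution is involved. *)
let v = Vec.((1., 2.) + (3., 4.))
```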

My counterexample to the “sane developers” argument is easily seen by searching through Scala libraries.

I don’t like the idea of Merlin saving the day - code is text, and should be readable as text; if that is not a goal for the programming language, then that should be a discussion of its own.

As for using implicits explicitly: code that depends on their existence will not be pleasant to use with explicit passing of implicits. Libraries that use implicits will, precisely because of that, have avoided other solutions to the same problem that implicits solve; otherwise there would be a double tax (or more) on being a library designer.


I’m also a bit concerned that people might want to turn OCaml into Haskell/Scala/Rust as soon as implicits land upstream; I find code very hard to read when it makes heavy use of ad-hoc polymorphism, and the explicitness of OCaml is one of the reasons why I prefer OCaml.

I’m sure the minute the PR will be merged there’ll be a couple dozen translations of Prelude fresh and ready for the picking, but I doubt making heavy use of implicits will become the standard way of using OCaml.

Languages that have the problems you describe have had mechanisms for ad-hoc polymorphism since their inception (not sure about Haskell), and all their libraries are written with those features in mind. Retrofitting the whole OCaml ecosystem, and the way people are used to writing code, to use implicits in improper ways seems highly unlikely to me. I think the reason most people want them is to address the kinds of problems where they’re actually most needed, like serialization/deserialization (including print_*) and maybe some common infix operators.


Very nicely laid out, @rand. I like your arguments. Nevertheless, I personally think we should have the choice to use implicits or not. In some contexts, not having polymorphic math is just a pain and makes code a little harder to read. I understand the dangers involved in allowing myself that choice, as well as in allowing others that choice.

I think that many programmers are interested in best practices and idioms when learning a new language, so training and available teaching resources can go a long way toward mitigating the dangers to code bases. (It might be harder with ReasonML, I suppose, since by design it makes it easier to hack up some OCaml with less of a cultural switch.)


Out of interest, what are those cases? I really prefer something like Mat.add a b or Mat.(a + b) over a + b, because I can see the context.


Thanks @rand, you just convinced me that I don’t want implicits.
Readability of code and its “fits in my head” property are some of my dearest things in the OCaml world.
And, my head is pretty small compared to most programmers.


3 modules as arguments? :smiley:
This reminds me of a certain code base:

Not only can it be difficult to dive into such a code base, but the compiler messages start to get pretty hairy too.


If I know what’s happening with the ints and floats in my computation, sometimes it would be nice to use a single + rather than + and +. with explicit conversions; the latter makes the math a little harder to read. Implicits would be nice for exploratory programming, among other things. Sometimes I would prefer to make everything explicit for safety, though.
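For reference, this is what the explicit version looks like in current OCaml; the conversion is visible, but it adds noise:

```ocaml
(* Current OCaml: ints and floats have distinct operators, and
   mixing them requires an explicit conversion. *)
let n = 3
let x = 0.5

(* Explicit: float_of_int makes the conversion visible. *)
let y = float_of_int n +. x
```

With implicits one might hope to simply write `n + x`, at the cost of the conversion no longer being visible in the source.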

For Owl, I use more or less the same style that you do; it’s too confusing otherwise, especially since there are functions that take matrices and scalars for different parameters.

I wouldn’t try to convince anyone else to do it my way, but I think it’s good to support different styles, to a certain extent.


Concerning Owl and linear algebra in general: I was hoping to eventually see a library that can handle structured matrices optimally without user intervention. For instance, you say inverse m and different BLAS matrix inversion routines get called, depending on whether m is a symmetric, banded, block-diagonal, or general matrix.

In my limited understanding, this could be implemented as a lightweight user interface with modular implicits, but without them it would necessarily get heavy - please correct me if I’m wrong. I guess in this use case you don’t want to have to think all the time about the extra structure your matrices carry. That’s often (but not always) just a detail that enables efficient numerics.
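A sketch of what the “heavy” version might look like today, using first-class modules (the `MAT` signature and `Diag` module are hypothetical, not Owl’s API):

```ocaml
(* Each matrix structure provides its own inverse; think of it as
   standing in for a call to the appropriate LAPACK routine. *)
module type MAT = sig
  type t
  val inverse : t -> t
end

(* Without implicits, the structure must be passed explicitly
   as a first-class module at every call site. *)
let inverse (type a) (module M : MAT with type t = a) (m : a) =
  M.inverse m

(* Toy stand-in: a diagonal matrix stored as its diagonal. *)
module Diag = struct
  type t = float array
  let inverse = Array.map (fun x -> 1. /. x)
end

let m' = inverse (module Diag) [| 2.; 4. |]
```

With modular implicits, the canonical `MAT` instance for the matrix’s type could instead be resolved automatically, so the call site would shrink to `inverse m`.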


What’s the problem with a functor with 3 arguments? Do you think 3 arguments is too many for a function?

To me, the most important problem with OCaml is that there are two languages in one: the core language and the module language. I like modules; they are the very basic abstraction we encounter in elementary algebra (monoids, groups, rings, fields, vector spaces… are all modules), and I’m used, in my mind, to considering them as first-class values. And modular implicits are a huge step towards making modules less second class.

If you’re not convinced that basic algebraic structures are modules, here are their formal definitions as you’ll find in any mathematics book:

module type MONOID = sig
  type t
  val e : t
  val op : t -> t -> t
end

module type GROUP = sig
  type t
  val zero : t
  val add : t -> t -> t
  val neg : t -> t
end

module type RING = sig
  include GROUP
  val one : t
  val mul : t -> t -> t
end

module type FIELD = sig
  include RING
  val inv : t -> t
end

module type VECTOR_SPACE = sig
  module V : GROUP
  module S : FIELD
  type vector = V.t
  type scalar = S.t
  val scale : scalar -> vector -> vector
end

And when, for a given type, there is a canonical such structure, I find it natural to use it implicitly.
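For instance (with the `FIELD` signature restated, includes flattened, so the snippet stands alone), floats form a field, and that is the canonical such structure for the type `float`:

```ocaml
(* FIELD as above, with the includes flattened out. *)
module type FIELD = sig
  type t
  val zero : t
  val add : t -> t -> t
  val neg : t -> t
  val one : t
  val mul : t -> t -> t
  val inv : t -> t
end

(* The canonical field structure on float. *)
module Float_field : FIELD with type t = float = struct
  type t = float
  let zero = 0.
  let add = ( +. )
  let neg x = -. x
  let one = 1.
  let mul = ( *. )
  let inv x = 1. /. x
end
```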


Canonical is the keyword here. I have another (related) argument for why implicits are bad: loss of referential transparency. If the proposal became one of type classes expressed with modules, then my impression is that most of my critique fades. Maybe I’ll have some time later today to go into more depth on this.


For sure - and the design of modular implicits is inspired by the design of canonical structures in Coq (which is what I described). I think you’re confused by the way it is used and implemented in Scala under the same name.


I misunderstood your usage of canonical, then. What I mean is that if there can only be one instance of a type class per type, then the critique fades. This is not the case with the current implicits proposal, and it’s not the case in Scala.


It is. You may only use one canonical implementation implicitly (otherwise it would be ambiguous which one to use).


Yes, you are right that within the same scope one cannot have several implicits matching the same type in play. But across several scopes this is not a restriction - whereas it is with type classes.
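A sketch of the scope issue, in hypothetical modular-implicits syntax (the local `implicit module … in` binding form is my assumption about the proposal, not settled syntax):

```ocaml
(* Not valid in current OCaml. Two scopes can make different
   implicit Monoid instances for int canonical, so the same call
   means different things - impossible with Haskell-style classes. *)
module type Monoid = sig
  type t
  val e : t
  val op : t -> t -> t
end

let fold {M : Monoid} l = List.fold_left M.op M.e l

let total =
  let implicit module Add = struct
    type t = int  let e = 0  let op = ( + )
  end in
  fold [1; 2; 3; 4]    (* 10: Add is the implicit in this scope *)

let product =
  let implicit module Mul = struct
    type t = int  let e = 1  let op = ( * )
  end in
  fold [1; 2; 3; 4]    (* 24: same call, different instance *)
```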


It seems to be more a problem of very, very bad design decisions. If you see programmers who define more than one canonical structure on a type depending on the scope, shoot a bullet in their head. :stuck_out_tongue:


That is what I see very often in this Scala world I’m sitting in, and what I want the system to at least enforce. Minor example: you see library authors use an implicit parameter instead of a default argument, where the implicit is intended to be redefined per scope :exploding_head:


It would also be bad to split the ecosystem between those using the feature and those not - and if you don’t split the ecosystem, you are forced to use the feature, since you want to use the libraries that use it… I would rather have a feature that doesn’t make you feel like there are potential holes all over your code, ready to be filled by arbitrary implicits in scope.


Many of the problems with Scala implicits are specific to Scala’s implicits. They have an extremely complicated notion of scope. They support overlapping instances that are chosen based on a complex ordering on types. They have (the frankly insane) implicit conversions.

If you take those away it is much harder to write unreadable nonsense. It is of course still possible, but there’s no helping some people. I’m not saying some harder to read code won’t be produced because of modular implicits, but I am saying that experience with Scala is not a good guideline for how they would work in OCaml. I would suggest that people’s experience using typeclasses in Haskell would be a better point of reference.

Modular implicits are deliberately designed to be as explicit as an implicit system can be. The design enjoys properties that do not hold of Scala’s system - for example, if an implicit definition that could be used at a particular call site is in scope, then either it will be used or there will be an ambiguity error.

As for having type classes, read the paper for why they cannot work in a language like OCaml that has full modular abstraction.

It is also worth noting that the current design for implicits brings a lot more expressiveness than simple ad-hoc polymorphism, so I suspect some of people’s anticipation of them comes from a desire for those features.