Modules are powerful tools, but they are admittedly somewhat heavy. Though one could imagine improving the situation syntactically, I think the main issue is semantic. In my current project I’m dealing with a large number of “algebras” that may be arbitrarily nested, so there is a potential infinity of “number types” and operators on them. Modules provide exceptionally good abstraction facilities for handling this case in a safe and sound way, but having to explicitly parameterize code over these algebras is not only tedious; it is also quite difficult to hide that tedium from the end user.
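For concreteness, here is a minimal sketch (all names hypothetical, not from my actual project) of what that explicit parameterization looks like with functors:

```ocaml
(* A hypothetical ring-like algebra; names are illustrative only. *)
module type Ring = sig
  type t
  val zero : t
  val ( + ) : t -> t -> t
  val ( * ) : t -> t -> t
end

(* Every generic function has to be wrapped in a functor and
   explicitly instantiated at each concrete algebra. *)
module Sum (R : Ring) = struct
  let sum xs = List.fold_left R.( + ) R.zero xs
end

module Int_ring = struct
  type t = int
  let zero = 0
  let ( + ) = ( + )   (* non-recursive: refers to Stdlib's ( + ) *)
  let ( * ) = ( * )
end

module Int_sum = Sum (Int_ring)
let () = assert (Int_sum.sum [1; 2; 3] = 6)
```

With nested algebras you need one such instantiation per combination, and the end user sees every functor application.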

I don’t know what the current state of modular implicits is, but I have good reason to believe that they would greatly ameliorate the situation. Crossing my fingers that they make it into the compiler this year…

This is different. Modules may define languages, syntaxes, or notations, whatever you want to call them. That is the case where you need to open the module, while staying aware that the code in the scope of the open is now evaluated in a different language, i.e., a DSL. So what I’m suggesting is that we shouldn’t mix operators with normal (non-DSL, non-operator) functions; the latter should always be accessed via dot notation.

A good module design can solve the problem partially. My approach is the following: when a module defines an algebra, it defines all the expected operators for this algebra in the scope of the module. Additionally, a Syntax (or Notation) submodule is defined that contains only the operators, so that you can open Float.Syntax when needed. Essentially, the Syntax submodule is a restricted view of the whole module, as it exports only part of it.
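Concretely, the pattern might look like this (a sketch with a hypothetical complex-number module, not taken from any real library):

```ocaml
(* Hypothetical module following this design: the operators of the
   algebra live in a Syntax submodule that exports only the notation. *)
module Complex2 : sig
  type t
  val make : float -> float -> t
  val re : t -> float
  module Syntax : sig
    val ( + ) : t -> t -> t
    val ( * ) : t -> t -> t
  end
end = struct
  type t = { re : float; im : float }
  let make re im = { re; im }
  let re c = c.re
  module Syntax = struct
    let ( + ) a b = { re = a.re +. b.re; im = a.im +. b.im }
    let ( * ) a b =
      { re = (a.re *. b.re) -. (a.im *. b.im);
        im = (a.re *. b.im) +. (a.im *. b.re) }
  end
end

(* Only the notation is brought into scope, nothing else. *)
let sum =
  let open Complex2.Syntax in
  Complex2.make 1. 2. + Complex2.make 3. 4.

let () = assert (Complex2.re sum = 4.)
```

A local `let open M.Syntax in …` keeps the rebound operators confined to one expression, so the shadowing never leaks into the rest of the file.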

Also, how do you think your notation will scale? For example, I use bitvectors a lot, and they implement a common machine-integer interface: addition, subtraction, etc. Here is the minimal set of operators:

module Syntax : sig
  val ( ~- ) : t -> t
  val ( + ) : t -> t -> t
  val ( - ) : t -> t -> t
  val ( * ) : t -> t -> t
  val ( / ) : t -> t -> t
  val ( mod ) : t -> t -> t
  val ( lor ) : t -> t -> t
  val ( lsl ) : t -> t -> t
  val ( lsr ) : t -> t -> t
  val ( asr ) : t -> t -> t
  val ( lxor ) : t -> t -> t
  val ( land ) : t -> t -> t
end

And usually, when I implement an algorithm against a particular bitvector implementation, I don’t want just some of them, I need them all. So will it look like this?

open Bitvector.Syntax[~-;+;-;*;/;mod;lor;lsl;lsr;asr;lxor;land]

I would prefer either open (Bitvector : Integer.S) or open Bitvector.Syntax here.
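The second form already works today; here is a sketch with a toy Bitvector backed by int (hypothetical, only to show the shape of a use site):

```ocaml
(* Toy Bitvector over int, hypothetical, only to illustrate what
   opening a whole Syntax submodule looks like where it is used. *)
module Bitvector = struct
  type t = int
  module Syntax = struct
    let ( + ) = ( + )         (* non-recursive: Stdlib's operators *)
    let ( lsl ) = ( lsl )
    let ( land ) = ( land )
  end
end

(* All operators of the notation come into scope together,
   all-or-nothing, for the extent of one expression. *)
let masked_double (x : Bitvector.t) : Bitvector.t =
  let open Bitvector.Syntax in
  (x lsl 1) land 0xff

let () = assert (masked_double 0x90 = 0x20)
```

Note that redefined keyword operators such as `lsl` and `land` keep their built-in precedence, so the expression parses the same way it would over plain ints.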

In general, I agree with you that the module-level language in OCaml is very heavy, I would even say Ada-ish. It is very annoying to create lots of intermediate modules and module types and to give them all names. And there are some annoying constraints (e.g., we can’t actually write open (Bitvector : Integer.S), although we can write include (Bitvector : Integer.S)). It is also very annoying to combine several modules or signatures. It would be much nicer to write Comparable <+ Integer <+ Serializable (or something like this; in fact, just adding the ability to pass several identifiers to open and include would already help a lot). And, unfortunately, OCaml chose fibration instead of parametrization to represent families of signatures, which also impedes the composability of signatures. All of this leads to bad design, where we end up with overblown signatures that are hard to compose. This is the root of the problem. So that’s why I was saying that you’re trying to address the wrong problem: you’re trying to protect yourself from badly designed modules (which are not really modular any more) with these small [do,not,pollute,my,namespace] shields, while the real problem is that the modules weren’t designed correctly in the first place.
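The include-versus-open asymmetry can be sketched like this (signature and module names hypothetical; note that later OCaml releases, 4.08 and up, generalized open to arbitrary module expressions, so the restriction applies to the classic form):

```ocaml
module type Integer_s = sig
  type t
  val ( + ) : t -> t -> t
end

module Bitvector = struct
  type t = int
  let ( + ) = ( + )
  let ( land ) = ( land )   (* hidden by the coercion below *)
end

(* Coercing through the signature while including: accepted. *)
module Only_integer = struct
  include (Bitvector : Integer_s with type t = int)
end

(* open (Bitvector : Integer_s) is rejected by classic OCaml;
   the workaround is to name the coerced module first. *)
module Coerced = (Bitvector : Integer_s with type t = int)

let three = Coerced.(1 + 2)
let () = assert (three = 3)
```

The `with type t = int` constraint keeps the type concrete so int literals still work; without it the coercion would also abstract `t`.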

I read the thread, and I hope I didn’t offend anyone with my reaction. I just strongly disagree with the general approach, because I have my own opinion on modularity, algebras, and existentials. I believe that if a module defines some algebra, then this algebra should first be specified as a signature. Opening anything that doesn’t have a named signature is wrong (and in an ideal language would not be possible). When you need to switch your context to a different notation, just like switching your keyboard layout to a different language, the notation should come on an all-or-nothing basis. You shall not try to pick only some operators from an embedded language and mix operators from different languages in the same namespace.

I was confused by this thread. I was not convinced by the proposal but until now failed to grasp what bothered me. It seems appealing at first: after all, most languages offer partial imports, and I have already been trapped by name captures. But all the examples are artificial, with several definitions of foo functions. Taking the concrete example of an algebra with infix notation makes the point clear.

We shall not mix overlapping notations from different algebras or modules.