Monadic Library for Eio Capabilities?

In the example I gave above, anyone can see how the Eio version of the wayland-proxy interacts with the outside world after studying the code for a couple of minutes. If you were asked to maintain or deploy this code, you’d probably find that pretty useful.

My comment above may have been too broad. I’m not saying capabilities provide zero value, but rather: most people don’t derive enough value from capabilities to be even remotely interested.

It might be entirely linked to the type of software one writes (e.g. library vs application, low-level vs line-of-business, etc.), but in the software I maintain, no, it’s not something I find particularly useful. With or without capabilities, I need to architect the software in a way that is maintainable, and that architecture will tell me which parts of the software interact with the outside world.

I feel capabilities are a niche concept, and should not be the foundation of OCaml’s multicore-enabled libraries.

This is how Scala’s ZIO (and in fact even Erlang) does it.

I can’t comment about Erlang since I don’t know much about their community, but ZIO is a very opinionated library and not everyone using Scala will be using it, and that’s entirely fine. The thing is, the Scala community has enough manpower and commercial interest to do that. If a concept/library has to be written 3 times to support ZIO, Scala’s Future, and Akka, they’ll find the people to do it.
On the other hand, OCaml has struggled for years to support 2 concurrency libraries, and we now have an explosion of incompatible concurrency primitives. I don’t see how that will be beneficial to the ecosystem.

1 Like

Logging was given just as an example… It doesn’t really matter; the main point I wanted to make is that
the cost of the capabilities is not in the added characters, it’s in the adding of the characters (the effort to find all the relevant locations, the longer review time because your merge-request affects several files with different code-owners, the increased likelihood of conflicts, etc.)

Don’t get me wrong, I think the capabilities can be useful. And, depending on the project at hand, I even think the increased cost is worth it. (It can even pay for itself later by avoiding some bugs.) But at the same time, I just don’t think that your comment

as far as I can tell [added characters] is the only thing people are complaining about

is accurate. And I don’t think you can convince potential users by dismissing their concerns as “you are just complaining about a few characters which are fewer than what you need with Lwt anyway” because I don’t think that’s the concern they actually have. And opinionated libraries have extra work to do to convince potential users, so I think it’s important.


Now I would like to say that I like the concepts behind the Eio library. I think it is useful to have such a library available.

I do think that, in terms of adoption, it would benefit from being packaged differently: being opinionated is a gamble and, as it currently stands, the opinionated packaging doesn’t seem to have won a widely accepted consensus.

2 Likes

I wonder how much of this should be considered a fruitful and promising exploration of the horizon opened up by effects, rather than as a threat to the health of the ecosystem. I grant that it isn’t optimal for short term acceleration and growth to spend time exploring the solution space, but maybe it is good for long term sustainability? Ecosystems do need diversity after all :slight_smile:

3 Likes

This makes perfect sense to me! I am all for cooperation over competition. But I also think sometimes we need to have diverse, potentially incompatible forms take shape before we can find the common ground that enables them to interoperate. While I agree that active cooperation is preferable, I also think we need the dynamism that comes from being able to explore incompatible approaches. But I also think that being incompatible in a narrow technical sense doesn’t necessarily entail being competitive? Sometimes we just need space to try things out :slight_smile:

I hadn’t heard about Picos before this discussion, and I’m keen to try digging in. I really like the idea detailed in the readme!

3 Likes

I tend to agree with this approach, although there are still some social and technical issues to be resolved. As such, I’d like to clarify 2 things:

  1. the first is that I am actively following your work.
  2. the second is that I’m currently trying to integrate your work in one form or another.

However, I could only really commit to your solution if it goes beyond the ocaml-multicore organisation, which lacks transparency from an external point of view. That said, I understand the paradox in which you find yourself, where iteration is necessary and the integration of your work into OCaml could not be done without the acceptance of your solution by scheduler authors such as Robur. But there are intermediate paths :slight_smile: .

Secondly, during my trials, I’ve already noticed certain limitations in relation to what I wanted to express in Miou and your API. But more detailed discussions could be beneficial and could take place in a better place than this one.

Finally, I’d also like to comment on the notion of competition. The interpretation that there is possible competition between schedulers depends largely on the people involved and their objectives. It has been said that Miou is not very popular. This may be due to the fact that we don’t work actively enough on its communication :slight_smile: . As we’ve pointed out several times, we don’t feel we’re in any particular competition when our aim is not to offer a widely-used scheduler. Our ambitions remain fairly focused on unikernels and our approach to the development of protocols and formats in Robur remains (even after the existence of Miou) scheduler-free. In other words, in what we develop for the community and for our own use, we systematically try not to impose opinionated choices, and we’ve been gaining a certain amount of experience in this area for several years now, given the number of our libraries used in contexts other than MirageOS. So we’re going to continue in this direction even if Miou exists.

What I really want to make clear here is that the image of supposed competition between schedulers does exist, I agree, but it’s certainly not sustained by our approach, which remains fundamentally community-oriented.

And I personally think that’s the crux of the problem. The current existence of several schedulers in OCaml is not really based on technical details (as @polytypic is gradually demonstrating with the picos project). The reason is perhaps, beyond the technical, social. So I think it’s just as interesting to see how eio wanted to fit into a concrete and material reality (that of the community) and why this generated what we can see today: the appearance of certain frustrations (legitimate or not) as well as other ‘competing’ solutions (legitimate or not).

And in this respect, as I’ve already said, I’m not socially aligned with eio (and this is perhaps the most fundamental criticism I can make of it).1


1: I’d just like to clarify that the last two paragraphs come from my personal opinion.

8 Likes

To answer the original question, it should be possible to make the capability threading more ergonomic using a reader monad. I.e. you would have a get operation to get the env inside the context of the monad. This is pretty much what Scala’s ZIO does. Maybe something like:

let main =
  let open Env.Syntax in
  let* env = Env.get in
  Server.serve (Env.net env)

let () = Env.run main
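For concreteness, here is a minimal, self-contained sketch of what such an Env reader monad could look like. Everything here is hypothetical: the env is a plain record with a single net field standing in for Eio’s environment, and run takes the env explicitly (in real Eio the environment would come from Eio_main.run, and the API would differ).

```ocaml
(* A reader monad: a computation of type ['a m] is just a function
   awaiting the environment. Hypothetical sketch, not the Eio API. *)
module Env : sig
  type t = { net : string }   (* placeholder for a real network capability *)
  type 'a m
  val get : t m
  val return : 'a -> 'a m
  module Syntax : sig
    val ( let* ) : 'a m -> ('a -> 'b m) -> 'b m
  end
  val run : t -> 'a m -> 'a
end = struct
  type t = { net : string }
  type 'a m = t -> 'a
  let get env = env
  let return x _env = x
  module Syntax = struct
    (* bind: run [m], feed its result to [f], threading the env through *)
    let ( let* ) m f env = f (m env) env
  end
  let run env m = m env
end

(* Usage, mirroring the hypothetical [main] above: *)
let main =
  let open Env.Syntax in
  let* env = Env.get in
  Env.return env.Env.net

let () = assert (Env.run { Env.net = "the-network" } main = "the-network")
```

The point is only that `let*` threads the environment invisibly, so intermediate functions no longer need an explicit env parameter.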
1 Like

As one of the owners of the ocaml-multicore GitHub org, I would like to understand how we can make it more transparent from an external perspective.

I understand that not everyone may have a need to use Eio or align with its philosophy. However, we have taken care to ensure that we communicate about it as much as possible. To that end, we have:

  • Regular developer meetings open to everyone. You can find the notes at: Notes – Eio dev meeting - Google Docs.
  • A blog post written by Thomas about capabilities that he has linked earlier.
  • Frequent updates for every minor release.

I’m not sure if you’ve had any private chats about Eio’s design. I haven’t seen any public discussions on Eio-related channels. I encourage you to enter the debate if you want; we are very interested in hearing opinions. Without any public conversations happening, it’s tough to gauge where everyone stands. If you’re keen to chat about Eio, please feel free to drop a post on the issue tracker or reach out to any of us (me, Thomas, Vincent, KC, or Thomas G).

One of our goals for Picos is to avoid splitting the OCaml community, without limiting diversity. We have a first proposal, but we have the will to co-design it with the community. The ultimate goal for a framework like Picos is to become a standard for OCaml, either as a library or by incorporating parts of it into the standard library. However, as you’ve pointed out, reaching a consensus on the design is necessary for this to happen. It’s worth noting that this isn’t the first attempt to devise a mechanism for unifying schedulers. In the past, there have been other proposals with similar objectives:

  • Domain-local-await: proposed by Vesa Karvonen. Some libraries such as Eio, Domainslib, Moonpool, etc. support it.
  • Suspend/Resume: link to the paper (work with KC Sivaramakrishnan and Deepali Ande).

We plan to initiate public developer meetings for Picos soon, with the aim of bringing together various scheduler authors to reach a consensus on the design. People who are interested are very welcome to join, and you’ll hear updates about it soon from @polytypic.

19 Likes

It seems unlikely those would be the effects, though:

the effort to find all the relevant locations

Passing values explicitly means the type-checker will find any missing places automatically.

Compare this with the cost of checking that each new version of each library you use doesn’t introduce some new security problem.

As a recent example, Eio depends on MDX for its tests. MDX allows you to ask it to include one file (e.g. an example module) in another (e.g. your README). The latest release changes this behaviour: it now also executes the code while copying it! All programs using MDX now need to be audited to check what effects that might have. (Amusingly, MDX itself used to have a test that copied the shell script rm -rf /!)

If instead I had to pass access to it explicitly, I would have just passed it access to some temporary directory.

the longer review time

Being able to put a bound on what the code can do should make reviews easier, since it’s easier to see how the new code will interact with other code. Checking that contributed code doesn’t introduce a security flaw is one of the most time-consuming parts of reviewing. I particularly like it when someone adds a test and I can see at a glance that it’s safe to try it, without having to review the PR carefully first.

the increased likelihood of conflicts

This is possible, but unlikely.

Consider two PRs, both adding new features. To make this concrete, let’s continue with the web-server example. Say one PR adds Redis caching (a --redis=http://... option) and another adds a static media directory (--static-files=/srv/media).

So the first PR has to plumb through a capability to Redis, and the other a capability to the directory. If they go through the same functions, this could conflict.

However, they would also conflict when using e.g. Lwt, since we need to plumb through the arguments somehow anyway.

To find a conflict that would be unique to Eio, we need an example where the capability being passed is one of the small number of capabilities that Lwt passes implicitly.

One example is “the network”. Note that, as we saw previously, we don’t pass this to e.g. a function that just needs to access a particular location (like our Redis-using handler). An Eio function taking a network argument is saying “this function may access any network location, and it is this function, not its caller, that will decide where”. Perhaps the PR adds a summarisation service that takes a web URL and fetches it, along with its immediate links, which could point anywhere.

Of course, we might want to take a network parameter anyway so that the tests can use a test network to inject faults, but let’s assume we’re not doing that.

Another example is “the file-system”. A function taking this is saying that it may access any part of the file-system, and it will decide where (as opposed to using some sub-tree, like /srv/media). Perhaps we want to add a server-side scripting language, and this (non-capability) language just accesses paths using strings. Again, assume we don’t want to confine this or test it with a mock FS.

Here, we might get a conflict.

However, we now have several functions that previously didn’t connect to the network or modify the file-system and now do. It seems likely we’ll want to mention that in a comment, and those comments will conflict.

Sometimes a comment isn’t necessary because it’s obvious from the function’s name that it will do something. But remember that these PRs are modifying pre-existing functions, so clearly the names don’t indicate that. If both PRs rename the functions, we’re back to conflicting again in all cases.

Finally, it’s likely that there’s more to our summariser than just raw network access. It probably needs a cache of recently fetched pages, a TLS configuration with the system’s certificate store, a pool of HTTP connections, etc. We won’t want to plumb through all the ingredients of a summariser, but rather construct it at the start and pass the single Summariser.t as the argument.

Likewise, our interpreter will need a library search path, a cache of loaded libraries, a set of running threads, etc, so we’ll do the same for that.

And now we’re back to having two PRs that make conflicting changes with and without Eio.

So it doesn’t seem like this will create new problems (and it hasn’t so far). But I think making up examples won’t take us far. It would be more useful to look at real examples where something was difficult.

The most recent was a problem the Irmin team had. They wanted to make irmin-eio use multiple domains, which required sharing a pool of open files. Eio requires open files to be attached to a switch, but the switch refused to accept resources from another domain. They joined the Eio dev meeting and asked why it works like that. As a result, I created an issue about it (Switches should be domain-safe) and we fixed it for Eio 0.15. It’s a lot harder to dismiss real problems!

“Opinionated” is a very lazy word. It would be more useful to say what the opinion is. Otherwise, we’ll have to say that Stdlib.Buffer is opinionated because it makes you pass the buffer around, but that Str is “unopinionated” because it uses global state.
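To make the Buffer/Str contrast concrete, here is a small sketch. The Str-style global is simulated with a plain ref rather than the real Str library, but the shape is the same: the result of the last operation lives in hidden shared state.

```ocaml
(* Buffer-style: the state is an explicit argument, so the caller
   decides (and can see) which buffer is written to. *)
let greet buf name =
  Buffer.add_string buf "hello ";
  Buffer.add_string buf name

(* Str-style: hidden global state (simulated with a ref here).
   Any two callers can silently interfere through [last_match]. *)
let last_match : string option ref = ref None

let search needle haystack =
  (* a stand-in for Str.string_match, which records its result globally *)
  let found =
    String.length needle <= String.length haystack
    && String.sub haystack 0 (String.length needle) = needle
  in
  last_match := if found then Some needle else None

let () =
  let buf = Buffer.create 16 in
  greet buf "world";
  assert (Buffer.contents buf = "hello world");
  search "abc" "abcdef";
  assert (!last_match = Some "abc")
```

Neither style is more "opinionated" than the other; they just make different trade-offs between brevity and visibility of state.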

Remember, using Eio doesn’t turn OCaml into a capability language. It merely allows writing in that style. You can still use globals if you want! But a library that uses globals does prevent use of capabilities. The example I mentioned above (using Eio_unix.Cap.enter) wouldn’t work with Lwt at all, because Lwt forces all FS access to use the old open call and plain strings.

7 Likes

Here’s another example of a capability system grafted on top of an existing language: Fs2

To open a socket that’s connected to localhost:5555, we use the client method on the Network capability. The Network capability provides the runtime environment for the sockets it creates.

Scala’s fs2 is an I/O framework used by Disney+ and others. It uses the abstraction of a ‘stream’ running inside a monadic I/O context. The closest analog in the OCaml world would be Rizo Isrof’s streaming library if it were running inside Eio and the streams were multicore-aware.

OK, so maybe these effects would be very minimal. I do also agree with you that (at least for a certain class of programs) these effects would be outweighed by the time and comfort you can gain from having the compiler check some invariants like resource access. As I said (emphasis added now):

So lets go back up to the start of this sub-thread:

Do you really think that the potential users in this thread think the only (or main) pain point of Eio (or Lwt for that matter) is that it makes programs longer?

I don’t think that the main perceived pain point is the length of programs. I shared what I think could be reasonable worries that potential users might have. I guess I am spending time on a large project with many contributors so I might be overthinking the smaller merge-requests with narrower diffs and shorter review times. Maybe other potential users would have different worries.

Do you think the added length of programs is the main perceived pain point of Eio? If you do, then I wouldn’t really know what to say. I guess I’d recommend running a survey to check?


It was a shortcut for the opinion that I spelled out earlier in the thread:

I think that this opinion (that Eio is primarily intended to be used with the capabilities) is widely expressed within the Eio documentation and API:

  • The documentation says that the Eio module provides the high-level cross-platform OS API whereas Eio_linux provides a low-level non-portable API. Who would want to use that?
  • The same documentation gives its first example using the env argument. It doesn’t actually mention the low-level API anymore.
  • The API tucks the non-capability OS API into a module named Low_level. And it doesn’t have a blurb on its parent page. It really looks like an Internals module missing its (**/**).

---- EDIT: addition ----

I wonder if the friction re: capabilities is because they come as a surprise:
The documentation’s introduction mentions effects and I/O, it also says “Eio replaces existing concurrency libraries such as Lwt”. The motivation section mentions using effects to gain the benefits of using direct style and it mentions more modern OS interactions. It mentions a few more things before capabilities are even mentioned.

Maybe a major issue is that people come looking for an Lwt replacement and they think they have it, and half-way through trying it out they realise that there are also capabilities to handle. Idk, maybe just a thought.

At this point we might want to think about branching this conversation off into a new thread to discuss application architecture in Eio and the pain points experienced therein. It would be great to see real-world reports of these pain points and potentially some feedback that could lessen these.

3 Likes

I see lots of criticism of Eio and its use of capabilities in this thread. I would just like to say there are also people who love this new approach and who love capabilities. I started to use capabilities in Semgrep, independently of Eio (we plan to start using Eio at some point, but that’s another story), and I think it actually gives a huge benefit to the maintainability of the project. The code is far clearer: it’s far clearer that this part of the codebase is using the filesystem, the network, /tmp, is using exit, is using chdir, and it’s now also clear that this part of the codebase does not need any capability and so is “pure”. This is the future of software engineering IMHO. And the beauty of @talex5’s approach to capabilities is that it’s using simple parameters and arguments!! No need for complex effect type systems.
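As an illustration of the style described here (with entirely hypothetical capability types — Semgrep’s real Cap module differs), a function’s signature can advertise exactly which parts of the outside world it may touch, using nothing but ordinary arguments:

```ocaml
(* Hypothetical capability types, sketched as records of operations. *)
module Cap = struct
  type network = { fetch : string -> string }
end

(* Needs the network, and says so in its signature: *)
let download (net : Cap.network) url = net.Cap.fetch url

(* Takes no capability at all, so it is visibly "pure": *)
let summarize body = String.length body

let () =
  (* In tests we can hand in a fake network instead of the real one. *)
  let fake_net = { Cap.fetch = (fun _url -> "page body") } in
  assert (summarize (download fake_net "https://example.com") = 9)
```

A side benefit visible even in this toy: because the capability is just a value, tests can substitute a fake one without any mocking framework.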

10 Likes

That’s good to hear, and reminds me: when arguing for or against capabilities, we should probably say what past experience we’ve had with them. Mine is that, around 2009, I’d been writing lots of web services and doing lots of things with access control lists, SAML tokens, X.509 etc but I didn’t feel I had a proper understanding of how to make programs secure, just endless checklists of best practices.

Someone recommended the E language as a demonstration of security done right. A language with security throughout sounded horrible to use, but I thought I’d learn it and see if anything could be brought back to other languages. But it turned out to be really pleasant, and I realised that most of what I had thought of as “security” was really just a pile of ugly work-arounds for systems designed with no security initially.

(E is no longer developed - I think a lot of the developers got hired by Google - and I wouldn’t use it in production, but it’s good for learning)

2 Likes

Sure. You were worried that logging would be difficult, and I showed that logging in Eio is the same as when using any other library. Then you wondered about merge conflicts, and I showed that there shouldn’t be any noticeable difference there. What I’m still missing is an actual problem (ideally with a link to some code in a real application that we can discuss).

It seems unlikely to me that we’ll find some huge problem with the idea of capabilities. After all, people have been using capability systems for decades (operating systems and programming languages that actually enforce capabilities, not just libraries compatible with them like Eio).

But if you really want to keep looking for problems, I can suggest some weak spots to try. The 0.2% or so extra code we saw earlier isn’t uniformly spread over the whole program; you’ll find it mainly at the interface between capability and non-capability systems.

Command-line parsing is a good place to start. For example, if Unix wants to pass us an integer and a directory, it instead passes us two strings, and we need to convert them. You can see that in e.g. eio-trace’s main.ml. Notice that when parsing a path, we need to use (path $$ tracefile) instead of just tracefile, to convert the string to an Eio path.
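The general shape of that boundary work can be sketched with plain Sys.argv-style strings rather than the real Cmdliner/eio-trace code: the OS hands us strings, and we convert them once, at the edge, into typed values (in real Eio code, the directory string would additionally be converted into a typed path rather than staying a string).

```ocaml
(* Hypothetical boundary code: convert the strings Unix gives us into
   typed values exactly once, at the program's edge. *)
let parse_args argv =
  match argv with
  | [| _prog; port; dir |] -> (int_of_string port, dir)
  | _ -> invalid_arg "usage: serve PORT DIR"

let () =
  assert (parse_args [| "serve"; "8080"; "/srv/media" |] = (8080, "/srv/media"))
```

Everything after this point can then work with the typed values, which is where the extra conversion code mentioned above concentrates.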

When I was trying earlier to find an example of something that is harder with Eio, I suggested an interpreter for a non-capability scripting language would need full file-system access. But if it were a capability scripting language then of course we wouldn’t need that (and we wouldn’t have to worry about the scripts having access to all our files either).

Likewise, the web summariser example needed the network capability because the request arrives over HTTP, which is a non-capability protocol. If we used e.g. Cap’n Proto instead then the client could give us a reference to the page instead of a URL string, and we wouldn’t need access to the whole network at all. And this affects behaviour too: if the client asks us to summarise http://intranet then we will try to summarise our own intranet instead (which could be an attack, or it might just be that the client wanted us to summarise their intranet and is annoyed it didn’t work).

1 Like

You’re right that I’d prefer people not to use the low-level APIs. I consider them to be similar to the Obj module in OCaml - there if you need them, but not for general use. I’ll try to explain why that is, and perhaps you can spot a flaw in my logic.

When people ask me why they should use OCaml, I’d like to tell them that it has a firm basis in mathematics, and makes it easy to reason about their code.

So let’s forget about capabilities and look at reasoning instead. Usually there’s too much code in a program to read all of it (especially when considering libraries, the OS, and remote services), so functions need some kind of summary, such that we can just read the summary most of the time. This includes the function’s type, and maybe its comment, or even a Gospel specification.

As a simple example, imagine trying to specify a negate function (pretend this function has a complicated implementation that we don’t have time to read):

let a = ref true
let b = ref true

(* [negate x] sets [x] to its negation. *)
let negate x = ...

let () =
  negate a;
  negate b;
  assert (not !a);
  assert (not !b)

We can use an informal comment, as above, or try something formal. For example, here’s a Hoare triple for it:

forall x, v. { !x = v } negate x { !x = not v }

This says that if !x = v and then you call negate x, then afterwards !x = not v.

This might seem like a fairly comprehensive specification, but in fact it’s mostly useless. Looking at the above program, we can’t say whether either assert will pass! For example, we know a is false immediately after the negate a, but we don’t know what negate b does to a, so it could be anything by the end.

As programmers, we probably assume negate b won’t affect a, but what is the rule we’re following when making this assumption? I can think of a few possibilities:

  1. A function should be implemented in the obvious way.
  2. A function should only use the minimum state needed to do its job.
  3. A function should only use state reachable from its arguments.

Maybe you have your own idea; I’d be interested to know how people think about this!

negate is simple, but here’s a trickier example:

val handle_request : request -> response

This is a pure function that just calculates an HTTP response from a request. Now, let’s say (to use the example suggested earlier) someone decides to add some Redis caching. Maybe not in this function itself, but in some function it calls indirectly, they make some calls to http://redis/. And they don’t want to plumb through anything (that seems to be your worry?), so they don’t change the signature or the comment of handle_request.

So: is that a reasonable thing for a programmer to do?

It violates all the rules above:

  1. It’s not obvious it uses Redis, since it didn’t before.
  2. It doesn’t need to use Redis.
  3. Redis isn’t reachable from its arguments.

If we say it’s OK to do this, then we can’t reason about programs at all; any function could do anything! A testing framework could one day delete all our files, or a logging library could download and execute code mentioned in the log message!

In this example, let’s say I’m asked to work on the server code. I make some changes locally and add some unit-tests. But unknown to me, every time I run the tests, I’m corrupting the production Redis cache!

And this is really hard to track down. The production team is pulling their hair out trying to find out why the cache keeps getting messed up, but they have no easy way to find all the places where the code uses Redis. And in fact, the bug isn’t even in their version of the code.

Now, it might strike us as odd that this problem can occur at all. OCaml is a memory-safe language, which means that functions can’t just corrupt each other’s memory, and Redis is an in-memory database, yet the memory is being corrupted. The reason is that when people say “memory-safe”, they are usually only talking specifically about local memory (i.e. memory in the OCaml process’s heap), not remote memory such as in the Redis service. And indeed, if the production service, the tests, and the cache were all a single OCaml program, communicating using function calls, we wouldn’t expect to have this kind of problem; it would be obvious that the tests shared the cache with production, because we passed the same cache to both of them.

“memory-safe” also usually means local volatile memory (i.e. RAM), not non-volatile memory (e.g. SSDs); we shouldn’t be surprised if one OCaml function corrupts another function’s files. So one solution to our problem would be to make a “fully” memory-safe system (and this is what a capability system is).

But this is not the only option. I expect that most people would say that the real problem is that our Redis service wasn’t properly secured. To make things easy, we give resources global names (like “redis”), which avoids the need to plumb references around. This means that anything that should be using Redis can do so easily. But it also means that anything that shouldn’t can do so too. The general solution to this is to add an access control layer, and this is what Redis does.

Doing this brings a couple of problems though. Obviously it’s a load of extra work and complexity, and we now need to include Redis’s ACLs in our reasoning system. But also, we now need to plumb the password through our service:

val handle_request : redis_password:string -> request -> response

So, we get the same cost as when using references, but without the benefits.

Back to our negate function. The problem of how to specify functions in a modular way was eventually solved (and rather elegantly) by the invention/discovery of Separation Logic. Here’s an updated specification:

forall x, v, p. { (!x = v) * p } negate x { (!x = not v) * p }

The idea is to partition the state of the system (e.g. the set of allocated ref-cells) into two non-overlapping parts. The spec above says: if the state can be partitioned such that !x = v is true of the first part and p is true of the second, and you then do negate x, then afterwards the state can be partitioned such that !x = not v is true of the first part and p is still true of the second. In other words, negate x only affects x.

Functions having * p on both sides can be composed safely. For example, when considering the effect of negate a, we can use the specification like this:

{ !a = true * !b = true } negate a { !a = false * !b = true }

Composable functions are so useful, and non-composable ones so useless, that in fact we just redefine our notation so that every specification is understood to have an implicit * p.

So being able to reason about code requires plumbing through effects somehow. We must mention them somewhere. There are various ways to do that. Passing an argument is the most obvious, which is what Eio encourages:

val handle_request : redis -> request -> response

If there are several things, we can bundle them together if we like:

val handle_request : config -> request -> response

If Redis is getting corrupted, that makes it a little harder to find out where, since config is likely passed to more places than redis, but it still gives us a bound on the behaviour and a set of places to look.
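A tiny sketch of this bundling, with hypothetical record types (the real types would come from your Redis client and application):

```ocaml
(* Hypothetical types: a record of the operations the handler may use. *)
type redis = { cache_get : string -> string option }
type config = { redis : redis }

(* The signature now bounds behaviour: [handle_request] can reach Redis
   (via [config]) and nothing else. *)
let handle_request (config : config) (request : string) : string =
  match config.redis.cache_get request with
  | Some cached -> cached
  | None -> "computed: " ^ request

let () =
  let config = { redis = { cache_get = fun _key -> None } } in
  assert (handle_request config "/index" = "computed: /index")
```

If Redis later misbehaves, grepping for uses of config (or just redis) gives a complete list of suspects, which is exactly the bound on behaviour discussed above.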

There are other ways of doing it, such as using an IO monad:

val handle_request : request -> response io

For example, Haskell uses an IO monad for all side-effects, allowing them to partition their functions into two groups: those that can’t do anything, and those they can’t reason about.

Incidentally, if we were using Lwt then we’d have to plumb Lwt.t through handle_request when adding Redis support. That would be more invasive than adding config with Eio, while still not putting any bound on behaviour.

We could also make a more restricted “redis monad”:

val handle_request : request -> response redis

But this is just more complicated than passing the argument, and composing multiple monads is tricky.

Finally, if OCaml had implicit arguments then we could pass redis implicitly. We’d still need to annotate handle_request to say that it used Redis, but the code wouldn’t show where it used it. This is a bit shorter, but also less clear, and quite bad from a security point of view.

In summary, the reason why I want people to avoid the low-level APIs is because we can’t reason about such code. Ideally, the compiler would warn about it (and about Obj.magic). Giving up reasoning, clarity and ease of debugging are not small matters, and I’d need to see some pretty impressive benefits to consider it.

(Minor correction: I think Redis actually uses its own protocol rather than HTTP, but the same logic should apply)

8 Likes

One of the authors of the E language, Mark Miller, actually published a paper on using capabilities in OCaml a very long time ago: “How Emily tamed the caml”: Emily 0.41

1 Like

When I read all this, two questions come to my mind:

  1. why do we try to impose capabilities on the entire community by encoding them into Eio?
  2. why can’t we deal with capabilities by integrating formal verification with OCaml and expressing capabilities at the formal specification level?

To elaborate on 1:

  • Having to deal with capabilities comes as a surprise when you’re coming to OCaml from other ecosystems. This has nothing to do with the length of the code, or having to touch many places when the need for capabilities changes as the application evolves.
  • In a business context, you are looking at tradeoffs such as development speed versus reliability, and over the trajectory of a project these needs often change. With its unobtrusive and convenient type inference, OCaml is very well positioned as a language that allows you to rapidly prototype (and later achieve increased reliability by adding .mli files to solidify the interfaces). IMO, capabilities are exactly one such means of achieving increased reliability. Making this choice of trading development speed (developers needing to understand what capabilities are and why they should care) for increased reliability at the ecosystem level is IMO a mistake. It positions OCaml towards a niche audience that values reliability over development velocity. In particular, it would make the OCaml ecosystem less attractive to startups who need to optimize for quick development. That is, assuming that the broader community maintaining the OCaml ecosystem will even adopt Eio.

To elaborate on 2:

  • What if we address the needs of different parts of the OCaml community, and different project stages by creating a good developer experience for using formal verification in OCaml?
  • With the development speed vs reliability tradeoff and how that impacts OCaml’s viability as a language for various projects and businesses, it seems only natural to me that there’s a progression along these lines: writing .ml files only for prototyping > adding .mli files > adding Gospel formal specification. So with every “layer” here, we add more reliability, but we incur a cost (which is often worth paying in the long run, but usually not in the short term, before a project is fully validated).
  • IMO, something like capabilities belongs in the formal verification layer, as it introduces complexity that small projects should not need to incur.
5 Likes

I don’t use Eio or any of the competing libraries (so far, they haven’t been useful for the kind of research projects I develop). But I find it refreshing that an OCaml library leverages the powerful type system we love to provide strong safety-related added value compared with more conventional libraries. Furthermore, it seems to me the capabilities part is not only about safety but also about being clear about the services an application relies upon, and that seems important. Of course, there is a price to pay upfront, but is it that big compared with the return on investment? It would be interesting to see where such a development ultimately leads. It might confirm what you’re saying or, on the contrary, confirm that the approach is fruitful. Finally, I guess there are other competing libraries without capabilities, so it’s not like people are forced to use it.

On another note, I’m surprised by your “natural” progression. That progression is certainly all too common, for various reasons (business pressure, helping us poor humans better understand what we write :slight_smile: …), but types not only bring safety; they also provide strong support for design and refactoring. My opinion is that we should strive for as much upfront specification as we can, even if we know that “real life” doesn’t conform to such a discipline. Otherwise, what’s the point of using a language with strong typing by inference?

2 Likes

My personal estimate is that it leads somewhere similar to Haskell: a language that is awesome and really safe, and that allows expressing a lot of safety invariants in the type system, but also an ecosystem that exposes the developer to so many intricacies up front that the barrier to entry is higher.

I am happy to be proven wrong by how this all plays out.

types not only bring safety but also provide with a strong support in design and in refactoring

I fully agree. OCaml is already positioned more strongly than TypeScript here (which it makes a wonderful replacement for, as projects such as ReScript and the ecosystem around ReasonML prove). People are willing to climb the learning curve of an ML-family language to get these safety benefits. My impression, though, is that there is a point after which adding (imposing) more safety benefits makes the deal you take with OCaml look less desirable.

3 Likes

Finally I guess there are other competing libraries without capabilities, so it’s not like people are forced to use it.

Well, there are strong network effects around concurrency and IO libraries, because they end up being dependencies of many other packages (eg piaf or capnproto). Until Eio implements Picos, coupling it with something as opinionated as capabilities only introduces more fragmentation, and I think @sabine is right to point it out.

In other words, if Eio was lower level and less opinionated, with an optional capabilities layer on top, none of the criticism would have happened :slight_smile:

9 Likes