I do not think this is necessarily true.
Here is why I think so, but be warned that this is preliminary as I do not have time to explore this idea further on my own at the moment.
State in Rust
Breaking the C API is a consequence of deciding that all single-threaded shared mutable state must be assumed to also be shared between threads. So a new read barrier is used to promote values when they are read from another thread. But for data types that were correct until now, users must also be careful to avoid races from now on… for instance by avoiding sharing values of such types between domains.
One lesson of Rust is that there are different kinds of mutable state, for different usages, with different means to achieve thread-safety.
The closest there is to current OCaml's `mutable` is the notion of single-threaded, multiple-writers mutable state (`&Cell`). It is made thread-safe in Rust by statically preventing values containing `&Cell` from crossing thread boundaries (by virtue of not having the `Send` trait). The same restriction is used to make some data structures more efficient by avoiding the cost of synchronisation (cf. the reference-counting pointer `Rc` vs. the atomic reference-counting pointer `Arc`).
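As a hedged illustration of that restriction (standard library types only, nothing specific to this proposal): an `Arc` can be handed to another thread, while the equivalent code with `Rc` or a shared `&Cell` is rejected at compile time because those types are not `Send` (respectively not `Sync`).

```rust
use std::cell::Cell;
use std::rc::Rc;
use std::sync::Arc;
use std::thread;

fn main() {
    // Atomically reference-counted pointer: safe to send across threads.
    let shared = Arc::new(42);
    let handle = {
        let shared = Arc::clone(&shared);
        thread::spawn(move || *shared + 1)
    };
    assert_eq!(handle.join().unwrap(), 43);

    // Plain reference-counted pointer: cheaper, but confined to one thread.
    let local = Rc::new(42);
    // The next line would not compile: `Rc<i32>` is not `Send`.
    // thread::spawn(move || *local + 1);

    // `Cell` gives single-threaded interior mutability; `&Cell<i32>` is not
    // `Send` (because `Cell` is not `Sync`), so sharing it with another
    // thread is likewise rejected at compile time:
    let counter = Cell::new(0);
    counter.set(counter.get() + *local);
    // thread::scope(|s| { s.spawn(|| counter.set(1)); });
    assert_eq!(counter.get(), 42);
}
```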
This is not enough by itself, and Rust offers other kinds of state for communicating and sharing values between threads.
`UnsafeCell` has similarities with OCaml multicore's `mutable` (though yours is safe thanks to the work on the memory model): it is used to get raw pointers to the data, which have almost no restrictions and can be sent across domains, but the user is likewise told to "avoid data races". It is rarely used alone, but together with type abstraction it can be used to program safe concurrent data structures.
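For concreteness, here is a minimal and deliberately naive sketch of that pattern: `UnsafeCell` provides the raw, unrestricted access, and the abstraction boundary of the lock type is what makes the resulting API safe to use from several threads. The `SpinLock` type and its `with` method are illustrative names, not an existing library.

```rust
use std::cell::UnsafeCell;
use std::sync::atomic::{AtomicBool, Ordering};
use std::thread;

// A toy lock: `UnsafeCell` holds the data, an atomic flag guards access.
pub struct SpinLock<T> {
    locked: AtomicBool,
    data: UnsafeCell<T>,
}

// SAFETY: the locking protocol in `with` ensures exclusive access to `data`.
unsafe impl<T: Send> Sync for SpinLock<T> {}

impl<T> SpinLock<T> {
    pub fn new(value: T) -> Self {
        SpinLock {
            locked: AtomicBool::new(false),
            data: UnsafeCell::new(value),
        }
    }

    // Run `f` with exclusive access to the protected value.
    pub fn with<R>(&self, f: impl FnOnce(&mut T) -> R) -> R {
        // Spin until the flag was previously false, i.e. we acquired the lock.
        while self.locked.swap(true, Ordering::Acquire) {}
        // SAFETY: we hold the lock, so no other thread touches `data`.
        let result = f(unsafe { &mut *self.data.get() });
        self.locked.store(false, Ordering::Release);
        result
    }
}

fn main() {
    let lock = SpinLock::new(0);
    thread::scope(|s| {
        for _ in 0..4 {
            s.spawn(|| lock.with(|n| *n += 1));
        }
    });
    assert_eq!(lock.with(|n| *n), 4);
}
```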
Lastly, the default notion of state in Rust is linear state, which can be sent freely across threads. Thread-safety is ensured by restricting aliasing using the ownership and borrowing discipline.
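And the default case needs no wrapper at all: ownership of a value can simply move to another thread and back, which is what the following small example (plain standard library, no assumptions beyond that) shows.

```rust
use std::thread;

fn main() {
    let mut log = vec![String::from("start")];

    // Ownership of `log` moves into the worker thread; the original binding
    // cannot be used afterwards, so no data race is possible.
    let handle = thread::spawn(move || {
        log.push(String::from("from worker"));
        log
    });

    // The worker hands ownership back on join.
    let log = handle.join().unwrap();
    assert_eq!(log.len(), 2);
}
```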
A backwards-compatible concurrent collector?
If I had to imagine a backwards-compatible OCaml with static control of interference à la Rust based on ConcMinor, it would distinguish the three kinds of state (concretely, with other keywords in addition to `mutable`). `mutable` would keep its current meaning of single-domain, multiple-writers state and not require a read barrier, and in particular preserve the API. (I count systhreads as single-threaded for this purpose, since here it means "sharing the same minor heap".)
Programs could progressively transition to the other kinds of state when being parallelised. Concretely, a data structure like `Stack.t` would keep its current meaning instead of becoming racy, but users could replace it with a linear stack or a concurrent stack, two data structures distinct from the first one, when parallelising their programs.
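A hedged Rust analogy of that progression (standard library types standing in for the hypothetical OCaml keywords): the single-threaded, the linear, and the concurrent stack are three deliberately distinct types, and a program opts into the more expensive ones only where it is actually parallelised.

```rust
use std::cell::RefCell;
use std::rc::Rc;
use std::sync::{Arc, Mutex};
use std::thread;

fn main() {
    // Single-threaded, multiple-writers stack: cheap, but cannot cross threads.
    let local: Rc<RefCell<Vec<i32>>> = Rc::new(RefCell::new(Vec::new()));
    local.borrow_mut().push(1);

    // Linear stack: plainly owned; it could be moved to another thread as a
    // whole, as in the ownership example above.
    let mut linear: Vec<i32> = Vec::new();
    linear.push(2);

    // Concurrent stack: synchronised, safe to share between threads.
    let shared: Arc<Mutex<Vec<i32>>> = Arc::new(Mutex::new(Vec::new()));
    let worker = {
        let shared = Arc::clone(&shared);
        thread::spawn(move || shared.lock().unwrap().push(3))
    };
    worker.join().unwrap();

    assert_eq!(local.borrow().len(), 1);
    assert_eq!(linear.len(), 1);
    assert_eq!(shared.lock().unwrap().len(), 1);
}
```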
So how could this fit with the current plans? It is not entirely clear to me. If people start to rely on parallelism in an unstructured way (e.g. with no clear distinction between the different kinds of data types arising from the different ways of ensuring thread-safety), then one will also lose the ability to retrofit ConcMinor in a backwards-compatible manner (by losing the information that the current `mutable` API is single-threaded). The API breakage of ConcMinor, which might only be virtual right now (if I trust this preliminary, not fully-explored idea), will become real. (Further difficulties arise with the emulation of the `Thread` library on top of domains, but this could be changed later.)
But if users are provided in advance with a general direction for a model of control of interference, this might happen differently. And eventually having such a model is desirable in any case, as it helps with parallelising programs (for instance, the Firefox developers reported that they had attempted and failed twice to parallelise the CSS engine in C++ before succeeding with Rust). Furthermore, in an imaginary retrofitting of ConcMinor, one could imagine enforcing something like the `Send` trait at the level of the read barrier until there is a better way (there would be two kinds of barriers, one of which would raise an exception if some state happened to be incorrectly shared across domains, and would not be required in the FFI).
I find ConcMinor interesting from a systems programming perspective compared to the stop-the-world collector because it could (I hope) offer possibilities such as having a low-latency domain communicating with a higher-latency domain. Moreover, the performance cost of the read barrier might be lower in this scheme if it could be removed for all but the concurrent data structures.
TL;DR
The current API and the programs that use it remain unchanged, and `mutable` does not require a read barrier. Confinement to a single thread is ensured by other means (for instance with single-threaded, multiple-writers state as it exists in Rust, though static control of interference would arrive later). This has the added benefit that data structures keep their current semantics. To share state between threads, a new kind of state is introduced, with a read barrier and a more complicated API.
Edit (12/06)
The previous version stated that `Cell` does not have the `Send` trait, but this is incorrect. Thank you to @stedolan for pointing this out. The analogy I wanted to make is between OCaml's `mutable` and Rust's `&Cell`, which does not have the `Send` trait (i.e. `Cell` is not `Sync`). More generally, the situation in Rust is more complicated than I sketched. The ownership of `Cell` itself is tracked, and so it can be sent to other threads (but not shared between threads). With a true analogue of `Cell` under ConcMinor, there would be new opportunities for promoting lazily when necessary, such as when the program tries to obtain a `&Cell` from a `Cell`, which is necessary before any access to its contents. So a read barrier on `&Cell` could still be avoided with this more expressive variant (again, according to this preliminary, not fully-explored idea).