The design of an `iostream` type that offers an imperative programming interface for I/O cannot hide its side effects, and those side effects may differ depending on the implementation of the underlying stream.
One example of this is handling non-blocking I/O operations. If the underlying stream is an `In_channel` that has been opened non-blocking, then a read may raise `Sys_blocked_io`. If the underlying stream is a `Unix.file_descr` that has been opened non-blocking, then a read may raise `Unix.Unix_error (Unix.EAGAIN, _, _)` or `Unix.Unix_error (Unix.EWOULDBLOCK, _, _)`. And if the underlying stream is a `bytes` buffer, then it's an open question what happens when multiple domains contend for mutually exclusive access to the mutable octets. Do they block on a `Mutex.t` even though the `iostream` is meant to represent a non-blocking stream? Does it try to acquire the `Mutex.t` and raise some exception if it can't do so without blocking? Which exception would that be?
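To make the divergence concrete, here is a minimal sketch, assuming a Unix platform, that normalizes the would-block condition from both kinds of source into a single `option` result. The `source` type and `read_some` function are hypothetical names, not part of any existing library:

```ocaml
(* Hypothetical sketch: unify the "would block" signals of channels
   and file descriptors into [None]. *)

type source =
  | Channel of in_channel
  | Fd of Unix.file_descr

(* [read_some src buf] returns [Some n] when [n] bytes were read
   (0 at end of file), or [None] when the read would have blocked. *)
let read_some src buf =
  match src with
  | Channel ic ->
    (* Channels on non-blocking descriptors raise Sys_blocked_io. *)
    (try Some (input ic buf 0 (Bytes.length buf))
     with Sys_blocked_io -> None)
  | Fd fd ->
    (* Raw descriptors raise Unix_error with EAGAIN or EWOULDBLOCK. *)
    (try Some (Unix.read fd buf 0 (Bytes.length buf))
     with Unix.Unix_error ((Unix.EAGAIN | Unix.EWOULDBLOCK), _, _) ->
       None)
```

Even this small adapter has to commit to a policy: it silently conflates two different exception vocabularies, which is exactly the kind of decision an abstract `iostream` type would make on the caller's behalf.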
Other side effects can be important as well. Consider streams that read from a pipe or a socket rather than a file at rest. The operation of receiving from the pipe consumes octets from buffers held by the OS, which may have the side effect of signaling to the correspondent that it may resume sending. The timing of such signals may have important downstream consequences in a system of concurrently communicating processes.
My take is that an imperative programming interface to I/O is required only at the lowest levels of an application stack, at precisely the points where the actions and events represented by operations on an I/O stream often should not be hidden behind an abstract type. And we already have a good standard functional programming interface for dealing with streams of data: `Stdlib.Seq`.
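As a small illustration of `Seq` in that role, one can expose a channel as a lazy, pull-based stream of lines. This uses only `Seq.of_dispenser` and `In_channel.input_line` from the standard library (OCaml 4.14+); `lines_of_channel` is a hypothetical name:

```ocaml
(* Hypothetical sketch: a channel viewed as a pull-based stream of
   lines. The sequence is ephemeral: each element is produced by an
   effectful read, so it should be consumed exactly once. *)
let lines_of_channel ic : string Seq.t =
  Seq.of_dispenser (fun () -> In_channel.input_line ic)
```

A consumer then composes with ordinary `Seq` combinators, e.g. `lines_of_channel stdin |> Seq.filter (fun l -> l <> "") |> Seq.iter print_endline`, without the producer having committed to any particular buffering or blocking policy.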
The trick that I think a good I/O library needs to perform is to enable programming at the boundary between `'a Seq.t` and `Unix.file_descr` (and its nieces `In_channel.t` and `Out_channel.t`).
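One face of that boundary can be sketched in a few lines: turning a `Unix.file_descr` into a sequence of chunks, reading on demand and ending the sequence at end of file. This assumes blocking reads; `chunks_of_fd` and the `size` parameter are hypothetical names, not an existing API:

```ocaml
(* Hypothetical sketch of the Seq/file_descr boundary: each forced
   element performs one blocking [Unix.read]; EOF ends the sequence. *)
let chunks_of_fd ?(size = 4096) fd : bytes Seq.t =
  Seq.of_dispenser (fun () ->
    let buf = Bytes.create size in
    match Unix.read fd buf 0 size with
    | 0 -> None                        (* end of file *)
    | n -> Some (Bytes.sub buf 0 n))   (* only the bytes actually read *)
```

The interesting design questions live exactly here: whether the sequence is ephemeral or memoized, what happens when a read would block, and when the underlying descriptor gets closed, none of which `Seq.t` itself answers.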