Do IO writes complicate a function's contract more than IO reads?

Very abstract question, but in terms of design by contract (pre- and post-conditions): what makes a contract more or less simple or complex?

Intuitively, it feels like pure > read > write, in decreasing order of simplicity. So it’s worth the effort to defer writes if possible?

Maybe by removing IO reads, you simplify the pre-condition, and by deferring writes, you simplify the post-condition [of a function]?
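Something like this is what I have in mind (file names and functions are made up, just to illustrate where the conditions end up living):

```haskell
-- Read and write inline: the contract has to talk about the outside world
-- ("config.txt" must exist and be readable; "out.txt" gets overwritten).
summarizeIO :: IO ()
summarizeIO = do
  cfg <- readFile "config.txt"
  writeFile "out.txt" (summarize cfg)

-- Read hoisted to the caller, write deferred (the result is just returned):
-- the contract only talks about the argument and the return value.
summarize :: String -> String
summarize = unlines . take 3 . lines
```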

There is lots of info online describing what pre- and post-conditions are, but I find zero info on what makes a good pre/post-condition and how to rate them. :slight_smile: I’m especially interested in how easy or hard it is to combine functions, and how to test them (composability and testability).

Some people on #haskell IRC had some good ideas regarding the ordering and the dependencies of reads/writes. It feels correct to say that an unconditional read and an independent write (one that nothing later depends on) are less complex than reads/writes where a certain order is required. To measure this, you could do a graph analysis and count the cases where a reordering would cause bugs.
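Roughly what I mean by that, as a toy sketch (the `Effect` type and the conflict rule are my own simplification of what was discussed):

```haskell
-- Toy model: a function body as a flat list of reads/writes on named locations.
data Effect = Read String | Write String
  deriving (Show, Eq)

loc :: Effect -> String
loc (Read l)  = l
loc (Write l) = l

isWrite :: Effect -> Bool
isWrite (Write _) = True
isWrite _         = False

-- Two effects conflict if they touch the same location and at least one is a write;
-- swapping a conflicting pair can change behaviour (the classic R/W, W/R, W/W rule).
conflicts :: Effect -> Effect -> Bool
conflicts a b = loc a == loc b && (isWrite a || isWrite b)

-- Count the ordered pairs whose relative order matters: a crude "ordering complexity" score.
orderingComplexity :: [Effect] -> Int
orderingComplexity es =
  length [ () | (i, a) <- indexed, (j, b) <- indexed, i < j, conflicts a b ]
  where
    indexed = zip [0 :: Int ..] es

-- ghci> orderingComplexity [Read "cfg", Write "log"]           -- independent: 0
-- ghci> orderingComplexity [Read "db", Write "db", Read "db"]  -- order-sensitive: 2
```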

I don’t know what you’re aiming for, but it … “feels” like maybe you’re looking for something like serializability theory in databases? There’s a lot of material there about how and when you can reorder reads and writes in what should be independent transactions without changing the semantics of those transactions. Might be relevant; OTOH, I might be misinterpreting your goals.
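Concretely, the standard check there is conflict-serializability: build a precedence graph between transactions from conflicting reads/writes and test that it’s acyclic. A toy sketch of that idea (my own encoding, not any particular library’s API):

```haskell
import Data.List (nub)

type Tx  = Int
type Loc = String

data Op = R Loc | W Loc
  deriving (Show, Eq)

opLoc :: Op -> Loc
opLoc (R l) = l
opLoc (W l) = l

isW :: Op -> Bool
isW (W _) = True
isW _     = False

-- Ops from different transactions conflict if they hit the same location
-- and at least one of them is a write.
conflict :: Op -> Op -> Bool
conflict a b = opLoc a == opLoc b && (isW a || isW b)

-- Precedence edge Ti -> Tj whenever an op of Ti conflicts with a later op of Tj.
edges :: [(Tx, Op)] -> [(Tx, Tx)]
edges sched =
  nub [ (ti, tj) | (i, (ti, a)) <- ops, (j, (tj, b)) <- ops
                 , i < j, ti /= tj, conflict a b ]
  where
    ops = zip [0 :: Int ..] sched

-- A schedule is conflict-serializable iff the precedence graph is acyclic
-- (checked here by repeatedly peeling off nodes with no incoming edges).
serializable :: [(Tx, Op)] -> Bool
serializable sched = go (nub (map fst sched)) (edges sched)
  where
    go []    _  = True
    go nodes es =
      case [ n | n <- nodes, all ((/= n) . snd) es ] of
        []      -> False  -- every remaining node has an incoming edge: cycle
        (n : _) -> go (filter (/= n) nodes) (filter ((/= n) . fst) es)

-- ghci> serializable [(1, R "x"), (2, W "x"), (1, W "x")]  -- False (T1 <-> T2 cycle)
-- ghci> serializable [(1, R "x"), (1, W "x"), (2, W "x")]  -- True (serial: T1 then T2)
```

If an operation conflicts with nothing else, its position in the schedule doesn’t matter, which seems close to your “independent write”.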

My goal is related to code quality, maintainability and testability.

A pure function is easier to test, maintain and combine with other functions than an effectful function. That’s the first step. BUT: are all effectful functions created equal? That’s what I’d like to investigate.
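To make “easier to test” concrete, here’s the kind of contrast I mean (invented functions; assumes QuickCheck is available):

```haskell
import Test.QuickCheck (quickCheck)

-- Pure: the contract is entirely in the argument and result,
-- so a property test needs no setup or teardown.
normalize :: String -> String
normalize = unwords . words

prop_idempotent :: String -> Bool
prop_idempotent s = normalize (normalize s) == normalize s

-- Effectful: testing this means preparing a real input file and inspecting
-- a real output file afterwards; the post-condition is about the file system.
normalizeFile :: FilePath -> FilePath -> IO ()
normalizeFile inPath outPath = do
  contents <- readFile inPath
  writeFile outPath (normalize contents)

main :: IO ()
main = quickCheck prop_idempotent
```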

Unconditional read = a read that happens on every possible logical path through the function. It should be moved out (hoisted to the caller).

Independent write = a write that no read in that function depends on. It “can” be deferred; not sure it should be?
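To illustrate both definitions with a made-up example (returning the pending writes as data is just one way to defer; I’m not claiming it’s always worth it):

```haskell
-- A pending write, described as data instead of performed immediately.
data PendingWrite = AppendLog String
  deriving (Show, Eq)

-- Before: the read happens on every path (unconditional) and the log write
-- is never read back inside the function (independent).
step :: IO Int
step = do
  n <- readLn                       -- unconditional read
  appendFile "audit.log" (show n)   -- independent write
  pure (n * 2)

-- After: the read is hoisted to the caller and the write is returned as data.
stepPure :: Int -> (Int, [PendingWrite])
stepPure n = (n * 2, [AppendLog (show n)])

-- The caller decides when (or whether) to perform the deferred writes.
runPending :: [PendingWrite] -> IO ()
runPending = mapM_ (\(AppendLog s) -> appendFile "audit.log" s)
```

Whether the deferral is worth it probably comes back to the ordering question above: if nothing downstream needs the write to have happened yet, deferring it is free; if something does, you’ve just moved the ordering constraint to the caller.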