By composable I mean: can be combined with other functions easily. Side effects might be one reason a function can’t easily be combined. Another might be the number of arguments? It feels like there should be a measurement or metric for this.
Some non-pure functions that communicate via a hidden mutable state are not composable, as they violate the associativity law `(f . g) . h = f . (g . h)`.¹
Determining whether two functions have hidden channels of communication is known as data-flow analysis. In general, it is not possible to precisely determine data-flow facts, due to Rice’s Theorem; in other words, this analysis is undecidable. It is, however, possible to overapproximate data-flow facts, i.e., to have a conservative approximation of function composability (if the analysis determines that a function is composable, then it is always correct, but some composable functions could be misclassified as non-composable).
Pure functions are always composable, but purity itself is also a property that is undecidable, though it is sometimes easier to establish.
It is worth mentioning that undecidability is not a death sentence for static analysis. Most non-trivial properties are undecidable. And the final answer to your question is: “Yes, but not precisely”.
1) It turns out to be wrong, see below for more details.
Maybe not so much if they are too big or have too many arguments? E.g., `add x y` is more composable than `process_order order vat customer discount product client contact`. Also depending on how much the functions return, I guess.
This statement is wrong.¹ In fact, `(f . g) . h = f . (g . h)` holds for all functions or functional values, no matter whether they are syntactic, pure or not, total or partial. But let’s reach this conclusion in slower steps.
All functions in OCaml have only one argument, and `fun x y -> <expr>` is shorthand for `fun x -> (fun y -> <expr>)`.
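A quick sketch of this currying behaviour (the `add`/`add1` names are just for illustration):

```ocaml
(* Currying: a "two-argument" function is really a function
   returning a function, so partial application comes for free. *)
let add x y = x + y        (* same as: let add = fun x -> (fun y -> x + y) *)
let add1 = add 1           (* partially applied: int -> int *)
let () = assert (add1 41 = 42)
```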
Concerning function domains and co-domains, they have to match, e.g.,

```ocaml
# let compose g f x = g (f x);;
val compose : ('a -> 'b) -> ('c -> 'a) -> 'c -> 'b = <fun>
```
So the minimal requirement for `f` and `g` to be composable is, of course, to have compatible types, i.e., the codomain of one has to be the domain of the other. It doesn’t really matter, from the perspective of composability, what the image of the function `g` is, i.e., whether `g` is surjective or whether it loses some information. For example, let’s define the composition operator to make our code more concise and readable, and experiment,
```ocaml
let ( *. ) = compose
```
and let’s define a couple of perfectly pure and surjective functions,
```ocaml
let succ x = x + 1
let pred x = x - 1
```
and one non-surjective function that just ignores its argument and returns zero.
```ocaml
let zero _ = 0
```
Now we have two compositions
```ocaml
let h1 = succ *. (zero *. pred)
let h2 = (succ *. zero) *. pred
```
and let’s check that they give equal results. We will need a small helper function for that, which will apply our function to a range of values from `lower` to `upper` (both inclusive),
```ocaml
let rec forall lower upper f =
  lower > upper || f lower && forall (lower + 1) upper f
```
So we can run `assert (forall 0 100 @@ fun x -> h1 x = h2 x)` and get some assurance that they are indeed giving the same results. It is not a proof, of course, but still something.
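For reference, the whole experiment above can be pasted into one file and run as-is (same definitions as above; note that `( *. )` locally shadows float multiplication):

```ocaml
(* Composition operator, as defined earlier in the thread. *)
let compose g f x = g (f x)
let ( *. ) = compose

(* Two pure, surjective functions and one non-surjective one. *)
let succ x = x + 1
let pred x = x - 1
let zero _ = 0

(* The two ways of associating the composition. *)
let h1 = succ *. (zero *. pred)
let h2 = (succ *. zero) *. pred

(* Check f on every value in [lower; upper]. *)
let rec forall lower upper f =
  lower > upper || f lower && forall (lower + 1) upper f

let () = assert (forall 0 100 @@ fun x -> h1 x = h2 x)
```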
Something closer to a proof would be proving that the two OCaml expressions are equal for all `x`,

```ocaml
((f *. g) *. h) x == (f *. (g *. h)) x
```
To prove that we will apply our `( *. )` operator,

```ocaml
fun x -> (fun x -> f (g x)) (h x) ==
fun x -> f ((fun x -> g (h x)) x)
```

and then use beta-reduction to get the syntactic equality,

```ocaml
f (g (h x)) = f (g (h x))
```
which proves that, for any three functions `f`, `g`, and `h`, and for all `x`, `((f *. g) *. h) x` is equal to `(f *. (g *. h)) x`. So it looks like we have just proved that it doesn’t really matter how a composition is constructed: the functions are always applied in the same order. Indeed, that is quite natural.
And whilst the generated code is slightly different, it still builds a closure that references three functions and applies them always in the same order. So it doesn’t matter whether they communicate or not, or throw exceptions: their composition will always have the same effect, no matter in which order the composition operator was invoked.
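One way to see this concretely is to log the order of effects. A small sketch (the `tag` helper and the function names are made up for illustration): each function appends its name to a shared log, and both groupings produce the same trace.

```ocaml
(* A shared log of effects, and a wrapper that records a name
   before running the wrapped function. *)
let log = ref []
let tag name f x = log := name :: !log; f x

let compose g f x = g (f x)
let ( *. ) = compose

let f = tag "f" (fun x -> x + 1)
let g = tag "g" (fun x -> x * 2)
let h = tag "h" (fun x -> x - 3)

(* Run a composed function and return the order in which the
   components fired. *)
let trace k x = log := []; ignore (k x); List.rev !log

let () =
  assert (trace ((f *. g) *. h) 10 = ["h"; "g"; "f"]);
  assert (trace (f *. (g *. h)) 10 = ["h"; "g"; "f"])
```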
1) It was more about the associativity of functions than about the associativity of function composition.
Very impressive, but I’m not sure how to turn that into a useful metric in an enterprise setting.
So what is your end goal?
It is really not clear if there is a meaningful composability metric.
For instance, the function (with `-rectypes`)

```ocaml
let rec f _ = f
```

can be applied as many times as wanted, to any kind of argument. It is not exactly useful.
I was thinking: since there are metrics for modules/classes (cohesion) and for interaction between modules (coupling), there should be a metric for interaction between functions. If you have a metric, you can define a threshold. That could be useful for the architectural pattern “functional core, imperative shell”, where the functional core is assumed to have higher composability than the shell, so you could set a different threshold value in CI for those parts of the code.
I mean, it’s OK if some functions end up with metric “N/A”.
This is the reverse: the function is both perfectly composable and useless.
How is that a problem? The metric is supposed to help with real-life code, so if it has false positives for prototypical code, it doesn’t matter so much, I think.
It gives an upper bound on the usability of such metrics: it seems hard to define a probabilistic model for “composability” where the most “composable” functions are not the ones that ignore their arguments and return the “most composable” elementary type.
Not “not”? Or, they are the ones?
Don’t assume the metric will work alone. There can be other metrics to warn if arguments are ignored. Multiple metrics can be combined to produce an overall score, like A-F.
Maybe it makes more sense to think in sets of functions, instead of singular functions? So a certain set of functions has a single composability metric.
I don’t know if the composability of functions is formally measurable, but, in an enterprise setting, tools like Mascot or pfff can be useful in this area. Incidentally, the practice of test-driven development results in more composable functions.
Yes! There’s a connection between testability and composability, for sure. But not all functions can be unit-tested; some must be integration-tested or functionally tested.
Nice tips, thanks!
What about this: a set of functions S has maximum composability C when 1) the domain and image are the same for all functions; 2) all functions are pure (or maybe even total?).
For example, a module with math functions all having signature `int -> int`, or a string manipulation library with `string -> string`.
Then to get a metric out of C, you add points for all deviations from this perfect scenario.
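As a rough illustration of that idea, here is a hypothetical scoring sketch; the `signature` representation, the penalties, and their weights are entirely made up and only mirror the two criteria above:

```ocaml
(* Hypothetical: score a set of functions by how far their signatures
   deviate from the "maximum composability" ideal (same domain and
   image for all, and purity). Lower is better; 0 is the ideal. *)
type signature = { dom : string; cod : string; pure : bool }

let composability (s : signature list) : int =
  match s with
  | [] -> 0
  | first :: _ ->
    let penalty { dom; cod; pure } =
      (if dom <> first.dom then 1 else 0)     (* domain differs *)
      + (if cod <> first.cod then 1 else 0)   (* image differs *)
      + (if dom <> cod then 1 else 0)         (* domain <> image *)
      + (if not pure then 2 else 0)           (* impurity weighs more *)
    in
    List.fold_left (fun acc f -> acc + penalty f) 0 s

(* A math module of pure int -> int functions scores 0 (best). *)
let ideal = [ { dom = "int"; cod = "int"; pure = true };
              { dom = "int"; cod = "int"; pure = true } ]
let () = assert (composability ideal = 0)
```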
What’s your real goal? While what you described is indeed a measure of something, I’m not sure it’s directly correlated with how nice to use and how safe the API is.
Concrete example:
- One API with 2 types, representing 2 states, and functions that use them as appropriate
- One API with 1 type, and dynamic check to ensure functions are applied to the correct state
The 2nd API scores better in your metric, but is harder to use correctly, harder to document, and generally less nice.
Sometimes, preventing arbitrary composition statically is a feature. If that was not the case, we would all use lisp.
Real what? End goal? To avoid monolithic functions that do “everything”, and to increase at least the potential for code reuse. And testability.
Yeah, would be nice with some real-world examples. Are you thinking of file open/closed (or socket etc)? Good point, you wouldn’t want to artificially enforce composability just to increase a metric.
```ocaml
type opened_file = Handler.t
type closed_file = unit

let open_file (filename : string) : opened_file = Handler.make filename
let close_file (file : opened_file) : closed_file = ()
let read_line (file : opened_file) : string = "some line"
```
Your point being that if `read_line` accepted a `file` type instead of `opened_file`, composability would artificially be higher? Maybe this should be expressed as a pointed graph, where each edge has a number.
First API:

```ocaml
string -> opened_file
opened_file -> closed_file
opened_file -> string
```
Second API:

```ocaml
string -> file
file -> file (* but closed, dynamically checked *)
file -> string
```
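For concreteness, the second API could be sketched like this; the record type and the dynamic check are my assumptions, mirroring the earlier `open_file`/`close_file`/`read_line` example:

```ocaml
(* One file type for both states; the state is checked at runtime. *)
type file = { name : string; mutable closed : bool }

let open_file (filename : string) : file =
  { name = filename; closed = false }

let close_file (f : file) : file =
  f.closed <- true; f

let read_line (f : file) : string =
  if f.closed then invalid_arg "read_line: file is closed"
  else "some line"

(* The types let close_file's result flow into read_line;
   only the dynamic check stops it. *)
let () =
  let f = open_file "data.txt" in
  assert (read_line f = "some line");
  let f = close_file f in
  assert (try ignore (read_line f); false
          with Invalid_argument _ -> true)
```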
In the second example, it looks like `close_file` can interact (or compose) with `read_line`, which is false.
You can also type-alias `string` to `filename`, to make the intent clearer.
Another real-life example could be to break up `process_order` into smaller bits and see what that looks like.
OK, so what would happen in my team is that someone would add a function like

```ocaml
val read_line_into_filename : opened_file -> filename
```

where this had better be two functions, like

```ocaml
val read_line : opened_file -> string
val filename_of_string : string -> filename
```

(If `read_line` was already present, it wouldn’t matter.)
So maybe my question is more about the “splitability” of a function, i.e., whether it’s possible to explode it into two or more parts.