Garbage Collection, Side-effects and Purity

Attempt: Functional programming is writing code that models a computational problem using functions and algebraic datatypes (sums and products, with pattern matching to inspect values). A new domain is modeled by defining types for the relevant categories of data in the domain, and functions for the operations between those categories.
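
For instance (a toy example of my own, with invented names: a small “shapes” domain), this could look like:

type shape =
  | Circle of float                 (* radius *)
  | Rectangle of float * float      (* width, height *)

(* An operation on the domain, defined by pattern matching on the sum type;
   the compiler checks that all cases are covered. *)
let area = function
  | Circle r -> Float.pi *. r *. r
  | Rectangle (w, h) -> w *. h

let () =
  Printf.printf "%f %f\n" (area (Circle 1.0)) (area (Rectangle (2.0, 3.0)))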

More generally: I don’t understand what question you are asking in this thread. Could you try to be more precise about what you are asking for? That would make it easier to answer at the right level of generality (and not in a way that you would consider too advanced/abstract/obscure).

If you want to understand what the computation model is for functional programming (“how to run code in your head”), then indeed an operational semantics would be a sensible answer. Operational semantics for a core functional language (lambda-calculus with datatypes and pattern matching) are fairly simple to define.
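
To give an idea of the size of such a definition, here is a rough, self-contained sketch (my own illustration, not a reference semantics) of a call-by-value evaluator for a lambda-calculus with pairs and binary sums:

type term =
  | Var of string
  | Lam of string * term
  | App of term * term
  | Pair of term * term
  | Fst of term
  | Snd of term
  | Inl of term
  | Inr of term
  | Case of term * string * term * string * term
      (* Case (t, x, t1, y, t2) plays the role of
         [match t with Inl x -> t1 | Inr y -> t2] *)

type value =
  | VClos of string * term * env   (* a closure captures its environment *)
  | VPair of value * value
  | VInl of value
  | VInr of value
and env = (string * value) list

let rec eval env = function
  | Var x -> List.assoc x env      (* unbound variables raise Not_found *)
  | Lam (x, body) -> VClos (x, body, env)
  | App (t1, t2) ->
      (* call-by-value: evaluate the function, then the argument *)
      let v1 = eval env t1 in
      let v2 = eval env t2 in
      (match v1 with
       | VClos (x, body, env') -> eval ((x, v2) :: env') body
       | _ -> failwith "application of a non-function")
  | Pair (t1, t2) -> VPair (eval env t1, eval env t2)
  | Fst t -> (match eval env t with VPair (v, _) -> v | _ -> failwith "fst")
  | Snd t -> (match eval env t with VPair (_, v) -> v | _ -> failwith "snd")
  | Inl t -> VInl (eval env t)
  | Inr t -> VInr (eval env t)
  | Case (t, x, t1, y, t2) ->
      (match eval env t with
       | VInl v -> eval ((x, v) :: env) t1
       | VInr v -> eval ((y, v) :: env) t2
       | _ -> failwith "case on a non-sum")

(* Example: swapping the components of a pair. *)
let swap = Lam ("p", Pair (Snd (Var "p"), Fst (Var "p")))
let id = Lam ("x", Var "x")
let _ = eval [] (App (swap, Pair (Inl id, Inr id)))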

Have you tried to work thru books on functional programming? E.g. The Little Schemer, or maybe SICP? I ask b/c in fact functional (in the same sense that Ocaml is functional) languages like Scheme have been used quite successfully in both undergraduate and high school education for years – decades, really. Scheme was designed as a teaching language, and honestly, while I understand why even MIT has switched to Python (gotta go where the main body of interesting development is), Scheme is still a superior language for teaching. [When I was an undergrad, the teaching language was Pascal, and even though I never programmed in it again (C was the lingua franca at the time), Pascal was still superior to C for teaching.]

I will also recommend highly my favorite book: Functional Programming by Peter Henderson. It’s really old, but I think it does an excellent job of teaching pure functional programming.

Lastly, I don’t quite understand what you mean by OOP languages having a concrete model which somehow FP languages lack. It might be helpful if you could compare and contrast what you find to be better about the models of OOP vs. the models of FP. B/c I fear that what you’re really saying is that you don’t understand FP languages as well as you understand some particular OOP languages. And that is something that can be remedied by (for instance) reading and working thru the above books.

P.S. There is a way of thinking, in FP, that is different from the way of thinking in OOP. You won’t find that way of thinking taught in “Real World Ocaml”, b/c that book is trying to teach you how to use Ocaml, not “think in FP”. But (for instance) The Little Schemer or Little MLer will help you understand that way of thinking.

Hi @G4143,

To answer your question “does de-allocation creates a side-effect?”:

To state the obvious: if you care about the memory consumption of your program, then you care about the side-effect of de-allocation, and this indeed voids purity.

A language like OCaml lets you reason about de-allocation. Memory is collected when values are no longer reachable. Like in other languages, 1) a value that does not escape and goes out of scope will get collected, and 2) you can reason about when a value escapes and goes out of scope thanks to OCaml respecting the strict evaluation order of value types. OCaml (like other compiled languages) is in fact more precise: it ties the dynamic notion of reachability to the lexical notion of variable occurrence. For instance, in the following:

let x = get_huge_data () in          (* a large value *)
let z = long_running_function x in   (* last occurrence of x *)
f z

OCaml will be able to collect the value in x before x goes out of scope, and thus, if possible, before long_running_function returns. Indeed, OCaml performs liveness analysis during compilation, and records the information about variable occurrences in frame descriptors, for consumption by the GC when it scans for roots. In fact, you can rely on call-by-value operational semantics to (loosely) reason that a value no longer appears in a program, and therefore that the corresponding memory will be collected by the GC¹ (Morrisett, Felleisen and Harper, “Abstract Models of Memory Management”). Of course, using lazy or higher-order interfaces (when closures escape; with many idioms they do not) will make it harder to reason about the lifetime of values.

(¹: For OCaml, this is a conjecture I make, for subsets which could be given such operational semantics, and only for native compilation. Morrisett, Felleisen and Harper’s semantics obviously assumes that the results of liveness analysis are made available to the GC, but this is not stated explicitly, nor is there any mention of the link between liveness analysis and accuracy of garbage collection in Appel’s “Modern Compiler Implementation in C”. I assume that it was part of folklore at the time, though when I recently mentioned it to a functional PL researcher they seemed surprised. I only found it explicitly mentioned in later papers from the OOP community. I checked that everything seems in place for OCaml to allow such reasoning, but only the authors of the original code, @xavierleroy and @damiendoligez, can tell us whether this is intended to be part of the language semantics.)
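
To make this observable, here is a rough sketch of my own (compiled with ocamlopt; the bodies of get_huge_data, long_running_function and f are invented for illustration): attach a finaliser to the large value and force a major collection inside the long-running function.

let get_huge_data () =
  let x = Bytes.create (100 * 1024 * 1024) in
  (* Ask the GC to tell us when it actually reclaims the value. *)
  Gc.finalise (fun _ -> print_endline "huge data collected") x;
  x

let long_running_function x =
  let n = Bytes.length x in   (* last occurrence of x *)
  (* With ocamlopt, the frame descriptors record that x is dead at this
     call site, so the forced major collection may already reclaim it
     (and run the finaliser) before this function returns; with ocamlc
     the stack slot is still scanned and x typically stays alive longer. *)
  Gc.full_major ();
  n

let f z = Printf.printf "length: %d\n" z

let () =
  let x = get_huge_data () in
  let z = long_running_function x in
  f z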

Furthermore, memory is not collected immediately when a value becomes unreachable. Instead:

  • Short-lived values are allocated contiguously and deallocated in a batch, so that allocating and deallocating short-lived values is very cheap, with additional benefits in terms of cache locality. This replaces stack allocation from languages with explicit memory management.

  • Longer-lived values are moved to a heap that is scanned incrementally, to ensure bounded latency. In contrast, naive reference counting and the unique pointers of C++/Rust make you pay the cost of deallocation up-front. (A rough sketch using the Gc module follows this list.)
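
A rough way to observe the minor/major split described above, using the standard Gc module (the loop and the numbers are made up for illustration; with flambda some of these allocations may be optimised away):

let () =
  let s0 = Gc.quick_stat () in
  (* Allocate many short-lived boxed values: with the default compilers they
     live and die in the minor heap, so very little gets promoted. *)
  let sum = ref 0.0 in
  for i = 1 to 1_000_000 do
    let p = (float_of_int i, float_of_int (i + 1)) in
    sum := !sum +. fst p +. snd p
  done;
  let s1 = Gc.quick_stat () in
  Printf.printf "sum: %f\n" !sum;
  Printf.printf "minor words allocated: %.0f\n"
    (s1.Gc.minor_words -. s0.Gc.minor_words);
  Printf.printf "words promoted to the major heap: %.0f\n"
    (s1.Gc.promoted_words -. s0.Gc.promoted_words);
  Printf.printf "minor collections: %d, major collections: %d\n"
    (s1.Gc.minor_collections - s0.Gc.minor_collections)
    (s1.Gc.major_collections - s0.Gc.major_collections)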

While this is essential for understanding the performance of OCaml programs, from the point of view of deallocation-as-an-effect, the delaying of the collection of unreachable memory can be seen as a runtime optimisation that does not change the effectful status of deallocation (the memory still gets freed). [The intuition is that an effect can support some degree of reordering without requiring purity, as illustrated by strong monads which can be commutative without being idempotent, one possible definition of purity for semanticists.]

But is de-allocation an effect in practice? Faced with the scepticism and misunderstandings in this thread, let me offer two hypotheses:

  1. Memory consumption is not an issue in functional programming, for application areas that interest functional programmers.

  2. Memory management in OCaml is efficient enough that programmers do not need to think about it in those terms in their day-to-day programming.

Hypothesis 2) could be explained, for instance, if OCaml programmers are already dealing with effects and thinking about the order in which their code executes (my experience), and are used to dealing with deallocation only as an afterthought, e.g. when chasing leaks with a profiler.

Let us turn to two programming language experiments from the 1990s that allow me to reject hypothesis 1). Both show what happens when one denies deallocation its status as an effect controlled by the programmer.

  • Region-based memory management consisted of allocating values into a stack of memory regions, each deallocated all at once, with region placement determined by a whole-program static analysis. Now regarded as a failed idea but a successful experiment (i.e. good science!), it taught us a lot about the structure of functional programs in relation to memory management (see this retrospective). There were some good performance results, but also pathological cases “where lifetimes were not nested or where higher-order functions were used extensively”, sometimes requiring programs to be altered to be “region friendly”, which was “time-consuming” and required knowledge of the inference algorithm. In addition, the inferred regions changed unpredictably as the programs evolved, and memory leaks appeared when the compiler inferred overly wide regions.

  • Haskell was (at the time) an experiment in lazy functional programming. Pervasive laziness prevents reasoning about the lifetime of values, and purity is a central assumption used by the compiler for program transformations, which is antithetical to reasoning about deallocation as an effect. It is well known that naive Haskell code has issues with memory leaks, and that realistic Haskell programs have to follow “best practices” to avoid leaks, by making extensive use of strictness annotations (e.g. bang patterns). Unfortunately, I found it hard to find reliable academic sources drawing lessons from the experiment in the way the RBMM retrospective does. The best I could find on the topic of memory leaks is the following blog post: https://queue.acm.org/detail.cfm?id=2538488, from a Haskell programmer who wrote in another post (linked from that one): “My suspicion is that many (most?) large Haskell programs have space leaks, but they often go unnoticed”. This is consistent with comments I received from people with Haskell experience (first-hand, one academic and one industrial) and about an industrial Haskell consultant (second-hand) who reportedly commented that their main job was to fix memory leaks (but maybe in jest). Of course, take this with a grain of salt. At the very least, I believe the Haskell academic community has accumulated empirical evidence of the extent and manner in which deallocation voids purity assumptions. Having an authoritative source about this would be pretty important to me, given the initial promise that functional programs are more tractable mathematically, specifically via “referential transparency” and independence from execution order, a theoretical justification that already looks shaky to me from a semantic point of view. Some parts of the literature continue to promise far-reaching consequences of equational reasoning, without clear statements of the limits of its application domain. I have the impression that the Haskell practiced in the real world is very different from the one you can read about in some academic papers.

The hypothesis that deallocation matters as an effect, and that ML makes it easy to program and reason about effects, seems to me a strong argument explaining OCaml’s predictable and competitive performance.

So, thank you for your healthy scepticism.

Concerning the “don’t scan local variables that are dead” trick:

  • Technically it is not “intended to be part of the language semantics” because the bytecode compiler (ocamlc) doesn’t implement it, only the native-code compiler (ocamlopt).

  • As far as I remember, I reinvented this trick circa 1993, but it seems it was used earlier in the Lazy ML compiler by Augustsson and Johnsson. See Appel and Shao’s paper “An Empirical and Analytic Study of Stack vs. Heap Cost for Languages with Closures”, JFP, 1996, end of section 5.

TL;DR: the paper mentioned by @xavierleroy provides additional references regarding the importance of liveness analysis for GC, including a demonstration by Appel that this actually matters for space complexity (thanks!). I find that a link with an abstract semantics à la Morrisett, Felleisen & Harper is still missing. This seems important to me because more theoretical works on time & space complexity in the lambda-calculus seem to take for granted that garbage collection implements something like the latter (i.e., how does one specify and certify that a compiler is sound for space complexity?).

how does one specify and certify that a compiler is sound for space complexity?

See for example Closure Conversion is Safe for Space, by Zoe Paraskevopoulou and Andrew W. Appel, ICFP 2019.

Objects in OCaml are like an electric guitar in the middle of a symphonic orchestra.
Just don’t care about them. Don’t use them.

If, and only if, your name is Ennio Morricone, you can ignore what I just said.

Oh, I want a “pure” opam then…

NixPkgs at your service, then :slight_smile:

But, the package manager for OCaml programs is opam, not nix…