Garbage Collection, Side-effects and Purity

GC = Garbage Collection

GC, in a pure program, is a point that’s always confused me. I always understood that freeing memory from a program was impure and would create side-effects, but it seems it doesn’t matter if the program is removed from all consequences of those impure acts and side-effects.

Basically, if a memory block has no remaining references in the program, then freeing that block will have no consequences for the running program, so it’s allowed to happen behind the scenes…

Is this correct reasoning?


At any point the program can only see the memory that is reachable from the data-structures it holds; if no references can be created to a particular part of memory, then its presence or absence is not observable to the program, and that part can be collected automatically. This is not about purity, this reasoning is correct in any programming language (“pure” or not).
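
A minimal OCaml sketch of this reasoning (the names are just for illustration): once the last reference to a block is dropped, its presence or absence is unobservable, so the runtime is free to collect it. `Gc.finalise` is used here only to make the collection visible, and `Gc.full_major` forces a collection that would otherwise happen at an unspecified time.

```ocaml
(* Attach a finalizer so we can observe when the block is collected.
   Note: finalization timing is normally unspecified; we force it here. *)
let collected = ref false

let () =
  let block = ref (Bytes.create 1024) in
  Gc.finalise (fun _ -> collected := true; print_endline "block collected") !block;
  block := Bytes.create 0;  (* drop the last reference to the old block *)
  Gc.full_major ()          (* the old block is now unreachable *)
```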

In some languages such as C, pointer arithmetic allows valid accesses to many pieces of memory to be created from a single pointer, so it can be fairly hard to use this approach without imposing extra restrictions on what programs are allowed to do (otherwise the set of “reachable” memory is just too large and this idea is ineffective for timely collection).

Two aspects that relate side-effects to memory-management are the following:

  • If the language does not allow mutation, or any other way to create cyclic data structures, the GC can rely on this to use a simpler implementation. (Erlang has this property and its GC relies on it. Neither OCaml nor Haskell do.)
  • If you want to associate specific computations with the deletion of a particular part of memory (a “destructor” for the corresponding program value), then in the presence of side-effects the time when the memory is collected becomes observable to the program. You need to ensure that the memory-collection strategy is predictable enough for the programmer, and has good properties that let people use it for general management of programming “resources” backed by memory/reachability (file handles, locks, sockets, etc.). In OCaml for example, the GC gives no guarantee on when memory will be deallocated, so “finalizers” (computations to run when a value is freed) are in general not a good fit for correct resource handling. Languages such as C++ and Rust have done a lot more work to make this work; this also interacts with other aspects of the language, notably error/exception handling.
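
As a contrast with finalizers, here is a sketch (assuming OCaml >= 4.08, where `Fun.protect` is available) of the explicit, deterministic style of resource handling alluded to above; `with_in_channel` is a hypothetical helper name:

```ocaml
(* Scope the resource explicitly: the channel is closed at a point the
   programmer controls, even if f raises, instead of whenever the GC
   happens to collect it. *)
let with_in_channel path f =
  let ic = open_in path in
  Fun.protect ~finally:(fun () -> close_in ic) (fun () -> f ic)
```

For example, `with_in_channel "data.txt" input_line` reads one line and is guaranteed to close the file afterwards.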

If a de-allocation (freeing of a resource) happens in a forest, and no one is around to witness it… does it create a side-effect?


First of all, it is important to note that the notion of effect is rather vague. To some, even reading from a register or from a memory location creates an effect, as it is observable through timing attacks. In other words, effects are in the eye of the beholder and should be considered from the perspective of a concrete beholder.

Every time an allocation happens, OCaml checks whether there are pending signals, and if there are, it runs their handlers. OCaml also uses allocations as yield points to implement preemptive threads. Finally, an allocation may trigger finalizers, which in turn may do arbitrary things, even delete your home folder. That means that an application of the function fun () -> "hello, " ^ "world" may, in general, perform any effect, including, again, the deletion of your home folder.

Depending on your perspective, you may or may not consider this an issue. From one perspective, this is not an effect associated with the "hello, " ^ "world" expression; it is an effect of some other expression that is executed (pseudo)asynchronously with this expression. From another perspective, you can think of "hello, " ^ "world" as a vessel that could be used to inject asynchronous malicious code into your computation. Again, it all depends on your perspective.

If we consider the common perspective on purity, which involves referential transparency - the main indicator of side-effect-free code in functional programming - then allocation doesn’t have any side-effects. This is the notion that the compiler itself uses, so whenever it sees (fun () -> "hello, " ^ "world") () it may expand it to the "hello, world" constant.


@ivg And this is what throws me about functional programming: "First of all, it is important to note that the notion of effect is rather vague. To some, even reading from a register or from a memory location creates an effect, as it is observable through timing attacks. In other words, effects are in the eye of the beholder and should be considered from the perspective of a concrete beholder." I find it very hard to attach a real understanding to some of its fundamental notions.

I had the same problem with objects. What’s an object? It’s a fundamental notion in OOP that can’t be firmly attached to anything concrete, but at least OOP languages have object models, so you can understand what C++ or Ruby means by an object. I find that model is lacking (or I haven’t discovered one) in functional languages.

Yes, this is a common cognition problem. Functional programming is rooted in mathematics and lambda calculus, which was originally developed as a foundation for mathematics itself. So it is the most fundamental notion, which you have to develop from scratch and then use as your mental basis to reason about the real world. Not vice versa. You can develop this intuition by constant exercise in mathematics; in other words, it comes with time :slight_smile: But let’s start right now.

The good news is that lambda calculus is extremely simple: it has only three syntactic forms and two rules. So it is very easy to learn. But before we learn it, let me draw the lines connecting mathematics with functional and imperative programming. Both kinds of programming are pure mathematics, even though functional programmers usually claim that their style is exclusively mathematics and everything else is a sin. The same goes for purity and impurity. Or OOP vs procedural. High-level vs. assembler also doesn’t matter. As long as a program is computable, it is mathematics. The main difference is how complex the rules of the programming language are.

So what is a program, anyway? Why do we write programs at all? The main reason is that programs allow us to express concisely some parts of the real world, as well as the processes that flow in the real world. Not everything is expressible, but if we can express some real-world process or object as a program, then we have a formula. As soon as we get the formula, we can repeat this process, and we’re now in control of it. It is like learning to build a fire vs waiting for lightning to strike and give us the fire. This turns us into more powerful creatures. Well, as long as we have computers. Computers are the things that can run those formulas and repeatedly obtain the same outcome. It was observed a long time ago that we can build an automaton that can read a formula and follow a simple set of mechanizable rules, and this automaton could be used instead of a human to materialize our needs. Different automatons are capable of executing rules of different complexity, and therefore of performing tasks of different complexity. For example, there are finite state machines, which have a finite set of states and no memory. A rather stupid automaton, but still useful. We have the push-down automaton, which has some memory, but at any point in time can only look at the top of its stack. Still quite a stupid creature, but a big step forward. Finally, we have the Turing Machine. This guy can remember as many things as fit into his memory and is quite capable. In fact, all our computers are Turing machines, so you can somehow estimate their capability. There are other fantasy guys, such as non-deterministic Turing machines or quantum machines, which we have yet to learn how to build. So far, we humans can definitely build Turing machines (and probably, starting from 2019, we can really build quantum machines, but let’s put that aside).

Ok, so we can build machines that can follow our instructions and be our mechanical servants. So where is the place for mathematics here? The answer is simple - mathematics is the well-established process of expressing real-world phenomena in terms of a simple set of easily mechanizable rules. The same as ancient mathematicians were striving to find a formula for the volume of a pyramid, we’re striving today to find a formula that will send a message across the universe, or even let a self-driving car navigate our cities without killing anyone in the process. The tasks are a little bit more complex nowadays, but the underlying mathematics is roughly the same.

So when Archimedes was coding with a stick in the sand, he was using simple rules like addition. Multiplication was used to perform several identical additions, e.g., x+x+x+...+x=nx, and the power function served the same role for multiplication, e.g., x*x*x*...*x=x^n. These simple rules, together with the simple tooling of that time, like multiplication tables or abaci, let them develop quite complex programs that employed the tools of the time, and that served well for builders, engineers, and traders. Later they developed more complex apparatuses and evolved mathematics correspondingly. They implemented integrals and derivatives, positive and negative numbers, zero and infinity. They discovered a lot of mathematical identities and learned how to program better, so that more complex ideas could be expressed in a more concise and easier-to-compute manner. But since the tooling was mostly hand-driven - still the same abaci and lots of tables - the entrance barrier was pretty high.

Finally, we reach the modern days, the time of Turing and Church, the Rise of the Machines. The time when we learned how to build electrical machines, which can perform a couple of simple operations and have some memory. The time when we learned how to build Turing Machines. This is the time when mathematicians started to refer to themselves as programmers and ignore their heritage :slight_smile:

Now we’re finally ready to learn the rules of programming. Well, we have the rules of integer arithmetic; we use modular arithmetic to mimic real-world integers and floating-point approximations to mimic real-world reals. But like all other programmers, we can ignore that. At least for now. These rules only allow us to use a computer as a calculator. So we need some simple rules for doing repetitive tasks and for storing information in the computer memory, as well as for retrieving it. Historically, those rules were encoded using primitive machine instructions, like jmp for fast-forwarding/rewinding the input tape and load/store/mov for storing data in the computer memory. It was very hard to use those primitive instructions directly, so mathemprogrammers started to develop higher-level mechanisms, the same as the ancient Greeks were developing their own mathematical language, with multiplication (repetitive tasks - loops), functions (procedures), and so on. They started to develop more and more complex mathematical abstractions, so that they could build their formulas in a more concise way: objects, classes, inheritance, polymorphism, agents, you name it.

Their theories grew into a large and unmaintainable mess which was really hard to reason about. It was, and still is, pure mathematics, as those theories are definable in terms of the simple underlying math of the assembly instructions. The problem is that it was very hard to reason in terms of those abstractions, as they have very vague definitions and imprecise semantics, mostly implementation-defined by the compilers. These were the dark times of programming, when programmers forgot the name of their profession. Wait. It is our times; this is how we program today. Ouch.

At the same time, and even before Turing built his machine, programathematicians developed lambda calculus. At that time they didn’t even have an abacus for this calculus, so it was just a mechanism to describe a group of things which they thought was better suited than set theory, and in general the most primitive mechanism capable of bearing modern mathematics, including even such basic notions as natural numbers. It was extremely simple, with only three rules for building formulas.

Syntactic Rules:

  • x is a formula (as well as any other variable);
  • fun x -> <body> is a formula binding x if <body> is a formula;
  • <f> <x> is a formula if <f> and <x> are formulae.

So now we are able to build programs, e.g., hello is a program, hello world is a program. They don’t have any meaning yet, i.e., if we give those programs to a machine it won’t know what to do with them. So we need to give it computation rules, which tell the computer what to do with these formulas.

Semantic Rules:

  • (fun x -> <b>) <e> is <b> with the free occurrences of x substituted by <e>, renaming bound variables in <b> first (using the rule below) so that no free variable of <e> is captured;
  • (fun x -> <a>) is (fun y -> <b>) where <b> is <a> with all occurrences of the bound x substituted with y, provided y does not already occur in <a>.

So it is all about introducing variables, making something abstract over the notion that a variable represents, and eliminating a variable by applying an abstraction to a concrete instance. We should be careful not to confuse variables, so we have an additional rule (the second one) which basically says that the name doesn’t matter and we can rename variables using a simple rule.
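
The rules above can be sketched directly in OCaml. This is a deliberately naive model (it assumes bound variables have already been renamed apart by the second rule, so substitution cannot capture anything); the names `term`, `subst`, and `beta` are just illustrative:

```ocaml
(* The three syntactic forms as a datatype. *)
type term =
  | Var of string               (* x *)
  | Lam of string * term        (* fun x -> body *)
  | App of term * term          (* f x *)

(* Substitute e for the free occurrences of x (naive: no capture check). *)
let rec subst x e = function
  | Var y -> if y = x then e else Var y
  | Lam (y, b) -> if y = x then Lam (y, b) else Lam (y, subst x e b)
  | App (f, a) -> App (subst x e f, subst x e a)

(* One step of the first rule at the root: (fun x -> b) e becomes b[x:=e]. *)
let beta = function
  | App (Lam (x, b), e) -> subst x e b
  | t -> t
```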

Surprisingly, this language is powerful enough to express the same computations as a Turing Machine, or as an x86 or ARM machine, can compute. Most importantly, it is capable of expressing repetitiveness via the fixed-point combinator: if we pass the expression fun x -> <body> to itself (i.e., apply it to itself, (fun x -> <body>) (fun x -> <body>)), then <body> can express itself in terms of itself, and so express the repetitive, recurrent, and otherwise self-referential structures of the real world.
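
In OCaml, the raw self-application `(fun x -> ...) (fun x -> ...)` is rejected by the type checker, but the same fixed-point idea goes through once the self-applied value is wrapped in a variant. This is a sketch, and the names `fix`/`Fix` are just illustrative:

```ocaml
(* Wrap the self-applicable function so its type is finite. *)
type ('a, 'b) fix = Fix of (('a, 'b) fix -> 'a -> 'b)

(* The fixed-point combinator: tie the knot by passing g to itself. *)
let fix f =
  let g (Fix x as fx) = f (fun a -> x fx a) in
  g (Fix g)

(* Recursion without `rec`: the body refers to itself through `self`. *)
let fact = fix (fun self n -> if n <= 0 then 1 else n * self (n - 1))
```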

This language is very simple and very easy to reason about, but at the time it didn’t get enough traction, because they didn’t have an abacus for this calculus. However, very soon it was reincarnated in the form of functional programming languages, like Lisp, Haskell, and ML (including OCaml and SML, and other members of the family).

Unlike other languages of the time, which rested on the very complex mathematics of Algol, these had much cleaner semantics and were very easy to program in. The only issue was performance, as computers were built with the Turing Machine idea in mind, and there was an impedance mismatch between lambda calculus and the primitive operations of the computers of that time. Unfortunately, the laws of the market were stronger than the desire for beauty, and functional programs were late for the train and left mostly in academia, only because they couldn’t compete with the imperative programs of that time. Machine time was then much more expensive than programmer time, so it was much more important how fast the program runs, rather than how fast the programmer programs.

But, with time, machines got faster and cheaper, and programmers started to value their time more than the machine’s time. Compilers also advanced and were even able to compile functional programs into faster code than their imperative counterparts. So functional programming started to slowly return to the arsenal of the modern mathematician (or should we say, programmer?). The reason for that is purely pragmatic: it is much easier to write functional programs than imperative programs, because you have to reason only about functions and their applications. No objects, no classes, no nothing (okay, okay, we have classes in OCaml, but don’t tell anyone, it was trendy in those times).

Let’s be honest, OCaml is a little bit more complex than lambda calculus, mostly because it is a statically typed language. If you want pure lambda calculus with no mumbo jumbo, then try Scheme. In any case, you should learn Scheme and read SICP to master functional programming. It is easy and entertaining reading, and a must-read.

But at some point you will find out that types are an extremely powerful mathematical tool that makes you a very efficient programmer, since they let the type checker find bugs in your programs and verify them without running them. The rich type system of OCaml lets you minimize the debugging stage of program development, which usually takes most of the time when you develop programs in less advanced languages such as C/Rust/C++/Java, or in much less advanced, dynamically typed languages such as Python/Ruby or Scheme/Lisp. But this is a completely different story.


OCaml is not a functional programming language.

I’m trying to find a baseline we can all agree on. A starting point. How about this?

A programming language cannot limit the execution environment but it can limit the exposure of that execution environment to the programmer by handling certain features itself.

A classic example would be pointers and Java. The Java programming language/environment didn’t eliminate pointers; it just eliminated Java programmers from dealing with pointers directly.

Can we agree on this?

My problem isn’t with functional programming. My problem is with functional programming languages’ claims, the execution environments their programs find themselves in, and how they square that circle.

I’m probably making this way more complicated than it has to be.


What claims are you talking about?

In any programming language, purity means a simple thing. A function is pure if for the same input it always gives the same output. No more, no less.

In Haskell all functions are pure, and that lets the compiler rearrange the order of execution in arbitrary ways. It is even true with IO, thanks to the IO monad.

In OCaml or Scheme, we can have impure functions, e.g., Random.bits () usually returns different values on each evaluation, but it is always applied to the same input ().

This is the notion of purity that makes functional programs easier to understand. You do not need to think about what a function is doing behind the scenes: everything it does, it returns as its resulting value.
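
A tiny illustration of that definition (the function names here are just for the example):

```ocaml
(* Pure: the output depends only on the input. *)
let pure_succ x = x + 1

(* Impure: the same input () can yield different outputs. *)
let impure_sample () = Random.bits ()
```

Calling `pure_succ 41` always returns `42`, while two calls to `impure_sample ()` will usually disagree.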

First. Do you agree with my first point?

You mean this one?

In general, I totally disagree. But your definition leaves some space for different interpretations. Like what do you mean by the execution environment?

A programming language is a complex software system that consists of a compiler (optional) and a runtime. For me, the execution environment is defined by the state of the runtime. Therefore, languages are in total control of their own runtime, so your statement doesn’t make sense in that context.

Well I guess I don’t understand the most fundamental things about languages.

Don’t be harsh with yourself.

There are many ways to think about languages. Some people deal with them purely abstractly, e.g. you could study their denotational semantics, figuring out what they mean.

Other people study operational semantics, i.e. how the language computes in some formal setting.

Both of these are extremely important. And yet, they don’t begin to address the reality of what happens when you run a program, because programs don’t live in a vacuum: they are surrounded by other programs and the operating system, they operate in the real world and interact with devices across the planet. There is a multitude of abstractions, all the way down to electrical signals and quantum effects.

I am saying all this to make it explicit that you cannot discuss languages well without being very clear about what level of abstraction you’re interested in, and it’s easy to talk past each other when everyone assumes their favorite level.

And finally, nobody understands all the fundamental things about languages across all the abstraction levels. Most CPU engineers are likely absolutely clueless about dependent type theory and vice versa. And it’s okay.


This is a perfectly natural thing that happens in all disciplines that manipulate formal objects to model things in the real world (in this case: a running program). There are different models of the real-world object, of varying degrees of simplicity, that leave some aspects out.

Sure, a cosmic ray can flip a bit of your memory, or even several rays can flip several bits in a way that escapes checksum protections, but maybe this need not be part of our usual reasoning model, unless we deal with programs that may be run outside the Earth’s atmosphere without radioprotection, or we have to consider the case where a targeted attacker of the hardware system would use a laser or some other technology to provoke coordinated faults on purpose.

As in other fields, the solution is to be precise about the formal model we are working with, which lets people fully reconstruct the assumptions it makes and the details it leaves out. When programming in OCaml, the “standard” model assumed by most programmers does not consider asynchronous code execution (GC collection slices, signals, finalizers, out-of-memory situations). In this model, allocations are not observable, and neither is the collection of unreachable memory.

Certain applications have more low-level requirements that force them into a more complex reasoning model. For example, they want to deal gracefully with out-of-memory scenarios, or they were designed to rely on signals or finalizers, or they need to track and minimize reaction latency, or they are trying to write crypto code that is resilient to several forms of side-channel attacks.


And the functional programming crowd wonders why most programmers shy away from functional programming… I mentioned that OOP languages have a concrete object model that programmers can reference when they want to know what a language means by an object, but what does the functional crowd have? I found nothing concrete.

I find I can write the most rudimentary code in a ‘functional’ style, but I’m completely lost as to what that is beyond the most obvious: higher-order functions, immutability over mutability, pure over impure, etc.

Until the functional community can dumb down functional programming to at least the undergraduate level, I think most programmers will use functional concepts unwittingly but abandon functional programming as a whole.

Sorry to sound so negative, but it’s hard to support something that I can’t comprehend.


At its most basic, a functional programming language is one where a Function is a first-class type, just like an Object is a first-class type in an OOP language. You can pass them around, you can change them, you can do stuff with them. Most functional languages also have Closures (this is not required, but the language becomes more annoying to program in without them), which are Functions that can Hold External Data.

How about this for relating OOP to functional programming (don’t do this kind of stuff normally; there are almost always better ways):

type thing_methods =
| Add of int
| Get

type thing_returns =
| Nil
| Int of int

let new_thing start =
  let data = ref start in
  function
  | Add i -> data := !data + i; Nil
  | Get -> Int !data

And can be used like:

# let thing = new_thing 42;;
val thing : thing_methods -> thing_returns = <fun>
# thing Get;;
- : thing_returns = Int 42
# thing @@ Add 42;;
- : thing_returns = Nil
# thing Get;;
- : thing_returns = Int 84

A fully self-contained ‘object’ with fully private data that can’t be accessed externally, except it’s just a closure, nothing special. The thing is that functions can represent just about any data structure.

Now, Functional Languages generally don’t represent all their data structures as functions (that would not be terribly efficient), so they tend to add more types, like lists and strings and so forth.

The main thing about Functional Programming Patterns is not the language, but rather the patterns. Those patterns tend to espouse using as little indirection as possible (OOP’s virtual inheritance is a prime example of breaking this), because indirection makes the code path hard to follow and hard to maintain, and it makes the code slower to top it all off. Modern Functional Programming Patterns are about clarity of code.


Wait, what? -No- language has an easy-to-understand explanation of how GC+finalizers works, and that includes O-O languages. For instance, O-O languages that use deferred reference-counting (common, b/c efficient) end up running finalizers at times completely unrelated to when the objects became unreachable. Just like FP languages.

FP languages (and OCaml, etc.) have quite clear descriptions and semantics for the parts that are describable without involving crazy low-level details. And that’s no different from O-O languages.


We must’ve lived in separate realities, because I clearly remember the last 15 years being filled with discussion of what the “essence” of OOP is, with people arguing for and against OOP flavours like Smalltalk, Io, C++, Java and Python, praising and decrying multiple inheritance and interfaces, desperately trying to find sensible examples of inheritance that are not just a technical artifact of languages forcing classes upon people, and that are not “Dog inherits from Animal”. How programs should be structured in a “true OOP way” is not really clear at all.

Even OOP languages have adopted FP ideas in spirit: Dependency Injection is a complex way of essentially passing functions to other functions, and similarly many “Design Patterns” are borderline trivial with first-class functions (even in primarily OOP languages like Python).

I can’t help you with your understanding, but I want to point out that OOP is not that clear either, and I think a lot of the reason for that was the bandwagon craze of the ’90s, thinking object orientation was the silver bullet that would magically solve programming complexity. Similar to the current craze of copying ideas from functional languages and transplanting them into imperative languages.


OOP left so many problems that it is like nuclear waste - poisoning the environment for decades. Of course it has its own niche applications, but it is far from universal. Dealing with the large amount of OOP code already written will be a bit like dealing with the droves of COBOL code rotting in the guts of huge banks and corporations.