I know this is a rookie question but… What exactly do functional programmers mean by state and how does state fit into a pure functional program?
My impressions are simple and probably wrong. A pure functional program can’t change the current state but it can create new state to hold new bindings. Basically, you can’t modify the value of an existing binding but you can create new bindings. Am I even close?
I think you are quite close. Pure functional programs are called “pure” because they do not modify the state of allocated memory; they have no “side effects”. This property is often pleasant because it supports the “if it typechecks, it works” philosophy during development. A pure function will always return the same result when called with the same inputs, no matter how many times it is called.
Side effects include reading from or writing to files, modifying global variables, using references, using mutable data structures, and so on. They are not always bad to use, but a program with many side effects is often harder to reason about, and unexpected bugs can creep in.
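To make the “same inputs, same result” property concrete, here is a small OCaml sketch (the names `add`, `counter`, and `bump` are just illustrative) contrasting a pure function with one that reads a hidden mutable reference:

```ocaml
(* A pure function: same inputs always give the same result. *)
let add x y = x + y

(* An impure function: a hidden mutable counter changes its result
   between calls, even with the same argument. *)
let counter = ref 0
let bump x =
  incr counter;
  x + !counter

let () =
  assert (add 2 3 = add 2 3);  (* always holds *)
  assert (bump 2 <> bump 2)    (* 3 then 4: same input, different outputs *)
```

The second assertion is exactly the kind of behavior that makes side-effecting code harder to reason about: you cannot understand `bump` without tracking the history of every prior call.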
Functional languages can use monads to model the fact that side effects are unpredictable. For example, in Haskell, writing a value of type X yields an IO () value, and reading a value of type X yields an IO X value. You can think of IO as a box, labeled “IO”, telling the user that the value inside might not be reliable, or that the operation producing it could fail. However, this syntax is quite heavy, and OCaml chose not to use it: ordinary (non-monadic) functions, or just 'a option (which actually is a monad), are easier solutions to use, hence this language’s “industry ready” reputation.
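Since I mentioned that 'a option is itself a monad, here is a small sketch of what that means in practice, using the standard library’s `Option.bind` (the `safe_div` helper is made up for the example):

```ocaml
(* 'a option as a lightweight monad: Option.bind chains computations
   that may fail, without the ceremony of Haskell's IO. *)
let safe_div x y = if y = 0 then None else Some (x / y)

(* Chain two divisions; any None short-circuits the whole pipeline. *)
let result =
  Option.bind (safe_div 100 5) (fun a ->
  Option.bind (safe_div a 2) (fun b ->
  Some (b + 1)))

let () =
  assert (result = Some 11);
  (* A failure anywhere in the chain propagates as None. *)
  assert (Option.bind (safe_div 1 0) (fun a -> Some a) = None)
```

This gives you monadic error propagation where you want it, while the rest of your OCaml code stays in ordinary direct style.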
I hope I have made the point a bit clearer. Note that I am not an expert either!
I have always found functional programming difficult because its fundamental ideas are great, but where they meet reality (i.e., IO monads) they can get a bit confusing and messy. I like that OCaml encourages pure code but doesn’t require it.
Yeah, you’re close. A functional way to model state is with functions: that is what pure FP is doing at the end of the day, once you strip away all the syntactic sugar and operators. I’ve written a bit more about this here: https://yawar.blogspot.com/2015/12/how-does-state-monad-work.html
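To give a flavor of “state as functions” without sending you straight to the blog post: in state-passing style, a stateful computation is just a function that takes the current state and returns a result paired with the new state. A minimal sketch (the names `tick`, `get`, `put` are mine, not from any particular library):

```ocaml
(* A computation over state 's producing an 'a is just a function. *)
type ('s, 'a) state = 's -> 'a * 's

let return (x : 'a) : ('s, 'a) state = fun s -> (x, s)

let bind (m : ('s, 'a) state) (f : 'a -> ('s, 'b) state) : ('s, 'b) state =
  fun s -> let (x, s') = m s in f x s'

(* Primitive operations: read the state, replace the state. *)
let get : ('s, 's) state = fun s -> (s, s)
let put s' : ('s, unit) state = fun _ -> ((), s')

(* A counter "mutation" with no mutation: return the old count,
   thread along the incremented one. *)
let tick : (int, int) state =
  bind get (fun n -> bind (put (n + 1)) (fun () -> return n))

let () =
  let (a, s1) = tick 0 in
  let (b, s2) = tick s1 in
  assert ((a, b, s2) = (0, 1, 2))
```

No binding is ever modified; “the state changed” just means the next function in the chain receives a different value. The state monad is this pattern with nicer syntax on top.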
A student of FP came to Daniel de Rauglaudre and asked how to achieve FP Nature. Daniel replied, “to achieve FP Nature, you must learn to program without side effects and other imperative features”. So the student went away to learn how to program in this style. He studied very hard so he could rid his programs of references and for-loops. He struggled to only use let bindings and let rec loops. One day, in order to find inspiration, he was studying the code of the Masters, and in reading the source of Camlp4, he saw a number of uses of references and for-loops! He rushed back to Daniel de Rauglaudre and exclaimed, “How can you tell me not to use imperative features when you use them yourself!” Daniel measured the student carefully in his gaze and replied, “But I already know how to program without side-effects.” At that moment the student was enlightened.
OK. So I think one of the things that FP aficionados tend to forget is how they got there. Most of us start off writing in some other language, and not functionally. I remember well the book Functional Programming by Peter Henderson (long out of print, but I’ll bet you can find it in libraries), and I hear good things about The Little Schemer. They teach you how to write rather nontrivial programs in a purely functional manner. Then when you go back to writing “real programs”, you have more of a choice of whether to write purely functionally than somebody who never tried it.
So what’s good about functional programming, and about banishing “state”? The standard answer [and correct] is “referential transparency”. [Very inaccurately,] It refers to the idea that the meaning of an expression can be divined by looking at the expression itself, and not at its surroundings. It means that equals-for-equals reasoning can be used on expressions. So when I have a program that’s misbehaving, I can pluck out sub-expressions, run them in a toplevel (for instance), and expect that those sub-expressions will behave in the toplevel (and produce the same results) as they do in the program. Obviously some of this gets harder to do with I/O monads and such, and I certainly won’t defend pure-functional programming languages. But the general idea is an excellent one, and an excellent reason for working through a book like Henderson’s, no matter what programming language you use in daily work.
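A tiny illustration of the equals-for-equals point, in OCaml (my own contrived example):

```ocaml
(* Referential transparency: a pure sub-expression can be replaced by
   its value without changing the program's meaning. *)
let pure = (2 + 3) * (2 + 3)
let () = assert (pure = 5 * 5)  (* substituting 5 for (2 + 3) is safe *)

(* With mutation, the same textual expression no longer stands for one
   value: evaluation order and history now matter. *)
let r = ref 2
let impure = (incr r; !r) * (incr r; !r)
let () = assert (impure = 12)   (* 3 * 4 in some order = 12, not 3 * 3 = 9 *)
```

In the pure case you can pluck `(2 + 3)` out, evaluate it in the toplevel, and paste the result back. In the impure case the two occurrences of the “same” expression denote different values, so that reasoning breaks down.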
When functional programmers use the term “state”, they mean “mutable state”, which is a somewhat bizarre way of referring to a mutable value that can hold the representation of only one state at a time. For example, a list of dollar amounts can represent all the successive states of your wallet. This isn’t what’s called stateful in functional programming. A stateful representation would be a mutable variable holding only the current amount of money in the wallet.
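The wallet example above can be sketched in OCaml; the helper names (`spend`, `spend_mut`) are made up for illustration:

```ocaml
(* Functional: the wallet's history is a list of successive balances.
   Nothing is mutated; each transaction builds a new list. *)
let history = [100]
let spend amount = function
  | current :: _ as h -> (current - amount) :: h
  | [] -> []

let history = spend 30 history  (* [70; 100]: old states still visible *)

(* "Stateful" in the FP sense: one mutable cell holding only the
   current balance; previous states are lost on each update. *)
let balance = ref 100
let spend_mut amount = balance := !balance - amount

let () =
  assert (history = [70; 100]);
  spend_mut 30;
  assert (!balance = 70)
```

Both represent a wallet after spending 30, but only the functional version keeps every past state around as an ordinary value you can inspect, compare, or roll back to.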