Do you enjoy programming?

In the book ML for the Working Programmer, the author writes (my emphasis):

Functional programming and logic programming are instances of declarative programming. **The ideal of declarative programming is to free us from writing programs — just state the requirements and the computer will do the rest.** Hoare (1989c) has explored this ideal in the case of the Greatest Common Divisor, demonstrating that it is still a dream. A more realistic aim for declarative programming is to make programs easier to understand. Their correctness can be justified by simple mathematical reasoning, without thinking about bytes. Declarative programming is still programming; we still have to code efficiently.

The bolded part about the ideal of declarative programming just doesn’t resonate with me at all, as I enjoy programming and do it as a hobby rather than for employment (and I particularly enjoy programming in ML languages).

Sometimes I wonder whether I’ve wandered off into a community whose goals and ideals with regard to programming are quite different from my own. The ideal of having all the code written for oneself is understandable, and it makes sense for someone who just wants to get things done, but I enjoy the process of programming and the learning and challenge that come with it, so it’s not an ideal I share.

So I wanted to ask: is this an ideal shared by many or most people here, or by the functional programming community more broadly? It’s a question I’ve asked myself before, but I didn’t have a way to answer it, so I hope nobody minds me asking here.

3 Likes

I interpret that ideal in the same category as “I would like to be able to fly”.

In general, I think it’s fair to say that the FP community (or particular language communities within it) places a relatively high value on elegance, as well as on learning and understanding things. In fact, there’s a meme that functional programmers are ivory-tower academics who don’t ship things: the opposite of just getting things done.

3 Likes

A long, long time ago, there was a new invention called ‘automatic programming’. It freed the programmer from having to code their programs by hand, in full detail. What was that invention? Haha, the assembler. I kid you not.

There have only been two examples of programming where the programmer does not actually specify the -algorithm- to be used (and the second one is debatable): spreadsheets and relational databases. Otherwise, I think it’s pretty clear that all we’re doing is raising the level of abstraction, bit by bit.

P.S. I’m very aware that when real programmers use RDBs for real, they’re very, very aware of which indices are being used, in which order, for which queries. That’s what I mean by “debatable”. B/c for some naive level of use, an RDB really does free the programmer from having to think about -how- their query will be executed. But yeah, it’s a pretty naive level of use.

4 Likes

There are a good few people who wish their programs could be inferred from their types, rather than the other way around. Either way, the stronger the type system gets, the more it resembles a programming language. There have been examples of things that were intended to be inert (e.g. homegrown configuration DSLs) turning into esolangs the second their scope is broadened to suit more general-purpose applications. So I’d say this ideal doesn’t fully relieve you of programming; it just attempts to limit how much the domain of discourse is muddied by implementation details.

As for a response to the thread title: yes and no. There’s definitely more tedium in programming than programmers like to admit (especially in the day-to-day of software engineering). There are languages that make many things tedious to express and laborious to actually commit to code; worse yet, you can easily find people who think there’s something noble in wasting their own time and adopting error-prone programming languages. Over time, I’ve found that I resonate more with the FP and PL communities, which seem to deeply value the ideas themselves (in all their incarnations across different programming languages) more than any one instance of them. So my interest in programming is fueled by those things, with the act of programming mostly being a means to an end (a vehicle for the ideas).

4 Likes

This is not even about having elaborate types and being error-prone. Rust inherits from C++ the “noble” idea of having value types infected by memory semantics.

1 Like

Above all, I enjoy programming when I see that the things I built “click” and I’m starting to get power “for free”, if you know what I mean. The abstractions worked out, and I no longer mess around in imperative spaghetti written by others 10 years ago, but rather I move bigger pieces or symbols in a flexible manner.

It’s also nice to be able to surprise clients by saying this or that change they required will be easy to achieve. :innocent:

1 Like

just state the requirements and the computer will do the rest

I think this is still “writing programs”. (Also, I enjoy writing programs and evidence shows that I will do it despite negative incentives.)

4 Likes

[speaking for myself, but maybe this is a widespread sentiment]

We work (in the types) to bring invariants forward from being merely expressed in our code to being expressed and -enforced- in the types, so that when we then write code, we cannot break those invariants. We think of a type system that allows us to express and enforce more and stronger invariants as a stronger, better type system.

We dispute with each other whether some particular invariant or class of them should rightly be encoded into the type system (“is the type system engulfing what rightly belongs in semantics”) but really, it’s a smooth spectrum. I was corresponding with an old-skool MLer who switched to Python when he went commercial, and he was telling me that massive unit-testing has replaced types in his arsenal (b/c Python lacks strong typing and the current state of type annotations isn’t sufficient). So what for us is -typing- is (for them) just more invariants to be checked in the code and unit-tests.
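To make that concrete, here’s a minimal sketch (my own example, all names invented) of pushing one invariant into the types: a list that is non-empty by construction, so that taking its head is a total function and the “empty list” failure case simply cannot be written.

```ocaml
(* A non-empty list: the invariant "has at least one element" lives
   in the type itself, not in a runtime check or a unit test. *)
type 'a nonempty = Cons of 'a * 'a list

(* Total: there is no empty case to handle, so this cannot raise. *)
let head (Cons (x, _)) = x

(* Forget the invariant when interfacing with ordinary list code. *)
let to_list (Cons (x, xs)) = x :: xs

let () =
  let ne = Cons (1, [2; 3]) in
  assert (head ne = 1);
  assert (to_list ne = [1; 2; 3])
```

In the Python-and-unit-tests world described above, the same invariant would instead be an `assert` (or a test) at every call site that assumes non-emptiness.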

Sure, but Jean-Yves Girard’s linear logic did the same without inheriting syntactic nonsense from C++.

Memory semantics “infecting” your types is a feature or a curse depending on your goals. It’s certainly a good thing when you’re drilling into the details of a performance issue and trying to tune memory layout, codify ownership, ensure “deterministic” allocation/deallocation, etc. On the other hand, it gets in the way of prototyping and thinking “in the large”.

I would love to have a programming language that satisfied both situations, where you could make the trade-off along a continuum. I’m not sure this could actually work well in practice though, as the mixing of high- and low-level design patterns in the same language/code base seems likely to lead to bad interoperability/impedance mismatches. (This is all speculation–I don’t have any concrete examples of this in the wild.)

Rust and OCaml both clearly fail at the extreme ends of the spectrum but excel in their niches. :person_shrugging:

2 Likes

My enjoyment of professional programming is somewhat curtailed by the fact that I am frequently gluing together large inscrutable libraries to create programs which deal exclusively with solved problems in programming languages I don’t particularly care for—which is to say, a large part of my job involves creating websites. Not a fan of this kind of work. There is occasionally a bright glimmer when I get to write an algorithm here or there. Still, it’s a living, and I could think of worse ways to earn it.

However, I absolutely adore working on my mentally-stimulating hobby projects, mostly in OCaml—a language which is more declarative than those I use for my boring work.


But more to the point, I think the words of the author are being misinterpreted to some extent here—though we could argue his words are poorly chosen.

What is programming if not stating the requirements of a program, albeit in minute detail?

Working at a higher level of abstraction simply means that these details are less minute than they might be at a lower level of abstraction. In OCaml (and most other languages these days), we don’t need to speak about where values will be stored—which stack offset, which register or whether we just ask the OS for some space on the heap.

The point of high-level programming is less that we do not program, and more that we describe programs on our own terms rather than those of the machine which is to execute the program. In the case of functional programming, that means describing our computations in terms of mathematical expressions and logic.

What we consider programming today would barely be recognizable as such by our predecessors who came up in the days of punch cards. We’re just writing pseudo-code, and cleverly written programs are doing the “real programming”.

I’m not advocating such a perspective. I’m just trying to explain the context for what “free us from writing programs” might mean coming from a gentleman who has been programming since near the beginning of computing.

The classic functional programming text, SICP, takes a contrasting view of programming: that programming has little to do with physical computers, but is actually about the description of computations, i.e. stating the requirements of the program. Whether the program executes on a physical machine is of no consequence, because if the description is clear and correct it can be translated into a machine-executable form, either by a compiler or by a programmer.


Now, if you’re fond of shuffling bits around and thinking about memory alignment, I admit that the ML / declarative programming community is—in general—not too concerned with these things. If you prefer to describe your computation in mathematical and logical terms, letting the language implementation handle the bits and bobs of the machine, you are in the right place.

2 Likes

To me, this sounds like a description of program synthesis from specifications (an old AI topic that is again all the rage nowadays), but not of declarative programming.

The goal of declarative programming, in my opinion, is to have programs that can be read as mathematical definitions (without the help of a program logic), making it easier to relate them with a mathematical specification. Examples: pure functional programming (but nontermination can cause surprises), logic programming (same proviso), database query languages.

Writing declarative programs is still programming, just focusing less on the “how” (how the computation proceeds, e.g. memory management, pointers in data representations, the exact sequencing of computations) and more on the “what” (what does this program compute?).
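A tiny OCaml illustration of that “what” versus “how” contrast (my own example, not from the post): the same computation written as a mathematical definition and then spelled out imperatively, with explicit mutation and sequencing.

```ocaml
(* "What": the sum of squares, read as a definition over the list. *)
let sum_squares xs = List.fold_left (fun acc x -> acc + x * x) 0 xs

(* "How": the same computation as a sequence of state transitions. *)
let sum_squares_imp xs =
  let total = ref 0 in
  List.iter (fun x -> total := !total + x * x) xs;
  !total

let () =
  assert (sum_squares [1; 2; 3] = 14);
  assert (sum_squares_imp [1; 2; 3] = 14)
```

Both compute the same value; the declarative version just says nothing about evaluation order or storage.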

8 Likes

Now that’s a funny thing: a fellow grad student who was an MIT undergrad (and took that course) was discussing all sorts of low-level stuff. We were talking about machine architecture, about compiling to machine code, and various other things. And he insisted that he learned all those things in SICP, via programming in Scheme. Far from “stating the requirements of the program”, they got as down-and-dirty as they needed to get, to understand whatever the subject matter was.

And I had to agree with him: when I program in ML, I can become acutely aware of the way my code maps down to the machine, and if I could not, that would be a reason to not use ML.

Indeed, it is one of the distinguishing marks of excellent programmers even today, that whatever language they’re using, they are aware of how their code actually runs on the hardware. Or at least, -can be aware when it’s appropriate-.

What we consider programming today would barely be recognizable as such by our predecessors who came up in the days of punch cards.

grin Maybe. Maybe not. (Some of) Those predecessors used languages like ALGOL – call-by-name – and LISP, and I’m talkin’ in the 60s, not the 80s.

3 Likes

I don’t disagree at all. SICP indeed has a chapter on implementing a virtual machine (a register machine, they call it) to elucidate the details of how computers actually work, and understanding what the hardware is doing is necessary for almost any non-trivial programming work. This is part of the “interpretation” in “Structure and Interpretation”. Still, it is a central contention of the book that the primary purpose of a program is a clear description of the computation.

And I was perhaps overstating my case. While OCaml programmers rarely think about stack offsets and registers, most of us do think about allocations and boxing in hot loops. Memory is still a thing, though perhaps I don’t speak for everyone: I think most of us prefer not to worry about the memory model until it becomes a problem, since our short-lived allocations are quite cheap in any case. I’d say the general approach in OCaml is to find the best algorithm first and to worry about the memory model once it comes to trimming the fat.

I think you’d also agree that as the official “systems jock” of the OCaml community, you’re probably more acutely aware of and concerned with runtime behavior than most of us—although there are plenty of others who fall into that category. (Anil comes immediately to mind, and Jane Street is also doing a lot of performance engineering, which OCaml definitely facilitates to some extent.)

Anyway, I was not trying to say that machine behavior is unimportant to OCaml programmers (though I think I did actually say something to that effect), but rather that this is likely what the author of the quote in the original post meant about “freeing us from writing programs.”

Personally, I’d rather spend more of my mental energy creating clean abstractions than accommodating the machine, but of course accommodating the machine is still very much a part of programming, and I’ll even admit to enjoying it from time to time. :joy:

1 Like

ML is descended from a language called ISWIM (“if you see what I mean”) proposed by Peter Landin. In particular, let-bindings and Haskell’s “where” clause are both in ISWIM; ISWIM, like ML, has an imperative fragment to deal with problems for which there is not a good pure functional solution.

In his paper, Landin comments somewhat critically on the term “declarative programming” and suggests that a better term for languages in this family is “denotative languages”. I found his comments persuasive, because I think the real idea of functional programming is to think in terms of definitions and expressions that are assembled into more complex and complete definitions, in the style of a mathematics textbook, complementing the imperative style, which describes a sequence of state transitions. The term “declarative programming” is too vague: Landin points out that something like let x = cubic_root(3, 4, 5, -1) is “declarative” because it describes x by its properties (it is the root of a cubic polynomial with the given coefficients), but this is not immediately algorithmic (unless you have some library or language primitive that is able to find such roots, but that’s not his point). ML does not deal with hypothetical values subject to constraints; it deals with expressions which can be evaluated to a value.
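To sketch Landin’s point in OCaml (my own example; `cubic_root` is his hypothetical, and the bisection algorithm here is just one arbitrary way to make such a binding evaluable): in ML you cannot bind x merely to a property, but you can bind it to an expression that computes a value satisfying that property.

```ocaml
(* Naive bisection: an explicit algorithm that *computes* a root,
   rather than a declaration that x *is* a root. Assumes f lo and
   f hi have opposite signs. *)
let rec bisect f lo hi =
  let mid = (lo +. hi) /. 2.0 in
  if hi -. lo < 1e-9 then mid
  else if f lo *. f mid <= 0.0 then bisect f lo mid
  else bisect f mid hi

(* x^3 - 2 = 0 has the root 2^(1/3), approximately 1.2599. *)
let x = bisect (fun x -> (x ** 3.0) -. 2.0) 1.0 2.0

let () = assert (abs_float (x -. 1.2599210498948732) < 1e-6)
```

The let-binding is “declarative” only in Landin’s weak sense: the name x stands for whatever this concrete, evaluable expression produces.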

(Unfortunately this doesn’t capture logic programming well)

3 Likes