ThoughtStorms Wiki

Note : Phil is much more sympathetic to functional programming now than when he wrote some of the historical stuff (pushed down the page but not deleted yet)

FP is cool!

With FunctionalReactiveProgramming I may even have to take back what I say below about not doing input and output well.

And see ImmutabilityInProgrammingLanguages

Am now teaching HaskellLanguage, better brush up.

I like the analogy with SpreadSheets in this introduction to HaskellLanguage :

Debates and Rants

FP "easy to understand"

OTOH this is what gives functional programming a bad name : "Functional programs are often easier to understand. You should be able to understand the program without any previous knowledge of either Haskell or quicksort. The same certainly cannot be said of the C program."

This is so patently false it's unbelievable. Yep, the quicksort in Haskell is beautifully compact. I wish I had access to that kind of expressivity. But the flip-side of that is a kind of density. The same problem I have with maths. If you don't understand it, you can't decode it incrementally. (LookMathsUp)

You can't look at one bit and say, "OK, I see what that bit's doing. There's a 3 in the little box marked A. So, given that, let's look at what the next bit is doing."

Classic issue of "what's the easiest way to explain X" ... depends who you're talking to !


I think I'm saying something a bit different. Or at least, one level higher. Not "what is easiest to understand" but "what is easiest to work out how to understand". I dunno. Maybe you can make a case for collapsing these together and the distinction isn't such a big deal.


Update : Having just had a disastrous lecture where I tried to show both the Haskell QuickSort and a Pythonic translation of the C quicksort, I must confess I've a new respect for the Haskell claim.

Phil's new first rule of computer science lecturing : don't attempt to demonstrate an imperative QuickSort by stepping through it on the whiteboard. Especially when you haven't tried it at home. Firstly it's very long. Secondly it's easy to make a mistake.

A horrible lecture yesterday. I fell into one of those "mumbling into the whiteboard for 20 minutes with my back to the students" traps. Having started manually running through the quicksort, I found, after about 6 or 7 minutes, that I'd made a mistake, coming out of the first "increase-the-lower-bound" while loop too early. I apologised and went back to correct it, but it became extremely clear that actually working through the algorithm until the list was sorted would take far longer than I wanted to spend, and the students were already ignoring me and chatting among themselves. Admittedly some were looking at the paper copy I'd given them and trying to work it out. Others were staring glassy-eyed at me. So I asked if they preferred to work through it themselves or for me to continue. No one answered. Eventually one guy indicated that I should continue, but as I turned back I could tell perfectly well that even he wasn't really paying attention or following what I was saying. I panicked, took refuge in trying to work through it again for five minutes, and then gave up. "It does work," I tried to assure them, lamely. "Let's look at the Haskell version."

But by then I really didn't have the presence to summon their attention back. And my vocabulary and any semblance of grammatical competence had completely disintegrated. Even the recursive version takes time to write, and the audience were restive. Was I going to expect them to do the exercises before they left? No, I relented. This time they could take them home. That was the basic cue for some to start to get up and leave. "If anyone had any problems understanding this, come up and see me now and I'll explain individually" I invited. No one took me up on the offer.

One student, who had already taught quicksort on another course, took pity on me and came up and started to demonstrate that my program did really work. I let the others drift off without a word. A couple of the students stayed behind and finished their exercise. Their verdict : the Python was very complicated, but the Haskell was interesting and kind of self-explanatory.
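For anyone who wants to compare the two styles for themselves, here's roughly what the lecture contrasted, both in Python (my own sketch, not the actual handout; the imperative version uses a Lomuto partition, which may not be the exact one from the class) :

```python
def quicksort_functional(xs):
    # Functional style, mirroring the compact Haskell version :
    # no mutation, just filtering and recursion.
    if not xs:
        return []
    pivot, rest = xs[0], xs[1:]
    return (quicksort_functional([x for x in rest if x <= pivot])
            + [pivot]
            + quicksort_functional([x for x in rest if x > pivot]))


def quicksort_imperative(xs, lo=0, hi=None):
    # Imperative style, the "Pythonic translation of the C quicksort" :
    # in-place partitioning, index juggling and all.
    if hi is None:
        hi = len(xs) - 1
    if lo >= hi:
        return xs
    pivot = xs[hi]
    i = lo - 1
    for j in range(lo, hi):
        if xs[j] <= pivot:
            i += 1
            xs[i], xs[j] = xs[j], xs[i]
    xs[i + 1], xs[hi] = xs[hi], xs[i + 1]  # put the pivot in its final place
    quicksort_imperative(xs, lo, i)
    quicksort_imperative(xs, i + 2, hi)
    return xs
```

The functional one you can read top to bottom in one go; the imperative one is exactly the sort of thing that goes wrong on a whiteboard.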


Pros and Cons Rant

But, honestly!!! It's kind of hilarious reading the online stuff about Haskell and the "benefits of functional programming". First, they always seem to compare it with C. So benefits of functional programming (hereafter BOFP) include dynamic memory management and strong type-checking. Beyond that it's back to "simplicity" and "elegance" and the wonders of no assignment.

The Haskell guy (grudgingly, grudgingly) admits that you can do higher-order functions and pass functions as arguments in Java, but then says : "In Java you often have to provide a Comparator object for trees and other standard data structures. The difference is that the Haskell way is a lot more intuitive and elegant (creating a separate type just for comparing two other types and then passing an object of this type is hardly an elegant way of doing it)."

No mention that there are plenty of imperative languages with elegant Lambdas, closures, blocks etc. or that even C can pass a pointer to a function without "creating a separate type".
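The point is easy to see in Python, say, which happily takes a bare function where Java wants a Comparator object (my illustration, not from the article) :

```python
# Any plain function can serve as the "comparator" :
def by_length(s):
    return len(s)

words = ["banana", "fig", "cherry"]

# Pass a named function, a builtin, or a lambda directly.
# No wrapper type required.
print(sorted(words, key=by_length))        # ['fig', 'banana', 'cherry']
print(sorted(words, key=len))              # same thing with a builtin
print(sorted(words, key=lambda s: s[-1]))  # sort by last letter
```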

Hmmm. Is this a coup? Are functional programming people actually getting me to defend Java?

Don't panic. No, the next bit makes it abundantly clear that, despite all the shouting about strong typing (better than malloc), Haskell's automatic type-inference and preference for the most general type actually tilt towards sanity. Even I can get behind a "type as much as you need" system like that.

At the end, of course, there's the usual question.

So if Haskell is so great, how come it isn't "mainstream"?

And the usual sort of attempt to explain it away in terms of LockIn by other languages or stupid programmers. Here I really want to shake this author (and his like). And shout :

Look! Functional Programming is an evolutionary dead-end. It's going precisely nowhere because

it can't do input and output!

The truth is 99% of what programs do is communicate with things outside themselves ... either users, or other resources in the operating system, or databases, or other services over the internet, or chemical plants, or ...

But the functional paradigm (proudly) doesn't do input and output. Sure, you can graft it on. Functional programming languages have to have some kind of I/O. But you can't do it and maintain the purity that everyone seems so excited about. Interaction with the outside world requires programs that maintain the state of their dialogue. It requires the sequence of the interaction to be managed explicitly. FP, by ruling out both state and explicit control of sequence, basically declares that it isn't interested in being used for 99% of the things we want computers to do.


I totally agree. SJ


1) Hmmm. What about stateless web-servers? And Unix tools designed to be chained together through piping? These don't need to hold any state or support any finer-grained interaction beyond the initial call and response. FP might suit these apps.

: Stateless web-servers do store state in a database. So an FP CGI script needs to sully itself to that extent. But I admit there may be more scope there than I originally gave credit for. Unix tools, yep, maybe those too. (See also TheArtOfUnixProgramming)

2) They think they've solved it with "worlds" :

: Yeah, but I don't believe them. Worlds are a kludge. First, they are not in any sense "elegant" or "easy" to understand. Second, the "impure" bit that talks to them has reverted to being "imperative programming". The model of a thin imperative shell over a pure functional core only holds if the amount of IO actually is a thin shell's worth. The higher the ratio of IO to non-IO, the larger your shell becomes, and as the guy says, "If you wish to write imperative programs, you're better off using an imperative language!" Which is pretty much the reason people do use imperative languages.
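The "thin imperative shell over a pure functional core" model is easy to sketch in Python (my illustration; summarise is a made-up example function, not from the original) :

```python
import sys

# Pure core : no IO at all, trivially testable.
def summarise(lines):
    words = sum(len(line.split()) for line in lines)
    return f"{len(lines)} lines, {words} words"

# Thin imperative shell : ALL the IO lives here and nowhere else.
def main():
    print(summarise(sys.stdin.readlines()))
```

The complaint above is that this split only stays pretty while main stays this small; the more dialogue your program has with the outside world, the more of it ends up in the shell.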

3) EmacsLisp?

I really find it wrong that the value of computer languages is discussed by computer scientists (who are in fact mathematicians). All computer languages are mathematically equivalent (in some sense; I would love to see an analysis below the trivial polynomial time); all are equally good for the machines. For the machines it does not matter, and what is left is the human side, so clearly the value of computer languages should be discussed by psychologists, not by mathematicians.

By the way, my Master's thesis was another proof of the fact that type inference for the ML language (and I believe for others as well; it was pretty general reasoning) is complete for Deterministic Exponential Time (that is, you can write programs in ML that take exponential time, as a function of the length of the program, to compile).


Another issue. You can't actually find a tutorial for a functional programming language which doesn't draw all its examples and explanations from mathematics. From initial demonstrations of finding the greatest common divisor through to explaining list comprehensions in terms of set comprehensions. FP is designed by pure mathematicians, for pure mathematicians and the kinds of problems they're interested in.

If you want to try to understand FP (or teach it) you have to start by learning (teaching) this whole mathematical world-view and set of problems :-(

Anyone know a tutorial for FP which doesn't start assuming such knowledge and draws its examples from a different domain?


Offhand ideas:

  • I haven't read this book but it sounds like your request: "Rather than using the conventional mathematical examples commonly found in other programming language textbooks, the author draws examples from multimedia applications, including graphics, animation, and computer music, thus rewarding the reader with working programs for inherently more interesting applications."
  • While this doesn't specifically teach just functional programming, this online book builds stuff like Shoutcast servers and unit test frameworks. Common Lisp is a multiparadigm language – the advantage is you get to know the natural limits of what functional programming can express well, without being forced to artificially stuff everything in that single paradigm. You may also like this: They don't talk about monads though, if that's what you want.

Tayssir John Gabbour

See also :



Added 2018-10-08 : Originally 2018-10-08

Hardware support for FunctionalProgramming

Quora Answer : Why is functional programming gaining popularity lately?

Dec 15, 2013

The main reason, of course, is that computers have got fast enough that FP isn't paying a significant performance cost compared with C. A lot of languages these days are run on virtual machines, with garbage collection and late-binding, so FP no longer looks exotically wasteful : these and FP's other dynamic strengths are now feasible.

Furthermore, FP's immutability lends itself to concurrent / parallel programming that takes advantage of multi-core and distributed computing. So performance has gone from being a perceived weakness of FP to being a perceived strength. (FP fans will argue that the earlier perceptions were always wrong, but that's a different issue.)

Another point is that there are now a lot of good programmers with experience of OO and with experience of where OO "goes wrong" (large systems written by mediocre programmers, filling up with cruft). A lot of these people are looking for "the next big thing" that will help them escape the tar-pit.

Right now, FP is the only real offer in town. It's been the province of very good programmers who've been writing excellent code, so it looks very smart. (We've yet to see, if FP goes mainstream, what kind of a mess a bunch of mediocre programmers will be able to make with macros and monads etc. There may be realms of debugging pain that no-one has yet dreamed of when it comes to sorting out a 10-year-old enterprise program written in Clojure.)

Quora Answer : Will functional programming become mainstream in the 2020s?

Aug 26, 2019

Some ideas from it will.

I think the biggest idea that's been promoted in FP is "Reactive Programming", where you specify a data-flow dependency graph between elements of your program (particularly the data-model and UI) but don't have to imperatively push the data from one part to another.

I think this idea, and native support for it, will become more prominent in the next generation of languages.

We see it today in web-frameworks like React. And in languages like Elm. It wouldn't surprise me if some version of Javascript gets native support for it eventually.

And if I were designing a new language today, I'd definitely be looking at ways to free people up from imperatively pushing data around.
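A toy sketch of the reactive idea in Python (my illustration; real systems like React or Elm are far more sophisticated) : cells declare what they depend on, and changes propagate through the dependency graph automatically, spreadsheet-style.

```python
class Cell:
    # A spreadsheet-style cell : either a plain value, or a formula
    # over other cells. Setting an input recomputes all dependents.
    def __init__(self, value=None, formula=None, inputs=()):
        self.formula, self.inputs = formula, list(inputs)
        self.dependents = []
        self.value = value
        for c in self.inputs:
            c.dependents.append(self)
        if formula is not None:
            self.recompute()

    def set(self, value):
        self.value = value
        for d in self.dependents:
            d.recompute()

    def recompute(self):
        self.value = self.formula(*(c.value for c in self.inputs))
        for d in self.dependents:
            d.recompute()


price = Cell(10)
quantity = Cell(3)
total = Cell(formula=lambda p, q: p * q, inputs=[price, quantity])
print(total.value)  # 30
quantity.set(5)     # no imperative push needed ...
print(total.value)  # ... total updated itself : 50
```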

Other ideas from FP : immutability, laziness, sophisticated type systems are also likely to become more prominent.

Whether that means that today FP languages become the most mainstream, I don't know. People tend to like hybrids that kludge new ideas into things they know (eg. that's plausibly why C++ / Java became the OO standards, rather than a Smalltalk offshoot). Similarly, I can imagine that most FP ideas find their way into Javascript or a Javascript derivative.

Or from languages I used to be a bit dismissive of, like Elm, Dart, Kotlin, as being neither one thing nor the other. If they hit the right sweet spot between what people know and a good new idea, perhaps they'll take off.

I personally love Clojure. And I think it will be bigger and more used in the future. But whether it gets to top 10 ... I mean I hope it does, but I wouldn't bet a lot of money on it.

Quora Answer : What is the most cool thing you can do with functional programming languages?

Feb 15, 2016

Here's what I discovered when I played with my first functional language, Erlang, compared with Python (a language I loved passionately and still like a lot).

I estimated my code was about a quarter of the length of similar things I've written in Python.

I've since got very into Clojure. I'm pretty sure most of what I do in Clojure would take 10x as many lines in Java.

I sure as hell LIKE being able to do my work in a quarter to a tenth of the number of lines of code. To me, that's already pretty cool.

Now WHY do we get that kind of improved productivity?

Well, one key thing is the higher-order functions.

You all know this, right :

(map my-func my-collection)

(or for some people)

myCollection.map(myFunc)
We all know that myFunc can be anything. What's not so obvious when you first see this is that code like this can be written for any collection, not just a simple list or array. So you can have equivalent crawlers / mappers / visitors for trees and remote databases and file-systems and streams of future user input events etc.

What this means is that whatever data you keep in your model, you usually only have to write functions that crawl it, or collapse it or filter it, ONCE. And everything else you want to do becomes just an argument to your map, fold or filter.
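The "write the crawler once" point in concrete form : the same map idea applied to a nested tree rather than a flat list (Python, my illustration) :

```python
def tree_map(f, tree):
    # Apply f to every leaf of an arbitrarily nested list,
    # preserving the shape : map generalised beyond flat lists.
    if isinstance(tree, list):
        return [tree_map(f, branch) for branch in tree]
    return f(tree)

print(tree_map(lambda x: x * 2, [1, [2, 3], [[4]]]))  # [2, [4, 6], [[8]]]
```

Once tree_map exists, every "do something to every leaf" job is just a different f; you never write the traversal again.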

In most OO / procedural languages, you are writing code to loop through your data-structures all over the place. And every time you modify your data-structure you end up having to update all that code again. In FP you usually have map, fold and filter already given for built-in structures. And can quickly construct equivalents specific to your custom collections. Once you have these, you never have to think about them again ... unless you change your data-model. This eliminates huge amounts of redundancy.

The next thing that impressed me was laziness.

Laziness really shines when you want to combine two or more collections. Say I want to zip two lists together. Without laziness, I worry about them being the same size. And have to handle the edge cases where they aren't. With laziness, I ignore this ... I zip and the language just stops when one or the other list is exhausted.

Once again, the principle scales beyond the simple example of lists. Any kind of combination of things ... joins and Cartesian products, applying handlers to incoming events and callbacks, co-ordination between asynchronous processes etc. ... suddenly stops being hard and becomes much more straightforward when you have built-in laziness.
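Python isn't lazy by default, but its iterators give the same flavour (my illustration, not from the original) :

```python
from itertools import count

# zip just stops when the shorter side runs out : no size checks,
# no edge-case handling, and one side can even be infinite.
letters = ["a", "b", "c"]
numbers = count(1)  # an endless lazy stream : 1, 2, 3, ...

print(list(zip(letters, numbers)))  # [('a', 1), ('b', 2), ('c', 3)]
```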

Finally, with Lisps, there's homoiconicity. Which I wrote more about here : TheCostOfSyntax