A Year of Functional Programming (japgolly.blogspot.com.au)
174 points by Garbage on June 9, 2014 | 165 comments



" Recently I looked at some code I wrote 8 months ago and was shocked! I looked at one file written in “good OO-style”, lots of inheritance and code reuse, and just thought “this is just a monoid and a bunch of crap because I didn't realise this is a monoid” so I rewrote the entire thing to about a third of the code size and ended up with double the flexibility. Shortly after I saw another file and this time thought “these are all just endofunctors,” and lo and behold, rewrote it to about a third of the code size and the final product being both easier to use and more powerful."

That would be so much more useful if it came with the examples.


If you're interested in examples of useful monoids and similar structures, you may want to have a look at the Algebird[0] project or the recent work on CRDTs[1]. The code savings in these projects come from being able to write the complex distributed aggregation / conflict resolution / etc. code just once, and reuse it with a menagerie of useful implementations.
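To make that concrete, here's a minimal Haskell sketch of the shape (my own illustration, not Algebird's actual API): the combine logic is written once for any Monoid, and associativity is what lets the per-shard work happen on separate machines.

    import Data.Monoid (Sum(..))

    -- Written once; reusable for counts, max/min, sets, sketches, CRDT-ish values
    aggregate :: Monoid m => [[m]] -> m        -- one inner list per shard
    aggregate shards = mconcat (map mconcat shards)

    -- e.g. aggregate [[Sum 1, Sum 2], [Sum 3]] == Sum 6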

Personally, I like structuring things using these tools when I can get away with it. They're simple and broadly applicable abstractions, and mean I can reuse both my intuitions and my code in wildly diverse situations.

[0] https://github.com/twitter/algebird [1] http://highscalability.com/blog/2010/12/23/paper-crdts-consi...


What we want is a comparison: what was the OO code, and what did the functional code turn out to be? That way, we can judge for ourselves.

Typically, these examples take low-end crappy OO code and convert it to high-end elegant FP code. But this doesn't really convince anyone.


I don't know the OP's examples, but I think he is saying something subtly different: that he took (his own) low-end crappy OO code and converted it to high-end elegant OO code, using the insight gained from how he would have written it as FP.


I didn't get that from the article, but they aren't explicit about it.


I got that too, and that's exactly why I would like to see the code before and after. That's far more interesting than the rest of the article in my opinion.



1. What does this code do? 2. Where is the previous code this replaced that was 3 times the size?


I've seen a lot of code that was larger than necessary because the author didn't know what a Monoid or a Functor was.

Cf. "Visitor pattern"

I wasn't ignorant enough at the time I wrote the code to fuck up in the way you describe. I can only show an example of stuff I've made.

Here we go, three more uses of monoidal functions (mempty, mappend):

https://github.com/bitemyapp/bloodhound/blob/master/Database...

It's not just about code, it's about conceptual memoization.

If you know what a Monoid is and you're learning how a data type like ByteString works, you can intuit how to "stitch" bytestrings together as well as how to get an "empty" ByteString.
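For example (a tiny sketch, but ByteString really does have this Monoid instance):

    import qualified Data.ByteString.Char8 as B
    import Data.Monoid (mempty, (<>))

    -- "Stitching" and "empty" come straight from the Monoid instance
    greeting :: B.ByteString
    greeting = B.pack "hello, " <> B.pack "world"

    blank :: B.ByteString
    blank = mempty  -- the identity: blank <> greeting == greeting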

It's about killing off unnecessarily ad-hoc APIs as much as anything.

Being able to realize when you're implementing a Functor/Monoid/Applicative/Monad is powerful because it lends you intuition about what the rules for a well-behaved API are and allows powerful, polymorphic code reuse.

Consider the reusability of the functions here across the vast set of Monadic types out there: http://hackage.haskell.org/package/base-4.7.0.0/docs/Control...

This is worth pondering as well: https://github.com/jwiegley/simple-conduit

The code savings described are usually associated with not having to rewrite, for your own one-off version, the polymorphic generic functions that already exist in the Haskell ecosystem. The conceptual power of having a community that understands these patterns is even more important.

You'll learn more and faster by learning Haskell.

https://github.com/bitemyapp/learnhaskell


The second design is usually the better design. It's not something special about functional programming. The first version is mostly just about getting it working and released. Revisiting the code always gives you an opportunity to refactor, simplify, and re-design, which leads to a reduction in complexity, a reduction in size, and an increase in reuse.


Exactly, a 2/3 code reduction sounds so great that I would like to be able to see it.


I, too, would like to see examples.

Though I will note that I recently saw some OO code written by a colleague that takes in a set of data names, runs a set of processes on them (translating the names to process-specific ids, fetching the data, turning it back into generic names) and then outputs the results to one or more places. It was a pretty standard class hierarchy for reusing common code, selecting optional processing and so on. I've been spending a lot of time doing functional programming lately and would call myself a bit of a functional programming enthusiast, so when I looked at the code, my first thought was that each of these parts is just a function that gets passed into the appropriate place as a higher order function. Smaller, simpler and more flexible, because you can customise what happens just by changing a function.

It occurred to me that when I think about problems in a functional programming mindset, there really isn't a need for crazy class hierarchies at all, because they're really just a dispatch mechanism for code reuse, but higher order functions acting on pure data structures handle 90% of this just fine with very little work. The other 10% are the complex cases that FP solves using fancier techniques like pattern matching (in the simpler cases) or multimethods or something similar in the complex cases.
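A minimal sketch of that shape in Haskell, with hypothetical names standing in for my colleague's code:

    type Name = String

    -- Each "subclass hook" becomes a plain function argument
    runPipeline
      :: (Name -> Int)        -- translate a name to a process-specific id
      -> (Int -> IO String)   -- fetch the data for that id
      -> (String -> IO ())    -- output the result somewhere
      -> [Name]
      -> IO ()
    runPipeline translate fetch output =
      mapM_ (\name -> fetch (translate name) >>= output)

Customising the behaviour is just passing different functions; no hierarchy needed.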


"there really isn't a need for crazy class hierarchies at all, because they're really just a dispatch mechanism for code reuse, but higher order functions acting on pure data structures handle 90% of this just fine with very little work"

You did a better job of putting my thoughts to words than I could.


Nice article!

I have been using Clojure for years, initially because a repeat customer mandated its use, later for my own projects because it reduced my development time and is fun to code in. I tried Scala (really liked Martin Odersky's Coursera class!) but it did not stick.

That said, Haskell has started to win my mind share. At least for my own projects I have been mostly using Haskell this year, with some bits of Ruby for quick text processing and munging. When my Haskell abilities improve I would like to start using it for small text processing utilities also.

Sometimes Haskell can be as frustrating as hell (yes, cabal, I am thinking of you). It can also be frustrating when writing a bit of useful code takes a while because of the never-ending (for me) learning curve. However, when I am in the flow with Haskell, it feels like my 30+ years of Lisp experiences, but with a stronger language. It has become a cliche that Haskell's type system guides you away from bugs, but it is true. I never had that feeling with Common Lisp and Scheme (but I have not tried Typed Racket yet).


That's a bit off-topic, but may I ask why Haskell over Clojure? What are the differences? You partly addressed that in the 2nd part, but I would like to know more. I never did any serious FP, so I would love to know where to start: Scala? Clojure(Script)? Haskell? F#?


Here's an article that was on the frontpage recently; some reflections from a Clojurist that switched to Haskell:

http://bitemyapp.com/posts/2014-04-29-meditations-on-learnin...


I'd start with Haskell or Scheme. They are a lot easier to get started with and learn than anything on the JVM. For Haskell, this course is awesome: http://www.seas.upenn.edu/~cis194/lectures.html

For Scheme, just do the first two chapters of SICP. In fact, do that anyway!


It is mainly the stronger type system. I like the upfront checks. That said, I have used Clojure a lot - a fine language. BTW, I think that learning some Haskell will help with Clojure.


I'm pretty ignorant of both languages (but I have novice experience in both): Moving to Haskell, don't you miss Lisp macros? How do you deal with that?


For the basic "I want a better conditional abstraction", no, I don't miss it. Lazy-by-default Haskell makes that sort of tool even easier to make.
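A tiny sketch of why: laziness means an ordinary function can serve as a new conditional, where Lisp would need a macro to keep both branches from being evaluated.

    -- Only the chosen branch is ever evaluated
    myIf :: Bool -> a -> a -> a
    myIf True  t _ = t
    myIf False _ e = e

    -- myIf (x > 0) "positive" (error "never forced when x > 0")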

I've also been realizing that a lot of things I would use a macro for in Lisp I can get with monads. Not everything, but so far, I haven't really missed macros much at all. They are both powerful tools for abstraction. I'm starting to think monads are a bit less powerful, but much simpler to get right and maintain long term.


Not at all trying to diss the OP, who wrote an interesting article, but if anything, this has put me off FP a bit. It really does seem like a lot of effort to go through for unclear benefits.

I have no doubt that learning FP will make me a better programmer (so perhaps it is worth it for that alone) but it seems to me I should rather spend those hours learning more data structures, or algorithms or machine learning.

Not sure I get it.

edit: it appears to be partially a definition problem. Exactly what is meant by FP, and in what language, is a large part of the question.


I'm not sure what part of reducing a codebase by 2/3rds but with double flexibility is an "unclear benefit". Imagine if you could do that with a C# or Java library, people would be freaking out. Or getting almost all the safety of unit tests without writing and maintaining unit tests! That's awesome!

You want to be a lot better programmer? Go through SICP! It'll cover FP, data structures, interpreters, algorithms, and OO. You think you know OO now? I'd wager the chapter on OO will blow your socks off with awesome stuff you can use right now. And you'll learn FP enough to give you a taste of what's possible in the more powerful languages like Haskell.

Rather than sit around trying to figure out if it'll be worth it, just do it. I've never heard a programmer who has learned it say it was a waste of time. So then, what are you waiting for? No study will ever prove it's better, just like no study proved Java was better, or C++ was better, or C was better. It's impossible to prove. Was each objectively better? In some ways. Is Haskell objectively better than all of them? Yup. There's all the proof you're going to get: the opinions of those who know all of them. You either trust that, or you stay comfortable and fall behind.


It's not that I think programming in a functional style is a bad idea; it's just that the discussions about functional programming, and the languages people use to implement its style, so often suggest:

   If you use language 'x', then using
   mutation is wrong.
and recently, I've been thinking about the practically important but socially awkward question:

   What language is best for implementing
   mutable state?
By which I don't mean:

    What language makes avoiding mutable
    state impossible?
because given my skill set on some absolute scale that encompasses Knuth, I've got big fish to fry. Trying to maintain ideological purity is a distraction at best and an impediment at worst, if I am ever to see an [m42] and have a solid intuition about its solution.

Even the most rousing chorus of 'Onward Christian Soldiers' isn't going to help me implement Hoare's Quicksort in Clojure. Consing and filtering are great, but they miss the idea of working in-place. Racket's built-in O(log n) priority queue isn't a substitute for a traditional O(1) queue implementation:

   +-------+
   |   | o-|------------------+
   +-|-----+                  |
     |                        |
     v                        v
   +-------+    +-------+   +--------+
   |   | o-|--->|   | o-|-->|   | nil|
   +-|-----+    +-|-----+   +-|------+
     |            |           |   
     v            v           v
    'a           'b          'c
The built in priority queue is mutable anyway.

The illustration of queues in Racket above isn't cut from whole cloth. If one searches the Racket documentation (depth first is implied?) the only hit for 'queue is the priority queue, which lives in the 'data library [and again, it's still mutable and not thread safe]. But the real problem is that providing Racket implementations on Rosetta Code is one of those ways Racketeers are encouraged to give back, and the implementation of FIFO queues on Rosetta Code shows the priority queue, rather than an actual queue, in part because using 'mcons is considered taboo under functional programming mores. It's a socially acceptable answer, rather than a correct one.

Some things just have to be mutable in order to get built in a way that will be useful when dealing with large tables, when the first test of usability is getting built in the first place. It's not that playing with dangerous objects ought to be encouraged; it's that there's nothing wrong with a little dynamite now and then for removing stumps, when the alternative is lifting the grinder to the top of the cliff with a crane. All that really matters is acting in accord with the fact that, yes, we're using dynamite.

There's a continuum between Coq and MMIX. A good functional programming language allows refactoring a solution from right to left. It doesn't pretend as if there isn't a right wing or that it doesn't matter politically.


I suppose I can only speak for myself, but as a functional programmer I would never ask you to give up mutation completely.

I do think it's nice that most FP languages give us pure functions and persistent data structures that are easy to use and relatively performant, so we don't have to go out of our way to provide a pure (by construction) abstraction when we want to.

The key idea is to be honest when you are asking for mutation. Haskell would likely be far on the "left wing" (or right wing?) on your political spectrum but you can do mutation whenever you please as long as you're honest and mark it in the types with something like IO, State, or ST (Clojure does something similar in spirit with 'transient'). There is even unsafePerformIO if you are absolutely sure that you are wrapping an impure computation in a way that you know looks pure from the outside.

Purity isn't promoted because it's a virtue. It's that pure expressions generally come with nice commutativity and idempotency laws which give you the ability to refactor code while remaining confident you won't change its meaning or alter the behaviour of distant subsystems. It's worth preserving those properties when you can.


Maybe. What I am thinking is that a language which makes it easy to program in a functional style should make it equally easy to program in an imperative one. By extension, this may mean that a good language for functional programming ought to make it easier to program imperatively than languages designed to facilitate imperative programming.

Of course, this begs the question of what is easier and harder and better and worse. There's a case to be made that Haskell's monads make imperative programming easier. It's essentially a mathematical argument - rather abstracto-theoretical versus the sort of concrete arguments that get advanced to avoid discussions of why it might be better to reason with lambda-calculus versus using von-Neumann as a model.

To put it another way, being able to abstract away von-Neumann into mathematics is useful. But the von-Neumann model is useful because it is a model that we can easily get our heads around.


If you asked a Haskell expert what their favorite imperative language is, they'll probably say "Haskell."

That is because it "makes it easier to program imperatively" more than any other language I've met.
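A small sketch of what that looks like: do-notation reads imperatively, yet each "statement" is an ordinary first-class value.

    main :: IO ()
    main = do
      putStrLn "What's your name?"
      name <- getLine
      putStrLn ("Hello, " ++ name)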


Only after the learning curve, though. It isn't immediately approachable as an imperative language.


Yes, but suppose you could write the same program in two languages, a classic imperative one, and a purist and difficult FP language.

For the sake of argument, let's assume the classic imperative language is easier to approach for rookies. If it's also the easiest language to write bugs and make mistakes with, wouldn't the "harder" FP language still be a net win? As long as its learning curve isn't unapproachably steep -- i.e. so steep that your time to market becomes awful -- what is the advantage of the imperative language's ease to hit the ground running and writing lots of bugs?


If language 'a' lets a person write a program with 'b' bugs in time 't', and language 'aa' lets the same person write the same program with 'b' bugs in time 't-n' then there is a clear net win for all 'n' > zero.

If 'aa' also has the advantage of providing semantics for producing significantly less buggy code in time 't-n-m' at some future time then that is also an advantage but a distinctly different one and one which can be deferred [and probably will be given a significant learning curve].

The first step toward the modern automobile was the 'horseless carriage' not the Countach.


That greatly overstates the learning curve. It's possible to pick up Haskell by simply writing a few small applications, tackling different parts of the language in each one. This can be done in a few months. Not to mention there are now numerous resources available for free, such as Learn You a Haskell, Real World Haskell, and countless blog posts.


I hear the Wikibook also has a good introduction.


> The first step toward the modern automobile was the 'horseless carriage' not the Countach.

Lamborghini makes shit cars. The only reason people rave about them is because they have giant engines. A car without a reverse gear is one of the stupidest ideas in automobile history.


I used "Countach" for its word value. It's alliterative and has little rhyming potential and is loaded suggestively with images of suits with shoulder pads. An absurdity is not accidental.


"But the von-Neumann model is useful because it is a model that we can easily get our heads around."

I am skeptical of this statement.

Programmers make a lot of errors when reasoning about mutable state, so I don't think it's fair to say "we can easily get our heads around the von-Neumann model". My intuition is programs written without mutable state will, on average, have fewer errors, which suggests to me the more "mathematical" nature of functional programming is actually easier to wrap one's head around.

Not sure what empirical data is out there to test this hypothesis.


I probably wasn't clear enough. My point is that the von-Neumann model allows us to walk up to a computer and point and say, "Here is the memory and here is the processing unit and here on this test probe of register 47 we see the instruction to move the contents of memory address 42 from memory into register 19." Even my mom can understand a sketch of the von-Neumann model to some degree.

Because functions are not a model of a computer, but a means of computation, there's no obvious picture. Indeed, if I look at the test probe, I don't see a function but an instruction to move the contents of memory address 42 from memory into register 19. I can draw a diagram of a Turing machine in a few minutes, not so with the lambda calculus. The equivalence is only mathematical, and that cuts both ways. There's no way to write a functional version of The Art of Computer Programming because mathematical equations don't have running times.


I think you hit the nail on the head there: the delta between mathematics and programming is time. Lee Smolin makes a similar case for physics.


One can achieve commutativity and idempotency in a side-effect-based language as well... but one has to redo the programming model to achieve it:

http://research.microsoft.com/pubs/211297/managedtime.pdf


No, you shouldn't give up mutability or in-place algorithms. As you said, performance concerns demand them at times.

Haskell has a nice approach to that problem. Data is immutable by default, but mutable primitives are available. The type system forces you to be explicit about when you're using mutability, so that mutable data can never accidentally creep into logic where you relied on immutability for correctness.

Broadly speaking, most mutable data types fall into two categories: Those that are some flavor of "IO," and those that are some flavor of "ST." In brief, the IO type is Haskell's way of dealing with operations that logically must be performed in the right order. (In-place operations require as much.) You can think of IO as one continuous chain of operations that begins and ends with the lifetime of the program. In other words, you can't just drop in and out of IO at will; the chain must be unbroken.

ST is an interesting variant on the same idea, except it does allow you to drop in and out at will. That is, I can enter ST at any point in my program, including the functionally pure parts. How is this possible? Because it imposes a restriction that's not present with IO: You must not touch the outside world from within ST. In any ST function, you get your own little mutable sandbox, but as with any pure function, you can only see that which is passed to you, and all you can do to the outside world is return a value. So you can, e.g., take an immutable list, create a copy that you sort in-place, then return the sorted list as an immutable value.
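A minimal sketch of that last point (insertion sort for brevity, using the standard array package):

    import Control.Monad (forM_, when)
    import Control.Monad.ST (ST, runST)
    import Data.Array.ST (STUArray, getElems, newListArray, readArray, writeArray)

    -- Pure from the outside, mutable on the inside: copy the input into a
    -- mutable array, sort it in place, and return an immutable result.
    sortST :: [Int] -> [Int]
    sortST xs = runST $ do
      arr <- newListArray (0, length xs - 1) xs
      insertionSort arr (length xs)
      getElems arr

    insertionSort :: STUArray s Int Int -> Int -> ST s ()
    insertionSort arr n =
      forM_ [1 .. n - 1] $ \i ->
        let shiftLeft j = when (j > 0) $ do
              prev <- readArray arr (j - 1)
              cur  <- readArray arr j
              when (prev > cur) $ do
                writeArray arr (j - 1) cur
                writeArray arr j prev
                shiftLeft (j - 1)
        in shiftLeft i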

IO and ST are each flexible in their own ways. With IO, you can do almost anything, including mutating something that's passed in by reference. That can be important for some programs' performance. ST is flexible in a different way: You can sneak it into pure functions.


Having finally decided to read Knuth's main works, I have to confess I am being drawn more and more heavily into this way of seeing things.

The beauty of how Dancing Links works is rather remarkable. Similarly, finally understanding how to read some of the algorithms as specified has helped to see just how "imperative" algorithms can be analyzed rather comprehensively.

Also, as a major aside, I would encourage more folks to try the Knuth works. Definitely feel free to skim the math, as that is some pretty hairy stuff. At the same time, keep trying to go back to it later. Most of it isn't undoable, so much as it is just completely foreign to what you probably do day to day. Seeing the connection between it and many typical programming tasks can be tough. (Indeed, I don't think I've done so, yet.)


I assume by Knuth's main works you're meaning his The Art of Computer Programming. If you've got a decent math background (college level with some discrete math and calculus, it will still be pretty difficult for solo study by someone with just a high school math background) and find the stuff in Volume 1 to be a bit too difficult, I recommend Concrete Mathematics. It's essentially the math portions of Chapter 1 extended to a full text. I found it much easier to get into. Some portions are essentially calculus for discrete functions. Reading it made a lot of the calculus things that I had difficulty with suddenly click (I passed Calculus II by willpower, not understanding).


:) I did indeed mean that. And, oddly enough, I did just pick up Concrete Mathematics. I am just close enough to "get" much of the math. What I can't do is make some of those leaps myself. Worse, I'm not entirely clear how they relate to what I do, day to day.

Despite all of that, I am loving every moment of these books. Highly, highly recommended.


> What language is best for implementing mutable state?

I would say a language that is pure and has a strong static type system. And I say that because those languages allow you to track exactly where mutation is happening (you're using dynamite) and deal with it appropriately.


Actually, because Clojure is not ideological (though it is very opinionated) -- and certainly not pure -- but rather very pragmatic, it includes a very interesting feature called transients[1]. Transients temporarily turn a persistent collection into a mutable (in place) one in O(1), specifically to allow efficient in-place algorithms like quicksort.

[1]: http://clojure.org/transients


Clojure's syntax for transients is deliberately more cumbersome. A Scheme-like 'loop! would flag the decision to employ mutation. Instead, Clojure steps outside the conventions of dynamic typing and expressions and requires something that looks like a static type specification and another thing that looks a lot like a 'return statement.

To put it another way, the language complects mutation instead of simplifying it, and its reasons for doing so are... well, what's the difference between an ideology and an enforced opinion?

Clojure is rarely introduced with: ok, here's how you can do exactly what you were doing before in Clojure, and now here are some ways that you might think about doing it differently. Instead, it's Quincy autopsying the corpse before the opening credits are even finished.

None of which is to say that I don't like Clojure. It's just that the really important question is part of the theology rather than evangelical outreach. Java in Clojure probably isn't good Clojure, but it might often be better Java. 20% less imperative code is probably an improvement.


I'm pretty sure that it is possible to make a function in Haskell which, while it mutates variables, these variables are all local, so you are still able to offer up a purely functional interface to the world outside of that function.

Whether this is simple and straightforward or even demands some "magic" which isn't usually part of the language semantics - that I don't know. Imperative code seems very doable and maybe even pleasant in Haskell, but I don't know how simple it is to actually write imperative code that is efficient - e.g. destructive updates and all that, rather than making a new value for each "update" and programming in an imperative style though you are really just using immutable values all the time.


> I'm pretty sure that it is possible to make a function in Haskell which, while it mutates variables, these variables are all local, so you are still able to offer up a purely functional interface to the world outside of that function.

Indeed, it's called the ST monad. The internals aren't that magical, behind the scenes GHC does some state passing like IO but nothing too fancy. The real trick is using -XRankNTypes for the runST function[1] to track "state threads" at the type-level.

    import Control.Monad (forM_)
    import Control.Monad.ST (runST)  -- runST :: (forall s. ST s a) -> a
    import Data.STRef (newSTRef, readSTRef, writeSTRef)

    example1 :: Int
    example1 = runST $ do
      x <- newSTRef 0

      forM_ [1..1000] $ \j -> do
        writeSTRef x j

      readSTRef x
[1]: http://research.microsoft.com/en-us/um/people/simonpj/Papers...


This is actually very simple and straightforward to do using the ST monad[0]. See the paper Lazy Functional State Threads[1] for details.

[0] http://hackage.haskell.org/package/base-4.7.0.0/docs/Control...

[1] http://citeseerx.ist.psu.edu/viewdoc/download?doi=10.1.1.144...


State can be local, but you need an escape analysis to verify that it doesn't leak the context if your language supports aliases, which I believe Haskell doesn't (or rather, aliases don't make sense in Haskell).


This comment is very confusing. You spend a lot of time complaining about how Racket is overly focused on functional programming, leading to slower data structures. But the queue data structure you refer to is an imperative queue with O(1) operations for everything. Also, it isn't a priority queue.

The Rosetta code example for Priority Queues in Racket is also built on a mutable heap data structure. Of course, it has more complex time bounds, because of priorities, but not because of immutability.

Finally, the Rosetta code example for Queues in Racket uses `mcons` (which you claim is somehow taboo) explicitly.



What do you think, in terms like this, of the ST monad work in Haskell?


You could look at Rust? It has mutable state but strict ownership which is enforced at compile time.


My irrational preferences fall along the lines of Lisp, and I think Scheme has the right syntactic approach to mutation: put a '!' on the end so that it's explicit, but don't make it syntactically a mess like Racket does, where mutable lists have to be 'mcons-ed up step by step rather than made in one pass (not that there's anything wrong with writing a macro, it just means that there's an additional layer of indirection when using mutation, and that's exactly the place where you don't want it).

Odersky's Scala takes an approach similar to Racket in regard to type casting - purposely making the syntax a bit more cumbersome as a flag rather than... well, flagging the action explicitly. That said, I like Scala's ML-like features, and Odersky opens Scala by Example [1] by illustrating the transition from imperative to functional style in a Scala implementation of quickSort.

Rust looks interesting. One of the big things that I look for, though, is an over-abundance of documentation and resources. I'm learning Scala because Odersky teaches a course on Coursera, because there's a decade of people writing about the language in general, and because the size of the JVM market means one could write about Scala and potentially make money doing it - so the literature about the language isn't some necessary evil for people who like writing languages more than documenting them.

[1] http://www.scala-lang.org/docu/files/ScalaByExample.pdf


I'm not sure what you mean by "syntactically a mess" but Racket provides `mlist` which sounds like exactly what you want [1].

[1] http://docs.racket-lang.org/compatibility/mlists.html


Thanks. It's not linked from the Racket Reference. Clicking on "mutable lists" in 4.10 is a 'did you mean recursion?' experience.

http://docs.racket-lang.org/reference/mpairs.html?q=mcons#%2...

That it takes an intimacy with the code base of a regular repo contributor to find it suggests how deeply buried 'mlist is. Or to put it another way, Racket the language does not provide 'mlist. Racket the ecosystem does, but deliberately makes it hard to use (by obfuscation) on the moral ground that mutating a list ought to incur the pain of iterative 'mcons-ing... and its associated pretty-print.


For those of us who were around when the default changed from mutable to immutable cons cells, the rule "just prepend an m" is well known. I agree that the link in section 4.10 can be improved - but calling it "obfuscation" and "deliberate" is taking it too far. File a bug report, and I bet it will be fixed.


'mcons is part of 'racket/base. 'mlist is part of a library that contains two layers of warnings, is not linked to by the Racket Guide or Racket Reference, and lives under the 90th of 96 links on the Documentation page. It's lower in the great chain of being than "Unstable May Change without Warning".

(cons 1 (cons 2 (cons 3 '()))) is commonly required in the Racket family of languages for purely pedagogical purposes. That it is the only pattern for 'mcons provided by 'racket is a deliberate statement of community values in regard to mutable lists.

In regards to filing a bug report, the bug is that there is a mention of mutable lists and a hyperlink in the documentation for the Racket language. Because mutable lists are not part of the language, the proper bug fix is removing the reference - at least insofar as maintaining consistency with the rest of the documentation, because other libraries are not cross-linked from the Guide and Reference. Linking "mutable lists" to 'compatibility/mlist would just replace one bug with another... unless leaving 'mlist out of 'racket/base was itself a bug. That that is the case is unlikely.


> One of the big things though that I look for is an over-abundance of documentation and resources.

..... stay tuned.


Ask HN: As somewhat of an old timer, I have a couple of questions about FP, not really worthy of creating their own thread:

1. I learned a rule (in my Pascal textbook), "avoid globals." Is FP just an embodiment of that principle?

2. I do a lot of programming involving hardware, often homemade. Hardware has state, such as whether a lamp is turned on or off. Does this just mean FP is inappropriate, or are there techniques out there that accrue the benefits of FP in systems that necessarily have state?


1. I think "avoid globals" is a reasonable way into FP, but FP itself involves a lot more than just avoiding coupling via global state. I would say the main components are

- nested definitions;

- lexical scoping;

- first class functions;

- static typing; and

- a whole bunch of programming techniques (e.g. monads) that have grown around them; a toy sketch of the first three follows.
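A toy sketch of those first three in Haskell (the names are mine):

    makeAdder :: Int -> (Int -> Int)
    makeAdder n = add           -- 'add' is handed back as a first-class value
      where
        add m = m + n           -- nested definition; closes over n lexically

    -- map (makeAdder 10) [1,2,3] == [11,12,13]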

2. You certainly can use FP ideas in low-level programming but you'll have difficulty finding an FP language that targets your hardware. There are a few small Scheme implementations that might work. Rust is viable if we're talking 32-bit CPUs, not 8-bit.

(I imagine the Rust developers would not claim Rust is a functional language, but it has absorbed a great many ideas from functional languages. I prefer to talk about modern programming languages instead of functional programming languages. Rust, Haskell, and Scala are all modern and have many features in common. Only Haskell claims to be a pure FP language though.)


>- static typing;

I'm not an expert on the theory or design of programming languages, but isn't static typing orthogonal to FP?


I will discuss Haskell, since I am not very familiar with other FP languages. In Haskell static typing is more useful than in other languages. This is because in Haskell the type signature of a function gives an upper bound of what the function does. In an impure language, a function may change any accessible mutable values and/or perform IO. These possible side effects mean that in an impure language the type of a function is a very weak indicator of what the function actually does.
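A hypothetical pair of signatures to illustrate the contrast:

    -- Pure: the result can depend only on the arguments
    lookupName :: Int -> [(Int, String)] -> Maybe String
    lookupName k table = lookup k table

    -- The IO in the type is an upper bound that admits arbitrary side effects
    lookupNameDB :: Int -> IO (Maybe String)
    lookupNameDB k = do
      putStrLn ("querying for " ++ show k)  -- an effect the pure version can't have
      return (lookup k [(1, "alice")])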


So you're saying that static typing helps FP and hence is sort of a part of it?

Thanks for the answer.


Yes and no.

A lot of the action in modern programming languages is in type systems. Static typing is synonymous with the dominant group of programming languages that subscribe to the "functional" label. There are other languages, such as Scheme, that are not statically typed that also subscribe to this label but they don't have the same mindshare at present.

It's important to keep in mind that FP does not name some formally defined concept like the number 3 or the function cos. Rather it's a name given to a group of people and programming languages that share a common ancestry and world view. The definition is fuzzy and can be split many ways.


Interesting, thanks.

>It's important to keep in mind that FP does not name some formally defined concept like the number 3 or the function cos.

Got it.


FP doesn't ignore state, FP reifies state and state transitions and allows you to talk about them more explicitly (and, if you have static types, use your type system to reason about them). 2 is no reason at all to avoid FP.
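Concretely, a toy sketch for something like the lamp example (all names here are mine):

    data Lamp = On | Off deriving (Show, Eq)

    -- The state transition is an ordinary pure value you can test and reason about
    toggle :: Lamp -> Lamp
    toggle On  = Off
    toggle Off = On

    -- Only the thin edge that actually talks to the hardware needs effects, e.g.
    -- driveLamp :: Lamp -> IO ()   (hypothetical hardware call)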

That said, I agree with others here that there might not be an FP language that is a good fit, mostly because they have some trouble giving good guarantees about memory usage (particularly true of Haskell, with its lazy evaluation, but gets to be the case whenever you're passing a lot of closures around).


1. I think we should challenge the definition of "global variable". In what context is the variable global? The traditional definition is that the context is the whole program. However, in imperative OOP [1] code, I find that people just learned to hide their global variables in class instances. The whole program cannot access them, right? But every method in the class can and does, and all those mutations and side effects can lead to the kind of spaghetti code we all hated when there was no OOP. It's just global variables with extra sugar. And we can make copies of them, so the code is maybe one step better.

Imperative OOP often feels like one is pivoting behavior around the data.

FP feels like the data is flowing from one transformation to the next.

There is the "functional" part of it, too... where functions are highly composable. One creates new functions by combining old functions, and these may carry along with them important context (closures).

2. From what I've learned of Haskell -- and I'm a Haskell noob -- it lets you work with the side effecty real world. You can alter your hardware state. The key is that it specifically flags such side effects and encourages separation between more "pure" code and code that is "tainted" with the side effects.

[1] OOP is an abstraction and compatible with pure FP. You can model your class instances such that they are immutable and get all the benefits of OOP. FP contrasts better with imperative programming.


I'm not a specialist of FP, so please take what I say with a grain of salt. Before answering your questions, here's a preliminary point:

0) The problem is that FP is not really defined anywhere, or rather, everyone has their own definition. I was wondering about the definition of FP about a month ago, and discussed it with my roommate, who's a PhD student in programming languages. The conclusion of this 2-hour-long conversation was: well, FP is kind of a fuzzy concept. The concepts of FP languages, typed languages, static languages, etc. all tend to conflate. Some people will consider that Scheme, Python, Ruby, Scala, Haskell, OCaml, and R are all FP languages. Some will say that only Haskell and OCaml really are. Whatever.

Keeping that in mind, here some attempts at answers:

1) In a very loose sense, I guess you can say that, and to make it possible you need some constructs in your language, the minimum requirements being higher order functions and maybe closures. I'd say that's the minimum, but again I'm no specialist. But then if you want to push this "no globals" motto to its logical extreme, you get to a point where you can't even use a "print" function, since it has side effects. Haskell does push it that far, and solves the problem with monads. But to have monads you need a very particular type system, which leads to the question of whether you need a type system to go FP all the way. I don't know. Someone more knowledgeable should answer that.

2) I don't know about hardware. But like I said, FP can deal with state. In fact, even hardcore FP as embodied by Haskell can deal with state. It's not trivial to wrap your head around the concept, but it's worth giving it a shot.


1. More generally I'd say it's "minimise mutable state".

2. Almost all programs are necessarily stateful, but most of the code within them does not have to be. As such, all FP languages support working with state, e.g. Scala which is essentially just Java with more powerful types and easier creation of immutable values and lambdas, so you can program with state as easily as you can in Java. And Haskell, which separates stateful functions by type, so that the programmer and the compiler know which functions have state and which do not.


FP doesn't just avoid global variables, it avoids local variables a lot of the time: you compute things from the function's arguments directly. f(a,b,c) always returns the same thing for the same values of a, b, and c and does not modify them.

FP hides state behind the concept of monads. This tends to make state-heavy systems harder. Often small embedded systems follow the "state machine" design pattern.
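For reference, a minimal sketch of that shape with the State monad from the mtl package:

    import Control.Monad.State (State, get, put, runState)

    -- One transition of a tiny state machine: report the count, then bump it
    tick :: State Int Int
    tick = do
      n <- get
      put (n + 1)
      return n

    -- runState (tick >> tick >> tick) 0 == (2, 3)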


Surprised that the year ended with one language and not another, particularly given his background.

This could be indicative of a "good enough" FP trend where the net effect of language authors cherry-picking FP constructs does not, ironically enough, result in significantly increased adoption rates for those FP languages that are at the forefront of FP R&D.

Case in point: the recently announced Swift appears to have a decent grab bag of FP candy that will immediately appeal to legions of iThing app developers.

In short, it may be that Haskell, and particularly Scala (fending off Java 8, Kotlin, Groovy, and Clojure) will be fighting for scraps until one of them comes up with a killer stack that launches them out of niche status into the mainstream. Could be awhile yet...


I used to like functional programming. Now I think that it's -- more often than not -- a solution in search of a problem. I can understand some of the things FP gets you (although those come at a cost, which I'll get to later); I'm just not so sure these are the things we need, or that FP is the best solution for them.

One is code reuse: yes FP code is definitely more reusable. The problem is that the kind of code it enables reuse of more than OO is very small, simple loops. This is never a big issue. Writing the same 4 line method -- which could have been abstracted with a monad -- 10 times in a 50-150K LOC project is never a big problem, and these methods rarely contain bugs. On the other hand, OO code is often easier to refactor and modify. Sometimes -- in Java for example -- dynamic linking combined with OO lets you modify/add functionality without even re-compiling existing code. Heck, it lets you add and load new polymorphic type implementations at runtime. It is much more malleable than FP code.

Another is state management: yes, FP is one solution to the problem of managing state, especially in a multicore environment. But it is neither the only solution, nor is it the best. The Clojure solution of -- for lack of a better name -- "transactional" mutable state is simpler than the pure FP one, and just as safe. I.e. there are ways to make side-effects safe without restricting them so much that they become a nuisance (after all, all software, possibly with the exception of compilers, exists for the sake of its side effects).

Finally (and I've said this before on HN), a language like Haskell discounts the very useful choice of reasoning about your code after it runs, favoring, instead, all-upfront reasoning, often at the expense of facilitating the former. There are some domains where figuring everything up front is very important. Others, where trial and error is far more productive.

So pure FP increases code reuse, but of code that's not that important to reuse. It helps deal with the hard problem of state management, but other, simpler solutions exist. Finally, it makes it hard to "feel" how your code runs, to debug it, to profile it, and more. I think it is based on the premise that if programming could be made more formal -- more mathematical, if you will -- it will become easier/"better". But that premise has not been shown to be true. The lack of magically bug-free, FP OSs, drivers, control software, and large applications, shows that even the biggest supposed benefits of languages like Haskell, are yet to be demonstrated in the real world.

EDIT: Just to clarify: Unfortunately FP does not have a definition, so, when I was saying "FP", I meant "FP as the article's author practices", which means "statically typed, pure FP". FP using Java streams, FP in Clojure/Erlang, and FP in Haskell mean very different things.


Wow, I sure have the opposite reaction to you. Makes me wonder what kind of code you wrote. Some specific points:

"One is code reuse: yes FP code is definitely more reusable. The problem is that the kind of code it enables reuse of more than OO is very small, simple loops."

Combinator libraries sure are pretty useful, and they're more than small simple loops.

"Writing the same 4 line method -- that could have been abstracted with a monad -- 10 times in a 50-150K LOC project is never a big problem"

All our code goes through a bunch of monads (mostly Future + Writer). Very puzzled how you're using monads if they only appear 10 times in 50KLOC.

"Another is state management"

I think you're referring to STM here. That's been extensively explored in the Haskell community.


> Combinator libraries sure are pretty useful, and they're more than small simple loops.

Useful -- yes. Useful enough to pay the price -- IMO, no. OO is decent at code reuse, and the most effective, most important code reuse, is that of big libraries with rich functionality, which is done rather well in many programming paradigms (i.e. pure, statically typed FP has little to no advantage here).

> Very puzzled how you're using monads if they only appear 10 times in 50KLOC.

I'm not. But the OO code that refactoring with monads would have saved me does not amount to much.

> I think you're referring to STM here.

I was talking about more limited transactional (or "concurrently mutable") data structures like Clojure atoms and agents, or restricted STM like Clojure's refs and in-memory DBs (general purpose STM is pretty much dead). I know Haskell explores STM. I'm just saying you can enjoy the same benefits in Clojure or even Java.


I don't agree with your definition of FP then. Erlang is broadly the same as Clojure: pure in the small but stateful in the large (mailboxes in Erlang store state).

Your basic argument seems to come down to Haskell (the only pure statically typed language with any widespread adoption) vs any other language. That's not an argument I'm particularly interested in.

As for OO vs FP in combinator libraries -- the same pattern is often called a fluent interface in OO. This gets to the larger point that FP is as much a programming style as a language that supports that style, and you can program in a FP style in any language. The question I'm interested in is "is FP style useful?" to which I give a resounding "yes!"


Erlang, Clojure, and Scheme all sport pretty nifty object systems, even if they don't resemble the Java kind. I find it funny when someone espouses the benefits of functional programming with... objects? On the other hand, these languages are quite pragmatic, and I believe such blending is the future.

The only language that is purely FP is Haskell. It doesn't have anything that we could mistake for an object system; it's functions all the way down!

Fluent interfaces should die a horrible death; it is a confusing style in OOP and quite easy to make lots of errors without the equational reasoning that supports combinators in FP.


Haskell completely has something you can mistake for an object system! The entire typeclass machinery works that way, minus classes and inheritance. There's even a highly functional subtyping relation.


Type classes work strictly over values, with none of the naming machinery to give these values names and identities. Type classes only resemble classes in their unfortunate label; subtyping and polymorphism aren't really exclusive to OOP (with the exception of name-oriented nominal subtyping). The only way to create an object in Haskell is through giving it a GUID of some kind.


I'm not claiming typeclasses are classes, but rather that they're more like modules, and that modules achieve at least some of OO. Entity-like identity is a notably more difficult thing to pull off, but it is still easily embedded in IO or ST.

I'm not saying it's a complete OO implementation, but instead that it's not so far off as to be completely unconfusable, though it might be hard to, say, do Smalltalk-in-Haskell.


Typeclasses don't give you anything specific to OO: they are abstractions over values and that's about it. You wouldn't be able to emulate an existing OO system in Haskell, and anyway, it wouldn't bring you closer to "talking about" and "thinking in terms of" objects.

My point is that OOP is essentially "thinking in terms of named objects" whereas most of what Haskell focuses on is "expressing math by composing anonymous values." I believe starting from this design perspective is the best way to highlight the real differences between OOP and pure FP.


I disagree, but I think we'd have to go to definitions to make headway on this conversation, so I'll back out. As final thoughts: while type classes give abstraction over values, (a) those values encompass HKTs so they can also be effectful in various ways, and (b) universally quantified bounded types are basically final algebras, and codata goes a long way toward modeling objects. Between those two I think you can give OO in Haskell a run.


We then have to argue about what objects really are, and there isn't much consensus :) I don't think my design-oriented linguistic definition is very popular, but it contrasts nicely with pure functional programming; otherwise there is a lot of overlap and I don't think we can say anything very interesting about their differences.


Yep, exactly :)

I don't find definitional wars interesting, and I'd, in the stream of everyday things, be very sympathetic to "OO and FP are different because they were built from different POVs". The only reason I am being a stickler is that I'm pretty interested in the idea that both FP and OO have a representation in type theory/category theory/logic and thus can be put into common representation.

Haskell definitely isn't the best OO language, but if you squint hard enough you can see OO in it much like you can see FP in C's function pointers: painfully. Seeing FP in C helps you examine the implementation challenges of FP. Seeing OO in Haskell helps you pick apart the different semantic pieces which build to form OO.


I would go farther and say Haskell is the only popular general purpose language that can get away without exposing programmers to object-like abstractions in the general case. Of course, these languages are general enough that you could probably shove whatever paradigm you wanted on top of them, but they do optimize for a particular kind of paradigm.

A type theorist's definition of OO will be quite different from a generalist's. For type theory, the various kinds of polymorphism mostly overlap between paradigms (e.g. generics and structural subtyping are useful in OOP), but nominal typing is both commonly associated with OOP and relatively useless in FP languages, enough so to be a distinguishing feature. This also dovetails with my name-oriented definition of OOP: objects have identity, but their types have identity also. If you come at it from the FP perspective, identity for objects AND types isn't very desirable... there are no meaningful names in math!


That's an interesting claim—that Haskell doesn't need object-like abstractions. I think it's really worth examining. I've said in the past that Haskell (and the ML family) is possibly the only language which takes initial data seriously. That said, it still needs jerry-rigged abstract types to be efficient, and I'm happy calling codata "object-like".

So really what I believe is that Haskell does a better job of being OO-FP than Scala does. I think there's still a happier medium between the two, though.

I'm also really eager to explore the type theorist's definition of OO in order to tease and bend the generalists definition. I'm not sure I completely agree about the idea of names that you outline, though. Existential types (and their dualized universal presentation) present unique types which could be named. State-transformer style (and the state-like monads) present a notion of identity-as-entity.

Again from the perspective of Haskell being a stab in the direction of a type theorists FP-OO language I find all of these features really interesting.


I feel like the nearest thing to OO in Haskell in common use is the records-of-closures pattern. Still doesn't have persistent identity, of course.


Records-of-closures is pretty much what I'm talking about. There is a notion of identity when those records are recursive, though the state management is still manual (as you'd expect).

    type Degrees = Double  -- assumed here; the original comment left the angle type unspecified

    data Complex = Complex { rotate :: Degrees -> Complex }

    -- thus
    -- rotate :: Complex -> Degrees -> Complex
where `rotate` is a state transformer and thus produces a notion of identity.


I think it's a stretch to say this, itself, provides any notion of identity. You could certainly implement identity with any of the myriad ways of maintaining references with state, of course.


Of course you're right, but I think a function like

    Complex -> Radian -> Complex
offers two points of view. You can see it as a function which transforms complex numbers or a function representing an update to the internal state of a Complex object. The second highlights the need to keep track of identity. Throwing an STRef into the mix just gives you a particular kind of history—one which only retains the "now".

Which is all sort of obvious and meaningless, but I still think it's interesting to think about. I think coalgebras tend to force thinking in terms of identity in this kind of way.


I agree with every word. Alas, FP does not have a definition, so, when I was saying "FP", I meant "FP as the article's author practices", which means "statically typed, pure FP". FP using Java streams, FP in Clojure/Erlang, and FP in Haskell mean very different things.


Agreement with someone on the Internet? o.O

;-)

[To the downvoters: this is me replying to the conversation I've had in the previous posts in this thread. If you can't handle my surprise and happiness at reaching some agreement hit up 4chan.]


STM applies to OOP just as much as it does to FP. There is nothing magical about monads that make STM possible, they just make building the infrastructure for STM easier.


> One is code reuse: yes FP code is definitely more reusable. The problem is that the kind of code it enables reuse of more than OO is very small, simple loops.

It sounds like you just haven't used Haskell enough to have the full extent of the reusability become apparent. I agree that on the surface it seems like you're right. But that particular example of reuse is just the tip of the iceberg. It's actually a very pervasive pattern that has tremendous positive impacts on your code at every level from small 3-line loops to large overarching abstractions.

This isn't just me making baseless assertions. It has been backed up by real world experience. There was an ICFP presentation by a company that rewrote their entire application in Haskell. With Haskell their reuse was an order of magnitude higher as measured by the number of external library dependencies! The rewrite also decreased the size of their code base by something like 80%. Oh, and they saw a massive reduction in bugs and increase in performance.

> But that premise has not been shown to be true. The lack of magically bug-free, FP OSs, drivers, control software, and large applications, shows that even the biggest supposed benefits of languages like Haskell, are yet to be demonstrated in the real world.

It's only been in the last 5 or so years that Haskell has really become commercially viable. This is too recent for the large apps you're talking about to have been written. But don't worry, we're working on it. The other day our newest hire made the comment that at our company each individual developer is responsible for roughly the equivalent of an entire team at all his previous jobs.


> With Haskell their reuse was an order of magnitude higher as measured by the number of external library dependencies!

If we're talking real world -- as we should -- then this should be contrasted with the tradeoffs and alternatives. The most important kind of code reuse, IMO, actually happens at a larger scale, and in the past few years has worked remarkably well, since the advent of open-source. I'm talking about reusing projects like ZooKeeper, Hadoop, etc. Being able to use such a project might save you millions of LOC, regardless of the level of internal code reuse. The lack of Haskell libraries and bindings relative to other languages reduces this most important kind of reuse. In fact, using ZooKeeper also reduces a large number of bugs.

> But don't worry, we're working on it.

Excellent. I'd love to get a full picture, tradeoffs and all, of a big, important software project in Haskell (which isn't a compiler).

> The other day our newest hire made the comment that at our company each individual developer is responsible for roughly the equivalent of an entire team at all his previous jobs.

I can say the same for my current job, even though we're mostly doing Java (and some Clojure), and at our previous jobs we also used Java. This mostly has to do with the quality of developers, company structure and size, and less with the choice of programming language.


"The most important kind of code reuse, IMO, actually happens at a larger scale,"

There's a hidden instance of begging the question here, which is that the reason why "large-scale" code reuse is "more important" in the real world is that OO has largely been a failure for mid-scale code reuse. You can build enormous frameworks in OO that serve a need (at some cost in constraining your options, but one that can be worth paying), and it can do certain fairly small-scale reuse like 'a generic red-black tree' or other data structures, but it has not done well in the middle. Therefore, since OO has been the dominant paradigm for decades now, we do not see any middle-scale reuse in the "real world". However, it is difficult to disentangle whether that is a fundamental characteristic of "reuse" or whether it's a fundamental characteristic of OO. (I can argue in favor of both, and I don't mean "both are true", I mean I could argue in favor of either one.)


'Therefore, since OO has been the dominant paradigm for decades now, we do not see any middle-scale reuse in the "real world" '

I'm not a blind fan of any one software methodology, but this assertion is completely wrong. The Java and C++ worlds are replete with widely used libraries that are reused in millions of diverse software projects. This fact is so obvious that I don't feel the need to list examples here, but I will if challenged.


> The lack of Haskell libraries and bindings relative to other languages reduces this most important kind of reuse.

This is an instance of begging the question too. Newer, less popular languages will always have fewer libraries and bindings relative to more popular languages. It might be an argument against using Haskell at the current point in time, but it has nothing to do with the merits of the language itself.

But the reality is that Haskell is actually doing quite well in terms of libraries and bindings. Pick anything you can think of...odds are probably pretty good that Haskell has bindings to it. As an example, let's look at the two projects you mentioned: ZooKeeper and Hadoop. Yep, there are Haskell bindings to both of them. For ZooKeeper there's hzk (http://hackage.haskell.org/package/hzk), and for Hadoop there's hadron (https://github.com/Soostone/hadron) and there's even a nice presentation about it (http://vimeo.com/90189610).


> It might be an argument against using Haskell at the current point in time, but it has nothing to do with the merits of the language itself.

The merits of the language itself don't matter that much. A language isn't a piece of art to be admired. And as long as Haskell is used by such a tiny group of people, the merits of the language itself can't even be argued well, because it's unclear how they work in practice for different kinds of software and different kinds of developers. And the tradeoffs people have to make aren't clear, and if the tradeoffs aren't clear, the merits aren't clear either. They're at best hypotheses of potential merits, or merits in the eyes of the very particular group that's currently using the language.


Merit is not exclusively a characteristic of art. Tools have merits, and awareness of those merits is very important for any practitioner who seeks to use those tools in their work. If I went into my father's woodshop and sought to use his lathe to attempt to plane a board, he would rightfully yell at me.

Merits of programming languages can be argued well, because there is an entire theoretical discipline dedicated to the understanding of programming languages (PLT). Your lack of knowledge of this field does not mean that it does not exist.


> And as long as Haskell is used by such a tiny group of people

I'm guessing Haskell is used by more people than you think. There are 6500 packages on hackage. The Haskell subreddit has more than 16,000 subscribers. And there are usually more than 1300 people in the #haskell IRC channel at any given time. That's more than #ruby and ##javascript and about the same as #python. Also, Standard Chartered bank has roughly 1.3 million lines of Haskell code running in production.

> And the tradeoffs people have to make aren't clear, and if the tradeoffs aren't clear, the merits aren't clear either. They're at best hypotheses of potential merits, or merits in the eyes of the very particular group that's currently using the language.

This statement applies equally to pretty much every language out there. We don't understand the tradeoffs between mainstream languages either. We have no idea how those choices are actually going to end up affecting our projects. To claim otherwise is intellectually dishonest. How do you know that Ruby or Perl would be better than Brainfuck? The tradeoffs and merits aren't clear. They're at best hypotheses of potential merits. So how do we actually make these decisions in practice? We look at the language features and think about how they can help us manage common situations in software development, then we go with our best guess. We can do that for Haskell just as well as we can do that for Java, C, Ruby, etc.


> We look at the language features and think about how they can help us manage common situations in software development

Maybe some people do, but when I pick a programming language, language features are among the last things on my mind. I look at the available tools (profilers etc.), available libraries (particularly in the domain that interests me), quality of documentation, the size of the community and its vibrancy. I also look at similar projects done in the language. Only after all these do I look at the language features, but they don't matter so much, because I know I'll be very productive in a language with a very large selection of libraries and a large user community. I'm saying this after 20 years of developing software in languages like BASIC, Pascal, x86 asm, C, C++, Matlab, Java, Clojure, Scheme and more. I can't think of a single case where language features mattered more than any one of the things I listed.

All this is, of course, for serious projects intended for production. For a toy project I might look at the language features first.


If there were superb tooling, libraries, and documentation available for Brainfuck or INTERCAL, you would not find that sufficient reason to program in those languages. Of course, you will say, such a situation would never exist, because nobody is going to write libraries for INTERCAL. With this argument, though, you concede that language features are important. You just estimate them by proxy: languages with features that encourage library authorship pass your test. Rather than use this proxy test, some of us who are well informed about PLT choose to judge languages on their merits directly.

Nobody in this thread is talking about toy projects. Developers who use mainstream programming languages do not have a monopoly on real work. The consistent assumption made in these discussions that FP practitioners don't work on "serious projects intended for production" is misguided and insulting.


I think your cause and effect are reversed. I think that the biggest difference between "pragmatic" languages and research languages is that pragmatic languages are driven by actual need, and with consideration for current practices and workflow. The language features are designed around that. Research languages, OTOH, are made to explore an idea or conduct an experiment, usually to answer the question "how would these language features affect development?" (they could increase productivity, reduce bugs, improve performance, etc.). The results of the experiment are very valuable, and the successful, proven ideas are then integrated into pragmatic languages. It is very rare that a research language achieves mainstream adoption. In fact, I would argue that if it does, it's failed as a research language, because its widespread adoption probably means it wasn't adventurous enough.

Haskell is a prime example of a rather adventurous research language. It's an experiment that's been run for about 15 years now. First the early steps, and now people are experimenting with real world applicability. I'm really interested in the results, so I'd really like more people to run the experiment, but I have no doubt that Haskell itself will never gain widespread adoption. If it does its job well, its best ideas will be copied by other, non-research languages.

Now, I'm certainly not saying that Haskellers don't work on serious projects. But it is a fact that Haskell is much more common in academia than in industry. Again, this is intentional. Haskell is meant to venture into uncharted territory and explore. Some of the things it aims to do work well and might be adopted in the industry, and others don't and won't. If Haskell teaches us even a few things, then it is a great success. What I don't like is that some Haskellers deny the true purpose of the language, and instead of examining it as researchers should do, they preach it.


You're starting with a fundamental assumption that there are two classes of languages, which you are calling pragmatic languages and research languages. I disagree. Researchers study languages of all sorts (there are plenty of research papers on Java), and practitioners use languages of all sorts (there are plenty of businesses using Haskell). Furthermore, many of the characteristics that make a language worth studying also make it worth using in practice.

Haskell has been around for nearly 25 years, not 15, and the ML family has been around for nearly 40. Strong type systems are not a research curiosity, and have not been for decades. They have significant and well-understood benefits, in practice.

I wish your hypothesis that the best ideas from research would be copied by other languages were true, but the history does not bear this out. A notable recent example is Swift: though it does make progress compared to its reference class, it still does not use any research more recent than roughly the 1970s. Clearly, the "pragmatic" languages are not keeping up. In what sense is it "pragmatic" to constrain yourself to decades-old technology?

The Haskell language is not constrained by some God-given purpose to be a research language for all time. Haskellers aren't "deny[ing] the true purpose of the language", they're choosing to use a useful tool, because doing so is pragmatic.


Is there a video of that presentation anywhere?



The lack of magically bug-free, FP OSs, drivers, control software, and large applications, shows that even the biggest supposed benefits of languages like Haskell, are yet to be demonstrated in the real world.

I've been reading religious advocacy of FP for almost 15 years now and yet there still don't seem to be many non-trivial apps written in any of these languages. Certainly many individual features of FP have been adopted to good effect in more mainstream languages, but commercial Haskell or OCaml apps are still conspicuously thin on the ground. I think at this point it's up to the FP advocates to back up their claims with results. I'm sure there are a lot of programmers that would be happy to roll up their sleeves and learn something new if it truly would make them 3x more productive.


They exist, but it's super spotty. HN is written in an FP language for example. One of the airline reservation systems is written in some kind of FP.

Reddit was written in a Lisp originally. But like many major FP projects, ended up being rewritten in a more common language. I used to work at a place that had hundreds of thousands of lines of CL code that was all rewritten in Perl and was on its way to getting rewritten into Java by the time I left there.

I agree with the major thrust of your argument, FP seems to find itself mired in weird niches. I'm not sure if it's because FP advocates seem to not want to write mainstream end-user software or it just hasn't leaked out into the mainstream, but it's pretty tough to find lots of major FP success stories outside of a constant stream of anecdotal advocacy stories (none of which ever seem to be attached to some piece of software I can download and actually touch for day-to-day usage).

I suspect the problem is that much of the software end users want is hard to write in an FP style but an order of magnitude easier in an imperative language (even if it's less "right")...FP is just the wrong paradigm for general-purpose programming, even if it works better than imperative languages in certain niches. I think this is an illuminating summary of all this [1]. However, here's an engine written in Haskell [2].

It seems that FP ideas will continue to end up in non-FP languages long before FP languages will become a major success. That might be a success by itself?

To save us some time, here are some pretty woeful lists where this exact question is asked and answered.

https://programmers.stackexchange.com/questions/22073/is-fun...

http://lambda-the-ultimate.org/node/2491

1 - http://prog21.dadgum.com/54.html

2 - http://www.hgamer3d.org/index.html


> One of the airline reservation systems is written in some kind of FP.

You are probably thinking of ITA and their Orbitz service, using Franz Common Lisp. That is not really an FP language, but a good feather in the cap for Lisp nonetheless.


Yeah that's it. Well, strike that off the list then I guess.


How about WhatsApp's backend being extremely scalable because it was written in Erlang from the start? Or even Twitter that rewrote much of their backend in Scala to deal with scaling issues (I'm not saying that Rails isn't scalable, I don't want to start a war over this issue; the important thing is they saw a problem and solved it) while contributing many many open sourced libraries for our use?

Foursquare uses Scala extensively. LinkedIn is using Scala extensively for its new projects. The commercial and successful use cases for Scala are widespread at this point. For Haskell and OCaml it's going to take a while.


Well, when people say FP they mean different things. The Erlang/Clojure kind bears little resemblance to the Haskell/scalaz sort discussed in the article. Neither does most of the Scala code written at Twitter (which is mostly OO with a sprinkling of functional).


That's hardly true. Erlang's event loops are based on a very similar kind of state encapsulation as Haskell's state monad. Really Erlang is littered with explicit monadic patterns but lacks a good system for abstracting over those patterns so you just call them "OTP" instead of a monad.
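
For what it's worth, here is a minimal Haskell sketch of the state-threading being described -- a hypothetical counter "server" using mtl's Control.Monad.State, standing in for the state an Erlang receive loop threads through each iteration:

    import Control.Monad.State

    data Msg = Increment | Reset

    -- Each message handler is a state transition, much like the
    -- state threaded through an Erlang gen_server loop.
    handle :: Msg -> State Int ()
    handle Increment = modify (+ 1)
    handle Reset     = put 0

    -- Draining a "mailbox" is just sequencing the transitions:
    run :: [Msg] -> Int
    run msgs = execState (mapM_ handle msgs) 0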


It says a lot about the scale of adoption the big language players have that, despite the aforementioned inroads Scala has made into the enterprise, it is still very much a niche language; Haskell at this point in time, even more so.

Am curious to see how Scala 3/Dotty-backed Scala works out. Enabling better tooling and faster build times via a generally less kitchen-sink-y/more regular language should bring Scala adoption to new levels. Dotty is under active development; I even saw a GitHub issue discussing the migration tool[1]. It's going to happen, just a matter of when.

[1] https://github.com/lampepfl/dotty/issues/129


Here is a good example of a complex Haskell app: the Haskell web based IDE at fpcomplete.com. They wrote the back end in Haskell and the client side with Haskell compiled to JavaScript. Really impressive.


> Here is a good example of a complex Haskell app: the Haskell web based IDE at fpcomplete.com.

Technical merits aside, I think this actually reinforces the parent's point.

I mean, you're basically saying that one of the best examples of production-ready Haskell is a Haskell IDE, developed by a company a good >80% of HN readers may never even have heard of.


> and yet there still don't seem to be many non-trivial apps written in any of these languages

Perhaps you are not looking hard enough? Sure, if your problem space is CRUD web apps, then you won't find many implementations. Try looking into more difficult problem spaces, such as high-frequency trading, and you'll suddenly find more FP examples.

Also, there's Erlang, which was developed by Ericsson to solve a very specific, real-world problem. Look it up.


High frequency trading is dominated by C++, Java and C#. There are a few places using F# and Ocaml but they're relatively small players. I know of only two places that seriously considered Haskell and they both decided against it in the end.

I doubt any of these languages are significantly more popular in "more difficult problem spaces."


To be honest, I'd be happy with a 1 level interactive demo (with sound) of a Mario Brothers style platform game.


There's Nikki and the Robots:

http://joyridelabs.de/game/

There's also Super Monao Bros (which used to be called Super Nario):

https://github.com/mokehehe/monao/tree/master


That is sort of the worst case scenario for a FP program (though I'm sure you could write it in FP). It is almost entirely side-effects.

I agree with the sentiment that, for a paradigm that has been advocated for 60 years, it is really odd how little non-academic code is written in it.

In my experience, functional code shines (and is therefore common) in places where provability is of the utmost importance or where writing custom DSLs provides a lot of value. Think nuclear systems, extremely high-security applications (custom OS-level requirements) or financial modelling software.

That said, in most of the places I've found it, the FP program is a high-value prototype or proof of concept, not the actual system. I don't know if that is just a resource allocation thing or not.


There are plenty of side-effect-free ways of dealing with change: you can make the effects, you know, explicit. Anyway, for a game like SMB, FRP should work relatively well. Just be careful with collections.


But there are no side effect free ways of changing the pixels on a screen or the sounds coming out of a speaker as those are precisely side effects.

Of course there are ways of abstracting those side effects and acting like they are the same as pure functions. Or conversely instead of hiding them, you could point them out and make them "exceptional", which is what I assume you mean by making them explicit. These machinations (of which FRP is a clear example) to deal with side effects have a price, and in a game like SMB it is entirely possible that the vast majority of the work would be this overhead.

Again, of course you can do it, but it is a problem domain not particularly suited to FP in and of itself. I've written short running, fast starting, memory constrained applications in Java, but it was a fight to do it. If the fight is worth the other advantages, it can be worth it.


"But there are no side effect free ways of changing the pixels on a screen or the sounds coming out of a speaker as those are precisely side effects."

No, those are precisely effects. They can only be side effects once you've defined how you're distinguishing side effects from effects generally.

A common way is labeling intended results "effects" and unintended results "side effects". By this definition, they're clearly effects not side effects, though this equally clearly isn't the usage in question.

Another way - common in Haskell - is to label effects that do not appear in a function's type signature "side effects". By this definition, it clearly depends on the implementation and there are "side effect free ways of changing pixels on the screen".

I don't think there is another useful definition to be had.
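
A tiny Haskell illustration of that second definition, where the effect shows up in the type (drawPixel's body is a stand-in for a real rendering call, not a real graphics API):

    -- Effect visible in the type: this can touch the outside world.
    drawPixel :: Int -> Int -> IO ()
    drawPixel x y = putStrLn ("draw at " ++ show (x, y))

    -- No effect in the type: same inputs, same output, always.
    brightness :: Double -> Double -> Double
    brightness gamma v = v ** gamma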


Without getting into a hugely semantic debate, I was using side effect imprecisely to mean the opposite of a pure function.

Many FP paradigms (including Haskell's) impose an overhead when doing operations that are not pure. In the case of a SMB clone a huge percentage of the work of the program may end up not being pure, therefore a large overhead may be imposed. Further, many of the advantages of FP paradigms (including Haskell's) are dependent on pure functions.

In the case of a SMB clone, a large percentage of the application may be intended to be impure, therefore the "worst case scenario" comment.


"Without getting into a hugely semantic debate, I was using side effect imprecisely to mean the opposite of a pure function."

That's the second sense I described. It's only meaningful if you're actually talking about a particular function. And you're still left with the question of the actual domain of the function. Semantics, but relevant semantics.

"Many FP paradigms (including Haskell's) impose an overhead when doing operations that are not pure. In the case of a SMB clone a huge percentage of the work of the program may end up not being pure, therefore a large overhead may be imposed. Further, many of the advantages of FP paradigms (including Haskell's) are dependent on pure functions."

That's somewhat true of FRP, basically false about Haskell IO generally, but either way if you can do it in PyGame someone can do it in Haskell - it's mostly a matter of working in a high level language, not a matter of functional-programming-imposed overhead.

I would recommend listening to what John Carmack had to say about implementing games in Haskell (and also Lisp, IIRC).


In FRP, you basically define your UI as a function to display pixels. Physics is a bit weirder, but that is handled by event streams over ticks and "stepping." Actually, the effect system doesn't really come into play, because you can define everything effect free.

I'm not a big FRP proponent though. In my own work [1], I believe that manipulating pixels and doing physics should be effects...even side effects that are at least explicit during run-time. Of course, to achieve this, I needed to do special things to achieve commutativity and idempotence, which is otherwise taken for granted in a pure functional program.

[1] http://research.microsoft.com/pubs/211297/managedtime.pdf
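
A self-contained sketch of the "stepping" idea above (toy types of my own, nothing from a real FRP library): the whole simulation is a pure function from the previous world and this tick's input to the next world.

    data Input = Input { jumpPressed :: Bool }
    data World = World { playerY :: Double, playerVY :: Double }

    -- One tick of crude jump physics; rendering would be a separate
    -- pure function from World to a picture, with only the outermost
    -- loop (feeding inputs, drawing frames) being impure.
    step :: Input -> World -> World
    step input (World y vy) =
      let vy' = if jumpPressed input && y <= 0 then 10 else vy - 1
          y'  = max 0 (y + vy')
      in World y' vy'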


Nikki and the Robots[1]

[1]: http://joyridelabs.de/game/trailer/


What have you contributed? Or are you just waiting for _someone else_ to put in the effort to prove it for you first? Multiply that by every developer in every company, and you start to see why it's not that common.

I am currently trying to write something in F#, which is still OO, only the rest of my team won't let me because there is "no proof it's better". Well, no. And there won't be if everyone says that. And where is the proof C# was better than VB.NET? Oh, right, the proof is VB.NET doesn't look like Java! Curly braces, the only evidence we need!

As long as the majority can hold back the minority that WILL learn something new just to see if it's better, everyone stays with the status quo.

"Just don't make me learn anything new" <- the developer motto.


As others have mentioned, there is no universal definition of FP. But if you're willing to stretch your personal definition of FP to include LabVIEW, then there are a considerable number of real FP applications used every day in research and industry for real, productive things. It's just that LabVIEW tends to be considered outside the domain of "software engineers," so most people here probably haven't heard of it. Nevertheless, it is a real programming language, and you can do anything with it that you could do with any other language (albeit, much more tediously).

I don't think I've ever heard of LabVIEW described as a functional language (and it's clearly not a "purely" functional language), but it programs like one. Programs are written by connecting VIs (functions in LabVIEW-speak) with wires in a data-flow diagram. VIs don't contain any state (usually, unless you explicitly use local or global variables--which all the documentation warns you never to do upon pain of something really awful). VIs mostly don't have side-effects (unless you're explicitly doing something with the filesystem, or data acquisition, or motion control, or... ).

One of the difficulties of using FP for "real" things is that you give up something you got for free with imperative languages, namely control of timing. LabVIEW works around this with something called a sequence structure, which forces VIs to be called in a deterministic order (because, being FP, you can't otherwise divine in what order VIs will be called).

LabVIEW also encourages the use of a trick to enforce sequencing without using sequence structures (and this trick is absolutely mandatory if you don't want a write-only program): The abstract name of an instrument resource is passed to a VI, and then returned unchanged. Since the returned values of a function aren't determined until it is run, you can chain several functions together in this way so that they run in sequence (which, IMHO, is bad practice because this abstract resource thing being passed about is something which is needed by each function, and can be modified by each function, but SHOULDN'T be modified by any function).
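
Translated out of LabVIEW, the trick looks something like the following sketch -- a hypothetical Instrument handle, returned unchanged by each step (the analogy to Haskell is loose, since the ordering here comes from data dependence in the dataflow graph, not from effects):

    data Instrument = Instrument

    calibrate, acquire, shutdown :: Instrument -> Instrument
    calibrate i = i  -- imagine the calibration happening "inside"
    acquire   i = i
    shutdown  i = i

    -- Each call needs the previous call's output, so the dataflow
    -- graph collapses to a straight line, like wiring VIs in series.
    session :: Instrument -> Instrument
    session = shutdown . acquire . calibrate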

Annoyances aside, I've written a number of medium-sized programs in LabVIEW that implement entire data acquisition systems, including control of external instruments, positioning of mechanical parts, high voltage power supplies (no side effects, heh, sure thing), pulsed lasers (same), and a synchronous detector of my own design. I would also note that the data acquisition systems do not use any local or global variables (as LabVIEW uses those terms), so it could be argued that they have a very high "functional purity," though that was not my intention when I wrote them.

My personal opinion is that the productivity claims of the FP camp are largely inflated. Maybe FP is better suited to problem domains that exist entirely inside the computer.


"Haskell discounts the very useful choice of reasoning about your code after it runs, favoring, instead, all-upfront reasoning, often at the expense of facilitating the former. There are some domains where figuring everything up front is very important. Others, where trial and error is far more productive."

Yes people seem to learn much better by example. The worst teachers (we have all had them) are those who jump straight into the abstraction that solves the problem instead of going through some examples first. If they cover a few specific examples with specific solutions then the student will almost invent the abstraction themselves. The same thing came up recently about pitching startups. If you don't describe the problem first with specific examples then people won't even recognise the solution (and often these people are far from stupid). In general for most people if they are trying to do something innovative they are rarely definitive about it at first. If you asked them to explain it or prove it they couldn't. They start with a hunch, do something ill defined, and then refine it as they go. Even mathematics is like this, except that often once someone has discovered and understood something, they explain it as if they were definitive about it all along - which can be misleading to say the least.


But you can still experiment, you just have the types to guide you as well. And when you realize you got something wrong, you have the types to help you refactor correctly.


I found the same thing. Sure, monads can help with some cases in OO.

Overall I find Common Lisp macros to be a better way for me to code: you can always construct your problem domain easily instead of choosing between the two.


> I can understand some of the things FP gets you

I really don't think you do understand, as an example in the last thread we ran into each other you completely missed the fact that Futures allow concurrency for free since you're already in a monadic context 90% of the time.

https://news.ycombinator.com/item?id=7751605
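
To make that concrete, here is roughly what "concurrency for free" looks like when transposed to Haskell's async library (fetch is a hypothetical stand-in for a real I/O-bound action):

    import Control.Concurrent.Async (mapConcurrently)

    -- Hypothetical I/O-bound action; imagine an HTTP call here.
    fetch :: String -> IO Int
    fetch url = return (length url)

    -- Sequential version:  mapM fetch urls
    -- Concurrent version:  one function swap, same shape of code.
    fetchAll :: [String] -> IO [Int]
    fetchAll urls = mapConcurrently fetch urls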

> The problem is that the kind of code it enables reuse of more than OO is very small, simple loops. This is never a big issue.

This is called a straw man argument. People aren't claiming that FP helps you write four line loops in one line. This article and others talk about orders of magnitude improvement in LOC on a large scale.

>"transactional" mutable state is simpler than the pure FP one, and just as safe.

You clearly don't know what these words mean. Mutability and purity are not mutually exclusive. Purity is a slang term for referential transparency. You can have mutable, referentially transparent functions. You can also have transactional memory in pure languages.
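
The standard Haskell example of exactly this point: sumST mutates an accumulator internally, yet runST guarantees nothing leaks, so callers see an ordinary pure function.

    import Control.Monad.ST
    import Data.STRef

    sumST :: [Int] -> Int
    sumST xs = runST $ do
      ref <- newSTRef 0                       -- mutable cell
      mapM_ (\x -> modifySTRef ref (+ x)) xs  -- in-place updates
      readSTRef ref                           -- pure result escapes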

> Finally (and I've said this before on HN), a language like Haskell discounts the very useful choice of reasoning about your code after it runs, favoring, instead, all-upfront reasoning, often at the expense of facilitating the former.

Citation needed. You’re making things up.

The bottom line is you’ve created a product that solves many of the same problems functional programming solves. Your product looks very nice and advanced, but your unfamiliarity with functional programming makes you fear it, so you’ve decided to spread FUD across HN to the detriment of the community.


I think you are describing extreme cases of FP. Can you please provide one example of how you would abstract away, in any OO language (say Java), the problems solved by map, reduce, filter, list comprehensions, pattern matching, partial application and currying? Filtering and transforming data takes a lot of code in any data-driven app. You start trimming code from day 1 in any FP-enabled language without even thinking about the pattern.
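
For a sense of scale, the kinds of one-liners being referred to look like this in Haskell (hypothetical numbers, Prelude only):

    -- Total of all orders over 100, after a 10% discount:
    total :: [Double] -> Double
    total = sum . map (* 0.9) . filter (> 100)

    -- Partial application / currying fall out for free:
    discount :: Double -> Double -> Double
    discount rate price = price * (1 - rate)

    tenPercentOff :: Double -> Double
    tenPercentOff = discount 0.1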

Research in programming languages gives us new tools to reason about software development, and FP is one of those nice tools.


Again, I was referring to the type of FP the author was talking about, i.e. the kind practiced in Haskell/scalaz. Everything you mention is useful. But just to play devil's advocate for a second: the mere fact that something is useful does not always mean that it's worth its price. So whenever you give me a "solution", you first have to demonstrate the magnitude of the problem and the price I have to pay elsewhere. Simple map and reduce operations certainly have non-negligible utility (more readable code, better reuse), and they come at a low cost. But elaborate type systems and restrictions on side effects come at a hefty price, so they need to justify themselves with much more than a demonstration that they're useful.


You really should have said this from the start. Instead of:

> I used to like functional programming.

you should have said

> I used to like Haskell.

It's caused a lot of confusion in this thread. Also, I think it's unfair to compare Haskell vs. Every Other Programming Paradigm. Because by doing so you are only picking and choosing parts of other paradigms which work as well as the Haskell equivalent and not judging the whole of Haskell vs. Language X.


I'm not sure about Haskell, but in Clojure the run-code, reason, run-more-code cycle is very much supported with the REPL.

In fact, I find the nature of functional code very much helps out here. With Clojure, in a complicated system I can typically take any subcomponent and run it in the REPL without much fuss, mocking, or worrying about the state of the system. Since functions usually aren't modifying state, I can rerun, modify, and rerun over and over again without restarts.


Like Clojure (which I have used a lot), Haskell development uses a repl (with really nice emacs support). For bottom up coding, I place a main function in every file with my test and experiments code; with an active repl in emacs it just takes a few seconds for the edit/run loop. In lower level code, I leave these main functions in place even though I am unlikely to need them once the code is working.

So, yes, Haskell and Clojure development has a similar flow.
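
For instance, the layout being described might look like this (hypothetical module; the throwaway main drives the REPL experiments):

    module Parser.Internal where

    -- Library code under development:
    parsePrice :: String -> Maybe Double
    parsePrice s = Just (read s)  -- simplified stand-in

    -- Scratch main kept in the file: in ghci, :load the module
    -- and rerun main after every edit for a fast feedback loop.
    main :: IO ()
    main = do
      print (parsePrice "4.20")
      print (parsePrice "17")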


Oh, absolutely. But I wasn't counting Clojure as FP in my comment. There is no definition for FP, and the OP seemed to be talking about the statically typed, pure FP, of Haskell and scalaz. Clojure is certainly not that kind of FP, as it's neither statically typed nor pure.


I know there are Haskell people who think Haskell is the only True Functional Language, but for the sake of discussion can we not go there?


It's not the only functional language by any means, but it is the strictest (he he.) It's silly arguing which is worth more, but it's also silly saying "Clojure is just as good as Haskell at separating effects."


What I've found FP good for is writing small, self-contained, util-like libraries, which usually have no requirement to maintain state and can be highly reusable.


I'm starting to think that different programming styles are more suited to different levels of abstraction - maybe you can program in an OO style at a high level, use functional style code for the low level implementation.


"maybe you can program in an OO style at a high level, use functional style code for the low level implementation."

That's sort of the Erlang approach, right?


You know, you're right - that does sound a lot like Erlang.

Which is generally considered an extremely pragmatic language.

I guess it all goes back to the Actor model and the origins of OO in Simula.


The one question I wish this article answered is: what specific, quantifiable benefits could I get from using a real functional language RATHER THAN merely reactive extension libraries to an imperative language.

The point many programmers are at is far removed from diving into Haskell and hard math in functional languages. Many of us are still learning why to compose observables for multithreading. From an application programmer's perspective, what could we gain by diving into FRP completely?

If Scala is the gateway drug, then the dealers need to do a better job explaining what the hard stuff actually does for us.


> Realisation: Abstractions

Objects are coarse-grained abstractions. Separating function modules from data allows more opportunity to stay DRY, better performance, and better scaling of complexity.

> Realisation: Confidence. Types vs Tests
> In Ruby, that often meant testing it from every angle imaginable, which cost me significant effort and negated the benefit of the language itself being concise.

This is largely a cultural artifact from the history of the Ruby & Java communities.

We are discovering that it is more useful to have your tests written from a black-box perspective (in any language). White-box testing is less useful and causes the architecture to be locked down; it should be a rare occurrence.

From my development practices, Static + Strong typing helps with performance & debugging failing tests, but not much else. It imposes rigidity in the architecture & delays my development feedback loop.

I know that this is a matter of taste & I will catch much flak for my opinion as it conflicts with other peoples' taste, but it's my truth :-)


How well does deploying Haskell web applications to a low-memory VPS work? My experience with the Play framework tells me that Scala is out of the question. Presumably using Haskell involves cross-compilation, considering the memory usage of GHC compilation.


I've been deploying Haskell web applications to a 1G and a 512M VPS, and both have been working fine. Initially I was able to build on the boxen, but more recent libraries/compilers run out of memory building some dependencies. Obviously that wasn't ideal anyway, since it was taking up resources the live site could be using, although none of the relevant sites are supporting a ton of traffic. Building on my local machine and pushing up the executables has been working just fine, though.


Would you recommend starting with Scala before Haskell if one wants to learn FP?


I want to add some color to the other answers saying "no". If your medium-term goal is to use FP for business problems (e.g. perhaps you are responsible for technology decisions), then the OP's path of "Scala for the Impatient" -> "Functional Programming in Scala" -> "Learn You a Haskell" will likely provide the easiest transition and expose you to "real world" solutions in FP. Learning F# will give a similar pragmatic path (although its complications are due to .NET rather than the JVM).

However, if you are interested to "learn FP", as in see what all the fuss is about, then there are more direct routes. My own recommendations are for Graham Hutton's "Programming in Haskell" and "Real World OCaml" (https://realworldocaml.org). There are many other good (albeit verbose) resources but these two are the shortest path (IMO) to understanding the "common denominator" and historical underpinnings of functional programming languages and functional programming. Dan Grossman's "Programming Languages" course at Coursera was also quite accessible and comprehensive, but I think it may be closed now.


No. Scala does some interesting things, but it adds a lot of complexity in order to interact with typical JVM code.

Haskell doesn't have that kind of baggage, so at its heart it's a very very simple language (Lambda Calculus + algebraic datatypes + typeclasses). There are a bunch of extensions, but they can be mostly ignored while getting to grips with the basics.

Programming in Haskell can involve a lot of unfamiliar concepts like monoids, functors, monads, etc. but these are just the APIs used by libraries; the language itself doesn't care about them (except for "do" notation). Just like you don't need to understand design patterns to learn how Java works.
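
A taste of that small core, for the curious (a sketch: one algebraic datatype, one typeclass, plain functions):

    -- Algebraic datatype:
    data Shape = Circle Double | Rect Double Double

    -- Typeclass (roughly, an interface) and an instance:
    class HasArea a where
      area :: a -> Double

    instance HasArea Shape where
      area (Circle r) = pi * r * r
      area (Rect w h) = w * h

    -- Ordinary function composition ties it together:
    totalArea :: [Shape] -> Double
    totalArea = sum . map area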


And yet that complex JVM interaction means one possible way to "get your toes wet" is to wrap existing lower-level Java processes (and libraries and things) in a larger functional wrapper. Top down. This assumes you have some experience, confidence, or sample code in Java that you can use or understand in the problem domain. So rather than trying to find a way to use a recursive functional definition of factorial in your code (bottom up), it might be possible to work top down and make the "main loop" or whatever of your Java program a functional construct of some sort.

I am in no way claiming this is the best way, only way, or even a good way to learn FP concepts but it is a possible way to at least get feet wet, plus or minus your personal characteristics. It is merely an alternative to the extremely popular nearly universally pushed educational strategy of bottom up introduction of FP.


Because Scala is a hybrid language, I feel it doesn't really try to teach FP, but some amalgamation of the two. So you'll end up thinking about case classes, traits, companion objects, unapply, etc... it's a way to solve problems in a unique way, but in comparison with the other more traditional FP languages, it doesn't present the same feel and can obscure which features are the ones necessary for FP - which is important to know if you're specifically trying to learn FP.


You are technically correct in your observations, but I think we're both stuck in a classic "glass is half empty" "glass is half full" observational trap. Both the good news and the bad news simultaneously about a hybrid language is you can implement the same solution two different ways.

If you did both intentionally for educational purposes, it might be fun. I like the many "koan projects", and a side fork that focuses intentionally on doing something two ways might be interesting, if it's not already been added while I wasn't looking.


A 3rd "no" from here. Scala is unnecessarily complex because of the Java/JVM compatibility. It's not built from ground up on functional principles, but rather "augments Java" with functional features.

My recommendation is that start with Haskell or Scheme.


No. Actually, it would be easier to learn Haskell without knowing any imperative language at all. For me, Scala was also a "gateway drug to Haskell", and after using Haskell for some time I picked up good FP habits which made my Scala/JS/whatever code better.


Haskell made my C code better.


To jump on this bandwagon, no. From personal experience, FP concepts only clicked for me when I was forced to use them. Scala always has that imperative escape hatch; can't think of how to write this immutably? Just make it mutable and move on. And the language is huge, with a lot of OO bits you need to learn and work around.


If your goal is to learn FP, Haskell is a better option, since it will really force you to think functionally in a way that 'softer' FP languages don't. With Scala there is always the temptation to do things the 'obvious' procedural way each time you get stuck on something.


No. What I've found with Scala projects is that it ends up as a typical OO app but with some FP constructs. Yes you see a reduction in the amount of code written but with added complexity.

I would go with Clojure (if you still want to leverage Java libraries) or Haskell personally.


If you want to focus on FP and not worry about the existence of libraries, I'd say to go with Clojure. The learning curve is not that steep and it's a very pragmatic language.


Scheme or Clojure would be a good first FP language. Both have a very simple syntax and structure that allow you to concentrate on the FP ideas.


> I've been coding since the age of 8, so 26 years now. I started with BASIC & different types of assembly then moved on to C, C++, ...

So he was nine and a half or ten by the 68002 part; that bit of the article is enough for me. Even though I'm interested in FP, I'm too old.



