I like to think of Applicative Functors, Monads, etc., as the Design Patterns of functional languages (though I've developed most of my understanding by applying these ideas in JavaScript).
A nice way to think about how they can be used (and the difference between parallel and sequential evaluation discussed towards the end of the article) is in handling promises (a small code sketch follows this list):
The parallel or sequential composition of a list of promises returns a promise of a list.
The monadic composition resolves each promise sequentially (via the Haskell operator `sequence` or as a cascade of JavaScript `.then` calls).
The parallel applicative composition resolves all promises simultaneously.
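A minimal sketch of that distinction in Haskell, assuming the `async` package (its `Concurrently` newtype is a parallel Applicative wrapper around `IO`; the function names here are mine):

    import Control.Concurrent.Async (Concurrently (..))

    -- Monadic composition: resolve each action only after the previous
    -- one completes (Haskell's `sequence`, or chained `.then` calls in JS).
    resolveSequentially :: [IO a] -> IO [a]
    resolveSequentially = sequence

    -- Applicative composition via Concurrently: all actions run at once,
    -- and we still get back a single "promise" of a list.
    resolveInParallel :: [IO a] -> IO [a]
    resolveInParallel = runConcurrently . traverse Concurrently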
Edward Kmett is well known for saying that one of the primary reasons he likes Haskell is that "design patterns" are just first-class language features, given its sufficiently powerful abstraction capabilities. He designs his libraries in this way; `bound` [0] is a good example of wrapping a sophisticated design pattern up in a library.
So, perhaps they're design patterns, but I think calling them that puts them on too high a pedestal. The true "functional" design patterns, I think, run much deeper: things like recursion schemes and the Types of Data [1], module design, purely functional data structures, and handling laziness and strictness.
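As one small taste of those deeper patterns, a minimal recursion-scheme sketch (the catamorphism, i.e. a generic fold; the names used are the conventional ones, not from any particular library):

    {-# LANGUAGE DeriveFunctor #-}

    -- Tie the recursive knot explicitly...
    newtype Fix f = Fix (f (Fix f))

    -- ...so that one generic fold works for any base functor.
    cata :: Functor f => (f a -> a) -> Fix f -> a
    cata alg (Fix x) = alg (fmap (cata alg) x)

    -- Example: Peano naturals as a fixed point, folded down to Int.
    data NatF r = ZeroF | SuccF r deriving Functor

    toInt :: Fix NatF -> Int
    toInt = cata alg
      where
        alg ZeroF     = 0
        alg (SuccF n) = n + 1

    -- toInt (Fix (SuccF (Fix (SuccF (Fix ZeroF))))) == 2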
As others have stated, your parallel vs. sequential composition model is murky because it conflates dependency order with real time. This is not how Monads and Applicatives work or differ at all!
But they do operate in sequence and in parallel; it's in "dependency order", though.
If you think of monads and applicatives as being composed of "layers" of action, where each action pairs some instruction (like "print this string") with a "next action slot" which can contain whatever we like, then we can see how monads and applicatives differ.
In applicatives, you have a flat list of these layers: the first next-action slot contains a function, and the remaining elements in the list each carry one parameter of the function in their next-action slot. Applicatives are "executed" by running all of the actions in the list in sequence (weird, right?) and then applying all of the arguments to the function.
In monads, you "stack" layers. By stack I mean we place a new layer in the "next action slot" of each layer, so that to find the next action you have to keep digging down and down through the layers. This means that each later layer can depend upon the results of the "action" performed at earlier layers, giving monads their "dependency" structure.
These analyses are known as the free monad and the free applicative. I really encourage everyone to study them if they're interested.
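A minimal sketch of the data types behind those two shapes (the constructor names here are illustrative; the real definitions in the `free` package differ slightly):

    {-# LANGUAGE GADTs #-}

    -- Free monad: each layer's "next action slot" nests a whole further
    -- program, so later layers can inspect earlier results.
    data Free f a
      = Pure a
      | Layer (f (Free f a))

    -- Free applicative: a flat chain of effects ending in a pure function;
    -- no effect's shape can depend on another effect's result.
    data FreeAp f a where
      PureAp :: a -> FreeAp f a
      Ap     :: FreeAp f (b -> a) -> f b -> FreeAp f a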
That is a misconception and is only possible if you break the monad laws. Monads aren't about sequencing. Just some monads are.
(+) <$> a <*> b
And
x <- a
y <- b
return (a + b)
will yield exactly the same behaviour, even in evaluation order, and they can be interchanged. It may look like we are sequencing in the second example, but that's only true for datatypes with sequencing embodied in their Monad and Applicative implementations (e.g. the State monad). An example of a datatype that doesn't have this property is [a].
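For lists specifically the two forms do agree; a quick check, with throwaway values standing in for `a` and `b`:

    applicativeStyle :: [Int]
    applicativeStyle = (+) <$> [1, 2] <*> [10, 20]
    -- [11, 21, 12, 22]

    monadicStyle :: [Int]
    monadicStyle = do
      x <- [1, 2]
      y <- [10, 20]
      return (x + y)
    -- [11, 21, 12, 22], identical, element order included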
This isn't true. They are absolutely not the same and don't yield the same behavior for every monad/applicative pair.
First, you probably meant:
do x <- a
   y <- b
   return (x + y) -- not: a + b
With the current desugaring of do-syntax, there is no way the compiler can know that the statement 'b' doesn't have a dependency on the variable 'x', so it will desugar to:
a >>= \x -> b >>= \y -> return (x + y)
or, maybe more legible:
bind a (\x -> bind b (\y -> return (x + y)))
So, even if 'b' doesn't use 'x', it is implicitly sequential: the effects of 'a' yielding 'x' need to be done before the effects of 'b' yielding 'y'. This can be a waste, e.g. when both are network calls that could be done in parallel.
The `Applicative` interface statically guarantees there is no data dependency and might therefore have a smarter implementation. For `Monad` this simply isn't possible. It's a subtle, but very important, difference.
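One concrete illustration of that "smarter implementation", leaning again on the `async` package, whose `Concurrently` wrapper is an Applicative but deliberately not a Monad (the `fetch` stub below is hypothetical):

    import Control.Concurrent.Async (Concurrently (..))

    -- Because <*> promises no data dependency, both "network calls"
    -- can be started at the same time.
    bothCalls :: IO (String, String)
    bothCalls = runConcurrently $
      (,) <$> Concurrently (fetch "a")
          <*> Concurrently (fetch "b")
      where
        fetch :: String -> IO String
        fetch name = return ("response " ++ name)  -- stand-in for a real call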
Even the State monad is not actually sequential. It just appears that way conceptually. It can, just like most Haskell code, end up being computed in any order (because of laziness and lambda-calculus reordering rules).
For the most part, only "special" monads like IO enforce sequential computation.
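A tiny sketch of that non-sequentiality, using the lazy State monad from mtl (a throwaway example):

    import Control.Monad.State.Lazy (evalState, put)

    -- The put's "effect" is threaded through but never forced, because
    -- nothing ever demands the final state.
    lazyStateDemo :: Int
    lazyStateDemo = evalState (put undefined >> return 5) (0 :: Int)
    -- evaluates to 5; the bottom sitting in the state is never touched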
> For the most part, only "special" monads like IO enforce sequential computation.
No, IO doesn't either, because of laziness. If you need a demonstration, then, in the IO monad, open a file (using the functions in System.IO), read the contents (without doing anything else that depends on them), close the file, and then use the file contents.
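A sketch of that demonstration (assuming a file `input.txt` with some content exists):

    import System.IO

    main :: IO ()
    main = do
      h <- openFile "input.txt" ReadMode
      contents <- hGetContents h  -- lazy: nothing is actually read yet
      hClose h                    -- handle closed before contents are forced
      putStrLn contents           -- prints nothing: the read never happened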
That behavior can be seen as a perversion of IO's behavior; it's roughly known as Lazy IO and is considered convenient but potentially very dangerous.
The whole cottage industry of streaming monadic operators (enumeratees, pipes, conduits, io-streams, etc.) takes "fixing" Lazy IO as a major design goal or inspiration.
Lazy IO depends upon using the function `unsafeInterleaveIO`, which, as its name suggests, is considered potentially unsafe, and your example demonstrates why!
It's not entirely right to call this a misconception. The monadic bind is inherently sequential: the right-hand side of the bind may compute a new computation based on the result of the left-hand side.
On the other hand, the burden of proof is on the library writer who creates the typeclass instance, not on the end user (possible largely thanks to purity), whereas with traditional design patterns programmers may be expected to re-implement them by hand.
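To make "inherently sequential" concrete, a small sketch where the second action's very identity depends on the first action's result:

    -- No implementation can start the second action early: which action
    -- it even is depends on the line read by the first.
    echoOrApologize :: IO ()
    echoOrApologize =
      getLine >>= \line ->
        if null line
          then putStrLn "(nothing to echo)"
          else putStrLn line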
Great article, but there is one nitpick: you need parentheses in the signature for bind. It should look like
(>>=) :: Monad m       -- for any monad,
      => m a           -- take wrapped value
      -> (a -> m b)    -- and a function which needs it unwrapped
      -> m b           -- unwrap it, and apply that function
Or, written the other way:
(>>=) :: Monad m                 -- for any monad,
      => m a                     -- take a wrapped value
      -> ((a -> m b) -> m b)     -- and return a function which can take an
                                 -- unwrapped value and a wrapped one and
                                 -- return another wrapped one
Which also makes the comments on the second one a bit inaccurate. It should read "return a function which takes a (function from an unwrapped value to a wrapped one) and returns another wrapped one."
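That is, with the corrected annotation:

    (>>=) :: Monad m                 -- for any monad,
          => m a                     -- take a wrapped value
          -> ((a -> m b) -> m b)     -- and return a function which takes a
                                     -- (function from an unwrapped value to a
                                     -- wrapped one) and returns another
                                     -- wrapped one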
Wonderful piece. A small typo: "Fuctor" in the first code sample.
Edit: I think "dependant" is also a typo. Apparently in British English it is used to mean a dependent in the sense of a child who relies on a parent for support, but when it's used as an adjective, "dependent" is still the correct spelling, even in British English.
The example applicative still respects sequencing; to do what the last paragraph claims to be doing, you'd need to use some kind of nondeterministic gather.
Honestly I think the differences in that example are pretty superficial. The real advantage of applicative is that because the assumptions are weaker, some things that aren't Monads are Applicatives.
>Honestly I think the differences in that example are pretty superficial.
For most instances the difference is superficial, but for some problem domains the difference can have a pretty big impact.
All of this is because Applicative composition (using apply) statically guarantees there are no data dependencies between the (possibly) effectful computations. Monadic composition (using bind) can (but might not) introduce a data dependency, which can only be reasoned about at runtime.
Applicatives can be analyzed statically, without outside information. Applicatives are also closed under composition (and thus there's no need for the entire "transformer" concept).
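A quick demonstration of that closure under composition, using `Data.Functor.Compose` from base:

    import Data.Functor.Compose (Compose (..))

    -- Composing two Applicatives (here Maybe and []) yields another
    -- Applicative for free, with no transformer machinery.
    composed :: Compose Maybe [] Int
    composed = (+) <$> Compose (Just [1, 2]) <*> Compose (Just [10])
    -- getCompose composed == Just [11, 12]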
As far as I'm aware, nearly every functor admits at least one applicative structure and oftentimes many choices thereof.