This is a gentle introduction to Haskell and also a brilliant tutorial on x86 assembly language – bonuses for anyone interested in compiler implementation. Exceptionally well written and clearly illustrated.
Man, I always love how cleanly EDSLs come together in Haskell. If you’re a fan of “language-oriented programming” (e.g., in Lisps and Forths) but you want static types, Haskell is for you.
My problem with Haskell, coming from the ease of Lisp, is that so much of the syntax is symbolic!
I know mathematicians might love it, but forgetting a symbol can mean I don't understand an entire block of code, when it could have been made so much clearer with a single word.
Well, there’s really only a handful of operators in typical code that don’t appear in other languages, and these are bound to the standard abstractions (functions, Functor, Applicative, Alternative, Monad, Monoid) that are used all the time. Once you learn those it’s pretty much smooth sailing, but yeah, it was pretty daunting to me at first, and even now as an experienced Haskeller I don’t care for packages like Lens that can turn into operator soup.
However, I do think the built-in symbolic syntax (::, ->, =>, =, |) is all pretty clear. And thankfully there’s Hoogle to find the meaning of any operator you don’t understand, or even to find a function that you think should exist of a certain type. So I dunno, we get by well enough. And you could actually write your code largely without operators if you wanted to, either because there’s a standard name already, or because it’s easy to define an alias.
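For example (orElse here is a made-up alias, not anything standard):

    import Control.Applicative (Alternative, (<|>))

    -- the operator and a named alias are interchangeable
    orElse :: Alternative f => f a -> f a -> f a
    orElse = (<|>)

    example :: Maybe Int
    example = Nothing `orElse` Just 3   -- same as: Nothing <|> Just 3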
Well, because it's pretty obvious after you've spent a week with Haskell. With that said, one thing the Haskell creators seem to regret is not requiring every operator to also have a name. (In the case of <$> it does: fmap)
As far as I can see, if you already know what fmap does and what infix notation is, you'll know what <$> is after looking it up just once. And if you know what a monad is, you'll also know what >>= does. And if you know neither of these things, then I'm fairly sure writing `fmap` and `bind` wouldn't help much anyway.
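Just to make the correspondence concrete (bind here is a made-up name; Haskell's standard spelling is the operator):

    doubled :: Maybe Int
    doubled = (*2) <$> Just 5            -- same as: fmap (*2) (Just 5), i.e. Just 10

    bind :: Monad m => m a -> (a -> m b) -> m b
    bind = (>>=)

    halved :: Maybe Int
    halved = Just 10 `bind` (\x -> Just (x `div` 2))   -- same as with >>=, i.e. Just 5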
(With a short detour through writing a JIT in LLVM, of course, but I didn't want to edit the title. Sadly, I accidentally added the "from" at the beginning.)
I got lost at the monads bit. Where's the best place to learn about them? I did basic Abstract Algebra in university, and I understand how to use the Maybe/Option/Optional monad.
I know it sounds ridiculous, but a) it took me about seven evenings, b) it's a lot of fun, and c) I guarantee you'll get some proper programming insight, whatever language you use for a day job.
But the intuitive version is that anything with those two operations that doesn't smell funny is a Monad.
(Think of it like implementing an interface: anything that matches the type signature will work, but if you write something silly, it won't work right)
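A minimal sketch, assuming "those two operations" means return and (>>=), and using a made-up Perhaps type so it doesn't clash with the real Maybe:

    data Perhaps a = Nope | Got a

    instance Functor Perhaps where
      fmap _ Nope    = Nope
      fmap f (Got x) = Got (f x)

    instance Applicative Perhaps where
      pure = Got
      Nope  <*> _ = Nope
      Got f <*> x = fmap f x

    instance Monad Perhaps where
      return = Got              -- operation 1: wrap a plain value (defaults to pure anyway)
      Nope  >>= _ = Nope        -- operation 2: chain, short-circuiting on Nope
      Got x >>= f = f x

The monad laws are the "doesn't smell funny" part: for instance, Got x >>= f has to behave the same as f x.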
Hmm, this stuff looks familiar, I tried to learn it once upon a time. The operators make sense to me except for "return" which just seems like a hack, or excessive syntax sugar. If you already have type constructors, why the magic "return" function that needs to sniff the surrounding scope to see what kind of monads you are actually dealing with?
return isn't syntax, it's a normal function of type Monad m => a -> m a. The "sniffing" that you are talking about is that overloading is done on the return type.
Why does it exist at all? Speaking loosely, monads are about composing functions that look like a -> m b. Here's regular composition alongside the 'monadic' one:
(.) :: (b -> c) -> (a -> b) -> (a -> c)
(<=<) :: Monad m => (b -> m c) -> (a -> m b) -> (a -> m c)
As you can see, the same except for the m wrapping each function's results. Just like normal function composition, there is an identity element - that's what return is. Again, a comparison:
id :: a -> a
return :: Monad m => a -> m a
Again, the same except for the m wrapping the result. Both composition operators are associative, and both have the same relationship with their identity operation:
id . f = f          return <=< f = f
f . id = f          f <=< return = f
(That's why there are monad laws, to ensure those properties hold.)
So return is very straightforward: it is the identity of functions that return monads. It's unfortunate that both the name and the presentation of monads in Haskell tend to obscure this.
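Concretely (safeHead below is just a throwaway example):

    import Control.Monad ((<=<))

    safeHead :: [a] -> Maybe a
    safeHead []    = Nothing
    safeHead (x:_) = Just x

    -- return is the identity for this kind of composition:
    -- (return <=< safeHead) and (safeHead <=< return) behave exactly like safeHead
    ex1, ex2 :: Maybe Int
    ex1 = (return <=< safeHead) [1,2,3]   -- Just 1
    ex2 = (safeHead <=< return) [1,2,3]   -- Just 1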
I think what's confusing me here is Haskell's type classes, which I've never found intuitive (despite having "learned" them 3 or 4 times already). The definition of return still baffles me: in the absence of some kind of module or object on its left hand side, I can't make sense of how it "knows" how to overload. I'm probably better off learning monads in the context of a language I'm more familiar with, like OCaml.
Very understandable, type classes + inference + monads + do notation is quite a stack of unfamiliar machinery to understand all at once.
The return type is inferred, and the type is what the type class machinery is driven by. You can get some more visibility into that by playing in ghci:
ghci> return 0 :: [Int]
[0]
ghci> return 0 :: Maybe Int
Just 0
Having behaviour driven by return type takes a bit of getting used to.
Return-type polymorphism is hugely powerful though: it means you can write code involving return that works for any Monad. Without it you can't sensibly do that.
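For example (duplicate is a made-up helper, not from any library):

    -- the same definition works for any Monad, because return is overloaded
    -- on the result type
    duplicate :: Monad m => a -> m (a, a)
    duplicate x = return (x, x)

    -- ghci> duplicate 1 :: Maybe (Int, Int)
    -- Just (1,1)
    -- ghci> duplicate 1 :: [(Int, Int)]
    -- [(1,1)]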
A typeclass is really nothing but a basic Java/C# interface.
The 4 things you do with an interface are:
1. Declare an interface
2. Describe the contract of the interface
3. Declare that a datatype (or class) adheres to that interface
4. Provide an implementation of the contract of the interface (fulfill the contract).
A Haskell typeclass does exactly that. The keyword "class" is where you do #1 and #2. The keyword "instance" is where you do #3 and #4.
The only notable difference is:
If an ISizable has an int GetSize() method, then in Haskell the contract isn't int ISizable.GetSize(); it's effectively static int GetSize(ISizable x), with the value passed as an ordinary argument rather than being the receiver.
Or in Haskell:
instance ISizable MyType where
  getSize :: MyType -> Int   -- note: you don't actually write this line; the signature is implied by the class
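Spelled out end to end (the MyType definition here is made up just so there's something to measure):

    class ISizable a where             -- #1: declare the interface
      getSize :: a -> Int              -- #2: describe its contract

    data MyType = MyType [Int]

    instance ISizable MyType where     -- #3: declare that MyType adheres to it
      getSize (MyType xs) = length xs  -- #4: fulfill the contract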
In the absence of some kind of module or object on its left hand side, return has a polymorphic type.
e.g.
return 'c' :: Monad m => m Char
You can actually sort of think of this as a function from typeclass dictionary to value, except the compiler will pass that dictionary itself once it's inferred what it should be (and will then inline/optimize it away, where it can).
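A rough sketch of that idea (hand-written; not literally what GHC generates):

    {-# LANGUAGE RankNTypes #-}

    -- a "dictionary" is just a record of the class's methods
    data MonadDict m = MonadDict
      { dReturn :: forall a. a -> m a
      , dBind   :: forall a b. m a -> (a -> m b) -> m b
      }

    maybeDict :: MonadDict Maybe
    maybeDict = MonadDict
      { dReturn = Just
      , dBind   = \m k -> case m of Nothing -> Nothing; Just x -> k x
      }

    -- `return 'c' :: Maybe Char` behaves roughly like:
    example :: Maybe Char
    example = dReturn maybeDict 'c'   -- Just 'c'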
The whole point of monads (and most other typeclasses) is that they're an abstraction. If you use a type constructor then you cannot generalize your function to work with any kind of monad; see, for example, the definition of `mapM` in https://hackage.haskell.org/package/base-4.9.1.0/docs/src/GH... which uses `return`.
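A simplified sketch of such a definition (the real mapM in base is written differently, but the role of return is the same):

    mapM' :: Monad m => (a -> m b) -> [a] -> m [b]
    mapM' _ []     = return []        -- return builds "a list in m" for whatever m you're in
    mapM' f (x:xs) = do
      y  <- f x
      ys <- mapM' f xs
      return (y : ys)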
There’s no difference in this case—looks like an oversight. “<>” is the Monoid “append” operator (e.g., concat lists, union sets, compose functions). If you look in JIT.hs[1], you’ll see that there’s an instance of Monoid that just calls “.|.” (bitwise OR):
instance Monoid ProtOption where
  mempty = protNone
  mappend (ProtOption a) (ProtOption b) = ProtOption (a .|. b)
There is no single canonical monoidal operation on booleans (just as there isn't in an arbitrary ring), so I'd guess using the bitwise operators explicitly is good for disambiguation.
Yes and no—in practice in Haskell we tend to choose the most convenient instance and wrap things in newtypes to get at other instances. Writing code using generic operations (Monoid, Functor, Foldable) makes it easier to refactor datatypes later—coding to the interface and not the implementation, and all that. The bitwise OR operation makes sense for ProtOption because it’s basically a set.
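For booleans specifically, that's exactly what the Any and All newtypes in Data.Monoid are for:

    import Data.Monoid (All (..), Any (..))

    conj, disj :: Bool
    conj = getAll (All True <> All False)   -- False: <> means (&&) here
    disj = getAny (Any True <> Any False)   -- True:  <> means (||) here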