
I second this question and would like to add another one:

Where is it that Haskell shines?



I've loved Haskell for decades, but not yet had a chance to use it to pay bills. IMO, Haskell, more than any of the 1e3+ other PLs I know, shines in enabling 1) elegant and 2) robust ("fewer bugs") solutions.

Elegance comes from the non-strict functional part, enabling succinct and modular implementations.

Robustness comes from the static typing which forces you to be explicit about the range and domain of functions.

What's unique about Haskell is that the type system is _very_ expressive, making this a practical approach. IMO, the type system is absolutely key to Haskell.

Where Haskell disappoints: Int. For reasons that I still cannot fathom, someone chose to infest a beautiful language that is almost free of pitfalls with the atrocity of a "C-like" Int type. Indeed, in Haskell, this isn't always true:

  a < a + 1 where a :: Int
which is a real shame IMO. It fails for the largest value of Int (which, BTW, is implementation dependent). The correct answer would be either to have (my preference) Int == Integer, that is, arbitrary-sized integers always, or at the _very_ least, to have (+) fail on overflow, that is, return bottom like head [].
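A minimal demonstration of the wraparound (on GHC, where Int is 64-bit on most platforms):

```haskell
-- shows that a < a + 1 fails at maxBound, because (+) silently wraps
main :: IO ()
main = do
  let a = maxBound :: Int
  print (a < a + 1)                   -- False: the law fails at maxBound
  print (a + 1 == (minBound :: Int))  -- True: the sum wrapped around
```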


There are a bunch of fussy little problems that Haskell solves nicely. One example is looking up a value in a map

    map.get("foo") // returns null
is the value of the key null, or was there no key? Different languages have different approaches. Haskell lets you build up deep structures, a list of maps of lists of maps ... and you'll have an indication of what path you took down the data structure to get a result. You can do this in any language, but it relies on programmer discipline, and everybody doing it the same way every time. In Haskell, it's the easy thing to do, and often the only way to do it. You'll wind up with something like

[Just ["foo"], Just ["bar"], Nothing]

The first two maps had one answer each, the final map had nothing.
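Concretely, with Data.Map (from the containers package that ships with GHC), the "is the value null, or is the key missing?" distinction falls out of the types; the map contents here are my own example:

```haskell
import qualified Data.Map as Map

-- a map whose values are themselves Maybe, to force the ambiguity:
m :: Map.Map String (Maybe Int)
m = Map.fromList [("present", Just 1), ("storedNothing", Nothing)]

-- Map.lookup returns Maybe (Maybe Int) here, so all three cases are distinct
main :: IO ()
main = mapM_ print
  [ Map.lookup "present" m        -- Just (Just 1): key present, has a value
  , Map.lookup "storedNothing" m  -- Just Nothing:  key present, value is Nothing
  , Map.lookup "missing" m        -- Nothing:       key absent
  ]
```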

Laziness means you have control of the order of evaluation. In practice you can make your own control structures without macros. bracket is a good example,

    bracket open_socket close_socket do_stuff
the socket always gets cleaned up, with an ordinary function. No try/catch. (Note bracket takes acquire, then release, then the action to run in between.)
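The "control structures as ordinary functions" point is easy to see with a tiny sketch (my own toy example): because arguments aren't evaluated until needed, a plain function behaves like built-in if/then/else, with the untaken branch never running.

```haskell
-- laziness lets an ordinary function act as a control structure:
-- the branch we don't take is never evaluated
myIf :: Bool -> a -> a -> a
myIf True  t _ = t
myIf False _ e = e

main :: IO ()
main = putStrLn (myIf True "taken" (error "never evaluated"))  -- prints: taken
```

In a strict language this would blow up on the error argument before myIf ever ran.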

One of the biggest hassles in programming is leaky abstractions. The type system means you can steal abstractions from mathematicians that just don't leak. There's some good stuff here [1] about the kinds of things you can say.

Purity means values can't change behind your back. There's no function to call that'll change the state of some object you're working with.

Altogether, it's nice for implementing languages. You can build up parse trees any way you wish, and traverse those trees to execute the semantics. So if you can think of a DSL for your problem, you can whip up an interpreter pretty quickly. If you change your mind and want to turn it into a compiler, it's relatively painless.

It's a fun language.

[1] https://wiki.haskell.org/Typeclassopedia


I'm writing a compiler in Haskell for a university class this semester. I don't think "painless" is the word I would use to describe it. Would you care to elaborate on your thoughts here?


Sure :) First, I have to say I feel your pain. Writing a compiler is hard. If there's anything left you don't understand, this project will be punishing. Just know that at the end of the semester, there is absolutely no magic left. Your mental model of computation will be spectacular. Hang in there.

There are a couple of things. First, haskell lets you build up the language incrementally. So if your parse tree is like

    data AST = Prim Op
             | Immediate Int
you can just throw in

             | Variable String
And the compiler (with warnings enabled, e.g. -Wall) will tell you all the functions with non-exhaustive matches. It's tough to do this in C because you have to remember all of your switch/case statements. Also, in C those switches are going to be based on a tag in a data structure rather than a direct type. The compiler won't really help very much.
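A minimal sketch of that workflow, with toy types of my own: once Variable is added, any function still matching only the old constructors gets flagged at compile time.

```haskell
{-# OPTIONS_GHC -Wincomplete-patterns #-}

data Op  = Plus
data AST = Prim Op | Immediate Int | Variable String  -- Variable added later

-- this still compiles, but GHC warns:
-- "Pattern match(es) are non-exhaustive ... Patterns not matched: Variable _"
pretty :: AST -> String
pretty (Prim _)      = "<prim>"
pretty (Immediate n) = show n

main :: IO ()
main = putStrLn (pretty (Immediate 3))  -- prints: 3
```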

Second, the thing that implements the semantics for the parse tree can be a typeclass.

    class Semantics a where
        eval :: AST -> a
        
In Haskell, you are free to implement an interpreter, which is way way easier than a full compiler. It's nice to have an interpreter for your language, because you can run the same program both ways, interpreted and compiled.

so...

    instance Semantics Interpreter where
        eval (Prim op) = case op of
            Plus l r  -> eval l + eval r
            Minus l r -> eval l - eval r
            ...
        eval (Immediate n) = n
            ...
        -- variables need an environment, but i think you get the idea, just look up the string in the env.
Then, you can write a compiler.

    instance Semantics Compiler where
        eval (Prim op) = case op of
            Plus l r -> eval l ++ "MOV r1 r2\n"
                     ++ eval r ++ "ADD r1 r2 -> r1\n"
            ...
        eval (Immediate a) = "STORE " ++ show a ++ " r1\n"
            ...
        -- again, variables require an env to be passed around, it's just another arg to eval
        -- this time around though, instead of a simple lookup
        -- you need the memory address to load into r1
        -- you have to encode what lookup would do. 
        -- [2] talks about this a bit.
This is a pretty crappy evaluation strategy, just sticking the last result in r1, but it'll work. You can do much much fancier things.
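Filling in the elisions, a complete runnable version of the sketch might look like this. I've used plain functions instead of the typeclass and a stack-style "assembly" (so the r1-only problem doesn't arise); all names are my own:

```haskell
data Op  = Plus AST AST | Minus AST AST
data AST = Prim Op | Immediate Int

-- interpreter: evaluate the tree directly to an Int
interp :: AST -> Int
interp (Prim (Plus l r))  = interp l + interp r
interp (Prim (Minus l r)) = interp l - interp r
interp (Immediate n)      = n

-- "compiler": emit fake stack-machine assembly as a String
compile :: AST -> String
compile (Prim (Plus l r))  = compile l ++ compile r ++ "ADD\n"
compile (Prim (Minus l r)) = compile l ++ compile r ++ "SUB\n"
compile (Immediate n)      = "PUSH " ++ show n ++ "\n"

-- 1 + (5 - 2)
prog :: AST
prog = Prim (Plus (Immediate 1) (Prim (Minus (Immediate 5) (Immediate 2))))

main :: IO ()
main = do
  print (interp prog)   -- prints: 4
  putStr (compile prog) -- PUSH 1 / PUSH 5 / PUSH 2 / SUB / ADD
```

The same program runs both ways, which is exactly the interpreted/compiled cross-check mentioned above.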

Another super cute trick with Haskell: you can use LLVM to generate your machine code at Haskell run time, and then dynamically link it [1]. This lets you mix and match evaluation, interpret parts, compile parts, and use them interchangeably. Much much easier debugging. That link should get you generating machine code for a toy language in a day. Basically, instead of loading up the dynamic library, you just make an executable instead.

Anyway, there's lots of good stuff out there. I'd suggest checking out 3imp [2] (warning: PostScript file). That's Kent Dybvig's dissertation on writing Scheme compilers. The second implementation is unbelievably elegant: full support for call with continuation, so much cool stuff. It compiles to a VM rather than assembly, but that makes a lot of complex things clear.

An obvious typeclass is a pretty printer for your parse tree, so you can make sense of what a program is supposed to be doing and you don't have to swear quite as much when your resulting assembly doesn't work.

A less obvious typeclass is a typechecking pass: if someone is adding a string to an integer, you can return a new AST that injects the Int -> String conversion. Constant folding is a good one to do as well.
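Constant folding on a toy AST is only a few lines (my own simplified types, not the ones from the earlier sketch):

```haskell
data AST = Add AST AST | Lit Int deriving (Show, Eq)

-- fold children first, then collapse any fully-constant addition
fold :: AST -> AST
fold (Add l r) = case (fold l, fold r) of
  (Lit a, Lit b) -> Lit (a + b)
  (l', r')       -> Add l' r'
fold e = e

main :: IO ()
main = print (fold (Add (Lit 1) (Add (Lit 2) (Lit 3))))  -- prints: Lit 6
```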

An even less obvious typeclass will ensure that every malloc has a free. I don't think it's possible to guarantee for every C program, but you can get a lot of it, and spit out a warning when your algorithm isn't sure.

If you can handle academic papers, Oleg's finally tagless is the coolest way to swap out interpreters, compilers, and partial compilation [3] (pdf)

C won't help you with that, like, at all. You can play a game with a table of function pointers that you swap out for the various eval cases. Debugging that, IMHO, is hard. With Haskell the compiler just tells you you're being dumb.

I guess the main thing is, a compiler has a ton of special cases. Half the battle is just managing those special cases. Haskell eases a lot of those burdens. The other half of the battle is coming up with good algorithms for getting stuff done (like register selection above!). Haskell lets you slice things up, so you can give your algorithm everything it needs to do its job efficiently. It's not that you can't do that in other languages (clearly, there are a ton of compilers in a ton of languages); it's more that it really encourages and supports the kinds of things you need to do when writing a compiler.

Again, writing compilers is hard. You will feel stupid. You are not stupid; you're doing something very few of the 7 billion people on the planet have even attempted. It takes years of practice to get good enough to even try. You're a badass. You'll solve it.

[1] http://augustss.blogspot.com/2009/01/llvm-llvm-low-level-vir...

[2] ftp://www.cs.indiana.edu/pub/scheme-repository/doc/pubs/3imp.ps.gz

[3] http://okmij.org/ftp/tagless-final/JFP.pdf

edit

In retrospect that register allocator blows. Any right-associative operations will get stomped on. It's what I get for slapping something together at 1 AM. Let's just pretend this is for Forth, and you always work on the top of the stack.


Have you tried writing a compiler in another language? In my experience, functional languages are a very nice fit.


So which part do you find difficult? The parser? Haskell has the best library for parsing, parsec. The interpreter/code generator? I can't see how Haskell is different from any other languages on this part.


You're writing a compiler! It won't be painless in any language.


For me, I think the answer to this is really that Haskell is pure and immutable.

Equational reasoning[1] is immensely useful when debugging. It isn't unusual to debug pieces of Java code that look like this.

    MyClass c = new MyClass(..);
    c.method1(..);
    c.method2(..);
    c.method3(..);
    assert(c.someField == ..) // fails!
To figure out what went wrong, you have to read through all three methods, and depending on whether you have dynamic dispatch coming into play, you might need to read through _multiple_ versions of all three methods.

[1]: http://www.haskellforall.com/2013/12/equational-reasoning.ht...


This State of the Haskell Ecosystem doc [1] is very informative. If you're looking at application domains, Haskell (and OCaml etc.) is usually well suited for writing compilers, but it's not bad for your usual server-side application either. The only domain it's fundamentally unsuited for is systems/embedded programming.

[1]: https://github.com/Gabriel439/post-rfc/blob/master/sotu.md


My impression from having tried it briefly is that it's not great at real-time stuff because of its lazy evaluation. Do you find that to be true, or is that not an issue in real life?


Lazy evaluation does make understanding performance/memory behavior hard, but usually you'll be using pipes/conduit/machines instead of lazy lists in production code, so it's not much of an obstacle for real-time behavior.

However GHC's garbage collector can be an issue for real-time code, having high 99th percentile latencies.


Haskell shines at large codebases. It shines in a lot of other domains, but there's no mainstream competitor on dealing with messy, badly defined, big, changing problems.

Haskell shines at refactoring, API design, and constraint setting (want to enforce safety? atomicity? thread ordering?). As a bonus, it'll also reduce your codebase by an order of magnitude or two.

I mean, yes, if you've never written a complex parser in Haskell, you should try. But that won't change your life too much.


Haskell and its advocates talk a big, big game about its supposed strengths in these areas, but there are suspiciously few examples of these strengths actually manifesting in anything but toy or academic projects. I've been hearing about Haskell's supposed miracles for years and years, and I've never, never seen them.


Many things that emerged at Facebook during recent years can be attributed to the influence of Haskell: React, Immutable.js, GraphQL, etc. Haskell is extremely powerful, but allow me to rephrase Euclid: "there is no royal road to functional programming". If you want something more pragmatic, yet still quite expressive and powerful, try Clojure and ClojureScript. Especially ClojureScript: it's probably one of the best languages out there to use for front-end and mobile app development.


Then you haven't been paying much attention to Haskell which can be forgiven because it's still pretty niche compared to Python or JS.

The best example I can give you is the Haxl project at Facebook [0].

[0] https://github.com/facebook/Haxl


Haxl is one of the standard two or three examples. I want to see a non-Haxl example.


A lot of stuff happens behind closed doors. Standard Chartered Bank has 1.5 million LOC of GHC Haskell and another 1 million LOC of their own internal Haskell implementation.

I'm also aware of another 5 or so big banks having/building Haskell teams for developing internal tools, but all this stuff is hidden. The biggest public project I'd say is Facebook's Haxl (as pointed out elsewhere in this thread).


Standard Chartered is the other standard example. The only significant Haskell codebases anyone mentions are Standard Chartered, Haxl, maybe Barclays, and GHC itself. It's pretty damning.


Galois, several startups ...


Haskell is very strict in what the compiler allows you to do where. It's a really great alternative to just requiring developer discipline. It means that you can do some things (like refactoring) with much greater confidence because the compiler rules out some big classes of bugs. Same with using other people's libraries. You can only use them certain ways and they are explicit.

A side effect of being so strict (in the sense of picky) about what is allowed where is that it needs some really powerful abstractions to keep that from slowing you down. Its type system shines here. Often, these abstractions have a nice syntax in haskell and it makes for some really clean code, relative to a direct translation in other languages.


Writing parsers is really nice in Haskell: http://www.cs.nott.ac.uk/~pszgmh/pearl.pdf

Trifecta is a nice modern parsing library: https://hackage.haskell.org/package/trifecta
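For flavor, here's a tiny base-only sketch of the combinator style the pearl describes. Names are my own, and real libraries like parsec/trifecta are far richer, but the core idea is just functions returning functions:

```haskell
import Data.Char (isDigit)

-- a parser consumes a String and maybe yields a result plus the leftover input
newtype Parser a = Parser { runParser :: String -> Maybe (a, String) }

instance Functor Parser where
  fmap f (Parser g) = Parser $ \s -> fmap (\(a, rest) -> (f a, rest)) (g s)

-- consume one character satisfying a predicate
item :: (Char -> Bool) -> Parser Char
item p = Parser $ \s -> case s of
  (c:cs) | p c -> Just (c, cs)
  _            -> Nothing

-- one or more repetitions of a parser
many1P :: Parser a -> Parser [a]
many1P p = Parser $ \s -> case runParser p s of
  Nothing      -> Nothing
  Just (x, s') -> case runParser (many1P p) s' of
    Nothing        -> Just ([x], s')
    Just (xs, s'') -> Just (x : xs, s'')

number :: Parser Int
number = read <$> many1P (item isDigit)

main :: IO ()
main = print (runParser number "42abc")  -- prints: Just (42,"abc")
```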



