
Not at all. In fact, about 95% of all the Haskell I write - and I write quite a bit - is for commercial stuff, ranging all the way across large-ish-scale (not quite Google-scale, yet) computation, distributed systems, machine learning, modeling/simulations and web development. More academic-feeling stuff like parsing and DSLs is just the cherry on top (though even that was for commercial use).

I'll admit Haskell has a steep learning curve, but once you're there, all this stuff Go is said to do really well feels fairly lackluster compared to what you can find in Haskell-land. What's built into Go can be achieved at the library level in Haskell. As a result, we keep seeing better and better manifestations of key ideas in the library space.

Examples include Cloud Haskell, many, many concurrency libraries, STM, Parallel Haskell, many constant-space data streaming libraries (pipes, conduits, enumerators, ...), several excellent parsing libraries (parsec, attoparsec, trifecta, ...), etc.

As a side note, I used to do lots of Python/Ruby - I really can't anymore. They feel simultaneously more burdensome to code in (no static type-checking), more verbose (no elegant, long pipelines of computations), less expressive (no real first-class functions - you barely use map/reduce/fold/etc. in Python) and slower (as in runtime).

Having said all that, I do see how Go fills a gap in the market. You need something that's easy to grok and gets just enough of it right that you can produce fast, type-safe-enough and concurrent programs with somewhat less mutable state than what you may be used to in C. A simple mental model and ease of entry are conceivably great for larger, homogeneous teams.




I guess it all depends on whether or not you think that pure functional programming makes sense as a general approach to programming. Pure functional or not is a very fundamental choice that influences all other features of a programming language. So I don't think it makes a whole lot of sense to compare Haskell to Go feature by feature.

Anyway, I agree that there is no reason why Haskell should be confined to the academic space at all. Actually I think the whole "right tool for the job" mantra is largely misplaced when it comes to Turing complete languages (apart from _very_ low level systems programming).


Can you recommend any open source project that you consider a good example of Haskell usage?

(meaning both practical and well-written)


I'd say xmonad[1]! It is a very light and fast tiling WM.

There are also a couple of elegant and blazingly fast web frameworks, such as Yesod, Snap and Happstack[2].

1: http://xmonad.org/

2: http://www.haskell.org/haskellwiki/Web/Comparison_of_Happsta... http://stackoverflow.com/questions/5645168/comparing-haskell...


Installing xmonad requires several hundred megs of dependencies, so what does it mean for it to be light? Low memory footprint?


Pretty much. Hard disk space is still cheaper than RAM, so I think it's a good tradeoff. Plus those dependencies can be used for lots of other things.



Well, xmonad is an excellent example :-)


no real first-class functions - you barely use map/reduce/fold/etc. in Python

Can you expand on this?


Sure. What I meant is that while Python does have first-class functions, you don't make much use of them in idiomatic code. Much of the logic is still encapsulated in bloated, less flexible classes and/or imperative-style variables you create to hold intermediate values.

Even if you wanted to use first-class functions pervasively in your code, you'd lack the massive libraries and compiler optimizations available in Haskell. As a result, first-class functions tend to be used only at a superficial level in Python, perhaps as key arguments to some functions.
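For example, the typical superficial use looks something like this (a small sketch; the list and functions here are made up):

  # First-class functions in everyday Python: mostly passed as key
  # arguments to sorted/max/min, and rarely much more than that.
  words = ["haskell", "go", "python"]
  sorted(words, key=len)                  # ['go', 'python', 'haskell']
  max(words, key=lambda w: w.count("l"))  # 'haskell'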

In a language like Haskell, on the other hand, you make use of the first-class nature of functions all the time.

It's common to have pipelines like:

  foldr step 0 . map convert . concatMap (chunks 2) $ inputList
    where
      step    = ...
      convert = ...

Almost everything in that pipeline takes a function as a parameter. Also note how chunks takes just an integer, partially applying the function, and returns a new function that is ready to take a list and split it into groups of 2. You really get used to this stuff.
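For contrast, here's roughly what that kind of pipeline ends up looking like in Python (just a sketch - chunks, convert and step below are hypothetical stand-ins, not the elided definitions above):

  from functools import reduce

  # Hypothetical stand-ins for chunks/convert/step in the pipeline above.
  def chunks(n, xs):
      return [xs[i:i + n] for i in range(0, len(xs), n)]

  def convert(chunk):
      return sum(chunk)

  def step(x, acc):
      return acc + x

  input_list = [1, 2, 3, 4, 5, 6]

  # Without partial application and (.), the pipeline becomes one big
  # nested expression that you read inside-out:
  print(reduce(lambda acc, x: step(x, acc),
               map(convert, chunks(2, input_list)),
               0))  # 21

Workable, but it's exactly the kind of thing that pushes people back towards loops and intermediate variables.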


Thanks. Yes, I can definitely see the benefit of having a stdlib and a compiler that follow a functional approach.


I'll expand a little, because I've been feeling the same thing recently.

Python has syntactic support for list[0] comprehensions, which can be used a little like maps:

  def addOne(n):
    return n+1

  l = [1, 2, 3]
  [ addOne(n) for n in l ] # [2, 3, 4]
a little like filters:

  def isOdd(n):
    return n % 2 == 1

  l = [1, 2, 3]
  [ n for n in l if isOdd(n) ] # [1, 3]
and a little like folds/reductions:

  def accum(s):
    acc = [s]        # boxed in a list so the closure can update it
    def a(n):
      acc[0] += n
      return acc[0]
    return a

  l = [1, 2, 3]
  reduce = accum(0)  # shadows the builtin, but mirrors the usage below
  [ reduce(n) for n in l ][-1] # 6
You can also do Cartesian joins, though I rarely see these.
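For what it's worth, a Cartesian join is just a comprehension with two for clauses (a tiny sketch with made-up lists):

  ranks = [1, 2, 3]
  suits = ["hearts", "spades"]
  # Cartesian join: every rank paired with every suit.
  [ (r, s) for r in ranks for s in suits ]
  # [(1, 'hearts'), (1, 'spades'), (2, 'hearts'), (2, 'spades'), ...]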

There are a couple of problems I've run into. The first is that Python's libraries are just not engineered with the idea of using list comprehensions this way - folding is as awkward as it looks above, an exception thrown by a function called inside a comprehension terminates the whole comprehension, many Python functions mutate state and return None rather than a useful value, and so on. The second is that comprehensions are amazingly uncomposable, syntactically:

   l.map(addOne).filter(isOdd).reduce(accum(0))
is what I'd write in Scala, which is extremely tractable. In comparison, here's the equivalent in Python:

  [ reduce(nr) for nr in [ nf for nf in [ addOne(nm) for nm in l ] if isOdd(nf) ] ][-1]
You'll note that I've had to rename the elements, because they "leak" into the surrounding comprehension - this can be quite confusing the first time you see it. Also, these are fairly trivial comprehensions that call functions rather than evaluating expressions in-place; in-place expressions are well supported and very idiomatic, but they make composing comprehensions even harder.

I find Python's comprehension style very convenient, and I'm sure you could produce an excellent theoretical abstraction over it, but if you're coming at it from the point of view of wanting them to be map/reduce or something equally easy to reason about, you're going to be disappointed. Python isn't an object-oriented language, and it isn't a functional language - the more I use it, the more I think it's something akin to a collection-oriented language. Maybe that's just the way I use it. :)

[0] and also set comprehensions, dict comprehensions, and generator (lazy list) comprehensions, which are wonderful but exacerbate both the problems I talk about.


But Python has map(), filter() and reduce(). Why wouldn't you write your example as

    from operator import add
    reduce(add, filter(isOdd, map(addOne, l)))
I mean, I use list comprehensions when it makes sense, but I won't torture myself with them ;)


Of course you can. But the poor support for lambdas and higher-order functions makes comprehensions a worse-is-better solution: for example, you can pickle comprehension expressions (which you can't do for lambdas), and you don't need to import a module for reduce (in 3.x). I gave up on using map/filter/reduce when I realized they were just too frictive (or "un-Pythonic", if you prefer).


I'm sorry, but can you clarify what you mean by poor support for higher-order functions? And how can one pickle comprehension expressions?

I'm not trying to be argumentative, I just don't have much experience with that. I write functions that return functions/closures regularly, but they're always simple cases.


You're not coming across as argumentative.

By poor support for higher-order functions, I mean that e.g. you have to do "from functools import reduce, partial" for folds or partial function application. It's a trivial complaint, I'll give you, but it's one that's bitten me on more than one occasion (you'd think I'd learn!). There's also no foldr unless you implement it yourself.
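To make that concrete, here's roughly what the hand-rolled version looks like (a sketch, not anything from the standard library):

  from functools import reduce, partial   # the imports I keep forgetting

  # A hand-rolled right fold; the stdlib only gives you the left fold
  # (functools.reduce).
  def foldr(f, acc, xs):
      for x in reversed(xs):
          acc = f(x, acc)
      return acc

  reduce(lambda acc, x: acc + x, [1, 2, 3], 0)    # 6  (left fold)
  foldr(lambda x, acc: [x] + acc, [], [1, 2, 3])  # [1, 2, 3]  (right fold)
  partial(lambda x, y: x + y, 3)(4)               # 7  (partial application)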

I badly misspoke when I said that you could pickle comprehensions; what I meant was that the language gives you no hint about what you can and can't pickle. Pickling

  sum([ os.stat(f).st_size for f in os.listdir(".") ])
is obviously (I hope) not going to work. On the other hand pickling

  [ lambda n: n % 2 == 0 ]
intuitively ought to, since pickling [ isEven ] would work fine. I've had to rewrite a couple of modules because of this - again, maybe I should have learnt from my mistakes - but it gives me the general impression of "avoid lambdas and functions that regularly use lambdas, because they're occasionally a lot of unexpected work".
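To make the pickling asymmetry concrete (a minimal sketch, assuming both are defined at module level):

  import pickle

  def isEven(n):
      return n % 2 == 0

  pickle.dumps(isEven)                # fine: stored as a reference by name
  pickle.dumps(lambda n: n % 2 == 0)  # raises pickle.PicklingError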


I don't understand. You can't pickle functions; when you pickle a function, you get a reference to the function, not the actual function code.

E.g., this doesn't work:

    Dump.py:
      def isEven(n):
          return n % 2 == 0
      import pickle
      with open('pickled','w') as dumpfile:
          pickle.dump(isEven, dumpfile)

    Loader.py:
      import pickle
      with open('pickled') as loadfile:
          isEven = pickle.load(loadfile)
This throws

    AttributeError: 'module' object has no attribute 'isEven'
What you can do is marshal the function's code:

    import marshal
    import types   # needed for types.FunctionType below
    # 'file' here stands for an already-open file object
    marshal.dump(isEven.func_code, file)
    # Then to load:
    isEven = types.FunctionType(marshal.load(file), globals())
But you can also dump a lambda's code:

    import marshal
    import types
    marshal.dump((lambda n: n % 2 == 0).func_code, file)
    # Loading is the same:
    isEven = types.FunctionType(marshal.load(file), globals())
    isEven(4)  # True
So frankly, I don't get the problem with lambdas.


When you pickle a function, you get a reference to the function, not the actual function code.

Right - that's usually what I want. (I've used pickle to store tests against game assets, e.g. that a model has all of its textures checked into perforce, or that a texture for a model does not exceed 128x128, unless otherwise specified). Marshalling functions is usually a non-starter for this, since a) it's complicated to analyse the call graph before execution, and b) native functions can't be marshalled - an awful lot of code executed against this asset pipeline is thin bindings over native/Java/C# code. Maybe I have found the 1% of the Python use-cases where lambdas suck a bit and everywhere else it's fine--it would be great if my experience was exceptional and no-one else had ever had a similar problem doing something else.


Oh, ok. I don't know how Python could pickle a reference to an anonymous function, though.





