Go is boring (aeronotix.pl)
137 points by iand on July 27, 2012 | 130 comments



This is largely spot-on. Of course, the author's assertion (that having all of these things in one place is novel) is incorrect for some programmers, but the sheer boringness of the language is a virtue. Go's features as bullet points are unimpressive; the set of features is the impressive part. Putting in features that are solidly understood, that are desirable, and that don't clash with each other makes for a simple, solid, nice language.

I've tried without success to dig up an email from Linus Torvalds (on the LKML?) where, responding to an email complaining that a particular ARM implementation doesn't break any interesting ground, he goes on a mini-tirade about the lack of appreciation for a simple thing done well. I think Go fits the description nicely.


I reached the same conclusion ("Go is boring") myself, but with a different flavor.

After having spent a great deal of time in recent years doing things like GPGPU and a _whole_ lot of SIMD programming (not to mention a lot of use of the STL, BGL, etc), I have to say I'm less impressed by the boringness (aka taking good, solid choices from existing languages) of Go.

I understand that not everyone is excited about SIMD or generic programming or writing code for 32768 GPGPU threads... but Go feels like a missed opportunity in these respects, solving the problems of the mid-1990s with aplomb (which is good).

It seems more like 'a better Java' - solid, but not genuinely breaking any new ground in a way that creates a single good reason to use it.


To be fair, no programming language has really taken a first-party approach to those specific problems.

SIMD and GPGPU are both fairly difficult low-level concepts as they stand: I think there would definitely be some valuable postgrad research in looking at how to create higher-level interfaces to graphics acceleration and GPGPU/SIMD that are as simple and effective as Go's goroutines.

The main problem is that SIMD and GPGPU (even hardware-accelerated graphics, to a lesser extent) are bolt-ons; they're not a core part of every computer, which is why they don't usually form a core part of any programming language. At best, languages might choose to integrate this functionality into the standard library, but I don't think it will ever come built-in for a general-purpose language (maybe for a domain-specific language?).

I've actually been attempting to write a GPGPU interface between Go and OpenCL[1], so it's certainly not impossible to do GPGPU or SIMD in Go (via cgo), but it would require writing your own third-party libraries, or making additions to the Go runtime/compiler to develop a novel interface to this functionality, like goroutines that can run on the GPU (a dream many people in the Go community share).

[1] https://bitbucket.org/genbattle/go-opencl


> To be fair, no programming language has really taken a first-party approach to those specific problems.

APL (and its descendants J and K) has everything needed to do justice to SIMD/GPGPU, and it has had it since the early '60s. I am not aware of an APL compiler that actually uses GPU or SIMD instructions, but one is possible, and it wouldn't require any change to the base language.

Not surprisingly, this is because the idiomatic model is closer to the SIMD mindset than to the "standard" (algol/pascal/c/java/python/go) mindset, which is why it is unlikely to ever become popular.


reasonable SIMD support (let's limit it to SSE2 and above for "reasonable") has been in every Intel processor since what, Pentium 4? it's not much of a bolt-on anymore. one of the causes of the lack of good programming models for SIMD is that autovectorization was supposed to generate SIMD code for every application, but autovectorization isn't actually that great for many (most?) applications. (plus naive developers generally write terrible code--if array of structures versus structure of arrays doesn't make sense to you, you're probably writing code that cannot be autovectorized)
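
to make the AoS-vs-SoA point concrete, here's a rough Go sketch (the Point types are made up for illustration):

    package main

    import "fmt"

    // array of structures: each point's fields are interleaved, so a
    // loop over just the X values strides past every Y on the way.
    type PointAoS struct{ X, Y float32 }

    // structure of arrays: all X values are contiguous, which is the
    // layout SIMD loads and stores actually want.
    type PointsSoA struct {
        X []float32
        Y []float32
    }

    func sumXAoS(ps []PointAoS) float32 {
        var s float32
        for _, p := range ps {
            s += p.X // strided access
        }
        return s
    }

    func sumXSoA(ps PointsSoA) float32 {
        var s float32
        for _, x := range ps.X {
            s += x // contiguous access, vectorizer-friendly
        }
        return s
    }

    func main() {
        aos := []PointAoS{{1, 4}, {2, 5}, {3, 6}}
        soa := PointsSoA{X: []float32{1, 2, 3}, Y: []float32{4, 5, 6}}
        fmt.Println(sumXAoS(aos), sumXSoA(soa))
    }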

the lack of a magic compiler bullet is infinitely more true as soon as you look at anything remotely like a GPU, which gets into other more complicated problems due to a distinct memory space.

if I were to add any features like that to Go, I'd probably look in the direction of generating ISPC ( http://ispc.github.com/ ) or ISPC-like output. no need to solve the distinct address space issue (which you cannot solve), you have work creation so you don't need to jump through crazy scheduling hoops like persistent launches on the GPU, and it performs very well on Intel processors for SIMD-friendly applications.


I don't think the issue of separate address spaces is unsolvable. As long as you have a type system powerful enough to forbid aliasing between the CPU and the GPU code, you can do it. (For example, Rust's type system can encode task-local data.)


> reasonable SIMD support (let's limit it to SSE2 and above for "reasonable") has been in every Intel processor since what, Pentium 4?

Yeah, and the Athlon64 (and Opteron) for AMD.


SIMD is really a big omission if you're designing a low-level language these days. A modern language should really have first-class vector types. I'm talking about statically sized small vectors that map onto SIMD, e.g. 4 x float32 or 8 x uint8. Not something matlab-esque.

Something along the lines of OpenGL shading language, OpenCL C or relevant vector extensions in modern C compilers. With GCC and Clang vector extensions, SIMD programming is fairly nice. (NOTE: GCC 4.8/git version vector extensions emit bad scalar NEON assembly so you can't really use them if you target ARM).

One particular thing about SIMD coding is the shuffling instructions which are hard to express in normal programming language terms as the order of the shuffling has to be static. GCC and Clang both have a shuffling function that requires the parameters to be compile time constants. But I prefer GLSL/OpenCL shuffle syntax, like vec.xxyy or vec.wzyx or vec.s0s0s1s1, and combinations like (vec4)(vec1.xx, vec2.yy). Too bad this syntax is not available on Clang or GCC when doing normal CPU C code.

NEON's shuffling instructions are rather odd to deal with. An odd thing about them is that Clang's ARM intrinsics actually use __builtin_shufflevector to implement NEON shuffles like vrev.


You picked Go for the wrong reason then. Go is not about parallel programming, which is what SIMD and GPGPU programming are. Go is about concurrency, and that is a completely different beast.

What Go is good for is a different thing. It is good for writing a webserver, which you won't really like to do in SIMD or on a GPGPU. The reason is that a webserver requires you to handle independent activity where things are inherently uncoordinated: while one client is connecting, the other is being sent data.
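
To make the webserver case concrete, here's a minimal sketch (Go's net/http runs each connection's handler in its own goroutine, so independent clients are served concurrently without any explicit thread management):

    package main

    import (
        "fmt"
        "log"
        "net/http"
    )

    func main() {
        // net/http handles each incoming connection on its own goroutine,
        // so one client connecting never blocks another being sent data.
        http.HandleFunc("/", func(w http.ResponseWriter, r *http.Request) {
            fmt.Fprintln(w, "hello, independent client")
        })
        log.Fatal(http.ListenAndServe(":8080", nil))
    }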

Conversely, Go is bad when you want to handle massively parallel computation where each computation is the same and you have to wait until all of them are done before you can continue. This means you have a sequential program (you have to wait) but not a serial one, since all the computations could be executed with SIMD or on a GPGPU in parallel.


I think "Go is boring" is a too simplistic conclusion.

Go creates a genuinly unique programming environment. If you come from a C++ background (like me) you might think the C++ solution will always be structurally superior.

But this is not true. It has a really radical take on OOP, actually realizing some of the most extreme takes on OO from the C++ community: no class inheritance, only interface inheritance. Using the inheritance syntax for non-interfaces ("classes") does inheritance by composition.
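
A tiny sketch of what that looks like (made-up types):

    package main

    import "fmt"

    type Speaker interface {
        Speak() string
    }

    type Animal struct{ Name string }

    func (a Animal) Speak() string { return a.Name + " makes a sound" }

    // No class inheritance: Dog embeds Animal and picks up its fields
    // and methods by composition, satisfying Speaker implicitly.
    type Dog struct {
        Animal
    }

    func main() {
        var s Speaker = Dog{Animal{Name: "Rex"}}
        fmt.Println(s.Speak())
    }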

Regarding templates: C++ has the best (imperative) language support for templates. But it also shows where that can lead... (typedef typename...) Boost has actually become a playground for supposedly clean template implementations that look like nothing but a mess.


> C++ has the best (imperative) language support for templates

D's template support is far superior: static if, constraints, compile-time function execution, string mixins, opDispatch. These features make templates much more practical and more powerful.


Hey, I'm really interested in exploiting the GPU for general-purpose programming, and also in how SIMD can help you... Would you be willing to give me some examples of the most common/useful uses?


Image processing and machine vision is a big one. Part of the skeletal tracking algorithm for Kinect is implemented as a set of shaders on the Xbox's GPU.

That's just one example, but anything that involves image processing is a prime example of something that can be optimized for GPGPU.

EDIT: To add another couple of examples:

- Digital effects; rendering, etc. Digital effects studios like Weta Digital are using GPGPU for speeding up their photo-realistic rendering [1].

- Physics simulations for games; games can now move resource-heavy activities like physics simulation from the CPU to the GPU, not only freeing up CPU resources, but also increasing the number of particles, etc. that can be simulated (look for n-body simulations as an example).

[1] http://blogs.nvidia.com/2010/01/nvidia-collaborates-with-wet...



I'd like to point out that I had never heard of coroutines before Go. Not even in computer science, and I did take an Operating Systems course.

I have also never seen any language do interfaces like Go. Go does it just right. I find that this is pretty impressive as a feature on its own.


> I have also never seen any language do interfaces like Go.

OCaml has used structural subtyping since the beginning for its object layer. C++'s templates also use structural subtyping on type arguments. Pierce also covers the subject in TAPL.

> I find that this is pretty impressive as a feature on its own.

There are advantages and disadvantages to structural subtyping (compared to nominative): it's more flexible and has much lower overhead, but it's subject to false positives (structurally equivalent but semantically unrelated objects) and tends to have much worse error reporting. Not to mention structurally typed systems still usually have and need nominative types in their core, for "non-object" types.
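
In Go terms, the false-positive case is something like this (contrived types, my own sketch):

    package main

    import "fmt"

    type Quacker interface{ Quack() string }

    // Two semantically unrelated types that happen to share a shape:
    // both satisfy Quacker, whether that was intended or not.
    type Duck struct{}

    func (Duck) Quack() string { return "quack" }

    type SqueakyToy struct{}

    func (SqueakyToy) Quack() string { return "squeak" }

    func main() {
        for _, q := range []Quacker{Duck{}, SqueakyToy{}} {
            fmt.Println(q.Quack())
        }
    }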


I don't know what structural subtyping is, but I know that Go's interfaces are nothing like anything in C++.


    // "do" is a C++ keyword, so the method is named doIt here
    class A { public: int doIt() { return 1; } };
    class B { public: int doIt() { return 2; } };

    template <typename T> int doIt(T t) { return t.doIt(); }


How does one use that template? Can you show me a function that takes anything with a doIt() method? Like this?:

    doIt<A>(a)
    doIt<B>(b)

Can I take any arbitrary type with an "int doIt()" method and use it with doIt? Like:

    doIt(Arbitrary)


"How does one use that template? Can you show me a function that takes anything with a do() method? Like this?:"

It should work without the template parameter: do(a), do(b)

"Can I take any arbitrary type with an "int do()" method and use that with do?"

Yes.


cool


As dan00 noted, it works without the template parameter as long as there is no ambiguity. If there is an ambiguity (a template function defined on <int, int> and <double, double> to which you provide an int and a double), then you have to use an explicit template parameter to specify which overload will be used.

It's not the case here, so it just works.


> I don't know what structural subtyping is

Which is why "you have never seen a language doing interfaces like Go", everybody else calls it what it is (structural typing/structural subtyping)

> I know that Go's interfaces are nothing like anything in C++

Well you may "know" it, but your knowledge is wrong.


> Not even in computer science;

What did you specialize in? It's pretty easy to hit upon co-routines, if you have an interest in Scheme or even Haskell. But I can imagine, it's much harder if you are into, say, the innards of database schemas. (Just a random example.)


I don't know why I didn't come across it. Maybe if I took Operating System II ..


It's not surprising you didn't come across it -- I never did in school either. From what I can tell, it's because coroutines are an old idea that was eclipsed by OS threads and the like. They basically became interesting again when OS threads reached scalability limits.

This paper by the authors of Lua has some good historical background:

http://lambda-the-ultimate.org/node/2868

They also poke fun at Python in some ways, because its coroutines are fairly limited. Python got them sort of ad hoc, with yield in Python 2.2 and send() in Python 2.5. I think Python is by far the most popular language with any kind of coroutine. Coroutines aren't very popular or well understood.


I think you would have had to go deeper into programming languages, not deeper into operating systems.

Disclaimer: The above is definitely true for coroutines in general, but I don't know much about Go and whether its coroutines are truly unique.


I also heard of it in a programming languages course, and in Udacity CS 212, I think.

To add to the list, they're implemented in Python using yield statements.


Python's yield was originally a limited co-routine, something we coined as a "generator" (borrowing the word from Icon, which Tim was quite fond of). It was limited to one frame on the stack and could only return values. Recent enhancements have allowed values to be passed back into the suspended function. That's still not a full co-routine though since Python only lets you go one level down on the stack.

As someone else mentioned, you need to get deeper into programming languages, not operating systems. Scheme has call/cc which can be used to build co-routines. Someone needs to clue in the nodejs crazies as well, callbacks are not the way to design a language. ;-)


All of the node.js kids get really defensive when you point this out; as if async was supposed to be heinously ugly despite the fact that the entire raison d'etre of node.js is to be async.

Programming advances so slowly because all of the training required to become familiar with it eventually becomes a blind spot.


It is hard to get excited about concurrency in vanilla Python while the Global Interpreter Lock still chokes everything down internally. (Vanilla, here, should be read "not Stackless.")

For more effective use of lightweight processes, I'd probably reach for Erlang or Occam.


Not sure why; the GIL is not an issue for node-style concurrency (based on an event loop and async IO), though you need alternative IO layers (Gevent provides exactly that, and can — if requested — monkeypatch the stdlib to replace standard synchronous IO calls with Gevent-provided async IO).

In fact, mixing threads and an event loop with async IO is probably a good way to make everything blow up.


Coroutines are an old concept from before threading became popular. They were usually implemented as a green-thread library. For C, setjmp/longjmp was used to implement them. Some libraries messed around with the stack register to implement them.

Go makes them first-class citizens in the language. Other languages might call them Actors, lightweight threads, fibers, tasks, etc. It's popular in embedded systems to implement cooperative threading using coroutines due to their simplicity and low overhead in resource utilization.


A coroutine is not the same as an Actor, though actors can be implemented with coroutines.


If you want to learn coroutines with a relatively small and easily embeddable language, you can try with lua - http://lua-users.org/wiki/CoroutinesTutorial


I love statically typed languages. With a proper IDE, code navigation and completion work like magic. I end up doing less typing than in a dynamically typed language.

Have you ever tried to auto-complete the 'init' function in RubyMine? It will ask you which one of the 100 init functions you mean. :)

Not so with a statically typed language. There is only one init function to choose from, because the IDE knows the exact type you are working with at all times.


I do too, but I'm finding that a lot of the time there are so many untyped inputs to a system that static typing doesn't buy you that much. There always seems to be a ton of XML configuration data, incoming JSON from web services, databases with different type schemes, etc.


As soon as you receive an untyped input, make it conform to a typed data structure, and throw some kind of error if you can't. That way you catch any issues in your input data long before it bubbles through your app and causes a problem which is super-hard to track down.

See e.g. DictShield for Python (https://github.com/j2labs/dictshield), Jackson for JVM (http://jackson.codehaus.org/), Swiz for node.js (https://github.com/racker/node-swiz)...
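
In Go, for instance, that boundary check falls out of encoding/json directly (hypothetical User type, my own sketch):

    package main

    import (
        "encoding/json"
        "fmt"
    )

    // Hypothetical input type: untyped JSON either conforms to this
    // struct or Unmarshal returns an error right at the boundary.
    type User struct {
        Name string `json:"name"`
        Age  int    `json:"age"`
    }

    func main() {
        raw := []byte(`{"name": "alice", "age": "not a number"}`)
        var u User
        if err := json.Unmarshal(raw, &u); err != nil {
            fmt.Println("rejected at the boundary:", err)
            return
        }
        fmt.Println("ok:", u)
    }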


I don't know about other environments, but on iOS I just create a class that I store the e.g. JSON data in.


I'm also starting to find that a language with static typing and generally an enforced structure is easier to deal with.

With dynamic languages, you have to hold a lot in your head. Having an enforced structure offloads some things off your brain so you have more mental space to think clearly and not panic or get burned.


Static typing also makes overloading your functions on return type, instead of just argument types, possible. Not a lot of languages do this, though. I only know of one, but it's invaluable there.


Perl's a dynamic language, but return type can vary based on context. I once saw this really bite someone where the presence of parentheses on the left-hand side of the expression changed the behavior of the function being called on the right-hand side. Not fun to debug that one.


Exactly. Duck typing sucks. I don't know why it was invented...


It only "sucks"(causes problems rarely, and can be quite useful regardless) in dynamic languages. Languages like Go, Rust, and Haskell provide the same flexibility with static guarantees. All you must do is define type specific implementations to satisfy the interface(or typeclass). For example in Haskell, I can so something like:

    class Stream s where
      read :: s -> (a, s)
      write :: a -> s -> s
and extend it to any type:

    instance Stream [Int] where
      read s = ...
      write a s = ...

    instance Stream File where
      ...
and so on, and I can now pass any type that is a member of Stream, to a function that expects one.


The Haskell code is — as far as I can see anyway — nominative typing. It can be implemented after the fact, but you still need your type to be explicitly made into a Stream instance.

Contrast OCaml, where an object type is represented as a set of (method, type) tuples and type-checking is a subset check (if type A has all the methods of type B, then it's a subtype of B, regardless of anything else from visibility to semantics):

    # let x =
          object
              method foo = 42
          end;;
    val x : < foo : int > = <obj>
    # let y =
          object
              method foo = 63
              method bar = 12
          end;;
    val y : < bar : int; foo : int > = <obj>
    # x = y;;
    Error: This expression has type < bar : int; foo : int >
           but an expression was expected of type < foo : int >
           The second object type has no method bar
    # type simple = < foo : int >;;
    type simple = < foo : int >
    # (y :> simple) = x;;
    - : bool = false
    #


You're comparing ad-hoc polymorphism with subtype polymorphism. Doing so will lead you to the expression problem:

http://en.wikipedia.org/wiki/Expression_problem

Anyway, there's been quite a few proposals to add extensible records to Haskell, which would allow row polymorphism, similar to what you just showed in OCaml:

http://hackage.haskell.org/trac/ghc/wiki/ExtensibleRecords

Too bad that it has gone nowhere in a long time.


> You're comparing ad-hoc polymorphism with subtype polymorphism.

No, I'm comparing structural typing, which is what Go implements, to nominative typing.

But that may very well be due to me having stayed in the context of another sub-thread where this was the subject, and using that as a filter for the current one.


I really wish more languages had the option of using a Haskell style type system.


Or even just plain Hindley-Milner.


The above is not an example of duck typing.

Even if it was, one example of a difficult bug in one system does not invalidate a whole concept.

And duck typing is almost as old as programming itself...


Yes. I can't agree more. One example to back that up: Eclipse makes Java fun.


Is this sarcasm? =)

If you want to know how real auto-complete works, try out IntelliJ. Auto-complete on Eclipse is like slowly being pecked to death by ducks.

A more accurate statement in my mind is "Eclipse makes Java palatable, IntelliJ makes Java fun."


I am downloading IntelliJ. I've used Eclipse for years; it works for me.

I will give IntelliJ a try.


IntelliJ is the best IDE out there. It might be pricy, but it will save you so much time in the long run. Since time is money, you will end up saving money too. :)


The Community Edition is free of cost. The choice depends on what you need: http://www.jetbrains.com/idea/features/editions_comparison_m...


Yeah, the free version supports Java SE, with a really nice Scala plugin available for download via the plugin manager.


I think the counter-argument here would be that dynamically-typed languages let the person behind the keyboard run the show, as opposed to the IDE.


Alas, nobody has written a really good IDE for Haskell, yet. So we still have to run the show for that statically typed language manually.


If I ever develop a programming language, the first thing I will do is take the clang approach and make it a library-based architecture. That way, people can build tools for the language, including IDE integration, without having to reinvent the wheel.


Yes, this has been invaluable for us in Rust. One cool thing we've been able to do with this architecture is to write a fuzzer -- a tool that uses the Rust compiler itself to generate random Rust programs to test the compiler's correctness.


I think GHC does something like that. At the very least, it exposes an API that lets you do all sorts of fun things. There are projects like Scion[1] that let you integrate that into an editor.

[1]: https://github.com/nominolo/scion/

However, there is simply less drive to develop tooling like that for Haskell than there is for Java. Haskell is a much easier language than most others to use given just a moderately intelligent text editor and a REPL. Java, on the other hand, is verbose and annoying even with a very good IDE.

So Haskell can have good support, but since it isn't terribly necessary it isn't anything like a top priority.


Indeed, “the architecture of GHC” mentions some uses of GHC as a library: http://www.aosabook.org/en/ghc.html


Compiler As A Service. Microsoft are pushing this with Roslyn.

http://www.infoworld.com/d/application-development/microsoft...


What do you think of leksah?


I know of it, but haven't used it, yet. That's why my comment was a bit more guarded than it would have otherwise been. (Though re-reading that line, it doesn't come across as such to me now.)


IDEs are just another potentially-useful (some of them are better done than others) form of abstraction to let you focus on the important parts.


You run the show by architecting your software, not by counting the number of keystrokes you typed.


IntelliJ's Python module (PyCharm) does as decent a job as you might expect based on code and your type annotations (i.e. if your docstring says that foobar is a BlahManager, it will complete foobar's methods based on that and warn you if you are calling something foobar doesn't have).


Which IDE do you use with Go?




I've tried giving Go a try a bunch of times now. My primary choice of language is Haskell and I just can't seem to get excited about Go.


Would you say your interest in programming languages is largely academic in nature? I get the impression that Haskell mostly (for now at least) fits best with academia, Java/C# for enterprise, python/ruby for smallish web apps, Go/C/C++/Java for industrial applications (like servers, etc).

There is a very real possibility that Go is just not the right fit for you with your current requirements. No language is the right fit for every application.


Not at all. In fact, about 95% of all the Haskell I write - and I write quite a bit - is for commercial stuff, ranging all the way across large-ish (not quite Google-scale, yet) computation, distributed systems, machine learning, modeling/simulations and web development. More academic-feeling stuff like parsing and DSLs are just the cherries on top (though even those were for commercial uses).

I'll admit Haskell has a steep learning curve, but once you're there, all this stuff Go is said to do really well feels fairly lackluster compared to what you can find in Haskell-land. What's built into Go can be achieved at the library level in Haskell. As a result, we keep seeing better and better manifestations of key ideas in the library space.

Examples include Cloud Haskell, many, many concurrency libraries, STM, Parallel Haskell, many constant-space data streaming libraries (pipes, conduits, enumerators, ...), several excellent parsing libraries (parsec, attoparsec, trifecta, ...), etc.

As a side note, I used to do lots of python/ruby - I really can't anymore. They feel simultaneously more burdensome to code (no static type-checking), more verbose (no elegant, long pipelines of computations), less expressive (no real first class functions - you barely use map/reduce/fold/etc. in python) and slower (as in runtime).

Having said all that, I do see how Go fills a gap in the market. You need something that's easy to grok and gets just enough of it right that you can produce fast, type-safe-enough and concurrent programs with somewhat less mutable state than what you may be used to in C. A simple mental model and ease of entry are conceivably great for larger, homogeneous teams.


I guess it all depends on whether or not you think that pure functional programming makes sense as a general approach to programming. Pure functional or not is a very fundamental choice that influences all other features of a programming language. So I don't think it makes a whole lot of sense to compare Haskell to Go feature by feature.

Anyway, I agree that there is no reason why Haskell should be confined to the academic space at all. Actually I think the whole "right tool for the job" mantra is largely misplaced when it comes to Turing complete languages (apart from _very_ low level systems programming).


Can you recommend any open source project that you consider a good example of Haskell usage?

(meaning both practical and well-written)


I'd say xmonad[1]! It is a very light and fast tiling WM.

There's also a couple of elegant and blazingly fast web frameworks, such as Yesod, Snap and Happstack[2].

1: http://xmonad.org/

2: http://www.haskell.org/haskellwiki/Web/Comparison_of_Happsta... http://stackoverflow.com/questions/5645168/comparing-haskell...


Installing xmonad requires several hundred megs of dependencies, so what does it mean for it to be light? Low memory footprint?


Pretty much. Hard disk space is still cheaper than RAM, so I think it's a good tradeoff. Plus those dependencies can be used for lots of other things.



Well, xmonad is an excellent example :-)


no real first class functions - you barely use map/reduce/fold/etc. in python

Can you expand on this?


Sure. What I meant is that while Python does have first-class functions, you don't make much use of them in idiomatic style. Much of the logic is still encapsulated in bloated, less flexible classes and/or imperative-style variables you create to hold intermediate values.

If you wanted to use first-class functions in your code pervasively, then you lack the massive libraries and compiler optimizations available to Haskell. As a result, first-class functions are only used at a superficial level in Python, perhaps as key arguments to some functions.

In a language like Haskell, on the other hand, you make use of the first-class nature of functions all the time.

It's common to have pipelines like:

    foldr step 0 . map convert . concatMap (chunks 2) $ inputList
      where
        step    = ...
        convert = ...

Almost everything in that pipeline takes a function as a parameter. Also note how chunks takes an integer, partially applying the function, and returns a new function that is now ready to take a list to chunk into groups of 2. You really get used to this stuff.


Thanks. Yes, I can definitively see the benefits in having a stdlib and compiler that follows a functional approach.


I'll expand a little, because I've been feeling the same thing recently.

Python has syntactic support for list[0] comprehensions, which can be used a little like maps:

  def addOne(n):
    return n+1

  l = [1, 2, 3]
  [ addOne(n) for n in l ] # [2, 3, 4]
a little like filters:

  def isOdd(n):
    return n % 2 == 1

  l = [1, 2, 3]
  [ n for n in l if isOdd(n) ] # [1, 3]
and a little like folds/reductions:

  def accum(s):
    acc = [s]  # a one-element list so the inner closure can mutate it
    def a(n):
      acc[0] += n
      return acc[0]
    return a

  l = [1, 2, 3]
  reduce = accum(0)
  [ reduce(n) for n in l ][-1] # 6
You can also do Cartesian joins, though I rarely see these.

There are a couple of problems I've run into. The first is that Python's libraries are just not engineered with the idea of using list comprehensions in this way - folding is as awkward as it looks above, exceptions thrown in the list comprehension functions will terminate the comprehension, many python functions alter state and return None rather than a useful output, and so on. The second is that they're amazingly uncomposable, syntactically:

   l.map(addOne).filter(isOdd).reduce(accum(0))
is what I'd write in Scala, which is extremely tractable. In comparison, here's the equivalent in python:

  [ reduce(nr) for nr in [ nf for nf in [ addOne(nm) for nm in l ] if isOdd(nf) ] ][-1]
You note that I've had to rename the elements, because they "leak" to their surrounding comprehension - this can be quite confusing the first time you see it. Also these are fairly trivial comprehensions, which call functions rather than evaluate expressions in-place - this is well-supported and very idiomatic, but makes comprehension composition much harder.

I find Python's comprehension style very convenient, and I'm sure you could produce an excellent theoretical abstraction over it, but if you're coming at it from the point of view of wanting them to be map/reduce or something equally reasonable-about, you're going to be disappointed. Python isn't an object-oriented language, and isn't a functional language - the more I use it the more I think it's something akin to a collection-oriented language. Maybe that's just the way I use it. :)

[0] and also set comprehensions, dict comprehensions, and generator (lazy list) comprehensions, which are wonderful but exacerbate both the problems I talk about.


But Python has map(), filter() and reduce(). Why wouldn't you write your example as

    from operator import add
    reduce(add, filter(isOdd, map(addOne, l)))
I mean, I use list comprehensions when it makes sense, but I won't torture myself with them ;)


Of course you can. But the poor support for lambdas and higher-order-functions makes comprehensions a worse-is-better solution, because you can e.g. pickle comprehension expressions (which you can't do for lambdas), and you don't need to import a module for reduce (in 3.x). I gave up on using them when I realized they were just too frictive (or that they were "un-Pythonic", if you prefer).


I'm sorry, but can you clarify what you mean by poor support for higher-order-functions? And how can one pickle comprehension expressions?

I'm not trying to be argumentative, I just don't have much experience with that. I write functions that return functions/closures regularly, but they're always simple cases.


You're not coming across as argumentative.

By poor support for higher-order functions, I mean that e.g. you have to do "from functools import reduce, partial" for fold or partial function application. It's a trivial complaint, I'll give you, but it's one that's bitten me on more than one occasion (you think I'd learn!). There's also no foldr unless you implement it yourself.

I badly misspoke when I said that you could pickle comprehensions, because what I meant was that the language gives you no hint that you might be able to. Pickling

  sum([ os.stat(f).st_size for f in os.listdir(".") ])
is obviously (I hope) not going to work. On the other hand pickling

  [ lambda n: n % 2 == 0 ]
intuitively ought to, since pickling [ isEven ] would work fine. I've had to rewrite a couple of modules because of this - again, maybe I should have learnt from my mistakes - but it gives me the general impression of "avoid lambdas and functions that regularly use lambdas, because they're occasionally a lot of unexpected work".


I don't understand. You can't pickle functions; when you pickle a function, you get a reference to the function, not the actual function code.

E.g., this doesn't work:

    Dump.py:
      def isEven(n):
          return n % 2 == 0
      import pickle
      with open('pickled','w') as dumpfile:
          pickle.dump(isEven, dumpfile)

    Loader.py
      import pickle
      with open('pickled') as loadfile:
          isEven = pickle.load(loadfile)
This throws

    AttributeError: 'module' object has no attribute 'isEven'
What you can do is marshal the function's code:

    import marshal
    import types
    marshal.dump(isEven.func_code, file)
    # then to load
    isEven = types.FunctionType(marshal.load(file), globals())
But you can also dump a lambda's code:

    import marshal
    import types
    marshal.dump((lambda n: n % 2 == 0).func_code, file)
    # loading is the same
    isEven = types.FunctionType(marshal.load(file), globals())
    isEven(4)
So frankly, I don't get the problem with lambdas.


When you pickle a function, you get a reference to the function, not the actual function code.

Right - that's usually what I want. (I've used pickle to store tests against game assets, e.g. that a model has all of its textures checked into perforce, or that a texture for a model does not exceed 128x128, unless otherwise specified). Marshalling functions is usually a non-starter for this, since a) it's complicated to analyse the call graph before execution, and b) native functions can't be marshalled - an awful lot of code executed against this asset pipeline is thin bindings over native/Java/C# code. Maybe I have found the 1% of the Python use-cases where lambdas suck a bit and everywhere else it's fine--it would be great if my experience was exceptional and no-one else had ever had a similar problem doing something else.


Oh, OK. I don't know how Python could pickle a reference to an anonymous function, though.


Haskell isn't just used in academia. It's used at Facebook, for example, and no-one would accuse Facebook of being an academic company.


And C/C++ is used by developers to implement Java, C#, Python, Ruby, Go, C, and C++. :)


Actually, there are Java and C# compilers implemented in the language itself.


I'm in the same boat as ozataman; Go feels like a huge step backwards to me because I use Haskell. My interest in programming languages is roughly 0% academic in nature. I use Haskell primarily for web development (also random stuff like parsing log files, automating deployments, etc). I use it for these things because the practical benefits of such a high-level language are so good that using anything else is painful.


Same here.


What if I told you that you're not supposed to be excited about your programming language?


That wouldn't work for me. I've been lucky enough to almost always work on things I've been interested in, and it has served me well in my career so far. I doubt I could even perform at 50% efficiency when using tools that don't excite me.

I've also had a ton of fun mastering and compulsively customizing both VIM and Emacs :-)


I can see where you're coming from but for a programmer who is beyond "programming puberty" it is a great plus to be in love with their programming languages. Even better if he/she is unfaithful, and has massive orgies (oh wait did I just take this analogy too far?)


It's a great analogy. I'll go tell my wife about it.


I worked in PHP for years, now we've moved over to NodeJS and I'm massively excited about JS in a way that I never was about PHP.

If you're not excited about your programming language, I would suggest that maybe you're using the wrong one.


As a long-time Python programmer, I tried Go recently for something that needed large amounts of concurrency (a hosted version of hubot: http://instabot.stochastictechnologies.com), and I have to say, I am very pleasantly surprised.

The type system was a bit cumbersome, after coming from Python, especially having to wrangle with pointers after not using them ever, but it's nothing you don't get used to. I'm still not sure how much I gain from static typing, but I'm willing to bear it out.

Channels and goroutines, however, were an absolute dream to use. The entire IRC frontend runs off one process, which, I am led to believe, will basically never need anything more (it just proxies messages from IRC to the backend and back). Communication with the processes was fantastically easy; creating, launching and reasoning about goroutines is, again, very straightforward; and all of this feels very much like a first-class citizen of the language.

Python has gevent too, and it suited me very well, but Go feels more integrated and better done.


I think what you gain from static typing is pretty much that you can compile your programs and run without a fat interpreter.

Note - I am not saying that is the only possible benefit of static typing anywhere, e.g. in Haskell - this is more in line with C, you are doing type declarations to cue the compiler rather than to realize some utopian test-free development methodology.


Oh, definitely, I'm just not sure it gains that much speed compared to Python. I'd expect Go's speed to be on par with C, but, from what I understand, it's more like PyPy.

Of course, this is just from what I hear, I haven't run any benchmarks. Does anyone have more details about this?


Having simple, established ways of doing most common things makes it easy to write code without dwelling on decisions around the programming language: how the syntax should be formatted, how symbols should be named, how memory management should be arranged, what conventions should be used for splitting code into files and modules (with all the little extra design issues involved in structuring header files and include relations, in case you're doing C/C++), what kind of build system should be used, which unit test framework should be chosen, and which third-party libraries should be chosen for the very commonly needed stuff that's nevertheless not included in the language's standard library because it was standardized somewhere in the late 80s.

Having all this stuff basically solved out of the box makes it very easy to start cranking out actual solutions with Go, even though none of that is particularly interesting from a programming language design standpoint.


The author makes valid points, and names some of the reasons why I tested Go recently, but when it was ~15x slower than Perl and 10x slower than Java on some simple regexp matching, I gave up on it.


There's some history behind this. You can read some of it from Russ Cox, one of the Go authors.

https://groups.google.com/forum/?fromgroups#!topic/golang-nu... http://swtch.com/~rsc/regexp/regexp1.html

Edit: missed another good one. There's a lot of discussion about this on the mailing list. https://groups.google.com/forum/?fromgroups#!topic/golang-nu...


With other simple regular expressions Perl and Java will be millions of times slower than Go: http://swtch.com/~rsc/regexp/regexp1.html


The thing that makes me pull back from trying Go is the fact that it has no exceptions and no other feature for handling errors (correct me if I'm wrong).

Ugh! I can't imagine going back to the days when I had to call a function, check its return value to see whether there was an error or not, then, if no error, proceed to the next function call, check it for error, and so on.

I know exceptions were a source of "complexity", and you could say that Maybe monads or other programming attempts to solve the problem increase complexity too.

But to just punt and go back to doing it the ugly, unmaintainable brute force way? I just find that hard to swallow.
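
Concretely, the pattern I'm dreading looks roughly like this (a made-up example):

    package main

    import (
        "fmt"
        "io/ioutil"
        "os"
    )

    // readConfig shows the check-every-return-value style: each call
    // yields (value, error) and the caller must branch on it.
    func readConfig(path string) ([]byte, error) {
        f, err := os.Open(path)
        if err != nil {
            return nil, err
        }
        defer f.Close()

        data, err := ioutil.ReadAll(f)
        if err != nil {
            return nil, err
        }
        return data, nil
    }

    func main() {
        if _, err := readConfig("app.conf"); err != nil {
            fmt.Println("error:", err)
        }
    }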


This sounds really interesting and compelling.

As someone new to Go... What would be the advantages of using Go over Python (taking into account the emergence and future ascendancy of PyPy)?


A few:

- Go has language-level support, in the form of goroutines, for multithreaded concurrency. Python is single-OS-thread-only, and PyPy doesn't change that.

- Go has enough static typing to help you write safer code, without the verbosity of "bigger" languages like C++ or Java. If you write a lot of tests for your Python app, you might not have variable typos or function argument type mismatches, but in Go the compiler catches these things.

- Go is compiled to machine code. This means that, barring a miracle in JIT/VM research, Go will probably always be faster than PyPy for most tasks.

These, together with the fact that many of the niceties of Python are available in Go (lightweight syntax, first-class functions, iterators and list slicing, etc), make it, in my opinion, a compelling alternative to Python.


> Python is single-OS-thread-only, and PyPy doesn't change that.

Python uses native multi-threads, but the GIL restriction means only one thread can run at a time regardless of number of cores or processors you have.

> If you write a lot of tests for your Python app, you might not have variable typos or function argument type mismatches, but in Go the compiler catches these things.

Use pylint and/or syntastic(for vim).

> Go is compiled to machine code. This means that, barring a miracle in JIT/VM research, Go will probably always be faster than PyPy for most tasks.

Go is slower than Java.

http://shootout.alioth.debian.org/u32/benchmark.php?test=all...

They removed LuaJIT from the benchmarks. LuaJIT will smoke Go in most scenarios. Here is a comparison between LuaJIT and plain Lua.

http://luajit.org/performance_x86.html

Native code doesn't mean "always faster than JIT/VM". A good JIT can do optimizations which a static compiler can't. For a long-running process with hotspots (the same code hit multiple times), a JIT can be as good as (or better than) native code.


For some reason, masklinn's comment is dead. Posting it here:

<quote>

> > Python uses native multi-threads, but the GIL restriction means only one thread can run

> Can run Python code. If you're multithreading for e.g. IO, the IO code will generally release the GIL.

</quote>

Yes, native code can release the GIL and run in parallel. As long as it's pure Python, only one thread runs at a time. The options are multiprocessing, gevent-style concurrency (which I prefer to node's) and native extensions. It isn't as bleak as people make it out to be.


> For some reason, masklinn's comment is dead. Posting it here:

Sorry, that's probably because I got error messages while posting ending up with 2 or 3 comments, and I removed the extraneous ones. You probably tried to reply to one of those I deleted.


> Python is single-OS-thread-only, and PyPy doesn't change that.

Neither statement is really the whole truth, and Python does have some nice ways to do coroutines and futures now, but I see how Go is better here (those 2 features do need to be in the core of the language), and I agree with your other points. Thanks.


>- Go is compiled to machine code. This means that, barring a miracle in JIT/VM research, Go will probably always be faster than PyPy for most tasks.

Actually it's slower than, or comparable to, simple Python in most cases. And slower than Java, which also uses a JIT.

It being "machine code" doesn't mean much. The implementation also counts, as do the libs. In Python lots of stuff is delegated to plain old C libs.


Go has a smaller, narrower standard library that is more tightly maintained and governed than Python's. Python's original slogan, "batteries included", could be amended after 20 years of use to "bitrot included." Some libraries have partially broken semantics that have been maintained because legacy applications rely on those broken semantics. Python modules don't even agree on tab width or casing.

Part of that is youth, Go went 1.0 just this Spring, and had a serious housecleaning applied in the process. "Being boring" is a big part of Go's identity -- most changes have been pragmatic and thoroughly thought out, and when they come, "gofix" often knows how to apply them to legacy code. (Static linking doesn't hurt, either.)

In theory, Go has more to gain in performance with the GC leaving a lot of room for improvement -- CPython has been subject to a lot of tuning over the years and has found a nice local maximum to settle on. In practice, this does not matter nearly as much as picking the right algorithm, profiling, and avoiding dumb mistakes. (A favorite gaffe: "string concatenation is faster than using a format string in Python".)


On the surface they feel similar in that they are both quite concise.

Concurrency is a huge difference. Python's approach to concurrency is the "global interpreter lock", which means only one thread can be running at once, to prevent you from accessing shared objects from multiple threads. Go has lightweight "goroutines" that are multiplexed across OS threads in parallel, channels that can be used for passing values between them, and the defer statement to help clean up. This sounds complicated at first, but it is the simplest system for reasoning about concurrent code I have used.
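
A small sketch of the goroutine/channel model (made-up workload):

    package main

    import "fmt"

    func main() {
        results := make(chan int)

        // fan out: each unit of work runs in its own goroutine and
        // reports back over the channel.
        for i := 0; i < 4; i++ {
            go func(n int) {
                results <- n * n
            }(i)
        }

        // the receiving side just drains the channel; no shared
        // state, no explicit locks.
        for i := 0; i < 4; i++ {
            fmt.Println(<-results)
        }
    }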

Go also has pointers, mutexes, fixed sized arrays (with easy to use dynamic slices over them) and other things that let you implement low level data structures when you need to.

Go has interfaces and embedding but no inheritance, which leads to iterative development of organization instead of needing to design everything from the start. The standard library makes extensive use of these and is extremely well organized... I much prefer Go's http client code, for example.

It doesn't have as extensive a corpus of libraries implemented in optimized C/Fortran, and I really miss things like numpy... in many cases Python code can be faster because the libraries do all the heavy lifting.


I think the purpose of Go is to solve some of Google's very specific problems (e.g. an ease of use comparable to Python, short compile times, easy construction of concurrent internet server apps), not to enthuse anybody. If I see it as an unspectacular but better Python, not as a top modern language, then it makes some sense to me. It seems they had to sacrifice something (e.g. generics) for short compile times.


Boring != bad.


Boring also probably means "I can build an enterprise level business on it"


I think that's the point the author was trying to make.


Boring <> Performance


I think that's a perfect tl;dr. :)


C is boring too, in all the right ways. I think Go could be what we've been waiting for: C 2.0.


C's preprocessor means C code can often do interesting tricks, such as the X macro: http://www.drdobbs.com/the-new-c-x-macros/184401387

There are tons of other interesting uses for macros. Obviously I'd rather a language had safer equivalents to a C preprocessor.


I view 'boring' as an advantage. Go is so 'boring' that you just can't focus on the language, you focus on your task.


Maybe the website wouldn't be down right now if it'd run on Go.


link broken :(



