C was wildly successful because the world needed a language that had some of the mechanisms of high-level languages but still allowed low-level control and compiled to fast machine code. C hit an in-between niche at the right time and place.
As far as I can see, Go is doing the same. We need a language that has some mechanisms of higher-level languages, like CSP-style concurrency and GC, but which still allows low-level control of things like memory layout.
I ported a partly done multiplayer game server written in Clojure to Go, and I find that I'm more productive in Go. The tooling and server monitoring are more developed on the JVM platform, but coding in Go is more fun because everything is immediately responsive, and nothing "falls over" the way it can with nREPL/bREPL in Clojure/ClojureScript. Always paying that several-second environment startup delay, or doing the extra management required to avoid it, wears on you in the long run.
C was successful because it was required to target UNIX, just as JavaScript is required to target the Web, Objective-C is required to target iOS, and so on.
There were plenty of languages with similar capabilities at the time. Algol, PL/I, Mesa, ....
I didn't say C was successful only because of its in-between approach. It was also in the right place at the right time. I'm sure being from Google is helping Golang tremendously.
I'm curious, what other reasons did you have to switch from Clojure to Go? I love coding in Clojure, so I wonder what would make someone "like me" switch.
When going for parallelism, it's helpful to be able to lay out what goes where in memory to avoid false sharing. This is quite hard to do in Clojure. The JVM has better GC, but Go also allows one to avoid GC.
Also, to write fast code, I don't have to decompile bytecodes or do obscure things to make sure arithmetic is what I meant it to be. Int64 addition is int64 addition.
Some things are definitely more powerful in Clojure. But that isn't where my priorities lie right now.
> The JVM has better GC, but Go also allows one to avoid GC.
I don't see that Go allows you to avoid GC any more than Java does, really. You can (and Java programmers do, with regularity) use object pools, the equivalent of Go's sync.Pool, in Java as well.
Golang tends to need far fewer allocated objects than Java.
You can allocate arrays (and slices) of structs in golang: just one allocated object, regardless of how many items there are. In Java, you need an array of references plus an instance for each object, so an array of 100 objects needs 101 allocations. So one could say golang avoided GC for those 100 objects it didn't need in the first place.
In Java, you're otherwise limited to arrays of primitive types.
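A minimal sketch of the layout difference described above (Point is a made-up type, used only for illustration): a slice of structs is one contiguous allocation, while a slice of pointers mirrors the Java situation of one object per element.

    package main

    import "fmt"

    // Point is a hypothetical value type used only to illustrate memory layout.
    type Point struct {
        X, Y int64
    }

    func main() {
        // One allocation: 100 Point values laid out contiguously in memory.
        values := make([]Point, 100)

        // 101 allocations: one slice of pointers plus one heap object per Point,
        // which is roughly what a Java Point[] looks like.
        pointers := make([]*Point, 100)
        for i := range pointers {
            pointers[i] = &Point{}
        }

        fmt.Println(len(values), len(pointers))
    }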
> So one could say golang avoided GC for those 100 objects it didn't need in the first place.
That's a reduction of the number of allocations, not of GC. In a generational copying GC like that of Java, the GC runs a minor collection when you've allocated enough bytes in the nursery to fill it up; it doesn't matter whether those bytes came from 1 object or from 101 objects. From reading Go's documentation [1] it's unclear whether its GC collection cycle is triggered based on the number of objects or whether it's triggered based on memory usage, but I assume the latter as that's how most GCs work.
The pressure on the GC during the mark phase is based on the number of total pointers. You may be able to reduce the total number of pointers by packing pointer-free structs together, but I would be surprised if this helps mark performance that much in practice.
The main way to really reduce GC pressure in a fully GC'd system, short of improving the GC itself, is to use pooling. Which both Java and Go programmers do regularly.
Golang case: 1 object, no child nodes. Nothing to do. Code that does not need to run is the fastest.
JVM case: visit the array object and its 100 child nodes. The JVM's GC still needs to visit all nodes of the graph. What actually takes time in GC is all those L1/L2/TLB misses and extra page faults caused by following the object graph. If all objects happen to be on different cache lines, L1 spills happen after just 512 references on Intel Sandy Bridge, Ivy Bridge, Haswell, etc. (and sooner in reality). Those extra loads from L2 are not free.
So, in this case Golang needed to visit 1 struct (object). JVM needed to visit 101 objects.
I wasn't talking about object lifetimes at all. I was talking about memory layout, pointing out that in golang you can have the objects laid out sequentially in memory without the need for pointer indirection (references).
Thus generational GC doesn't have anything to do with this. Generational GC is just something nice to have when a limited call graph generates a lot of temporary objects, i.e. garbage.
I mentioned this in the second paragraph. Like I said, I'd be surprised if that helps mark times that much in practice. For minor collections in a generational GC you're typically doing a Cheney scan, so it's very unlikely to matter as you're copying the whole live region of the semispace anyhow. For major collections on tenured objects, in theory it could help, but again I'm skeptical that it will affect mark performance that much, because compaction does an excellent job of mitigating the cache effects. (It's impossible to accurately measure this stuff right now, as the fact that Go's GC is much more immature than the HotSpot GC will skew the numbers.)
Here's an explanation from Russ Cox (in the form of an SO answer) of how Go and Java differ in terms of object allocation and control of memory layout. http://stackoverflow.com/a/22214673/1567738
I'm sure there's more info in the golang-dev group (https://groups.google.com/forum/#!forum/golang-dev) related to the GC specifics, but it's a moving target and may change substantially in the next few versions.
> The main way to really reduce GC pressure in a fully GC'd system, short of improving the GC itself, is to use pooling.
It occurs to me that I don't understand how pooling helps a non-generational GC like golang's. In a generational collector, the contents of the pool would be promoted to regions of GC memory that are copied less. Go's GC isn't generational, so what is going on?
If you use a pool, you can explicitly return memory to it without going through the GC. This causes mark/sweep cycles to occur less often. (Of course, using pools opens you up to use-after-free and memory leaks, albeit without the type- and memory-safety consequences of use-after-free in C/C++.)
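For what it's worth, here's a minimal sketch of that pooling pattern in Go, with the usual caveat that a pooled buffer must not be touched after it has been put back:

    package main

    import (
        "bytes"
        "fmt"
        "sync"
    )

    // Buffers are reused instead of re-allocated on every call, so less
    // garbage ever reaches the collector and mark/sweep runs less often.
    var bufPool = sync.Pool{
        New: func() interface{} { return new(bytes.Buffer) },
    }

    func handle(payload string) string {
        buf := bufPool.Get().(*bytes.Buffer)
        buf.Reset()
        defer bufPool.Put(buf) // hand the buffer back for reuse

        buf.WriteString("processed: ")
        buf.WriteString(payload)
        return buf.String()
    }

    func main() {
        for i := 0; i < 3; i++ {
            fmt.Println(handle("request"))
        }
    }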
> If you use a pool, you can explicitly return memory to it without going through the GC. This causes mark/sweep cycles to occur less often.
In a non-generational collector, why less often? Is it that the GC "sees" less garbage, and this figures into the frequency of mark/sweep cycles?
EDIT: Okay, I just got it. If you assume it's a bump allocator, it's easiest to picture. So Go does have another big advantage with regard to reducing GC pressure, in that one can stack-allocate structs and arrays. (One would be passing slices over those arrays most often, and stack-allocated data would be slightly less flexible, of course.)
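A small sketch of what that looks like in practice; whether a given value really stays on the stack is up to the compiler's escape analysis (building with -gcflags=-m reports its decisions):

    package main

    import "fmt"

    type vec struct{ x, y, z float64 }

    // add takes and returns values, not pointers, so its arguments and its
    // result need not escape: the compiler can keep them on the stack.
    func add(a, b vec) vec {
        return vec{a.x + b.x, a.y + b.y, a.z + b.z}
    }

    func main() {
        v := add(vec{1, 2, 3}, vec{4, 5, 6})
        fmt.Println(v.x, v.y, v.z) // printing copies the fields; v itself stays local
    }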
Right. See the documentation here [1]. Typically GCs run on allocation when they see a high ratio of the live set from the previous run to total memory in use.
Would I buy the author a beer at a con? Yes, so I dropped $3 at Amazon for the Kindle version. That's about what it's worth.
Problems:
1) Is it for noobs or the guy learning his 10th language? It starts with "what is a file, what is a folder, what is a text editor", then leaps into a very matter-of-fact "and this is how you do recursion". So a noob is going to be totally lost after the first chapters, and an old-timer is going to be pretty bored with the "what is a folder" level stuff.
2) Strange text-encoding errors on the site. ... Bullet point: "Strings are" a-hat (as in a with a top hat, not ahat) lowercase-epsilon c-concat-e "indexed" a-hat epsilon "starting at 0 not 1" ... current Firefox, if that matters (which it absolutely shouldn't).
Other than that, it's a pretty good intro-level book; like I wrote, it's "buy the author a beer" level of goodness.
It's UTF-8 being interpreted as a single-byte encoding: the "a-hat" (â) is the 0xE2 lead byte of the multi-byte sequence for a curly quote. A common fail is forgetting that Microsoft Word auto-corrects straight quotes (which are in ASCII) to curly double quotes (which need Unicode), and then serving the resulting UTF-8 text declared as CP1252 or ASCII or some such.
I wish all these books did a much better job of positioning the language and explaining when you should choose to use it. That seems crucial information, and hard to find out online. The introduction in Go by Example isn't much use:
"Go is an open source programming language designed for building simple, fast, and reliable software."
So that's as opposed to all the languages designed for building complicated, slow, unreliable software... OK, I'll bear that in mind.
There are so many viable languages out there now that any introduction needs to give readers informed advice on when it's worth using, and therefore when it's worth learning.
Is there any online resource that does a good job of this, for a wide variety of languages?
For every use case you'll encounter, there are so many answers.
There is no "when you want to do X, use language Y" guide, because it would be pretty stupid. Every good programming language can be used for nearly all your programming needs.
The only thing you need to keep in mind is where the target program should run (and the list is very small):
- should the target program run in a browser? Use Javascript or a language that compiles to it (Coffeescript, Dart, TypeScript, ...)
- should the target program run on an embedded board (very low memory footprint, very few CPU cycles available)? Use a language with manual memory management: C, C++, Nimrod, Rust, D ...
- should the target program take the most performance out of the hardware (games, HPC, a performant system library, ...)? Use a language that produces fast executables: C, C++, Nimrod, Rust, D, Go, ...
- should the target program run on a web server? On a desktop computer? Any case that doesn't fit the previous three? Use whatever language you want or are familiar with; they can all do it, and there is no "best choice".
In the eighties, a lot of devices with very limited memory (from 1kB to 64kB) used garbage collection of sorts (kind of heap compaction). They were called home computers. It worked just fine.
Heck, people are running embedded Java with garbage collection and all with just a few kilobytes of RAM.
Low memory does not mean you need to do manual memory management. Of course, when you add requirements for predictable memory usage, it might be a different matter. Like embedded devices that need to run for extended time periods reliably without issues.
But then again, a lot of Commodore 64 programs written in BASIC managed to do just that without issues, running for years without hiccups.
From your answer, then, we could already improve the author's introduction...
"Go is an open source programming language designed for building simple, fast, and reliable software. It's a good choice where squeezing maximum performance out of the hardware is a primary concern."
Maybe a minor point -- but it makes the book far more compelling to people who will benefit from it, while enabling those who won't to move on.
So given that your choices for high performance are:
"C, C++, Nimrod, Rust, D, Go, ..."
Is there really nothing the author could say to help choose between them?
Just finished this book this past weekend. I'm not impressed by any of the Go books on the market, but this one is suitable for a brief introduction to get you going.
After reading it and writing some trivial code, the advantages it has don't really help me with any problems I face as a one-man shop. I'd just be losing a lot of libraries, so I'll probably stick with Python for 99% of the stuff I do.
It's an easy read, and it does a good job showing what Go is like.
It piqued my interest, and yesterday I started reading 'Programming in Go: Creating Applications for the 21st Century' by Mark Summerfield.
Does anybody know how it compares to 'The Go Programming Language Phrasebook' by David Chisnall?
I can't say how it compares, but I'm enjoying Programming in Go at the moment. It has the good sense to assume Go probably isn't your first language, so it gets to the point pretty quickly rather than spending endless pages explaining what a conditional is.
One advantage of putting it in a meta tag is that the encoding is preserved even if the document is read from elsewhere (parsed from the local filesystem, etc).
Not to mention that not everyone has control over what HTTP headers get sent. It's way easier for a non-technical person to just add it in a meta tag. Definitely a better option than leaving the matter to the browser's encoding sniffer!
This site isn't new, is it? I just finished writing a book on Go and feel like I encountered this site a few times while researching - though not as a complete anthology.
If so, I've found this site helpful, but "Introduction" is the proper term; it doesn't go particularly deep into anything.
What sort of tools can I create in Go? Say I'm someone who programmed six years ago and since then has had only cursory exposure to Python, some Scala, and Perl. What "real" applications, desktop, server, web or otherwise, should I attempt to build?
I've heard good things about Golang, but then I hear things like its lack of generics makes it useless for a lot of cases.
Lack of generics is not really significant. Other advantages of golang are worth much more than this minor deficit: very fast compilation times, goroutines, channels, taggable structs (which make XML or JSON a breeze), and multiple return values -- one can get rid of the bug-inducing, complicated control flow that exceptions typically cause. Exceptions tend to be very hard to maintain and reason about, and they tend to move the code dealing with errors far away from where the exceptions are thrown. Probably a lot of people disagree about this, but humor me -- I write C++ as my day job. Oh, and I prefer golang's "defer" over RAII. It gets most of the job done in a way simpler fashion.
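For readers new to Go, a small sketch of the struct tags and multiple return values mentioned above (Player is a made-up type):

    package main

    import (
        "encoding/json"
        "fmt"
    )

    // The tags tell encoding/json which field names to use when marshalling.
    type Player struct {
        Name  string `json:"name"`
        Score int64  `json:"score"`
    }

    func main() {
        data, err := json.Marshal(Player{Name: "gopher", Score: 42})
        if err != nil { // the second return value carries the error
            fmt.Println("marshal failed:", err)
            return
        }
        fmt.Println(string(data)) // {"name":"gopher","score":42}
    }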
Even when generics are available, they account for only a tiny fraction of code. Other languages have been doing just fine without generics. Even Java's generics are just a syntactic-sugar, compile-time type-erasure hack instead of the real deal: those List<String>s are just List<Object>s in bytecode. I don't remember any C programmers complaining either.
What's very cool about golang in general is that rather than including all the possible features you can think of (looking at you, C++!), it's more about how few features you can have while still having an expressive and powerful language.
I don't mean to talk down other languages or to praise golang, but I do want to point out one should look at the big picture and not let small details distract. Such as lack of generics.
You can pretty much build whatever you want. The main advantages of Go are that it has built-in concurrency and it's a compiled language. Performance-wise it's faster than Python, and coding-wise it's faster than C. Basically it's somewhere in between compiled languages and high-level languages.
In my experience, programming in Go is very nice. It feels like you are coding in a scripting language, but it compiles.
I don't recommend it for production unless you are one of those master coders. It's still a fairly new language, and a lot of the time I had trouble finding easy solutions that can normally be found on Google/Stack Overflow. I also wished the error handling were different. Although the Golang community loves the error-handling conventions (you are supposed to handle the error like this), my code tends to be overrun by them.
Rob Pike made an interesting comment at the Q&A at GopherCon, I believe, basically saying that error handling isn't anything special and that he didn't want to create special syntax for handling error cases. Instead, the syntax of the entire language is at your disposal to deal with errors.
It was sort of a quick answer, so not really detailed.
As well, panic/recover is there if you would prefer that, but err seems to work just fine.
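For anyone unfamiliar with it, a minimal sketch of that panic/recover mechanism, where a deferred function turns a panic back into an ordinary error (mightPanic is a stand-in for code you don't fully control):

    package main

    import "fmt"

    // mightPanic stands in for code you don't fully control.
    func mightPanic() {
        panic("something went badly wrong")
    }

    // safely converts a panic back into an ordinary error value.
    func safely() (err error) {
        defer func() {
            if r := recover(); r != nil {
                err = fmt.Errorf("recovered: %v", r)
            }
        }()
        mightPanic()
        return nil
    }

    func main() {
        fmt.Println(safely()) // recovered: something went badly wrong
    }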
I get Pike's point on that, and I love Go overall, but this is really the one topic on which I can't agree.
The biggest problem is that throwing exceptions has a purpose: it makes errors interrupt the program by default.
With errors in Go, it's the reverse: errors are ignored by default.
Maybe it's just that I'm too new to Go, but I feel like I'd rather have my errors treated cautiously by default, with the possibility of ignoring them (catch), than having to use a variable and put an if statement everywhere I care about errors (which pretty much means everywhere, in my current design style).
Am I missing something here that can help avoid this verbosity? (As for panic/recover, that's good for functions you implement yourself, but you still have to deal with errors from core/library functions.)
> Maybe it's just that I'm too new to Go, but I feel like I'd rather have my errors treated cautiously by default, with the possibility of ignoring them (catch), than having to use a variable and put an if statement everywhere I care about errors (which pretty much means everywhere, in my current design style).
That's what you should do.
> With errors in Go, it's the reverse: errors are ignored by default.
To ignore errors in Go you need to explicitly ignore them, because most functions return an error.
So, most functions return two things - output data and an error. You have to assign both to something, like this:
data, err := foo()
and Go forces you to use the variables somehow, or it'll give you a compile-time error, so you can't just assign it and never look at it.
There are some functions which only return an error, and yes, for those, you can simply not assign the result to anything. However, in general, you should be suspicious of functions that get called that don't seem to return anything. Can they really never fail?
Note that there are linters which will warn you if you are ignoring errors (google errcheck, I don't have the URL handy).
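A small sketch of what that looks like in practice (config.json is just a made-up file name):

    package main

    import (
        "fmt"
        "os"
    )

    func main() {
        // The usual pattern: check the error right where it happens.
        f, err := os.Open("config.json")
        if err != nil {
            fmt.Println("open failed:", err)
            return
        }
        defer f.Close()

        // Ignoring an error has to be spelled out with the blank identifier;
        // declaring err and never using it is a compile-time error.
        _, _ = fmt.Fprintln(os.Stdout, "opened", f.Name())
    }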
Yes there's a lot of error checking code in Go, but that's a good thing. It's explicit, and it makes you actually think about the error case. My Go programs are WAY more robust than my programs from other languages.
Forcing you to think about your errors is a good point; there's nothing more painful than discovering that an exception is thrown in production.
I still feel it's getting in the way of the "Go allows you to be more productive" stance, though.
Also, it makes things very dangerous for people who aren't rigorous enough and won't take enough care with their errors.
Granted, "it allows bad developers to do bad things" should not be an argument, but I wouldn't be surprised if in a few years we hear of big failures imputed to Go's error handling (with headlines like "Foo.com's tragic failure was due to Go's forgetful position on handling errors").
Go is not intended to be a fast and loose "bang it out in an afternoon" language. If you're comfortable with the language, then you can be really productive in it, but that includes error handling. You have to handle errors, that's like 50% of programming.
I really doubt you'll see headlines like that. Why would you not see the same thing for code that doesn't handle exceptions in other languages? They're far more invisible than Go's errors, which are right there in the method signature.
> Go is not intended to be a fast and loose "bang it out in an afternoon" language
I wasn't hoping for anything that extreme :) But this is something I have to consider. I'm the CTO at my company, and "how much time to build that feature" is something that has very real importance when we decide what to implement.
Ruby got us into thinking of features as a weekly thing. I can justify spending most of a week doing in Go what I would have done in a day with Ruby if it lets us do things we could not do in Ruby or lets us scale up (and I've already done so for resource-heavy background tasks that could be parallelized, actually). Doing it in C probably would have taken more than a week. That's the extent of what I mean by "productivity" in Go.
> You have to handle errors, that's like 50% of programming
Well, that's the problem. That's 50% (probably less, actually, but never mind) of Go programming. The hard reality is that I care about only 10% of those errors.
If a file was supposed to be transferred to a third-party server and that server is down, then certainly I want to handle the error, inform the user about the situation, and ask her to come back later.
But if something is supposed to work 100% of the time, I don't want to spend time writing code that will probably never be executed, just in case an unforeseen circumstance happens. I want an exception (with some exception-handling system behind it notifying me about the exception, of course, which would be a one-time configuration for that whole class of errors) and a generic error message for the user.
If something is supposed to work 100% of the time, it won't have an error return value. Otherwise there's a nonzero chance it'll fail. The time you spend writing a simple "network configuration file must exist" is way less than you'd spend trying to read a stack trace when your application crashes with "filenotfound: foo.json".
If Go is 1/5th as productive as Ruby at producing the same feature, either you're very new to Go and very experienced with Ruby, or you're doing something drastically wrong. Obviously, experience makes a difference, but Go has a very short learning curve, so you should be able to get productive relatively quickly.
I honestly don't understand how you can care about only 10% of errors. You have to think about the error case, otherwise your application is going to be a crashy, data-lossy mess and no one will trust it.
> The time you spend writing a simple "network configuration file must exist" is way less than you'd spend trying to read a stack trace when your application crashes with "filenotfound: foo.json".
Well, a stack trace and enforcing meaningful error messages where they are generated is fine with me, but I see what you mean. Certainly, if we were to think of a custom message everywhere a problem may happen, it would be easier to debug. But is it really worth it?
> If Go is 1/5th as productive as Ruby [...]
All assertions probably apply :) I began to learn and use Ruby (even if not professionally at first) before Rails was a thing, so I'm quite used to it. By contrast, I began to use Go only a few weeks ago. It took me a week during a week off (with the slow pace that implies) to learn the language and re-implement the task I thought was a perfect fit. So yeah, that's indeed very productive, and that's what attracted me in the first place. I've also probably made horrible mistakes that made this take longer than it needed to, since I'm still learning the language.
But let's not underestimate Ruby here, and more importantly, its ecosystem. Certainly, if I had to use only the Ruby standard library, it wouldn't have been that fast. But with the help of ActiveRecord and ActiveSupport, things suddenly get really fast.
After learning to use database/sql, I searched for something that would save me time and switched to coocood/qbs. It helped a lot (especially for managing the connection pool to Postgres, since I was parallelizing queries), but of course it can't compare with something as mature as ActiveRecord.
That's not an intrinsic property of the language, but it clearly affects what it means, time-wise, to write a feature using it.
> I honestly don't understand how you can only care about 10% of errors
Well, I guess it depends on what kind of software you're writing. If I were designing a public-facing server app, certainly, the last thing I'd want would be for it to panic and quit. In the case I mentioned earlier, I was rewriting a non-destructive data-processing background job, called by cron. If it fails, cron will mail me the details and try again later; that's perfectly fine.
But let's not be mistaken about what I mean by "I care about 10% of errors". It does not mean I don't want to know there were errors the rest of the time. It means I don't want specially crafted logic handling them to preserve data or runtime state. Having the program interrupted and a report made with the error message and stack trace is OK most of the time (for the rare occasions when it happens). Let's take some actual examples.
A table in my database has an hstore field, which represents in the application what we would call in Go a `map[int]int`. Except that in hstore, everything is actually a string. So in my Go code, I have:
key, err := strconv.Atoi(hkey)
I can understand why this function returns an error, and that's a good thing: conversion to int may fail. But in my case, I know for sure I have a numeric string. So yeah, certainly, someone may introduce a bug somewhere, sometime, that puts a non-numeric key in the hstore. But the low probability of that makes me OK with getting a message like "can't convert 'foo' to integer" and a stack trace showing the line number when it happens (and indeed, once again, this instruction is followed by `if err != nil { panic(err) }`).
Another example is database query functions. They all return an error, in order to relay database errors and database-connection errors. That's fine; it's information I want. But do I really need to add chunks of logic testing for those after every single query I make? Again, a default "raise and die" behavior, with a few chunks of logic to ensure data integrity where it's at risk (on writes, for example), would be enough.
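One way to approximate that "raise and die" default, for what it's worth, is a tiny helper that panics on error; this is just a common idiom, not anything built into the language. A sketch:

    package main

    import (
        "fmt"
        "strconv"
    )

    // must panics on any error, so only the places that genuinely need
    // recovery logic have to spell it out.
    func must(n int, err error) int {
        if err != nil {
            panic(err)
        }
        return n
    }

    func main() {
        hkey := "42" // stands in for the hstore key from the example above
        key := must(strconv.Atoi(hkey))
        fmt.Println(key)
    }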
> Basically it's somewhere in between compiled languages and high-level languages.
High-level languages and compiled languages are pretty orthogonal. A language being interpreted or running on a VM# is not a prerequisite for being at a certain level or higher.
# And someone will complain that these are implementation details, not something to do with the abstract specification of the language itself.
I think he's talking more about the language, rather than the book.
You can have a book on APL or Forth that starts out in the first chapter targeting people new to programming, but that does not mean the language itself is the best choice.
Regarding Golang for novice coders, I would say it is an OK choice for a beginner, probably better than starting with Java, but not as good a choice as Python.
Comparing "hello worlds" is a fairly poor way of getting an idea of how much complexity/boilerplate you can expect from a language. It's better to compare something more substantial, like quicksort implementations:
Absolutely! The syntax can be a little confusing at first, but the documentation is very high quality and you can usually get help from #go-nuts on Freenode.
I was briefly excited because I thought I would discover the book containing wisdom of how to program a competitive Go AI engine. (Go is an ancient board game.) Oh well.