> Go has multiple returns from functions so a very typical scenario is a function that returns something and also an error, which would be nil if everything worked okay
This is one of the problems I have with C. You basically have to remember to manually check for an error on every function call, which means (1) an algorithm with short and simple pseudocode becomes lengthy, hard-to-read real code because of all the error handling you need to add, and (2) it is easy to forget to handle an error somewhere. Exception-based error handling lets you delegate errors to a top-level handler without cluttering up the intermediate code: an exception anywhere in webpage rendering logic can be handled by 500'ing the entire page, without the functions between the one producing the error and the one handling it needing any error-handling logic of their own.
I've been thinking about learning Go, but if its only error handling mechanisms are "pack it into a return value" or "panic() which is like exit(1) in C" then it's a black mark against the language. It's still a black mark if the language has exception-like error handling but it's not considered "idiomatic Go" and the stdlib and a good chunk of third-party libs report errors in return values.
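For reference, a minimal sketch of the convention the quoted line describes; readConfig here is a hypothetical function, not anything from the article:

    package main

    import (
        "fmt"
        "os"
    )

    // readConfig follows the usual Go convention: return the value plus an
    // error, where err == nil means everything worked.
    func readConfig(path string) ([]byte, error) {
        data, err := os.ReadFile(path)
        if err != nil {
            return nil, fmt.Errorf("reading config: %w", err)
        }
        return data, nil
    }

    func main() {
        cfg, err := readConfig("app.conf")
        if err != nil {
            fmt.Fprintln(os.Stderr, err)
            os.Exit(1)
        }
        fmt.Printf("loaded %d bytes\n", len(cfg))
    }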
On the other hand, exceptions are implicit control flow, which means you have to assume every single expression may throw an exception. So if you don't take extreme care to write strong exception safe code, subtle bugs will exist and are extremely hard to find.
Explicit control flow is more verbose, but it is also easy to spot when the control flow passes over something important by mistake. And as pointed out elsewhere, forgetting to check a return value is a solved problem via static analysis.
> On the other hand, exceptions are implicit control flow
Go doesn't force you to deal with errors either. I see plenty of code ignoring them with
    retval, _ := MightYieldAnError()
And some languages that have exceptions actually force the client code to handle them, like Java AFAIK.
Finally, Go's "explicit control flow" didn't have to be as verbose as it is; that's a design choice its authors made.
Go is primitive in the sense that it uses C-style code patterns, like the famous:
    retval = MightYieldAnError(&outError);
Edit: I forgot to mention panic/recover, which ultimately means the Go designers implemented some form of exceptions. That basically voids the usefulness of Go's error system, and is an admission that it's broken.
> Go doesn't force you to deal with errors either.
> And some languages that have exceptions actually force the client code to handle them, like Java AFAIK.
But this misses the point. Any error which you have to explicitly ignore is fine, because you were aware of it at some point and chose to ignore it, and anyone reading the code can plainly see that you ignored it.
And errors which can only happen at specific points (like function return values) are also ok, because then you only have to consider error handling and cleanup at those points.
But if you have neither of the above - if you have the situation where errors can spontaneously result from any evaluation of any expression with no explicit notation required - then you have a poor situation indeed.
Swift has a nice middle-ground here. Functions which can throw must be marked as such, and if you call a throwing function, you must use the `try` keyword. You don't actually have to handle the error there (you can let it propagate, or choose to let the program crash), but you can if you want.
This means there's no implicit control flow. You can always see where a function might throw, or know that a function will never throw. It also means, performance-wise, the implementation is much more efficient than C++-style stack unwinding.
I've not used this in practice, because I've not tried Swift, but I think it's a good approach.
Go has both error codes and exceptions (panic/recover). Basic errors implicitly panic: nil pointer dereferences, arithmetic errors like division by zero, and so on.
So in Go you have the worst of both worlds: constant error-handling code everywhere, AND you have to treat every single expression as if it may suddenly panic.
Oh, and to make matters even worse, Go has typed nil, which is not equal to untyped nil. So you cannot correctly check interfaces against nil, which in my experience is the most common cause of panics.
    // t is some kind of interface
    if t == nil {
        // t is untyped nil
    } else {
        // t is non-nil OR holds a typed nil
    }
I'm not even sure what the correct fix is. Should every interface come with an isValid() method to check whether it's usable?
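A runnable sketch of that gotcha, with hypothetical names:

    package main

    import "fmt"

    type MyError struct{}

    func (e *MyError) Error() string { return "boom" }

    // doWork returns a nil *MyError through the error interface. The caller
    // receives a non-nil interface value: it has a type (*MyError) and a nil
    // pointer inside.
    func doWork() error {
        var e *MyError // typed nil
        return e
    }

    func main() {
        err := doWork()
        fmt.Println(err == nil) // prints false, despite the underlying pointer being nil
    }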
Wrt error codes and (2): in Go, you pretty much have to ignore errors willingly (by explicitly assigning the error to _). You cannot ignore them accidentally without the compiler pestering you about the unused variable.
Go makes it really easy to write linters (lexer and parser in the standard library, simple AST, etc.), so there is a robust linter for exactly that: https://github.com/kisielk/errcheck
It's just as easy in C to write that linter in the post-libclang world. (Except that it's not necessary, since all major open source compilers support the warn-unused-result attribute as a builtin, while Go requires an external tool.)
Go's error model doesn't differ fundamentally from that of C. The built-in unused-variable error helps a lot by catching a sizable subset of cases, though; Go's designers made the right call there.
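A small sketch of the distinction in this exchange: the compiler only complains about unused variables, so a call whose results are discarded entirely compiles silently, and that is exactly the case errcheck flags.

    package main

    import "os"

    func main() {
        f, err := os.Create("out.txt")
        if err != nil {
            panic(err)
        }

        // Binding the error to an unused variable would be a compile error,
        // but dropping the results entirely is accepted by the compiler.
        f.Write([]byte("hello")) // errcheck: unchecked error
        f.Close()                // errcheck: unchecked error
    }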
It uses error values like Go does, but you must prefix any call to a function that can throw with `try`, or you get a compile-time error. That way you never forget to handle it, but it's not really an exception either.
This is valid, just not for Go. There isn't some big underlying coherence guiding the design; it's more a bunch of one-offs.
For example, you get a nice operator for appending to a string, but not to a slice. A nil slice behaves like an empty slice, but a nil map blows up if you write to it.
This isn't to say that Go is bad, it's just that its tradeoffs aren't in service of any big unifying vision. When range can only iterate over a slice, map, string, or channel, there's no forest. It's just four trees.
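For what it's worth, a small sketch of the kind of one-offs being described (nothing here beyond the standard built-ins):

    package main

    import "fmt"

    func main() {
        // Strings get an operator; slices need the append built-in.
        s := "go"
        s += "lang"
        xs := []int{1, 2}
        xs = append(xs, 3)

        // A nil slice behaves like an empty one...
        var ns []int
        ns = append(ns, 1) // fine
        fmt.Println(s, xs, len(ns))

        // ...but a nil map can be read yet not written.
        var m map[string]int
        fmt.Println(m["missing"]) // fine, prints 0
        m["key"] = 1              // panics: assignment to entry in nil map
    }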
The tradeoffs absolutely are in service of a big, unifying vision. That is, software development in the large. There are plenty of talks by Andrew, Rob, etc. that describe this calculus in detail. Whether or not they're convincing to you personally is, I guess, another matter.
I'm not sure how you can seriously say that given the designers and the internal use at Google. But perhaps the universe excludes Google, they're too far ahead anyway.
That's not what I meant. My reply to you needs the context of the parent comment.
To "miss the known universe" was the explicit purpose of the designers. It was supposed to be simple (whether their definition is good or not is debatable). The number of users is irrelevant.
Yeah, I found it quite bizarre as well. Also, panic()/recover() looks more like they decided to unify assert with setjmp/longjmp than like exceptions, since catching only happens after your function exits; that's pretty much equivalent to wrapping the entire function in a try-catch block, which makes it less useful.
Why does Rust get articles about, e.g., an operating system written in Rust, while all the Go posts are about how to tie your shoelaces? Somebody explain, please.
Go is for writing API servers that would otherwise be written in Python but must be fast.
Rust is for the lowest-level code that isn't assembly, with all the performance tricks, but that should be secure.
Rust is expensive to develop - if you can afford to use a GC, you should use a GC.
So people want to rewrite all the world's C and C++ in Rust, for security. But people are less passionate about rewriting all the world's Python in Go for performance.
> Rust is expensive to develop - if you can afford to use a GC, you should use a GC
This has not been the case for many people using Rust, including myself. Once you learn the language, there is no "cost" to use it. In fact, the compiler simplifies much of the mental overhead when dealing with concurrency, sharing data, mutability, etc... Sure the compiler has some strict rules, but it's not expensive to develop in by any means.
I would also say Rust is in a sense higher-level than Go in terms of expressiveness and abstractions.
How so? From my experience, I was able to make massive changes and the compiler would guide me through to make sure I didn't miss anything. If it compiles, it'll generally work minus any logical errors/changes.
There are definitely refactoring woes to be had when taking an existing struct and changing it such that it now stores a reference in one of its fields, requiring lifetime parameters to trickle down through your program. The compiler guides you in getting it all right, but it's something that I'd rather see an IDE do automatically in the future.
I used to feel the same way about Rust. But the more I started using it, the more I realised that it's fairly capable and I could use it much more than I had originally planned. While it's not as high level as, say, F#, some features are higher level than Go. For instance, error handling in Rust is nicer than Go. Rust's cost-free functional style is also pretty neat.
> Rust is expensive to develop - if you can afford to use a GC, you should use a GC.
I like GCs, but I wouldn't make such a blanket statement. Manual memory management as implemented in Rust also eliminates data races at compile time, which are in fact the last item listed in the original article.
Because Go is being used by everyone from rocket scientists implementing consensus algorithms for managing distributed systems sanely to numpties like me hacking out utilities to migrate blogs from one engine to another. Those of us at the latter end of the spectrum migrate from perl to python to ruby to go to whatever is the most cromulent systems-glue language this year, and we like those articles.
Rust is almost entirely at the rocket scientist end of the market.
I'm not sure where this conception of Rust comes from. I'm just as much of a numpty as you are, slinging Python and Javascript and really, really bad PHP by day, and yet Rust is what I do for fun. It's the systems language that makes systems programming accessible to those of us who don't have a lifetime to devote to memorizing all the ways a C program can explode in your face.
They're different languages with different design goals, different corporate backers, and different communities. Why should they attract the same projects or developers?
Agreed - getting started was really hard. I think, as my sibling post notes, that's a function of the existing community - unlike many languages where adoption trends heavily towards the wide end of the pyramid and moving upward over time, Rust seems to be starting at the tip and broadening (more slowly than one might like) on its way down.
Or maybe that's the result of a scarcity of those kinds of resources. Hard to say.
I think it's also because people undervalue these kinds of posts. Many people think that the idea must be novel to be worth a post, but that's not true! There's a lot of value in posts that cover the basics. Sometimes, seeing a concept explained in a different way is all that's needed to get something to click.
Rust is a pretty tricky language. So there's the fact that you need to learn more before you can write an accurate article, and if you learn too much, you start losing the beginner's perspective. And if you don't learn enough, you don't have a good enough picture to teach. Thus the number of people in a position to write high-quality introductory articles is deflated.
For the same reason Stack Overflow's C++ tag has the best questions.
If you start out on Rust, the first thing you hit is the borrow checker. There are a billion articles on that, but the questions there seem less novice because the problem is harder.
And then when you have gotten to the point you need best practices in error handling, Rust greets you with this:
To me it's largely about the communities that each have attracted. Golang has a lot of overlap with the Node world and the more trend-following parts of Ruby and Python, IME, and my experience with all of those suggests to me that that's selecting for a very product-focused community with, frankly, a generally lower need for high-end approaches and tactics. But Golang is popular, and it is thought to be easily accessible, and so the easily accessing need more stuff to read and talk over. So you will see many more introductory and "obvious" posts, and it'll be reflected in the ones that jump out of the Golang pond into more general environments. It's not exclusively that, of course - there are plenty of console emulators, etc., written in Golang - but they're not what Product People are reading because they don't know how to handle errors in their new pet language of choice yet.
n.b.: While I have seen very, very competent programmers writing good software in Golang despite the language's considerable efforts to thwart them, and doing so in a product-focused environment, my intuition, as an outsider (which I disclaim because, as somebody whose job is to build systems that do not fail, the worse-is-better ethos of the Golang community scares the hell out of me), is that there's not as much deeper discussion to be had when you are of a generally product-focused mode; many things are unspoken and many others are very specific to your tiny slice of the world--is there as much to say, publicly, about that stuff? And are the other loud people in that community even listening when you do?
Rust seems instead to attract immigrants from what I consider better programming environments; not necessarily better business environments (almost certainly not, in the cases of people I personally know) but environments that are more welcoming and enabling of building hard stuff. The Rust people I know are migrants from heavy-hitter C++ backgrounds, functional programming backgrounds (the biggest fan I know personally previously lived and breathed OCaml), and a few from the deep end of the Ruby community. Thus to me it follows that the interesting stuff that leaks out into the general public is going to be more esoteric to many outside of it (and maybe, for you personally but certainly for me, more interesting).
It also probably helps that the Rust APIs are really smartly designed and it seems--though I am a Rust novice, and this is a personal take--much harder to shoot yourself in the foot. There may just be less need for "these things work but will secretly ruin your day" in general.
A single language that can demonstrate this continuum pretty well is actually Java, I think--you see tons upon tons of "here's why you don't use == in Java"-level posts, but you also see stuff like Aleksey Shipilev absolutely throwing down useful and actionable stuff about the JMM. Java as a community is fragmentary and huge, and there's more space for, and more need for, discussion at both ends of things.
Found myself agreeing with all of this. I very much second the decision to not use a framework and instead use the core libraries and a standalone multiplexer. Another huge benefit of this is that you can write some glue code and test your endpoints without using the http mocking interfaces, which are absolutely terrible for testing.
+1 here. Gin is a pretty reasonable web framework that sticks to the core Go patterns and practices, but provides some good built-in helpers. It includes httprouter (a high-performance route handler and dispatcher), and the implementation of middlewares is nice.
We've tried Revel, and I completely agree with the post here. We could use net/http directly, but Gin is working super well for us right now.
It is a spot-on observation about the rather bad rules in Go around := and scoping, which hit me as well. For example, the following compiles without warnings and even runs the first invocation of test successfully, https://play.golang.org/p/tYQSNjudGT :
    package main

    func test(n int) {
        for {
            n := n - 1 // := declares a NEW n that shadows the parameter; the outer n never changes
            if n == 0 {
                break
            }
        }
    }
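For contrast, a sketch of what was presumably intended, using plain assignment instead of a new declaration:

    func test(n int) {
        for {
            n = n - 1 // plain assignment updates the parameter, so the loop terminates for n >= 1
            if n == 0 {
                break
            }
        }
    }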
The 'go vet' command has a -shadow flag to check for this. And as the author points out, editors can point it out as well.
It definitely isn't ideal, but it can be dealt with. I believe Rob Pike had called out that the number of ways you can do variable assignment/declaration is something he disliked, but it's impossible to change in Go 1 at this point.
The issue here is not shadowing in general but shadowing within a single statement, as in n := n + 2. A sane rule would not allow the right-hand-side expression to refer to the name being declared.
"Don't use a Web framework" (#1), "use panics wisely" (#2), "be careful when reading from request.body more than once" (#3), "be careful of hidden allocations from pointers-to-slices" (#5), "be careful with the surprising behavior of naked-return-implies-return-0 feature of Go" (#6), and "be careful with the interactions between mutation and shadowing" (#7) are all Go-specific. Among popular languages, issues #5, #6, and #7 apply only to Go.
Otherwise you're right - but all languages have sharp edges.
I'm using Go on my current project and I've gotten a lot further a lot more quickly and with better performance and reliability than I would have using Django, Rails, or Node.
Because I'm not in a hurry I thought it would be fun to write a simple custom server/router/middleware bundle around net/http without using any 3rd party packages. From a cold start it all turned out much easier than expected, and so far the whole thing is under 500 loc.
Being careful around a few sharp edges seems like a small price to pay.
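A minimal sketch of that kind of net/http-only setup; the handler and middleware names here are made up for illustration:

    package main

    import (
        "log"
        "net/http"
    )

    // loggingMiddleware wraps any handler and logs each request before
    // passing it along.
    func loggingMiddleware(next http.Handler) http.Handler {
        return http.HandlerFunc(func(w http.ResponseWriter, r *http.Request) {
            log.Printf("%s %s", r.Method, r.URL.Path)
            next.ServeHTTP(w, r)
        })
    }

    func handleHello(w http.ResponseWriter, r *http.Request) {
        w.Write([]byte("hello\n"))
    }

    func main() {
        mux := http.NewServeMux()
        mux.HandleFunc("/hello", handleHello)
        log.Fatal(http.ListenAndServe(":8080", loggingMiddleware(mux)))
    }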
1) Be judicious regarding your choice of tools
2) Be careful with signalling errors
3) Don't read twice from a stream and expect the same data
4) Don't write SQL?
5) Understand pointers
6) Make it easy for readers of your code
7) Understand Scoping
8) Don't use objects concurrently unless they were designed for it
9) Keep track of which versions of dependencies you use
In Go, "don't use a web framework" translates to something like "net/http is already a 'web framework' much like web.py or other 'minimalistic' frameworks". Unlike, for instance, Python, where you "have to" use a web framework because the standard library doesn't ship with much (http.server is too basic on its own [1]), the base net/http is enough for many non-trivial usages. (Of course Python has a few dozen good 3rd-party choices.)
I don't think the post here is actually asserting not to use a web framework; just that you should not use a specific framework, named Revel, that aims to be Rails-esque. It brings up an interesting point for me about what is really possible (or desirable) in a framework written in a statically-typed, compiled language like Go compared to a dynamically-typed, interpreted language like Ruby.
I was intrigued by the part about not being able to read the response body more than once. Can anyone more experienced explain why this decision was made in the core libraries? And if there is a good reason to not read the body more than once, is the author making a mistake by trying to get around it?
Go's core library specifies several interfaces for reading, writing, and closing streams. Here's Reader, for instance: https://golang.org/pkg/io/#Reader Those interfaces do not promise seeking, so they apply to as many things as possible. (See https://golang.org/pkg/io/#Seeker .)
The HTTP body is provided as an io.Reader which actually backs to the network stream, and since a TCP stream can't be rewound, it's an error to assume you can seek it. This means that native Go HTTP handlers can handle a multi-gigabyte upload or download without consuming all that RAM at once (assuming there is some way to stream), but you lose seeking. You can easily get re-readable access by using io.ReadFull to fully consume the reader into a byte buffer, but then you pay the price of consuming all the RAM, of course. (https://golang.org/pkg/io/#ReadFull ) Whether or not it's a mistake depends on your situation. Fortunately, in the HTTP context, you have a Content-Length you can examine to make decisions if you need to; the generalized io.Reader doesn't have that either.
This all comes from the nature of HTTP itself. I tend to consider it a basic requirement for any web development environment that there be a way to correctly deal with an HTTP upload as a stream, and that the environment not "helpfully" unconditionally load the entire stream into memory for me.
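If you genuinely need two passes over a request body, one common approach (not necessarily what the article's author did) is to consume the stream once and replace Body with a reader over the buffered bytes:

    package main

    import (
        "bytes"
        "io"
        "net/http"
    )

    func handler(w http.ResponseWriter, r *http.Request) {
        // Consume the network stream exactly once.
        body, err := io.ReadAll(r.Body)
        if err != nil {
            http.Error(w, "bad request", http.StatusBadRequest)
            return
        }
        r.Body.Close()

        // Replace Body so later code can read it again without touching the network.
        r.Body = io.NopCloser(bytes.NewReader(body))

        // First pass over the buffered bytes...
        _ = bytes.Contains(body, []byte("hello"))
        // ...and a second full read through r.Body.
        secondPass, _ := io.ReadAll(r.Body)
        _ = secondPass
    }

    func main() {
        http.HandleFunc("/echo", handler)
        http.ListenAndServe(":8080", nil)
    }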
If you want efficiency, I think it is the only way to do it.
Responses can be huge. Why would the library commit to storing them at all, and determine the data type to store them in?
Yes, you could make an API that allows the library user to control that, but that would be needlessly complex compared to having a single call "give me the response stream", and letting the caller of that decide how much of the data to read, and what to do with it.
Can someone follow up with more details on vendoring for those not using Go currently? What is the best practice for locking down versions of libraries? I am comparing this to Gemfile.lock and shrinkwrap.json. Ty
Actually, for error handling, panic is a lot more like exceptions than exit. Much like Java, panic acts like throw, and defer/recover like try/catch. And you can use it like that in the depths of your library, recovering from panics in your exported functions.
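A sketch of that pattern with made-up names: panic deep inside the package, recover at the exported boundary, and hand the caller an ordinary error:

    package parser

    import "fmt"

    // parseDigit panics instead of threading an error through every caller.
    func parseDigit(b byte) int {
        if b < '0' || b > '9' {
            panic(fmt.Sprintf("not a digit: %q", b))
        }
        return int(b - '0')
    }

    // Parse is the exported entry point: it recovers from internal panics
    // and converts them into a normal error return.
    func Parse(s string) (n int, err error) {
        defer func() {
            if r := recover(); r != nil {
                err = fmt.Errorf("parse: %v", r)
            }
        }()
        for i := 0; i < len(s); i++ {
            n = n*10 + parseDigit(s[i])
        }
        return n, nil
    }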
Obviously you need to spend more time playing Kerbal Space Program. Rockets are their own justification.
On a more serious note, I think that's just human nature - if you're invested in something - a technology, a stack, a business model, a car - you value that investment more than its worth, because it is yours. I think there's also an element of "I put in the effort to learn this, so damnit I'm going to use it".
Respectively: Go (which is criticized by some for its prioritization of language simplicity over features), more complex languages, and the task that the program needs to perform.