I've never programmed in Go before. Coming from a C# background. Can someone tell me, how does Go feel? Is it pleasant to work in, or is it tricky like C?
It's very pleasant to work in. It has its quirks and language-design rigidity and can get tricky, though not quite "like C" unless you're using cgo directly.
It's a very simple language. Coming from C#, there is no inheritance, only composition. Types are super strict too, which can be a pain in the ass (especially because you can't add things like an int32 and an int64 together). It's nice once you're used to it, though.
> especially because you can't add things like an int32 and an int64 together
Which in some ways makes a lot of sense, in particular if you want to avoid "undefined behaviour" or just behaviour which is subtle.
For example: If I add a uint32 and a uint64, what is the format of the result? What about uint64 and int32? Which of those is "larger?"
And that's just the least bad example; with two float formats you can lose precision in strange ways.
So I think Go forcing the programmer to be specific is painful in the short term but helpful in the medium to long term. Bugs should go down and maintainability should go up.
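For anyone curious, here's a minimal, contrived sketch of what that explicitness looks like in Go; the widening conversion has to be spelled out by hand:

```go
package main

import "fmt"

func main() {
	var a int32 = 1 << 20
	var b int64 = 1 << 40

	// c := a + b      // compile error: mismatched types int32 and int64
	c := int64(a) + b // the widening conversion must be written explicitly
	fmt.Println(c)
}
```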
I've been dealing with this same issue in Rust, and I think there is definitely a middle ground to be had.
In the first example, it's trivial to add uint32 and uint64, and store the result as a uint64. No information is lost at all in that operation. This generalizes to any combination of integral types, where [u]int{x} + [u]int{y} = [u]int{max(x, y)}. It's inexplicable to me why a language wouldn't allow these loss-less operations to be implicit.
The other two operations don't have a clear answer, so it makes perfect sense to require an explicit cast in those cases.
I guess that's exactly what I don't understand. If I add a 32-bit integer and a 64-bit integer, what other possible result could I be expecting besides a 64 bit integer?
> If I add a 32-bit integer and a 64-bit integer, what other possible result could I be expecting besides a 64 bit integer?
A 128-bit integer (if you are adding a 32-bit integer and a 64-bit integer, the smallest power-of-2-bits representation guaranteed not to have an overflow is 128 bits, so it's the safest result. Though, I'd agree, not the most likely thing most programmers would intend.)
Good point. As painfully explicit as Rust is at times, I'm actually slightly surprised they didn't go that route. (At least for integer sizes less than 32 bits.)
That's a problem with addition, period, not type promotion. Adding a 64-bit and 32-bit integer and promoting the 32-bit integer to 64-bit doesn't produce any problems that you won't have adding two 32-bit integers together or two 64-bit integers.
Hi. I know C# in an amateur fashion and Go professionally. I think a lot of people here will give you a very positive review of Go. Since they've got that covered, let me tell you what you're giving up from C# or F# and why I won't use Go. Maybe you can form a balanced conclusion from the aggregate, because I find Go to be very polarizing.
Concurrency
Async: Go uses channels for all concurrency. Period. This mechanism is sort of like half of the Erlang Actor philosophy, but even more lightweight. Channels and goroutines are constantly coming into and out of existence and you feel no real shame doing it. Even simple tasks often need them because Go's I/O libraries tend to be async-first.
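To give a rough flavour of what that looks like, here's a small hypothetical sketch: a few blocking HTTP requests fanned out onto goroutines, with results collected over a channel (fetchStatus is a made-up helper, not anything from the standard library):

```go
package main

import (
	"fmt"
	"net/http"
)

// fetchStatus is an illustrative helper: it does an ordinary blocking GET
// and sends the HTTP status code (or 0 on error) down the channel.
func fetchStatus(url string, results chan<- int) {
	resp, err := http.Get(url)
	if err != nil {
		results <- 0
		return
	}
	resp.Body.Close()
	results <- resp.StatusCode
}

func main() {
	urls := []string{"https://example.com", "https://example.org"}
	results := make(chan int)
	for _, u := range urls {
		go fetchStatus(u, results) // one lightweight goroutine per request
	}
	for range urls {
		fmt.Println(<-results)
	}
}
```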
Compared to C# and F#'s async, I think you will find this to be very different, but not particularly better in terms of performance. F# offers a very similar abstraction with very similar performance characteristics, and C# async methods use a very similar mechanism under the covers to provide closure over computation chains in what amounts to a kind of effectful state monad. Don't sell your home team short on this front; MS's work there is cutting edge.
Error Handling
In daily programming, Go's weakest story is error handling. While many people rightly criticize try-catch error handling as a primitive and error-prone mechanism, the Go solution is to say, "We all hate C, but actually C error handling was fine so long as you have multiple return values at the syntactic level." So you often return a success value (which is nullable) and an error value (which is nullable) and then ask the caller to check on the null.
This is basically error code checking. People will say it isn't, but really it is. It really is. And unlike some other languages, Go provides no facilities for "chaining" these operations. So you end up writing `if err != nil { ... }` over and over.
In the case of chained I/O operations, this is really tiresome. It also often leads to repeated code or somewhat convoluted dispatch logic.
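A hypothetical sketch of what that repetition looks like in practice (Config, parseConfig and validate are made-up names; the shape of the checks is the point):

```go
package main

import (
	"fmt"
	"io/ioutil"
)

// Config, parseConfig and validate are hypothetical stand-ins; the point
// is that every step ends with the same three-line error check.
type Config struct{ Addr string }

func parseConfig(data []byte) (*Config, error) { return &Config{Addr: string(data)}, nil }

func validate(cfg *Config) error { return nil }

func readConfig(path string) (*Config, error) {
	data, err := ioutil.ReadFile(path)
	if err != nil {
		return nil, err
	}
	cfg, err := parseConfig(data)
	if err != nil {
		return nil, err
	}
	if err := validate(cfg); err != nil {
		return nil, err
	}
	return cfg, nil
}

func main() {
	cfg, err := readConfig("app.conf")
	if err != nil {
		fmt.Println("error:", err)
		return
	}
	fmt.Println(cfg.Addr)
}
```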
Go error values also suffer from Go's other issue: it doesn't have an extensible type system. Instead it has "interfaces". In practice, what this means is that it's very difficult to expose new error types or give your clients good ways to dispatch on them. While this means error handling code is lightweight, it also often means it has to do silly things like regex an error message string to find out what a specific failure was.
Some people value that approach. If you're writing an executable it's actually good, because it's probably better to fail fast in a recognizable way. But if you're writing a library and offering OTHER people that facility, you can't support them well (and you will not be well supported by Go libraries).
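For what it's worth, here's a small sketch contrasting the string-matching approach with asserting on a concrete error type, assuming the library actually exports one (NotFoundError and lookup are hypothetical names):

```go
package main

import (
	"fmt"
	"strings"
)

// NotFoundError is a hypothetical concrete error type a library could export.
type NotFoundError struct{ Key string }

func (e *NotFoundError) Error() string { return "not found: " + e.Key }

func lookup(key string) error { return &NotFoundError{Key: key} }

func main() {
	err := lookup("user:42")

	// Fragile: dispatch on the text of the error message.
	if err != nil && strings.Contains(err.Error(), "not found") {
		fmt.Println("string match: missing")
	}

	// Better, but only possible if the library exports a concrete type to assert on.
	if nf, ok := err.(*NotFoundError); ok {
		fmt.Println("type assertion: missing key", nf.Key)
	}
}
```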
Build Tooling
Go's general toolchain is solid and its compiler is wicked fast. But its build story is still really, really bad. Go originally had this mountain of filesystem structure around every project that was tricky and error-prone to share between projects.
With recent releases, they've moved to something that resembles Ruby on Rails's "vendor" approach, where a subdirectory contains a whole checkout of each dependency's code. This is actually a pretty major improvement (in part because it works better with GitHub, which is Go's primary distribution mechanism). But even with this change, managing a codebase over time is error-prone. Unlike Maven and NuGet, there is no enforced concept of version releases (nor discipline around snapshotting) in GitHub. So if the maintainer of the library has poor discipline (or if code is poorly tagged during a maintainer change), it can be difficult to get the exact version of a library you want with the exact bugfixes you need.
Google's response to this is, "We don't have this problem because everyone at Google always keeps /master clean and we basically never make breaking API changes." But if you talk to them internally, the reality is more what you'd expect. Sometimes a lot of time is lost fixing that.
Everything else I wanted to say (aka "Conclusions")
On balance, Go is a good environment for making executables. But a lot of why people like it stems from negative experiences they've had with scripting languages and their poor packaging story, or with Java and its problems keeping up with other managed-language runtimes (and oh god, its packaging process is just silly and antiquated).
You've already got cutting-edge concurrency, static builds, a lightweight cross-platform runtime with CoreCLR, and pretty fast cross-compilation. For you, what you might find refreshing is how very clean and unified the Go language is. It is many things, but one thing it excels at is appealing to the pythonic there-is-one-way-get-in-line crowd. It is small, purpose-built, and singularly uncomplicated. C# has a "history" and "legacy feature support": things like delegates that have fallen out of fashion but are still lurking in the codebase or backing other, more modern features.
If you want to try the concurrency model but don't know if you wanna commit to a whole new runtime, do try F# if you haven't yet. You can get great performance and the channel based concurrency out of it, and I think most people would agree its error handling is light years ahead of what Go offers.
If you'd like to try a totally new language with really cool concurrency semantics on a purpose-built runtime, can I recommend Nim-lang.org? Nim is amazing. It's got one of the most ambitiously cool ideas I've seen for micro-optimized concurrency code since reading Marlow's paper on Haxl for Facebook.
I agree with all of the above. If Go is going to improve the quality of your programming experience, then this is a great reason to use it, but if it is going to make things harder for you then stick with what you know.
If you are coming from C#, then you probably wouldn't enjoy checking the results of each and every function call. You know how and when to use exceptions and they will save you many lines of code over using Go.
If you have used generics then you will probably feel like you are back to .NET 1.0 when using Go. You will end up generating code for types or copying and pasting class definitions.
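If you haven't seen what that looks like, here's a contrived sketch of the copy-and-paste pattern you end up with in Go without generics:

```go
package main

import "fmt"

// Without generics, the same logic gets written once per type
// (or produced by a code-generation tool).
func MinInt(a, b int) int {
	if a < b {
		return a
	}
	return b
}

func MinFloat64(a, b float64) float64 {
	if a < b {
		return a
	}
	return b
}

func main() {
	fmt.Println(MinInt(3, 7), MinFloat64(2.5, 1.5))
}
```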
That said, I am all for learning a new language that will teach me a new paradigm. In that case, the loss of productivity is worth it because it will add to my toolset. If the paradigm is the same, but the productivity is lower, well then what is the point? Performance? Perhaps.
BTW, I completely agree that Nim is one very cool language. It hasn't introduced new paradigms for me (yet), but it improves my productivity over C++ when working on a small OpenGL game.
Go uses goroutines for concurrency, not channels. Goroutines are like threads, but they are managed by the Go runtime instead of the OS, and they use less system resources (a new goroutine uses just a few kilobytes, and its stack is resized on demand if necessary).
Go uses channels as a synchronisation mechanism. But other synchronisation mechanisms can be used (e.g. mutexes or atomic operations).
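A quick sketch of the mutex style, for contrast with channels: a shared counter guarded by sync.Mutex, with a WaitGroup to wait for the goroutines to finish.

```go
package main

import (
	"fmt"
	"sync"
)

func main() {
	var (
		mu      sync.Mutex
		counter int
		wg      sync.WaitGroup
	)
	for i := 0; i < 10; i++ {
		wg.Add(1)
		go func() {
			defer wg.Done()
			mu.Lock() // protect the shared counter instead of funnelling updates through a channel
			counter++
			mu.Unlock()
		}()
	}
	wg.Wait()
	fmt.Println(counter) // 10
}
```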
> Even simple tasks often need them because Go's I/O libraries tend to be async-first.
That's quite the contrary. Most libraries are synchronous. You don't need callbacks (like in Node.js) or async/await (like in Python 3). This is made possible by goroutines and that's a big advantage of Go.
I very much like the async/await mechanism offered by C# or Python because it makes side-effects explicit. But its drawback is its "virality": if you introduce an async operation in a function, you have to "propagate" the change by converting all callers to async/await.
About the error handling, I'm still on the fence. Your comment summarizes very well the drawbacks of Go on this topic. But what we gain in terms of simplicity and making error handling very explicit is probably worth it. I think that error handling is still a subject of tension in every language and the problem is not fully solved (even in Rust, Haskell or Erlang). Time will tell.
> But its build story is still really, really bad.
I think you meant "packaging story" instead of "build story". It's true that at this moment, the Go project has not rallied around a single and universal packaging tool (like npm in Node.js). But in practice, if you vendor your dependencies, I think it's easy to manage a large codebase without errors.
> do try F# if you haven't yet. [...] its error handling is light years ahead of what Go offers
> I very much like the async/await mechanism offered by C# or Python because it makes side-effects explicit.
It's also potentially faster and more efficient than goroutines, because it packs the state to be shared on context switches into what is typically a very tight structure instead of saving the entire stack.
> I think that error handling is still a subject of tension in every language and the problem is not fully solved (even in Rust, Haskell or Erlang). Time will tell.
Can you elaborate as to what the problems you see with error handling in those languages are?
> It's also potentially faster and more efficient than goroutines, because it packs the state to be shared on context switches into what is typically a very tight structure instead of saving the entire stack.
I'm sure you already know this, but for anyone new to design in concurrency backends, not only does async/await (CPS) have the potential to really trounce goroutine-style concurrency in modern systems, but if you have a really smart compiler and some OS support, it can really fly. See Joe Duffy's blog post about asynchrony in Midori for more info[1].
Well, it's not like the Either monad (Haskell) or Erlang's {error, Reason} pattern is much better. Go's just bad at sharing details about new errors, in that interfaces are a pretty poor tool for that sort of work. At least Either has an Applicative and Monadic form, which is really nice for decoupling flow from handling.
Error handling is tricky because it requires richness both in how you talk about types and how you handle control flow.
But using GADTs, and to a lesser extent F#'s discriminated unions, for errors does have nice static checking properties. That's definitely an improvement over Go's "I just regex'd the string, please send help" approach.
> It's also potentially faster and more efficient than goroutines, because it packs the state to be shared on context switches into what is typically a very tight structure instead of saving the entire stack.
A context switch to another goroutine doesn't need to "save the entire stack". The stack is already there. The runtime just needs to keep a pointer to the stack of the suspended goroutine. And the stack is usually just a few kilobytes.
Moreover, keeping the stack is useful for debugging because you can print a nice stack trace.
You have to keep the stack around in memory, as opposed to having a fixed structure. And fixed structures really pull away when you think about allocation: you can use a segregated fit/free list structure, whereas you can't with a variable sized thing like a stack. If the stack starts small and grows, you're paying the costs of copying and reallocation whenever it does: another loss. Allocation in a segregated fit scheme is an order of magnitude faster than traditional malloc or allocation in a nursery and tenuring.
For these reasons and others, nginx could never be as fast as it is with a goroutine model.
I agree that a fixed structure is more efficient than a growable stack, especially in terms of memory allocation. But I don't understand how you apply this to an evented server. Asynchronous callbacks are often associated with closures. But closures can vary in size, which makes it hard to store them in a fixed-size structure. What am I missing?
I haven't read nginx source code, but I guess they are able to store continuations in a fixed structure because they don't use closures and they know in advance what information must be kept. I don't see how this approach can be used as a general purpose concurrency mechanism for a programming language. But I'd like to learn something :-)
This is really interesting, but I feel like Erlang is a counter example. Not sure if that's a good comparison, so I decided to ask and risk sounding stupid.
Rust error handling can be concise thanks to the try! macro, but macros bring their own problems (like making more difficult to write refactoring and static analysis tools).
Haskell error handling can be concise thanks to monads, but they need higher kinded types which bring their own share of complexity.
The conversation on "RFC: Stabilize catch_panic", found on Rust's issue tracker, illustrates some unsettled questions I had in mind (https://github.com/rust-lang/rfcs/pull/1236).
For example, kentonv wrote:
All code can fail, because all code can have bugs. Obviously, we don't want every single function everywhere to return Result<T, E> as a way to signal arbitrary "I had a bug" failures. This is what panic is for.
graydon wrote:
Currently you've adopted a somewhat-clunky error-type with manual (macro-assisted) propagation. Some rust code uses that correctly; but much I see in the wild simply calls unwrap() and accepts that a failure there is fatal.
ArtemGr wrote:
The only way to maintain both the safety and the no-panic invariants is to remove the panics from the language whatsoever. Explicit errors on bounds check. No assertions (you should make the assertion errors a part of the function interface instead, e.g. Result). Out of memory errors returned explicitly from every heap and stack allocation.
If you'd like to keep the assertions, the smooth allocations and other goodies then you either need a way to catch the panics or end up making programs that are less reliable than C. No modern language crashes the entire program on an out-of-memory or an integer overflow, but Rust will.
The libraries we have, they do panic, it's a matter of fact. Within the practical constraints and without some way of catching panics you can't make a reliable program that uses external crates freely.
BurntSushi wrote:
If something like catch_panic is not stabilized, what alternative would you propose? (Option<T> and Result<T, E> are insufficient.)
On the same topic, there is this post about introducing a `?` operator or a `do` notation (inspired by Haskell) to streamline error handling:
But I'm sure you're quite aware of these discussions :-)
My general feeling is that, whatever programming language you consider (Python, JavaScript/Node, Go, Rust, Haskell, Erlang, etc.), the right way to handle errors is still an open question.
> Rust error handling can be concise thanks to the try! macro, but macros bring their own problems (like making more difficult to write refactoring and static analysis tools).
No, it's not more difficult to write static analysis tools. You use libsyntax as a library. Refactoring tools, maybe, but it's a lot better than refactoring with code generation :)
> For example, kentonv wrote:
How does that describe an unsolved problem? It illustrates that Rust's bifurcation of errors into Result and panics works.
> graydon wrote:
I think it's a relatively minor issue that would be solved with "?" or something like what Swift does. Switching to Go's system would make it worse; Graydon's criticism applies even more so to Go than to Rust.
> ArtemGr wrote:
Catching panics is important, yes. No argument there. It doesn't change the overall structure of Rust's error handling story, though.
> No, it's not more difficult to write static analysis tools.
I agree it's solvable, but I'd argue it's a bit more difficult to write static analysis tools when macros are involved. But maybe I'm missing something.
Here is an example:
The subsystem types in the sdl2 crate starting in 0.8.0 are generated by a macro, so racer has issues evaluating the type of video(). A workaround is to explicitly declare the type of renderer. (source: https://github.com/phildawes/racer/issues/337)
But my real concern is how to write refactoring tools when macros are involved. It seems a lot harder than writing static analysis tools, because the refactoring tool wants to examine the source code with macros expanded, but has to modify the source code with macros unexpanded. In other words, the tool has to map from source with expanded macros back to source with unexpanded macros. How do you solve that?
As a sidenote, I agree that refactoring generated code doesn't sound fun either :-)
> How does that describe an unsolved problem? It illustrates that Rust's bifurcation of errors into Result and panics works.
I quoted kentonv here because it shows that Rust and Go have converged towards structurally similar solutions to error handling, by using two complementary mechanisms: explicit error checking on one hand (using Result<T, E> in Rust and multiple return values in Go) and panic/recover on the other.
The big difference is that Rust has sum types (instead of Go's multiple return values) and macros (try! instead of repeating `if err != nil { return err }` as in Go).
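To make the Go side of that comparison concrete, here's a rough sketch of panic/recover being used to turn a "this is a bug" failure back into an ordinary error at an API boundary (mustParse and safeParse are made-up names):

```go
package main

import "fmt"

// mustParse is a hypothetical function that panics on programmer error,
// roughly playing the role of panic! in Rust for "I had a bug" failures.
func mustParse(s string) int {
	if s == "" {
		panic("mustParse: empty input")
	}
	return len(s)
}

// safeParse converts a panic back into an ordinary error at an API boundary,
// loosely analogous to what catch_panic was proposed to do in Rust.
func safeParse(s string) (n int, err error) {
	defer func() {
		if r := recover(); r != nil {
			err = fmt.Errorf("recovered: %v", r)
		}
	}()
	return mustParse(s), nil
}

func main() {
	fmt.Println(safeParse("hello"))
	fmt.Println(safeParse(""))
}
```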
> Catching panics is important, yes. No argument there. It doesn't change the overall structure of Rust's error handling story, though.
> a bit more difficult to write static analysis tools when macros are involved. But maybe I'm missing something.
Most Rust static analysis tools hook into the compiler and get this for free.
Racer has that problem because racer implements a rudimentary minicompiler that's much faster than the Rust compiler. When you want autocompletion, it needs to be fast. Running a full type check is a non-starter here. So you implement your own "type searcher" which is able to perform some level of inference and search for items. Being deliberately incomplete, it doesn't handle some cases; it looks like macros are one of them. Since racer uses syntex, handling macros would not be much harder (just run the macro visitor first; three lines of code!), but I assume it doesn't for performance reasons or something.
(disclaimer: I only have a rough idea of racer's architecture; ICBW)
> But my real concern is how to write refactoring tools when macros are involved
This is a problem with refactoring whether or not you're using tools. And like you mention, there's exactly the same problem with generated code. If anything, Rust macros being hygienic is nicer here, since you can trace back where the generated code comes from and _attempt_ to refactor the source.
And macros like try! do not affect refactoring tools at all, being self-contained. It's user-defined macros that mess things up.
> Most Rust static analysis tools hook into the compiler and get this for free. Racer has that problem because racer implements a rudimentary minicompiler that's much faster than Rust.
I didn't know that. Understood. Thank you for the explanation.
> And macros like try! do not affect refactoring tools at all, being self-contained. It's user-defined macros that mess things up.
What do you mean by "self-contained"? How is it different from user-defined macros?
It doesn't introduce any new identifiers or anything. As far as refactoring is concerned, it's just another block with nothing interesting inside it. This is sort of highlighted by the fact that we can and do plan to add syntax sugar for try!() -- if it were a language feature it wouldn't cause refactoring issues, so why would it here?
User defined macros (there may be some exported library macros that do this too, but try is not one of them) may define functions or implement traits or something, which might need to be modified by your refactor, which might need fiddly modification of the macro internals.
(Also, note that due to Rust's macro hygiene, all variables defined within a macro are inaccessible in the call region, unless the identifier was passed in at the call region. This helps too)
> like making more difficult to write refactoring and static analysis tools
As one of the people behind a lot of the out-of-tree static analysis in Rust (clippy, tenacious, Servo's lints) I'd disagree. Performing static analysis across macro boundaries is easy.
The only problem Clippy has with macros is that the UX of the linting tool is muddled up at times. Clippy checks for many style issues, but sometimes the style issue is internal to the macro.
For example, if Clippy has a lint that checks for `let foo = [expression that evaluates to ()]`, it's quite possible that due to the generic nature of macros, a particular macro invocation will contain a let statement that assigns to a unit value. Now, this isn't bad, since the style violation is inside the macro, and not something the user should worry about. So we do some checking to ensure that the user is indeed responsible for the macro before emitting the lint. Note that this isn't much work either, the only hard part is remembering to insert this check on new lints if it's relevant.
But anyway, the UX of clippy is orthogonal to the static analyses provided.
(I also don't recall us ever having issues with `try!`)
> The conversation on the "RFC: Stabilize catch_panic",
FWIW most of the points are fixed with the catch and ? sugar that you mention later.
> My general feeling is that, whatever programming language you consider (Python, JavaScript/Node, Go, Rust, Haskell, Erlang, etc.), the right way to handle errors is still an open question.
Sure, however this isn't a very useful statement when comparing languages. The OP was making a relative statement, compared to C#. Saying that "all languages have problems with error handling" doesn't add much, since the question being discussed was whether Go's error handling is nicer than C#'s.
> Go uses goroutines for concurrency, not channels.
This is more of a practical piece of advice than a strictly correct one. In practice, Go concurrency means typing "go" but thinking in terms of channel groups and one-off channels. I just don't find that distinction very productive when someone asks how it "feels" or what it is "like" to program in Go as opposed to asking for an explanation of Go's concurrency model.
> That's quite the contrary. Most libraries are synchronous. You don't need callbacks (like in Node.js) or async/await (like in Python 3). This is made possible by goroutines and that's a big advantage of Go.
Within the context of a single goroutine, you're right. I didn't express this very well. Goroutines are often how you process network I/O. My bad for not explaining this well, I was in a bit of a hurry when I wrote that part and the whole thing got too long so my proofreading was a bit sloppy. Thanks.
> I think that error handling is still a subject of tension in every language and the problem is not fully solved (even in Rust, Haskell or Erlang). Time will tell.
I really think Common Lisp had a fantastic solution here and I miss the evolution over try-catch they spec'd. I wish more people understood it. It was such an excellent idea.
> I think you meant "packaging story" instead of "build story". It's true that at this moment, the Go project has not rallied around a single and universal packaging tool (like npm in Node.js).
For Go, is there a difference? You either have static builds or you don't, and artifacts don't exist outside of this last time I checked. So any packaging solution is de-facto part of the build story and vice versa. This is in sharp contrast to Maven or others where build artifacts can exist entirely outside of the expectation of use in the build chain (it's possible to launch a jar directly).
The actual compilation of the final build is fast and the cross compiler is definitely nice to have while we're all forced into a weird world where there is no good laptop OS for developers who often end up in open plan buildings where we're expected to be migratory (or simply working without an office at all). I wouldn't dream of underselling that. Of course, I'd prefer a good interactive development pattern.
There's a great gif of Daffy Duck dressed as a highwayman constantly swinging down from a rope into a tree over and over. My friends associate this with compiling Go, and I feel like that's a very good metaphor.
But just getting the libraries you need to build is a hassle. Plain and simple.
> I really think Common Lisp had a fantastic solution here and I miss the evolution over try-catch they spec'd.
Are you thinking of Common Lisp's conditions/handlers/restarts? I've never programmed in Common Lisp but have always been intrigued by this idea.
> This is in sharp contrast to Maven or others where build artifacts can exist entirely outside of the expectation of use in the build chain
Ok, I think I understand now. So you'd like to be able to use "prebuilt" dependencies in Go (delivered like a .so in C++ or a .jar in Java)?
Honestly, the compilation is so quick, and the advantages of a statically linked executable are so great, that I have trouble imagining why I would want that. In Go, instead of saving the dependency as a .jar file (for example), I just save the dependency source in the `vendor` directory. For the record, this is exactly what big projects like Chromium do in C++.
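Concretely, a vendored project layout looks roughly like this (all names are made up; the key point is that the dependency's full source lives inside the project tree):

```
myapp/
    main.go
    vendor/
        github.com/
            someuser/
                somelib/        <- full copy of the dependency's source
                    somelib.go
                    LICENSE
```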
On Lisp, yes conditions and restarts. The only downside is the feature does inhibit some types of optimizations.
On Go, I honestly don't care if they're prebuilt. The fact the package and build story are linked is not the problem. It's the strange split brain assumption that versioning libraries is bad and taking from a git repo tip is reasonable.
Which is doubly weird because of the human contempt evident in Go. As Pike said, evidently Google employees cannot be trusted to do much of anything. Except, mysteriously, for the rather hard task of never breaking a git master and communicating breaking changes without any signing mechanism.
Vendoring is something you do to aid build times and to improve the simplicity of source distribution. It is misapplied as a substitute for actual versions and developer-focused source and binary packaging.
Vendoring means that if I check a repo out again after 2 months of having it run in production, I can probably still build it. Probably. But updating it with the latest security or bug fixes? That is no easier.
To me a story is a series of features working together to explain how the developer will actually interact with things. It's not unlike Agile's definition, but maybe more practical? Languages and developer environments are a product, so considering them as the sum of their parts from a UX perspective is very healthy.
So when I say "you use channels for concurrency", this is not strictly true (technically the concurrency primitive is goroutines, as someone corrected me above). But since it's a practical consideration that you need to use channels (and your race condition detector will flip out if you don't), I say "you use channels". It's a useful fiction.
I used to call these Wittgenstein's Ladder because I'm a huge nerd, but no one ever understood the nature of the joke and so I started speaking relatable English again. :|
A lot of people complain about Go because they expect a better Java/C++/C#. If you expect a "better Java", then you are going to be disappointed.
For me Go is more like C with garbage collection and some features of a scripting language -- and that's exactly how it feels programming in Go to me. At the moment it's my language of choice for all side projects: web applications, microservices, shell "scripts", and so on.
There's an online playground/tutorial that can show you the basics. Even if you never write another Go program, it's worth trying it out just to see what else is out there: https://tour.golang.org/welcome/1