> Go does not allow many basic types, such as strings or booleans, to be nil. Instead, when a type is initialized without a value, it defaults to the “zero value” for that type. This is frequently useful, but complicates database interactions, where null values are common.
This sounds like they weren't defining their types properly.
If your value can potentially be null, it should be a pointer to the type, not the type itself. A string can't be null, but a pointer to a string can.
(In fact, there's no magic going on here - 'nil' is simply the zero value of a pointer. So you always get the zero value - you just need to choose the type that has the zero value you want... which is, in this case, a pointer, not a value).
As explained well in this thread[0], this is the most accurate representation of the data itself. You could create your own type that automatically decodes all null values to whatever the zero value is (empty string, etc.), but then you lose that information.
Yes, this forces you to do a check for the null value before using the data for the first time (or to invent your own monad for abstracting this), but at a high level, that's what you have to do in every language.
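For illustration, a minimal sketch of the pointer-as-nullable idea (the `user` struct and its fields are hypothetical):

```go
package main

import "fmt"

// A nullable column maps naturally onto a pointer field: nil is the
// zero value of the pointer type and stands in for SQL NULL.
type user struct {
	Name  string
	Email *string // nullable column
}

func main() {
	u := user{Name: "Ada"} // Email defaults to its zero value: nil
	if u.Email == nil {
		fmt.Println("email is NULL")
	} else {
		fmt.Println("email:", *u.Email)
	}
}
```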
Using a pointer adds an extra heap allocation, though. The right thing would be to use a Nullable<T> type and define a custom marshaller once and for all... but Go doesn't have generics, so you can't do that.
> Using a pointer adds an extra heap allocation, though.
If you want to dereference it immediately (as they seem to want to in the post), this isn't really going to affect you. You probably want to pass around a pointer, anyway, so that you're not copying values over each time.
> The right thing would be to use a Nullable<T> type and define a custom marshaller once and for all... but Go doesn't have generics, so you can't do that.
Sure you can - that's literally what the NullBool, etc. types do
A pointer to a string IS a 'Nullable<String>' - what they need to do is define a way to unwrap this cleanly, which is easy.
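One clean way to unwrap it might look like this (strOr is a hypothetical helper, not anything from the standard library):

```go
package main

import "fmt"

// strOr unwraps a *string, falling back to a default when it is nil -
// the "clean unwrap" mentioned above.
func strOr(s *string, fallback string) string {
	if s == nil {
		return fallback
	}
	return *s
}

func main() {
	var name *string // nil, as if the column had been NULL
	fmt.Println(strOr(name, "(none)"))
}
```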
> If you want to dereference it immediately (as they seem to want to in the post), this isn't really going to affect you. You probably want to pass around a pointer, anyway, so that you're not copying values over each time.
Sure, I'm not saying that the extra allocation will always matter. In most cases it won't. My point is just that this type system workaround does cost some performance. For example, a heap-allocated struct that contains two nullable ints using the pointer trick has 3 heap allocations, not one.
> Sure you can - that's literally what the NullBool, etc. types do
But that has to be done for each type. If you define a custom type Foo and want a nullable version, you have to write the NullFoo boilerplate yourself. This is what generics are for.
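For the curious, here's roughly what that per-type boilerplate looks like: a hypothetical NullFoo hand-implementing the database/sql Scanner and driver Valuer interfaces, which a generic Nullable<T> could express once for all types:

```go
package main

import (
	"database/sql/driver"
	"errors"
	"fmt"
)

// Foo is a stand-in for some custom type you want a nullable version of.
type Foo string

// NullFoo is the boilerplate a generic Nullable<T> would eliminate;
// without generics, one of these wrappers has to be written per type.
type NullFoo struct {
	Foo   Foo
	Valid bool // Valid is true if Foo is not NULL
}

// Scan implements the database/sql Scanner interface.
func (n *NullFoo) Scan(src interface{}) error {
	if src == nil {
		n.Foo, n.Valid = "", false
		return nil
	}
	switch v := src.(type) { // drivers hand back a limited set of types
	case string:
		n.Foo = Foo(v)
	case []byte:
		n.Foo = Foo(v)
	default:
		return errors.New("NullFoo: unsupported source type")
	}
	n.Valid = true
	return nil
}

// Value implements the database/sql/driver Valuer interface.
func (n NullFoo) Value() (driver.Value, error) {
	if !n.Valid {
		return nil, nil
	}
	return string(n.Foo), nil
}

func main() {
	var n NullFoo
	_ = n.Scan(nil)      // simulate reading a NULL column
	fmt.Println(n.Valid) // false
}
```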
A pointer to T is not the same as Nullable<T>. There are many important differences.
The expressible values for a Nullable<T> are, in theory, the following:
* A valid instance of T, or Null
The expressible values for a pointer-to-T are, in theory, the following:
* A null pointer (pointer-to-0)
* A pointer to a valid instance of T
* A pointer to a previously valid, but now invalid because it was freed, instance of T
* A pointer to an arbitrary location in memory
In systems programming (the space Go is presumably designed to be most useful for), the distinction between valid and invalid data is pretty important, so it's a little lazy to say 'just use pointers' when there are examples out there of safer, more efficient alternatives.
> The expressible values for a pointer-to-T are, in theory, the following: a null pointer (pointer-to-0); a pointer to a valid instance of T; a pointer to a previously valid, but now invalid because it was freed, instance of T; a pointer to an arbitrary location in memory
In C. Go has garbage collection, and no pointer arithmetic, so that's not the case in Go.
If you think a garbage collector and the lack of pointer arithmetic protect you from heap corruption bugs and developers deciding they're smart enough to manually manage memory for some extra speed, I've got some bad news for you. ;)
As far as I know, it can only create a pointer of type A that actually points to a value of type B, not reference unallocated/deallocated memory. Nullable<T> could do that too if the language allowed it.
Any language that lets you call out into third-party libraries not written in that language can end up with heap corruption.
The important distinction is that a 'Nullable' value type need not involve a pointer, which means it can entirely live on the stack, which dramatically limits the damage that can be done by corruption: Worst case, corruption sets the 'hasValue' flag to true, and you read an uninitialized struct off the stack. Much less catastrophic than a double-free or pointer into random memory (Though, of course, a determined attacker could probably make do with either).
To avoid the nil pointer dereference panic, could you use an idiom borrowed from Go's map implementation?
A library could take advantage of multiple assignment, a la maps:
`i, ok := m["route"]`
If the map has no entry for key "route", i will be the zero value for the type [e.g. an empty string], and ok will be false, indicating that the key wasn't found in the map.
So in a SQL library it might look something like
`func (s Statement) GetString(columnNum int) (string, bool)`
Called as: `str, ok := s.GetString(i)`
At least this way your program won't panic() if you forget to check that string != nil and dereference *string, but you still retain the information that the string is nil.
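A sketch of what that could look like, reusing database/sql's NullString under the hood (Statement and GetString here are hypothetical, not part of any real library):

```go
package main

import (
	"database/sql"
	"fmt"
)

// Statement is a hypothetical wrapper; GetString mirrors the
// two-value comma-ok form of a map lookup.
type Statement struct {
	cols []sql.NullString
}

// GetString returns the column's value and whether it was non-NULL.
func (s Statement) GetString(columnNum int) (string, bool) {
	ns := s.cols[columnNum]
	return ns.String, ns.Valid
}

func main() {
	s := Statement{cols: []sql.NullString{{String: "hello", Valid: true}, {}}}
	if str, ok := s.GetString(0); ok {
		fmt.Println("column 0:", str)
	}
	if _, ok := s.GetString(1); !ok {
		fmt.Println("column 1 was NULL")
	}
}
```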
Yes, that's actually more or less what NullString does. It's a struct that contains two fields: String and Valid (a boolean). It also implements a couple of other interfaces, but at its core, it's what you describe.
Functionally, they're basically the same thing - it's only a syntactic difference whether you check whether a pointer is nil vs. checking whether a boolean is false. At some point, you either have to do a check or coerce a null value into a reasonable value. (Or, I guess, just risk it!).
Functional programming purists may cringe at the analogy, but this is all more or less the same functionality as Scala's 'Option' monad - it just depends on how you want to express it. The only difference is that the Go compiler doesn't force you to check against nil before dereferencing, but if you want to ensure the safety before converting the database value to a String and forgetting about it, then the struct will do that.
This is the approach I've taken for dealing with database nulls in go. I tried using NullString / NullTime / etc, but doing that required writing a custom JSON marshaller for every class with nullable data, which was a huge pain in the ass.
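For anyone hitting the same wall, the boilerplate in question looks something like this: a wrapper that marshals sql.NullString to a JSON string or null (the wrapper type is my own sketch, not part of the standard library):

```go
package main

import (
	"database/sql"
	"encoding/json"
	"fmt"
)

// NullString wraps sql.NullString so it marshals to a JSON string or
// null; by default, sql.NullString marshals as {"String":...,"Valid":...}.
type NullString struct{ sql.NullString }

func (ns NullString) MarshalJSON() ([]byte, error) {
	if !ns.Valid {
		return []byte("null"), nil
	}
	return json.Marshal(ns.String)
}

func main() {
	u := struct {
		Name     string     `json:"name"`
		Nickname NullString `json:"nickname"` // NULL in the database
	}{Name: "Ada"}
	b, _ := json.Marshal(u)
	fmt.Println(string(b)) // {"name":"Ada","nickname":null}
}
```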
I don't know if you're using SQL or NoSQL databases, but if the latter (and your data is stored in JSON), you may find this package I wrote useful: https://www.github.com/ChimeraCoder/gojson
Instead of unmarshalling to map[string]interface{} and then doing awkward type assertions and checks, etc., this will generate structs that you can unmarshal into. That way, if the unmarshalling succeeds, you're guaranteed to know the types of all your values instead of checking them each time.
As for null values, it automatically uses a pointer in the struct definition if it finds a null value - writing that package was how I first learned about this entire topic, actually, which is why I mention it!
I think that's fair, and in retrospect we should have started with pointers to strings. If this is the best practice, though, we should change the standard library to reflect that, rather than providing NullStrings.
Well, it seems that they've just chosen the second option (no pun intended) that I've mentioned above - that is, to define a monad to abstract this. There may be some benefit to doing this instead of pointers, but as you've found yourself, using a monad requires you to pass that monadic type up the calling chain (OR to unwrap that and deal with both cases safely).
I haven't used SQL databases with Go much, so I'm just looking at NullFoo types now for the first time, but every time I've thought I could improve on the design of Go in some way, the folks on #go-nuts have (kindly) shown me that I wasn't really thinking about things the right way. So, there's probably some reason that it's been designed this way, but I'm saying that purely based on prior experience with other Go matters, not with package "database/sql".
Well, they do correspond to the "Maybe a" for some fixed type a, and Maybe is a monad; however, you are right that it would be clearer to say Maybe or Nullable or Option type, since that's what is meant, not some general monad.
Great post. I've done some experimentation with Go for web dev, and encountered similar problems; it's such a delight to write code in that I'd love to use it in production, but can't justify spending a lot of time debugging the immature libraries and writing new ones, particularly not on the clock for clients. I too ran into a problem with null types in another library interfacing with postgresql. The weak points I found were:
* Lack of an ORM (there are many, almost all incomplete, and lacking in some way)
* Immature or incomplete db interface libraries
* No db migrations (would love to see a simple sql-only solution here)
* Package management is simple and elegant but without explicit versioning, so forking is the end result to ensure stability
* No process pool management for running Go behind something like Apache or nginx.
This goagain tool looks interesting for the last point though:
and the routing I found pretty straightforward with something like github.com/gorilla/mux so there are solutions, they're just not always as fully baked or as mature as you'd find in other ecosystems. One place Go shines though is the great built in web server which really simplifies getting up and running and testing, and which I see they've built on here.
Given how easy and pleasurable it is to work with, and the focus on practicality rather than language features, I'm quite confident that Go will reach its stated aim of becoming a popular server-side language quite soon. For those new to the philosophy behind the language, I found this informative - http://talks.golang.org/2012/splash.article
I went on a framework binge over the past two weeks to see what the state of web development was. I looked at a dozen or so frameworks including Go frameworks. I came to the same conclusion you did.
Revel was the most promising Go web framework that I saw though very immature.
Go ORMs aren't ORMs. They are object persistence frameworks (at best). None of them really provide the "R" in ORM. Most of them basically do "SELECT * FROM table WHERE id=12" and dump that into a struct. There goes the "M" as well.
The Go language itself is very nice to work with though. It's a great language. Give it 2-3 more years and a truly usable ORM and web framework will crop up.
Many people coming from dynamic languages rely too heavily on ORMs. It's not very hard to write some SQL that will do what you want it to do, and it'll almost always be faster than an ORM. There is a little more setup, but it's not really that hard unless you have a really huge data model. Also, many people are moving away from relational databases for web platforms anyway, and non-relational databases are a lot easier to write code against. Check out the mgo package for running against mongodb.
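A minimal sketch of what querying MongoDB with mgo looks like, from memory - the import path has moved around over the package's history, so treat the paths and calls below as approximate and check the package docs:

```go
package main

import (
	"fmt"

	"gopkg.in/mgo.v2"
	"gopkg.in/mgo.v2/bson"
)

// Person is a hypothetical document shape.
type Person struct {
	Name  string
	Email string
}

func main() {
	// Dial connects to a MongoDB server (the address is a placeholder).
	session, err := mgo.Dial("localhost")
	if err != nil {
		panic(err)
	}
	defer session.Close()

	// Look up one document and decode it straight into a struct -
	// no map[string]interface{} juggling required.
	c := session.DB("test").C("people")
	var p Person
	if err := c.Find(bson.M{"name": "Ada"}).One(&p); err != nil {
		panic(err)
	}
	fmt.Println(p.Email)
}
```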
I don't think people use ORMs because relations are hard, but because they are boring, repetitive, and just complex enough to make them tricky but not interesting. So ORMs are used to take away some of that cognitive load when you just want to say that Story has_many :commenters, via: comments or something similar. It's simpler to express what you want, gets rid of boilerplate joins etc., and usually you can drop down to SQL if you need to fine-tune a query, so why not?
Re relational versus non-relational, they're really suitable for different kinds of data, and it's disingenuous to suggest that one is the future and one is the past.
This; the moment I realized I was never going to choose to do CRUD web stuff in Go was when I realized that this was the perspective Golang developers have on ORMs.
Hibernate is not a good example of an ORM, frankly. (I myself have horrors of both Hibernate and TopLink, which was the top Java ORM way back when.)
Ruby's ActiveRecord is a much better choice. It has an excellent balance between SQL and OO. It doesn't pretend that SQL doesn't exist; on the contrary, it encourages SQL use, and merely maps tables to objects, adds a bunch of useful features (data validation, change management, automatic joins, declarative migrations) and gets out of your way most of the time.
Have you used the Rails ORM (ActiveRecord) or SQLAlchemy? (I haven't used the latter much, but people rave about it.) Hibernate is an ORM taken to the extremes.
At first when I started doing Android apps I was writing raw sql code. I very quickly remembered how tedious and repetitive this is and went running for the nearest ORM. Nothing feels better than ripping out huge chunks of boilerplate.
Personally I don't like ORMs because they're black boxes. When some complex model relationship results in crappy generated SQL (which I've experienced a number of times) I had to sit down and write SQL. The problem being that after a couple of years of ORMing my SQL skills had atrophied.
So now I've come full circle: from SQL to ORM and back to SQL.
> No process pool management for running Go behind something like Apache or nginx.
Aside from goroutine scheduling issues, what would be the main reasons to run multiple Go processes, instead of just running a single multi-threaded process?
There are perhaps a few advantages to having lots of processes that you wouldn't get with a single process: redundancy in case a process does hang, rolling restarts to switch out to a newer version of the code seamlessly by starting multiple processes to handle requests before killing the old one, and I suspect you'd hit some limits of the scheduler, as you hinted in your question.
It's probably possible to have most of that logic in a single Go process which runs a bunch of sub-processes for serving requests, but then you're also pretty much writing a load-balancer as well as your app each time. Perhaps better to separate out those tasks and put them into a separate process manager which runs a pool of processes, as on other platforms like Ruby with Unicorn or Passenger? Those platforms have other reasons of course for scaling with processes and not threads, which don't apply to Go.
Not sure how hard or efficient this would be (just using one process) as I haven't tried an app in this style in go, have only been playing with it so far. I would be really interested to see a Go server implementation that managed a bunch of goroutines to serve requests, are there any examples out there?
> Not sure how hard or efficient this would be (just using one process) as I haven't tried an app in this style in go, have only been playing with it so far. I would be really interested to see a Go server implementation that managed a bunch of goroutines to serve requests, are there any examples out there?
I'm slightly confused by this question, because that's what the standard library does. If you've ever used net/rpc, or net/http, then it spawns goroutines for each request (or connection, respectively).
If you meant to say spawn a bunch of _processes_ to serve requests, then no, I don't think anyone has done it. I don't think it makes a whole lot of sense for anyone to write code to do this in Go, tbh.
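For reference, a minimal net/http server; the standard library runs the handler on a new goroutine for each incoming request, with no process pool involved:

```go
package main

import (
	"fmt"
	"log"
	"net/http"
)

func main() {
	// net/http invokes this handler on a fresh goroutine per request;
	// there is no process pool or manual dispatch to manage.
	http.HandleFunc("/", func(w http.ResponseWriter, r *http.Request) {
		fmt.Fprintln(w, "hello from a per-request goroutine")
	})
	log.Fatal(http.ListenAndServe(":8080", nil))
}
```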
> redundancy in case a process does hang
If you're talking about deadlocks, then only the deadlocked goroutines will be blocked. The rest will make progress, just as if you had multiple processes.
> rolling restarts to switch out to a newer version of the code seamlessly by starting multiple processes to handle requests before killing the old one
You can do this with only 2 processes, old and new. You can spawn the new one, tell your LB to add the new one to the pool, wait 30 seconds, remove the old one from the pool, wait 60 seconds, kill the old one. You don't need a dedicated LB - you could, of course, use an nginx frontend or something instead. There are also some neat ways to do nginx-style zero-downtime restarts that I've never tried but have heard good things about.
> I suspect you'd hit some limits of the scheduler
AFAIK, limits of the scheduler tend to be hit when you increase GOMAXPROCS to something above 8. At this point, you'll spend a lot of your time in the runtime managing goroutines. My solution is just to run multiple processes with GOMAXPROCS=8 and point your LB at both of them. Again, you can just use nginx.
Feel free to experiment with the model you proposed, but this is relatively non-idiomatic, and the context-switching cost will start to mislead you as to Go's actual potential. The advantage, btw, that you get when you use the model I spoke of is that in-memory caching, connection pooling, and context-switch time are all close to optimal, and you have fewer processes to monitor/restart/update.
Thanks for the reply, which clears up some of my misunderstandings (sorry, new to Go), I was using the standard library and hadn't looked under the hood, I'll go take a look at what it does. Re rolling restarts:
> You can do this with only 2 processes, old and new. You can spawn the new one, tell your LB to add the new one to the pool, wait 30 seconds, remove the old one from the pool, wait 60 seconds, kill the old one.
You still need a LB to do this, though I take your point that you could use nginx, might experiment with that.
Seems like most issues (except the nil problem, which is explained very well by chimeracoder) are not really with the language itself, but just with its ecosystem, which naturally is nowhere near something like Java or Ruby at this point. I agree, and while I love Go, I would probably develop a new web app in Sinatra. Who knows if that will be the case next year, though.
Yep, that's what I noticed too. This makes me hopeful, because if the biggest criticisms of a new language are "it's not mature yet", it's a very good sign for the language. I'm glad they didn't have problems with the language's core components.
Also, it matches my experience. It's very nice for some uses, but web development isn't yet a strong suit. Django, for example, is much more mature and has a component for almost anything you might need for web development.
Yeah, and this is why I don't think it will ever become semi-mainstream. I personally use Express with Node, and it seems a good amount of people use Sinatra/Flask as well.
It's much more than just that though. If you scanned Express feature by feature you could implement most of them in Go. I would imagine someone experienced in JS and Go could port the entire thing over in a day.
It's just then what? Now you have:
1. Similar / worse performance.
2. Way less useful libs to leverage.
3. Have to write 2 languages instead of 1 (if you happen to already use Node).
4. Dealing with way less mature libs for crucial components that you definitely don't want to be writing yourself.
There's basically no gains. Deploying single binaries is great but a "write once, use nearly forever" build script makes deployment a snap with any non-compiled language.
Testing support is amazing in JS too, and debugging is light years ahead of Go. Using gdb is just archaic compared to using node-inspector. I'm sure Ruby and Python have equally amazing testing/debugging support too.
Yes because Javascript and Golang are basically the same language with basically similar approaches to concurrency, similar deployment characteristics, similar toolchains, and comparable performance in most situations. Is what you're saying, right?
No, they have much different ways of dealing with concurrency. Deployment is a solved problem in most modern languages.
I think you may have misunderstood my post?
I also spent a pretty decent amount of time performing real world benchmarks for both languages by writing applications in both and then ran various performance metrics. Performance for general web apps with real world data goes back and forth depending on what you're doing.
My point was that Go doesn't offer nearly enough pros for it to be worth switching to.
If by "solved problem" you mean "if you accept the problem of keeping an up-to-date deployment environment on every machine with up-to-date patches", compared with "build this binary copy it over and run it", yes, Node and Golang deployment is comparable.
It is a solved problem. You can't just copy over a folder (or file) and call it a day, but it doesn't take that much to get Node deployed by simply typing 1 command on 1 machine, and this is the same thing you would end up doing for Go too.
If you ever dealt with deploying to more than 1 machine you would realize that doing it by hand is a pretty crazy idea. The first thing you would do is create a solution that allows pretty much hands free deployment, and those solutions exist for every modern language. Heck, it's one of the first things I did as a developer even while deploying to 1 machine.
Ok, so you're going to be scp'ing your binary over manually every time? Do you manually do your other build tasks too?
No, you would have a build script that minifies/concats assets, runs tests, maybe generate docs, then finally deploy using whatever method you happen to be using if everything passes.
This might be a git deploy, or scping files over to some server.
In either case you're never copying 1 file over because that is abstracted away from you by your build script. In return you type 1 command and let your build script do the dirty work for you.
Typing this one command is the same if you're using Go or Node or any other modern language. It doesn't really matter that I have to add a few extra commands to my build script because these are things I only have to do once.
Interesting that you are using Node.js as your example. Just a couple years ago Node was the new hotness, but was pretty terrible for creating an entire website.
Go has been out for a number of years. It shipped v1.0 almost a year ago and Node hasn't shipped v1.0 but who cares about version numbers.
Both languages have been available for quite some time. It's not about hotness at all. No one seems to WANT to use Go for serious web application development, and this is proven time and again by the lack of good libs available.
Node.js was released in 2009, the same year as Go. But Node uses a language released in the 90's, which at this point is arguably the most popular language in the world.
Go on the other hand was actually a new language, which took a while to stabilize. So I don't think this is really a valid comparison, especially if you're going to talk about library support.
I think it's fair because a lot of JS code is highly coupled with the DOM which isn't really going to be that useful for the server except for projects like phantomjs and other similar libs.
The only benefit JS has over Go for adoption is that people are pretty much forced into using JS or something that compiles into JS.
I think Go's take on concurrency is less confusing than callbacks. It's not like the language is massive and close to impossible to learn.
People had a ton of time to create interesting web dev related libs for Go but no one has. There's only a mix of libs that are pre-1.0 (dead), really low quality (missing extremely critical things), or bad performance because they haven't been tweaked under load because no one is using them.
But Node wasn't the first server-side javascript. In fact, Netscape released a server platform for it a few months after releasing the language. Quite a few others have cropped up since then: http://en.wikipedia.org/wiki/Comparison_of_server-side_JavaS...
They weren't all that popular, but I think that makes my point. These things can take a while.
If pre-1.0 makes a Go library dead, then Go has only had a year to develop non-dead libraries. You have to really be an early adopter to build stuff on a language that's still making breaking changes.
It doesn't matter. SSJS's past has nothing to do with anything in this discussion.
Also, you're misreading what I'm saying. I never said a pre-1.0 lib makes a Go lib dead. It feels like a lot of people made libs pre-1.0 and then abandoned them, but pre-1.0 libs make up a pretty big portion of what's available to use right now.
I don't want to use a buggy, untested, unmaintained lib as an application developer. This goes for any language. But right now a lot of Go's libs are in this state.
Anyways I'm done replying. You would rather pick at negatives in every post I make and ignore the other things that make sense just to somehow make your case better.
The only part of JavaScript that's tied to working with web UI is the DOM API, which isn't even a part of the ECMAScript standard but rather set by the W3C and meant to be language-independent. I think Python could have worked out on the client-side but there were other options and none of them lasted. Give JS a more serious look. You might be surprised to discover that it has some unique features that make it attractive as a general-use language. I think JS has a lot to do with why people suddenly wanted closures in Java or that they implemented proper first class funcs in C#. It's exceptionally good at event-driven paradigms, normalization and reducing complexity, which is exactly what was needed on the client-side but also serves as a powerful feature-set in general use IMO. It's good at letting you set your own paradigm. If you don't love callback-passing-intensive code, that's something you can bury under an interface that appeals to you more.
Concurrency. Yes, you do want it. `go func() { /* periodically do stuff here. No cron jobs! Yay...! */ }()`
Static binaries. Yes, you want that too. For example, I'm on a box with no external access. `bundle install rails`? Ugh, no, that's not even possible. Blah. You can even cross-compile for other platforms. Amazing.
...but your comments are entirely valid. I wouldn't pick it over node or flask for a serious project; but it's fun to play with.
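A fleshed-out sketch of that "no cron jobs" one-liner above, using time.Tick for the periodic part (the interval and the work are placeholders):

```go
package main

import (
	"fmt"
	"time"
)

func main() {
	// A goroutine standing in for a cron job: fire on an interval forever.
	go func() {
		for range time.Tick(10 * time.Second) {
			fmt.Println("periodic work") // placeholder for the real task
		}
	}()
	select {} // block main forever, as a long-running server would
}
```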
Please keep in mind that the database/sql package is - like Go itself - still very young. There is still a lot of work to do compared to mature libraries like JDBC, but it will improve with every release.
Yeah, like the fact that two queries can't run simultaneously in a single transaction. All queries in a single transaction must be run sequentially. What worries me, though, is that this is "by design", and seen as completely proper by the Go devs.
That's the reason I stopped using Go: they have guys who don't understand databases writing their database code.
NULL is one of the most powerful features of SQL. It simply means a lack of data. This means it is neither equal nor not equal to anything else (including NULL). NULL is not a value; it's a state of non-existence.
So, for example, if you were providing a survey with an optional question with a yes/no answer. NULL would mean "no answer", false would mean "no", and true would mean "yes". Storing the "no answer" as a false would be incorrect since they did not answer the question.
It could also happen if you were adding a new column. Existing rows do not have data and would deserve a NULL unless you had a deterministic way to fill in a true or false value.
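In Go terms, database/sql's NullBool models exactly this three-state answer; a small sketch (describe is a hypothetical helper):

```go
package main

import (
	"database/sql"
	"fmt"
)

// describe shows the three states a nullable boolean answer can take:
// NULL ("no answer"), false ("no"), and true ("yes").
func describe(a sql.NullBool) string {
	switch {
	case !a.Valid:
		return "no answer" // SQL NULL: the question was skipped
	case a.Bool:
		return "yes"
	default:
		return "no"
	}
}

func main() {
	fmt.Println(describe(sql.NullBool{}))                        // no answer
	fmt.Println(describe(sql.NullBool{Bool: true, Valid: true})) // yes
}
```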
> NULL is one of the most powerful features of SQL.
I'd argue that NULL is, from a logical perspective, the single most broken feature of SQL.
> It simply means a lack of data.
The semantics of NULLs are less straightforward than that, and have a poor relationship to how SQL actually treats them. Every table with one or more nullable columns really should be a table with all the non-nullable columns, plus an additional table for each combination of columns that would never be missing together, each of which has a foreign-key relationship back to the first table.
That's for the simple case, where the semantics of missing data are always consistent for any set of columns; in real-world databases there is often more than one reason that data might be missing, and those different reasons (because they are different classes of fact), for any given column or set of columns to which they apply, each call for another table with a foreign-key reference to the table containing only the mandatory columns.
> So, for example, if you were providing a survey with an optional question with a yes/no answer. NULL would mean "no answer", false would mean "no", and true would mean "yes". Storing the "no answer" as a false would be incorrect since they did not answer the question.
Sure, storing it as one table with all the questions as columns and storing the "no" answer when the answer was missing would be an error. If all the questions aren't required for the survey to be valid, then -- from a logical perspective -- the problem is presenting the whole thing as a single relation in the first place. It's a set of relations that share a key (but not necessarily all values of the key).
And how much overhead in logic, code, and frustration would that cause in terms of development and support? Right now, I'm dealing with an over-normalized database close to what you are describing, needing over 20 joins in a single query (not including actual sub-records) just to get a complete set of properties for display, where null means not there...
> And how much overhead in logic, code and frustration would that cause in terms of development and support..
Depends on the competencies of the people doing dev and support. Personally -- both as a developer and a technical user -- I've had more problems dealing with situations where NULLS had ambiguous semantics, where the typical naive use of nullable columns instead of normalization into logical units of data that must all be present or absent together resulted in avoidable data inconsistencies, etc., than I've ever had with overnormalized tables.
Joins for queries are a solve-once development problem; data inconsistencies and ambiguities resulting from the problems with NULL are an ongoing problem.
It really depends on what type of app you design. For data-entry apps, allowing nulls for booleans is fairly common; it would mean "not provided", for instance.
I basically encountered the same annoyances, primarily with the whole NullBool, NullString business.
Go is pretty good at most things, dealing with SQL so far was a bit annoying however.
The goroutine scheduler also needs some love still. A garbage collector over a single global heap feels like a bad idea to me, but Go did it anyway. Shouldn't they have learned from Java that global heaps and concurrency don't mix well?
I'm 100% a Ruby fan, but it seems crazy that they went from Go to Ruby unless ultimate performance isn't their goal. Wouldn't a statically typed language be worlds faster?
I didn't see anything in the article that mentioned performance at all. I think they just wanted to try out Go. Ruby being the industry standard for new websites these days, it seems logical they'd choose it if they didn't like their experience with Go.
Go is pretty slow right now. I wrote a prime number finder in both Go and Ruby 1.9, and Ruby smoked it. Granted, that's probably mostly due to all the math in Ruby being C, but still, you can't just say that Go is faster because it's compiled ahead of time.
bender:Desktop phil$ time go run primes.go
Found them! 78702
real 0m21.349s
user 0m21.304s
sys 0m0.033s
bender:Desktop phil$ ruby -v
ruby 1.9.3p194 (2012-04-20 revision 35410) [x86_64-darwin11.4.0]
bender:Desktop phil$ time ruby primes.rb
Found them! 78702
real 0m7.656s
user 0m7.651s
sys 0m0.005s
bender:Desktop phil$ time go build primes.go
real 0m0.269s
user 0m0.231s
sys 0m0.033s
You're using integer arithmetic in Ruby and floating-point arithmetic in Go. Try replacing
math.Mod(float64(i), float64(j))
with
i%j
My results:
$ time go run primes.go
Found them! 78702
real 0m0.835s
user 0m0.787s
sys 0m0.039s
$ time ruby primes.rb
Found them! 78702
real 0m21.013s
user 0m21.005s
sys 0m0.016s
It's worth noting that the time on the Go side includes the time to run the compiler and linker, since you're using `go run` instead of `go build`. Not saying that's bad, just something to keep in mind when benchmark numbers are in the second range.
(I understand that you did because the original poster did it)
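For anyone wanting to reproduce this, here's a minimal trial-division counter along the lines of the benchmark; the original primes.go isn't shown, so the limit and loop structure below are stand-ins:

```go
package main

import "fmt"

func main() {
	const limit = 1000000 // stand-in; the original program's bound isn't shown
	count := 0
	for i := 2; i < limit; i++ {
		isPrime := true
		for j := 2; j*j <= i; j++ {
			if i%j == 0 { // integer modulo - not math.Mod on float64s
				isPrime = false
				break
			}
		}
		if isPrime {
			count++
		}
	}
	fmt.Println("Found them!", count)
}
```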
[0] https://groups.google.com/forum/?fromgroups#!topic/golang-nu...