Go in Production – Lessons Learned (tdom.dev)
195 points by tdom on Nov 6, 2020 | 104 comments


To those who say frameworks are needed because the stdlib is not enough: No, the stdlib defines very clear interfaces that libraries can implement. The result is that you can drag and drop middlewares from multiple packages into your codebase, because they all conform to the universal middleware signature `func(http.Handler) http.Handler`. Want CSRF? Want Sessions? Want Auth? They all exist as separate packages (check out the Gorilla toolkit), but the beauty is that they all work without needing to know the existence of each other.

Frameworks like Echo and Gin eschew the universal middleware signature `func (http.Handler) http.Handler` for their custom built ones, and as a result cannot tap into the ecosystem of middlewares that target Go's net/http. Between a stdlib package and a third party package, which do you think is more stable?

You are not constrained to net/http's router. Chi is a good example of a third party router that implements features not in net/http but still conforms to the `func(http.Handler) http.Handler` interface, and as a result middlewares that work for net/http will also work for Chi.
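
To make this concrete, a minimal sketch of the universal signature in action (the logging middleware below is made up for illustration; any third-party middleware with the same shape slots in the same way):

    package main

    import (
        "log"
        "net/http"
    )

    // logging is an illustrative middleware in the universal shape.
    func logging(next http.Handler) http.Handler {
        return http.HandlerFunc(func(w http.ResponseWriter, r *http.Request) {
            log.Printf("%s %s", r.Method, r.URL.Path)
            next.ServeHTTP(w, r)
        })
    }

    func main() {
        mux := http.NewServeMux()
        mux.HandleFunc("/", func(w http.ResponseWriter, r *http.Request) {
            w.Write([]byte("hello"))
        })

        // Any func(http.Handler) http.Handler middleware, from any package,
        // stacks the same way: e.g. csrf(sessions(logging(mux))).
        log.Fatal(http.ListenAndServe(":8080", logging(mux)))
    }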

https://justinas.org/embrace-gos-http-tools


> Frameworks like Echo and Gin eschew the universal middleware signature `func (http.Handler) http.Handler` for their custom built ones, and as a result cannot tap into the ecosystem of middlewares that target Go's net/http. Between a stdlib package and a third party package, which do you think is more stable?

I think you're overstating the cost of using "proprietary" middlewares. I'm most familiar with Gin; it is trivial to wrap a "universal" middleware in a Gin middleware (gin.WrapH). It is not trivial to make a universal middleware that does the equivalent of gin.Context#AbortWithError.

I don't fully disagree either - obviously `gin.WrapH` everywhere is noise, but so is e.g. `chi.URLParam(req, "abc")` compared to `c.Param("abc")` in Gin, and Gin's parameters are much cheaper for a middleware to tweak. (I chose chi here because it's one I'm familiar with that tries hard to keep the standard Handler signature at the expense of interfaces to its own features.)

A lot of problems would go away if the standard signature were `func (Context, ResponseWriter, Request) error`. Sometimes a minimal universal option is nice because minimalism is also a virtue; but sometimes it's also just missing necessary features.
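
For what it's worth, a rough sketch of wrapping a "universal" middleware for Gin (the adapt helper below is mine, not part of Gin; gin.WrapH itself covers the simpler case of mounting a plain http.Handler on a route):

    package main

    import (
        "log"
        "net/http"

        "github.com/gin-gonic/gin"
    )

    // adapt turns a net/http-style middleware into a Gin middleware.
    func adapt(mw func(http.Handler) http.Handler) gin.HandlerFunc {
        return func(c *gin.Context) {
            called := false
            mw(http.HandlerFunc(func(w http.ResponseWriter, r *http.Request) {
                called = true
                c.Request = r
                c.Next() // run the rest of the Gin chain
            })).ServeHTTP(c.Writer, c.Request)
            if !called {
                c.Abort() // the middleware short-circuited, stop the Gin chain too
            }
        }
    }

    // requestID is a stand-in "universal" middleware.
    func requestID(next http.Handler) http.Handler {
        return http.HandlerFunc(func(w http.ResponseWriter, r *http.Request) {
            w.Header().Set("X-Request-ID", "example")
            next.ServeHTTP(w, r)
        })
    }

    func main() {
        r := gin.Default()
        r.Use(adapt(requestID))
        r.GET("/ping", func(c *gin.Context) { c.String(http.StatusOK, "pong") })
        log.Fatal(r.Run(":8080"))
    }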


Came in here to say this.

I helped many teams move from Java to Go, and came across this so many times. Everyone wanted to find the Spring equivalent for Go (or Django, Ruby on Rails, etc) when it doesn’t actually exist. That’s by design.

No ill feelings toward the developers of Echo, or other similar frameworks (Go Micro specifically). However, touting their framework as the way to build APIs in Go is disingenuous and leads to software that is difficult to maintain.

Just my two cents.


>I helped many teams move from Java to Go, and came across this so many times. Everyone wanted to find the Spring equivalent for Go (or Django, Ruby on Rails, etc) when it doesn’t actually exist. That’s by design.

I'm not sure how it's "by design". Go, by itself, doesn't give much more than e.g. Java gives with its Servlet base libs, etc. And yet Java has Spring, and some popular framework could very well emerge as the "THE" framework for Go.

Whether that framework would conform to the stdlib interfaces / middleware is another question. It might not, if it's compelling enough (there are lots of popular Java non-servlet API conforming frameworks, e.g. Play, Vert.x, etc).

It's only because of fragmentation (and a still-small market, with lots of NIH) that one hasn't emerged as such, not some special Go design. Many/most Go-ers like to keep it simple, so they don't adopt any big framework lib (after all, if they didn't keep it simple, they wouldn't be using Go).


I think it's only a matter of time before there's a solid, full-featured Go web framework that rivals Rails/Django.


I might be wrong, but it seems that Go needs proper generics support for that to ever happen, and that's one of the main reasons why there is no equivalent of RoR in the C world.

For a very good perspective on this, check out this excellent article by the creator of the Stanza language, responding to a friend's request to stop creating new languages and just write libraries:

https://jaxenter.com/stop-designing-languages-write-librarie...


A much-needed interface in the stdlib is logging.


Yeah I made a library for that. I am not sure if people are actually using it but it got over 70k goddamn hits in my access.log!


Agree. I wasn't at all convinced by the examples showing that Echo was somehow better than using the "built in" signatures, especially since there's quite a bit of handling that Echo (and things like it) makes implicit that you'll probably want to be explicit about in some edge case.

Coming from WSGI, I found Go's middlewares familiar and positively pleasant.

I too ended up on Chi as the router that worked well without getting in the way.


> To those who say frameworks are needed because the stdlib is not enough: No, the stdlib defines very clear interfaces that libraries can implement.

Yeah, I'm not much into Go development but I've talked to a few devs who built real web apps with it and they rarely use any libraries, but do use "interface style" libs like Gorilla Mux.

Here's a snippet from my podcast where I talked to Jon Calhoun on using mostly the standard library and about 15k lines of code to build a video course platform in Go: https://runninginproduction.com/podcast/42-creating-a-video-...


> they rarely use any libraries... He mentioned some of the Gorilla packages that he used too, such as Mux for routing.

Gorilla is perhaps the most maximalist Go web library "system", and its mux is by far the most complex of the common routers. Using Gorilla for a site is about as far outside the Go stdlib as you can get before reimplementing core protocol handling.


The more you get to know Go's stdlib, the less you'll think a web framework is necessary.

You'll find that utilizing http.RoundTripper along with the Handler interface in net/http makes middleware easy.

You'll eventually dig into the filepath and path packages for extracting path parameters from URL paths.

And logging and recovery are concepts you'll need to extend beyond just http and into the rest of your application.
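
For example, a bare-bones recovery middleware in the standard shape; a sketch, not battle-tested code, and the same recover-and-log idea carries over (minus the HTTP bits) to background workers:

    package middleware

    import (
        "log"
        "net/http"
    )

    // Recover turns a panic in the wrapped handler into a 500 and a log line.
    func Recover(next http.Handler) http.Handler {
        return http.HandlerFunc(func(w http.ResponseWriter, r *http.Request) {
            defer func() {
                if v := recover(); v != nil {
                    log.Printf("panic serving %s: %v", r.URL.Path, v)
                    http.Error(w, "internal server error", http.StatusInternalServerError)
                }
            }()
            next.ServeHTTP(w, r)
        })
    }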

I totally advise newcomers to use a web framework, but in the long run, you probably won't want to.


I see this sentiment a lot in the Go community. I think it is reasonable in some cases, but there are many use cases (vanilla CRUD web apps) where a web framework is really helpful.

The standard library is very low level. Want sessions? DIY. Want user auth? DIY. Want CSRF protection? DIY. The list goes on.

It feels like a waste of time implementing these "solved problems" from scratch, but the biggest problem is how easy it is to introduce security vulnerabilities when implementing from scratch, or forgetting to do so.

It's nice to learn concepts from first principles by using the standard library. But once I know how these things work, I'd rather rely on someone else's battle-tested code and best practices.

Yes, you can add in separate libraries to solve these specific problems, but they are less likely to compose as well as they would in a framework. On top of this, each time you pull in a new library you have to spend time evaluating it. When I use a framework I don't have to think.


The general advice is not to DIY everything by using the stdlib; it's to use packages that conform to the stdlib interface, because doing so gives you infinite composability. All of the concerns that have been pointed out have good, tested, rock-solid implementations available that you can just drop in, mixing and matching from different authors and frameworks. All because they use the same interface.

This isn't even a new idea. Many Ruby on Rails plugins are actually Rack plugins (even to the point of Rails itself being implemented as a collection of Rack middleware). Rack is the interface that defines how a request is to be handled, similar to the Go stdlib interface.

It's definitely true that idiomatic Go tends towards copying being better than dependencies, but the standard interfaces make it much easier to use and swap tried and tested dependencies because they all share the same interface.


I find that this is a totally fine trade-off to make until it isn't, and by then you're completely confined by your choice of framework. Better to use libraries built to compose around interfaces taken from the standard library, so you lose none of the control. I'll also emphasize that doing it this way does make discovering the initial pieces harder than if they're all in one framework together, but I find the slight increase in research yields orders of magnitude better results once your code starts to age past the two-year mark and you reap all the reliability and composability of the standard library.

Also, here's a list of out-of-the-box library implementations for all the features you mentioned:

Sessions: https://github.com/gorilla/sessions

CSRF: https://github.com/gorilla/csrf

User auth: https://github.com/qor/auth


> The standard library is very low level. Want sessions? DIY. Want user auth? DIY. Want CSRF protection? DIY. The list goes on.

That's not the sentiment expressed here. To implement these things from scratch is not the only alternative to using a framework.


I agree with this. I do believe that if you're writing a prototypical, client-facing, transactional application, a web framework is very useful.

I also advise anyone coming from Django or friends to use a web framework.

But after some time spent with the stdlib, and understanding how some of those implementations work, it gets to the point where I'd rather not read another set of documentation, learn a new mental model, and deal with bugs. This all comes with a framework.

After a while you realize that the stdlib provides most of what you need, and that writing more vanilla Go can be simpler than learning a full framework.

I will admit, most of my work in Go revolves around internal services that don't deal with web technologies such as CSRF and CORS. So I do acknowledge my opinion here leans toward those use cases.


I've been writing Go services in production for ~5 years and I don't really agree. Go web frameworks are usually what used to be referred to as micro frameworks. They're great because they embed the bare minimum, making the writing experience tolerable.

What you say is true, it's easy to do with the stdlib, but it's neither enjoyable nor readable. A router lib and a validation lib are realistically still needed.

The same applies to the sql package; it's basically unusable without at least something like sqlx.
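
For anyone who hasn't used it, roughly what sqlx buys you over bare database/sql (the DSN and schema below are made up for illustration):

    package main

    import (
        "log"

        "github.com/jmoiron/sqlx"
        _ "github.com/lib/pq"
    )

    type User struct {
        ID    int64  `db:"id"`
        Email string `db:"email"`
    }

    func main() {
        db, err := sqlx.Connect("postgres", "postgres://localhost/app?sslmode=disable")
        if err != nil {
            log.Fatal(err)
        }

        // Select scans each row straight into the struct via its db tags,
        // which is exactly the boilerplate database/sql makes you write by hand.
        var users []User
        if err := db.Select(&users, "SELECT id, email FROM users WHERE active = $1", true); err != nil {
            log.Fatal(err)
        }
        log.Printf("%d active users", len(users))
    }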


Here is the thing, I have been doing Web development alongside native for a couple of decades now.

This Go enlightenment seems to only touch those who don't realize that programming to interfaces was already a thing back in the Objective-C and WebObjects days, or when using Smalltalk categories (later formalized as traits in Pharo).

Also, languages like Java and .NET have had a similar HTTP server in their standard libraries since Java 6 (2006) and .NET 2.0 (2002).

The reason we don't use them beyond toy examples is that they don't scale when things start getting hard, and something like IIS, nginx or similar is called into action.


Could you explain what you mean by "things start getting hard"? Why did the languages you mentioned stop using their HTTP interfaces?


Because that is just the basics; when doing web applications, besides handling HTTP requests you need:

- workload distribution

- security and authentication

- role management via some form of directory services like LDAP or AD

- running tasks in the background in response to certain events

- mapping into various kinds of databases and information sources

- handling caching

- a way to manage reusable components of html/css/js + respective backend code

Yep, one can make use of libraries to achieve all of that, but then most likely they don't compose in an easy way, nor form an ecosystem.


> The reason we don't use them beyond toy examples is that they don't scale when things start getting hard, and something like IIS, nginx or similar is called into action.

Hmm? You seem to be mixing two things here. Nobody is suggesting not using NGINX/Caddy etc. The comment was about being able to go very far with just the standard library.


It sounds like what you've done is used a bunch of tools that weren't designed with web development in mind and made them do those things. That seems like not only a security issue but also an absolute mess to get into after you're done.


100% correct. That sounds like a nightmare. Most backend web devs don't have the desire/expertise to handle all the intricacies of web security.


There's a lot of non-obvious stuff to cover, like getting all the caching related headers right, vary headers, relationships between proxy headers and logging, thwarting path traversal, dealing with file uploads, and so on.


What path to this enlightenment would you recommend to one who has cut his teeth and delivered many projects (over many years) in Rails/Django?


This is a side effect of Go's lack of metaprogramming, to be honest. So much magic that is possible in rails/django/etc is just not possible in golang.

This leads to go web frameworks being sort of semi-hard mountainous turds of code generation. You’ll want to stick to stdlib after sifting through them and deciding they aren’t worth it.


Admittedly I liked Django's database layer but using it in Real World production has also left me feeling kind of... eh. Having automigrations and an ORM is fantastic 90% of the time. The other 10% of the time almost ruins the magic entirely. The ORM is not a panacea like SQLAlchemy nearly is. Issues with migrations in production can get surprisingly tricky surprisingly quick.

The rest though I do not miss at all. Middleware patching attributes directly into the request object? Python's sad excuse for incremental typing? Ignoring the database bits of Django I struggle to find anything that I don't enjoy doing more in Go. I was a HUGE fan of Django REST Framework and assorted boilerplate for a long time, but I am so extremely glad to be off that ride. Yeah, it lets you do really cool, complex things succinctly and cleverly. The problem is that it lets you do really cool, complex things succinctly and cleverly. The cleverness becomes the enemy. I now have learned to appreciate code that is utterly stupid, obvious, nearly braindead. Every codepath is screaming at you. That is Go in a nutshell. if err != nil { ... } ad nauseam. Sounds terrible... but it kind of isn't.

There are some things I would not use Go for. Game development is one of those things. Web servers, though? If you are going to be doing serious work in production environments, Go is absolutely among the best choices.

I am still, however, looking forward to trying Rust more and more. My initial impressions with Rocket.rs have been lukewarm. (One thing that is bothersome but not quite a deal breaker is compile times. A lot of crates make the experience bad almost immediately.)


Using SQLAlchemy has been my biggest (technical) regret in my current project; it was great at first, but over time it's made testing a lot harder and session/object management has led to subtle bugs.


Getting Flask/Pyramid + SQLAlchemy + pytest to play nicely with sessions, transactions and rollbacks has often been the one thing that made me go "screw it" and start over with Django.


I feel like it’s a great tool to have available because it has solutions for everything you might want, but I found myself preferring it more for scripting than for application servers. Never quite figured out how to properly handle database sessions, really.


Speaking from experience, session/object management is always tricky with ORMs. SQLAlchemy has been good for us, but the usage has been very, very disciplined and doesn't look idiomatic.


After Django, writing database code manually in Go is not fun at all.


> You'll find that utilizing http.RoundTripper along with the Handler interface in net/http makes middleware easy.

I thought RoundTripper was a purely client interface - am I missing something? Servers instead have the base Handler, net.Listeners, ConnState, BaseContext, ConnContext etc.
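
(For reference, RoundTripper is the client-side hook: you wrap a transport the same way you wrap a Handler on the server. A rough sketch:)

    package main

    import (
        "log"
        "net/http"
        "time"
    )

    // loggingTransport wraps another RoundTripper - the client-side analogue
    // of wrapping an http.Handler on the server.
    type loggingTransport struct {
        next http.RoundTripper
    }

    func (t loggingTransport) RoundTrip(req *http.Request) (*http.Response, error) {
        start := time.Now()
        resp, err := t.next.RoundTrip(req)
        log.Printf("%s %s took %s", req.Method, req.URL, time.Since(start))
        return resp, err
    }

    func main() {
        client := &http.Client{Transport: loggingTransport{next: http.DefaultTransport}}
        if _, err := client.Get("https://example.com"); err != nil {
            log.Fatal(err)
        }
    }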


This article has almost nothing to do with Go in particular. Just...general stuff to follow when writing code. What is it here that talks specific things about "Go" + "in production"?


There's an ongoing debate among Go devs over whether the stdlib is enough to perform « real world » work (a debate which doesn't really exist in other languages). This article is from someone who thinks it's not. That's interesting.


It's not really a debate. It divides fairly neatly along the lines of "has been writing Go for a couple of years" vs "has come to Go fairly recently".

There's a well-trodden path of developers who were trained in PHP/Rails/JS/Django/etc starting on Go. Their first question is always "what framework should I use?" (you can see this being asked at least once a week on r/golang). They then go down that path, and find (what they think to be) Go to be clunky and boring and hard to think about. There's a minority that decide that what Go needs is another framework, because the one they tried obviously isn't working for them.

Then there's a split, and some people write articles like this one, and probably move on to Rust or whatever language is next for them. The others start understanding. They refactor their code to not use that logging library because they start to understand the understated power of the stdlib's logging functions. They refactor to get rid of the ORM that's become a problem. Slow realisation dawns and they finally get rid of the framework and go back to writing http.Handlers. This process usually takes a couple of years (well, it did for me anyway).

And then our reborn Gopher goes to the forums to spread the light: "all you need is the standard library! You don't need a framework!", and is met by derision and misunderstanding. The Go community gets a reputation for being unjustifiably anti-frameworks and a bit weird about ORMs too. Eventually our hero shrugs their shoulders and lets the newbies find their own path, contenting themselves with upvoting those who understand.


I'm not a backend developer and I can't comment on the actual framework vs no-framework discussion in Go. What I want to say is that it looks a bit arrogant to portray yourself as this enlightened person who has seen what would take other people years to understand, and to tell them that you won't bother to convince them but that they'll understand one day.

There has to be a better way to communicate your point.


It took me years too, as I said.

I wrote a whole logging library, and a "better" database access library. I even put them up on Github and asked for feedback in the Go google group. Mostly I got told "you don't need either of these", which I ignored. Until I finally came to understand.

It's a well-trodden path. I'm not the first to walk it, and I see others starting on it now.


Learning over time isn't arrogance, it's just the truth


When given. I can't not be late. scrope


We use the standard library in production. Never had a problem with it. Its performance is great, and the http.Handler interface is a great pattern too.

I rarely find myself straying far from the standard library. If there's something that takes a bit more code but doesn't bring a dependency then that's what I'll do.

I definitely agree that there comes a day where you just see the light and realise that simple is good, the standard library is good, and that you don't need most of the stuff a framework gives you. But it gets really hard convincing others that Go is a great language because it's not Rust. Don't know why there's so much hate for it in this community. I've never fallen in love with a language like this before. It's just incredibly productive for me and fits my thinking perfectly.


agree 100%. This is exactly how I feel about it.


Some of them move back to PHP once they realise the PHP standard lib is actually ridiculously full of helpful tools for web development, and the idea of treating individual PHP files and the directory hierarchy as the routing structure makes everything so bloody simple. Pair that with Apache and the newer mod_php and you have an extremely easy way to get up and running. The development feedback loop is ridiculous and everything is simple again.

Frameworks need to die.


I think it's stuff you need once you move beyond basic learning Go projects.

I found it pretty useful; the tips about sqlx, Echo, and the quick Dockerfile example were useful reminders. The rest I've seen elsewhere, but yeah.

I get what you're saying, but I found this useful.


Haha. Just to attract more clicks


I wouldn't describe this as good lessons for using Go in production, I would describe this as "opinions I came to after building my first real Go web project."

1. You probably do NOT need a framework

Use the default Go HTTP libraries. For the other functionality that you'll need, use libraries. If you throw in with frameworks like Labstack Echo, you're forever coupled to the incredibly specific and one-note behavior of the framework you choose. The dependencies you choose should be light, with their most attractive aspect being the interfaces they provide. I point most strongly to go-kit as an EXCELLENT set of libraries for writing HTTP services. Their Log package is a small example of what I look for in quality libraries.

2. You NEED a good code structure

You need to be structuring your code, yes. But you should NOT be leaning on the conventional directory structure to give meaning. You will need to read more of other people's code than you will need to write your own, and other people will not be following your code structure. Much more useful is to get good tools and practices for reading code. Go, unlike other languages (e.g. C#), is meant to be readable and understandable without a heavy IDE there to help resolve elaborate indirection and overloaded imports. If you have the code on your disk and can use grep, you'll do fine. If you're using an editor which supports language servers like gopls, you'll be able to fly through codebases.

3. Pick a DB driver wisely, and the wisest choice for SQL driver is database/sql

SQLx is not database/sql, which is a real problem when most all the ecosystem is built around database/sql. The only real pain point people have with database/sql is the scanning, which is why other people have built libraries to help with this: https://github.com/kisielk/sqlstruct

Probably just use database/sql; a sketch of the manual scanning it involves is at the end of this comment.

4. Docker

Most all of this advice is fine.

Basically, I recommend sticking to the more lightweight and more standard implementations, as they have the abstractions you'll need for the long haul. Though if you're just trying to get a thing going ASAP and it has to be Go, do what you gotta do with the code that catches your fancy.
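
On point 3, here's the manual scanning I mean; a sketch with a made-up schema, and exactly the boilerplate sqlstruct/sqlx exist to remove:

    package main

    import (
        "database/sql"
        "log"

        _ "github.com/lib/pq"
    )

    type User struct {
        ID    int64
        Email string
    }

    func main() {
        db, err := sql.Open("postgres", "postgres://localhost/app?sslmode=disable")
        if err != nil {
            log.Fatal(err)
        }
        defer db.Close()

        rows, err := db.Query("SELECT id, email FROM users WHERE active = $1", true)
        if err != nil {
            log.Fatal(err)
        }
        defer rows.Close()

        var users []User
        for rows.Next() {
            var u User
            // The per-column Scan is the pain point; everything else is plain stdlib.
            if err := rows.Scan(&u.ID, &u.Email); err != nil {
                log.Fatal(err)
            }
            users = append(users, u)
        }
        if err := rows.Err(); err != nil {
            log.Fatal(err)
        }
        log.Printf("%d active users", len(users))
    }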


> You will need to read more of other peoples code than you will need to write your own code,

Any examples of good go web projects to look at? All of the ones I've found have fallen into either "directory dump" or "I'm an mvc but not really".


> I recently got a DevOps job that mostly involves writing a new backend system in Go completely from scratch.

If this wasn't on the front page of HN I'd have stopped reading here, but it is, so I didn't, then I regretted it.


What offends you in that statement?

FWIW, I usually write JVM apps in Java/Kotlin, occasionally Scala. I had an app to write that extensively used K8s APIs, and I found that the best client API was (naturally, considering the makeup of the K8s ecosystem) the Go one, so I wrote the app in Go.

I had to overcome some common Go issues (use a map[t]interface{} when you need "is x in a list of unique elements"; if you need more advanced set operations on custom types, you're going to need to generate code or hard-code each implementation, because generics are for suckers, allegedly. Hit some issues around channels and blocking reads and buffering, but worked around them easily enough using pointers and locks where channels weren't appropriate).
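
(For the curious, that membership check usually ends up looking like the sketch below; map[T]struct{} avoids even storing empty-interface values.)

    package main

    import "fmt"

    func main() {
        seen := map[string]struct{}{} // empty struct values take no space
        for _, name := range []string{"api", "worker", "api"} {
            if _, ok := seen[name]; ok {
                fmt.Println("duplicate:", name)
                continue
            }
            seen[name] = struct{}{}
        }
    }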

But aside from the limitations of Golang, overall it was a rather straightforward experience. Stuff worked, tools worked, IntelliJ IDEA integration worked, Go's module system worked (DIAF $GOPATH), code ran, no worries.

Will say though, Go logging realllllly needs some work from the community. An SLF4J equivalent at least. I used logrus as my logging library as it gave me the easiest route to making my logs go to Kafka in Logstash format, but it has some insane defaults - like the log event timestamp defaulting to "seconds since start-up".

Oh, and be very careful with named returns. Naming a return, but then assigning nothing to it, will compile. Which is, to say the least, rather contrary to how most of Go works.


> Naming a return, but then assigning nothing to it, will compile.

This, I believe, comes down to Go having sane default values, i.e. if you name a return of type string and assign nothing to it, you'd get "", which is a valid string value and something you can check against.

But I agree it should at least be a lint warning.
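
Concretely, something like this compiles and silently returns the zero value:

    package main

    import "fmt"

    // s is a named return; nothing is ever assigned to it, yet this compiles
    // and callers simply get the zero value "".
    func name() (s string) {
        return // equivalent to `return s`
    }

    func main() {
        fmt.Printf("%q\n", name()) // prints ""
    }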


Yeah, I didn't quite realise it actually initialised the variable when I first started using it. I mean, as far as footguns go, it's a pretty minor one, and overall I quite enjoyed my Go experience; it was very easy as a Go beginner to write code that worked.


Can you at least explain why?


For the same reason as if it had said “I recently got a React dev job which mostly consists of writing Kubernetes deployments.” It sounds like the beginning of an article which uses buzzwords without a lot of depth.


Why would you want to serve a 200 with an error message in a JSON response body?


I've seen this pattern before, and one can argue that it is the correct one. One option is that your HTTP status is your application's status, which I think is what most of us are used to.

The other option is that HTTP status codes are just HTTP status codes. As in, the HTTP request was handled correctly, but the application logic wasn't. More specifically, the HTTP layer was executed successfully, but the application layer was not.


That is simply a 500 error. Or a 422 if the request was semantically wrong.

No, this pattern is not the right one for REST. Don't call it REST, call it command/action RPC. The 90s called - they want their architecture back.


But there are response codes for those situations. If the request is malformed you should send a 400 bad request. If the application errored, send something in the 500 range.


I've seen this pattern before, and one can argue that it is the correct one. One option is that your HTTP status is your application's status, which I think is what most of us are used to.

It's pretty infuriating actually. On a similar note, I have to use certain command line tools provided by a third party vendor that exit 0 on failure and write something to STDERR (on success they exit 0 and write something to STDOUT). The Unix conventions evolved over decades because they were consistent and useful.


You're on the Internet. You are going to see all the patterns and combinations you can think of.

But there's an RFC, and that's what defines correct usage.


This is more RPC style. I used to prefer this approach because that way you're not constrained to the Nxx error codes of http; it's just request, response. Having worked in a few companies since then, it seems odd to me now. Besides, all your middleware etc is probably going to use status codes conventionally, so your application code's 2xx error responses will stick out like a sore thumb.


A downside of "200 for everything, error in the body" is that it locks you out from using off the shelf API monitoring tools. Now you need some custom monitoring tool that can read & understand your custom error bodies.


One option is that your HTTP status is your application's status, which I think is what most of us are used to.

I used to use HTTP status codes in this way because I understood it was the correct REST way of doing things.

However, one day, a sysadmin contacted me to tell me that we had broken a release because our API was returning a 404. Actually it was a problem in the checking script that was checking for data that was no longer in the DB.

By making application codes equal to HTTP status codes, we had removed any way to distinguish between fatal errors and API results.


404 is literally the correct response for this behavior, per the HTTP spec.

There was no server-side error. The requested resource was unable to be located because it no longer existed. You should return a 404 here, and not just for non-HTML API clients.

Application-side fatal errors are in the 500 block. So yes, you absolutely can distinguish this case.


The requested resource was unable to be located because it no longer existed.

I should have mentioned that the API was behind a reverse proxy. This leads to the following questions:

Which requested resource is missing? The API itself or the item requested from the API? How does a client distinguish between these?


I can’t imagine any way a reverse proxy would generate a 404 or indeed any 4xx error within itself. Any such error would be coming from the upstream application server.

If the API itself was not found that’s one of 502, 503 or 504.

200 responses with an error property are an antipattern precisely because of things like reverse proxies. They have no way to unpack and interpret every developer's pet error format. They _do_ understand HTTP status codes and can act accordingly, like retrying requests, not caching responses, etc. as appropriate.


I can’t imagine any way a reverse proxy would generate a 404 or indeed any 4xx error within itself.

What if the configuration changes and the API path/URL is no longer in the reverse proxy config? What if the reverse proxy is dynamically configured and our application didn't register itself properly?

They _do_ understand HTTP status codes and can act accordingly, like retrying requests, not caching responses, etc. as appropriate.

Of course applications should return HTTP codes when appropriate, e.g. 500. The principle is that using HTTP codes for application-specific information, e.g. an item not found in the DB, is a blunt instrument.

They have no way to unpack and interpret every developer's pet error format.

I would argue that they don't need to. The conversation is between client and API at a higher level than HTTP. HTTP codes are great for information such as "it's broken", "please authenticate", "it's busy". But HTTP codes are not so useful for things such as "item not in db", "parameter x is missing", "unsupported API version", because this information has nothing to do with HTTP.


404 is a normal response code. Something is wrong in the infrastructure if you have these doubts.


I just had an issue today in which a customer deleted a service account (which they shouldn't touch) that our API requires to process their entities. Right now I send a 5xx server error, but since it's a customer misconfiguration problem (the problem is not on the server side, and it's not a bad request, but a setting the client changed), I would use this pattern. It would avoid triggering our alerts.


1. As others said, RPC is sometimes fine.

2. You can return a proper HTTP status and a JSON response body detailing what happened.
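
A sketch of option 2 using only the stdlib (the error shape here is made up; the point is that proxies and monitoring see the 404 while clients still get application-level detail in the body):

    package main

    import (
        "encoding/json"
        "net/http"
    )

    type apiError struct {
        Code    string `json:"code"`
        Message string `json:"message"`
    }

    func getItem(w http.ResponseWriter, r *http.Request) {
        // Pretend the lookup failed: proper status for the HTTP layer,
        // application-level detail in the JSON body.
        w.Header().Set("Content-Type", "application/json")
        w.WriteHeader(http.StatusNotFound)
        json.NewEncoder(w).Encode(apiError{
            Code:    "item_not_found",
            Message: "no item with that id",
        })
    }

    func main() {
        http.HandleFunc("/items/", getItem)
        http.ListenAndServe(":8080", nil)
    }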


Because RPC is fine.


Another reason to use GraphQL. No more endless discussion about which HTTP codes to use, which HTTP action to use, how to send in data, how to format your response, ...


How does GraphQL allow you to not worry about those things?


GraphQL only uses two HTTP methods (GET and POST), but they don't actually differ in function: you can do any kind of reading/writing query in both GET and POST requests[1]. POST requests are used because they allow for larger bodies.

GraphQL defines the format of the response in case of errors[1]

GraphQL doesn't use HTTP status codes to communicate out-of-the-ordinary conditions. You can expect to always get HTTP status 200[1]

The data response is a mirror of the query you sent in with the data present[2]

How to query for data is explicitly laid out[3]

How to send in parameters is explicitly laid out[4]

[1] https://graphql.org/learn/serving-over-http/
[2] https://graphql.org/learn/
[3] https://graphql.org/learn/queries/#fields
[4] https://graphql.org/learn/queries/#variables


AFAIK GraphQL does have response error codes. You should always have response error codes no matter the form of the API. They are semantic info.


GraphQL does not have response error codes. It only dictates that any response that has errors should have an "errors" field. It is not defined what this error field should contain; it can be a string or an object with a code and text, etc.


I could be wrong. Some of the graphql servers I have interacted with have given me useful error codes. E.g GitHub.

We do have nice wrappers around the request to raise a proper named exception regardless so it doesn’t matter.


And when intermediate proxies happily retry your small “delete the most recent record” request because it was a GET - which, in the rest of reality, is guaranteed not to alter state on the receiving server, except in the corner of the internet you've defined as your own - it's destructive?


You can use GET for read queries and POST for write queries.

Edit: actually you made me think a little bit more about this: if you can make your mutations idempotent, spurious retried POST requests shouldn't be a problem at all. However, "delete the last record" is not an idempotent operation by definition, but it's also one you wouldn't use in the real world - usually you delete by ID.

Edit 2: it's easy to make the server reject mutations sent via GET.


RPC over HTTP pretty much always goes through POST, sometimes with the option of using GET for querying as an optimisation.

For graphql specifically, if you allow GET-ing the GraphQL endpoint (which usually isn't the case by default), it's trivial to ensure only queries go through that method.


And adds a bunch of other endless discussions: how to cache data (POST is not cacheable), how to auth data (anyone has access to everything), how to...


I don't feel like this is added by GraphQL because you'll have these questions regardless of using GraphQL.

how to cache data (POST is not cacheable) -> You can use GET requests and GET requests are cacheable.

how to auth data (anyone has access to everything) -> Authentication or authorization? What do you mean with anyone has access to everything?

how to... -> yes?


> You can use GET requests and GET requests are cacheable.

GET requests are a crutch added to GraphQL precisely because of the limitations of POST requests.

And the backend still has to normalise the GET request, and possibly peek inside it to make sure that it is the same as some previous request.

> how to auth data (anyone has access to everything) -> Authentication or authorization? What do you mean with anyone has access to everything?

Your schema is a single endpoint with all the fields you need exposed. Oh, but a person X with access Y might not have access to fields A, B, C, and D.

Too bad, these fields can appear at any level of the hierarchy in the request, deal with it.

> how to... -> yes?

A GraphQL query is ad-hoc. It can have unbounded complexity and unbounded recursion. Ooops, now you have to build complexity analysers and things to figure out recursion levels.

A GraphQL service usually collects data from several external services and/or a database (or even several databases). But remember, a GraphQL query is both ad-hoc and of potentially unbounded complexity. Oh, suddenly we have to think about how much data we retrieve and when, how we get the data without retrieving too much, and how we avoid hammering the external services and the database with thousands of extra requests.

That's just from the top of my head.

And so you end up with piles of additional solutions of various quality and availability on top of GraphQL servers and clients: caching, persisted queries etc. etc.


> GET requests are a crutch added to GraphQL precisely because of the limitations of POST requests.

How are GET requests a crutch? If anything GraphQL is completely agnostic to which HTTP method you use to access it. You don't even have to run GraphQL over HTTP, it can work over MQTT, NATS, telnet...

> And the backend still has to normalise the GET request, and possibly peek inside it to make sure that it is the same as some previous request.

Which is what any caching proxy must do anyway?

> Your schema is a single endpoint with all the fields you need exposed. Oh, but a person X with access Y might not have access to fields A, B, C, and D.

In your GraphQL implementation you can just deny fulfilling requests that contain fields person X doesn't have access to. This problem is not limited to GraphQL, it's a generic authorization problem.

> A GraphQL query is ad-hoc. It can have unbounded complexity and unbounded recursion. Ooops, now you have to build complexity analysers and things to figure out recursion levels.

You don't have to build a complexity analyzer or figure out recursion levels, there are already tools that do that for you. But you can go another way and just create a list of approved queries.

> A GraphQL service usually collects data from several external services and/or a database (or even several databases)

Usually? That's just speculation. And that's entirely on the implementation of that service, it has nothing to do with GraphQL spec/technology itself.


> How are GET requests a crutch?

They were not in the original spec IIRC. URLs are limited in length (it's not in the spec, but most clients have a limit) etc.

> Which is what any caching proxy must do anyway?

Nope. A caching proxy can benefit from HTTP Cache Headers [1]. But cache headers don't work well with GraphQL's GET requests, and don't work at all with the default, which is POST.

> This problem is not limited to GraphQL, it's a generic authorization problem.

GraphQL makes it significantly more complex though. Because your requests are ad-hoc.

> You don't have to build a complexity analyzer or figure out recursion levels, there are already tools that do that for you.

Indeed. By adding more and more complexity. And no, tools only solve part of the problem. A dataloader on the server alone doesn't entirely solve the N+1 problem.

> But you can go another way and just create a list of approved queries.

Turning it into REST with none of the benefits of REST.

> Usually? That's just speculation. And that's entirely on the implementation of that service

It's not speculation. That's the main use case for GraphQL. But even if you just slap it on top of a single database, you still have the problem of ad-hoc queries hammering your database.

[1] https://www.keycdn.com/blog/http-cache-headers


> They were not in the original spec IIRC. URLs are limited in length (it's not in the spec, but most clients have a limit) etc.

They were not in spec because the spec doesn't say anything over which medium it should be transported. In fact the spec [1] only mentions the word HTTP 5 times: 4 times in example data and one time discussing implementation details when sending data over HTTP. GraphQL can't be faulted for the limits of the transport over which it is used.

> Nope. A caching proxy can benefit from HTTP Cache Headers [1]. But cache headers don't work well with GraphQL's GET requests, and don't work at all with the default, which is POST.

How do cache headers not work well with GraphQL GET requests? That is entirely up to the server that implements the API. If that server doesn't implement caching well, that's not GraphQL's fault.

> It's not speculation. That's the main use case for GraphQL. But even if you just slap it on top of a single database, you still have the problem of ad-hoc queries hammering your database.

The main use case of GraphQL is any two things that want to exchange data with each other. That merging data from multiple data sources is its main use case is simply not true. Merging different data sources is one of GraphQL's capabilities, but it's not intrinsic to it.

> Turning it into REST with none of the benefits of REST.

And what exactly are those benefits? I'm here defending GraphQL yet none of the downsides of REST are being taken into account. GraphQL brings structure where there was none, that alone is a significant reason to choose GraphQL to structure your API.

> N+1 problem

There are tools like Postgraphile that solve this. It converts your GraphQL query into one efficient database query.

> ad-hoc queries hammering your database

And what prevents anyone from hammering a REST API? GraphQL doesn't release the developer from implementing sane constraints - something that has to happen with any API implementation and not specific to GraphQL.

[1] http://spec.graphql.org/June2018/


> They were not in spec because the spec doesn't say anything over which medium

If not the spec, then the original documentation. GET is a late add-on.

> How do cache headers not work well with GraphQL GET requests?

In REST:

- a resource is uniquely identified by its URI

- when the server sends back cache headers, any client in between (any proxies, the browser, any http clients in any programming language etc.) can and will use these cache headers to cache the request

In GraphQL GET:

- http://myapi/graphql?query={user{id,name}} and http://myapi/graphql?query={user{name,id}} are two different requests

- it gets worse for more complex queries, especially if they are dynamically constructed on the client

- each of those is viewed as a separate query with separate caching

- cache normalisation and query normalisation are a thing in the graphql world (and non-existent in REST) because of that.

That's yet another layer of complexity that you have to deal with

> And what exactly are those benefits? I'm here defending GraphQL yet none of the downsides of REST are being taken into account.

I wish anyone was willing to discuss the downsides of GraphQL. Bashing REST is the norm, but GraphQL is the holy grail that accepts no criticism.

Benefits of REST over GraphQL, off the top of my head:

- it's HTTP, plain and simple. So everything HTTP has to offer is directly available in REST. See this HTTP decision diagram [1]

- caching doesn't require you to normalise and unpack every single request and response just to figure out if something is cached

- You know your requests, so you can provide optimised queries, resolution strategies, and the necessary calls to external services as required by each call

> There are tools like Postgraphile that solve this. It converts your GraphQL query into one efficient database query.

I'd love to see that proven for any sufficiently complex and large database.

> And what prevents anyone from hammering a REST API?

The absence of ad-hoc queries prevents anyone from hammering a REST API, which you can specifically tune to the specific request and data you need.

GraphQL requires significantly more care especially if you're not running it on just one database. And even then, oops, joins: https://news.ycombinator.com/item?id=25014918

And we're back to requiring the graphql server to be able to limit recursion depth, query complexity, etc. etc.

[1] https://github.com/for-GET/http-decision-diagram/tree/master...


You almost certainly don't need a web framework. Over the long term, it's easy to find yourself boxed in with how opinionated many of them are.

Instead, if you build on the standard library, you can compose your application from there: a good muxer, some standard middleware that are generic http.Handlers, a session library, etc.


So, you chose a good muxer. Now, which of hundreds of logging, CORS, sessions, JWT and permissions middlewares work with said mux and with each other best?

Or, and bear with me here, you can outsource those decisions to other folks, and just write your business logic.


I personally like this combo as my "framework":

* go-chi/chi

* rs/zerolog

* html/template

* spf13/viper

* raw SQL
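
Roughly how those pieces wire together; a sketch assuming the current import paths (chi v5, zerolog, viper):

    package main

    import (
        "html/template"
        "net/http"
        "os"

        "github.com/go-chi/chi/v5"
        "github.com/rs/zerolog"
        "github.com/spf13/viper"
    )

    var page = template.Must(template.New("index").Parse("<h1>{{.}}</h1>"))

    func main() {
        viper.SetDefault("addr", ":8080")
        logger := zerolog.New(os.Stdout).With().Timestamp().Logger()

        r := chi.NewRouter()
        r.Get("/", func(w http.ResponseWriter, req *http.Request) {
            page.Execute(w, "hello")
        })

        addr := viper.GetString("addr")
        logger.Info().Str("addr", addr).Msg("listening")
        logger.Fatal().Err(http.ListenAndServe(addr, r)).Msg("server stopped")
    }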


sqlx is great for raw SQL


Faced similar issues in both of the Go projects I've worked on, particularly the one that used Mongo as the database.

My conclusion from both experiences is that Go is a tool like any other, with its strengths and weaknesses, and with particular idiosyncrasies that make it especially important to properly architect the application. It can punish you pretty hard[1] if you don't follow proper patterns, or the architecture you decided on.

For instance, I think the issues were more pronounced with the project using Mongo precisely because of the flexibility Mongo offers, which allows you to delay deciding on schemas, and thus means that the architecture itself can change constantly.

[1]: Such as letting coupling and code repetition balloon pretty fast.


For logging libraries, I find TJ's structured logs indispensable: https://medium.com/@tjholowaychuk/apex-log-e8d9627f4a9a

No need to use Echo.


Around the ORM choice, I would stay with raw SQL queries and use sqlc to generate the boilerplate (structs and the repository itself). I have been using it and it is amazing. The only issue is that it only supports Postgres, although it has beta support for other DBs.

https://github.com/kyleconroy/sqlc

SQLx is good for simple read queries, but IIRC for write operations you still need to map things manually, and reads with JOINs are a bit tricky. Gorm might be good for simple CRUD applications, but its magic has a performance cost.


> silently handling panics

That has a bad smell


Especially since caught panics amount to exceptions and most Golang code isn't exception safe. It's a recipe for leaking database connections, deadlocks, etc.


Go error handling is awful, but this isn't fair.

In go, if you write code like the following:

    conn, err := db.Connect()
    defer conn.Close()
That defer will be run during a panic. Same thing as 'defer mutex.Unlock()'

Yes, like most of go, it's manual and painful and poorly thought out, but most people do follow these patterns, so for the most part go code will safely unwind from a panic.


Most of the time, sure, but there are lots of cases where `defer` being function-scoped leads to manual unlocks, e.g. inside loops or short critical sections in the middle of a function. Yes, you can use closures or otherwise, but few codebases are that disciplined. Those edge cases are plentiful enough that catching panics is dangerous.
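
For the record, the closure workaround being alluded to looks like this; fine when you remember to do it, which is the problem:

    package main

    import (
        "fmt"
        "sync"
    )

    func main() {
        var mu sync.Mutex
        counts := map[string]int{}

        for _, id := range []string{"a", "b", "a"} {
            // The closure scopes the defer to one iteration; a bare defer here
            // would hold the lock until the surrounding function returns.
            func() {
                mu.Lock()
                defer mu.Unlock()
                counts[id]++
            }()
        }
        fmt.Println(counts)
    }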


I assume they meant “automatic handling of panics.” Unless that framework has interesting defaults. On mobile, else I’d check.


Having seen the REST API examples they gave, I'd steer clear of using this language for that particular task.


So does the second part's example snippet use transactions or what? It seems pretty dubious, even for an example. With Node.js I pass a function as a parameter to a transactional exec, and any failure rolls back the whole thing. Don't know if that's the same with Go.


For configuration management, I've found viper[1] to be quite handy.

[1]: https://github.com/spf13/viper


Won't Go's ecosystem go through a lot of churn after generics are introduced?


Go will never have generics. Mark my words.



