Hacker News

No process pool management for running Go behind something like Apache or nginx.

Aside from goroutine scheduling issues, what would be the main reasons to run multiple Go processes, instead of just running a single multi-threaded process?




There are perhaps a few advantages to having lots of processes that you wouldn't get with a single process: redundancy in case a process does hang, rolling restarts that switch to a newer version of the code seamlessly by starting new processes to handle requests before killing the old ones, and, as you hinted in your question, I suspect you'd hit some limits of the scheduler.

It's probably possible to have most of that logic in a single Go process which runs a bunch of sub-processes for serving requests, but then you're pretty much writing a load balancer in addition to your app each time. Perhaps it's better to separate those tasks out into a dedicated process manager which runs a pool of processes, as on other platforms (e.g. Ruby with Unicorn or Passenger). Those platforms of course have other reasons for scaling with processes rather than threads, which don't apply to Go.

Not sure how hard or efficient this would be (just using one process) as I haven't tried an app in this style in go, have only been playing with it so far. I would be really interested to see a Go server implementation that managed a bunch of goroutines to serve requests, are there any examples out there?


> Not sure how hard or efficient this would be (just using one process) as I haven't tried an app in this style in go, have only been playing with it so far. I would be really interested to see a Go server implementation that managed a bunch of goroutines to serve requests, are there any examples out there?

I'm slightly confused by this question, because that's what the standard library already does. If you've ever used net/rpc or net/http, it spawns goroutines for each request (or connection, respectively).

If you meant to say spawn a bunch of _processes_ to serve requests, then no, I don't think anyone has done it. I don't think it makes a whole lot of sense for anyone to write code to do this in Go, tbh.

> redundancy in case a process does hang

If you're talking about deadlocks, then only the deadlocked goroutines will be blocked. The rest will make progress, just as if you had multiple processes.

> rolling restarts to switch out to a newer version of the code seamlessly by starting multiple processes to handle requests before killing the old one

You can do this with only 2 processes, old and new. You can spawn the new one, tell your LB to add the new to the pool, wait 30 seconds, remove old from pool, wait 60 seconds, kill old one. You don't need a dedicated LB; you could, of course, use an nginx frontend or something instead. There are also some neat ways to do nginx-style zero-downtime restarts that I've never tried, but I've heard good things about them.

> I suspect you'd hit some limits of the scheduler

AFAIK, limits of the scheduler tend to be hit when you increase GOMAXPROCS to something above 8; at that point, you'll spend a lot of your time in the runtime managing goroutines. My solution is just to run multiple processes with GOMAXPROCS=8 each and point your LB at all of them. Again, you can just use nginx.

Feel free to experiment with the model you proposed, but it's relatively non-idiomatic, and the context-switching cost will start to mislead you as to Go's actual potential. The advantage, btw, of the model I described is that in-memory caching, connection pooling, and context-switch time are all close to optimal, and you have fewer processes to monitor/restart/update.


Thanks for the reply, which clears up some of my misunderstandings (sorry, new to Go). I was using the standard library and hadn't looked under the hood; I'll go take a look at what it does. Re rolling restarts:

> You can do this with only 2 processes, old and new. You can spawn the new one, tell your LB to add the new to the pool, wait 30 seconds, remove old from pool, wait 60 seconds, kill old one.

You still need an LB to do this, though I take your point that you could use nginx; I might experiment with that.



