Fun stuff!

If you like this kind of thing, we are developing a very powerful and flexible reverse proxy with load balancing into Caddy 2: https://github.com/caddyserver/caddy/wiki/v2:-Documentation#...

It's mostly "done" actually. It's already looking really promising, especially considering that it can do things that other servers keep proprietary, if they do it at all (for example, NTLM proxying, or coordinated automation of TLS certs in a cluster).

If you want to get involved, now's a great time while we're still in beta! It's a fun project that the community is really coming together to help build.




Any thoughts or plans for some kind of back-pressure? Health checks and response times are useful, to a degree, but there are a number of workloads where they don't actually capture the cost of the work involved, and they can also really trip you up something nasty under certain failure conditions :D

edit: by way of example, I used to work for a service that customers would upload files to; that's all the traffic was. There was wild variability in the size, processing cost, and upload speed of each request. None of the standard load-balancing approaches really balance "load" from a service's perspective. While things worked, it was rarely optimal.


HAProxy has a built-in way to do some of this -- you can set up an agent check that lets you dynamically adjust weights: https://cbonte.github.io/haproxy-dconv/2.0/configuration.htm...
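
For the curious, the agent is just a TCP service that HAProxy polls; its one-line reply (e.g. "75%\n") scales the server's effective weight. A minimal sketch in Go -- the port, the load metric, and the weight mapping here are made up for illustration, and you'd pair it with something like `agent-check agent-port 9700 agent-inter 5s` on the server line:

    // agent.go: a tiny HAProxy agent-check responder.
    // HAProxy connects periodically and adjusts the server's effective
    // weight based on the one-line reply it gets back.
    package main

    import (
        "fmt"
        "net"
        "runtime"
    )

    // currentWeight derives a weight percentage from a local load signal.
    // Goroutine count is just a stand-in for a real load metric.
    func currentWeight() int {
        load := runtime.NumGoroutine()
        w := 100 - load // hypothetical mapping: more load -> lower weight
        if w < 1 {
            w = 1
        }
        return w
    }

    func main() {
        ln, err := net.Listen("tcp", ":9700") // port is arbitrary for this sketch
        if err != nil {
            panic(err)
        }
        for {
            conn, err := ln.Accept()
            if err != nil {
                continue
            }
            // Reply with "<weight>%\n"; HAProxy scales the configured weight by it.
            fmt.Fprintf(conn, "%d%%\n", currentWeight())
            conn.Close()
        }
    }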

I think most proxies are planning to move logic like this to the control plane. Envoy's gRPC stuff has some ways to dynamically throttle traffic to backends.

Load balancers really need to become programming runtimes, imo. Config languages aren't very expressive, and almost everyone needs their own logic at the LB level.

I _just_ put together a demo of latency-based load balancing using HAProxy + awk, and it's neat, but still very rudimentary compared to what I could express in, say, JavaScript: https://github.com/superfly/multi-cloud-haproxy
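
The gist of it, stripped of the HAProxy/awk plumbing: keep a smoothed latency per backend and map it to a weight. A rough Go sketch of just that idea (not what the demo actually does; the EWMA alpha and the weight formula are arbitrary):

    // latency_weights.go: each backend keeps an exponentially weighted moving
    // average (EWMA) of response times; lower latency gets a higher weight.
    package main

    import (
        "fmt"
        "time"
    )

    type backend struct {
        name string
        ewma float64 // smoothed latency in milliseconds
    }

    // observe folds a new latency sample into the EWMA (alpha is a tuning knob).
    func (b *backend) observe(d time.Duration, alpha float64) {
        ms := float64(d.Milliseconds())
        if b.ewma == 0 {
            b.ewma = ms
            return
        }
        b.ewma = alpha*ms + (1-alpha)*b.ewma
    }

    // weight maps lower latency to a higher share of traffic (clamped to 1..100).
    func (b *backend) weight() int {
        w := int(1000 / (b.ewma + 1))
        if w > 100 {
            w = 100
        }
        if w < 1 {
            w = 1
        }
        return w
    }

    func main() {
        backends := []*backend{{name: "aws"}, {name: "gcp"}}
        // Pretend we measured these response times.
        backends[0].observe(40*time.Millisecond, 0.3)
        backends[1].observe(120*time.Millisecond, 0.3)
        for _, b := range backends {
            fmt.Printf("%s -> weight %d\n", b.name, b.weight())
        }
    }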


Caddy 2 has an embedded scripting language that allows this. We have to flesh it out some more, but it's looking really good.

In some of our early testing on basic workloads, we found that it's up to 2x faster than NGINX+Lua, largely because it does not require a VM. (This is a broad generalization, and we need to specifically optimize for these cases -- but this approach holds promise.)


Oh neat! What language did you all settle on?


F5 BIG-IP can be programmed in Tcl. While it is a programming language, I have only seen it programmed by non-developers: copy-pasted code, repeated string constants all over the place, and no unit tests.

I agree that load balancers do need the expressiveness of programming languages, but ideally only with some typing and the ability to easily unit test.


Yes! And a good surface API. OpenResty (nginx + Lua) is reasonably powerful, but you're really limited by the events they give you.

I'm really hopeful about deno (https://github.com/denoland/deno) for this. TypeScript is nice and the deno TCP perf is good; all it needs is some good proxy libraries.


Yes, aside from multiple load balancing policies, Caddy 2 has a circuit breaker that will automatically adjust the load balancing before latency to a particular backend grows out of tolerance.

Both the load balancing policies and the circuit breakers are extensible, meaning it is easy to change their behavior and add new ones as needed.

I could also imagine a specific load balancing policy that adds up a cost for each request using attributes such as transfer size (e.g. the Content-Length header); i.e. dynamic weights. This would be a great contribution to the project if you are interested!
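
To sketch roughly what I mean (this is not Caddy's actual extension interface, just an illustration in plain Go): keep a running "cost" per upstream, estimate each request's cost from its Content-Length, pick the upstream with the least outstanding cost, and release the cost when the response completes.

    // costpolicy.go: a hypothetical cost-aware selection policy. NOT Caddy's
    // real plugin API; it only illustrates "least outstanding estimated cost".
    package main

    import (
        "fmt"
        "net/http"
        "sync/atomic"
    )

    type upstream struct {
        addr string
        cost int64 // sum of estimated cost of in-flight requests
    }

    type leastCostPolicy struct {
        upstreams []*upstream
    }

    // estimate guesses the cost of a request from its body size; 1 is the
    // floor so cheap requests still count toward load.
    func estimate(r *http.Request) int64 {
        if r.ContentLength > 0 {
            return r.ContentLength
        }
        return 1
    }

    // Select returns the upstream with the smallest outstanding cost and a
    // release func the proxy would call once the response completes.
    func (p *leastCostPolicy) Select(r *http.Request) (*upstream, func()) {
        var best *upstream
        for _, u := range p.upstreams {
            if best == nil || atomic.LoadInt64(&u.cost) < atomic.LoadInt64(&best.cost) {
                best = u
            }
        }
        c := estimate(r)
        atomic.AddInt64(&best.cost, c)
        return best, func() { atomic.AddInt64(&best.cost, -c) }
    }

    func main() {
        p := &leastCostPolicy{upstreams: []*upstream{
            {addr: "10.0.0.1:8080"},
            {addr: "10.0.0.2:8080"},
        }}
        req, _ := http.NewRequest("PUT", "http://example.invalid/upload", nil)
        req.ContentLength = 5 << 20 // pretend a 5 MB upload
        u, release := p.Select(req)
        defer release()
        fmt.Println("routing to", u.addr)
    }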

Caddy 2 also has an embedded scripting language that can make this kind of logic scriptable and dynamic, but that's still a WIP.



