We're running Kubernetes for minor amounts of traffic in production, but we're still not in a great place due to a few limitations in running Kubernetes on your own gear.
I know that people like to talk about cost savings and stuff like that, but I'd like to see if it actually lowered your app latency and increased sales/conversions/whatnot. Things that matter a lot to a growth business.
I ask because the various overlay/iptables/NAT/complicated networking setups in Kubernetes lend themselves to adding more overhead and being much slower than running on "bare metal" and talking directly to a "native" IP. I really, really wish that Kubernetes had full, built-in IPv6 support. It would remove a lot of this crud.
Our solution works around this by assigning IPs with Romana and advertising them into a full layer 3 network with bird. The pods register services into CoreDNS, and an "old fashioned" load balancer resolves service names into A records. Requests are then round-robined across the various IPs directly. There's no overlay network. There's no central ingress controller to deal with. There's no NAT. It's direct from LB to pod.
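To make the LB side of that concrete, here's a minimal sketch of the resolve-then-round-robin step. The service name and pod IPs are made up for illustration, and `fake_resolve` stands in for a real CoreDNS A-record lookup:

```python
import itertools

def make_round_robin(resolve, service):
    """Resolve `service` to its pod A records and return a callable
    that hands back one pod IP per call, round-robin."""
    ips = resolve(service)          # list of A records for the service
    pool = itertools.cycle(ips)     # endless round-robin over the pods
    return lambda: next(pool)

def fake_resolve(name):
    # A real LB would query CoreDNS here and honor record TTLs so that
    # pods churning in and out of the service get picked up.
    return ["10.1.0.4", "10.1.0.5", "10.1.0.6"]

next_ip = make_round_robin(fake_resolve, "payments.default.svc.cluster.local")
```

Since the pod IPs are advertised into the fabric by bird, the LB can open a connection straight to whatever `next_ip()` returns, with no NAT or ingress hop in between.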
The nginx ingress controller is not a long-term solution. It's a stop-gap measure. Someone really needs to build a proper, lightweight, programmable, and cheap software-based load balancer that I can anycast to across several servers. That or Facebook just needs to open-source theirs.
Regarding networking, did you consider Flannel? Its "host-gw" backend doesn't have any overhead, as it's only setting up routing tables, which is fine for small (<100s of nodes) clusters that have L2 connectivity between nodes.
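For context, what host-gw installs on each node is just a plain next-hop route per peer node's pod CIDR, with no encapsulation. A rough sketch (all addresses illustrative):

```shell
# On node A, flannel's host-gw backend adds one route per peer node:
# the peer's pod subnet, via the peer's own node IP as next hop.
ip route add 10.244.2.0/24 via 192.168.0.12   # node B's pod subnet
ip route add 10.244.3.0/24 via 192.168.0.13   # node C's pod subnet
```

The next hop has to be directly reachable, which is why host-gw requires L2 connectivity (same subnet) between nodes, and why it stops fitting once the cluster spans a routed fabric.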
Our network is full layer 3 Clos Leaf/Spine. We'd much prefer something with network advertisements (OSPF/BGP) or SDN. Layer 2 stuff is OK for labs, but I don't know anyone building out layer 2 networks any more.