One does not simply go from a flat network to overlays. Overlays are slow and difficult, cause really odd failures, and are often hilariously immature. They are the experimental graph database of the network world.
Just have a segregated network, and let the VPC/DHCP do all the hard stuff.
Have your hosts on the default VLAN (or interface, if you're cloudy), with its own subnet (a subnet should only exist in one VLAN). Then, if you are in cloud land, have a second network adaptor on a different subnet. If you are running real steel, you can use a bonded network adaptor with multiple VLANs on the same interface. (The need for a VLAN in a VPC isn't that critical, because there are other tools to impose network segregation.)
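On real steel, the bond-plus-VLANs layout is only a few iproute2 commands. A minimal sketch, assuming two NICs and 802.3ad-capable switch ports; the interface names and VLAN IDs are made up:

    # Bond two physical NICs (miimon enables link monitoring)
    ip link add bond0 type bond mode 802.3ad miimon 100
    ip link set eth0 down && ip link set eth0 master bond0
    ip link set eth1 down && ip link set eth1 master bond0
    ip link set bond0 up

    # Hang two VLANs off the bond, one subnet per VLAN
    ip link add link bond0 name bond0.10 type vlan id 10
    ip link add link bond0 name bond0.20 type vlan id 20
    ip link set bond0.10 up && ip link set bond0.20 up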
Then use macvtap or macvlan (or whichever driver gives each container its own MAC address) to give each container its own IP. This means your container is visible on the entire subnet, both inside the host and outside it.
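With docker that is a one-off network per subnet. A minimal sketch using the macvlan driver; the parent interface, subnet, and gateway are made-up examples:

    # macvlan network bridged onto the host's eth0 subnet
    docker network create -d macvlan \
      --subnet=10.0.1.0/24 --gateway=10.0.1.1 \
      -o parent=eth0 lan

    # Each container gets its own MAC and a first-class IP on the subnet
    docker run --rm -it --network lan --ip 10.0.1.50 alpine ip addr

One known wrinkle: by default the host itself cannot reach its own macvlan containers over the parent interface; the usual workaround is a macvlan sub-interface on the host.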
There is no need to faff with routing; it comes for free with your VPC/network or similar. Each container automatically gets a hostname, an IP, and a route. It will also be fast. As a bonus, it can all be created up front using CloudFormation or TF.
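The second-adaptor trick above is one resource in TF or CloudFormation. A rough AWS CLI equivalent, where the subnet and instance IDs are placeholders:

    # Create a second adaptor on a different subnet and attach it
    ENI=$(aws ec2 create-network-interface \
      --subnet-id subnet-0abc123 \
      --query 'NetworkInterface.NetworkInterfaceId' --output text)
    aws ec2 attach-network-interface \
      --network-interface-id "$ENI" \
      --instance-id i-0def456 --device-index 1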
You can have multiple adaptors on a host, so you can separate different classes of container.
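In docker terms that is one macvlan network per adaptor. A sketch with made-up names and subnets:

    # Frontend containers on one adaptor, backend on the other
    docker network create -d macvlan -o parent=eth0 --subnet=10.0.1.0/24 frontend
    docker network create -d macvlan -o parent=eth1 --subnet=10.0.2.0/24 backend
    docker run -d --network frontend nginx
    docker run -d --network backend redis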
Look, the more networking you can offload to the actual network, the better.
If you are ever re-creating DHCP/routing/DNS in your project, you need to take a step back and think hard about how you got there.
70% of the networking modes in k8s are batshit insane. A large number are basically attempts at vendor lock-in, or worse, someone's experiment that's got out of hand. I know networking has always been really poor in docker land, but there are ways to beat the stupid out of it.
The golden rule is this:
Always. Avoid. Network. Overlays.
I will have to take the other side of that golden rule. Not sure where it came from. But when one has a decent handle on the tools at hand, overlays work wondrously well.
I have bare metal servers tied together with L3 routing via Free Range Routing running BGP/VxLAN. It Just Works.
No hard-coded VLANs between physical machines, just point-to-point L3 links. VLANs are tortuous between machines as a Layer 2 protocol, given spanning tree and all of its slow-to-converge madness.
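A minimal sketch of that setup with FRR's vtysh, assuming BGP unnumbered over two point-to-point links and EVPN carrying the VxLAN VNIs; the ASN and interface names are placeholders:

    # BGP unnumbered on each p2p link; EVPN advertises the local VNIs
    vtysh \
      -c 'configure terminal' \
      -c 'router bgp 65001' \
      -c 'neighbor swp1 interface remote-as external' \
      -c 'neighbor swp2 interface remote-as external' \
      -c 'address-family l2vpn evpn' \
      -c 'neighbor swp1 activate' \
      -c 'neighbor swp2 activate' \
      -c 'advertise-all-vni' \
      -c 'end' -c 'write memory'

With unnumbered sessions there are no per-link subnets to allocate; the sessions come up over link-local addresses on each point-to-point interface.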
OP was mostly talking about cloud + docker containers. Your use case is unrelated and seems to make sense. But I still agree with OP: I believe overlays in the cloud are generally an anti-pattern of unnecessary complexity.
Where I work we use overlays (flannel) and it just works. I don't think we've had issues. AFAIK the primary reason was that the network can be secure/encrypted. Otherwise you're running everything over TLS, and managing all the certs can be more painful. Or you're running without encryption, which is a potential security problem. You still need TLS for external-facing stuff, but that's a lot less.
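For what it's worth, flannel's default vxlan backend does not encrypt; the encryption comes from picking an encrypting backend. A sketch of checking and setting that, assuming the stock kube-flannel manifest (ConfigMap name and namespace vary by install) and the wireguard backend:

    # Inspect the current backend type
    kubectl -n kube-flannel get configmap kube-flannel-cfg \
      -o jsonpath='{.data.net-conf\.json}'

    # An encrypted setup sets the backend in net-conf.json, roughly:
    #   { "Network": "10.244.0.0/16", "Backend": { "Type": "wireguard" } }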