
There's no good reason a VM or container on Hetzner cannot use a firewall like iptables. If that makes the service too expensive, you raise the price or otherwise lower resources. A firewall is a simple, essential part of network security; even basic IoT devices running Linux can run iptables.


I guess you did not read the link I posted initially. When you set up a firewall on a machine to block all incoming traffic on every port except 443, run docker compose exposing port 8000:8000, and put a reverse proxy like Caddy/nginx in front (e.g. to host multiple services on one IP over HTTPS), Docker punches holes in the iptables config without your permission, leaving both ports 443 and 8000 open on your machine.
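To make the footgun concrete: publishing a port causes Docker to add DNAT and FORWARD rules of its own, so the traffic never reaches the host's INPUT chain where a typical firewall filters. A minimal sketch (the image name is made up; the loopback binding is one common mitigation, not part of the original setup):

    services:
      app:
        image: myapp:latest          # hypothetical image
        ports:
          - "8000:8000"              # Docker inserts iptables rules; reachable from the internet
          # safer: publish only on loopback so just a local reverse proxy can reach it
          # - "127.0.0.1:8000:8000"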

@globular-toast was not suggesting an iptables setup on a VM; instead, they are suggesting a firewall on a totally separate device/VM from the one running Docker. Sure, you can do that with iptables and /proc/sys/net/ipv4/ip_forward (see https://serverfault.com/questions/564866/how-to-set-up-linux...), but that's a whole new level of complexity for someone who is not an experienced network admin (plus you now need to pay for two VMs and keep them both patched).


Either you run a VM inside the VM, or indeed two VMs. A jump host does not require a lot of resources.

The problem here is that the user does not understand that exposing 8080 on an external network makes it reachable by everyone. If you use an internal network between database and application, cache and application, and application and reverse proxy, and put proper auth on the reverse proxy, you're good to go (see the sketch below). Guides do suggest this; they even explain Let's Encrypt for the reverse proxy.
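A rough sketch of that layout (service and image names are placeholders): only the reverse proxy publishes a port, while the application and database sit on an internal network that Docker does not route to the outside.

    services:
      proxy:
        image: caddy:2               # reverse proxy terminating TLS, with auth
        ports:
          - "443:443"                # the only published port
        networks: [frontend]
      app:
        image: myapp:latest          # hypothetical application image
        networks: [frontend, backend]
      db:
        image: postgres:16
        networks: [backend]          # no ports published at all
    networks:
      frontend: {}
      backend:
        internal: true               # container-to-container only, no route to the host/outside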


Docker by default modifies iptables rules to allow traffic when you launch a container with port-publishing options.

If you have your own firewall rules, Docker just writes its own around them.


I always have to define 'external: true' on the network, which I don't do for databases. I put those on an internal network, shared with the application. You can do the same with your web application, so you only need auth on the reverse proxy. Then you use whitelisting on that port, or a VPN. But I also always use a firewall on a host where the OCI daemon does not have root access.


> I always have to define 'external: true' at the network

That option has nothing to do with the problem at hand.

https://docs.docker.com/reference/compose-file/networks/#ext...
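For context, 'external: true' only tells Compose that the network already exists (e.g. created beforehand with docker network create) and should not be created or removed by Compose; it says nothing about whether anything is reachable from outside. A minimal sketch (the network name is made up):

    networks:
      proxy-net:
        external: true               # must already exist: docker network create proxy-net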


I thought "external" referred to whether the network was managed by compose or not


Yeah, true, but I have set it up in such a way that that network is an exposed bridge, whereas the other networks created by docker-compose are not. It isn't even possible to reach those from outside: they're not routed, and each of these backends uses the standard Postgres port, so with 1:1 NAT it'd give errors. Even on 127.0.0.1 it does not work:

    $ nc 127.0.0.1 5432 && echo success || echo no success
    no success

Example snippet from docker-compose:

DB/cache (e.g. Postgres & Redis, in this example Postgres):

    [..]
    ports:
      - "5432:5432"
    networks:
      - backend
    [..]
App:

    [..]
    networks:
      - backend
      - frontend
    [..]
    networks:
      frontend:
        external: true
      backend:
        internal: true


Nobody is disputing that it is possible to set up a secure container network. But this post is about the fact that the default Docker behavior is an insecure footgun for users who don’t realize what it’s doing.



