
Docker by default modifies iptables rules to allow traffic when you launch a container with port-publishing options.

If you have your own firewall rules, Docker simply writes its own around them.
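Concretely, published ports are handled in Docker's own iptables chains before the host's INPUT/FORWARD rules are evaluated, so rules added there are bypassed. Docker does consult the DOCKER-USER chain first, which is the supported place to put rules it won't step around. A minimal sketch, where `ext_if` and `198.51.100.1` are placeholders for your external interface and a trusted source address:

```shell
# Drop all traffic to containers arriving on the external interface,
# except from one trusted address. Rules in DOCKER-USER are evaluated
# before Docker's own ACCEPT rules for published ports.
iptables -I DOCKER-USER -i ext_if ! -s 198.51.100.1 -j DROP
```

Rules in INPUT alone won't help, because published-port traffic to containers traverses the FORWARD chain, not INPUT.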



I always have to define 'external: true' on the network. I don't do that for databases; I attach them to an internal network shared with the application. You can do the same with your web application, so you only need auth on the reverse proxy. Then you whitelist that port, or you use a VPN. But I also always use a firewall that the OCI daemon has no root access to.


> I always have to define 'external: true' at the network

That option has nothing to do with the problem at hand.

https://docs.docker.com/reference/compose-file/networks/#ext...


I thought "external" referred to whether the network was managed by compose or not


Yeah, true, but I have set it up so that that network is an exposed bridge, whereas the other networks created by docker-compose are not. These aren't reachable from outside at all: they're not routed, and each of these backends uses the standard Postgres port, so with 1:1 NAT it'd give errors. Even on 127.0.0.1 it does not work:

    $ nc 127.0.0.1 5432 && echo success || echo "no success"
    no success

Example snippet from docker-compose:

DB/cache (e.g. Postgres & Redis, in this example Postgres):

    [..]
    ports:
      - "5432:5432"
    networks:
      - backend
    [..]
App:

    [..]
    networks:
      - backend
      - frontend
    [..]
    networks:
      frontend:
        external: true
      backend:
        internal: true


Nobody is disputing that it is possible to set up a secure container network. But this post is about the fact that the default docker behavior is an insecure footgun for users who don’t realize what it’s doing.
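Right, and for readers who hit this: a common mitigation is to bind the published port to loopback, so Docker's generated iptables rules only accept local traffic. A sketch (service name and image are placeholders):

```yaml
services:
  db:
    image: postgres:16
    ports:
      # Reachable from the host only, not from other machines.
      # A bare "5432:5432" binds to 0.0.0.0 and is exposed externally.
      - "127.0.0.1:5432:5432"
```

Or drop the `ports:` mapping entirely and rely on a shared compose network, as the comment above does.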



