Yep, this is correct.
To work around that, you could either add `extra_hosts` to your docker-compose.yml like so:
```diff
       - /appdata/caddy/caddy-data:/data
     environment:
       - EMAIL=user@example.com # The email address to use for ACME registration.
     networks:
       - proxy_net
+    extra_hosts:
+      - host.docker.internal:host-gateway

 networks:
   proxy_net:
```
Which adds another hostname to `/etc/hosts` within the Caddy container. With that, you will be able to use that new hostname to reference the actual host Docker is running on:
```diff
     encode gzip
-    reverse_proxy localhost:8000 {
+    reverse_proxy host.docker.internal:8000 {
         header_up X-Real-IP {remote_host}
     }
 }
```
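If you want to double-check that the entry landed, `docker compose exec caddy cat /etc/hosts` (assuming your Caddy service is named `caddy` in the compose file) should now show a line for `host.docker.internal`.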
However, you have to make sure the service on the actual host is listening on whatever `host.docker.internal` resolves to (usually `172.17.0.1`, the `docker0` interface). That means if you configured that service on the host to listen only on `127.0.0.1`, you will have to change that.
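What that change looks like depends on your service, but as a rough sketch (a hypothetical stand-in for "your service on the host", not part of the setup above): binding to `127.0.0.1` makes it reachable only from the host itself, while binding to `0.0.0.0` (or explicitly to `172.17.0.1`) also makes it reachable from the container via `host.docker.internal:8000`.

```python
# Hypothetical stand-in for a service running directly on the host.
from http.server import BaseHTTPRequestHandler, HTTPServer

class Hello(BaseHTTPRequestHandler):
    def do_GET(self):
        self.send_response(200)
        self.end_headers()
        self.wfile.write(b"hello from the host\n")

# Listening on 127.0.0.1 would only be reachable from the host itself.
# 0.0.0.0 also covers docker0 (usually 172.17.0.1), which is what
# host.docker.internal points at from inside the container.
HTTPServer(("0.0.0.0", 8000), Hello).serve_forever()
```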
The alternative would be to run Caddy either outside of a container (directly on your host) or with `network_mode: host`:
```diff
       - EMAIL=user@example.com # The email address to use for ACME registration.
+    network_mode: host
-    networks:
-      - proxy_net

-networks:
-  proxy_net:
-    driver: bridge
```
The downside here is that you will have to manually map a port for each Docker service and can't use your `proxy_net`. But `localhost` will then refer to the actual host's localhost. For example, a containerized service would need to publish its port on the host's loopback so the host-networked Caddy can reach it:
```yaml
  whoami:
    image: jwilder/whoami
    ## exposes :8000
    ## so we can use `reverse_proxy 127.0.0.1:8000`
    ports:
      - 127.0.0.1:8000:8000
```