Reverse proxy with Docker

  1. Caddy version (caddy version):

  2. How I run Caddy:
    docker run --name caddy -d -p 80:80 -p 443:443 -p 2019:2019 -p 6984:6984 -p 8443:8443 -v /home/caddy/blendedapplication:/srv -v /home/caddy/data:/data -v /home/caddy/config:/config -v /home/caddy/caddyfile/Caddyfile:/etc/caddy/Caddyfile caddy

a. System environment:
Ubuntu 18.04.5 LTS + Docker 20.10.5 + container --name keycloak on port 8080 + container --name couchdb on port 5984

d. My complete Caddyfile or JSON config:

  {
        admin
  }

  :6984 {
        reverse_proxy couchdb:5984
  }

  :8443 {
        reverse_proxy keycloak:8080
  }

  1. The problem I’m having:
    I want to resolve my running and working keycloak container on 8080 through 8443 and my running and working couchdb container on 5984 through 6984.
    curl -v localhost:8443 and curl -v localhost:6984 both return HTTP/1.1 502 Bad Gateway with Server: Caddy.

I can see from the logs that it is likely Docker networking that fails to resolve the IPs of the couchdb and keycloak containers, but I am at a loss to understand why that is, or how to solve it. I understand this is not Caddy-specific, but it must be a very common scenario with, possibly, a simple resolution.

  1. Error messages and/or full log output:
{"level":"info","ts":1618067998.530174,"msg":"using provided configuration","config_file":"/etc/caddy/Caddyfile","config_adapter":"caddyfile"}
{"level":"info","ts":1618067998.5375714,"logger":"admin","msg":"admin endpoint started","address":"tcp/","enforce_origin":false,"origins":[""]}
{"level":"warn","ts":1618067998.5375843,"logger":"admin","msg":"admin endpoint on open interface; host checking disabled","address":"tcp/"}
{"level":"info","ts":1618067998.5381608,"msg":"autosaved config","file":"/config/caddy/autosave.json"}
{"level":"info","ts":1618067998.5381708,"msg":"serving initial configuration"}
{"level":"info","ts":1618067998.5384357,"logger":"tls.cache.maintenance","msg":"started background certificate maintenance","cache":"0xc0001ff1f0"}
{"level":"info","ts":1618067998.538935,"logger":"tls","msg":"cleaned up storage units"}
{"level":"error","ts":1618068009.67327,"logger":"http.log.error","msg":"dial tcp: lookup couchdb on no such host","request":{"remote_addr":"","proto":"HTTP/1.1","method":"GET","host":"localhost:6984","uri":"/","headers":{"User-Agent":["curl/7.58.0"],"Accept":["*/*"]}},"duration":0.017254921,"status":502,"err_id":"1gn1dkgjx","err_trace":"reverseproxy.statusError (reverseproxy.go:783)"}
{"level":"error","ts":1618068021.21765,"logger":"http.log.error","msg":"dial tcp: lookup keycloak on no such host","request":{"remote_addr":"","proto":"HTTP/1.1","method":"GET","host":"localhost:8443","uri":"/","headers":{"User-Agent":["curl/7.58.0"],"Accept":["*/*"]}},"duration":0.141960895,"status":502,"err_id":"6dp8am5f3","err_trace":"reverseproxy.statusError (reverseproxy.go:783)"}
{"level":"error","ts":1618070510.6747932,"logger":"http.log.error","msg":"dial tcp: lookup couchdb on no such host","request":{"remote_addr":"","proto":"HTTP/1.1","method":"GET","host":"localhost:6984","uri":"/","headers":{"User-Agent":["curl/7.58.0"],"Accept":["*/*"]}},"duration":0.027581793,"status":502,"err_id":"je65mnw27","err_trace":"reverseproxy.statusError (reverseproxy.go:783)"}
  1. What I already tried:
    Reading and rereading the docs/examples and issues.

  2. Links to relevant resources:

docker network ls and inspecting each container would be a good start, as you can verify whether your network configuration actually puts them all on the same network.
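For instance, something like this (a sketch only; it assumes the container names used in this thread):

```shell
# List all Docker networks on the host
docker network ls

# Print the names of the containers attached to the default bridge network
docker network inspect bridge --format '{{range .Containers}}{{.Name}} {{end}}'

# Or check a single container's networks directly
docker inspect caddy --format '{{range $name, $net := .NetworkSettings.Networks}}{{$name}} {{end}}'
```

If caddy, couchdb, and keycloak don't all show up on the same network, that would explain the failed lookups.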

This is a good discussion of troubleshooting a similar problem with another reverse proxy implementation, and of how the troubleshooting progressed.


You can make this easier by using docker-compose: it'll make sure everything is on the same network, and it's an easier interface for spinning up all your containers than running a bunch of docker run commands.

There’s an example of how it looks for Caddy on Docker Hub.
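Roughly, a compose file for this setup could look like the following. This is only a sketch: the image tags, ports, and volume paths are assumptions based on this thread, not a tested config.

```yaml
version: "3.7"

services:
  caddy:
    image: caddy
    ports:
      - "6984:6984"
      - "8443:8443"
    volumes:
      - ./Caddyfile:/etc/caddy/Caddyfile
      - ./data:/data
      - ./config:/config

  couchdb:
    image: couchdb

  keycloak:
    image: jboss/keycloak

# All services in one compose file share a default user-defined network,
# so `reverse_proxy couchdb:5984` resolves by service name.
```

With this, `docker-compose up -d` replaces the three separate docker run commands.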

Just logging the docker inspect here. Caddy, couchdb and keycloak look to be in the same network.
{
  "Name": "bridge",
  "Id": "92126a5c42005423ea7929bd04bcae5dc3eb100f1a6562cdb5872c4c61ef5cbf",
  "Created": "2021-04-03T10:03:59.9176206Z",
  "Scope": "local",
  "Driver": "bridge",
  "EnableIPv6": false,
  "IPAM": {
    "Driver": "default",
    "Options": null,
    "Config": [ { "Subnet": "", "Gateway": "" } ]
  },
  "Internal": false,
  "Attachable": false,
  "Ingress": false,
  "ConfigFrom": { "Network": "" },
  "ConfigOnly": false,
  "Containers": {
    "49bb3af9c8c6cffce6b60efb539b05030e5e67012c556872fbe6686399dcdb0a": {
      "Name": "keycloak",
      "EndpointID": "70e07a4ce79dfdb7a048a3ab2e9821f7fc68fbfa4c6bd108b6ea16be18df44ff",
      "MacAddress": "02:42:ac:11:00:02",
      "IPv4Address": "",
      "IPv6Address": ""
    },
    "d4af3e9181beab529ab63e94be1f7450342d17431ddf0f3d0c5e982b674eeb58": {
      "Name": "caddy",
      "EndpointID": "cf2147a9627ef0163b7d891876158f0367f850f4d2eb5d95a1e7b515cf71426a",
      "MacAddress": "02:42:ac:11:00:04",
      "IPv4Address": "",
      "IPv6Address": ""
    },
    "d8866344963215b375417291d61d37b7cebef04f86bb803418036732a495b934": {
      "Name": "couchdb",
      "EndpointID": "60a3f4953981ec08819f66aba537710b5c04f55e6c5133257fc4d23d1f220eb8",
      "MacAddress": "02:42:ac:11:00:03",
      "IPv4Address": "",
      "IPv6Address": ""
    }
  },
  "Options": {
    "": "true",
    "": "true",
    "": "true",
    "": "",
    "": "docker0",
    "": "1500"
  },
  "Labels": {}
}

Then tried networking from inside the Caddy container.
/srv # nc -zv couchdb 5984
nc: bad address 'couchdb'
/srv # nc -zv 5984
( open
/srv # nc -zv keycloak 5984
nc: bad address 'keycloak'
/srv # nc -zv 8080
( open
So it comes down to resolution of the service names as everything else works.
/srv # ping couchdb
ping: bad address 'couchdb'
/srv # ping
PING ( 56 data bytes
64 bytes from seq=0 ttl=64 time=0.084 ms
/srv # ping
PING ( 56 data bytes
64 bytes from seq=0 ttl=112 time=2.387 ms

Will ultimately rinse and repeat with docker-compose, although I want to understand whether the lookup can be fixed as-is. Sorry to burden the Caddy forum with this Docker issue… the comments received so far are much appreciated!
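For what it's worth, this is expected behaviour: Docker's embedded DNS only resolves container names on user-defined networks, not on the default bridge network that plain docker run attaches to. A sketch of a fix without compose, assuming the container names from this thread:

```shell
# Create a user-defined bridge network (the name "web" is arbitrary)
docker network create web

# Attach the already-running containers to it; alternatively, pass
# --network web to each docker run command when starting them
docker network connect web caddy
docker network connect web couchdb
docker network connect web keycloak

# Name resolution should now work from inside the caddy container, e.g.:
# docker exec caddy nc -zv couchdb 5984
```

docker-compose does the equivalent automatically by creating a project-specific network for its services.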

Hi Francis, the example also declares global volumes. Is that optional? What purpose does it serve? Thanks for explaining these things.


As explained near the top of the docs on Docker Hub, Caddy stores certificates, keys, and other state information in those paths. Setting up those volumes gives Docker somewhere to keep those files across container removals (e.g. when you upgrade Caddy etc). It’s pretty important to keep that data persisted.

The other option is to use a bind mount from a particular directory instead of a named volume, if you prefer. To Caddy, that’s all the same. It’s just where you want to store the files on your system. A named volume will be managed by Docker and put somewhere in /var/lib/docker (I forget the exact path off the top of my head)


Thanks Francis, I get the mapping, and I have the volumes set up inside the Caddy service pointing to data/config/Caddyfile… but I was just wondering whether the global volume declaration at the end (used to share volumes between services, if I understood correctly) is also needed. The Caddy example has volumes both inside the service declaration and as global ones in the compose file.

For named volumes, docker-compose does require them to be declared at the top level like that.
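A minimal sketch of the shape, assuming the volume names from the Docker Hub example:

```yaml
# Named volumes are declared once at the top level, then referenced
# per-service; Docker creates and manages the underlying storage.
services:
  caddy:
    image: caddy
    volumes:
      - caddy_data:/data
      - caddy_config:/config

volumes:
  caddy_data:
  caddy_config:
```

Bind mounts like ./Caddyfile:/etc/caddy/Caddyfile, by contrast, need no top-level declaration.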

This topic was automatically closed after 30 days. New replies are no longer allowed.