Reverse proxy redirects to another Docker container with similar name if correct container is stopped

1. Caddy version (caddy version):

v2.3.0

2. How I run Caddy:

I run Caddy as a reverse proxy for internal network use via Docker/Docker Compose.

a. System environment:

Ubuntu v18.04.5, Docker v20.10.3, Docker Compose v1.25.0

b. Command:

docker-compose up -d

c. Service/unit/compose file:

version: "3.7"

services:
  caddy:
    image: caddy
    container_name: caddy
    volumes:
      - ./.caddy/Caddyfile:/etc/caddy/Caddyfile
      - caddy-config:/config
      - caddy-data:/data
    ports:
      - 80:80
      - 443:443
    networks:
      - reverse-proxy
    restart: unless-stopped

volumes:
  caddy-config:
    name: caddy-config
  caddy-data:
    name: caddy-data

networks:
  reverse-proxy:
    name: reverse-proxy

d. My complete Caddyfile or JSON config:

http://transmission.fantasio.local {
  reverse_proxy transmission:9091
}

http://transmission2.fantasio.local {
  reverse_proxy transmission2:9091
}

3. The problem I’m having:

I have 2 Docker containers, named transmission and transmission2, both exposing their web UI on port 9091. When both containers are running, everything works: http://transmission.fantasio.local points to transmission and http://transmission2.fantasio.local points to transmission2.

However, if I stop transmission, http://transmission.fantasio.local will start showing transmission2 instead. Can anyone tell me what could be happening here? When transmission is stopped, I want http://transmission.fantasio.local to show an error.
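To reproduce (this assumes transmission.fantasio.local already resolves to the Docker host on my LAN, as in my setup):

# on the Docker host:
docker stop transmission

# from another machine on the network:
curl -i http://transmission.fantasio.local/transmission/web/
# expected: an error from Caddy; observed: transmission2's web UI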

4. Error messages and/or full log output:

5. What I already tried:

6. Links to relevant resources:

That’s odd.

Ultimately I don’t think this is an issue with Caddy, but rather either misuse or a misunderstanding of how Docker works.

How are you actually running your other containers? I’d guess you have other docker-compose files with those in them? Why not combine them into one docker-compose file instead of splitting it up?

You can add this to the top of your Caddyfile to help see what’s going on in your logs, e.g. what IP address those names are resolving to:

{
	debug
}
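After editing the Caddyfile, you can apply it and watch the output with something like this (assuming the container name caddy from your compose file; caddy reload must run from the directory containing the Caddyfile, hence the -w):

# reload the config inside the running container
docker exec -w /etc/caddy caddy caddy reload

# follow the container logs, where the debug output lands
docker-compose logs -f caddy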

I asked in r/docker before coming here.

Yes, I have multiple Compose files, split by category, so I can reuse them with different configurations.

Will try the debug directive and get back here.

Hear, hear!

If transmission and transmission2 are different containers, there should be as much difference between those two as there is between foo and bar; there’s no such thing as the Docker network routing traffic to the service whose name is “closest” to the one you’re looking up.

I am highly confident that this is a problem in the environment that Caddy’s debug mode might help troubleshoot; Caddy simply does a DNS lookup for the upstream hostname and then sends traffic. It’s definitely a DNS/network issue: after the transmission container goes down, either Docker is returning the wrong IP address for that hostname, or it’s routing traffic sent to that IP address to the wrong container. Both seem really… really… odd. Seeing how those other containers are configured (as asked by @francislavoie) might shed some light on how/why this is occurring.
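If you want to check the DNS side directly, something like this should work (assuming the official Alpine-based caddy image, whose BusyBox provides nslookup and wget):

# from inside Caddy's network, ask Docker's embedded DNS what each name resolves to
docker exec caddy nslookup transmission
docker exec caddy nslookup transmission2

# and fetch each upstream directly to see who actually answers
docker exec caddy wget -qO- http://transmission:9091/transmission/web/ | head -n 3
docker exec caddy wget -qO- http://transmission2:9091/transmission/web/ | head -n 3

If both names resolve to the same IP once one container is stopped, the problem is Docker’s DNS rather than Caddy.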


Sorry, I was away over the weekend.

I started both Compose projects and copied an excerpt of the debug logs from Caddy while I was accessing the sites from Firefox on another computer (192.168.1.152). They immediately started answering each other’s requests.

I don’t know if you can deduce anything from the logs provided, but I’m hopeful you can, or can at least point me in the right direction!

https://gist.githubusercontent.com/vikekh/2e55eb55aac4a550de4ba499aa7db5c1/raw/a343d491f9220337e4fdffd91ad0ef574ab261a7/caddy.txt

You also asked how I start the Transmission containers; that config is publicly available on GitHub. The main differences are the environment variables used for the container and network names.

{"level":"debug","ts":1617648401.3300328,"logger":"http.handlers.reverse_proxy","msg":"upstream roundtrip","upstream":"transmission:9091","request":{"remote_addr":"192.168.1.152:65418","proto":"HTTP/1.1","method":"GET","host":"transmission2.fantasio.local","uri":"/transmission/web/tr-web-control/script/jquery/json2.min.js","headers":{"Cache-Control":["max-age=0"],"X-Forwarded-For":["192.168.1.152"],"User-Agent":["Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:87.0) Gecko/20100101 Firefox/87.0"],"Accept":["*/*"],"Accept-Language":["en-US,en;q=0.5"],"Accept-Encoding":["gzip, deflate"],"X-Forwarded-Proto":["http"],"Referer":["http://transmission2.fantasio.local/transmission/web/"]}},"duration":0.000702244,"headers":{"Server":["Transmission"],"Content-Type":["application/javascript"],"Date":["Mon, 05 Apr 2021 18:46:41 GMT"],"Expires":["Tue, 06 Apr 2021 18:46:41 GMT"],"Content-Encoding":["gzip"],"Content-Length":["1225"]},"status":200}

Right off the bat we’re getting:

"upstream": "transmission:9091"

With:

"host": "transmission2.fantasio.local"

So Caddy appears to be sending the request for transmission2.fantasio.local upstream to the transmission container.

That appears to be the inverse of the described issue (that transmission.fantasio.local returns content from transmission2).

We also have a few of these:

{"level":"debug","ts":1617648414.8207567,"logger":"http.handlers.reverse_proxy","msg":"upstream roundtrip","upstream":"transmission:9091","request":{"remote_addr":"192.168.1.152:65375","proto":"HTTP/1.1","method":"POST","host":"transmission.fantasio.local","uri":"/transmission/rpc","headers":{"User-Agent":["Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:87.0) Gecko/20100101 Firefox/87.0"],"Accept":["application/json, text/javascript, */*; q=0.01"],"Accept-Language":["en-US,en;q=0.5"],"Content-Type":["application/x-www-form-urlencoded; charset=UTF-8"],"X-Requested-With":["XMLHttpRequest"],"Origin":["http://transmission.fantasio.local"],"Accept-Encoding":["gzip, deflate"],"X-Transmission-Session-Id":["X2e2FYTb1ceWAk3QtwmfV5P2apJuz1G67Kr5YRbOPSwEVvym"],"Content-Length":["347"],"X-Forwarded-For":["192.168.1.152"],"Referer":["http://transmission.fantasio.local/transmission/web/"],"X-Forwarded-Proto":["http"]}},"duration":2.034023811,"error":"context canceled"}

Where the host is transmission.fantasio.local and the selected upstream is also transmission (not transmission2). On its own, that appears to be the desired behaviour. But it looks like both sites are just sending all traffic to transmission.
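You can tally that from the container’s logs (assuming Caddy’s JSON logs go to the container’s stderr, the default for the Docker image):

# count how often each upstream was selected
docker logs caddy 2>&1 | grep -o '"upstream":"[^"]*"' | sort | uniq -c

If transmission2:9091 never appears, every proxied request really did go to transmission.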

I can’t find a single instance in the posted logs of Caddy attempting to connect to transmission2 at all. This is exceedingly strange. It should be essentially impossible for one site block to arbitrarily use configuration from another site block, so Caddy selecting the transmission upstream for the transmission2.fantasio.local site should never happen. At this stage I’d start looking into whether the Caddyfile you think you’re using is actually the Caddyfile your Caddy server is using.
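Two quick checks for that (assuming the default config path in the official image and the default admin endpoint on localhost:2019 inside the container):

# what Caddyfile is actually mounted into the container?
docker exec caddy cat /etc/caddy/Caddyfile

# what config is the running server actually using?
docker exec caddy wget -qO- http://localhost:2019/config/

If either the mounted Caddyfile or the running JSON config doesn’t match what you posted, that would explain the behaviour.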


This topic was automatically closed after 30 days. New replies are no longer allowed.