Caddy not working with Pi-Hole local DNS

1. Caddy version (caddy version):

2.3.0

2. How I run Caddy:

Using Docker with a Caddyfile. Pi-hole is also running in Docker on the same server. None of it is in VMs. Both are working individually.

a. System environment:

Kubuntu 20.04 LTS, Docker

b. Command:

sudo docker run -d --restart always --name caddy -p 80:80 -p 443:443 \
  -v /etc/caddy/data:/data \
  -v /etc/caddy/Caddyfile:/etc/caddy/Caddyfile \
  caddy

d. My complete Caddyfile or JSON config:

(doesn't work, and not sure if I can actually do /admin)
http://pi.hole, http://pihole {
    reverse_proxy 192.168.1.73:1080/admin
}

(doesn't work)
http://logs {
    reverse_proxy 192.168.1.73:8888
}

(works)
mydomain.com {
    respond "There is nothing here."
}

(works)
jelly.mydomain.com {
    reverse_proxy 172.17.0.1:8096
}

3. The problem I’m having:

I set up some local DNS records on my Pi-hole. The ones not pointing to this server work; the ones that point to it do not. I also have Jellyfin running in a Docker container behind the reverse proxy, which works fine. It's just the local DNS records pointing at the Caddy server that aren't working.

4. Error messages and/or full log output:

After trying to hit http://logs/ and http://pihole/:

today at 1:16 PM  {"level":"error","ts":1610219797.873682,"logger":"http.log.error","msg":"dial tcp 192.186.2.73:8888: i/o timeout","request":{"remote_addr":"192.168.2.73:52564","proto":"HTTP/1.1","method":"GET","host":"logs","uri":"/","headers":{"Accept-Encoding":["gzip, deflate"],"Connection":["keep-alive"],"Upgrade-Insecure-Requests":["1"],"Cache-Control":["max-age=0"],"User-Agent":["Mozilla/5.0 (X11; Ubuntu; Linux x86_64; rv:84.0) Gecko/20100101 Firefox/84.0"],"Accept":["text/html,application/xhtml+xml,application/xml;q=0.9,image/webp,*/*;q=0.8"],"Accept-Language":["en-US,en;q=0.5"]}},"duration":10.00071486,"status":502,"err_id":"rwqqxdirw","err_trace":"reverseproxy.statusError (reverseproxy.go:783)"}
today at 1:18 PM  {"level":"error","ts":1610219912.3005428,"logger":"http.log.error","msg":"dial tcp 192.186.2.73:8888: i/o timeout","request":{"remote_addr":"192.168.2.73:52888","proto":"HTTP/1.1","method":"GET","host":"logs","uri":"/","headers":{"Accept-Encoding":["gzip, deflate"],"Connection":["keep-alive"],"Upgrade-Insecure-Requests":["1"],"User-Agent":["Mozilla/5.0 (X11; Ubuntu; Linux x86_64; rv:84.0) Gecko/20100101 Firefox/84.0"],"Accept":["text/html,application/xhtml+xml,application/xml;q=0.9,image/webp,*/*;q=0.8"],"Accept-Language":["en-US,en;q=0.5"]}},"duration":10.000332771,"status":502,"err_id":"sh3vq4ryz","err_trace":"reverseproxy.statusError (reverseproxy.go:783)"}
today at 1:21 PM  {"level":"error","ts":1610220071.137832,"logger":"http.log.error","msg":"dial 192.168.2.73:1080: unknown network 192.168.2.73:1080","request":{"remote_addr":"192.168.2.73:53256","proto":"HTTP/1.1","method":"GET","host":"pihole","uri":"/","headers":{"Upgrade-Insecure-Requests":["1"],"User-Agent":["Mozilla/5.0 (X11; Ubuntu; Linux x86_64; rv:84.0) Gecko/20100101 Firefox/84.0"],"Accept":["text/html,application/xhtml+xml,application/xml;q=0.9,image/webp,*/*;q=0.8"],"Accept-Language":["en-US,en;q=0.5"],"Accept-Encoding":["gzip, deflate"],"Connection":["keep-alive"]}},"duration":0.000136045,"status":502,"err_id":"7u195z59f","err_trace":"reverseproxy.statusError (reverseproxy.go:783)"}
today at 1:21 PM  {"level":"error","ts":1610220071.1727135,"logger":"http.log.error","msg":"dial 192.168.2.73:1080: unknown network 192.168.2.73:1080","request":{"remote_addr":"192.168.2.73:53256","proto":"HTTP/1.1","method":"GET","host":"pihole","uri":"/favicon.ico","headers":{"User-Agent":["Mozilla/5.0 (X11; Ubuntu; Linux x86_64; rv:84.0) Gecko/20100101 Firefox/84.0"],"Accept":["image/webp,*/*"],"Accept-Language":["en-US,en;q=0.5"],"Accept-Encoding":["gzip, deflate"],"Connection":["keep-alive"],"Referer":["http://pihole/"]}},"duration":0.000097744,"status":502,"err_id":"b4xn8ct7q","err_trace":"reverseproxy.statusError (reverseproxy.go:783)"}

5. What I already tried:

I learned the first issue is that these need to be served without TLS, because they are not public endpoints. I tried a few different settings here. None worked, but it seems the proper way is prepending http:// to the site address.

I've also tried different IPs in the Caddyfile. Is there a best practice for what to use here? I'd rather not use ufw to open a port that I don't want to expose for a local container, if possible. I think the Docker container IP and the server's LAN address are interchangeable, right?

I've only gotten timeouts or 502 Bad Gateway errors. I'm probably missing something small. I just want to avoid opening extra ports, or remembering IP addresses to access things on the LAN.

One more, smaller issue is that caddy reload isn't working.
Running sudo docker exec caddy reload returns:

OCI runtime exec failed: exec failed: container_linux.go:370: starting container process caused: exec: "reload": executable file not found in $PATH: unknown

so I’ve just been restarting the container. Anyone know why it’s not working?

Thanks!

caddy is the name of the container, and you're trying to run the shell command reload in that container, which doesn't exist. Instead, you need to run caddy reload. But also, the default working directory inside the container is /srv, so Caddy won't know where to look for your Caddyfile. So you'll need to do sudo docker exec -w /etc/caddy caddy caddy reload (note that docker exec's -w flag must come before the container name).

There's a section in the Docker Hub docs for the caddy image about reloading.

If you need to proxy to a subpath, then you need to do a rewrite. So, something like this, which will prefix every request path with /admin.

http://pi.hole, http://pihole {
	rewrite * /admin{uri}
	reverse_proxy 192.168.1.73:1080
}

I think you would be better served here by using docker-compose to run your Docker stack, since you’re running more than one container. It’ll be easier to manage, and you’ll get some of the extra bits like being able to make use of Docker’s internal DNS server which will allow you to use the container names for proxying instead of IP addresses, and having just a single command to spin everything up rather than individual docker run commands.

Once using docker-compose, and having your pihole running in a container called pihole-server (because if you called it pihole, it would conflict with the name you want to use from outside the docker stack), then you could do reverse_proxy pihole-server:1080 instead of using the IP address. Same with Jellyfin, you could use jellyfin:8096 assuming it’s running in a container as well.
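For illustration, a minimal docker-compose.yml along those lines might look like the sketch below. The service names, images, ports, and volume paths are assumptions based on this thread, not the poster's actual setup:

```yaml
version: "3.7"

services:
  caddy:
    image: caddy
    restart: always
    ports:
      - "80:80"
      - "443:443"
    volumes:
      - /etc/caddy/Caddyfile:/etc/caddy/Caddyfile
      - /etc/caddy/data:/data

  # Called pihole-server so the hostname "pihole" stays free for the
  # local DNS record that points at this machine from outside Docker.
  pihole-server:
    image: pihole/pihole
    restart: always
    ports:
      - "53:53/tcp"
      - "53:53/udp"
      - "1080:80"   # admin UI: host port 1080 -> container port 80
```

All services in one compose file land on a shared default network, which is what lets Caddy resolve pihole-server by name instead of by IP.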


Thanks for the quick and in-depth response. I have a few things running in docker-compose, but everything is in its own file right now. I wasn't thinking about combining them before, but it does make sense, since all the networking is coupled. I will try that.


Ok, I refactored everything to use docker-compose and switched to container names in the Caddyfile. I'm in the exact same state, it seems. Caddy works. Pi-hole works. But they still don't work together. Trying to hit an endpoint gives me this in the Caddy logs:

today at 6:02 PM  {"level":"error","ts":1610236927.2636373,"logger":"http.log.error","msg":"dial tcp 172.23.0.5:8888: connect: connection refused","request":{"remote_addr":"192.168.2.73:59500","proto":"HTTP/1.1","method":"GET","host":"logs","uri":"/","headers":{"User-Agent":["Mozilla/5.0 (X11; Ubuntu; Linux x86_64; rv:84.0) Gecko/20100101 Firefox/84.0"],"Accept":["text/html,application/xhtml+xml,application/xml;q=0.9,image/webp,*/*;q=0.8"],"Accept-Language":["en-US,en;q=0.5"],"Accept-Encoding":["gzip, deflate"],"Connection":["keep-alive"],"Upgrade-Insecure-Requests":["1"]}},"duration":0.000790964,"status":502,"err_id":"e2gxmkvqn","err_trace":"reverseproxy.statusError (reverseproxy.go:783)"}

I tried disabling my firewall (ufw) as well, and it made no difference. Any ideas?

What’s your whole docker-compose.yml and Caddyfile at this point?

Hard to say where the misconfiguration lies without the whole picture.

Ah so that’s your problem. When you’re inside the Docker stack, you need to use the port internal to the Docker network, not the port that’s bound to the host. So you’ll use pihole-server:80, not pihole-server:1080. Same idea with dozzle, i.e. dozzle:8080 instead.
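Putting that together: assuming the compose file publishes the ports as 1080:80 for the Pi-hole admin UI and 8888:8080 for dozzle (a guess at what's serving the logs on :8888), the site blocks would proxy to the container-internal ports, something like:

```
http://pi.hole, http://pihole {
	rewrite * /admin{uri}
	reverse_proxy pihole-server:80
}

http://logs {
	reverse_proxy dozzle:8080
}
```

The left side of each ports: mapping only matters for clients on the host's network; container-to-container traffic always uses the right side.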


Thank you! That was the issue. Everything is working now. Switching everything to a single docker-compose file really simplified things.


This topic was automatically closed after 30 days. New replies are no longer allowed.