Caddy + Pi-Hole with docker-compose - Local DNS not working

1. Caddy version:

(via docker compose exec caddy caddy version)
v2.6.3 h1:QRVBNIqfpqZ1eJacY44I6eUC1OcxQ8D04EKImzpj7S8=

2. How I installed and run Caddy:

Installed with my docker-compose.yaml file

a. System environment:

Linux raspberrypi 5.15.0-1024-raspi #26-Ubuntu SMP PREEMPT Wed Jan 18 15:29:53 UTC 2023 aarch64 aarch64 aarch64 GNU/Linux

b. Command:

docker compose up
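
For completeness, the stack can also be started detached and the logs tailed per service with standard Compose commands, e.g.:

# start the stack in the background
docker compose up -d

# follow the logs of both services (names as defined in the compose file below)
docker compose logs -f caddy pihole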

c. Service/unit/compose file:

Docker Compose file based on docker-compose-caddy-proxy.yml (master branch of the pi-hole/docker-pi-hole repository on GitHub), edited to fit my needs.

version: "3.8"

# https://github.com/pi-hole/docker-pi-hole/blob/master/README.md

services:

  caddy:
    container_name: caddy
    image: caddy:latest
    restart: unless-stopped
    ports:
      - "80:80"
      - "443:443"
      - "443:443/udp"
    volumes:
      - ./Caddyfile:/etc/caddy/Caddyfile
      - caddy_data:/data
      - caddy_config:/config
    dns:
      - 127.0.0.1
      - 9.9.9.9

  pihole:
    dns:
      - 127.0.0.1
      - 9.9.9.9
    depends_on:
      - caddy
    container_name: pihole
    image: pihole/pihole:latest
    # For DHCP it is recommended to remove these ports and instead add: network_mode: "host"
    ports:
      - "53:53/tcp"
      - "53:53/udp"
      - "67:67/udp"
      - "8080:80/tcp" # Outside access for debugging purposes
    environment:
      TZ: 'Berlin/Europe'
      WEBPASSWORD: 'password'
      FTLCONF_LOCAL_IPV4: '192.168.1.105' # static IP I bound in my router to the RBPi's Mac
      PIHOLE_DNS_: '9.9.9.9;149.112.112.112'
      DNS_BOGUS_PRIV: 'true'
      DNSMASQ_LISTENING: 'all'
    # Volumes store your data between container upgrades
    volumes:
      - './etc-pihole:/etc/pihole'
      - './etc-dnsmasq.d:/etc/dnsmasq.d'
    #   https://github.com/pi-hole/docker-pi-hole#note-on-capabilities
    cap_add:
      - NET_ADMIN
    restart: unless-stopped # Recommended but not required (DHCP needs NET_ADMIN)

volumes:
  caddy_data:
    external: true
  caddy_config:
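
As a side note, the fully resolved configuration can be printed before starting the stack, which catches quoting or typo mistakes in environment values early; this is standard Compose v2 behaviour, nothing Pi-hole specific:

# print the compose file exactly as Compose will apply it
docker compose config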

d. My complete Caddy config:

http://pihole.homelab, http://pihole.homelab {
  tls internal
  redir * /admin{uri}
  reverse_proxy pihole:80
}
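
Side note on this Caddyfile: the same site address is listed twice, and tls internal only has an effect on a site that is actually served over HTTPS, which the http:// prefix rules out (the "no automatic HTTPS" warning in the Caddy logs below reflects that). If the goal is to reach the Pi-hole admin UI over HTTPS using Caddy's internal CA, a minimal sketch (assuming pihole.homelab is the only hostname wanted) would be:

pihole.homelab {
  tls internal
  # redirect only the bare root so requests under /admin/ are not redirected again
  redir / /admin/
  reverse_proxy pihole:80
}

Clients would additionally need to trust Caddy's internal root certificate to avoid browser warnings.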

3. The problem I’m having:

I set up the Raspberry Pi in my TP-Link Archer MR600 as the primary and only DNS server. In my home network, it has the static IP 192.168.1.105. My devices are using it as their DNS server successfully, and I see the queries in the Pi-hole dashboard.

Next, inside Pi-hole, I set up “raspberrypi.homelab” as a local DNS entry and pointed it to 192.168.1.105.

However, this does not resolve successfully in my browsers. Chrome on my desktop returns DNS_PROBE_POSSIBLE.
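
To separate the record itself from client-side resolver issues, the entry can also be queried directly against the Pi-hole, e.g. with dig (assuming dnsutils/bind-utils is installed on the machine running the check):

# ask the Pi-hole directly for the custom record
dig @192.168.1.105 raspberrypi.homelab +short

# same check for the name used in the Caddyfile
dig @192.168.1.105 pihole.homelab +short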

4. Error messages and/or full log output:

Caddy Logs

caddy  | {"level":"warn","ts":1676052433.9501321,"msg":"Caddyfile input is not formatted; run the 'caddy fmt' command to fix inconsistencies","adapter":"caddyfile","file":"/etc/caddy/Caddyfile","line":2}
caddy  | {"level":"info","ts":1676052433.9560323,"logger":"admin","msg":"admin endpoint started","address":"localhost:2019","enforce_origin":false,"origins":["//localhost:2019","//[::1]:2019","//127.0.0.1:2019"]}
caddy  | {"level":"warn","ts":1676052433.9580371,"logger":"http","msg":"server is listening only on the HTTP port, so no automatic HTTPS will be applied to this server","server_name":"srv0","http_port":80}
caddy  | {"level":"info","ts":1676052433.9580884,"logger":"tls.cache.maintenance","msg":"started background certificate maintenance","cache":"0x40002e9650"}
caddy  | {"level":"info","ts":1676052433.9593194,"logger":"tls","msg":"cleaning storage unit","description":"FileStorage:/data/caddy"}
caddy  | {"level":"info","ts":1676052433.9594252,"logger":"http.log","msg":"server running","name":"srv0","protocols":["h1","h2","h3"]}
caddy  | {"level":"info","ts":1676052433.960261,"msg":"autosaved config (load with --resume flag)","file":"/config/caddy/autosave.json"}
caddy  | {"level":"info","ts":1676052433.9603055,"msg":"serving initial configuration"}
caddy  | {"level":"info","ts":1676052433.961614,"logger":"tls","msg":"finished cleaning storage units"}
caddy  | {"level":"info","ts":1676052785.389926,"msg":"shutting down apps, then terminating","signal":"SIGTERM"}
caddy  | {"level":"warn","ts":1676052785.3900235,"msg":"exiting; byeee!! 👋","signal":"SIGTERM"}
caddy  | {"level":"info","ts":1676052785.3905296,"logger":"tls.cache.maintenance","msg":"stopped background certificate maintenance","cache":"0x40002e9650"}
caddy  | {"level":"info","ts":1676052785.390751,"logger":"admin","msg":"stopped previous server","address":"localhost:2019"}
caddy  | {"level":"info","ts":1676052785.3907855,"msg":"shutdown complete","signal":"SIGTERM","exit_code":0}
caddy  | {"level":"info","ts":1676052816.5251994,"msg":"using provided configuration","config_file":"/etc/caddy/Caddyfile","config_adapter":"caddyfile"}
caddy  | {"level":"warn","ts":1676052816.528291,"msg":"Caddyfile input is not formatted; run the 'caddy fmt' command to fix inconsistencies","adapter":"caddyfile","file":"/etc/caddy/Caddyfile","line":2}
caddy  | {"level":"info","ts":1676052816.5310848,"logger":"admin","msg":"admin endpoint started","address":"localhost:2019","enforce_origin":false,"origins":["//localhost:2019","//[::1]:2019","//127.0.0.1:2019"]}
caddy  | {"level":"warn","ts":1676052816.5334852,"logger":"http","msg":"server is listening only on the HTTP port, so no automatic HTTPS will be applied to this server","server_name":"srv0","http_port":80}
caddy  | {"level":"info","ts":1676052816.5335355,"logger":"tls.cache.maintenance","msg":"started background certificate maintenance","cache":"0x400017fb90"}
caddy  | {"level":"info","ts":1676052816.5346417,"logger":"tls","msg":"cleaning storage unit","description":"FileStorage:/data/caddy"}
caddy  | {"level":"info","ts":1676052816.5347385,"logger":"http.log","msg":"server running","name":"srv0","protocols":["h1","h2","h3"]}
caddy  | {"level":"info","ts":1676052816.5353725,"msg":"autosaved config (load with --resume flag)","file":"/config/caddy/autosave.json"}
caddy  | {"level":"info","ts":1676052816.5354218,"msg":"serving initial configuration"}
caddy  | {"level":"info","ts":1676052816.5362368,"logger":"tls","msg":"finished cleaning storage units"}

Pihole logs:

pihole  | s6-rc: info: service s6rc-oneshot-runner: starting
pihole  | s6-rc: info: service s6rc-oneshot-runner successfully started
pihole  | s6-rc: info: service fix-attrs: starting
pihole  | s6-rc: info: service fix-attrs successfully started
pihole  | s6-rc: info: service legacy-cont-init: starting
pihole  | s6-rc: info: service legacy-cont-init successfully started
pihole  | s6-rc: info: service cron: starting
pihole  | s6-rc: info: service cron successfully started
pihole  | s6-rc: info: service _uid-gid-changer: starting
pihole  | s6-rc: info: service _uid-gid-changer successfully started
pihole  | s6-rc: info: service _startup: starting
pihole  |   [i] Starting docker specific checks & setup for docker pihole/pihole
pihole  |   [i] Setting capabilities on pihole-FTL where possible
pihole  |   [i] Applying the following caps to pihole-FTL:
pihole  |         * CAP_CHOWN
pihole  |         * CAP_NET_BIND_SERVICE
pihole  |         * CAP_NET_RAW
pihole  |         * CAP_NET_ADMIN
pihole  |   [i] Ensuring basic configuration by re-running select functions from basic-install.sh
pihole  |
pihole  |   [i] Installing configs from /etc/.pihole...
pihole  |   [i] Existing dnsmasq.conf found... it is not a Pi-hole file, leaving alone!
  [✓] Installed /etc/dnsmasq.d/01-pihole.conf
  [✓] Installed /etc/dnsmasq.d/06-rfc6761.conf
pihole  |
pihole  |   [i] Installing latest logrotate script...
pihole  |       [i] Existing logrotate file found. No changes made.
pihole  |   [i] Assigning password defined by Environment Variable
pihole  |   [✓] New password set
pihole  |   [i] Added ENV to php:
pihole  |                     "TZ" => "Berlin/Europe",
pihole  |                     "PIHOLE_DOCKER_TAG" => "",
pihole  |                     "PHP_ERROR_LOG" => "/var/log/lighttpd/error-pihole.log",
pihole  |                     "CORS_HOSTS" => "",
pihole  |                     "VIRTUAL_HOST" => "b07cc7a61259",
pihole  |   [i] Using IPv4 and IPv6
pihole  |   [i] Preexisting ad list /etc/pihole/adlists.list detected (exiting setup_blocklists early)
pihole  |   [i] Setting DNS servers based on PIHOLE_DNS_ variable
pihole  |   [i] Applying pihole-FTL.conf setting LOCAL_IPV4=192.168.1.105
pihole  |   [i] FTL binding to default interface: eth0
pihole  |   [i] Enabling Query Logging
pihole  |   [i] Testing lighttpd config: Syntax OK
pihole  |   [i] All config checks passed, cleared for startup ...
pihole  |   [i] Docker start setup complete
pihole  |
pihole  |   [i] pihole-FTL (no-daemon) will be started as pihole
pihole  |
pihole  | s6-rc: info: service _startup successfully started
pihole  | s6-rc: info: service pihole-FTL: starting
pihole  | s6-rc: info: service pihole-FTL successfully started
pihole  | s6-rc: info: service lighttpd: starting
pihole  | s6-rc: info: service lighttpd successfully started
pihole  | s6-rc: info: service _postFTL: starting
pihole  | s6-rc: info: service _postFTL successfully started
pihole  | s6-rc: info: service legacy-services: starting
pihole  |   Checking if custom gravity.db is set in /etc/pihole/pihole-FTL.conf
pihole  | s6-rc: info: service legacy-services successfully started
pihole  |   [i] Neutrino emissions detected...
  [✓] Pulling blocklist source list into range
pihole  |
  [✓] Preparing new gravity database
pihole  |   [i] Using libz compression
pihole  |
pihole  |   [i] Target: https://raw.githubusercontent.com/StevenBlack/hosts/master/hosts
  [✓] Status: Retrieval successful
pihole  |   [i] Imported 177888 domains, ignoring 1 non-domain entries
pihole  |       Sample of non-domain entries:
pihole  |         - 0.0.0.0
pihole  |   [i] List stayed unchanged
pihole  |
  [✓] Creating new gravity databases
  [✓] Storing downloaded domains in new gravity database
  [✓] Building tree
  [✓] Swapping databases
pihole  |   [✓] The old database remains available.
pihole  |   [i] Number of gravity domains: 177888 (177888 unique domains)
pihole  |   [i] Number of exact blacklisted domains: 0
pihole  |   [i] Number of regex blacklist filters: 0
pihole  |   [i] Number of exact whitelisted domains: 0
pihole  |   [i] Number of regex whitelist filters: 0
  [✓] Cleaning up stray matter
pihole  |
pihole  |   [✓] FTL is listening on port 53
pihole  |      [✓] UDP (IPv4)
pihole  |      [✓] TCP (IPv4)
pihole  |      [✓] UDP (IPv6)
pihole  |      [✓] TCP (IPv6)
pihole  |
pihole  |   [✓] Pi-hole blocking is enabled
pihole  |
pihole  |   Pi-hole version is v5.15.3 (Latest: v5.15.3)
pihole  |   AdminLTE version is v5.18.3 (Latest: v5.18.3)
pihole  |   FTL version is v5.20.1 (Latest: v5.20.1)
pihole  |   Container tag is: 2023.01.10
pihole  |

5. What I already tried:

  1. Restarting my router
  2. Hardcoding my Raspberry Pi’s IP as DNS in my devices.

I noticed that when I run nslookup without a second parameter on any client, it returns some IPv6 address as my DNS server:

nslookup pi.hole
Server:  UnKnown
Address:  2a02:3018:0:40ff::aaaa

*** UnKnown can't find pi.hole: Non-existent domain

But if I force it to use my Pi-hole as the DNS server, I get the correct output:

nslookup pihole.homelab 192.168.1.105
Server:  UnKnown
Address:  192.168.1.105

Name:    pihole.homelab
Address:  192.168.0.105
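
Since the unforced lookup goes to an IPv6 resolver rather than the Pi-hole, the next thing to check is which DNS servers the client is actually being handed (the router may be advertising its own IPv6 resolver alongside the Pi-hole). The nslookup output above looks like Windows, where the configured resolvers per adapter are shown by:

ipconfig /all

On a Linux client with systemd-resolved, the rough equivalent is:

resolvectl status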

6. Links to relevant resources:

Reply:

This isn’t a question about Caddy, it’s a question about Pi-hole and Docker. You’ll likely get better help in those forums instead. Caddy doesn’t have anything to do with resolving DNS queries, because Caddy isn’t a DNS server.
