Help with Caddyfile in docker

1. Caddy version (caddy version):

2.3.0

2. How I run Caddy:

in Docker on a Synology NAS

a. System environment:

Synology DSM7

b. Command:

I click the toggle on the container to start caddy

d. My complete Caddyfile or JSON config:

# The Caddyfile is an easy way to configure your Caddy web server.
#
# Unless the file starts with a global options block, the first
# uncommented line is always the address of your site.
#
# To use your own domain name (with automatic HTTPS), first make
# sure your domain's A/AAAA DNS records are properly pointed to
# this machine's public IP, then replace the line below with your
# domain name.
ian.gay

# Set this path to your site's directory.
root * /usr/share/caddy

# Enable the static file server.
file_server

hass.ian.gay {
    reverse_proxy 192.168.1.198:8123
}

syno.ian.gay {
    reverse_proxy 192.168.1.201:5001
}

3. The problem I’m having:

Neither reverse proxy works. The first errors out; the second shows the default Caddy homepage.

4. Error messages and/or full log output:

2021/04/26 21:22:02.543 INFO using provided configuration {"config_file": "/etc/caddy/Caddyfile", "config_adapter": "caddyfile"}
2021/04/26 21:22:02.547 INFO admin admin endpoint started {"address": "tcp/localhost:2019", "enforce_origin": false, "origins": ["localhost:2019", "[::1]:2019", "127.0.0.1:2019"]}
2021/04/26 21:22:02.548 INFO http server is listening only on the HTTP port, so no automatic HTTPS will be applied to this server {"server_name": "srv0", "http_port": 80}
2021/04/26 21:22:02.552 INFO tls.cache.maintenance started background certificate maintenance {"cache": "0xc000282000"}
2021/04/26 21:22:02.553 INFO tls cleaned up storage units
2021/04/26 21:22:02.582 INFO autosaved config {"file": "/config/caddy/autosave.json"}
2021/04/26 21:22:02.582 INFO serving initial configuration

5. What I already tried:

Multiple different Caddyfiles, and even other reverse proxies like Synology's built-in one and NGINX. My root domain points to a different machine, but many subdomains resolve here.

If you’re trying to serve more than one site, you must wrap all sites with braces.
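A sketch of how the original Caddyfile above could be restructured, assuming you still want ian.gay served as a static site from this machine (the domains, paths, and upstream addresses are taken from the config posted earlier in the thread):

```caddyfile
ian.gay {
	# Static file server for the main site
	root * /usr/share/caddy
	file_server
}

hass.ian.gay {
	reverse_proxy 192.168.1.198:8123
}

syno.ian.gay {
	reverse_proxy 192.168.1.201:5001
}
```

If ian.gay is actually served by a different machine, you can drop the first block entirely and keep only the subdomain blocks.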

I shortened it down to this, but that didn't work:

hass.ian.gay {
    reverse_proxy 192.168.1.198:8123
}

syno.ian.gay {
	reverse_proxy 192.168.1.201:5001
}

What do you mean by “didn’t work”? Are you sure you properly reloaded Caddy after your config changes? What’s in your logs now? What behaviour are you actually seeing?

hass loads a broken page (it loads the Caddy page over plain HTTP), and syno loads the default Caddy page. Yes, I restarted the Docker container. The logs are the same:

2021/04/26 21:09:39.666 INFO shutdown done {"signal": "SIGTERM"}
2021/04/26 21:09:39.666 INFO admin stopped previous server
2021/04/26 21:09:39.165 INFO tls.cache.maintenance stopped background certificate maintenance {"cache": "0xc000346000"}
2021/04/26 21:09:38.664 INFO shutting down apps then terminating {"signal": "SIGTERM"}
2021/04/26 21:09:19.822 INFO tls cleaned up storage units
2021/04/26 21:09:19.822 INFO serving initial configuration
2021/04/26 21:09:19.822 INFO autosaved config {"file": "/config/caddy/autosave.json"}
2021/04/26 21:09:19.814 INFO tls.cache.maintenance started background certificate maintenance {"cache": "0xc000346000"}
2021/04/26 21:09:19.814 INFO http server is listening only on the HTTP port, so no automatic HTTPS will be applied to this server {"server_name": "srv0", "http_port": 80}
2021/04/26 21:09:19.813 INFO admin admin endpoint started {"address": "tcp/localhost:2019", "enforce_origin": false, "origins": ["localhost:2019", "[::1]:2019", "127.0.0.1:2019"]}
2021/04/26 21:09:19.811 INFO using provided configuration {"config_file": "/etc/caddy/Caddyfile", "config_adapter": "caddyfile"}
2021/04/26 20:41:14 run: loading initial config: loading new config: http app module: start: tcp: listening on :80: listen tcp :80: bind: address already in use
2021/04/26 20:41:14.388 INFO http server is listening only on the HTTP port, so no automatic HTTPS will be applied to this server {"server_name": "srv0", "http_port": 80}
2021/04/26 20:41:14.387 INFO admin admin endpoint started {"address": "tcp/localhost:2019", "enforce_origin": false, "origins": ["localhost:2019", "[::1]:2019", "127.0.0.1:2019"]}
2021/04/26 20:41:14.380 INFO using provided configuration {"config_file": "/etc/caddy/Caddyfile", "config_adapter": "caddyfile"}

That should be impossible if you actually updated your Caddyfile to remove root and file_server. You must have missed a step somewhere.

How are you running your Docker container?

I removed it from /config/Caddyfile, and also from /config/caddy/Caddyfile, because I wasn't sure which one it was looking at. I'm using Synology's Docker package.

Weird. I made sure it was gone from both, restarted the container, and it's still showing the Caddy page in an incognito window.

Your Caddyfile should be mounted to /etc/caddy/Caddyfile. See the log output, it shows the path to the config it loaded.

Please read the docs on Docker Hub
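For reference, a minimal sketch of the mounts the official image expects, written as a docker-compose file (in Synology's Docker UI these correspond to entries on the container's volume tab; the /volume1/docker/caddy/... host paths are only an assumption for illustration):

```yaml
version: "3.7"
services:
  caddy:
    image: caddy:2
    restart: unless-stopped
    ports:
      - "80:80"
      - "443:443"
    volumes:
      # Your Caddyfile must end up at this exact path inside the container
      - /volume1/docker/caddy/Caddyfile:/etc/caddy/Caddyfile
      # Persist certificates and state across container restarts
      - /volume1/docker/caddy/data:/data
      - /volume1/docker/caddy/config:/config
```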

That did it for most of them, but there's one that's still not working: I get a "too many redirects" error on the Synology subdomain. Any idea why?

Not without more information. You really need to elaborate here. I can’t make assumptions, you need to show me exactly what you’re seeing.

Apologies, here’s the error I’m seeing:
[screenshot: browser "too many redirects" error]

I have the built-in Synology Web Station uninstalled, no reverse proxy options are set, and the mustache file that controls the ports on the Synology was changed from 80/443 to 81/444 using the one-liner in this post

I don't know what's trying to redirect it. For context, the Synology web interface is hosted on 5000 (HTTP) and 5001 (HTTPS), and 5000 redirects to 5001, but I'm not sure why 443 is redirecting to 5001 in a way that creates a loop. (I've tried clearing my cookies and using incognito.)

I'm not totally clear on what's going on. Are you saying there's a redirect from https://syno.ian.gay to https://syno.ian.gay:5001 going on?

If 5001 is Synology's HTTPS port, you need to configure Caddy to proxy over HTTPS; by default it proxies over HTTP, because proxying over HTTPS is typically unnecessary once the boundary into your LAN has been crossed. Alternatively, proxy over HTTP to port 5000, which I assume is its HTTP port. Ideally that port isn't configured to do HTTP->HTTPS redirects; if it is, you'll need to play with the Synology configuration to make it stop.
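A minimal sketch of the HTTPS-upstream variant for the Synology site (upstream address from the config above; tls_insecure_skip_verify is only needed if DSM serves a self-signed certificate on 5001, and it disables certificate verification for that proxy, so use it only for a trusted LAN upstream):

```caddyfile
syno.ian.gay {
	reverse_proxy https://192.168.1.201:5001 {
		transport http {
			tls_insecure_skip_verify
		}
	}
}
```

Alternatively, point it at port 5000 over plain HTTP (reverse_proxy 192.168.1.201:5000), provided DSM isn't forcing its own HTTP->HTTPS redirect on that port.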

This might help you understand:


I’m trying to use that guide but I also found out that this affects most of the other redirects - when I added my full suite of apps, 3 more have the same issue of the “too many redirects” screen. I didn’t have this issue before so I’m not sure what’s causing it, there’s no other software doing any redirects. Do I need to match those rules more specifically than they are now?

Bumping this up to see if anyone else has any ideas. I had this working for a few days and then it stopped again without warning; I didn't make any changes or update my Pi. Now everything is broken, not just the Synology redirect.

I get this when running:

pi@raspberrypi:/etc/caddy $ caddy run
2021/05/12 20:18:42.713 INFO    using adjacent Caddyfile
2021/05/12 20:18:42.725 WARN    input is not formatted with 'caddy fmt' {"adapter": "caddyfile", "file": "Caddyfile", "line": 2}
2021/05/12 20:18:42.731 INFO    admin   admin endpoint started  {"address": "tcp/localhost:2019", "enforce_origin": false, "origins": ["localhost:2019", "[::1]:2019", "127.0.0.1:2019"]}
2021/05/12 20:18:42.734 INFO    tls.cache.maintenance   started background certificate maintenance      {"cache": "0x40ae050"}
2021/05/12 20:18:42.735 INFO    http    server is listening only on the HTTPS port but has no TLS connection policies; adding one to enable TLS {"server_name": "srv0", "https_port": 443}
2021/05/12 20:18:42.736 INFO    http    enabling automatic HTTP->HTTPS redirects        {"server_name": "srv0"}
2021/05/12 20:18:42.741 INFO    tls     cleaning storage unit   {"description": "FileStorage:/home/pi/.local/share/caddy"}
2021/05/12 20:18:42.742 INFO    http    enabling automatic TLS certificate management   {"domains": ["deluge.ian.gay", "hass.ian.gay", "syno.ian.gay", "tautulli.ian.gay", "sonarr.ian.gay", "radarr.ian.gay", "lidarr.ian.gay"]}
2021/05/12 20:18:42.753 INFO    tls     finished cleaning storage units
2021/05/12 20:18:42.788 INFO    autosaved config (load with --resume flag)      {"file": "/home/pi/.config/caddy/autosave.json"}
2021/05/12 20:18:42.790 INFO    serving initial configuration

and this is my caddyfile:

pi@raspberrypi:/etc/caddy $ cat Caddyfile
hass.ian.gay {
        reverse_proxy 192.168.1.198:8123
}
syno.ian.gay {
        reverse_proxy 192.168.1.201:5555
}
sonarr.ian.gay {
        reverse_proxy 192.168.1.201:8989
}
radarr.ian.gay {
        reverse_proxy 192.168.1.201:7878
}
lidarr.ian.gay {
        reverse_proxy 192.168.1.201:8686
}
tautulli.ian.gay {
        reverse_proxy 192.168.1.201:8181
}
deluge.ian.gay {
        reverse_proxy 192.168.1.201:8112
}

What do you mean by “it stopped again”? Be specific. What are the symptoms?

Connections will time out:
[screenshot: browser connection timed out error]

Edit: I also switched to a dedicated Raspberry Pi for Caddy after initially posting, and it did work at first.

When troubleshooting a timeout error, here are the usual suspects:

  • Is the app running? (Check the Caddy process on the server and tail its logs to ensure nothing untoward is happening and preventing it from accepting connections)
  • Is DNS configured correctly? (Check the IP address of the server and the IP address that the hostname resolves to)
  • Is the firewall configured correctly? (Ensure the server is not dropping those packets)

Gah, my DDNS stopped working and my IP changed. Thank you!


Ahh. A haiku for you, in commiseration:
