Loading config when resource is unavailable

1. Caddy version (caddy version):


2. How I run Caddy:

a. System environment:


b. Command:

BASIC_AUTH_PASSWORD_HASHED=$(caddy hash-password --plaintext "$BASIC_AUTH_PASSWORD") caddy run --config /etc/caddy/Caddyfile.json

c. Service/unit/compose file:


FROM caddy:${CADDY_VERSION}-builder-alpine AS builder

RUN xcaddy build \
    --with github.com/caddy-dns/route53@${ROUTE53_VERSION} \
    --with github.com/gamalan/caddy-tlsredis@${TLSREDIS_VERSION}

FROM caddy:${CADDY_VERSION}-alpine
COPY --from=builder /usr/bin/caddy /usr/bin/caddy


COPY static/root /

CMD ["scaddy"]

d. My complete Caddyfile or JSON config:

    "admin": {
        "listen": ":2099",
        "config": {
            "load_interval": "60s",
            "load": {
                "module": "http",
                "timeout": "10s",
                "url": "http://localhost:5000/caddy-config"
    "apps": {
        "http": {
            "servers": {
                "local": {
                    "listen": [":8080"]

3. The problem I’m having:

We have a Caddy container that relies on another container to serve its initial config. If that resource is unavailable, Caddy exits and the container restarts. Is there a way to make it retry a few times before giving up with an error?

Once the fix for load_interval is released, will this crash the container on every interval if the resource goes down?

4. Error messages and/or full log output:

[redirects] {"level":"info","ts":1642723751.248971,"msg":"using provided configuration","config_file":"/etc/caddy/Caddyfile.json","config_adapter":""}
[redirects] {"level":"info","ts":1642723751.260327,"logger":"admin","msg":"admin endpoint started","address":"tcp/:2099","enforce_origin":true,"origins":["supervisor"]}
[redirects] {"level":"warn","ts":1642723751.260359,"logger":"admin","msg":"admin endpoint on open interface; host checking disabled","address":"tcp/:2099"}
[redirects] {"level":"info","ts":1642723751.262205,"caller":"caddy-tlsredis@v0.2.7-0.20210222032122-eb7b6bb5f8cb/storageredis.go:275","msg":"TLS Storage are using Redis, on certificates.common.idearium.local:6380"}
[redirects] {"level":"info","ts":1642723751.2701685,"logger":"http","msg":"server is listening only on the HTTPS port but has no TLS connection policies; adding one to enable TLS","server_name":"secure","https_port":443}
[redirects] {"level":"info","ts":1642723751.2717488,"logger":"http","msg":"enabling automatic HTTP->HTTPS redirects","server_name":"secure"}
[redirects] {"level":"info","ts":1642723751.2782104,"logger":"tls.cache.maintenance","msg":"started background certificate maintenance","cache":"0xc0002ce000"}
[redirects] {"level":"info","ts":1642723751.2844455,"logger":"tls","msg":"cleaning storage unit","description":"{\"Client\":{},\"ClientLocker\":{},\"Logger\":{},\"address\":\"certificates.common.idearium.local:6380\",\"host\":\"certificates.common.idearium.local\",\"port\":\"6380\",\"db\":0,\"username\":\"\",\"password\":\"\",\"timeout\":5,\"key_prefix\":\"caddytls\",\"value_prefix\":\"caddy-storage-redis\",\"aes_key\":\"\",\"tls_enabled\":false,\"tls_insecure\":true}"}
[redirects] run: loading initial config: loading new config: loading dynamic config from *caddyconfig.HTTPLoader: Get "http://supervisor:4000/caddy-redirects-config": dial tcp: lookup supervisor on no such host

5. What I already tried:

6. Links to relevant resources:

You’re probably better off using an entrypoint shell script to control this, retrying caddy run as many times as your setup needs.
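For example, a minimal sketch of such an entrypoint (the retry helper and the MAX_RETRIES/RETRY_DELAY variables are made-up names for illustration, not Caddy features):

```shell
#!/bin/sh
# entrypoint.sh (sketch): retry a command a few times before failing,
# so Caddy can survive its config source coming up slowly.

retry() {
    # retry <max-attempts> <delay-seconds> <command...>
    max="$1"; delay="$2"; shift 2
    attempt=1
    while [ "$attempt" -le "$max" ]; do
        if "$@"; then
            return 0
        fi
        echo "attempt $attempt/$max failed; retrying in ${delay}s" >&2
        attempt=$((attempt + 1))
        sleep "$delay"
    done
    return 1
}

# Run whatever command the container was given (the Docker CMD).
retry "${MAX_RETRIES:-5}" "${RETRY_DELAY:-5}" "$@"
```

You would copy this into the image and wire it up in the Dockerfile with ENTRYPOINT ["/entrypoint.sh"] followed by CMD ["caddy", "run", "--config", "/etc/caddy/Caddyfile.json"], so the CMD becomes the arguments the script retries.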

You could try this right now by building against the master branch. Just change your Dockerfile to use xcaddy build master, i.e. pass a git reference to the build command.
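Concretely, the builder stage from your Dockerfile would look something like this (a sketch only; the --with plugin lines are carried over unchanged from your existing Dockerfile):

```dockerfile
FROM caddy:builder-alpine AS builder

# Pass a git ref (here: master) as the first argument to xcaddy build
RUN xcaddy build master \
    --with github.com/caddy-dns/route53@${ROUTE53_VERSION} \
    --with github.com/gamalan/caddy-tlsredis@${TLSREDIS_VERSION}
```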

I’m not certain how that feature will behave, but I expect that for the initial config load, it expects to always find a config. That can probably be adjusted if it doesn’t work that way. You can open a feature request on GitHub, or write a PR if you want to take a crack at implementing it (probably quite simple).

