My SSL certificate gets renewed every time I restart the Docker container/service

Hi there,

I’m running Caddy inside a Docker container.
Its version is: v2.5.2 h1:eCJdLyEyAGzuQTa5Mh3gETnYWDClo1LjtQm2q9RNZrs=

The config looks like this:

example.com {
  tls name@gmail.com

  respond /status 200
  ...
}

The data for the caddy container is stored in /data [using XDG_DATA_HOME].
The config for the caddy container is stored in /config [using XDG_CONFIG_HOME].

The problem I have is that if I restart the container multiple times, I’ll no longer receive a cert [which makes sense, there is a request limit]. So the question is: why am I requesting a new cert if I already store the certs in volumes at the /data and /config paths and the cert is not expired?

What can I do to avoid requesting new certificates every time the container restarts?

regards,

Please follow our recommended Docker Compose setup. You need to persist the data volume.
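A minimal sketch of that setup (volume names here are illustrative; /etc/caddy/Caddyfile is the official image’s default config path):

```yaml
services:
  caddy:
    image: caddy:2
    restart: unless-stopped
    ports:
      - "80:80"
      - "443:443"
    volumes:
      - ./Caddyfile:/etc/caddy/Caddyfile
      - caddy_data:/data      # certificates and ACME account keys live here
      - caddy_config:/config  # autosaved config

volumes:
  caddy_data:
  caddy_config:
```

The named volumes survive container recreation, so certificates are reused instead of re-issued.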

Please make sure to use the latest version of Caddy, you’re using a very old version. Don’t use outdated software.

Please fill out the help topic template as per the forum rules, if you need more help.


If the problem with requesting new SSL certificates is not the Caddy version, then the problem is somewhere else.
I mentioned that I have volumes mounted into /config and /data, so the data there will be persistent.

So it may be related to the Caddy version after all? [I’ll try with a new version of it]

Later edit:
My Dockerfile looks something like this:

FROM alpine:3.19.1

ENV XDG_DATA_HOME=/data
ENV XDG_CONFIG_HOME=/config

WORKDIR /dallas
COPY ./ /dallas

RUN set -eux && \
    apk add --no-cache libcap nss-tools && \
    adduser -u 1000 -g dallas -s /bin/false --disabled-password dallas && chown dallas:dallas -R /dallas && \
    setcap CAP_NET_BIND_SERVICE=+eip /dallas/caddy
USER dallas

EXPOSE 8080 8443

ENTRYPOINT ["sh", "/dallas/entrypoint.sh"]

Also, something I noticed: if I look at the Docker volume mount points, all the files get a new timestamp every time I update/restart the container.
Not sure why, but shouldn’t it just read the data and only renew the cert if it’s old?

This is the log from Docker:

web.1.9jynotlddy2j@dallas-web    | {"level":"info","ts":1711103631.7776015,"msg":"using provided configuration","config_file":"/tmpapp/CaddyfileForContainer","config_adapter":""}
web.1.9jynotlddy2j@dallas-web    | {"level":"warn","ts":1711103631.7829993,"msg":"Caddyfile input is not formatted; run the 'caddy fmt' command to fix inconsistencies","adapter":"caddyfile","file":"/tmpapp/CaddyfileForContainer","line":9}
web.1.9jynotlddy2j@dallas-web    | {"level":"info","ts":1711103631.7906928,"logger":"admin","msg":"admin endpoint started","address":"tcp/localhost:2019","enforce_origin":false,"origins":["//localhost:2019","//[::1]:2019","//127.0.0.1:2019"]}
web.1.9jynotlddy2j@dallas-web    | {"level":"info","ts":1711103631.791112,"logger":"http","msg":"server is listening only on the HTTPS port but has no TLS connection policies; adding one to enable TLS","server_name":"srv0","https_port":8443}
web.1.9jynotlddy2j@dallas-web    | {"level":"info","ts":1711103631.7911465,"logger":"tls.cache.maintenance","msg":"started background certificate maintenance","cache":"0xc00018abd0"}
web.1.9jynotlddy2j@dallas-web    | {"level":"info","ts":1711103631.7911527,"logger":"http","msg":"enabling automatic HTTP->HTTPS redirects","server_name":"srv0"}
web.1.9jynotlddy2j@dallas-web    | {"level":"info","ts":1711103631.7943547,"logger":"http","msg":"enabling automatic TLS certificate management","domains":["testing.dallas.domain"]}
web.1.9jynotlddy2j@dallas-web    | {"level":"info","ts":1711103631.7943547,"logger":"tls","msg":"cleaning storage unit","description":"FileStorage:/data/caddy"}
web.1.9jynotlddy2j@dallas-web    | {"level":"info","ts":1711103631.7947476,"logger":"tls","msg":"finished cleaning storage units"}
web.1.9jynotlddy2j@dallas-web    | {"level":"info","ts":1711103631.794944,"msg":"autosaved config (load with --resume flag)","file":"/config/caddy/autosave.json"}
web.1.9jynotlddy2j@dallas-web    | {"level":"info","ts":1711103631.795419,"msg":"serving initial configuration"}
web.1.9jynotlddy2j@dallas-web    | {"level":"info","ts":1711103631.7954814,"logger":"tls.obtain","msg":"acquiring lock","identifier":"testing.dallas.domain"}
web.1.9jynotlddy2j@dallas-web    | {"level":"info","ts":1711103631.7960086,"logger":"tls.obtain","msg":"lock acquired","identifier":"testing.dallas.domain"}
web.1.9jynotlddy2j@dallas-web    | {"level":"info","ts":1711103632.5222049,"logger":"tls.issuance.acme","msg":"waiting on internal rate limiter","identifiers":["testing.dallas.domain"],"ca":"https://acme-v02.api.letsencrypt.org/directory","account":"cip@cip.com"}
web.1.9jynotlddy2j@dallas-web    | {"level":"info","ts":1711103632.5223773,"logger":"tls.issuance.acme","msg":"done waiting on internal rate limiter","identifiers":["testing.dallas.domain"],"ca":"https://acme-v02.api.letsencrypt.org/directory","account":"cip@cip.com"}
web.1.9jynotlddy2j@dallas-web    | {"level":"info","ts":1711103632.9809399,"logger":"tls.issuance.acme.acme_client","msg":"trying to solve challenge","identifier":"testing.dallas.domain","challenge_type":"http-01","ca":"https://acme-v02.api.letsencrypt.org/directory"}
web.1.9jynotlddy2j@dallas-web    | {"level":"info","ts":1711103633.3344157,"logger":"tls.issuance.acme","msg":"served key authentication","identifier":"testing.dallas.domain","challenge":"http-01","remote":"10.0.0.2:62184","distributed":false}
web.1.9jynotlddy2j@dallas-web    | {"level":"info","ts":1711103633.4569151,"logger":"tls.issuance.acme","msg":"served key authentication","identifier":"testing.dallas.domain","challenge":"http-01","remote":"10.0.0.2:50891","distributed":false}
web.1.9jynotlddy2j@dallas-web    | {"level":"info","ts":1711103633.4941962,"logger":"tls.issuance.acme","msg":"served key authentication","identifier":"testing.dallas.domain","challenge":"http-01","remote":"10.0.0.2:53996","distributed":false}
web.1.9jynotlddy2j@dallas-web    | {"level":"info","ts":1711103633.50208,"logger":"tls.issuance.acme","msg":"served key authentication","identifier":"testing.dallas.domain","challenge":"http-01","remote":"10.0.0.2:14208","distributed":false}
web.1.9jynotlddy2j@dallas-web    | {"level":"info","ts":1711103633.9570894,"logger":"tls.issuance.acme.acme_client","msg":"validations succeeded; finalizing order","order":"https://acme-v02.api.letsencrypt.org/acme/order/1631734407/254431675367"}
web.1.9jynotlddy2j@dallas-web    | {"level":"info","ts":1711103635.5285325,"logger":"tls.issuance.acme.acme_client","msg":"successfully downloaded available certificate chains","count":2,"first_url":"https://acme-v02.api.letsencrypt.org/acme/cert/0437b2389d522921135d98a548020657841c"}
web.1.9jynotlddy2j@dallas-web    | {"level":"info","ts":1711103635.5287716,"logger":"tls.obtain","msg":"certificate obtained successfully","identifier":"testing.dallas.domain"}
web.1.9jynotlddy2j@dallas-web    | {"level":"info","ts":1711103635.528783,"logger":"tls.obtain","msg":"releasing lock","identifier":"testing.dallas.domain"}

I was able to understand what is happening, but not why.
So: I’m creating a docker service …

If I remove the service and recreate it [using, of course, the volumes for /data and /config mounted into it], the log looks like this:

web.1.xj6g4mpqt5tu@dallas-web    | {"level":"info","ts":1711104500.6704786,"msg":"using provided configuration","config_file":"/tmpapp/CaddyfileForContainer","config_adapter":"caddyfile"}
web.1.xj6g4mpqt5tu@dallas-web    | {"level":"warn","ts":1711104500.6768174,"msg":"Caddyfile input is not formatted; run the 'caddy fmt' command to fix inconsistencies","adapter":"caddyfile","file":"/tmpapp/CaddyfileForContainer","line":9}
web.1.xj6g4mpqt5tu@dallas-web    | {"level":"info","ts":1711104500.6808946,"logger":"admin","msg":"admin endpoint started","address":"tcp/localhost:2019","enforce_origin":false,"origins":["//localhost:2019","//[::1]:2019","//127.0.0.1:2019"]}
web.1.xj6g4mpqt5tu@dallas-web    | {"level":"info","ts":1711104500.6812854,"logger":"tls.cache.maintenance","msg":"started background certificate maintenance","cache":"0xc00042aee0"}
web.1.xj6g4mpqt5tu@dallas-web    | {"level":"info","ts":1711104500.6814542,"logger":"http","msg":"server is listening only on the HTTPS port but has no TLS connection policies; adding one to enable TLS","server_name":"srv0","https_port":8443}
web.1.xj6g4mpqt5tu@dallas-web    | {"level":"info","ts":1711104500.6814928,"logger":"http","msg":"enabling automatic HTTP->HTTPS redirects","server_name":"srv0"}
web.1.xj6g4mpqt5tu@dallas-web    | {"level":"info","ts":1711104500.684261,"logger":"tls","msg":"cleaning storage unit","description":"FileStorage:/data/caddy"}
web.1.xj6g4mpqt5tu@dallas-web    | {"level":"info","ts":1711104500.6850305,"logger":"tls","msg":"finished cleaning storage units"}
web.1.xj6g4mpqt5tu@dallas-web    | {"level":"info","ts":1711104500.685273,"logger":"http","msg":"enabling automatic TLS certificate management","domains":["testing.dallas.domain"]}
web.1.xj6g4mpqt5tu@dallas-web    | {"level":"info","ts":1711104500.6862576,"msg":"autosaved config (load with --resume flag)","file":"/config/caddy/autosave.json"}
web.1.xj6g4mpqt5tu@dallas-web    | {"level":"info","ts":1711104500.68628,"msg":"serving initial configuration"}
web.1.v5lx5gobf53y@dallas-web    | {"level":"info","ts":1711104854.2196288,"logger":"tls","msg":"cleaning storage unit","description":"FileStorage:/data/caddy"}
web.1.v5lx5gobf53y@dallas-web    | {"level":"info","ts":1711104854.220262,"logger":"tls","msg":"finished cleaning storage units"}

But if I force-update the service with docker service update --force <service name> [same issue if I just update the image with docker service update --image <image> <service name>], it gives the log from the post above, so it actually recreates the certificate.

Is there a reason it does that, and how can I bypass it?

@francislavoie any idea please?

regards,

I don’t know what dallas is. Why don’t you use our official Docker image? What does your docker run command look like, or what does your Docker Compose file look like? Are you sure you have a bind mount or volume for /data?

Of course I’m sure I have a volume for /data, same as for /config.
Dallas is just a folder and container name.

entrypoint looks like:

<code that will create Caddyfile>

./caddy run --config /tmp/Caddyfile

Let me give you a full example and you can try it yourself, as I suspect the problem is in Caddy.

Dockerfile:

FROM caddy:2.7.6-alpine

ENV CADDYPATH=/path
ENV XDG_DATA_HOME=/data
ENV XDG_CONFIG_HOME=/config

WORKDIR /dallas
COPY . .

RUN set -eux && \
    apk add --no-cache libcap nss-tools && \
    adduser -u 10654 -g dallas -s /bin/false --disabled-password dallas && chown dallas:dallas -R /dallas && \
    setcap CAP_NET_BIND_SERVICE=+eip /dallas/caddy
USER dallas

EXPOSE 8080 8443

ENTRYPOINT ["/dallas/entrypoint.sh"]

creating the image as:

docker build -t dallas:1 ./

entrypoint.sh

#!/bin/sh
echo "{ " > /tmp/Caddyfile
echo "  https_port 8443 " >> /tmp/Caddyfile
echo "  http_port 8080 " >> /tmp/Caddyfile
echo "} " >> /tmp/Caddyfile

echo "domain {" >> /tmp/Caddyfile
echo "  tls email@gmail.com" >> /tmp/Caddyfile
echo "" >> /tmp/Caddyfile
echo "  respond /private 403" >> /tmp/Caddyfile
echo "  respond / 200" >> /tmp/Caddyfile
echo "}" >> /tmp/Caddyfile

./caddy run --config /tmp/Caddyfile

creating the service:

docker service create --read-only --name dallas --publish 80:8080 --publish 443:8443 --mount type=tmpfs,destination=/tmp \
    --mount type=volume,source=caddy_path,destination=/path \
    --mount type=volume,source=caddy_config,destination=/config --mount type=volume,source=caddy_data,destination=/data \
    dallas:1

If I make changes to the Dockerfile and then update to dallas:2, it requests new certificates. If I do that more than 5 times/day, I end up with the website down.

Your entrypoint script is not correct: you’re not properly passing signals through to the process, so Caddy will not gracefully shut down when the container is stopped. Entrypoint scripts are tricky to get right.

I’m not sure why you’re building the Caddyfile in a script like this. Why not just write a Caddyfile and copy it into the container at build time, or at runtime with a volume?
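For example, a build-time copy is just a couple of lines (sketch; /etc/caddy/Caddyfile is the official image’s default config path, so no custom entrypoint is needed):

```dockerfile
FROM caddy:2.7.6-alpine
# The official image's default command is:
#   caddy run --config /etc/caddy/Caddyfile --adapter caddyfile
COPY Caddyfile /etc/caddy/Caddyfile
```

Or at runtime, mount it instead: `-v $PWD/Caddyfile:/etc/caddy/Caddyfile`.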

You don’t need these lines if you’re using the official Caddy image, we set those already. caddy-docker/2.7/alpine/Dockerfile at 5cc71afbebe1b590a9996c24e714e7b67ab53d03 · caddyserver/caddy-docker · GitHub

The issue might be that /data/caddy and /config/caddy are root-owned, so Caddy fails to write there? But Caddy’s logs would show an error if it failed to write. Check the volume to see if there are actually files in there.
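One way to check, using the caddy_data volume name from your service create command (the alpine tag here is just an example):

```
docker run --rm -v caddy_data:/data alpine:3.19 ls -laR /data/caddy
```

If certs were saved, you should see a certificates/ directory under /data/caddy (that’s the FileStorage path your logs show), owned by the UID Caddy runs as (10654 in your Dockerfile).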

I’m not seeing any evidence of an issue with Caddy, it seems like a problem with your Docker setup.


The volumes are created with access for the dallas user, and I do have data in them
[yes, if there were an issue with that, errors would appear in the logs …].

I need to build the Caddyfile on the fly, as it depends on certain scenarios. The one I added here is a simple example, but I do need to create it there.

Can you point me in the right direction with entrypoint.sh [running caddy with exec will not help with this problem]?

regards,

You can google “docker entrypoint pass through signals” or something to that effect; there are dozens of articles that explain it. (TL;DR: use exec to replace the current process with the given command)
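Concretely, only the last line of your entrypoint.sh needs to change: `exec ./caddy run --config /tmp/Caddyfile` instead of `./caddy run --config /tmp/Caddyfile`. With exec, caddy replaces the shell as PID 1 and receives SIGTERM directly. You can see the process-replacement behaviour with plain sh:

```shell
# With `exec`, the command reuses the calling shell's PID instead of
# running as a child process. The outer shell prints its PID, then execs
# an inner shell that prints its own PID; the two numbers match.
pids=$(sh -c 'echo $$; exec sh -c "echo \$\$"')
first=$(printf '%s\n' "$pids" | sed -n 1p)
second=$(printf '%s\n' "$pids" | sed -n 2p)
[ "$first" = "$second" ] && echo "exec reused the PID"
```

The same mechanism is why Docker delivers stop signals to caddy itself once it has replaced the entrypoint shell.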

Then I don’t understand what the problem is. I don’t see any evidence of a problem. Why do you think there’s a problem?

There is a problem because every time I update the image [docker service update --image], Caddy makes new requests for certificates. If I do more than 5 updates/day, I’ll end up hitting the Let’s Encrypt limit → the website goes down.

Okay, well like I said, that sounds like a Docker problem, not a Caddy problem. It sounds like your Docker volumes are being recreated or emptied when you do that. You’ll need to look into what’s happening in your Docker setup when you run that command.


Yeah … if I bind to a host folder instead of a Docker volume, the problem seems to not happen anymore :frowning:

Alright, then that’s pretty definitive proof that your service update command is wiping out the volumes. I don’t use that command myself. A quick google suggests you might need the --mount-add option to re-add the volume when the update is performed?

--mount-add is for containers, not for services.
I’ll probably stick with binding host folders into the container rather than using Docker volumes.
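For anyone landing here later, the bind-mount variant of my service create command looks something like this (host paths are illustrative; they must exist and be writable by the container user):

```
docker service create --read-only --name dallas \
    --publish 80:8080 --publish 443:8443 \
    --mount type=tmpfs,destination=/tmp \
    --mount type=bind,source=/srv/dallas/config,destination=/config \
    --mount type=bind,source=/srv/dallas/data,destination=/data \
    dallas:1
```

With bind mounts the cert data lives on the host filesystem, so a service update cannot recreate or empty it the way it apparently did with named volumes.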

thank you @francislavoie