Caddy stops working suddenly

I have been self-hosting my password manager (vaultwarden behind caddy) for over 3 months. The pair had been running gracefully until last week, when I suddenly could no longer connect to vaultwarden after a system restart. Upon further investigation I can see that caddy reports "could not get certificate from issuer" on startup. I cannot recall making any change to the *.yml files or to the setup as a whole, except updating ubuntu.

1. Caddy version (caddy version):

I am not sure which one is running. Following the instructions from vaultwarden, I downloaded a copy of a custom-built caddy server for duckdns. But at the same time, I put "caddy:latest" in my docker-compose.yml. So which executable is actually being run?

2. How I run Caddy:

docker-compose

a. System environment:

NAME="Ubuntu"
VERSION="20.04.3 LTS (Focal Fossa)"
PRETTY_NAME="Ubuntu 20.04.3 LTS"
VERSION_ID="20.04"


Docker version 20.10.10, build b485636

b. Command:

docker-compose up

c. Service/unit/compose file:

version: '3'

services:
  bitwarden:
    image: vaultwarden/server:latest
    container_name: bitwarden
    restart: always
    environment:
      - TZ=America/Toronto
      - EXTENDED_LOGGING=true
      - ROCKET_PORT=8080
      - WEBSOCKET_ENABLED=true  # Enable WebSocket notifications.
      - SIGNUPS_ALLOWED=false
      - LOG_FILE=/bw/bitwarden.log
    volumes:
      - ${HOME}/MyDocker/vaultwarden/:/bw
      - ${HOME}/MyDocker/vaultwarden/data:/data

  caddy:
    image: caddy:latest
    container_name: caddy
    restart: always
    ports:
      - 80:80  # Needed for the ACME HTTP-01 challenge.
      - 443:443
    volumes:
      - ${HOME}/MyDocker/caddy/caddy-duckdns:/usr/bin/caddy  # Your custom build of Caddy. 
      - ${HOME}/MyDocker/caddy/Caddyfile:/etc/caddy/Caddyfile:ro
      - ${HOME}/MyDocker/caddy/config:/config
      - ${HOME}/MyDocker/caddy/data:/data
      - ${HOME}/MyDocker/caddy:/caddy
    environment:
      - ACME_AGREE=true
      - DOMAIN=95daap8.duckdns.org  # Your domain.
      - EMAIL=myEmail@gmail.com       # The email address to use for ACME registration.
      - DUCKDNS_TOKEN=<<<my-duckdns-token>>>         # Your Duck DNS token.
      - LOG_FILE=/caddy/access.log

d. My complete Caddyfile or JSON config:

bitwarden.{$DOMAIN} {
  log {
    level INFO
    output file {$LOG_FILE} {
      roll_size 10MB
      roll_keep 10
    }
  }
  
  # Use the ACME DNS-01 challenge to get a cert for the configured domain.
  tls {
     dns duckdns {$DUCKDNS_TOKEN}
     ca https://acme-staging-v02.api.letsencrypt.org/directory
  }

  encode gzip

  reverse_proxy /notifications/hub bitwarden:3012

  reverse_proxy bitwarden:8080
}

3. The problem I’m having:

Vaultwarden no longer responds. If I open the URL in a browser, I get "connection has timed out". Using the Bitwarden Firefox extension, I cannot sync with the server.

4. Error messages and/or full log output:

Starting bitwarden ... done
Starting caddy     ... done
Attaching to caddy, bitwarden
caddy        | {"level":"info","ts":1636256028.4099035,"msg":"using provided configuration","config_file":"/etc/caddy/Caddyfile","config_adapter":"caddyfile"}
caddy        | {"level":"warn","ts":1636256028.412603,"msg":"input is not formatted with 'caddy fmt'","adapter":"caddyfile","file":"/etc/caddy/Caddyfile","line":2}
caddy        | {"level":"info","ts":1636256028.4146504,"logger":"admin","msg":"admin endpoint started","address":"tcp/localhost:2019","enforce_origin":false,"origins":["localhost:2019","[::1]:2019","127.0.0.1:2019"]}
caddy        | {"level":"info","ts":1636256028.4162095,"logger":"tls.cache.maintenance","msg":"started background certificate maintenance","cache":"0xc00037c000"}
caddy        | {"level":"info","ts":1636256028.416628,"logger":"http","msg":"server is listening only on the HTTPS port but has no TLS connection policies; adding one to enable TLS","server_name":"srv0","https_port":443}
caddy        | {"level":"info","ts":1636256028.416657,"logger":"http","msg":"enabling automatic HTTP->HTTPS redirects","server_name":"srv0"}
caddy        | {"level":"info","ts":1636256028.4174325,"logger":"tls","msg":"cleaning storage unit","description":"FileStorage:/data/caddy"}
caddy        | {"level":"info","ts":1636256028.417518,"logger":"http","msg":"enabling automatic TLS certificate management","domains":["95daap8.duckdns.org"]}
caddy        | {"level":"info","ts":1636256028.4183562,"msg":"autosaved config (load with --resume flag)","file":"/config/caddy/autosave.json"}
caddy        | {"level":"info","ts":1636256028.4183762,"msg":"serving initial configuration"}
caddy        | {"level":"info","ts":1636256028.4187615,"logger":"tls.obtain","msg":"acquiring lock","identifier":"95daap8.duckdns.org"}
caddy        | {"level":"info","ts":1636256028.4187849,"logger":"tls","msg":"finished cleaning storage units"}
caddy        | {"level":"info","ts":1636256028.487794,"logger":"tls.obtain","msg":"lock acquired","identifier":"95daap8.duckdns.org"}
caddy        | {"level":"info","ts":1636256028.5119073,"logger":"tls.issuance.acme","msg":"waiting on internal rate limiter","identifiers":["95daap8.duckdns.org"],"ca":"https://acme-staging-v02.api.letsencrypt.org/directory","account":""}
caddy        | {"level":"info","ts":1636256028.5119457,"logger":"tls.issuance.acme","msg":"done waiting on internal rate limiter","identifiers":["95daap8.duckdns.org"],"ca":"https://acme-staging-v02.api.letsencrypt.org/directory","account":""}
bitwarden    | /--------------------------------------------------------------------\
bitwarden    | |                        Starting Vaultwarden                        |
bitwarden    | |                           Version 1.23.0                           |
bitwarden    | |--------------------------------------------------------------------|
bitwarden    | | This is an *unofficial* Bitwarden implementation, DO NOT use the   |
bitwarden    | | official channels to report bugs/features, regardless of client.   |
bitwarden    | | Send usage/configuration questions or feature requests to:         |
bitwarden    | |   https://vaultwarden.discourse.group/                             |
bitwarden    | | Report suspected bugs/issues in the software itself at:            |
bitwarden    | |   https://github.com/dani-garcia/vaultwarden/issues/new            |
bitwarden    | \--------------------------------------------------------------------/
bitwarden    | 
bitwarden    | [INFO] No .env file found.
bitwarden    | 
bitwarden    | [2021-11-06 23:33:48.604][start][INFO] Rocket has launched from http://0.0.0.0:8080
bitwarden    | [2021-11-06 23:33:48.602][parity_ws][INFO] Listening for new connections on 0.0.0.0:3012.
caddy        | {"level":"warn","ts":1636256038.523879,"logger":"tls.issuance.acme.acme_client","msg":"HTTP request failed; retrying","url":"https://acme-staging-v02.api.letsencrypt.org/directory","error":"performing request: Get \"https://acme-staging-v02.api.letsencrypt.org/directory\": dial tcp: lookup acme-staging-v02.api.letsencrypt.org on 127.0.0.11:53: read udp 127.0.0.1:48018->127.0.0.11:53: i/o timeout"}
caddy        | {"level":"warn","ts":1636256048.7757137,"logger":"tls.issuance.acme.acme_client","msg":"HTTP request failed; retrying","url":"https://acme-staging-v02.api.letsencrypt.org/directory","error":"performing request: Get \"https://acme-staging-v02.api.letsencrypt.org/directory\": dial tcp: lookup acme-staging-v02.api.letsencrypt.org on 127.0.0.11:53: read udp 127.0.0.1:49970->127.0.0.11:53: i/o timeout"}
caddy        | {"level":"warn","ts":1636256059.027217,"logger":"tls.issuance.acme.acme_client","msg":"HTTP request failed; retrying","url":"https://acme-staging-v02.api.letsencrypt.org/directory","error":"performing request: Get \"https://acme-staging-v02.api.letsencrypt.org/directory\": dial tcp: lookup acme-staging-v02.api.letsencrypt.org on 127.0.0.11:53: read udp 127.0.0.1:53588->127.0.0.11:53: i/o timeout"}
caddy        | {"level":"error","ts":1636256059.0273416,"logger":"tls.obtain","msg":"could not get certificate from issuer","identifier":"95daap8.duckdns.org","issuer":"acme-staging-v02.api.letsencrypt.org-directory","error":"[95daap8.duckdns.org] creating new order: provisioning client: performing request: Get \"https://acme-staging-v02.api.letsencrypt.org/directory\": dial tcp: lookup acme-staging-v02.api.letsencrypt.org on 127.0.0.11:53: read udp 127.0.0.1:53588->127.0.0.11:53: i/o timeout (ca=https://acme-staging-v02.api.letsencrypt.org/directory)"}
caddy        | {"level":"error","ts":1636256059.0273826,"logger":"tls.obtain","msg":"will retry","error":"[95daap8.duckdns.org] Obtain: [95daap8.duckdns.org] creating new order: provisioning client: performing request: Get \"https://acme-staging-v02.api.letsencrypt.org/directory\": dial tcp: lookup acme-staging-v02.api.letsencrypt.org on 127.0.0.11:53: read udp 127.0.0.1:53588->127.0.0.11:53: i/o timeout (ca=https://acme-staging-v02.api.letsencrypt.org/directory)","attempt":1,"retrying_in":60,"elapsed":30.53955218,"max_duration":2592000}
caddy        | {"level":"warn","ts":1636256129.042533,"logger":"tls.issuance.acme.acme_client","msg":"HTTP request failed; retrying","url":"https://acme-staging-v02.api.letsencrypt.org/directory","error":"performing request: Get \"https://acme-staging-v02.api.letsencrypt.org/directory\": dial tcp: lookup acme-staging-v02.api.letsencrypt.org on 127.0.0.11:53: read udp 127.0.0.1:59699->127.0.0.11:53: i/o timeout"}
caddy        | {"level":"warn","ts":1636256139.3001022,"logger":"tls.issuance.acme.acme_client","msg":"HTTP request failed; retrying","url":"https://acme-staging-v02.api.letsencrypt.org/directory","error":"performing request: Get \"https://acme-staging-v02.api.letsencrypt.org/directory\": dial tcp: lookup acme-staging-v02.api.letsencrypt.org on 127.0.0.11:53: read udp 127.0.0.1:51208->127.0.0.11:53: i/o timeout"}

5. What I already tried:

Using https://portchecker.co/ I verified that port 443 is open but port 80 is not. I am guessing that the automatic HTTP->HTTPS redirection somehow failed (due to the failure to get a certificate?).
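
(To rule out the host side, I suppose I could also check what is actually listening on the machine that runs docker, something like this, assuming ss is available; I have not tried that yet:

sudo ss -tlnp | grep -E ':80 |:443 '
)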

On the machine that runs docker, if I browse to "https://localhost", I get:

Secure Connection Failed
Error code: SSL_ERROR_INTERNAL_ERROR_ALERT

I have verified that the bitwarden container is serving properly on port 8080. If I run "curl http://172.24.0.3:8080", I get the HTML for the bitwarden main page:

<!doctype html><html><head>…

I did “curl -v 95daap8.duckdns.org” and this is the response:

*   Trying 192.168.1.55:80...
* TCP_NODELAY set
* Connected to 95daap8.duckdns.org (192.168.1.55) port 80 (#0)
> GET / HTTP/1.1
> Host: 95daap8.duckdns.org
> User-Agent: curl/7.68.0
> Accept: */*
> 
* Mark bundle as not supporting multiuse
< HTTP/1.1 308 Permanent Redirect
< Connection: close
< Location: https://95daap8.duckdns.org/
< Server: Caddy
< Date: Sun, 07 Nov 2021 04:08:30 GMT
< Content-Length: 0
< 
* Closing connection 0

I also did "curl -v https://acme-staging-v02.api.letsencrypt.org/directory" and this is the response:

*   Trying 172.65.46.172:443...
* TCP_NODELAY set
* Connected to acme-staging-v02.api.letsencrypt.org (172.65.46.172) port 443 (#0)
* ALPN, offering h2
* ALPN, offering http/1.1
* successfully set certificate verify locations:
*   CAfile: /etc/ssl/certs/ca-certificates.crt
  CApath: /etc/ssl/certs
* TLSv1.3 (OUT), TLS handshake, Client hello (1):
* TLSv1.3 (IN), TLS handshake, Server hello (2):
* TLSv1.3 (IN), TLS handshake, Encrypted Extensions (8):
* TLSv1.3 (IN), TLS handshake, Certificate (11):
* TLSv1.3 (IN), TLS handshake, CERT verify (15):
* TLSv1.3 (IN), TLS handshake, Finished (20):
* TLSv1.3 (OUT), TLS change cipher, Change cipher spec (1):
* TLSv1.3 (OUT), TLS handshake, Finished (20):
* SSL connection using TLSv1.3 / TLS_AES_256_GCM_SHA384
* ALPN, server accepted to use h2
* Server certificate:
*  subject: CN=acme-staging.api.letsencrypt.org
*  start date: Oct 17 04:58:17 2021 GMT
*  expire date: Jan 15 04:58:16 2022 GMT
*  subjectAltName: host "acme-staging-v02.api.letsencrypt.org" matched cert's "acme-staging-v02.api.letsencrypt.org"
*  issuer: C=US; O=Let's Encrypt; CN=R3
*  SSL certificate verify ok.
* Using HTTP2, server supports multi-use
* Connection state changed (HTTP/2 confirmed)
* Copying HTTP/2 data in stream buffer to connection buffer after upgrade: len=0
* Using Stream ID: 1 (easy handle 0x55d4ee567c80)
> GET /directory HTTP/2
> Host: acme-staging-v02.api.letsencrypt.org
> user-agent: curl/7.68.0
> accept: */*
> 
* TLSv1.3 (IN), TLS handshake, Newsession Ticket (4):
* TLSv1.3 (IN), TLS handshake, Newsession Ticket (4):
* old SSL session ID is stale, removing
* Connection state changed (MAX_CONCURRENT_STREAMS == 128)!
< HTTP/2 200 
< server: nginx
< date: Sun, 07 Nov 2021 04:10:39 GMT
< content-type: application/json
< content-length: 822
< cache-control: public, max-age=0, no-cache
< x-frame-options: DENY
< strict-transport-security: max-age=604800
< 
{
  "f-JdEK0g4Mo": "https://community.letsencrypt.org/t/adding-random-entries-to-the-directory/33417",
  "keyChange": "https://acme-staging-v02.api.letsencrypt.org/acme/key-change",
  "meta": {
    "caaIdentities": [
      "letsencrypt.org"
    ],
    "termsOfService": "https://letsencrypt.org/documents/LE-SA-v1.2-November-15-2017.pdf",
    "website": "https://letsencrypt.org/docs/staging-environment/"
  },
  "newAccount": "https://acme-staging-v02.api.letsencrypt.org/acme/new-acct",
  "newNonce": "https://acme-staging-v02.api.letsencrypt.org/acme/new-nonce",
  "newOrder": "https://acme-staging-v02.api.letsencrypt.org/acme/new-order",
  "renewalInfo": "https://acme-staging-v02.api.letsencrypt.org/get/draft-aaron-ari/renewalInfo/",
  "revokeCert": "https://acme-staging-v02.api.letsencrypt.org/acme/revoke-cert"
* Connection #0 to host acme-staging-v02.api.letsencrypt.org left intact

This really puzzles me, because things had been running perfectly for more than 3 months until last week.
I don't really want to revert to writing passwords on paper, so please, can some guru shed some light on this?

6. Links to relevant resources:

You can check the version by running this command:

docker-compose exec caddy caddy version

I don't recommend doing it that way (mounting a custom binary over /usr/bin/caddy in the container). Instead, you should follow the instructions on Docker Hub to write a Dockerfile that builds Caddy with the plugins you need. See the section called "Adding custom Caddy modules".

You don't need ACME_AGREE=true anymore; that was specific to Caddy v1. Simply using Caddy v2 implicitly opts you into Let's Encrypt's terms of service. The Caddy project got permission from them to do that.

Interesting, so the errors actually show that Caddy isn't able to make DNS lookups to resolve the Let's Encrypt domain to an IP address.

This might be an issue with your Docker setup, where it isn’t properly set up to forward DNS requests. I have no idea why that would be the case, but it’s definitely a problem with DNS resolution in Docker, or on your system.
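
If it does turn out to be the Docker daemon failing to forward DNS, one common mitigation (just a sketch; the resolver IPs are examples, not something specific to your setup) is to pin upstream resolvers for the daemon in /etc/docker/daemon.json and then restart it:

{
  "dns": ["1.1.1.1", "8.8.8.8"]
}

sudo systemctl restart docker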

Thank you for your prompt response.

v2.4.3

How am I supposed to do that? I downloaded Go and installed xcaddy. I then downloaded the code at https://github.com/caddy-dns/duckdns, but what should I do next? The documentation doesn't say much about how to build the binary.
The section "Building your own caddy image" talks about using the :builder image as a shortcut, but how do I do that? Do I need to create a file with the content

FROM caddy... 
:
:
COPY --from=builder...

And finally, how do I invoke the build? Please excuse my ignorance, but I am really a newbie here.

The machine that runs docker connects to the modem+router supplied by my ISP. I have set the DNS to 1.1.1.1 on the router. Is that an issue? On the machine itself, nslookup does resolve my domain to my public IP properly. I'm not sure how I can verify that inside the container, though.
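
Edit: I guess something like this would test name resolution from inside the container (assuming the Alpine-based caddy image ships busybox's nslookup; I haven't verified that), but I have yet to try it:

docker-compose exec caddy nslookup acme-staging-v02.api.letsencrypt.org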

Docker uses a file format called the Dockerfile to define how an image is built. It’s how all the images you find on Docker Hub are created. See the docs: https://docs.docker.com/engine/reference/builder/

You're meant to make a file named Dockerfile; you can put it beside your Caddyfile. Then, instead of image: caddy:latest in your docker-compose.yml, use build: ${HOME}/MyDocker/caddy, i.e. point it at the directory containing the Dockerfile (as shown below). The next time you run docker-compose up -d, your custom build of Caddy will be built from that Dockerfile.
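
In other words, roughly this (a sketch; the short build syntax takes the context directory, and Compose looks for a file named Dockerfile inside it):

  caddy:
    build: ${HOME}/MyDocker/caddy  # Directory containing your Dockerfile.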

You actually didn't need to do any of those things, because the Docker image build automates all those steps. Like I said, use the Dockerfile found on Docker Hub in the section "Adding custom Caddy modules" as the basis for writing your own with just the plugins you need.
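
It boils down to something like this (a sketch following the Docker Hub example; pin the version tags however you like):

FROM caddy:builder AS builder

RUN xcaddy build \
    --with github.com/caddy-dns/duckdns

FROM caddy:latest

COPY --from=builder /usr/bin/caddy /usr/bin/caddy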

Thank you, Francis.
After issuing a whole bunch of chmod and chown commands, I managed to get caddy to build.

Here is my Dockerfile:

FROM caddy:2.4.5-builder AS builder

RUN xcaddy build \
     --with github.com/caddy-dns/duckdns

FROM caddy:2.4.5

COPY --from=builder /usr/local/bin ~/MyDocker/caddy/duckdns

But there is an error when I run docker-compose up:

caddy        | {"level":"info","ts":1636430758.171165,"msg":"using provided configuration","config_file":"/etc/caddy/Caddyfile","config_adapter":"caddyfile"}
caddy        | run: adapting config using caddyfile: parsing caddyfile tokens for 'tls': /etc/caddy/Caddyfile:13 - Error during parsing: getting module named 'dns.providers.duckdns': module not registered: dns.providers.duckdns

How do I register dns.providers.duckdns?
Also, the build process creates a lot of containers along the way. Is this normal?

The error you're seeing about the missing duckdns module is due to this line in your Dockerfile:

COPY --from=builder /usr/local/bin ~/MyDocker/caddy/duckdns

It should be:

COPY --from=builder /usr/bin/caddy /usr/bin/caddy

For the DNS issue, I suspect you're experiencing slow DNS due to a stateful firewall. I found this reported issue, which sounds similar to what you're experiencing.

Can you try the workaround of specifying the DNS servers in the docker-compose file?
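
Something like this under the caddy service (a sketch; the resolver IPs are examples, use whatever upstream you prefer):

  caddy:
    dns:
      - 1.1.1.1
      - 8.8.8.8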

Oh, thank you all for the help. I've gotten tied up a little and have yet to try these suggestions.
I will definitely report back if it works.

Still having an issue with registering the module:

bitwarden    | [2021-11-12 12:56:45.196][parity_ws][INFO] Listening for new connections on 0.0.0.0:3012.
bitwarden    | [2021-11-12 12:56:45.198][start][INFO] Rocket has launched from http://0.0.0.0:8080
caddy        | {"level":"info","ts":1636739804.9633737,"msg":"using provided configuration","config_file":"/etc/caddy/Caddyfile","config_adapter":"caddyfile"}
caddy        | run: adapting config using caddyfile: parsing caddyfile tokens for 'tls': /etc/caddy/Caddyfile:12 - Error during parsing: getting module named 'dns.providers.duckdns': module not registered: dns.providers.duckdns
caddy exited with code 1

Do I need to rebuild caddy? And if so, how?

Is your docker-compose.yml using the custom-built caddy image?

I am sorry, Mohammed; I was away over the past weekend.
Here is my current docker-compose.yml:

  caddy:
    container_name: caddy
    restart: always
    ports:
      - 80:80  # Needed for the ACME HTTP-01 challenge.
      - 443:443
    volumes:
      - ${HOME}/MyDocker/caddy/Caddyfile:/etc/caddy/Caddyfile:ro
      - ${HOME}/MyDocker/caddy/config:/config
      - ${HOME}/MyDocker/caddy/data:/data
      - ${HOME}/MyDocker/caddy:/caddy
    build:
      context: ${HOME}/MyDocker/caddy
      dockerfile: Dockerfile
    environment:
      - DOMAIN=95daap8.duckdns.org  # Your domain.
      - EMAIL=myemail@gmail.com       # The email address to use for ACME registration.
      - DUCKDNS_TOKEN=<<my Duck DNS token>>

And this is my Dockerfile:

FROM caddy:builder AS builder

RUN xcaddy build \
     --with github.com/caddy-dns/duckdns

FROM caddy

COPY --from=builder /usr/bin/caddy /usr/bin/caddy

My question is: how does docker-compose.yml know where to load my custom build from?

You specified build in your docker-compose.yml; that tells Compose which Dockerfile to build from. When the build runs, it will tag the image with the project name (by default, the directory your docker-compose file is in) + the service name (caddy in this case).

You can run docker images to see all the images you have pulled or built locally, and you should see something like project_caddy (I have no idea what directory your docker-compose.yml is in, so I can't really guess what the project name will be).

If the image is already built, then docker-compose won't try to build it again the next time you run the stack. You can force a rebuild with docker-compose build <service>, so in this case docker-compose build caddy.
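
So after changing the Dockerfile, roughly:

docker-compose build caddy
docker-compose up -d

(up -d will recreate the container once the image has changed.)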

You should remember to periodically pull the latest dependent images from Docker Hub, though. You can do that with docker-compose pull, though I forget whether that will read from Dockerfiles and pull their base images.
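
If I remember right, this flag makes the build step re-pull the base images named in the Dockerfile's FROM lines (worth double-checking against the docs for your Compose version):

docker-compose build --pull caddy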

1 Like

Thank you very much.
After the image was rebuilt, Caddy started working flawlessly. There was no need to work around the DNS issue in the docker-compose file; things just magically started working.
Thank you once again, you wonderful people.
