Docker: Use a custom certificate and key

1. The problem I’m having:

Hi. I have written and deleted multiple versions of this question because it kept getting convoluted, even though what I'm asking is very simple.

I can run Caddy locally and use a custom certificate and key:
tls cert.pem key.pem

When I bind these into a Docker container using a Caddyfile, the logs say the certificates are loaded, but I cannot connect.

Is there a way to use a custom certificate and key with the latest official image?

2. Error messages and/or full log output:

Local Caddyfile

foo.test

reverse_proxy localhost:8080
tls ./certs/foo.test.pem ./certs/foo.test-key.pem

Caddyfile in Docker

foo.test {
	reverse_proxy / web:8080
	tls /root/certs/foo.test.pem /root/certs/foo.test-key.pem
}

docker-compose.yml

services:
  caddy:
    image: "arm64v8/caddy"
    volumes:
      - ./certs:/root/certs # to sync mkcert certificates to Caddy
      - ./Caddyfile:/etc/caddy/Caddyfile  # to mount custom Caddyfile
    ports:
      - "80:80"
      - "443:2019"
    depends_on:
      - web

  web:
    build: ./app

Log

 [+] Running 3/0
 ✔ Network caddy_default    Created                                                     0.0s 
 ✔ Container caddy-web-1    Created                                                     0.0s 
 ✔ Container caddy-caddy-1  Created                                                     0.0s 
Attaching to caddy-caddy-1, caddy-web-1
caddy-caddy-1  | {"level":"info","ts":1690036332.68093,"msg":"using provided configuration","config_file":"/etc/caddy/Caddyfile","config_adapter":"caddyfile"}
caddy-caddy-1  | {"level":"info","ts":1690036332.6823535,"logger":"admin","msg":"admin endpoint started","address":"localhost:2019","enforce_origin":false,"origins":["//localhost:2019","//[::1]:2019","//127.0.0.1:2019"]}
caddy-caddy-1  | {"level":"info","ts":1690036332.6827698,"logger":"tls.cache.maintenance","msg":"started background certificate maintenance","cache":"0x40003aed90"}
caddy-caddy-1  | {"level":"warn","ts":1690036332.6866915,"logger":"tls","msg":"stapling OCSP","error":"no OCSP stapling for [foo.test]: no OCSP server specified in certificate"}
caddy-caddy-1  | {"level":"info","ts":1690036332.6867504,"logger":"http","msg":"skipping automatic certificate management because one or more matching certificates are already loaded","domain":"foo.test","server_name":"srv0"}
caddy-caddy-1  | {"level":"info","ts":1690036332.686761,"logger":"http","msg":"enabling automatic HTTP->HTTPS redirects","server_name":"srv0"}
caddy-caddy-1  | {"level":"info","ts":1690036332.687039,"logger":"http","msg":"enabling HTTP/3 listener","addr":":443"}
caddy-caddy-1  | {"level":"info","ts":1690036332.687104,"msg":"failed to sufficiently increase receive buffer size (was: 208 kiB, wanted: 2048 kiB, got: 416 kiB). See https://github.com/quic-go/quic-go/wiki/UDP-Receive-Buffer-Size for details."}
caddy-caddy-1  | {"level":"info","ts":1690036332.687187,"logger":"http.log","msg":"server running","name":"srv0","protocols":["h1","h2","h3"]}
caddy-caddy-1  | {"level":"info","ts":1690036332.687213,"logger":"http.log","msg":"server running","name":"remaining_auto_https_redirects","protocols":["h1","h2","h3"]}
caddy-caddy-1  | {"level":"info","ts":1690036332.6873643,"logger":"tls","msg":"cleaning storage unit","description":"FileStorage:/data/caddy"}
caddy-caddy-1  | {"level":"info","ts":1690036332.68738,"msg":"autosaved config (load with --resume flag)","file":"/config/caddy/autosave.json"}
caddy-caddy-1  | {"level":"info","ts":1690036332.6873841,"msg":"serving initial configuration"}
caddy-caddy-1  | {"level":"info","ts":1690036332.687398,"logger":"tls","msg":"finished cleaning storage units"}

3. Caddy version:

v2.6.4 h1:2hwYqiRwk1tf3VruhMpLcYTg+11fCdr8S3jhNAdnPy8=

4. How I installed and ran Caddy:

The ports mapping "443:2019" is wrong. Caddy is listening on port 443 inside the container.

Port 2019 is for the admin endpoint, which you should not expose publicly.
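
For reference, the corrected mapping in docker-compose.yml would look something like this (only the ports section of the caddy service changes; the UDP line is optional and only needed if you want HTTP/3 to work through Docker):

services:
  caddy:
    ports:
      - "80:80"
      - "443:443"
      - "443:443/udp"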


Thank you. That solved it.

The codewithhugo article using the abiosoft image had ports: 443:2015, which worked, so I assumed that with the v2 image I had to listen on 443 on the host and forward to 2019 inside the container.

I just didn't understand what was happening.

Since I couldn't find a simple example, here is how to use your own PEM files generated with mkcert.
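
Assuming mkcert is already installed, the certificate and key referenced below can be generated roughly like this (mkcert names the files foo.test.pem and foo.test-key.pem by default, matching the paths mounted into the container):

mkcert -install            # one-time: install the local CA into the system trust store
mkdir -p certs && cd certs
mkcert foo.test            # writes foo.test.pem and foo.test-key.pem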

Add 127.0.0.1 foo.test to /etc/hosts

docker compose up

curl https://foo.test
Hello world

docker-compose.yml

services:
  caddy:
    image: "arm64v8/caddy"
    volumes:
      - ./certs:/root/certs
      - ./Caddyfile:/etc/caddy/Caddyfile
    ports:
      - "443:443"
    depends_on:
      - web
  web:
    build: ./app

Caddyfile

foo.test {
	reverse_proxy / web:8080
	tls /root/certs/foo.test.pem /root/certs/foo.test-key.pem
}
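
One caveat not mentioned in the thread: in Caddy v2 the leading / in "reverse_proxy / web:8080" is a path matcher that matches only the exact path /, so this site block proxies nothing else. The curl test above requests exactly /, which is why it works; to proxy every path, drop the matcher:

foo.test {
	reverse_proxy web:8080
	tls /root/certs/foo.test.pem /root/certs/foo.test-key.pem
}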

app Dockerfile

FROM node:10

WORKDIR /node/app

COPY ./index.js /node/app

ENV NODE_ENV production

ENV PORT 8080
EXPOSE 8080

CMD ["node", "index.js"]

app index.js

const http = require('http')

const server = http.createServer((req, res) => {
  res.writeHead(200, { 'Content-Type': 'text/plain' });
  res.end('Hello world');
});

server.listen(process.env.PORT) // PORT=8080 from the Dockerfile; Caddy proxies to web:8080

