HTTPS for local domains - working with curl, but not with browsers

1. Caddy version (caddy version):


2. How I run Caddy:

I run Caddy in Docker, this is the docker-compose file:

version: "3.8"

services:
  caddy:
    image: caddy:2.4.1-alpine
    ports:
      - 80:80
      - 443:443
      - 2019:2019
    volumes:
      - ./caddy/Caddyfile:/etc/caddy/Caddyfile:ro
      - /usr/local/share/ca-certificates:/data/caddy/pki/authorities/local
      - ./:/var/www
    restart: unless-stopped

a. System environment:

Local development, docker 20.10.6 on Ubuntu 18.04

b. Command:

docker-compose up

or as inline:

$ docker run -d -p 80:80 -p 443:443 -p 2019:2019 \
    -v /home/code/caddy/Caddyfile:/etc/caddy/Caddyfile:ro \
    -v /usr/local/share/ca-certificates:/data/caddy/pki/authorities/local \
    -v /home/code:/var/www \
    caddy:2.4.1-alpine

d. My complete Caddyfile or JSON config:


:443 {
    tls internal {
        on_demand
    }
}

example.localhost {
    root * /var/www
}

3. The problem I’m having:

I want to have HTTPS for local domains. I’ve been using mkcert and it’s been working great. I was hoping to make it a little easier with Caddy and on-demand TLS.

Since I’m on Ubuntu, I’ve mounted the ca-certificates into the container so Caddy’s certificates are available on the host machine:

-v /usr/local/share/ca-certificates:/data/caddy/pki/authorities/local

I have confirmed that Caddy generates a root.crt and an intermediate.crt in this directory. After that, I ran

sudo update-ca-certificates 
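For anyone following along, the verification that curl (and update-ca-certificates) relies on can be reproduced with openssl. A throwaway sketch; it generates its own dummy root rather than touching the Caddy one:

```shell
# Demonstrates the check behind "SSL certificate verify ok": a certificate
# must chain to a root present in the trusted CA file.
# Uses a throwaway self-signed root, not the real Caddy CA.
set -e
dir=$(mktemp -d)
openssl req -x509 -newkey rsa:2048 -nodes -days 1 \
  -keyout "$dir/root.key" -out "$dir/root.pem" \
  -subj "/CN=Throwaway Local CA" 2>/dev/null
# A root verifies against a CA file containing itself:
openssl verify -CAfile "$dir/root.pem" "$dir/root.pem"
```

The same `openssl verify -CAfile` invocation, pointed at /etc/ssl/certs/ca-certificates.crt and the Caddy root, would confirm whether the system bundle picked it up.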

When I curl the URL, everything looks fine and it says SSL certificate verify ok:

curl --insecure -vvI https://example.localhost 2>&1
* Rebuilt URL to: https://example.localhost/
*   Trying ::1...
* Connected to example.localhost (::1) port 443 (#0)
* ALPN, offering h2
* ALPN, offering http/1.1
* successfully set certificate verify locations:
*   CAfile: /etc/ssl/certs/ca-certificates.crt
  CApath: /etc/ssl/certs
* TLSv1.3 (OUT), TLS handshake, Client hello (1):
* TLSv1.3 (IN), TLS handshake, Server hello (2):
* TLSv1.3 (IN), TLS Unknown, Certificate Status (22):
* TLSv1.3 (IN), TLS handshake, Unknown (8):
* TLSv1.3 (IN), TLS Unknown, Certificate Status (22):
* TLSv1.3 (IN), TLS handshake, Certificate (11):
* TLSv1.3 (IN), TLS Unknown, Certificate Status (22):
* TLSv1.3 (IN), TLS handshake, CERT verify (15):
* TLSv1.3 (IN), TLS Unknown, Certificate Status (22):
* TLSv1.3 (IN), TLS handshake, Finished (20):
* TLSv1.3 (OUT), TLS change cipher, Client hello (1):
* TLSv1.3 (OUT), TLS Unknown, Certificate Status (22):
* TLSv1.3 (OUT), TLS handshake, Finished (20):
* SSL connection using TLSv1.3 / TLS_AES_128_GCM_SHA256
* ALPN, server accepted to use h2
* Server certificate:
*  subject: [NONE]
*  start date: May 26 08:56:28 2021 GMT
*  expire date: May 26 20:56:28 2021 GMT
*  issuer: CN=Caddy Local Authority - ECC Intermediate
*  SSL certificate verify ok.
* Using HTTP2, server supports multi-use
* Connection state changed (HTTP/2 confirmed)
* Copying HTTP/2 data in stream buffer to connection buffer after upgrade: len=0
* TLSv1.3 (OUT), TLS Unknown, Unknown (23):
* TLSv1.3 (OUT), TLS Unknown, Unknown (23):
* TLSv1.3 (OUT), TLS Unknown, Unknown (23):
* Using Stream ID: 1 (easy handle 0x5639450e07f0)
* TLSv1.3 (OUT), TLS Unknown, Unknown (23):
> Host: example.localhost
> User-Agent: curl/7.58.0
> Accept: */*

4. Error messages and/or full log output:

But when I run it in the browser (Chromium, Brave, Firefox), it complains about an invalid CA:

I have confirmed that the second part of the chain is the content of the intermediate.crt generated by Caddy.

5. What I already tried:

This is pretty much all I’ve tried; I’m not sure how to proceed. My understanding is that the CA should be installed and trusted on the host machine, since curl is not having any issues accessing the URL over HTTPS.

Also, this is exactly how I did it with mkcert for another local domain (served with nginx) and it worked fine. The certificate generated by mkcert is also located in /usr/local/share/ca-certificates, like the ones generated by Caddy.

So any ideas would be appreciated. Thanks!

I can’t really recommend this. You risk clobbering the CA certs on your host machine.

You don’t need to make Caddy generate the root.crt directly into that directory either, since it has a long lifetime. You can just copy it from the storage to install it.
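As a sketch of that copy step (the ./caddy_data host path and the destination filename are assumptions; caddy/pki/authorities/local/root.crt is Caddy's default storage layout):

```shell
# Copy Caddy's root CA out of the data volume and install it on the host.
# ./caddy_data is a hypothetical host directory mounted at /data in the container.
data_dir="./caddy_data"
src="$data_dir/caddy/pki/authorities/local/root.crt"
# On a Debian/Ubuntu host you would then run (needs sudo, so only printed here):
echo "sudo cp $src /usr/local/share/ca-certificates/caddy-root.crt"
echo "sudo update-ca-certificates"
```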

You should have a volume for all of /data, else you risk losing important state stored there.

Browsers don’t all use the system trust store. Firefox, for example, manages its own (as part of NSS). Also, make sure to clear out all browser state; Chromium is known to have problems with caching certificates.

If you grab the Caddy binary and run it on your host machine, you can use the caddy trust command with sudo to have Caddy attempt to install it in all the trust stores it can find (pretty much exactly the same way as mkcert does it).

sudo XDG_DATA_HOME=/path/to/data caddy trust

The environment variable is there to override the path to Caddy’s storage location, because you need to point it to your Docker volume instead.
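To make the override concrete, a small sketch of the layout `caddy trust` expects under $XDG_DATA_HOME (the /tmp path is just for illustration):

```shell
# Caddy resolves its storage to $XDG_DATA_HOME/caddy, so with the override in
# place, `caddy trust` reads the root CA from the path printed below.
XDG_DATA_HOME=/tmp/caddy-demo
mkdir -p "$XDG_DATA_HOME/caddy/pki/authorities/local"
echo "$XDG_DATA_HOME/caddy/pki/authorities/local/root.crt"
```

So pointing XDG_DATA_HOME at the host directory backing the container's /data volume makes the host-side `caddy trust` find the same root CA the containerized Caddy generated.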


Now that is interesting, I actually didn’t know that! Checked Chromium’s trust store and the certificate is in fact not showing up there. Gave Firefox a few wobbles (had to restart it several times) and now I can access that domain over HTTPS :partying_face: - still no idea how to convince Chromium though, but that’s another issue.

So my main takeaway from your wonderful answer was: When running Caddy in Docker,

First run sudo XDG_DATA_HOME=/path/to/data caddy trust on the host, and then mount /path/to/data to /data within the container

Let Caddy generate everything within the container and then try to pull certs from there on your own, thinking you know what you’re doing

Thank you so much and have a great day!


Apparently Chromium uses NSS on Linux now

Well, more the other way around, really. You can let the Caddy inside the container generate the CA cert, then use the tool on the host to install it. That way it’s created according to the config in your container (though that would probably only make a difference if you changed settings on the pki app via JSON config).

You can let Caddy generate everything within the container (and that’s what I was suggesting earlier) but you need to run some tooling outside of the container to install it to your system trust (because when it’s in a container, Caddy’s isolated from the rest of the system, so it can’t do it).


Thank you so much for taking the time to clarify. I’ve been trying to wrap my head around it and to get it working properly for a few days now, so I really, really appreciate it!

So here’s basically what I ended up with (Linux / Ubuntu):

  1. Run caddy trust within the container and let it generate all the data and store it in the volume:
    docker run --name=caddy__install -v $(pwd)/docker/caddy/caddy_data:/data caddy:2.4.1-alpine caddy trust

  2. Copy the caddy binary from the container to the host
    docker cp caddy__install:/usr/bin/caddy $(pwd)/caddy

  3. Run caddy trust on the host with the volume containing the previously generated data set as the data home dir
    sudo XDG_DATA_HOME=$(pwd)/docker/caddy/caddy_data $(pwd)/caddy trust

  4. Update the CA certs (Ubuntu specific)
    sudo update-ca-certificates

Now, this kind of works ok, at least for curl and Firefox. As you’ve pointed out, Chromium uses NSS on Linux. In step 3, while running caddy trust on the host, I get the following output:

> define JAVA_HOME environment variable to use the Java trust
> certificate installed properly in NSS security databases
> certificate installed properly in linux trusts

So as far as I understand it, NSS should have been taken care of and therefore it should work in Chromium. But unfortunately it doesn’t. Also, I have no idea how to add it manually, so here’s what I came up with: Steps 1 and 2 are the same as above; then

  1. Copy the root cert generated by Caddy from the volume into a separate folder as rootCA.pem and change the permissions to make it readable:

    mkdir -p $(pwd)/certs

    sudo cp $(pwd)/docker/caddy/caddy_data/caddy/pki/authorities/local/root.crt $(pwd)/certs/rootCA.pem

    sudo chmod go+r $(pwd)/certs/rootCA.pem

  2. Run mkcert -install with the CAROOT pointing to that folder:
    CAROOT=$(pwd)/certs mkcert -install
    Uninstalling would work the same way:
    CAROOT=$(pwd)/certs mkcert -uninstall

This actually did the trick for me and made it work in Chromium.

I’m not overly excited about this solution since it still depends on mkcert and makes assumptions about the way Caddy handles things, but I think it’s the least “adventurous” one.
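For what it’s worth, the mkcert dependency could probably be dropped by talking to NSS directly with certutil (from the libnss3-tools package on Ubuntu). A hedged sketch; the database path, trust flags, and nickname are assumptions:

```shell
# Build the certutil command for importing a root CA into the per-user NSS
# database that Chromium reads on Linux. Printed rather than executed, since
# libnss3-tools may not be installed; -t "C,," marks it as a trusted TLS CA.
pem="$PWD/certs/rootCA.pem"     # the root copied out of Caddy's storage
nssdb="sql:$HOME/.pki/nssdb"    # default per-user NSS database location
echo "certutil -d $nssdb -A -t C,, -n caddy-local-ca -i $pem"
```

Removing it again would be `certutil -d "$nssdb" -D -n caddy-local-ca`.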

Thanks again, for the help and for maintaining this really, really cool project! :heavy_heart_exclamation:


Thanks for sharing your solution for other searchers!

Which is kinda funny, since Caddy actually uses a very slight fork of mkcert as the code for managing the trust store. In other words, Caddy’s trust store code is almost == mkcert.

Ah, looks like mkcert has a couple more NSS DB locations in their script that haven’t been ported to the smallstep one (which Caddy uses):


I opened an issue for that:


Holy smokes, you guys are killing it!

Really nice catch. I probably should have mentioned earlier that I’m using Chromium from Snap; it totally slipped my mind. It all makes sense now.

Anyway, hope it’s getting merged soon. Thanks again, and also thanks @matt for checking in!


This topic was automatically closed after 30 days. New replies are no longer allowed.