Dockerize Caddy with existing SSL certificate

1. The problem I’m having:

We already have a running Caddy server that handles multiple tenants. We also have our own SSL certificate for this, so that each tenant gets valid SSL as well (yes, we use “on_demand_tls” to verify that each subdomain is valid).

We want to containerize it and run it on AWS ECS, but we’re not sure what to do with the SSL certificate we already have. I found a thread, but it didn’t say anything about the implementation itself (link below).

We thought we could use Redis as storage for the other domains, but we’re not sure how to migrate our existing certificate. Any help would be appreciated.

2. Error messages and/or full log output:


3. Caddy version:

4. How I installed and ran Caddy:

a. System environment:

b. Command:


c. Service/unit/compose file:


d. My complete Caddy config:


5. Links to relevant resources:

A similar case:

Redis module that we thought we could use:

You don’t need to migrate certs. You can just let Caddy reissue new ones.

Reissuing should be fine as long as you don’t have thousands of certificates; if you do, it would just take a while to get through all the certs due to rate limiting.

You should use a storage plugin if you plan to run multiple Caddy instances, so that they all share the same storage and can coordinate on issuing certs.

I see. So I can just drop the tls line for our main domain and let Caddy do the rest?

Aside from that, we also allow our tenants (around 10,000 users) to use their own custom domains. We already have their certificates; is it okay to let Caddy handle them, or is that too much for the rate limiting?

You didn’t share your config, and you didn’t fill out the help topic template, so we’re not on the same page here. Please do so; as it stands, I don’t know what you mean.

Caddy issues certs at a rate of 10 certs per 10 seconds. See Automatic HTTPS — Caddy Documentation

So it would take at best 10,000 seconds (about 2.8 hours) to get through all of them.

If you already have certs for all of those with Caddy, then you can use the caddy storage export command to export your storage, and then run caddy storage import on your new instance to load them into your new storage. Then swap DNS records over to your new one.
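
A sketch of that migration, assuming Caddy v2.7 or newer on both sides (the paths and tarball name here are placeholders):

# On the old instance: dump certs, keys, and ACME account data from its configured storage
caddy storage export --config /etc/caddy/Caddyfile --output caddy-storage.tar

# On the new instance: load the tarball into whatever storage its config points at (e.g. Redis)
caddy storage import --config /etc/caddy/Caddyfile --input caddy-storage.tar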

Sorry about the config; this is the config we currently use:

{
	admin 0.0.0.0:2020
	# http_port 2015
	# ocsp_stapling off
	# debug

	on_demand_tls {
		ask https://api.zzz.id/domain-checker
	}

	storage redis {
		host {env.REDIS_HOST}
		port {env.REDIS_PORT}
		# username {env.REDIS_USERNAME}
		password {env.REDIS_PASSWORD}
		db 1
		timeout 5
		key_prefix "caddytls"
		value_prefix "caddy-storage-redis"
		tls_enabled {env.REDIS_TLS_ENABLED}
		tls_insecure {env.REDIS_TLS_INSECURE}
		# aes_key ""
	}
}

*.zzz.id {
	respond /notfound 404

	encode gzip

	tls /etc/caddy/ssl-2024/zzz.bundle.crt /etc/caddy/ssl-2024/zzz.key

	reverse_proxy /storefront/* https://api.zzz.id {
		header_up oo-api-key zzzzzzzzz
	}

	reverse_proxy /page/* zzz.ap-southeast-1.elasticbeanstalk.com {
		header_up Host {host}
		header_up X-Real-IP {header.X-Forwarded-For}
		header_up Access-Control-Allow-Methods "POST, GET, OPTIONS"
		header_up Access-Control-Allow-Headers "*"
		header_down Access-Control-Allow-Origin "*"
	}

	reverse_proxy /* {
		to zzz.zzz.zzz.zzz:2015
		to zzz.zzz.zzz.zzz:2015
		to zzz.zzz.zzz.zzz:2015

		lb_policy least_conn
		fail_duration 30s

		header_up Host {host}
		header_up X-Real-IP {header.X-Forwarded-For}
		header_up Access-Control-Allow-Origin *
		header_up Access-Control-Allow-Methods "GET, POST, PUT, PATCH, OPTIONS, DELETE"
		header_down Access-Control-Allow-Origin *
		header_down Access-Control-Allow-Methods "GET, POST, PUT, PATCH, OPTIONS, DELETE"
	}

	header /js {
		Cache-Control "public, max-age=31536000;"
	}

	header /css {
		Cache-Control "public, max-age=31536000;"
	}

	header /favicon.ico {
		Cache-Control "public, max-age=31536000;"
	}

	log {
		output file /var/log/caddy/access.log
		format console
	}
}

:443 {
	respond /notfound 404

	encode gzip

	tls zzz@gmail.com {
		on_demand
	}

	reverse_proxy /storefront/* https://api.zzz.id {
		header_up oo-api-key zzzzzzzzz
	}

	reverse_proxy /page/* zzz.ap-southeast-1.elasticbeanstalk.com {
		header_up Host {host}
		header_up X-Real-IP {header.X-Forwarded-For}
		header_up Access-Control-Allow-Methods "POST, GET, OPTIONS"
		header_up Access-Control-Allow-Headers "*"
		header_down Access-Control-Allow-Origin "*"
	}

	reverse_proxy /* {
		to zzz.zzz.zzz.zzz:2015
		to zzz.zzz.zzz.zzz:2015
		to zzz.zzz.zzz.zzz:2015

		lb_policy least_conn
		fail_duration 30s

		header_up Host {host}
		header_up X-Real-IP {header.X-Forwarded-For}
	}

	header /js {
		Cache-Control "public, max-age=31536000;"
	}

	header /css {
		Cache-Control "public, max-age=31536000;"
	}

	header /favicon.ico {
		Cache-Control "public, max-age=31536000;"
	}

	log {
		output file /var/log/caddy/access.log
		format console
	}
}

Using https:// pointing to a domain served by Caddy itself can be problematic: if Caddy doesn’t yet have a valid certificate for that domain, the connection would fail, so nothing would be able to get certificates (possibly including Caddy itself, if On-Demand is used for that domain as well).

A better solution (for now; we may revise this in a later Caddy version) is to have a site block on another port, for local use only, like :5000 or something, which only sends the request to your upstream app:

{
	on_demand_tls {
		ask http://localhost:5000/domain-checker
	}
}

:5000 {
	reverse_proxy <your-upstreams>
}
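
For reference, the contract for the ask endpoint is simple: Caddy sends a GET request with the candidate hostname in the domain query parameter, and only a 2xx status permits issuance. Below is a minimal sketch in Go of what the upstream app behind that :5000 block would implement; the allowedDomain lookup and the :8080 port are hypothetical, and yours would check the tenant database:

package main

import (
	"log"
	"net/http"
)

// allowedDomain is a hypothetical stand-in for a lookup against your tenant table.
func allowedDomain(domain string) bool {
	return domain == "tenant1.zzz.id" // replace with a real check
}

func main() {
	// Caddy calls this as: GET /domain-checker?domain=<hostname>
	http.HandleFunc("/domain-checker", func(w http.ResponseWriter, r *http.Request) {
		if allowedDomain(r.URL.Query().Get("domain")) {
			w.WriteHeader(http.StatusOK) // 2xx: allow issuance for this domain
			return
		}
		w.WriteHeader(http.StatusForbidden) // non-2xx: deny
	})
	log.Fatal(http.ListenAndServe(":8080", nil))
}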

I’m confused. You already had Redis set up? Your original post made it sound like you weren’t using a storage plugin.

If so, then you won’t need any kind of migration. Just point your new Caddy instance at the same storage and you’re good to go.

That said, I suggest using GitHub - pberkel/caddy-storage-redis instead. It’s a rewrite of gamalan’s version with many significant improvements. But it stores data differently, so you would need to migrate (export + import) if you switch to it.
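
Since you’re containerizing anyway, the usual way to bake a plugin like that into your image is a custom build. A sketch using the official builder image (module path as published on that repo):

FROM caddy:2-builder AS builder
RUN xcaddy build \
	--with github.com/pberkel/caddy-storage-redis

FROM caddy:2
COPY --from=builder /usr/bin/caddy /usr/bin/caddy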

This is strange. Shouldn’t you change your app to read from X-Forwarded-For instead? That header is more of a de facto standard than X-Real-IP.
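
(If changing the app isn’t feasible, something like header_up X-Real-IP {remote_host} would at least send the connecting client’s IP rather than echoing a request header; {remote_host} is the Caddyfile placeholder for the client address.)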

These don’t make sense: those are CORS response headers, but you’re setting them on the request going upstream. You should remove them (saving some bytes on every request).

Path matchers in Caddy are exact-match, so this will only match a request to exactly /js, which I don’t think ever happens.

You would need to use /js* as your matcher instead.

You could also just merge all three of those into one matcher:

@assets path /js* /css* /favicon.ico
header @assets Cache-Control "public, max-age=31536000;"

This is weird. Why would anyone request the path /notfound? I don’t understand the intent of this; you can probably remove it.

I tried to change my configuration based on your suggestions, but I got another error that says “no solvers available”:

2024/02/09 08:11:04.401 INFO    using provided configuration    {"config_file": "/etc/caddy/Caddyfile", "config_adapter": ""}
2024/02/09 08:11:04.405 INFO    admin   admin endpoint started  {"address": "0.0.0.0:2020", "enforce_origin": false, "origins": ["//0.0.0.0:2020"]}
2024/02/09 08:11:04.405 WARN    admin   admin endpoint on open interface; host checking disabled        {"address": "0.0.0.0:2020"}
2024/02/09 08:11:04.407 INFO    caddy.storage.redis     Provision Redis simple storage using address [zzz.zzz.zzz.zzz:6379]
2024/02/09 08:11:04.408 INFO    http.auto_https server is listening only on the HTTPS port but has no TLS connection policies; adding one to enable TLS {"server_name": "srv0", "https_port": 443}
2024/02/09 08:11:04.408 INFO    http.auto_https enabling automatic HTTP->HTTPS redirects        {"server_name": "srv0"}
2024/02/09 08:11:04.408 INFO    tls.cache.maintenance   started background certificate maintenance      {"cache": "0xc000674080"}
2024/02/09 08:11:04.409 INFO    http    enabling HTTP/3 listener        {"addr": ":443"}
2024/02/09 08:11:04.410 INFO    http.log        server running  {"name": "srv0", "protocols": ["h1", "h2", "h3"]}
2024/02/09 08:11:04.410 INFO    http.log        server running  {"name": "remaining_auto_https_redirects", "protocols": ["h1", "h2", "h3"]}
2024/02/09 08:11:04.410 INFO    http    enabling automatic TLS certificate management   {"domains": ["*.domain.id"]}
2024/02/09 08:11:04.411 INFO    autosaved config (load with --resume flag)      {"file": "/root/.config/caddy/autosave.json"}
2024/02/09 08:11:04.411 INFO    serving initial configuration
2024/02/09 08:11:04.413 WARN    tls     storage cleaning happened too recently; skipping for now        {"storage": "{\"client_type\":\"simple\",\"address\":[\"zzz.zzz.zzz.zzz:6379\"],\"host\":[],\"port\":[],\"db\":0,\"timeout\":\"5\",\"username\":\"\",\"password\":\"REDACTED\",\"master_name\":\"\",\"key_prefix\":\"caddy\",\"encryption_key\":\"\",\"compression\":false,\"tls_enabled\":false,\"tls_insecure\":true,\"tls_server_certs_pem\":\"\",\"tls_server_certs_path\":\"\",\"route_by_latency\":false,\"route_randomly\":false}", "instance": "df8a8ade-1145-402b-b190-dddb6f4d03b2", "try_again": "2024/02/10 08:11:04.413", "try_again_in": 86399.99999972}
2024/02/09 08:11:04.413 INFO    tls     finished cleaning storage units
2024/02/09 08:11:04.415 INFO    tls.obtain      acquiring lock  {"identifier": "*.domain.id"}
2024/02/09 08:11:04.415 INFO    tls.obtain      lock acquired   {"identifier": "*.domain.id"}
2024/02/09 08:11:04.416 INFO    tls.obtain      obtaining certificate   {"identifier": "*.domain.id"}
2024/02/09 08:11:04.419 INFO    tls     waiting on internal rate limiter        {"identifiers": ["*.domain.id"], "ca": "https://acme-v02.api.letsencrypt.org/directory", "account": ""}
2024/02/09 08:11:04.419 INFO    tls     done waiting on internal rate limiter   {"identifiers": ["*.domain.id"], "ca": "https://acme-v02.api.letsencrypt.org/directory", "account": ""}
2024/02/09 08:11:05.836 ERROR   tls.obtain      could not get certificate from issuer   {"identifier": "*.domain.id", "issuer": "acme-v02.api.letsencrypt.org-directory", "error": "[*.domain.id] solving challenges: *.domain.id: no solvers available for remaining challenges (configured=[tls-alpn-01 http-01] offered=[dns-01] remaining=[dns-01]) (order=https://acme-v02.api.letsencrypt.org/acme/order/1561474067/243062089317) (ca=https://acme-v02.api.letsencrypt.org/directory)"}
2024/02/09 08:11:05.839 INFO    tls     waiting on internal rate limiter        {"identifiers": ["*.domain.id"], "ca": "https://acme.zerossl.com/v2/DV90", "account": "caddy@zerossl.com"}
2024/02/09 08:11:05.839 INFO    tls     done waiting on internal rate limiter   {"identifiers": ["*.domain.id"], "ca": "https://acme.zerossl.com/v2/DV90", "account": "caddy@zerossl.com"}
2024/02/09 08:11:10.406 ERROR   tls.obtain      could not get certificate from issuer   {"identifier": "*.domain.id", "issuer": "acme.zerossl.com-v2-DV90", "error": "[*.domain.id] solving challenges: *.domain.id: no solvers available for remaining challenges (configured=[http-01 tls-alpn-01] offered=[dns-01] remaining=[dns-01]) (order=https://acme.zerossl.com/v2/DV90/order/O_vP1fkCy0MTcbcsql5AGg) (ca=https://acme.zerossl.com/v2/DV90)"}
2024/02/09 08:11:10.406 ERROR   tls.obtain      will retry      {"error": "[*.domain.id] Obtain: [*.domain.id] solving challenges: *.domain.id: no solvers available for remaining challenges (configured=[http-01 tls-alpn-01] offered=[dns-01] remaining=[dns-01]) (order=https://acme.zerossl.com/v2/DV90/order/O_vP1fkCy0MTcbcsql5AGg) (ca=https://acme.zerossl.com/v2/DV90)", "attempt": 1, "retrying_in": 60, "elapsed": 5.990234034, "max_duration": 2592000}

So I installed the required module (Route53). I can see it when I run caddy list-modules. Now it returns the error below:

2024/02/09 09:01:41.676 INFO    using provided configuration    {"config_file": "/etc/caddy/Caddyfile", "config_adapter": ""}
2024/02/09 09:01:41.680 WARN    Caddyfile input is not formatted; run 'caddy fmt --overwrite' to fix inconsistencies    {"adapter": "caddyfile", "file": "/etc/caddy/Caddyfile", "line": 146}
2024/02/09 09:01:41.681 INFO    admin   admin endpoint started  {"address": "0.0.0.0:2020", "enforce_origin": false, "origins": ["//0.0.0.0:2020"]}
2024/02/09 09:01:41.681 WARN    admin   admin endpoint on open interface; host checking disabled        {"address": "0.0.0.0:2020"}
2024/02/09 09:01:41.683 INFO    caddy.storage.redis     Provision Redis simple storage using address [ZZZ.ZZZ.ZZZ.ZZZ:6379]
2024/02/09 09:01:41.683 INFO    tls.cache.maintenance   started background certificate maintenance      {"cache": "0xc0003b8b00"}
2024/02/09 09:01:41.683 INFO    http.auto_https server is listening only on the HTTPS port but has no TLS connection policies; adding one to enable TLS {"server_name": "srv0", "https_port": 443}
2024/02/09 09:01:41.683 INFO    http.auto_https enabling automatic HTTP->HTTPS redirects        {"server_name": "srv0"}
2024/02/09 09:01:41.684 INFO    http    enabling HTTP/3 listener        {"addr": ":443"}
2024/02/09 09:01:41.685 INFO    http.log        server running  {"name": "srv0", "protocols": ["h1", "h2", "h3"]}
2024/02/09 09:01:41.685 INFO    http.log        server running  {"name": "remaining_auto_https_redirects", "protocols": ["h1", "h2", "h3"]}
2024/02/09 09:01:41.685 INFO    http    enabling automatic TLS certificate management   {"domains": ["*.domain.id"]}
2024/02/09 09:01:41.685 WARN    tls     storage cleaning happened too recently; skipping for now        {"storage": "{\"client_type\":\"simple\",\"address\":[\"ZZZ.ZZZ.ZZZ.ZZZ:6379\"],\"host\":[],\"port\":[],\"db\":0,\"timeout\":\"5\",\"username\":\"\",\"password\":\"REDACTED\",\"master_name\":\"\",\"key_prefix\":\"caddy\",\"encryption_key\":\"\",\"compression\":false,\"tls_enabled\":false,\"tls_insecure\":true,\"tls_server_certs_pem\":\"\",\"tls_server_certs_path\":\"\",\"route_by_latency\":false,\"route_randomly\":false}", "instance": "df8a8ade-1145-402b-b190-dddb6f4d03b2", "try_again": "2024/02/10 09:01:41.685", "try_again_in": 86399.99999953}
2024/02/09 09:01:41.686 INFO    tls     finished cleaning storage units
2024/02/09 09:01:41.688 INFO    autosaved config (load with --resume flag)      {"file": "/root/.config/caddy/autosave.json"}
2024/02/09 09:01:41.688 INFO    serving initial configuration
2024/02/09 09:01:41.691 INFO    tls.obtain      acquiring lock  {"identifier": "*.domain.id"}
2024/02/09 09:01:44.696 INFO    tls.obtain      lock acquired   {"identifier": "*.domain.id"}
2024/02/09 09:01:44.696 INFO    tls.obtain      obtaining certificate   {"identifier": "*.domain.id"}
2024/02/09 09:01:44.699 INFO    tls.issuance.acme       waiting on internal rate limiter        {"identifiers": ["*.domain.id"], "ca": "https://acme-v02.api.letsencrypt.org/directory", "account": ""}
2024/02/09 09:01:44.699 INFO    tls.issuance.acme       done waiting on internal rate limiter   {"identifiers": ["*.domain.id"], "ca": "https://acme-v02.api.letsencrypt.org/directory", "account": ""}
2024/02/09 09:01:46.132 INFO    tls.issuance.acme.acme_client   trying to solve challenge       {"identifier": "*.domain.id", "challenge_type": "dns-01", "ca": "https://acme-v02.api.letsencrypt.org/directory"}
2024/02/09 09:01:48.227 ERROR   tls.issuance.acme.acme_client   cleaning up solver      {"identifier": "*.domain.id", "challenge_type": "dns-01", "error": "no memory of presenting a DNS record for \"_acme-challenge.domain.id\" (usually OK if presenting also failed)"}
2024/02/09 09:01:48.483 ERROR   tls.obtain      could not get certificate from issuer   {"identifier": "*.domain.id", "issuer": "acme-v02.api.letsencrypt.org-directory", "error": "[*.domain.id] solving challenges: presenting for challenge: adding temporary record for zone \"domain.id.\": operation error Route 53: ChangeResourceRecordSets, https response error StatusCode: 400, RequestID: dff13b3a-d8a0-4b64-ab1b-24c380744aff, InvalidChangeBatch: [RRSet with DNS name _acme-challenge.domain.id., type TXT cannot be created as other RRSets exist with the same name and type.] (order=https://acme-v02.api.letsencrypt.org/acme/order/1561474067/243071084437) (ca=https://acme-v02.api.letsencrypt.org/directory)"}
2024/02/09 09:01:48.486 INFO    tls.issuance.zerossl    waiting on internal rate limiter        {"identifiers": ["*.domain.id"], "ca": "https://acme.zerossl.com/v2/DV90", "account": "caddy@zerossl.com"}
2024/02/09 09:01:48.486 INFO    tls.issuance.zerossl    done waiting on internal rate limiter   {"identifiers": ["*.domain.id"], "ca": "https://acme.zerossl.com/v2/DV90", "account": "caddy@zerossl.com"}

We do already have an _acme-challenge TXT record in Route 53; we use it for email purposes, and it’s set to the Multivalue answer routing policy. Is there any way for Caddy to append the new value to that record?

P.S. For the record, I already created the required IAM user and its policy.

Right: if you want to use a wildcard domain, you must use the DNS challenge to solve for it.

Or enable on_demand for that domain, so Caddy issues one certificate per subdomain.

The error message is basically saying “HTTP and TLS-ALPN are configured, but only the DNS challenge is possible here”.
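
For reference, solving with DNS means the wildcard site’s tls directive needs a DNS provider configured. A sketch, assuming the caddy-dns/route53 module is compiled in and AWS credentials come from the usual environment variables or instance role:

*.zzz.id {
	tls {
		dns route53
	}

	# ... rest of the site config as before ...
}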

Hmm, I don’t really understand this error message; it’s coming from Route53. It sounds like there might already be a TXT record for that domain (manually created)? Maybe try removing that TXT record, then reloading Caddy? :thinking:

It’s supposed to append, I think :thinking: Maybe there’s a bug in the route53 plugin. It’s probably best to open an issue on that plugin’s repo to get help; I don’t use AWS, so I can’t help much.

It looks like the plugin (route53) does have an issue. The only workaround for now is to remove the existing record first, run Caddy, and then re-add the other value manually.
Anyway, thank you for all the help you gave. I appreciate it.
