1. The problem I’m having:
Our use case is to run Caddy as a web server only and use an AWS ACM certificate. We have some issues with Caddy's built-in TLS feature; I'd rather not go into the reasons we are shifting to AWS ACM, otherwise this topic will become very lengthy.
We must use an AWS ACM certificate with Caddy.
2. Error messages and/or full log output:
{"level":"info","ts":1734110047.143561,"msg":"using provided configuration","config_file":"/etc/caddy/Caddyfile","config_adapter":"caddyfile"}
{"level":"warn","ts":1734110047.1465132,"msg":"input is not formatted with 'caddy fmt'","adapter":"caddyfile","file":"/etc/caddy/Caddyfile","line":2}
{"level":"info","ts":1734110047.1476424,"logger":"admin","msg":"admin endpoint started","address":"tcp/localhost:2019","enforce_origin":false,"origins":["[::1]:2019","127.0.0.1:2019","localhost:2019"]}
{"level":"info","ts":1734110047.1480486,"logger":"tls.cache.maintenance","msg":"started background certificate maintenance","cache":"0xc0000b2930"}
{"level":"debug","ts":1734110047.1485317,"logger":"http","msg":"starting server loop","address":"[::]:60601","http3":false,"tls":false}
{"level":"debug","ts":1734110047.1485925,"logger":"http","msg":"starting server loop","address":"[::]:443","http3":false,"tls":true}
{"level":"info","ts":1734110047.1486046,"logger":"tls","msg":"cleaning storage unit","description":"{0xc00003ba40 0xc0006948f0 s3.amazonaws.com bucket-name some-key some-key ssl false}"}
{"level":"info","ts":1734110047.1487422,"msg":"autosaved config (load with --resume flag)","file":"/config/caddy/autosave.json"}
{"level":"info","ts":1734110047.1487591,"msg":"serving initial configuration"}
{"level":"info","ts":1734110047.4052582,"logger":"tls","msg":"finished cleaning storage units"}
3. Output when I curl the URL from a terminal:
$ curl -I -v https://hqapi.dev.bigteams.com/
% Total % Received % Xferd Average Speed Time Time Time Current
Dload Upload Total Spent Left Speed
0 0 0 0 0 0 0 0 --:--:-- --:--:-- --:--:-- 0* Host hqapi.dev.bigteams.com:443 was resolved.
* IPv6: (none)
* IPv4: 44.217.205.167, 54.166.20.110, 52.5.72.131, 3.210.151.4, 52.55.69.88
* Trying 44.217.205.167:443...
* Connected to hqapi.dev.bigteams.com (44.217.205.167) port 443
* schannel: disabled automatic use of client certificate
0 0 0 0 0 0 0 0 --:--:-- 0:00:06 --:--:-- 0* using HTTP/1.x
> HEAD / HTTP/1.1
> Host: hqapi.dev.bigteams.com
> User-Agent: curl/8.8.0
> Accept: */*
>
* Request completely sent off
0 0 0 0 0 0 0 0 --:--:-- 0:00:10 --:--:-- 0* schannel: server close notification received (close_notify)
0 0 0 0 0 0 0 0 --:--:-- 0:00:11 --:--:-- 0* Empty reply from server
0 0 0 0 0 0 0 0 --:--:-- 0:00:11 --:--:-- 0
* Closing connection
* schannel: shutting down SSL/TLS connection with hqapi.dev.bigteams.com port 443
curl: (52) Empty reply from server
4. Caddy version:
Docker image: 2.4.5-alpine
5. How I installed and ran Caddy:
Using the Docker image, deployed on AWS EKS (Kubernetes v1.29):
NAME="Alpine Linux"
ID=alpine
VERSION_ID=3.14.2
PRETTY_NAME="Alpine Linux v3.14"
HOME_URL="https://alpinelinux.org/"
BUG_REPORT_URL="https://bugs.alpinelinux.org/"
a. System environment:
Amazon Linux 2
b. Command:
Deployed on K8s v1.29
c. Service/unit/compose file:
d. My complete Caddy config:
{
	auto_https off
	debug
	storage s3 {
		host {$S3_HOST}
		bucket {$S3_BUCKET}
		access_id {$S3_ACCESS_ID}
		secret_key {$S3_SECRET_KEY}
		prefix "ssl"
	}
}

:60601 {
	metrics
}

https://hqapi.dev.bigteams.com {
	log {
		output stdout
		format json
		level DEBUG
	}
	header X-Frame-Options SAMEORIGIN
	encode gzip
	reverse_proxy hq-api.hq.svc.cluster.local:3001 {
		header_up X-Forwarded-Port 443
	}
	log {
		output file /var/log/containers/access.log
	}
	file_server
}
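For context on why the config above returns an empty reply: with `auto_https off`, Caddy does not obtain or manage any certificates itself, so an `https://` site block has no certificate to serve unless one is supplied explicitly via the `tls` directive. A minimal sketch of what that would look like, assuming a certificate and key file were mounted into the container (the paths below are placeholders, not from our setup):

```
https://hqapi.dev.bigteams.com {
	# Hypothetical paths: a PEM certificate and key made available
	# to the container, since auto_https off disables cert management.
	tls /etc/caddy/certs/cert.pem /etc/caddy/certs/key.pem

	reverse_proxy hq-api.hq.svc.cluster.local:3001
}
```

Note this does not directly help with ACM, since ACM does not let us export the private key for ordinary certificates, which is part of why we are stuck.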
6. Links to relevant resources:
7. Workaround, with a small problem:
We have a workaround: using the Caddyfile below, we are able to access the site over HTTPS with the ACM certificate. The problem is that the site is also accessible over plain HTTP, which is not secure, and we have not been able to find a way to stop this.
a. Working Caddyfile:
{
	storage s3 {
		host {$S3_HOST}
		bucket {$S3_BUCKET}
		access_id {$S3_ACCESS_ID}
		secret_key {$S3_SECRET_KEY}
		prefix "ssl"
	}
}

:60601 {
	metrics
}

http://hqapi.dev.bigteams.com {
	log {
		output stdout
		format json
		level DEBUG
	}
	header X-Frame-Options SAMEORIGIN
	encode gzip
	reverse_proxy http://hq-app.hq.svc.cluster.local:3000 {
		transport http
		header_up X-Forwarded-Port 443
	}
	log {
		output file /var/log/containers/access.log
	}
	file_server
}