1. Caddy version (`caddy version`):
v2.4.6
2. How I run Caddy:
`caddy run --config /etc/caddy/config.json`
a. System environment:
Running Caddy from a Docker image, on Alpine Linux v3.14
b. Command:
`caddy run --config /etc/caddy/config.json`
c. Service/unit/compose file:
K8s Deployment, managed via a Helm chart:
```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: caddy
  namespace: default
  labels:
    app.kubernetes.io/instance: caddy-controller
    app.kubernetes.io/managed-by: Helm
    app.kubernetes.io/name: caddy
    app.kubernetes.io/version: 1.16.0
    helm.sh/chart: caddy-0.1.0
  annotations:
    deployment.kubernetes.io/revision: '38'
    meta.helm.sh/release-name: caddy-controller
    meta.helm.sh/release-namespace: default
spec:
  replicas: 2
  selector:
    matchLabels:
      app.kubernetes.io/instance: caddy-controller
      app.kubernetes.io/name: caddy
  template:
    metadata:
      namespace: default
      creationTimestamp: null
      labels:
        app.kubernetes.io/instance: caddy-controller
        app.kubernetes.io/name: caddy
    spec:
      volumes:
        - name: caddy-startup-script
          configMap:
            name: caddy-startup-script
            items:
              - key: SCRIPT
                path: script.sh
            defaultMode: 504
      containers:
        - name: caddy
          image: registry.gitlab.com/komibi/cms/caddy:2-redis
          command:
            - /application/scripts/script.sh
          ports:
            - name: http
              containerPort: 80
              protocol: TCP
            - name: https
              containerPort: 443
              protocol: TCP
          resources:
            limits:
              cpu: 200m
              memory: 2000Mi
            requests:
              cpu: 50m
              memory: 500Mi
          volumeMounts:
            - name: caddy-startup-script
              mountPath: /application/scripts
          securityContext: {}
      restartPolicy: Always
      terminationGracePeriodSeconds: 30
      dnsPolicy: ClusterFirst
      serviceAccountName: caddy
      serviceAccount: caddy
      securityContext: {}
      imagePullSecrets:
        - name: regcred
      schedulerName: default-scheduler
  strategy:
    type: RollingUpdate
    rollingUpdate:
      maxUnavailable: 25%
      maxSurge: 25%
  revisionHistoryLimit: 10
  progressDeadlineSeconds: 600
```
d. My complete Caddyfile or JSON config:
```json
{
  "admin": {
    "disabled": false,
    "enforce_origin": true,
    "listen": "localhost:2019",
    "origins": ["caddy-api:8080", "localhost:2019", "127.0.0.1:2019"]
  },
  "apps": {
    "http": {
      "servers": {
        "hocoos": {
          "listen": [":443"],
          "routes": [
            {
              "@id": "magic.hocoos.cafe",
              "handle": [
                {
                  "handler": "subroute",
                  "routes": [
                    {
                      "handle": [
                        {"handler": "rewrite", "strip_path_prefix": "/api/auth"},
                        {"handler": "reverse_proxy", "upstreams": [{"dial": "web-api:5000"}]}
                      ],
                      "match": [{"path": ["/api/auth/*"]}]
                    }
                  ]
                },
                {
                  "handler": "subroute",
                  "routes": [
                    {
                      "handle": [
                        {"handler": "rewrite", "strip_path_prefix": "/api/web"},
                        {"handler": "reverse_proxy", "upstreams": [{"dial": "web-api:5000"}]}
                      ],
                      "match": [{"path": ["/api/web/*"]}]
                    }
                  ]
                },
                {
                  "handler": "subroute",
                  "routes": [
                    {
                      "handle": [
                        {"handler": "reverse_proxy", "upstreams": [{"dial": "front-vue:3000"}]}
                      ],
                      "match": [{"path": ["/", "/*"]}]
                    }
                  ]
                }
              ],
              "match": [{"host": ["magic.hocoos.cafe"]}],
              "terminal": true
            },
            {
              "@id": "admin.hocoos.cafe",
              "handle": [
                {
                  "handler": "subroute",
                  "routes": [
                    {
                      "handle": [
                        {"handler": "rewrite", "strip_path_prefix": "/api/auth"},
                        {"handler": "reverse_proxy", "upstreams": [{"dial": "web-api:5000"}]}
                      ],
                      "match": [{"path": ["/api/auth/*"]}]
                    }
                  ]
                },
                {
                  "handler": "subroute",
                  "routes": [
                    {
                      "handle": [
                        {"handler": "rewrite", "strip_path_prefix": "/api/web"},
                        {"handler": "reverse_proxy", "upstreams": [{"dial": "web-api:5000"}]}
                      ],
                      "match": [{"path": ["/api/web/*"]}]
                    }
                  ]
                },
                {
                  "handler": "subroute",
                  "routes": [
                    {
                      "handle": [
                        {"handler": "reverse_proxy", "upstreams": [{"dial": "front-admin:3006"}]}
                      ],
                      "match": [{"path": ["/", "/*"]}]
                    }
                  ]
                }
              ],
              "match": [{"host": ["admin.hocoos.cafe"]}],
              "terminal": false
            },
            {
              "handle": [
                {
                  "handler": "subroute",
                  "routes": [
                    {
                      "handle": [
                        {"handler": "rewrite", "strip_path_prefix": "/api/web"},
                        {"handler": "reverse_proxy", "upstreams": [{"dial": "web-api:5000"}]}
                      ],
                      "match": [{"path": ["/api/web/*"]}]
                    }
                  ]
                }
              ],
              "terminal": false
            },
            {
              "handle": [
                {
                  "handler": "subroute",
                  "routes": [
                    {
                      "handle": [
                        {"handler": "reverse_proxy", "upstreams": [{"dial": "front-site-vue:3001"}]}
                      ],
                      "match": [{"path": ["/", "/*"]}]
                    }
                  ]
                }
              ],
              "match": [
                {
                  "not": [
                    {"host": ["magic.hocoos.cafe"]},
                    {"host": ["admin.hocoos.cafe"]}
                  ]
                }
              ],
              "terminal": false
            },
            {
              "handle": [
                {
                  "handler": "subroute",
                  "routes": [
                    {
                      "handle": [
                        {"handler": "rewrite", "strip_path_prefix": "/api/auth"},
                        {"handler": "reverse_proxy", "upstreams": [{"dial": "web-api:5000"}]}
                      ],
                      "match": [{"path": ["/api/auth/*"]}]
                    }
                  ]
                }
              ],
              "terminal": false
            }
          ]
        }
      }
    },
    "tls": {
      "automation": {
        "on_demand": {
          "ask": "http://caddy-controller:8080/api/check",
          "rate_limit": {"burst": 20, "interval": 2}
        },
        "policies": [
          {
            "issuers": [{"ca": "", "module": "internal"}],
            "on_demand": false,
            "storage": {
              "address": "redis-master:6379",
              "aes_key": "",
              "db": 0,
              "host": "redis-master",
              "key_prefix": "cloudflare",
              "module": "redis",
              "password": "password",
              "port": "6379",
              "timeout": 5,
              "tls_enabled": false,
              "tls_insecure": true,
              "value_prefix": "cloudflare"
            },
            "subjects": ["*.hocoos.site"]
          },
          {
            "issuers": [
              {
                "ca": "https://acme-v02.api.letsencrypt.org/directory",
                "challenges": {"http": {"disabled": true}},
                "email": "hocoos@hocoos.com",
                "module": "acme"
              }
            ],
            "on_demand": true,
            "subjects": []
          }
        ]
      }
    }
  },
  "logging": {
    "logs": {
      "default": {"level": "DEBUG"}
    }
  },
  "storage": {
    "address": "redis-master:6379",
    "aes_key": "",
    "db": 0,
    "host": "redis-master",
    "key_prefix": "caddytlsprod",
    "module": "redis",
    "password": "password",
    "port": "6379",
    "timeout": 5,
    "tls_enabled": false,
    "tls_insecure": true,
    "value_prefix": "caddy-storage-redis"
  }
}
```
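For context, the `ask` endpoint referenced in the config above (`http://caddy-controller:8080/api/check`) only needs to respond with HTTP 200 when the queried domain should be issued an on-demand certificate; any other status denies issuance. Our controller implements roughly the following (a minimal sketch, not our production code; the allow-list and suffixes here are stand-ins):

```python
from http.server import BaseHTTPRequestHandler, HTTPServer
from urllib.parse import urlparse, parse_qs

# Stand-in allow-list; the real controller checks its database
# of customer domains instead.
ALLOWED_SUFFIXES = (".hocoos.cafe",)

def is_allowed(domain: str) -> bool:
    """Return True if on-demand issuance should proceed for this domain."""
    return domain.endswith(ALLOWED_SUFFIXES)

class AskHandler(BaseHTTPRequestHandler):
    def do_GET(self):
        # Caddy appends the hostname as a query parameter:
        # GET /api/check?domain=<hostname>
        query = parse_qs(urlparse(self.path).query)
        domain = query.get("domain", [""])[0]
        # 200 = allow issuance; any other status = deny.
        self.send_response(200 if is_allowed(domain) else 404)
        self.end_headers()

# To run: HTTPServer(("0.0.0.0", 8080), AskHandler).serve_forever()
```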
3. The problem I’m having:
We are attempting to split certificate management: a custom certificate stored in Redis for one specific wildcard domain, and on-demand Let's Encrypt (R3) certificates for every other domain. So far we have achieved several different results, but none of them are what we are looking for.
4. Error messages and/or full log output:
There is no concrete error or useful log output. However, when we split the policies, Caddy kept "searching" for certificates following the
web.site.com -> *.site.com -> *.*.com -> *.*.*
lookup scheme even when we did not expect it to.
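To make the lookup scheme we observed concrete, here is a small sketch (our own illustration, not Caddy source) that reproduces the candidate order by replacing labels left-to-right with wildcards:

```python
def cert_lookup_candidates(domain: str) -> list[str]:
    """Candidate certificate names in the order we observed Caddy trying
    them, e.g. web.site.com -> *.site.com -> *.*.com -> *.*.*"""
    labels = domain.split(".")
    candidates = [domain]
    for i in range(len(labels)):
        # Replace the first i+1 labels with wildcards.
        wildcarded = ["*"] * (i + 1) + labels[i + 1:]
        candidates.append(".".join(wildcarded))
    return candidates

print(cert_lookup_candidates("web.site.com"))
# -> ['web.site.com', '*.site.com', '*.*.com', '*.*.*']
```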
5. What I already tried:
We tried setting up the TLS store under different directory layouts, as follows:
Redis Key Prefix: cloudflare
1.
-> cloudflare/certificates/*.hocoos.site/*.hocoos.site.crt
-> cloudflare/certificates/*.hocoos.site/*.hocoos.site.key
-> cloudflare/certificates/*.hocoos.site/*.hocoos.site.json
*.hocoos.site.json:
```
{
"sans": [
"hocoos.site",
"*.hocoos.site"
]
}
```
2.
-> cloudflare/certificates/*.hocoos.site.crt/certificates/*.hocoos.site.crt
-> cloudflare/certificates/*.hocoos.site.crt/certificates/*.hocoos.site.key
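For what it's worth, our understanding (an assumption from reading the certmagic source, so please correct us if this is wrong) is that certmagic does not store wildcard certificates under a literal `*`: it replaces `*` with `wildcard_` in storage keys, and nests each certificate under an issuer-specific directory. If that is right, the expected layout would look more like:

```
cloudflare/certificates/<issuer_key>/wildcard_.hocoos.site/wildcard_.hocoos.site.crt
cloudflare/certificates/<issuer_key>/wildcard_.hocoos.site/wildcard_.hocoos.site.key
cloudflare/certificates/<issuer_key>/wildcard_.hocoos.site/wildcard_.hocoos.site.json
```

where `<issuer_key>` is a placeholder for the issuer's storage key (e.g. `local` for the internal issuer).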
The other option we pursued is using the policy's get_certificate directive to pull certificates from a remote server. When attempting this with a storage block pointing to Redis inside the policy, we did not receive any errors for the directive, but no calls were made to our remote server.
When we removed the storage block, we received the following error from Caddy:
```
loading http app module: provision http: getting tls app: loading tls app module: decoding module config: tls: json: unknown field "get_certificate"
```
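In case it helps diagnose this: our reading of the JSON config docs (which may be off for v2.4.6) is that `get_certificate` is a field of the automation policy itself, taking a list of certificate-manager modules such as `http`, roughly like the fragment below (the URL is hypothetical):

```
"policies": [
  {
    "subjects": ["*.hocoos.site"],
    "get_certificate": [
      {"via": "http", "url": "https://cert-server.internal/cert"}
    ]
  }
]
```

If the running binary predates the release that introduced this field, an unknown-field error like the one above would be expected.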
6. Links to relevant resources:
None