Caddyfile + docker-compose + https

1. Caddy version (caddy version):

docker image caddy:2.3.0

2. How I run Caddy:

docker-compose.yml


  caddy:
    image: caddy:2.3.0
    container_name: caddy --domain <mydomain>.com
    ports:
      - "80:80"
      - "443:443"
      - "3000:3000"
      - "9090:9090"
      - "9093:9093"
      - "9091:9091"
    volumes:
      - ./caddy:/etc/caddy
    env_file:
       ./.env
    environment:
      - ADMIN_USER=${ADMIN_USER:-admin}
      - ADMIN_PASSWORD=${ADMIN_PASSWORD:-admin}
      - ADMIN_PASSWORD_HASH=${ADMIN_PASSWORD_HASH:-JDJhJDE0JE91S1FrN0Z0VEsyWmhrQVpON1VzdHVLSDkyWHdsN0xNbEZYdnNIZm1pb2d1blg4Y09mL0ZP}
    restart: unless-stopped
    networks:
      - monitor-net
    labels:
      org.label-schema.group: "monitoring"

a. System environment:

$ docker-compose --version
docker-compose version 1.25.5, build unknown
$ uname -a
Linux <mydomain>.com 4.15.0-143-generic #147-Ubuntu SMP Wed Apr 14 16:10:11 UTC 2021 x86_64 x86_64 x86_64 GNU/Linux

b. Command:

sudo docker restart caddy

c. Service/unit/compose file:

  caddy:
    image: caddy:2.3.0
    container_name: caddy --domain <mydomain>.com
    ports:
      - "80:80"
      - "443:443"
      - "3000:3000"
      - "9090:9090"
      - "9093:9093"
      - "9091:9091"
    volumes:
      - ./caddy:/etc/caddy
    environment:
      - ADMIN_USER=${ADMIN_USER:-admin}
      - ADMIN_PASSWORD=${ADMIN_PASSWORD:-admin}
      - ADMIN_PASSWORD_HASH=${ADMIN_PASSWORD_HASH:-JDJhJDE0JE91S1FrN0Z0VEsyWmhrQVpON1VzdHVLSDkyWHdsN0xNbEZYdnNIZm1pb2d1blg4Y09mL0ZP}
    restart: unless-stopped
    networks:
      - monitor-net
    labels:
      org.label-schema.group: "monitoring"

  prometheus:
    image: prom/prometheus:v2.26.0
    container_name: prometheus
    volumes:
      - ./prometheus:/etc/prometheus
      - prometheus_data:/prometheus
    command:
      - '--config.file=/etc/prometheus/prometheus.yml'
      - '--storage.tsdb.path=/prometheus'
      - '--web.console.libraries=/etc/prometheus/console_libraries'
      - '--web.console.templates=/etc/prometheus/consoles'
      - '--storage.tsdb.retention.time=200h'
      - '--web.enable-lifecycle'
      - '--web.external-url=http://<mydomain>:9090/'
    restart: unless-stopped
    expose:
      - 9090
    networks:
      - monitor-net
    labels:
      org.label-schema.group: "monitoring"

d. My complete Caddyfile or JSON config:

current :

(basic-auth) {
       basicauth /* {
           {$ADMIN_USER} {$ADMIN_PASSWORD_HASH}
       }
}

:9090 {
    import basic-auth
    reverse_proxy prometheus:9090
}

:9093 {
    import basic-auth
    reverse_proxy alertmanager:9093
}

:9091 {
    import basic-auth
    reverse_proxy pushgateway:9091
}

:3000 {
    reverse_proxy grafana:3000
}

No BasicAuth for Grafana since it's configured to delegate its authentication to gitlab.com.

I’m trying to access each of my services with:
https://<mydomain>.com/prometheus/
https://<mydomain>.com/alertmanager/
https://<mydomain>.com/pushgateway/
https://<mydomain>.com/grafana/

  • step 1: prometheus, alertmanager, pushgateway, and grafana available over HTTPS
  • step 2: prometheus, alertmanager, and pushgateway protected with basicauth
  • step 3: prometheus, alertmanager, and pushgateway delegating their auth to gitlab.com OAuth through Caddy

3. The problem I’m having:

I'm trying to use Caddy v2 to directly handle the authentication + SSL layers.

4. Error messages and/or full log output:



5. What I already tried:

  • adding nginx +certbot in front of caddy

KO (did not work):

#<mydomain>.com {
#   route /prometheus/* {
#	uri strip_prefix /prometheus
#	import basic-auth
#	reverse_proxy prometheus:9090
#   }
#}

KO (did not work):

route /prometheus* {
	import basic-auth
	reverse_proxy prometheus:9090
}

route /pushgateway* {
        import basic-auth
	reverse_proxy pushgateway:9091
}

route /alertmanager* {
        import basic-auth
        reverse_proxy alertmanager:9093
}

route /grafana* {
        reverse_proxy grafana:3000
}
{
    # email to use on Let's Encrypt
    email email@<domain>.com

    # Uncomment for debug
    #acme_ca https://acme-staging-v02.api.letsencrypt.org/directory
        acme_ca https://acme.zerossl.com/v2/DV90
	#debug
}
#

(basic-auth) {
       basicauth /* {
           {$ADMIN_USER} {$ADMIN_PASSWORD_HASH}
       }
}

<mydomain>.com {

#   import basic-auth
#
#   reverse_proxy /grafana/*      grafana:3000
#
#   reverse_proxy /prometheus/*   prometheus:9090
#   reverse_proxy /pushgateway/*  pushgateway:9091
#   reverse_proxy /alertmanager/* alertmanager:9093
#}

6. Links to relevant resources:

Basically, I'm trying to add automatic SSL to this docker-compose setup:

I feel that step 1 (HTTPS with Let's Encrypt from docker-compose) and step 2 (redirection of url/service/ → localhost:port) should be really straightforward, but I didn't find any relevant examples in the Caddy v2 documentation, nor am I sure how to structure the Caddyfile from the documentation.

Thanks in advance for your help

You’re missing a /data volume. Please add one as the docs on Docker show.
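Something like this, as a sketch (the caddy_data name is just an example, and you may also want to persist /config):

services:
  caddy:
    image: caddy:2.3.0
    volumes:
      - ./caddy:/etc/caddy
      - caddy_data:/data    # persists certificates and ACME account keys across restarts

volumes:
    caddy_data: {}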

The --domain <mydomain>.com appended to container_name doesn't really make sense. Remove it so the line is just container_name: caddy.

Remove the /*, it’s very slightly less efficient than not specifying a matcher, because this makes Caddy make a path comparison which will always pass. Use basicauth { instead.
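In other words, the snippet becomes something like:

(basic-auth) {
    basicauth {
        {$ADMIN_USER} {$ADMIN_PASSWORD_HASH}
    }
}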

What’s the problem? What do you see in your logs? What’s not working? It’s not clear from what you wrote.

I recommend using subdomains for each of your services instead of subpaths. It works more reliably. Read the article for an explanation:


Hello @francislavoie and thanks a lot for your quick answers !

  • basicauth { modification : done
  • container_name: caddy : done
  • caddy_data volume : done

I recommend using subdomains for each of your services instead of subpaths. It works more reliably. Read the article for an explanation

I am fine with this as long as I don't have to add another DNS record (this is handled by another team in my organization, which is overloaded and might take weeks). I am not 100% sure about the Caddyfile syntax that would allow this setup; do you have an example somewhere in the Caddy v2 documentation on how to transform the following Caddyfile to use subdomains with HTTPS?

Caddyfile:

(basic-auth) {
       basicauth {
           {$ADMIN_USER} {$ADMIN_PASSWORD_HASH}
       }
}
:9090 {
    import basic-auth
    reverse_proxy prometheus:9090
}

:9093 {
    import basic-auth
    reverse_proxy alertmanager:9093
}

:9091 {
    import basic-auth
    reverse_proxy pushgateway:9091
}

:3000 {
    reverse_proxy grafana:3000
}

docker-compose.yml :

version: '2.1'

networks:
  monitor-net:
    driver: bridge

volumes:
    prometheus_data: {}
    grafana_data: {}
    caddy_data: {}

services:
[...]
  caddy:
    image: caddy:2.3.0
    container_name: caddy
    ports:
      - "80:80"
      - "443:443"
      - "3000:3000"
      - "9090:9090"
      - "9093:9093"
      - "9091:9091"
    volumes:
      - ./caddy:/etc/caddy
      - caddy_data:/data
    env_file:
      ./.env
    environment:
      - ADMIN_USER=${ADMIN_USER:-admin}
      - ADMIN_PASSWORD=${ADMIN_PASSWORD:-admin}
      - ADMIN_PASSWORD_HASH=${ADMIN_PASSWORD_HASH:-JDJhJDE0JE91S1FrN0Z0VEsyWmhrQVpON1VzdHVLSDkyWHdsN0xNbEZYdnNIZm1pb2d1blg4Y09mL0ZP}
    restart: unless-stopped
    networks:
      - monitor-net
    labels:
      org.label-schema.group: "monitoring"

Thanks in advance for your help!

Just change each port address like :9090 on each of your sites to the actual domain you want, like prometheus.mydomain.com. That’s it. If the domain is pointing to your server, then Caddy will be able to have a certificate issued from Let’s Encrypt automatically.
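For example, a minimal sketch (assuming those subdomains point at your server):

prometheus.<mydomain>.com {
    # keep basicauth in front of the proxied service
    import basic-auth
    reverse_proxy prometheus:9090
}

grafana.<mydomain>.com {
    reverse_proxy grafana:3000
}

With domain site addresses, Caddy serves everything on ports 80 and 443, so the extra published ports in your compose file are only needed if you also keep the port-based site blocks.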

OK, thanks a lot. This time the Caddy configuration seems fine, and it indeed tries to solve the SSL challenges.

Is there anything else to add in either the Caddyfile or docker-compose.yml so that Caddy is able to solve the challenge for the subdomains?

(I do have a DNS record pointing properly to jump.<domain>.com, hence I'm trying to use alertmanager.jump.<domain>.com, grafana.jump.<domain>.com, …)

(edit : I think I was hitting the rate limit).

logs

caddy                      | 2021-05-21T17:16:48.183328277Z {"level":"info","ts":1621617408.1831677,"msg":"using provided configuration","config_file":"/etc/caddy/Caddyfile","config_adapter":"caddyfile"}
caddy                      | 2021-05-21T17:16:48.185105335Z {"level":"info","ts":1621617408.1850524,"logger":"admin","msg":"admin endpoint started","address":"tcp/localhost:2019","enforce_origin":false,"origins":["127.0.0.1:2019","localhost:2019","[::1]:2019"]}
caddy                      | 2021-05-21T17:16:48.185336579Z {"level":"info","ts":1621617408.1852767,"logger":"tls.cache.maintenance","msg":"started background certificate maintenance","cache":"0xc0003d2700"}
caddy                      | 2021-05-21T17:16:48.185348178Z {"level":"info","ts":1621617408.1853132,"logger":"http","msg":"server is listening only on the HTTPS port but has no TLS connection policies; adding one to enable TLS","server_name":"srv1","https_port":443}
caddy                      | 2021-05-21T17:16:48.185355777Z {"level":"info","ts":1621617408.185329,"logger":"http","msg":"enabling automatic HTTP->HTTPS redirects","server_name":"srv1"}
caddy                      | 2021-05-21T17:16:50.655895146Z {"level":"info","ts":1621617410.6557593,"logger":"http","msg":"enabling automatic TLS certificate management","domains":["alertmanager.jump.<mydomain>.com","jump.<mydomain>.com"]}
caddy                      | 2021-05-21T17:16:50.656453009Z {"level":"info","ts":1621617410.6563551,"logger":"tls.obtain","msg":"acquiring lock","identifier":"alertmanager.jump.<mydomain>.com"}
caddy                      | 2021-05-21T17:16:50.656463975Z {"level":"info","ts":1621617410.6563642,"logger":"tls.obtain","msg":"acquiring lock","identifier":"jump.<mydomain>.com"}
caddy                      | 2021-05-21T17:16:50.656533275Z {"level":"info","ts":1621617410.656508,"logger":"tls.obtain","msg":"lock acquired","identifier":"alertmanager.jump.<mydomain>.com"}
caddy                      | 2021-05-21T17:16:50.656584726Z {"level":"info","ts":1621617410.6565514,"logger":"tls.obtain","msg":"lock acquired","identifier":"jump.<mydomain>.com"}
caddy                      | 2021-05-21T17:16:50.656598230Z {"level":"info","ts":1621617410.6565745,"logger":"tls","msg":"cleaned up storage units"}
caddy                      | 2021-05-21T17:16:50.657532928Z {"level":"info","ts":1621617410.6574912,"msg":"autosaved config","file":"/config/caddy/autosave.json"}
caddy                      | 2021-05-21T17:16:50.657540379Z {"level":"info","ts":1621617410.6575093,"msg":"serving initial configuration"}
caddy                      | 2021-05-21T17:16:51.657110493Z {"level":"info","ts":1621617411.6568549,"logger":"tls.issuance.acme","msg":"waiting on internal rate limiter","identifiers":["alertmanager.jump.<mydomain>.com"]}
caddy                      | 2021-05-21T17:16:51.657168174Z {"level":"info","ts":1621617411.6569088,"logger":"tls.issuance.acme","msg":"done waiting on internal rate limiter","identifiers":["alertmanager.jump.<mydomain>.com"]}
caddy                      | 2021-05-21T17:16:51.940469451Z {"level":"info","ts":1621617411.9402518,"logger":"tls.issuance.acme.acme_client","msg":"trying to solve challenge","identifier":"alertmanager.jump.<mydomain>.com","challenge_type":"http-01","ca":"https://acme-v02.api.letsencrypt.org/directory"}
caddy                      | 2021-05-21T17:16:52.015070490Z {"level":"info","ts":1621617412.014891,"logger":"tls.issuance.acme","msg":"waiting on internal rate limiter","identifiers":["jump.<mydomain>.com"]}
caddy                      | 2021-05-21T17:16:52.015121459Z {"level":"info","ts":1621617412.0149488,"logger":"tls.issuance.acme","msg":"done waiting on internal rate limiter","identifiers":["jump.<mydomain>.com"]}
caddy                      | 2021-05-21T17:16:52.295207745Z {"level":"info","ts":1621617412.2950842,"logger":"tls.issuance.acme.acme_client","msg":"trying to solve challenge","identifier":"jump.<mydomain>.com","challenge_type":"http-01","ca":"https://acme-v02.api.letsencrypt.org/directory"}
caddy                      | 2021-05-21T17:16:52.456108985Z {"level":"error","ts":1621617412.4558957,"logger":"tls.issuance.acme.acme_client","msg":"challenge failed","identifier":"alertmanager.jump.<mydomain>.com","challenge_type":"http-01","status_code":400,"problem_type":"urn:ietf:params:acme:error:dns","error":"DNS problem: NXDOMAIN looking up A for alertmanager.jump.<mydomain>.com - check that a DNS record exists for this domain"}
caddy                      | 2021-05-21T17:16:52.456158753Z {"level":"error","ts":1621617412.4559536,"logger":"tls.issuance.acme.acme_client","msg":"validating authorization","identifier":"alertmanager.jump.<mydomain>.com","error":"authorization failed: HTTP 400 urn:ietf:params:acme:error:dns - DNS problem: NXDOMAIN looking up A for alertmanager.jump.<mydomain>.com - check that a DNS record exists for this domain","order":"https://acme-v02.api.letsencrypt.org/acme/order/124390463/9856583324","attempt":1,"max_attempts":3}
caddy                      | 2021-05-21T17:16:52.504136545Z {"level":"info","ts":1621617412.5039365,"logger":"tls.issuance.acme","msg":"served key authentication","identifier":"jump.<mydomain>.com","challenge":"http-01","remote":"18.184.114.154:10578"}
caddy                      | 2021-05-21T17:16:52.637674459Z {"level":"info","ts":1621617412.6374285,"logger":"tls.issuance.acme","msg":"served key authentication","identifier":"jump.<mydomain>.com","challenge":"http-01","remote":"64.78.149.164:15540"}
caddy                      | 2021-05-21T17:16:52.665895271Z {"level":"info","ts":1621617412.665721,"logger":"tls.issuance.acme","msg":"served key authentication","identifier":"jump.<mydomain>.com","challenge":"http-01","remote":"18.116.86.117:18706"}
caddy                      | 2021-05-21T17:16:52.737611705Z {"level":"info","ts":1621617412.737443,"logger":"tls.issuance.acme","msg":"served key authentication","identifier":"jump.<mydomain>.com","challenge":"http-01","remote":"34.221.255.206:59602"}
caddy                      | 2021-05-21T17:16:53.197672203Z {"level":"info","ts":1621617413.197434,"logger":"tls.issuance.acme.acme_client","msg":"validations succeeded; finalizing order","order":"https://acme-v02.api.letsencrypt.org/acme/order/124390464/9856583391"}
caddy                      | 2021-05-21T17:16:53.738284062Z {"level":"info","ts":1621617413.7379966,"logger":"tls.issuance.acme.acme_client","msg":"trying to solve challenge","identifier":"alertmanager.jump.<mydomain>.com","challenge_type":"tls-alpn-01","ca":"https://acme-v02.api.letsencrypt.org/directory"}
caddy                      | 2021-05-21T17:16:53.818600348Z {"level":"info","ts":1621617413.8183353,"logger":"tls.issuance.acme.acme_client","msg":"successfully downloaded available certificate chains","count":2,"first_url":"https://acme-v02.api.letsencrypt.org/acme/cert/047a85aa22d47da28cf6ef62c18bd0d0bd89"}
caddy                      | 2021-05-21T17:16:53.819369537Z {"level":"info","ts":1621617413.8191738,"logger":"tls.obtain","msg":"certificate obtained successfully","identifier":"jump.<mydomain>.com"}
caddy                      | 2021-05-21T17:16:53.819424599Z {"level":"info","ts":1621617413.819204,"logger":"tls.obtain","msg":"releasing lock","identifier":"jump.<mydomain>.com"}
caddy                      | 2021-05-21T17:16:54.256050370Z {"level":"error","ts":1621617414.2558086,"logger":"tls.issuance.acme.acme_client","msg":"challenge failed","identifier":"alertmanager.jump.<mydomain>.com","challenge_type":"tls-alpn-01","status_code":400,"problem_type":"urn:ietf:params:acme:error:dns","error":"DNS problem: NXDOMAIN looking up A for alertmanager.jump.<mydomain>.com - check that a DNS record exists for this domain"}
caddy                      | 2021-05-21T17:16:54.256108577Z {"level":"error","ts":1621617414.2558682,"logger":"tls.issuance.acme.acme_client","msg":"validating authorization","identifier":"alertmanager.jump.<mydomain>.com","error":"authorization failed: HTTP 400 urn:ietf:params:acme:error:dns - DNS problem: NXDOMAIN looking up A for alertmanager.jump.<mydomain>.com - check that a DNS record exists for this domain","order":"https://acme-v02.api.letsencrypt.org/acme/order/124390463/9856583643","attempt":2,"max_attempts":3}
caddy                      | 2021-05-21T17:16:56.323746680Z {"level":"info","ts":1621617416.3235393,"logger":"tls.issuance.zerossl","msg":"generated EAB credentials","key_id":"Q_JhdeXiaAYH7Hu-HyVXUQ"}
caddy                      | 2021-05-21T17:16:57.539845356Z {"level":"info","ts":1621617417.5396278,"logger":"tls.issuance.acme","msg":"waiting on internal rate limiter","identifiers":["alertmanager.jump.<mydomain>.com"]}
caddy                      | 2021-05-21T17:16:57.539897992Z {"level":"info","ts":1621617417.5396729,"logger":"tls.issuance.acme","msg":"done waiting on internal rate limiter","identifiers":["alertmanager.jump.<mydomain>.com"]}
caddy                      | 2021-05-21T17:16:58.295789683Z {"level":"info","ts":1621617418.2955456,"logger":"tls.issuance.acme.acme_client","msg":"trying to solve challenge","identifier":"alertmanager.jump.<mydomain>.com","challenge_type":"http-01","ca":"https://acme.zerossl.com/v2/DV90"}

Caddyfile

{
    # email to use on Let's Encrypt
    email email@domain.com

    # Uncomment for debug
    #acme_ca https://acme-staging-v02.api.letsencrypt.org/directory
    #debug
}


(basic-auth) {
       basicauth {
           {$ADMIN_USER} {$ADMIN_PASSWORD_HASH}
       }
}

:9090 {
    import basic-auth
    reverse_proxy prometheus:9090
}

alertmanager.jump.<mydomain>.com {
#:9093 {
    import basic-auth
    reverse_proxy alertmanager:9093
}

#pushgateway.jump.<mydomain>.com {
:9091 {
    import basic-auth
    reverse_proxy pushgateway:9091
}

#grafana.jump.<mydomain>.com {
:3000 {
    reverse_proxy grafana:3000
}

jump.<mydomain>.com

docker-compose.yml

version: '2.1'

networks:
  monitor-net:
    driver: bridge

volumes:
    prometheus_data: {}
    grafana_data: {}
    caddy_data: {}

services:
[...]
  caddy:
    image: caddy:2.3.0
    container_name: caddy
    ports:
      - "80:80"
      - "443:443"
      - "3000:3000"
      - "9090:9090"
      - "9093:9093"
      - "9091:9091"
    volumes:
      - ./caddy:/etc/caddy
      - caddy_data:/data
    env_file:
      ./.env
    environment:
      - ADMIN_USER=${ADMIN_USER:-admin}
      - ADMIN_PASSWORD=${ADMIN_PASSWORD:-admin}
      - ADMIN_PASSWORD_HASH=${ADMIN_PASSWORD_HASH:-JDJhJDE0JE91S1FrN0Z0VEsyWmhrQVpON1VzdHVLSDkyWHdsN0xNbEZYdnNIZm1pb2d1blg4Y09mL0ZP}
    restart: unless-stopped
    networks:
      - monitor-net
    labels:
      org.label-schema.group: "monitoring"

The question, then, is why this subdomain is not resolving, even though Caddy is publishing it.

(sorry for the edit @francislavoie )

Make sure the DNS is correct, make sure your firewall allows connections on port 80/443 all the way through to your server, make sure there’s no CDN or other proxy in the way which might be preventing the requests from reaching Caddy.

Thanks for your pointers!

I think I indeed have an issue with
<domain>.com and jump.<domain>.com not resolving as I would expect

  • <domain>.com resolves to GitLab Pages ✅
DNS resolution
{
  "AD": false,
  "Additional": [],
  "Answer": [
    {
      "TTL": 287,
      "data": "35.185.44.232",
      "name": "<domain>.com.",
      "type": 1
    }
  ],
  "CD": false,
  "Question": [
    {
      "name": "<domain>.com.",
      "type": 1
    }
  ],
  "RA": true,
  "RD": true,
  "Status": 0,
  "TC": false
}
  • jump.<domain>.com resolves to my server ✅
DNS resolution
{
  "AD": false,
  "Additional": [],
  "Answer": [
    {
      "TTL": 42,
      "data": "<server IP>",
      "name": "jump.<domain>.com.",
      "type": 1
    }
  ],
  "CD": false,
  "Question": [
    {
      "name": "jump.<domain>.com.",
      "type": 1
    }
  ],
  "RA": true,
  "RD": true,
  "Status": 0,
  "TC": false
}
  • alertmanager.jump.<domain>.com => does not resolve; the response only contains the <domain>.com authority record (NXDOMAIN) ⚠️
DNS resolution
{
  "AD": false,
  "Additional": [],
  "Authority": [
    {
      "TTL": 899,
      "data": "ns-774.awsdns-32.net. awsdns-hostmaster.amazon.com. 1 7200 900 1209600 86400",
      "name": "<domain>.com.",
      "type": 6
    }
  ],
  "CD": false,
  "Comment": "Response from 2600:9000:5306:3900::1.",
  "Question": [
    {
      "name": "alertmanager.jump.<domain>.com.",
      "type": 1
    }
  ],
  "RA": true,
  "RD": true,
  "Status": 3,
  "TC": false
}

So I would need to change the DNS records for this to work (Redirect all subdomains from one domain, to the equivalent subdomain of another domain using DNS and nginx? - Webmasters Stack Exchange), which I can't do easily (I do not have control over the DNS records for those servers, only the actual server configuration).

Unless you have another suggestion or option?

Thanks again @francislavoie for your help!

Looks like that's the error code you'll need to look for. I don't recognize that tooling; I don't use AWS.

You could ask for *.jump.example.com to be pointed to your server’s IP, so that way you don’t need individual subdomains to be pointed there each time you want to add something new.
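With a wildcard like that in place, you can keep one site block per service in the Caddyfile and Caddy will still obtain an individual certificate for each subdomain via the HTTP challenge, e.g. (a sketch reusing the hostnames from this thread):

alertmanager.jump.<mydomain>.com {
    import basic-auth
    reverse_proxy alertmanager:9093
}

pushgateway.jump.<mydomain>.com {
    import basic-auth
    reverse_proxy pushgateway:9091
}

grafana.jump.<mydomain>.com {
    # auth is delegated to gitlab.com by Grafana itself
    reverse_proxy grafana:3000
}

(A single wildcard certificate for *.jump.<mydomain>.com would instead require the DNS challenge, i.e. a DNS provider plugin for Caddy.)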


I used Google Public DNS.

Thanks, I am opening a request to have the DNS changed to:

A  jump.<domain>.com <IP>
CNAME *.jump.<domain>.com jump.<domain>.com

Thanks a lot for your help and recommendations!

