No certificate created - no "enabling automatic TLS certificate management" - http: TLS handshake error from 193.118.53.202:51252: no certificate available for '172.21.0.4'

1. Caddy version (caddy version): 2.4.1

2. How I run Caddy:

a. System environment:

$ docker-compose version
docker-compose version 1.24.0, build 0aa59064
docker-py version: 3.7.2
CPython version: 3.6.8
OpenSSL version: OpenSSL 1.1.0j  20 Nov 2018

$ docker version
Client:
 Version:           18.09.5
 API version:       1.39
 Go version:        go1.10.8
 Git commit:        e8ff056
 Built:             Thu Apr 11 04:43:57 2019
 OS/Arch:           linux/amd64
 Experimental:      false

Server: Docker Engine - Community
 Engine:
  Version:          18.09.5
  API version:      1.39 (minimum version 1.12)
  Go version:       go1.10.8
  Git commit:       e8ff056
  Built:            Thu Apr 11 04:10:53 2019
  OS/Arch:          linux/amd64
  Experimental:     false

On Ubuntu 18.04

b. Command:

env DOMAIN=adv-shr-elasticsearch2-dev.westeurope.cloudapp.azure.com docker-compose -f docker-compose.yml -f docker-compose.azurevm-highperf-caddy.yml up

c. Service/unit/compose file:

docker-compose.yml:

version: "2"

services:
  elasticsearch:
    build:
      context: elasticsearch/
    volumes:
      - elasticsearch-data:/usr/share/elasticsearch/data
      - ./elasticsearch/config/elasticsearch.yml:/usr/share/elasticsearch/config/elasticsearch.yml:ro
    environment:
      node.name: elasticsearch
      cluster.initial_master_nodes: elasticsearch
      ES_CLUSTER_NAME: search-cluster
      ES_DATA_DIR: /usr/share/elasticsearch/data
    networks:
      - elk

  kibana:
    build:
      context: kibana/
    volumes:
      - kibana-data:/usr/share/kibana/data
      - ./kibana/config/:/usr/share/kibana/config:ro
    environment:
      KB_DATA_DIR: /usr/share/kibana/data
      KB_ELASTICSEARCH_URL: http://elasticsearch:9200
      KB_SERVER_NAME: kibana
    networks:
      - elk
    depends_on:
      - elasticsearch

volumes:
  elasticsearch-data:
    driver: local
  kibana-data:
    driver: local

networks:
  elk:
    driver: bridge

docker-compose.azurevm-highperf-caddy.yml:

version: "2"

services:
  elasticsearch:
    restart: always
    environment:
      ES_JAVA_OPTS: "-Xmx4000m -Xms4000m"

  kibana:
    restart: always
    environment:
      KB_BASE_PATH: /kibana

  caddy:
    image: caddy:2.4.1
    container_name: caddy
    restart: always
    volumes:
      - caddy-config:/config
      - caddy-data:/data
      - ./caddy:/etc/caddy
    ports:
      - 80:80
      - 443:443
    networks:
      - elk
    depends_on:
      - elasticsearch

volumes:
  caddy-config:
    driver: local
  caddy-data:
    driver: local

d. My complete Caddyfile or JSON config:

{
        email alexander@skwar.me
        debug
}

{$DOMAIN}:443

encode zstd gzip

log {
        level DEBUG
        output file /data/access.log {
                roll_size 10MB
                roll_keep 10
        }
}

handle_path /elasticsearch* {
        basicauth bcrypt Elasticsearch {
                import elasticsearch.auth.*
        }

        reverse_proxy http://elasticsearch:9200
}

handle_path /kibana* {
        basicauth bcrypt kibana {
                import kibana.auth.*
        }

        reverse_proxy http://kibana:5601
}

3. The problem I’m having:

When I try to connect to the system over HTTPS with curl (or any browser, for that matter), I get this error:

curl -v https://adv-shr-elasticsearch2-dev.westeurope.cloudapp.azure.com/
*   Trying 52.174.xx.xxx...
* TCP_NODELAY set
* Connected to adv-shr-elasticsearch2-dev.westeurope.cloudapp.azure.com (52.174.xx.xxx) port 443 (#0)
* ALPN, offering h2
* ALPN, offering http/1.1
* successfully set certificate verify locations:
*   CAfile: /etc/ssl/certs/ca-certificates.crt
  CApath: /etc/ssl/certs
* TLSv1.3 (OUT), TLS handshake, Client hello (1):
* TLSv1.3 (IN), TLS alert, Server hello (2):
* error:14094438:SSL routines:ssl3_read_bytes:tlsv1 alert internal error
* stopped the pause stream!
* Closing connection 0
curl: (35) error:14094438:SSL routines:ssl3_read_bytes:tlsv1 alert internal error

4. Error messages and/or full log output:

{"level":"info","ts":1623244031.5708134,"msg":"using provided configuration","config_file":"/etc/caddy/Caddyfile","config_adapter":"caddyfile"}
{"level":"info","ts":1623244031.5738392,"logger":"admin","msg":"admin endpoint started","address":"tcp/localhost:2019","enforce_origin":false,"origins":["localhost:2019","[::1]:2019","127.0.0.1:2019"]}
{"level":"info","ts":1623244031.575923,"logger":"tls.cache.maintenance","msg":"started background certificate maintenance","cache":"0xc0000e9650"}
{"level":"info","ts":1623244031.575854,"logger":"http","msg":"server is listening only on the HTTPS port but has no TLS connection policies; adding one to enable TLS","server_name":"srv0","https_port":443}
{"level":"info","ts":1623244031.5759695,"logger":"http","msg":"enabling automatic HTTP->HTTPS redirects","server_name":"srv0"}
{"level":"info","ts":1623244034.971001,"logger":"tls","msg":"cleaning storage unit","description":"FileStorage:/data/caddy"}
{"level":"info","ts":1623244034.9712322,"logger":"tls","msg":"finished cleaning storage units"}
{"level":"debug","ts":1623244034.9728982,"logger":"http","msg":"starting server loop","address":"[::]:443","http3":false,"tls":true}
{"level":"debug","ts":1623244034.9729648,"logger":"http","msg":"starting server loop","address":"[::]:80","http3":false,"tls":false}
{"level":"info","ts":1623244034.974246,"msg":"autosaved config (load with --resume flag)","file":"/config/caddy/autosave.json"}
{"level":"info","ts":1623244034.9742641,"msg":"serving initial configuration"}
{"level":"debug","ts":1623244040.1820939,"logger":"http.stdlib","msg":"http: TLS handshake error from 193.118.53.202:51252: no certificate available for '172.21.0.4'"}

When I compare the log output of this VM to a VM where Caddy works, I find that in the latter case there’s also this:

{"level":"debug","ts":1623244396.9936051,"logger":"http","msg":"starting server loop","address":"[::]:80","http3":false,"tls":false}
{"level":"info","ts":1623244396.9936795,"logger":"http","msg":"enabling automatic TLS certificate management","domains":["adv-shr-es-https-test-1.westeurope.cloudapp.azure.com"]}
{"level":"info","ts":1623244397.0198913,"msg":"autosaved config (load with --resume flag)","file":"/config/caddy/autosave.json"}

I’m referring to the 2nd line there: enabling automatic TLS certificate management

Why is it missing on the broken VM?

I’m using the same Docker image version, 2.4.1, with and without the Alpine variant.

5. What I already tried:

6. Links to relevant resources:

Are you sure your DOMAIN env var is set? I don’t see a log line mentioning the domain, which would indicate that Caddy is going to manage a certificate for it.


Good catch!

No, it wasn’t set.

I intended to pass it from the host to docker-compose, and on to the containers, by invoking docker-compose like so:

env DOMAIN=foo.bar.baz docker-compose -f docker-compose.yml up

But for this to work, the environment variable (DOMAIN in this case) also needs to be declared explicitly in the compose file, like so:

...
  caddy:
    environment:
      - DOMAIN
...
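A quick way to verify the pass-through, sketched here on the assumption that the container keeps the name caddy from the compose file above and that the image ships BusyBox's env (the Alpine-based Caddy images do); grep runs on the host:

# Print the container's environment and keep only the DOMAIN entry;
# no output means the variable never reached the container.
docker exec caddy env | grep '^DOMAIN='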

Thanks for the heads up :slight_smile:


By the way, this is the sort of situation where looking at /config/caddy/autosave.json can be helpful, because you can check what the Caddyfile was adapted to. You would notice that there was no domain in there.
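For example, something along these lines (again assuming the container is named caddy; the grep pattern is just an illustration) shows at a glance whether a host matcher made it into the adapted config:

# Dump the autosaved JSON config from the container and pull out any
# "host" matchers; an empty result means no domain was configured.
docker exec caddy cat /config/caddy/autosave.json | grep -o '"host":\[[^]]*\]'

On the broken VM this would print nothing, which matches the missing "enabling automatic TLS certificate management" log line.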


LOL, yeah, you kinda had to rub it in my face, right? :laughing: But I totally deserved that, so :heart:

Very much appreciated, it really is. Next time I’ll know to look there as well :slight_smile:


:wink:

Just a tip, no snark intended :sweat_smile:
