How to properly test and troubleshoot SSL issues

1. The problem I’m having:

Hello,

I am new to Caddy (and Docker) and I am hoping to get some insight into troubleshooting why my SSL is not working.

My goal is to self host some containers such as Vaultwarden and Home Assistant.

Currently I am trying to confirm that I have an active SSL certificate. I can navigate to the ‘certificates’ folder and see a cert for my domain, but there is no access from the outside. If I perform a wget it seems that I am not getting an SSL connection.

Any assistance is greatly appreciated.

2. Error messages and/or full log output:

root@docker-server:/opt/caddy# wget -vv domain.duckdns.org
--2023-10-17 16:02:28--  http://domain.duckdns.org/
Resolving domain.duckdns.org (domain.duckdns.org)... xx.xx.xx.xx
Connecting to domain.duckdns.org (domain.duckdns.org)|xx.xx.xx.xx|:80... connected.
HTTP request sent, awaiting response... 308 Permanent Redirect
Location: https://domain.duckdns.org/ [following]
--2023-10-17 16:02:28--  https://domain.duckdns.org/
Connecting to domain.duckdns.org (domain.duckdns.org)|xx.xx.xx.xx|:443... connected.
OpenSSL: error:0A000458:SSL routines::tlsv1 unrecognized name
Unable to establish SSL connection.
root@docker-server:/opt/caddy# 


---  or ----

--2023-10-17 16:22:49--  https://localhost/
Resolving localhost (localhost)... 127.0.0.1
Connecting to localhost (localhost)|127.0.0.1|:443... connected.
OpenSSL: error:0A000438:SSL routines::tlsv1 alert internal error
Unable to establish SSL connection

3. Caddy version:

Not sure if I got this correct: “latest” (alpine:3, 3.18, 3.18.4, latest)
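For reference, I assume I could print the exact version from the running container with something like this (the container name caddy comes from my compose file below):

docker exec caddy caddy version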

4. How I installed and ran Caddy:

Via Docker compose - configs below

a. System environment:

Ubuntu server 22.04
Docker:
Client: Docker Engine - Community
Version: 24.0.6
API version: 1.43
Go version: go1.20.7
Git commit: ed223bc
Built: Mon Sep 4 12:31:44 2023
OS/Arch: linux/amd64
Context: default

b. Command:

docker compose up -d

c. Service/unit/compose file:

See the docker-compose.yaml under section d below.

d. My complete Caddy config: —> testing file from the Wiki

docker-compose.yaml

version: '2.21'
services:
  caddy:
    image: caddy:2
    container_name: caddy
    restart: always
    ports:
      - 80:80
      - 443:443
    volumes:
      - ./Caddyfile:/etc/caddy/Caddyfile:ro
      - ./caddy-config:/config
      - ./caddy-data:/data
    environment:
      DOMAIN: "domain.duckdns.org"
      EMAIL: "myemail@somecompany.com"
      LOG_FILE: /data/access.log



Caddyfile:
 
domain.duckdns.org {
  tls myemail@somecompany.com

  reverse_proxy localhost:2022
}

:2022 {
    respond "Welcome to Caddy SSL Conf"
}

5. Links to relevant resources:


What’s in your Caddy container’s logs? That’s what we’re looking for.
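For example, from the directory with your compose file (assuming the service is named caddy, as in your compose file):

docker compose logs caddy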

Are you sure your server is accessible publicly? It needs to be reachable on ports 80 and 443 from the outside to solve the ACME HTTP and TLS-ALPN challenges. Is this running in your home network, or on a VPS?

Greetings,

Thank you for your reply. The network is a home network, and yes I have ports 80 and 443 forwarded to the Ubuntu server’s IP address.

Here are the logs from the caddy container from last boot:

INF ts=1697571366.8419106 msg=using provided configuration config_file=/etc/caddy/Caddyfile config_adapter=caddyfile

WRN ts=1697571366.8430305 msg=Caddyfile input is not formatted; run 'caddy fmt --overwrite' to fix inconsistencies adapter=caddyfile file=/etc/caddy/Caddyfile line=1

INF ts=1697571366.8439033 logger=admin msg=admin endpoint started address=localhost:2019 enforce_origin=false origins=["//localhost:2019","//[::1]:2019","//127.0.0.1:2019"]

INF ts=1697571366.8443239 logger=tls.cache.maintenance msg=started background certificate maintenance cache=0xc00038dd00

INF ts=1697571366.8445125 logger=http.auto_https msg=server is listening only on the HTTPS port but has no TLS connection policies; adding one to enable TLS server_name=srv1 https_port=443

INF ts=1697571366.844588 logger=http.auto_https msg=enabling automatic HTTP->HTTPS redirects server_name=srv1

INF ts=1697571366.8449214 logger=tls msg=cleaning storage unit description=FileStorage:/data/caddy

INF ts=1697571366.8454227 logger=tls msg=finished cleaning storage units

INF ts=1697571366.8466847 logger=http.log msg=server running name=srv0 protocols=["h1","h2","h3"]

INF ts=1697571366.8484766 logger=http msg=enabling HTTP/3 listener addr=:443

INF ts=1697571366.848621 msg=failed to sufficiently increase receive buffer size (was: 208 kiB, wanted: 2048 kiB, got: 416 kiB). See https://github.com/quic-go/quic-go/wiki/UDP-Buffer-Sizes for details.

INF ts=1697571366.8488948 logger=http.log msg=server running name=srv1 protocols=["h1","h2","h3"]

INF ts=1697571366.8490252 logger=http.log msg=server running name=remaining_auto_https_redirects protocols=["h1","h2","h3"]

INF ts=1697571366.8491046 logger=http msg=enabling automatic TLS certificate management domains=["domain.duckdns.org"]

WRN ts=1697572322.5247028 logger=tls msg=stapling OCSP error=no OCSP stapling for [domain.duckdns.org]: making OCSP request: Post "http://r3.o.lencr.org": read tcp 172.25.0.2:51038->23.217.9.15:80: read: connection timed out identifiers=["domain.duckdns.org"]

INF ts=1697572322.525704 msg=autosaved config (load with --resume flag) file=/config/caddy/autosave.json

INF ts=1697572322.525715 msg=serving initial configuration

Best regards

Hmm, is your network blocking outgoing traffic? This should work.

Other than that, it seems like Caddy probably did succeed at issuing a cert on a previous startup, because there’s no issuance errors there.

Please make requests with curl -vL and show what you get.
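For example, against your own domain (substituting the real hostname you redacted):

curl -vL https://domain.duckdns.org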

Thank you again for your input. Being so new to Docker, I question what I feel should be OK and what is not.

From the Ubuntu server, I hope that this is what you are requesting:

root@docker-server:/opt/caddy# curl -vL http://r3.o.lencr.org
*   Trying 23.217.9.15:80...
* Connected to r3.o.lencr.org (23.217.9.15) port 80 (#0)
> GET / HTTP/1.1
> Host: r3.o.lencr.org
> User-Agent: curl/7.81.0
> Accept: */*
> 
* Mark bundle as not supporting multiuse
< HTTP/1.1 200 OK
< Server: nginx
< Content-Length: 0
< Cache-Control: max-age=14819
< Expires: Wed, 18 Oct 2023 04:24:28 GMT
< Date: Wed, 18 Oct 2023 00:17:29 GMT
< Connection: keep-alive
< 
* Connection #0 to host r3.o.lencr.org left intact

Now what I don't know is whether I can get into the ‘instance’ of the caddy container. I had selected tty and rebuilt the container, but it seems that I can't create a console session to test from the actual container.

Best regards,

I meant that you should make a request to your own domain using curl -vL. I want to see what it shows when you attempt to connect to your own server.

But anyway, Caddy was attempting OCSP stapling but it failed, which is strange because apparently you were able to connect. :man_shrugging:
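As an aside, about getting a console inside the container: the official image is Alpine-based, so something like this should drop you into a shell in the running container (assuming the container name caddy from your compose file):

docker exec -it caddy sh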


Ahhh, I see :blush:

I get the unrecognized name error, which is similar to what I get in a browser:
The webpage at https://–.duckdns.org/ might be temporarily down or it may have moved permanently to a new web address.

ERR_SSL_UNRECOGNIZED_NAME_ALERT

root@docker-server:/opt/caddy# curl -vL https://--.duckdns.org
*   Trying xx.xx.xx.xx:443...
* Connected to --.duckdns.org (xx.xx.xx.xx) port 443 (#0)
* ALPN, offering h2
* ALPN, offering http/1.1
*  CAfile: /etc/ssl/certs/ca-certificates.crt
*  CApath: /etc/ssl/certs
* TLSv1.0 (OUT), TLS header, Certificate Status (22):
* TLSv1.3 (OUT), TLS handshake, Client hello (1):
* TLSv1.2 (IN), TLS header, Unknown (21):
* TLSv1.3 (IN), TLS alert, unrecognized name (624):
* error:0A000458:SSL routines::tlsv1 unrecognized name
* Closing connection 0
curl: (35) error:0A000458:SSL routines::tlsv1 unrecognized name

I could try to rebuild from scratch if that would help, but I believe that I would need to wait a bit for rate limiting.

Thank you again,


Frankly, it’s hard to help without knowing the actual domain you’re using, because I can’t conduct my own tests to check the behaviour.

Are you sure your Caddyfile has the same domain in it as what you’re trying to request? If they don’t match, then it won’t work.
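One way to double-check which domain Caddy actually loaded is to dump its running config from the admin endpoint inside the container, along these lines (assuming the container name caddy and the default admin address; the Alpine image ships BusyBox wget):

docker exec caddy wget -qO- http://localhost:2019/config/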


Hello,

Thank you again, your help is amazing. I have created a new testing domain and obtained a new cert. I updated the Caddyfile with the new domain name. The behavior appears to be the same.

If I understand correctly, if I were to request this domain, I would get a response of ‘Welcome to Caddy SSL Conf’. Correct?

Caddyfile:

root@docker-server:/opt/caddy-server# cat Caddyfile 
losttest.duckdns.org {  
  tls ----@--.com

  reverse_proxy localhost:2022
}

:2022 {
    respond "Welcome to Caddy SSL Conf"
}


Logs:

INF ts=1697675706.1382616 msg=autosaved config (load with --resume flag) file=/config/caddy/autosave.json
INF ts=1697675706.138983 msg=serving initial configuration
INF ts=1697675706.1386647 logger=tls msg=cleaning storage unit description=FileStorage:/data/caddy
INF ts=1697675706.139335 logger=tls msg=finished cleaning storage units
INF ts=1697675706.1386313 logger=tls.cache.maintenance msg=started background certificate maintenance cache=0xc0002f9d80
INF ts=1697675706.1398351 logger=tls.obtain msg=acquiring lock identifier=losttest.duckdns.org
INF ts=1697675706.1441932 logger=tls.obtain msg=lock acquired identifier=losttest.duckdns.org
INF ts=1697675706.1444232 logger=tls.obtain msg=obtaining certificate identifier=losttest.duckdns.org
INF ts=1697675706.1461174 logger=tls.issuance.acme msg=waiting on internal rate limiter identifiers=["losttest.duckdns.org"] ca=https://acme-v02.api.letsencrypt.org/directory account=----@--.com
INF ts=1697675706.1461935 logger=tls.issuance.acme msg=done waiting on internal rate limiter identifiers=["losttest.duckdns.org"] ca=https://acme-v02.api.letsencrypt.org/directory account=----@--.com
INF ts=1697675707.0271974 logger=tls.issuance.acme.acme_client msg=trying to solve challenge identifier=losttest.duckdns.org challenge_type=tls-alpn-01 ca=https://acme-v02.api.letsencrypt.org/directory
ERR ts=1697675707.4114597 logger=tls.issuance.acme.acme_client msg=challenge failed identifier=losttest.duckdns.org challenge_type=tls-alpn-01 problem={"type":"urn:ietf:params:acme:error:tls","title":"","detail":"xx.xx.xx.xx: remote error: tls: unrecognized name","instance":"","subproblems":[]}
ERR ts=1697675707.4114945 logger=tls.issuance.acme.acme_client msg=validating authorization identifier=losttest.duckdns.org problem={"type":"urn:ietf:params:acme:error:tls","title":"","detail":"xx.xx.xx.xx: remote error: tls: unrecognized name","instance":"","subproblems":[]} order=https://acme-v02.api.letsencrypt.org/acme/order/1366886096/216049166846 attempt=1 max_attempts=3
INF ts=1697675708.6798718 logger=tls.issuance.acme.acme_client msg=trying to solve challenge identifier=losttest.duckdns.org challenge_type=http-01 ca=https://acme-v02.api.letsencrypt.org/directory
INF ts=1697675708.8562975 logger=tls.issuance.acme msg=served key authentication identifier=losttest.duckdns.org challenge=http-01 remote=3.15.177.221:44560 distributed=false
INF ts=1697675708.880426 logger=tls.issuance.acme msg=served key authentication identifier=losttest.duckdns.org challenge=http-01 remote=23.178.112.202:50560 distributed=false
INF ts=1697675718.8578846 logger=tls.issuance.acme msg=served key authentication identifier=losttest.duckdns.org challenge=http-01 remote=34.219.195.117:50188 distributed=false
INF ts=1697675729.0820947 logger=tls.issuance.acme.acme_client msg=authorization finalized identifier=losttest.duckdns.org authz_status=valid
INF ts=1697675729.0821195 logger=tls.issuance.acme.acme_client msg=validations succeeded; finalizing order order=https://acme-v02.api.letsencrypt.org/acme/order/1366886096/216049173566
INF ts=1697675729.8863804 logger=tls.issuance.acme.acme_client msg=successfully downloaded available certificate chains count=2 first_url=https://acme-v02.api.letsencrypt.org/acme/cert/042a53958694a8c27834a81ca87f7a4364e7
INF ts=1697675729.886697 logger=tls.obtain msg=certificate obtained successfully identifier=losttest.duckdns.org
INF ts=1697675729.8867617 logger=tls.obtain msg=releasing lock identifier=losttest.duckdns.org

I believe that this is a successful cert.

Next, curl to the domain, and when I try via browser: ERR_SSL_UNRECOGNIZED_NAME_ALERT

root@docker-server:/opt/caddy-server# curl -vL https://losttest.duckdns.org
*   Trying xx.xx.xx.xx:443...
* Connected to losttest.duckdns.org (xx.xx.xx.xx) port 443 (#0)
* ALPN, offering h2
* ALPN, offering http/1.1
*  CAfile: /etc/ssl/certs/ca-certificates.crt
*  CApath: /etc/ssl/certs
* TLSv1.0 (OUT), TLS header, Certificate Status (22):
* TLSv1.3 (OUT), TLS handshake, Client hello (1):
* TLSv1.2 (IN), TLS header, Unknown (21):
* TLSv1.3 (IN), TLS alert, unrecognized name (624):
* error:0A000458:SSL routines::tlsv1 unrecognized name
* Closing connection 0
curl: (35) error:0A000458:SSL routines::tlsv1 unrecognized name
root@docker-server:/opt/caddy-server# 

Best regards,


Try connecting from a different machine. Does it work? Try connecting from your phone on cell networks (i.e. from outside of your local network).

Add the debug global option and make another request, your logs might show more detail as to why the request fails.
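That is, a global options block at the very top of your Caddyfile, something like:

{
  debug
}

losttest.duckdns.org {
  tls ----@--.com

  reverse_proxy localhost:2022
}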

Is that really your domain? Because I’m not able to connect at all, I get a timeout. So I’m surprised that Let’s Encrypt would’ve been able to successfully complete the ACME HTTP challenge.

Hello,
That is the domain; I did not change / redact any data except the email and the public IP. I built a new subdomain on DuckDNS, and as you see in the logs, it looks to connect and pull down a cert. I have these files in the certificates folder: losttest.duckdns.org.crt, losttest.duckdns.org.json, losttest.duckdns.org.key, so I have to presume that the cert was successful.

I have also tried to connect from my cell phone on mobile data, just in case there were any issues with hairpin NAT, but unfortunately the results are the same.

I have access from the outside to a different VM on a different server with the same firewall configuration. The only difference in this case is Docker (on a VM). I have verified that there is no UFW running on the Docker host, and I would think that pulling down a cert confirms that.

I am stumped, but as a noob to Docker and Caddy I would expect that a bit. I had the same results trying to run Nginx Proxy Manager, so I looked into Caddy. Honestly I like the simplicity that Caddy has to offer, so I hope to understand this more and conquer the basics :slight_smile:

My guess for next steps is that I need to look into tcpdump or set up some iptables rules to see whether any traffic from the outside actually makes it in. I have to think that it does, as I have seen bots rattling my door in the logs with previous setups.
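Something along these lines is what I have in mind, run on the Docker host, to watch for inbound traffic on the web ports (just a sketch of the plan, not something I have run yet):

tcpdump -ni any port 80 or port 443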

Best regards,

I have figured it out. Additionally, I will provide myself with the dunce cap award for the week. :face_with_diagonal_mouth:

Being that I was testing and learning Docker and reverse proxies, I also learned an additional valuable lesson: if you have more than one domain pointing to the same public IP using the same ports, that will not work.

If I understand correctly, the cert worked because that part could use port 80, but once traffic was redirected to 443 it went to another server, hence the name mismatch, and it would break.

I appreciate your attention and hanging in there. I am sure that there are others that have set up this ‘trap’ and maybe one day, I can help someone else.

Best regards.

That definitely can work, but you need to configure Caddy to handle each of those domains.

I don’t understand. How would it redirect to a different server? Were you port forwarding 80 and 443 to different machines? The HTTP->HTTPS redirects in Caddy reuse the same domain, and domains will resolve to the same IP regardless of the port being connected to (because there’s no such thing as ports in DNS, that’s a separate layer).
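To expand on the first point: a single Caddy instance with ports 80 and 443 forwarded to it can serve both domains and proxy one of them to the other machine. A rough sketch, where othersite.duckdns.org and 192.168.1.50:8080 are placeholders for your second domain and that machine's internal address:

losttest.duckdns.org {
  reverse_proxy localhost:2022
}

othersite.duckdns.org {
  reverse_proxy 192.168.1.50:8080
}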

Hello,

Yes, I was forwarding only port 443 to one machine, while I also had ports 80 and 443 pointing to the machine running Docker / Caddy. As soon as I disabled the other port forward, boom, Caddy was happy. I believe that if I had both 80 and 443 pointing to the first machine, I would have never received a cert (assuming I am understanding this).

I agree that I can use Caddy to get the service back over to the previous machine. That will be my next step.

Best regards,


Yeah, you need ports 80 and 443 pointing to Caddy.

Glad you figured it out :+1:


This topic was automatically closed 30 days after the last reply. New replies are no longer allowed.