Tailscale, Synology, HTTPS to Docker services

1. Caddy version:

2.6.2

2. How I installed and ran Caddy:

Docker Compose in Portainer (i.e. as a Stack)

a. System environment:

Synology DSM 7.1
Docker version 20.10.3, build 55f0773

b. Command:

CMD ["caddy", "run", "--config", "/etc/caddy/Caddyfile", "--adapter", "caddyfile"]

c. Service/unit/compose file:

version: "3.7"

networks:
  
  proxy-network:
    name: proxy-network

services:
  caddy:
    image: caddy:latest
    restart: unless-stopped
    container_name: caddy
    networks:
      - proxy-network
    hostname: caddy
    depends_on:
      - tailscale 
    ports:
      - "8080:80"
      - "8443:443"
      - "8443:443/udp"
    volumes:
      - /volume1/docker/caddy/Caddyfile:/etc/caddy/Caddyfile
      - /volume1/docker/caddy/data:/data
      - /volume1/docker/caddy/config:/config
      - /volume1/docker/tailscale/tmp/tailscaled.sock:/var/run/tailscale/tailscaled.sock

  tailscale:
    container_name: tailscaled
    image: tailscale/tailscale
    network_mode: host
    cap_add:
      - NET_ADMIN
      - NET_RAW
    volumes:
      - /volume1/docker/tailscale/varlib:/var/lib
      - /volume1/docker/tailscale/tmp:/tmp
    environment:
      - TS_STATE_DIR=/var/lib/tailscale
      - TS_AUTH_KEY=SOME_EXAMPLE_KEY

d. My complete Caddy config:

nas.tail1ccbb.ts.net {

	tls {
		get_certificate tailscale
	}

	handle_path /docker/* {
		reverse_proxy /* portainer-ce:9000
	}
	reverse_proxy 100.116.196.48:5000
}

3. The problem I’m having:

I use Tailscale and its MagicDNS option to get a domain name for my Synology NAS; for example, nas.tail1ccbb.ts.net.
I also got certs from Tailscale for that domain name (https://tailscale.com/blog/caddy/).

I would like to configure things so that all HTTPS requests to the Docker services I choose follow this pattern, for example:
if I want to use Portainer, it would be
https://nas.tail1ccbb.ts.net/docker
if I want to use Vaultwarden, it would be
https://nas.tail1ccbb.ts.net/vault
If I open the root page (the Synology main page), it should redirect to https://nas.tail1ccbb.ts.net:5001

  1. The first problem is that Synology runs its own proxy server, so when I try to map Caddy's ports to 443 and 80, I get an error that the ports are already in use. Still, I would like to avoid explicitly specifying ports when accessing services.

  2. The second, main problem is that I can't figure out how to configure Caddy to route the relevant HTTPS requests to a Docker service.
    I tried the simple config shown in the topic description (the Docker service is Portainer; it normally works on the local IP http://192.168.1.199:9000). As far as I can tell from curl, the HTTPS connection with the Tailscale certs works (the handshake succeeds), but Portainer's main page never appears.
    The name 'portainer-ce:9000' resolves successfully from the Caddy container, both by name and by IP.
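To make the goal concrete, the routing I'm after would look roughly like this (a sketch of intent only, untested; the Vaultwarden container name and port are placeholders, since it isn't deployed yet):

```
nas.tail1ccbb.ts.net {
	# Portainer under /docker
	handle_path /docker/* {
		reverse_proxy portainer-ce:9000
	}

	# Vaultwarden under /vault (container name and port assumed)
	handle_path /vault/* {
		reverse_proxy vaultwarden:80
	}

	# Bare root goes to the Synology main page on 5001
	redir / https://nas.tail1ccbb.ts.net:5001
}
```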

4. Error messages and/or full log output:

 curl -vL https://nas.tail1ccbb.ts.net:8443/docker    (took 10s)
*   Trying 100.116.196.48:8443...
* Connected to nas.tail1ccbb.ts.net (100.116.196.48) port 8443 (#0)
* ALPN: offers h2
* ALPN: offers http/1.1
*  CAfile: /etc/ssl/cert.pem
*  CApath: none
* (304) (OUT), TLS handshake, Client hello (1):
* (304) (IN), TLS handshake, Server hello (2):
* (304) (IN), TLS handshake, Unknown (8):
* (304) (IN), TLS handshake, Certificate (11):
* (304) (IN), TLS handshake, CERT verify (15):
* (304) (IN), TLS handshake, Finished (20):
* (304) (OUT), TLS handshake, Finished (20):
* SSL connection using TLSv1.3 / AEAD-CHACHA20-POLY1305-SHA256
* ALPN: server accepted h2
* Server certificate:
*  subject: CN=nas.tail1ccbb.ts.net
*  start date: Feb  6 12:28:18 2023 GMT
*  expire date: May  7 12:28:17 2023 GMT
*  subjectAltName: host "nas.tail1ccbb.ts.net" matched cert's "nas.tail1ccbb.ts.net"
*  issuer: C=US; O=Let's Encrypt; CN=R3
*  SSL certificate verify ok.
* Using HTTP2, server supports multiplexing
* Copying HTTP/2 data in stream buffer to connection buffer after upgrade: len=0
* h2h3 [:method: GET]
* h2h3 [:path: /docker]
* h2h3 [:scheme: https]
* h2h3 [:authority: nas.tail1ccbb.ts.net:8443]
* h2h3 [user-agent: curl/7.86.0]
* h2h3 [accept: */*]
* Using Stream ID: 1 (easy handle 0x12d810a00)
> GET /docker HTTP/2
> Host: nas.tail1ccbb.ts.net:8443
> user-agent: curl/7.86.0
> accept: */*
>
* Connection state changed (MAX_CONCURRENT_STREAMS == 250)!
< HTTP/2 502
< alt-svc: h3=":443"; ma=2592000
< server: Caddy
< content-length: 0
< date: Tue, 07 Feb 2023 10:02:12 GMT
<
* Connection #0 to host nas.tail1ccbb.ts.net left intact

5. What I already tried:

I have tried various Caddy configs from Reddit and the Caddy community, but none of them work. I can't figure out what my problem is or how to solve it. :)

6. Links to relevant resources:

I don’t think that Caddy can help you with this Synology issue.

If you don’t specify ports in the URL when trying to access your services, your browser will assume ports 80 and 443 for HTTP and HTTPS, respectively. If you want to reach Caddy, but Caddy isn’t accessible on the default ports, you will need to specify. There’s no way around that.

I don’t see Portainer in your compose file. If it’s not composed, and sharing a Docker network with the Caddy service, then Caddy won’t be able to resolve it. Status 502 (Bad Gateway) means that Caddy couldn’t resolve or couldn’t connect to your upstream.


Portainer was launched separately. Here is the network inspect output:

keplian@Keplian_NAS:/$ sudo docker inspect proxy-network
Password:
[
    {
        "Name": "proxy-network",
        "Id": "c717c426f23cfa0d5e27d703174d4d51393ad6fa78532c2d7c3dbc2adc9209c4",
        "Created": "2023-02-06T16:21:31.806472568+03:00",
        "Scope": "local",
        "Driver": "bridge",
        "EnableIPv6": false,
        "IPAM": {
            "Driver": "default",
            "Options": null,
            "Config": [
                {
                    "Subnet": "192.168.32.0/20",
                    "Gateway": "192.168.32.1"
                }
            ]
        },
        "Internal": false,
        "Attachable": false,
        "Ingress": false,
        "ConfigFrom": {
            "Network": ""
        },
        "ConfigOnly": false,
        "Containers": {
            "39e1263c157195706227aaf58788f8d0eeaea0bb17a31772558fc920a8bd3a28": {
                "Name": "caddy",
                "EndpointID": "b3eaa1ca30db7d929e6cb2da3c3b12fb9bbb55f305d2e26aaa7e837a3035d600",
                "MacAddress": "02:42:c0:a8:20:02",
                "IPv4Address": "192.168.32.2/20",
                "IPv6Address": ""
            },
            "717a0bff0072e5bd775d8c87035916a30bfa135ddca3fefc3b1bea8885da8680": {
                "Name": "portainer-ce",
                "EndpointID": "36c1ac515bac13986a5984a3bc31e1bd62d01c05b5d6e1de2ae494f2b380ccbc",
                "MacAddress": "02:42:c0:a8:20:03",
                "IPv4Address": "192.168.32.3/20",
                "IPv6Address": ""
            }
        },
        "Options": {},
        "Labels": {
            "com.docker.compose.network": "proxy-network",
            "com.docker.compose.project": "ts_caddy",
            "com.docker.compose.version": "2.10.2"
        }
    }
]

To check Docker’s internal DNS resolution is working properly from Caddy’s perspective, check: docker-compose exec caddy nslookup portainer-ce

I have to admit, I’ve never actually joined a non-composed container to a composed Docker network, but the Docker documentation indicates that custom bridge networks allow resolution by name or alias - composed or otherwise - so theoretically it should be just fine.
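For what it's worth, a standalone container can also be attached to a composed network manually (a command sketch; the container and network names are taken from your compose file and inspect output above):

```
# Attach the already-running Portainer container to Caddy's network
docker network connect proxy-network portainer-ce

# Then verify it resolves from inside the Caddy container
docker exec caddy nslookup portainer-ce
```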

If DNS is working and Caddy can resolve it, that implies the 502s are related to the Portainer container itself, and there’s things we can do to check that.

Thanks for the advice. Here is the output from nslookup:

keplian@Keplian_NAS:/$ sudo docker exec caddy nslookup portainer-ce
Password:
Server:		127.0.0.11
Address:	127.0.0.11:53

Non-authoritative answer:

Non-authoritative answer:
Name:	portainer-ce
Address: 192.168.32.3

Looks good.

Enable the debug global option in your Caddyfile, reload it, and then run your curl command again from your original post.
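For clarity, the global options block goes at the very top of the Caddyfile; only the `debug` line is new, and your existing site block stays as-is:

```
{
	debug
}

nas.tail1ccbb.ts.net {
	# ... existing site config unchanged ...
}
```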

Then, paste the output of that as well as Caddy’s log output from the request. This will give us an idea of what Caddy is seeing.

  1. Here is the journal log from the Caddy container:
INF ts=1675766062.8179758 msg=shutting down apps, then terminating signal=SIGTERM

WRN ts=1675766062.8245456 msg=exiting; byeee!! 👋 signal=SIGTERM

INF ts=1675766062.82494 logger=tls.cache.maintenance msg=stopped background certificate maintenance cache=0xc00068cf50

INF ts=1675766062.8250039 logger=admin msg=stopped previous server address=localhost:2019

INF ts=1675766062.8250127 msg=shutdown complete signal=SIGTERM exit_code=0

INF ts=1675766070.531314 msg=using provided configuration config_file=/etc/caddy/Caddyfile config_adapter=caddyfile

WRN ts=1675766070.5326736 msg=Caddyfile input is not formatted; run the 'caddy fmt' command to fix inconsistencies adapter=caddyfile file=/etc/caddy/Caddyfile line=3

INF ts=1675766070.533761 logger=admin msg=admin endpoint started address=localhost:2019 enforce_origin=false origins=["//localhost:2019","//[::1]:2019","//127.0.0.1:2019"]

INF ts=1675766070.5340044 logger=http msg=enabling automatic HTTP->HTTPS redirects server_name=srv0

INF ts=1675766070.5342565 logger=tls.cache.maintenance msg=started background certificate maintenance cache=0xc000754ee0

INF ts=1675766070.53465 logger=http msg=enabling HTTP/3 listener addr=:8443

INF ts=1675766070.5347326 msg=failed to sufficiently increase receive buffer size (was: 208 kiB, wanted: 2048 kiB, got: 416 kiB). See https://github.com/lucas-clemente/quic-go/wiki/UDP-Receive-Buffer-Size for details.

INF ts=1675766070.5346475 logger=tls msg=cleaning storage unit description=FileStorage:/data/caddy

INF ts=1675766070.5347905 logger=tls msg=finished cleaning storage units

INF ts=1675766070.5348597 logger=http.log msg=server running name=srv0 protocols=["h1","h2","h3"]

INF ts=1675766070.5349038 logger=http.log msg=server running name=remaining_auto_https_redirects protocols=["h1","h2","h3"]

INF ts=1675766070.535071 msg=autosaved config (load with --resume flag) file=/config/caddy/autosave.json

INF ts=1675766070.5350828 msg=serving initial configuration

INF ts=1675766101.3838487 msg=shutting down apps, then terminating signal=SIGTERM

WRN ts=1675766101.3838832 msg=exiting; byeee!! 👋 signal=SIGTERM

INF ts=1675766101.3841429 logger=tls.cache.maintenance msg=stopped background certificate maintenance cache=0xc000754ee0

INF ts=1675766101.3842173 logger=admin msg=stopped previous server address=localhost:2019

INF ts=1675766101.384228 msg=shutdown complete signal=SIGTERM exit_code=0

INF ts=1675766108.863619 msg=using provided configuration config_file=/etc/caddy/Caddyfile config_adapter=caddyfile

WRN ts=1675766108.8647523 msg=Caddyfile input is not formatted; run the 'caddy fmt' command to fix inconsistencies adapter=caddyfile file=/etc/caddy/Caddyfile line=3

INF ts=1675766108.8653383 logger=admin msg=admin endpoint started address=localhost:2019 enforce_origin=false origins=["//127.0.0.1:2019","//localhost:2019","//[::1]:2019"]

INF ts=1675766108.865491 logger=http msg=server is listening only on the HTTPS port but has no TLS connection policies; adding one to enable TLS server_name=srv0 https_port=443

INF ts=1675766108.8655002 logger=http msg=enabling automatic HTTP->HTTPS redirects server_name=srv0

INF ts=1675766108.865521 logger=tls.cache.maintenance msg=started background certificate maintenance cache=0xc0004928c0

INF ts=1675766108.865804 logger=tls msg=cleaning storage unit description=FileStorage:/data/caddy

INF ts=1675766108.8658195 logger=http msg=enabling HTTP/3 listener addr=:443

INF ts=1675766108.865828 logger=tls msg=finished cleaning storage units

INF ts=1675766108.865865 msg=failed to sufficiently increase receive buffer size (was: 208 kiB, wanted: 2048 kiB, got: 416 kiB). See https://github.com/lucas-clemente/quic-go/wiki/UDP-Receive-Buffer-Size for details.

INF ts=1675766108.865949 logger=http.log msg=server running name=srv0 protocols=["h1","h2","h3"]

INF ts=1675766108.8659732 logger=http.log msg=server running name=remaining_auto_https_redirects protocols=["h1","h2","h3"]

INF ts=1675766108.8661208 msg=autosaved config (load with --resume flag) file=/config/caddy/autosave.json

INF ts=1675766108.8661277 msg=serving initial configuration

INF ts=1675766191.3957489 msg=shutting down apps, then terminating signal=SIGTERM

WRN ts=1675766191.3957849 msg=exiting; byeee!! 👋 signal=SIGTERM

INF ts=1675766191.3960207 logger=tls.cache.maintenance msg=stopped background certificate maintenance cache=0xc0004928c0

INF ts=1675766191.396078 logger=admin msg=stopped previous server address=localhost:2019

INF ts=1675766191.3960853 msg=shutdown complete signal=SIGTERM exit_code=0

INF ts=1675766198.2682745 msg=using provided configuration config_file=/etc/caddy/Caddyfile config_adapter=caddyfile

WRN ts=1675766198.269405 msg=Caddyfile input is not formatted; run the 'caddy fmt' command to fix inconsistencies adapter=caddyfile file=/etc/caddy/Caddyfile line=3

INF ts=1675766198.2700055 logger=admin msg=admin endpoint started address=localhost:2019 enforce_origin=false origins=["//localhost:2019","//[::1]:2019","//127.0.0.1:2019"]

INF ts=1675766198.2701895 logger=http msg=server is listening only on the HTTPS port but has no TLS connection policies; adding one to enable TLS server_name=srv0 https_port=443

INF ts=1675766198.2702012 logger=http msg=enabling automatic HTTP->HTTPS redirects server_name=srv0

INF ts=1675766198.2702148 logger=tls.cache.maintenance msg=started background certificate maintenance cache=0xc0002b1500

INF ts=1675766198.270551 logger=http msg=enabling HTTP/3 listener addr=:443

INF ts=1675766198.270606 msg=failed to sufficiently increase receive buffer size (was: 208 kiB, wanted: 2048 kiB, got: 416 kiB). See https://github.com/lucas-clemente/quic-go/wiki/UDP-Receive-Buffer-Size for details.

INF ts=1675766198.2706347 logger=tls msg=cleaning storage unit description=FileStorage:/data/caddy

INF ts=1675766198.2706597 logger=tls msg=finished cleaning storage units

INF ts=1675766198.270683 logger=http.log msg=server running name=srv0 protocols=["h1","h2","h3"]

INF ts=1675766198.2707098 logger=http.log msg=server running name=remaining_auto_https_redirects protocols=["h1","h2","h3"]

INF ts=1675766198.2708569 msg=autosaved config (load with --resume flag) file=/config/caddy/autosave.json

INF ts=1675766198.2708666 msg=serving initial configuration

ERR ts=1675766380.4276607 logger=http.log.error msg=dial tcp 100.116.196.48:5000: i/o timeout request={"remote_ip":"192.168.32.1","remote_port":"44898","proto":"HTTP/2.0","method":"GET","host":"nas.tail1ccbb.ts.net:8443","uri":"/docker","headers":{"Accept":["*/*"],"User-Agent":["curl/7.86.0"]},"tls":{"resumed":false,"version":772,"cipher_suite":4867,"proto":"h2","server_name":"nas.tail1ccbb.ts.net"}} duration=3.001474015 status=502 err_id=k7fjuzdq1 err_trace=reverseproxy.statusError (reverseproxy.go:1272)

ERR ts=1675766402.797415 logger=http.log.error msg=dial tcp 100.116.196.48:5000: i/o timeout request={"remote_ip":"192.168.32.1","remote_port":"44906","proto":"HTTP/2.0","method":"GET","host":"nas.tail1ccbb.ts.net:8443","uri":"/docker","headers":{"Accept-Encoding":["gzip, deflate, br"],"Upgrade-Insecure-Requests":["1"],"Sec-Fetch-Dest":["document"],"Sec-Fetch-Mode":["navigate"],"Sec-Fetch-User":["?1"],"Te":["trailers"],"Accept":["text/html,application/xhtml+xml,application/xml;q=0.9,image/avif,image/webp,*/*;q=0.8"],"Accept-Language":["en-US,en;q=0.5"],"Cookie":[],"Sec-Fetch-Site":["none"],"User-Agent":["Mozilla/5.0 (Macintosh; Intel Mac OS X 10.15; rv:109.0) Gecko/20100101 Firefox/109.0"]},"tls":{"resumed":false,"version":772,"cipher_suite":4865,"proto":"h2","server_name":"nas.tail1ccbb.ts.net"}} duration=3.002952912 status=502 err_id=eqtj1j5p7 err_trace=reverseproxy.statusError (reverseproxy.go:1272)

ERR ts=1675774932.2840376 logger=http.log.error msg=dial tcp 100.116.196.48:5000: i/o timeout request={"remote_ip":"192.168.32.1","remote_port":"45052","proto":"HTTP/2.0","method":"GET","host":"nas.tail1ccbb.ts.net:8443","uri":"/docker","headers":{"Accept-Language":["en-US,en;q=0.5"],"Upgrade-Insecure-Requests":["1"],"Sec-Fetch-Dest":["document"],"Sec-Fetch-Mode":["navigate"],"Sec-Fetch-User":["?1"],"User-Agent":["Mozilla/5.0 (Macintosh; Intel Mac OS X 10.15; rv:109.0) Gecko/20100101 Firefox/109.0"],"Accept":["text/html,application/xhtml+xml,application/xml;q=0.9,image/avif,image/webp,*/*;q=0.8"],"Accept-Encoding":["gzip, deflate, br"],"Cookie":[],"Sec-Fetch-Site":["none"],"Te":["trailers"]},"tls":{"resumed":false,"version":772,"cipher_suite":4865,"proto":"h2","server_name":"nas.tail1ccbb.ts.net"}} duration=3.008183466 status=502 err_id=u9wi613hu err_trace=reverseproxy.statusError (reverseproxy.go:1272)

INF ts=1675846218.1028962 msg=shutting down apps, then terminating signal=SIGTERM

WRN ts=1675846218.10758 msg=exiting; byeee!! 👋 signal=SIGTERM

INF ts=1675846218.1904387 logger=tls.cache.maintenance msg=stopped background certificate maintenance cache=0xc0002b1500

INF ts=1675846218.190589 logger=admin msg=stopped previous server address=localhost:2019

INF ts=1675846218.1906023 msg=shutdown complete signal=SIGTERM exit_code=0

INF ts=1675846224.3670604 msg=using provided configuration config_file=/etc/caddy/Caddyfile config_adapter=caddyfile

WRN ts=1675846224.368201 msg=Caddyfile input is not formatted; run the 'caddy fmt' command to fix inconsistencies adapter=caddyfile file=/etc/caddy/Caddyfile line=5

INF ts=1675846224.3690994 logger=admin msg=admin endpoint started address=localhost:2019 enforce_origin=false origins=["//localhost:2019","//[::1]:2019","//127.0.0.1:2019"]

INF ts=1675846224.3695743 logger=http msg=server is listening only on the HTTPS port but has no TLS connection policies; adding one to enable TLS server_name=srv0 https_port=443

INF ts=1675846224.3697467 logger=http msg=enabling automatic HTTP->HTTPS redirects server_name=srv0

INF ts=1675846224.3698676 logger=tls.cache.maintenance msg=started background certificate maintenance cache=0xc0005d0ee0

INF ts=1675846224.3704088 logger=tls msg=cleaning storage unit description=FileStorage:/data/caddy

INF ts=1675846224.3704343 logger=http msg=enabling HTTP/3 listener addr=:443

INF ts=1675846224.37045 logger=tls msg=finished cleaning storage units

INF ts=1675846224.3704958 msg=failed to sufficiently increase receive buffer size (was: 208 kiB, wanted: 2048 kiB, got: 416 kiB). See https://github.com/lucas-clemente/quic-go/wiki/UDP-Receive-Buffer-Size for details.

DBG ts=1675846224.3705692 logger=http msg=starting server loop address=[::]:443 tls=true http3=true

INF ts=1675846224.3705804 logger=http.log msg=server running name=srv0 protocols=["h1","h2","h3"]

DBG ts=1675846224.3706038 logger=http msg=starting server loop address=[::]:80 tls=false http3=false

INF ts=1675846224.3706114 logger=http.log msg=server running name=remaining_auto_https_redirects protocols=["h1","h2","h3"]

INF ts=1675846224.3707469 msg=autosaved config (load with --resume flag) file=/config/caddy/autosave.json

INF ts=1675846224.3707564 msg=serving initial configuration

DBG ts=1675846250.1442406 logger=events msg=event name=tls_get_certificate id=ee01ecf5-740b-4bc0-bf93-81651dcfad00 origin=tls data={"client_hello":{"CipherSuites":[4867,4866,4865,52393,52392,52394,49200,49196,49192,49188,49172,49162,159,107,57,65413,196,136,129,157,61,53,192,132,49199,49195,49191,49187,49171,49161,158,103,51,190,69,156,60,47,186,65,49169,49159,5,4,49170,49160,22,10,255],"ServerName":"nas.tail1ccbb.ts.net","SupportedCurves":[29,23,24,25],"SupportedPoints":"AA==","SignatureSchemes":[2054,1537,1539,2053,1281,1283,2052,1025,1027,513,515],"SupportedProtos":["h2","http/1.1"],"SupportedVersions":[772,771,770,769],"Conn":{}}}

DBG ts=1675846250.1444104 logger=tls.handshake msg=no matching certificates and no custom selection logic identifier=nas.tail1ccbb.ts.net

DBG ts=1675846250.1444175 logger=tls.handshake msg=no matching certificates and no custom selection logic identifier=*.tail1ccbb.ts.net

DBG ts=1675846250.1444216 logger=tls.handshake msg=no matching certificates and no custom selection logic identifier=*.*.ts.net

DBG ts=1675846250.1444247 logger=tls.handshake msg=no matching certificates and no custom selection logic identifier=*.*.*.net

DBG ts=1675846250.1444278 logger=tls.handshake msg=no matching certificates and no custom selection logic identifier=*.*.*.*

DBG ts=1675846250.1480117 logger=tls.handshake msg=using externally-managed certificate remote_ip=192.168.32.1 remote_port=46672 sni=nas.tail1ccbb.ts.net names=["nas.tail1ccbb.ts.net"] expiration=1683462498

DBG ts=1675846250.1553688 logger=http.handlers.reverse_proxy msg=selected upstream dial=100.116.196.48:5000 total_upstreams=1

DBG ts=1675846253.1575828 logger=http.handlers.reverse_proxy msg=upstream roundtrip upstream=100.116.196.48:5000 duration=3.002159923 request={"remote_ip":"192.168.32.1","remote_port":"46672","proto":"HTTP/2.0","method":"GET","host":"nas.tail1ccbb.ts.net:8443","uri":"/docker","headers":{"Accept":["*/*"],"X-Forwarded-For":["192.168.32.1"],"X-Forwarded-Proto":["https"],"X-Forwarded-Host":["nas.tail1ccbb.ts.net:8443"],"User-Agent":["curl/7.86.0"]},"tls":{"resumed":false,"version":772,"cipher_suite":4867,"proto":"h2","server_name":"nas.tail1ccbb.ts.net"}} error=dial tcp 100.116.196.48:5000: i/o timeout

ERR ts=1675846253.157669 logger=http.log.error msg=dial tcp 100.116.196.48:5000: i/o timeout request={"remote_ip":"192.168.32.1","remote_port":"46672","proto":"HTTP/2.0","method":"GET","host":"nas.tail1ccbb.ts.net:8443","uri":"/docker","headers":{"User-Agent":["curl/7.86.0"],"Accept":["*/*"]},"tls":{"resumed":false,"version":772,"cipher_suite":4867,"proto":"h2","server_name":"nas.tail1ccbb.ts.net"}} duration=3.002331385 status=502 err_id=htarjaeqf err_trace=reverseproxy.statusError (reverseproxy.go:1272)

DBG ts=1675846259.3865082 logger=events msg=event name=tls_get_certificate id=36208569-d640-43a2-bea4-d49d4883e33b origin=tls data={"client_hello":{"CipherSuites":[4867,4866,4865,52393,52392,52394,49200,49196,49192,49188,49172,49162,159,107,57,65413,196,136,129,157,61,53,192,132,49199,49195,49191,49187,49171,49161,158,103,51,190,69,156,60,47,186,65,49169,49159,5,4,49170,49160,22,10,255],"ServerName":"nas.tail1ccbb.ts.net","SupportedCurves":[29,23,24,25],"SupportedPoints":"AA==","SignatureSchemes":[2054,1537,1539,2053,1281,1283,2052,1025,1027,513,515],"SupportedProtos":["h2","http/1.1"],"SupportedVersions":[772,771,770,769],"Conn":{}}}

DBG ts=1675846259.3865955 logger=tls.handshake msg=no matching certificates and no custom selection logic identifier=nas.tail1ccbb.ts.net

DBG ts=1675846259.3866026 logger=tls.handshake msg=no matching certificates and no custom selection logic identifier=*.tail1ccbb.ts.net

DBG ts=1675846259.3866065 logger=tls.handshake msg=no matching certificates and no custom selection logic identifier=*.*.ts.net

DBG ts=1675846259.3866096 logger=tls.handshake msg=no matching certificates and no custom selection logic identifier=*.*.*.net

DBG ts=1675846259.3866127 logger=tls.handshake msg=no matching certificates and no custom selection logic identifier=*.*.*.*

DBG ts=1675846259.3890882 logger=tls.handshake msg=using externally-managed certificate remote_ip=192.168.32.1 remote_port=46678 sni=nas.tail1ccbb.ts.net names=["nas.tail1ccbb.ts.net"] expiration=1683462498

DBG ts=1675846259.4046247 logger=http.handlers.reverse_proxy msg=selected upstream dial=100.116.196.48:5000 total_upstreams=1

DBG ts=1675846262.4077475 logger=http.handlers.reverse_proxy msg=upstream roundtrip upstream=100.116.196.48:5000 duration=3.003051954 request={"remote_ip":"192.168.32.1","remote_port":"46678","proto":"HTTP/2.0","method":"GET","host":"nas.tail1ccbb.ts.net:8443","uri":"/docker","headers":{"Accept":["*/*"],"User-Agent":["curl/7.86.0"],"X-Forwarded-For":["192.168.32.1"],"X-Forwarded-Proto":["https"],"X-Forwarded-Host":["nas.tail1ccbb.ts.net:8443"]},"tls":{"resumed":false,"version":772,"cipher_suite":4867,"proto":"h2","server_name":"nas.tail1ccbb.ts.net"}} error=dial tcp 100.116.196.48:5000: i/o timeout

ERR ts=1675846262.4078217 logger=http.log.error msg=dial tcp 100.116.196.48:5000: i/o timeout request={"remote_ip":"192.168.32.1","remote_port":"46678","proto":"HTTP/2.0","method":"GET","host":"nas.tail1ccbb.ts.net:8443","uri":"/docker","headers":{"User-Agent":["curl/7.86.0"],"Accept":["*/*"]},"tls":{"resumed":false,"version":772,"cipher_suite":4867,"proto":"h2","server_name":"nas.tail1ccbb.ts.net"}} duration=3.003220392 status=502 err_id=zktb9vkv0 err_trace=reverseproxy.statusError (reverseproxy.go:1272)
  2. Here is the curl output:
  ~/.ssh ❯ curl -vL https://nas.tail1ccbb.ts.net:8443/docker    (took 3s)
*   Trying 100.116.196.48:8443...
* Connected to nas.tail1ccbb.ts.net (100.116.196.48) port 8443 (#0)
* ALPN: offers h2
* ALPN: offers http/1.1
*  CAfile: /etc/ssl/cert.pem
*  CApath: none
* (304) (OUT), TLS handshake, Client hello (1):
* (304) (IN), TLS handshake, Server hello (2):
* (304) (IN), TLS handshake, Unknown (8):
* (304) (IN), TLS handshake, Certificate (11):
* (304) (IN), TLS handshake, CERT verify (15):
* (304) (IN), TLS handshake, Finished (20):
* (304) (OUT), TLS handshake, Finished (20):
* SSL connection using TLSv1.3 / AEAD-CHACHA20-POLY1305-SHA256
* ALPN: server accepted h2
* Server certificate:
*  subject: CN=nas.tail1ccbb.ts.net
*  start date: Feb  6 12:28:18 2023 GMT
*  expire date: May  7 12:28:17 2023 GMT
*  subjectAltName: host "nas.tail1ccbb.ts.net" matched cert's "nas.tail1ccbb.ts.net"
*  issuer: C=US; O=Let's Encrypt; CN=R3
*  SSL certificate verify ok.
* Using HTTP2, server supports multiplexing
* Copying HTTP/2 data in stream buffer to connection buffer after upgrade: len=0
* h2h3 [:method: GET]
* h2h3 [:path: /docker]
* h2h3 [:scheme: https]
* h2h3 [:authority: nas.tail1ccbb.ts.net:8443]
* h2h3 [user-agent: curl/7.86.0]
* h2h3 [accept: */*]
* Using Stream ID: 1 (easy handle 0x152012e00)
> GET /docker HTTP/2
> Host: nas.tail1ccbb.ts.net:8443
> user-agent: curl/7.86.0
> accept: */*
>
* Connection state changed (MAX_CONCURRENT_STREAMS == 250)!
< HTTP/2 502
< alt-svc: h3=":443"; ma=2592000
< server: Caddy
< content-length: 0
< date: Wed, 08 Feb 2023 08:51:02 GMT
<
* Connection #0 to host nas.tail1ccbb.ts.net left intact

Looks like it failed because you’ve configured Caddy to only forward requests for /docker/* to Portainer, not /docker:

And the service at 100.116.196.48:5000 is not accessible (i/o timeout).

Note that with handle_path /docker/*, only requests like /docker/foo, /docker/bar etc. will be sent to Portainer, and requests like /docker and /docker/ will not.
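If you want the bare /docker to work as well, one common pattern is to redirect it to the trailing-slash form so it then matches the wildcard (a sketch, reusing the upstream name from your config):

```
nas.tail1ccbb.ts.net {
	# Send the bare path to the slash form so the matcher below catches it
	redir /docker /docker/

	handle_path /docker/* {
		reverse_proxy portainer-ce:9000
	}
}
```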

I changed my Caddyfile to this:

{
	debug
}

nas.tail1ccbb.ts.net {

	tls {
		get_certificate tailscale
	}

	handle_path /docker {
		reverse_proxy portainer-ce:9000
	}
	reverse_proxy localhost:5000
}

And the 502 error is gone! (I see the Portainer template HTML.) Thank you!
You have no idea how grateful I am for your help. I really appreciate you spending your time on this.

The main problem is solved. The rest are minor ones.

