OT: Cloudflare & Caddy

This is probably off-topic, so delete or close this if it is. I’m trying to understand the interplay between Cloudflare and Caddy for the purpose of setting up a cloudflared tunnel and configuring authentication with Authelia. I first tried setting up the CF Tunnel using the wiki guide here and didn’t get it to work: I can create a tunnel but can’t access any Caddy-proxied apps. So I took a break to set up Authelia authentication (without the tunnel, just port forwarding 80/443), but I don’t understand the bit about X-Forwarded-For transform rules (Forwarded Headers - Integration - Authelia). It looks like it’s asking me to create two rules that remove that header:

<Remove X-Forwarded-For if> (http.x_forwarded_for ne "")
<Remove X-Forwarded-For if> (not ip.src in {173.245.48.0/20 103.21.244.0/22 103.22.200.0/22 103.31.4.0/22 141.101.64.0/18 108.162.192.0/18 190.93.240.0/20 188.114.96.0/20 197.234.240.0/22 198.41.128.0/17 162.158.0.0/15 104.16.0.0/13 104.24.0.0/14 172.64.0.0/13 131.0.72.0/22})

Those rules seem to overlap/conflict, by my understanding. Maybe I’m supposed to pick one? It doesn’t say. Which one? But I also can’t tell what they do from looking at Caddy logs. They don’t change anything about the header that shows in the Caddy log:

"X-Forwarded-For":["10.0.0.2"] is what I see in all cases when I try to access one of my proxied apps running in Docker (my config is still essentially the same as I posted previously: How to configure Cloudflare? - #15 by simsrw73). I’m worried I’m going to get this wrong and break security, given all the warnings in the docs, so I haven’t even tried to progress beyond this point.

Caddy has good docs. Cloudflare has good docs. Authelia has good docs. But I can’t find anything that explains the interplay: how they work together, or how they can be made to work together. I’m missing a lot of information and I just have no idea where to find it. I would gladly RTFM if I could find a manual that tells me what I need to know in order to glue these things together. I’ve tried (several times) asking for help in the Cloudflare forum & Reddit with no luck. Hoping that it’s related enough to Caddy that someone here might point me in the right direction.

Howdy @simsrw73, I’ve stood up a successful Authelia integration with Cloudflare (and tunnels) in the mix, across multiple hosts over the internet, and have run into and overcome a number of pitfalls along the way! I might be able to give you some pointers here.

The first question I have to ask, because it vastly changes the scenario depending on your answer:

Are your apps, Caddy instance, and Authelia itself all hosted on the same server?

1 Like

They will be spread across multiple “servers”. That was the primary reason I chose to set up Docker in Swarm mode: so I could use an overlay network. If I’m understanding correctly, that should make it trivial to spread apps across any number of servers once I get the foundation laid. I’m still a bit unclear about accessing apps outside of Docker on my internal network.

My config as it stands is below. I didn’t post it originally because I was mainly focused on trying to understand the headers and how they need to be modified on Cloudflare. The configuration follows what’s recommended in the Caddy documentation (reverse_proxy (Caddyfile directive) — Caddy Documentation), but the Authelia documentation doesn’t explain it well enough for me to trust my understanding. A sequence diagram would make it so much simpler to understand. I understand how to create the rules; I just don’t trust my understanding of the rules it wants me to create, given all the security warnings.

services:
  caddy:
    image: caddy-docker-proxy--mysmarthome_network:latest
    build:
      context: /docker-services/caddy/builder
      args:
        CADDY_VERSION: "2.5.2"
    env_file: .env
    environment:
      - CADDY_INGRESS_NETWORKS=proxy
    volumes:
      - /var/run/docker.sock:/var/run/docker.sock
      - /docker-services/caddy/data:/data
      - /docker-services/caddy/config:/config
      - /docker-services/caddy/logs:/var/log/caddy
    ports:
      - "80:80"
      - "443:443"
    networks:
      - proxy
    extra_hosts:
      - host.docker.internal:host-gateway
    deploy:
      labels:
        caddy.debug:
        caddy.log.output: file /var/log/caddy/caddy.log
        caddy.acme_dns: "cloudflare {env.CF_API_TOKEN}"
        caddy.email: "{env.EMAIL}"
      placement:
        constraints:
          - node.role == manager
      replicas: 1
      resources:
        reservations:
          cpus: "0.1"
          memory: 200M
      restart_policy:
        condition: any

networks:
  proxy:
    external: true
  cloudflared:
    external: true

My test app:

services:

  whoami:
    image: jwilder/whoami
    networks:
      - proxy
    deploy:
      labels:
        caddy: whoami.mysmarthome.network
        caddy.reverse_proxy_0: "{{upstreams 8000}}"
        caddy.reverse_proxy_0.trusted_proxies: "10.0.0.0/24 192.168.99.0/24 192.168.200.0/24"
        caddy.tls.resolvers: "172.64.36.1, 172.64.36.2"
        caddy.log.output: file /var/log/caddy/whoami.log

networks:
  proxy:
    external: true

And cloudflared:

services:
  cloudflared:
    image: "cloudflare/cloudflared:latest"
    command: tunnel --no-autoupdate run
    env_file:
      - .env
    volumes:
      - /docker-services/cloudflared/config:/etc/cloudflared
    networks:
      - cloudflared
      - proxy
    deploy:
      mode: replicated
      replicas: 1
      placement:
        constraints: [node.role == manager]
      restart_policy:
        condition: on-failure

networks:
  cloudflared:
    external: true
  proxy:
    external: true

And its config.yml:

tunnel: 00000000-0000-0000-0000-000000000000

ingress:
  - hostname: "mysmarthome.network"
    service: https://caddy:443
    originRequest:
      originServerName: "mysmarthome.network"
  - hostname: "*.mysmarthome.network"
    service: https://caddy:443
    originRequest:
      originServerName: "mysmarthome.network"
  - service: http_status:404

Ahh, I actually haven’t done Swarm myself! I’m just using distributed Docker compose projects.

I think that might actually make things simpler because I’m pretty sure you can just refer to services running on other Swarm hosts by their service name, and the overlay network handles traffic, right?
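
If so, I’d expect the only shared wiring to be the overlay network itself, created once and then referenced as external by each of your stacks; a sketch (I haven’t used Swarm, so treat this as an assumption, with names matching your configs):

networks:
  proxy:
    driver: overlay    # swarm-scoped network spanning all nodes
    attachable: true   # lets standalone containers join it too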

I don’t think you’ll even need any kind of ingress management. You’ll be using the Cloudflare tunnel to route incoming traffic.

This one is a bit of a multifaceted doozy to wrap your head around, depending on your situation.

The MOST important thing is the second sentence of the document you linked:

The X-Forwarded-* headers presented to Authelia must be from trusted sources.

Let me repeat that for truth:

The X-Forwarded-* headers presented to Authelia must be from trusted sources.

What happens if the X-Forwarded-* headers don’t come from trusted sources? The document also answers that, a little further down. It is written from the perspective of Cloudflare but it’s true of ANY part of the chain that accepts untrusted X-Forwarded-* headers.

Cloudflare adds the X-Forwarded-For header if it does not exist, and if it does exist it will just append another IP to it. This means a client can forge their remote IP address with the most widely accepted remote IP header out of the box.

What’s the risk here if a client is allowed to forge their remote IP address for authentication purposes? Also explained in the documentation.

In particular this is important for Access Control Rules as the network criteria relies on the X-Forwarded-For header. This header is expected to have a true representation of the clients actual IP address.

An untrusted X-Forwarded-For could allow an attacker to pretend to be from an allow-listed IP address and bypass your authentication mechanism*.

*IF you use the client’s network as a means of authenticating them.
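
For example, with nothing stripping the header at the edge, any client can simply supply it outright (hypothetical allow-listed address and hostname here):

curl -H "X-Forwarded-For: 192.168.99.10" https://app.example.com/

If that header survives all the way to Authelia and a network-based rule matches it, the attacker inherits that rule’s access.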

So how do we stop that?

Option 1

Do nothing at all. Do not use the client’s network for access control, and do not configure Cloudflare or Caddy especially regarding this issue at all.

Literally, do nothing. Caddy doesn’t trust these headers from clients by default:

For these X-Forwarded-* headers, by default, Caddy will ignore their values from incoming requests, to prevent spoofing.
reverse_proxy (Caddyfile directive) — Caddy Documentation

Since Caddy is the last line of defense in front of Authelia - remember the most important part of the Authelia doc I repeated earlier for truth?

The X-Forwarded-* headers presented to Authelia must be from trusted sources.

Well, with Caddy there, by default, no untrusted headers will ever be presented to Authelia. Problem solved!
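
In your caddy-docker-proxy terms, Option 1 is just the plain proxy label with no trusted_proxies at all; a sketch mirroring your whoami service:

whoami:
  image: jwilder/whoami
  networks:
    - proxy
  deploy:
    labels:
      caddy: whoami.mysmarthome.network
      # no trusted_proxies label: Caddy ignores any client-supplied
      # X-Forwarded-* headers and passes along only the real remote address
      caddy.reverse_proxy: "{{upstreams 8000}}"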

Alternatively…

Option 2

If you need to use the client’s network address for access control purposes, THEN you need to:

  1. Conceptualize your chain of proxies (it might help to literally draw it on paper and draw out a request chain; for you it is probably many clients → Cloudflare → Caddy → Authelia)
  2. Configure the chain of proxy trust (this would be configuring Caddy to trust Cloudflare; a label sketch follows below)
  3. Configure Cloudflare to discard any client’s supplied X-Forwarded-* data outright.

In this case, the first suggested rule simply nukes all X-Forwarded-For if the client supplied it, and the second part of the rule can be ANDed to allow an exception in cases where Cloudflare is not always going to be at the edge receiving connections directly from untrusted clients, e.g.:

(http.x_forwarded_for ne "" and not ip.src in {trusted IP address of a known proxy in front of Cloudflare})

Because the important part is that you need to discard this bad data at the edge, but not throw away known trusted data if it was sourced from your own trusted proxy.

If you don’t have any proxies in front of Cloudflare, then you don’t need to carve out this exception - i.e. you can throw out all supplied X-Forwarded-For headers - and

(http.x_forwarded_for ne "")

is probably fine.
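
And for step 2, on the Caddy side, the label form is the trusted_proxies sub-directive pointed at Cloudflare’s published ranges (a sketch, abbreviated; the full and current IPv4 + IPv6 list comes from cloudflare.com/ips):

caddy.reverse_proxy: "{{upstreams 8000}}"
# abbreviated: use the complete, current list from cloudflare.com/ips
caddy.reverse_proxy.trusted_proxies: "173.245.48.0/20 103.21.244.0/22 172.64.0.0/13 2606:4700::/32"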

(I would like to CC @james_d_elliott to review and correct my writing here if appropriate.)

3 Likes

Now, this part stuck out to me because there’s a bit of a pitfall here for people running Authelia on remote (e.g. internet-accessible) hosts who also use Cloudflare to cover both the protected applications AND Authelia itself. This is why I asked if they were going to be hosted on the same server, but really, the Swarm overlay network should provide private access directly to Authelia that makes this a non-issue.

To explain a little bit, when you make a request through the Cloudflare proxy to Caddy to access a protected application, and Caddy then has to reach out through the Cloudflare proxy to reach Authelia on another remote host, Cloudflare will see this request looping twice through its edge and kill the request, assuming it’s in error. Since the request it kills is the forward_auth subrequest, this breaks all access to the protected applications.

However, if Authelia is running in the same overlay network, that subrequest doesn’t need to loop through Cloudflare a second time, so it won’t be killed. You can still have Authelia behind Cloudflare for clients to access the login portal.
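
In label form, the forward_auth subrequest can then target Authelia by its Swarm service name (a sketch; “authelia” as the service name and the portal hostname are assumptions on my part):

caddy_1: "(secure)"
caddy_1.forward_auth: "authelia:9091"
caddy_1.forward_auth.uri: "/api/verify?rd=https://auth.mysmarthome.network"
caddy_1.forward_auth.copy_headers: "Remote-User Remote-Groups Remote-Name Remote-Email"

Since that hop stays inside the overlay network, it never touches the Cloudflare edge.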

What error are you getting here? I’ve got a feeling I know what the problem is, but I wouldn’t be able to say outright without knowing exactly how the tunnel is configured or knowing exactly what Cloudflare’s returning by way of error.

3 Likes

The information above is absolutely accurate. We give two differing scenarios to provide options. The most applicable rule for nearly everyone is the first one, with the desired result of “Always Remove”.

The most critical header is X-Forwarded-For since it can often have security implications. But this rule could be extended to all the other X-Forwarded-* headers.

I am trying to figure out a way to refactor this doc to be more clear.

3 Likes

That’s correct. It works much as if all services are on the same Docker host. There have been some challenges, though, mostly because nearly all docs, blogs, etc. for containerized apps assume you are running Docker standalone, and there are occasional gotchas with Swarm, including the discussion below.

It certainly feels like a doozy.

After initially reading your reply and also @james_d_elliott’s reply, I thought I got it. Everything clicked and made sense… except that, before I posted, I had made the mistake of playing around and observing the X-Forwarded-For header under different scenarios, and when I remembered some of the results I saw… they didn’t match up with what I thought I understood, so I went back to check again and fell into a sizable rabbit hole… I should have just made the suggested changes and moved on.

I was hoping to step through this with mermaid markdown, but it looks like it’s not supported here (maybe? GitHub - unfoldingWord/discourse-mermaid: Adds the Mermaid JS library to discourse)

sequenceDiagram
    actor Client
    participant Cloudflare
    participant Swarm as Docker Swarm Ingress
    participant Caddy
    participant Authelia
    participant App
	
    Client->>Cloudflare: XFF: []
    Cloudflare->>Swarm: XFF: []
    Swarm->>Caddy: XFF: []
    Caddy->>Authelia: XFF: []
    Authelia->>Caddy: XFF: []
    Caddy->>App: XFF: []	

At the starting point, on Cloudflare, I observe that:

The Cloudflare transform rule only removes upstream-supplied IPs from X-Forwarded-For.
But Cloudflare still adds itself if the Cloudflare proxy is enabled (orange cloud).[3]

Is that a correct understanding?

This may be well known, but it took a lot of head-scratching to figure out why the X-Forwarded-For never contained what I expected, especially the Cloudflare proxies, when the proxy was enabled.

The log for caddy itself always showed:

X-Forwarded-For: ["10.0.0.2"]

Which is the IP of Docker Swarm’s “ingress” overlay network gateway.

The caddy log for the whoami app showed:

X-Forwarded-For: ["<client-ip>"]

when Caddy’s trusted_proxies was empty. But when I added the Docker network to trusted_proxies, it showed:

X-Forwarded-For: ["<client-ip>, 10.0.0.2"]

This is what I expected. Although maybe Caddy’s docs would be clearer if they used a different word than “ignore” here and stated that Caddy does not pass those headers along.

Turns out 10.0.0.2 is Docker’s ingress mesh network: it drops the original source and substitutes itself so that it can handle the routing. This is a Docker Swarm-specific issue:

https://github.com/moby/moby/issues/25526 (tldr)
GitHub - newsnowlabs/docker-ingress-routing-daemon: Docker swarm daemon that modifies ingress mesh routing to expose true client IPs to service containers (great explanation)

If I run the above DIRD service, I finally get something along the lines of what I was expecting in Caddy’s log (with trusted_proxies set to Cloudflare’s IPs):

X-Forwarded-For: ["<client-ip>, <cf-proxy-ip>"]

The Caddy log for the whoami app shows:

X-Forwarded-For: ["<client-ip>"]

BUT, I’m not sure I understand this: when I remove trusted_proxies from Caddy’s config, the log for Caddy shows:

X-Forwarded-For: ["<cf-proxy-ip>"]

How did that happen?

{"level":"debug","ts":1662656963.9671376,"logger":"http.handlers.reverse_proxy","msg":"upstream roundtrip","upstream":"10.0.1.104:8000","duration":0.000748938,"request":{"remote_ip":"172.70.82.129","remote_port":"59626","proto":"HTTP/2.0","method":"GET","host":"whoami.mysmarthome.network","uri":"/","headers":{"Sec-Fetch-Mode":["navigate"],"Sec-Fetch-Site":["none"],"Cf-Connecting-Ip":["<client-ip>"],"X-Forwarded-Proto":["https"],"Sec-Fetch-Dest":["document"],"X-Forwarded-For":["172.70.82.129"],"Sec-Gpc":["1"],"Cf-Ray":["74794aa6dfe809fa-MIA"],"Accept":["text/html,application/xhtml+xml,application/xml;q=0.9,image/avif,image/webp,image/apng,*/*;q=0.8"],"Accept-Encoding":["gzip"],"Sec-Fetch-User":["?1"],"Priority":["u=0"],"X-Forwarded-Host":["whoami.mysmarthome.network"],"Upgrade-Insecure-Requests":["1"],"Cdn-Loop":["cloudflare"],"User-Agent":["Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/105.0.0.0 Safari/537.36"],"Accept-Language":["en-US,en;q=0.7"],"Cf-Ipcountry":["US"],"Cf-Visitor":["{\"scheme\":\"https\"}"]},"tls":{"resumed":false,"version":772,"cipher_suite":4867,"proto":"h2","server_name":"whoami.mysmarthome.network"}},"headers":{"Date":["Thu, 08 Sep 2022 17:09:23 GMT"],"Content-Length":["17"],"Content-Type":["text/plain; charset=utf-8"]},"status":200}

Does any of this matter? I’m too early in my journey here to know if I will ever need those headers to behave a certain way. Should I run the docker-ingress-routing-daemon to ensure that Caddy/Authelia are getting as much trusted info as possible, or should I just let Docker overwrite it? Will it matter to any other services I run in Docker? From a security perspective it sounds like it doesn’t matter: as long as everything untrusted is removed, whatever is left is fine. I’m just wondering if the issue will ever pop up down the road where that header info might be needed.

References:

  1. X-Forwarded-For - HTTP | MDN
  2. reverse_proxy (Caddyfile directive) — Caddy Documentation
  3. Cloudflare HTTP request headers · Cloudflare Fundamentals docs
  4. Forwarded Headers - Integration - Authelia
  5. https://github.com/moby/moby/issues/25526
  6. GitHub - newsnowlabs/docker-ingress-routing-daemon: Docker swarm daemon that modifies ingress mesh routing to expose true client IPs to service containers
1 Like

In this case, they will all be on the same server: Caddy & Authelia on the same physical Docker host.

Here’s the setup and logs. Grateful for any insight.

Cloudflared docker-compose.yml

version: '3.7'

services:
  cloudflared:
    image: "cloudflare/cloudflared:latest"
    command: tunnel --no-autoupdate run
    env_file:
      - .env
    networks:
      - cloudflared
      - proxy
    volumes:
      - /docker-services/cloudflared/config:/etc/cloudflared
    deploy:
      mode: replicated
      replicas: 1
      placement:
        constraints: [node.role == manager]
      restart_policy:
        condition: on-failure

networks:
  cloudflared:
    external: true
  proxy:
    external: true

Cloudflared config.yml

ingress:
  - hostname: "mysmarthome.network"
    service: https://caddy:443
    originRequest:
      originServerName: "mysmarthome.network"
  - hostname: "*.mysmarthome.network"
    service: https://caddy:443
    originRequest:
      originServerName: "mysmarthome.network"
  - service: http_status:404

Cloudflared logs:

2022-09-07T18:14:14Z INF Starting tunnel tunnelID=11111111-1111-1111-1111-111111111111
2022-09-07T18:14:14Z INF Version 2022.9.0
2022-09-07T18:14:14Z INF GOOS: linux, GOVersion: go1.19.1, GoArch: amd64
2022-09-07T18:14:14Z INF Settings: map[no-autoupdate:true]
2022-09-07T18:14:14Z INF Environmental variables map[TUNNEL_TOKEN:*****]
2022-09-07T18:14:14Z INF Generated Connector ID: 05a09b0c-7ffb-411a-87f5-90c1c7a08285
2022-09-07T18:14:14Z INF Will be fetching remotely managed configuration from Cloudflare API. Defaulting to protocol: quic
2022-09-07T18:14:14Z INF Initial protocol quic
2022-09-07T18:14:14Z INF Starting metrics server on 127.0.0.1:33261/metrics
2022/09/07 18:14:14 failed to sufficiently increase receive buffer size (was: 208 kiB, wanted: 2048 kiB, got: 416 kiB). See https://github.com/lucas-clemente/quic-go/wiki/UDP-Receive-Buffer-Size for details.
2022-09-07T18:14:14Z INF Connection 7088d9b5-09fb-49f9-b1df-6f4338698599 registered connIndex=0 ip=198.41.200.23 location=ATL
2022-09-07T18:14:14Z INF Updated to new configuration config="{\"ingress\":[{\"hostname\":\"mysmarthome.network\",\"originRequest\":{},\"service\":\"https://caddy:443\"},{\"hostname\":\"*.mysmarthome.network\",\"originRequest\":{},\"service\":\"https://caddy:443\"},{\"service\":\"http_status:404\"}],\"warp-routing\":{\"enabled\":false}}" version=9
2022-09-07T18:14:15Z INF Connection d44ba591-b8cb-4ccc-ae89-698e01a892a8 registered connIndex=1 ip=198.41.192.27 location=ATL
2022-09-07T18:14:16Z INF Connection 9c20b3dd-8bd8-4023-8a69-c09e84504b1e registered connIndex=2 ip=198.41.200.63 location=ATL
2022-09-07T18:14:17Z INF Connection a073ff49-7e21-4598-9fd8-99c30893497a registered connIndex=3 ip=198.41.192.67 location=ATL

Caddy logs:

https://zerobin.net/?ece28391ac20b6c6#wkIjF6YfR5cXQD/yhn6kBnkWWqPlsim2BzH75BIG4wc=

1 Like

Mostly correct, but a minor point of accuracy: when Cloudflare is acting as a reverse proxy, the principle of good security dictates that it should throw out X-Forwarded-For from clients, and convention dictates that it should add the client’s IP (not itself) to a new X-Forwarded-For, so that the actual client IP address of import can be preserved down the entire proxy chain if that is required.

This does seem like a bit of a can of worms in Swarm!

That looks good!

That is really weird! I’d expect CF to get tossed out with the rest here. Is this what DIRD does, maybe? Injects the IP directly, external to the overlay network? But if that’s from Caddy’s log, i.e. that’s an incoming request, why would changing the trusted_proxies configuration alter the request coming in? I’d expect it only to have an effect on the outgoing, proxied request from Caddy.

Loosely speaking, there are two reasons you’d want this information to be accurate, reliable, and trusted:

  1. You want access logs to show accurate client IPs.
  2. You have app functionality (often security) that requires accurate, reliable information to do its job (e.g. fail2ban, or Authelia with IP-based access control).

If you don’t care about either in this specific context, you probably don’t care about sorting this issue out because it’s not going to impact you. Just remember never to set up IP-based access control unless you know the whole chain is good.

Generally speaking, there are no gotchas from not having this information, and if there are, it will be obvious from the functionality of the application whether it’s required, so you’ll either know going in or it just won’t work. The only real gotcha for you would be if you blindly accepted all of it; this is where Caddy’s default-untrusted stance is good if you don’t want to bother getting it all 100% ironed out, because it protects you from having an app that silently uses the data for something important and might be getting bad data instead of no data.

Alright, at first blush, it looks like you did actually jump the first and biggest hurdle: configuring SNI for cloudflared proxied requests to https://caddy.

However, the logs are littered with entries that tell a different story…

{"level":"debug","ts":1662576391.2652116,"logger":"http.stdlib","msg":"http: TLS handshake error from 10.0.1.4:41958: no certificate available for 'caddy'"}

Looking at your cloudflared YAML and the Cloudflare tunnel documentation, it appears your configuration is correct. I’m stumped just at the moment, but there are two things I can suggest:

First, try adding a route from the whoami subdomain directly to whoami:8000 to bypass the SNI issue, to make sure it’s the only problem you’re having and that the tunnels will work conceptually once it’s solved.
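
For example (a sketch reusing your existing hostnames; ingress rules match in order, so the specific whoami rule goes above the wildcard):

ingress:
  - hostname: "whoami.mysmarthome.network"
    service: http://whoami:8000
  - hostname: "*.mysmarthome.network"
    service: https://caddy:443
    originRequest:
      originServerName: "mysmarthome.network"
  - service: http_status:404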

Secondly, barring us finding a bug or other misconfiguration that might be fixed in YAML, consider making a managed tunnel over at https://dash.teams.cloudflare.com/ instead of using local YAML.

That’s how I currently have my CF tunnels configured; you make a named tunnel there and it spits out a single token - you then don’t put any configuration on the host system at all, just the token in ENV, and cloudflared reaches out to receive its configuration directly from Cloudflare.

From there I configured the Origin Server Name option and this has worked perfectly for me, no SNI for “caddy” issues.
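
With a remotely managed tunnel, the compose side shrinks to roughly this (a sketch; the token is the one the dashboard issues for the named tunnel):

services:
  cloudflared:
    image: cloudflare/cloudflared:latest
    command: tunnel run
    environment:
      NO_AUTOUPDATE: "true"
      TUNNEL_TOKEN: "[snip]"   # issued when you create the named tunnel

All ingress rules and origin settings then live in the Cloudflare dashboard rather than in a local config.yml.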

2 Likes

DIRD doesn’t do this directly; it just changes routing information by manipulating iptables. Maybe that is causing Docker to misinterpret the headers? Somewhere along the way remote_ip was changed to the Cloudflare proxy IP: "remote_ip":"172.70.82.129". I guess Caddy dropped X-Forwarded-For and replaced it with the value of remote_ip?

I’ve disabled DIRD for the moment. Without it, things are mostly right; it just drops the Cloudflare proxies from the list, leaving only the source IP and the Docker ingress gateway IP. I might test some more later, after I get the CF tunnel up, and open a discussion on the DIRD issue tracker.

I’m not sure what you mean here. I’m trying to get my head around all this stuff, but there is a ton to learn, and even though many docs do a great job of describing what each feature/setting does, I haven’t found much that covers the theory and how everything connects and works together. Matt’s Expert series is helpful; still working through that. That’s the kind of information I need much more of to make sense of all this. Otherwise, most times I just feel like I’m flipping switches and seeing what happens.

OK, did this and got through when bypassing Caddy:


❯ curl -L -v http://whoami.mysmarthome.network
*   Trying 104.21.88.128:80...
* Connected to whoami.mysmarthome.network (104.21.88.128) port 80 (#0)
> GET / HTTP/1.1
> Host: whoami.mysmarthome.network
> User-Agent: curl/7.74.0
> Accept: */*
>
* Mark bundle as not supporting multiuse
< HTTP/1.1 200 OK
< Date: Fri, 09 Sep 2022 17:51:46 GMT
< Content-Type: text/plain; charset=utf-8
< Content-Length: 17
< Connection: keep-alive
< CF-Cache-Status: DYNAMIC
< Report-To: {"endpoints":[{"url":"https:\/\/a.nel.cloudflare.com\/report\/v3?s=WgI1Xq3vsQ%2FitOHEebgxDjMga6fxqWPHGsJ7Lm%2FRWN%2BywmCVxmLUNpzY%2BYjFiJyOsPqDfLFEKaCAM2%2FC8fiBNY%2FthVdfyrWcT6V0ZBxRNpeFD94DcEYS9yEqNyVL71S2YQO0ZISG1e5mBycrNw%3D%3D"}],"group":"cf-nel","max_age":604800}
< NEL: {"success_fraction":0,"report_to":"cf-nel","max_age":604800}
< Server: cloudflare
< CF-RAY: 7481c6182bf467d4-MIA
< alt-svc: h3=":443"; ma=86400, h3-29=":443"; ma=86400
<
I'm 36fac6105cd9
* Connection #0 to host whoami.mysmarthome.network left intact

I then tried going through Caddy, using HTTP. At least I think I got the syntax right; there were no syntax errors.

services:

  whoami:
    image: jwilder/whoami
    networks:
      - proxy
    deploy:
      labels:
        caddy: whoami.mysmarthome.network
        caddy.reverse_proxy_0: http://whoami:8000
        # caddy.reverse_proxy_0: "{{upstreams 8000}}"
        caddy.reverse_proxy_0.trusted_proxies: "10.0.0.0/24 192.168.99.0/24 192.168.200.0/24 173.245.48.0/20 103.21.244.0/22 103.22.200.0/22 103.31.4.0/22 141.101.64.0/18 108.162.192.0/18 190.93.240.0/20 188.114.96.0/20 197.234.240.0/22 198.41.128.0/17 162.158.0.0/15 104.16.0.0/13 104.24.0.0/14 172.64.0.0/13 131.0.72.0/22 2400:cb00::/32 2606:4700::/32 2803:f800::/32 2405:b500::/32 2405:8100::/32 2a06:98c0::/29 2c0f:f248::/32"
        caddy.tls.resolvers: "172.64.36.1, 172.64.36.2"
        caddy.log.output: file /var/log/caddy/whoami.log

networks:
  proxy:
    external: true

No joy.

❯ dslog cloudflared
2022-09-09T18:11:14Z DBG Loading configuration from /etc/cloudflared/config.yml
2022-09-09T18:11:14Z INF Starting tunnel tunnelID=11111111-1111-1111-1111-111111111111
2022-09-09T18:11:14Z INF Version 2022.9.0
2022-09-09T18:11:14Z INF GOOS: linux, GOVersion: go1.19.1, GoArch: amd64
2022-09-09T18:11:14Z INF Settings: map[loglevel:debug no-autoupdate:true]
2022-09-09T18:11:14Z INF Environmental variables map[TUNNEL_TOKEN:*****]
2022-09-09T18:11:14Z INF Generated Connector ID: 26ecc671-b1c6-4b03-bc07-814a7b709c22
2022-09-09T18:11:14Z INF Will be fetching remotely managed configuration from Cloudflare API. Defaulting to protocol: quic
2022-09-09T18:11:14Z INF Initial protocol quic
2022-09-09T18:11:14Z INF Starting metrics server on 127.0.0.1:38785/metrics
2022-09-09T18:11:14Z DBG looking up edge SRV record domain=_v2-origintunneld._tcp.argotunnel.com
2022-09-09T18:11:14Z DBG Edge Address: {TCP:198.41.192.107:7844 UDP:198.41.192.107:7844 IPVersion:4}
2022-09-09T18:11:14Z DBG Edge Address: {TCP:198.41.192.167:7844 UDP:198.41.192.167:7844 IPVersion:4}
2022-09-09T18:11:14Z DBG Edge Address: {TCP:198.41.192.57:7844 UDP:198.41.192.57:7844 IPVersion:4}
2022-09-09T18:11:14Z DBG Edge Address: {TCP:198.41.192.67:7844 UDP:198.41.192.67:7844 IPVersion:4}
2022-09-09T18:11:14Z DBG Edge Address: {TCP:198.41.192.47:7844 UDP:198.41.192.47:7844 IPVersion:4}
2022-09-09T18:11:14Z DBG Edge Address: {TCP:198.41.192.227:7844 UDP:198.41.192.227:7844 IPVersion:4}
2022-09-09T18:11:14Z DBG Edge Address: {TCP:198.41.192.7:7844 UDP:198.41.192.7:7844 IPVersion:4}
2022-09-09T18:11:14Z DBG Edge Address: {TCP:198.41.192.37:7844 UDP:198.41.192.37:7844 IPVersion:4}
2022-09-09T18:11:14Z DBG Edge Address: {TCP:198.41.192.27:7844 UDP:198.41.192.27:7844 IPVersion:4}
2022-09-09T18:11:14Z DBG Edge Address: {TCP:198.41.192.77:7844 UDP:198.41.192.77:7844 IPVersion:4}
2022-09-09T18:11:14Z DBG Edge Address: {TCP:[2606:4700:a0::9]:7844 UDP:[2606:4700:a0::9]:7844 IPVersion:6}
2022-09-09T18:11:14Z DBG Edge Address: {TCP:[2606:4700:a0::1]:7844 UDP:[2606:4700:a0::1]:7844 IPVersion:6}
2022-09-09T18:11:14Z DBG Edge Address: {TCP:[2606:4700:a0::2]:7844 UDP:[2606:4700:a0::2]:7844 IPVersion:6}
2022-09-09T18:11:14Z DBG Edge Address: {TCP:[2606:4700:a0::4]:7844 UDP:[2606:4700:a0::4]:7844 IPVersion:6}
2022-09-09T18:11:14Z DBG Edge Address: {TCP:[2606:4700:a0::10]:7844 UDP:[2606:4700:a0::10]:7844 IPVersion:6}
2022-09-09T18:11:14Z DBG Edge Address: {TCP:[2606:4700:a0::7]:7844 UDP:[2606:4700:a0::7]:7844 IPVersion:6}
2022-09-09T18:11:14Z DBG Edge Address: {TCP:[2606:4700:a0::5]:7844 UDP:[2606:4700:a0::5]:7844 IPVersion:6}
2022-09-09T18:11:14Z DBG Edge Address: {TCP:[2606:4700:a0::8]:7844 UDP:[2606:4700:a0::8]:7844 IPVersion:6}
2022-09-09T18:11:14Z DBG Edge Address: {TCP:[2606:4700:a0::6]:7844 UDP:[2606:4700:a0::6]:7844 IPVersion:6}
2022-09-09T18:11:14Z DBG Edge Address: {TCP:[2606:4700:a0::3]:7844 UDP:[2606:4700:a0::3]:7844 IPVersion:6}
2022-09-09T18:11:14Z DBG Edge Address: {TCP:198.41.200.23:7844 UDP:198.41.200.23:7844 IPVersion:4}
2022-09-09T18:11:14Z DBG Edge Address: {TCP:198.41.200.43:7844 UDP:198.41.200.43:7844 IPVersion:4}
2022-09-09T18:11:14Z DBG Edge Address: {TCP:198.41.200.113:7844 UDP:198.41.200.113:7844 IPVersion:4}
2022-09-09T18:11:14Z DBG Edge Address: {TCP:198.41.200.73:7844 UDP:198.41.200.73:7844 IPVersion:4}
2022-09-09T18:11:14Z DBG Edge Address: {TCP:198.41.200.13:7844 UDP:198.41.200.13:7844 IPVersion:4}
2022-09-09T18:11:14Z DBG Edge Address: {TCP:198.41.200.193:7844 UDP:198.41.200.193:7844 IPVersion:4}
2022-09-09T18:11:14Z DBG Edge Address: {TCP:198.41.200.233:7844 UDP:198.41.200.233:7844 IPVersion:4}
2022-09-09T18:11:14Z DBG Edge Address: {TCP:198.41.200.33:7844 UDP:198.41.200.33:7844 IPVersion:4}
2022-09-09T18:11:14Z DBG Edge Address: {TCP:198.41.200.63:7844 UDP:198.41.200.63:7844 IPVersion:4}
2022-09-09T18:11:14Z DBG Edge Address: {TCP:198.41.200.53:7844 UDP:198.41.200.53:7844 IPVersion:4}
2022-09-09T18:11:14Z DBG Edge Address: {TCP:[2606:4700:a8::3]:7844 UDP:[2606:4700:a8::3]:7844 IPVersion:6}
2022-09-09T18:11:14Z DBG Edge Address: {TCP:[2606:4700:a8::5]:7844 UDP:[2606:4700:a8::5]:7844 IPVersion:6}
2022-09-09T18:11:14Z DBG Edge Address: {TCP:[2606:4700:a8::2]:7844 UDP:[2606:4700:a8::2]:7844 IPVersion:6}
2022-09-09T18:11:14Z DBG Edge Address: {TCP:[2606:4700:a8::4]:7844 UDP:[2606:4700:a8::4]:7844 IPVersion:6}
2022-09-09T18:11:14Z DBG Edge Address: {TCP:[2606:4700:a8::1]:7844 UDP:[2606:4700:a8::1]:7844 IPVersion:6}
2022-09-09T18:11:14Z DBG Edge Address: {TCP:[2606:4700:a8::6]:7844 UDP:[2606:4700:a8::6]:7844 IPVersion:6}
2022-09-09T18:11:14Z DBG Edge Address: {TCP:[2606:4700:a8::7]:7844 UDP:[2606:4700:a8::7]:7844 IPVersion:6}
2022-09-09T18:11:14Z DBG Edge Address: {TCP:[2606:4700:a8::10]:7844 UDP:[2606:4700:a8::10]:7844 IPVersion:6}
2022-09-09T18:11:14Z DBG Edge Address: {TCP:[2606:4700:a8::8]:7844 UDP:[2606:4700:a8::8]:7844 IPVersion:6}
2022-09-09T18:11:14Z DBG Edge Address: {TCP:[2606:4700:a8::9]:7844 UDP:[2606:4700:a8::9]:7844 IPVersion:6}
2022-09-09T18:11:14Z DBG edgediscovery - GetAddr: Giving connection its new address connIndex=0 ip=198.41.200.233
2022/09/09 18:11:14 failed to sufficiently increase receive buffer size (was: 208 kiB, wanted: 2048 kiB, got: 416 kiB). See https://github.com/lucas-clemente/quic-go/wiki/UDP-Receive-Buffer-Size for details.
2022-09-09T18:11:14Z DBG rpcconnect: tx (bootstrap = (questionId = 0, deprecatedObjectId = <opaque pointer>))
2022-09-09T18:11:14Z DBG rpcconnect: tx (call = (questionId = 1, target = (promisedAnswer = (questionId = 0, transform = [])), interfaceId = 17804583019846587543, methodId = 0, allowThirdPartyTailCall = false, params = (content = <opaque pointer>, capTable = []), sendResultsTo = (caller = void)))
2022-09-09T18:11:14Z DBG rpcconnect: rx (return = (answerId = 0, releaseParamCaps = false, results = (content = <opaque pointer>, capTable = [(senderHosted = 0)])))
2022-09-09T18:11:14Z DBG rpcconnect: tx (finish = (questionId = 0, releaseResultCaps = false))
2022-09-09T18:11:14Z DBG rpcconnect: rx (return = (answerId = 1, releaseParamCaps = false, results = (content = <opaque pointer>, capTable = [])))
2022-09-09T18:11:14Z DBG rpcconnect: tx (finish = (questionId = 1, releaseResultCaps = false))
2022-09-09T18:11:14Z INF Connection daaba282-a17f-49bb-ab3d-cc88f9759fb2 registered connIndex=0 ip=198.41.200.233 location=ATL
2022-09-09T18:11:14Z DBG edgediscovery - GetAddr: Giving connection its new address connIndex=1 ip=198.41.192.27
2022-09-09T18:11:14Z INF Updated to new configuration config="{\"ingress\":[{\"hostname\":\"mysmarthome.network\",\"originRequest\":{},\"service\":\"https://caddy:443\"},{\"hostname\":\"*.mysmarthome.network\",\"originRequest\":{},\"service\":\"https://caddy:443\"},{\"hostname\":\"whoami.mysmarthome.network\",\"originRequest\":{\"noTLSVerify\":false,\"originServerName\":\"mysmarthome.network\"},\"service\":\"http://whoami:8000\"},{\"service\":\"http_status:404\"}],\"warp-routing\":{\"enabled\":false}}" version=19
2022-09-09T18:11:14Z DBG rpcconnect: tx (bootstrap = (questionId = 0, deprecatedObjectId = <opaque pointer>))
2022-09-09T18:11:14Z DBG rpcconnect: tx (call = (questionId = 1, target = (promisedAnswer = (questionId = 0, transform = [])), interfaceId = 17804583019846587543, methodId = 0, allowThirdPartyTailCall = false, params = (content = <opaque pointer>, capTable = []), sendResultsTo = (caller = void)))
2022-09-09T18:11:14Z DBG rpcconnect: rx (return = (answerId = 0, releaseParamCaps = false, results = (content = <opaque pointer>, capTable = [(senderHosted = 0)])))
2022-09-09T18:11:14Z DBG rpcconnect: tx (finish = (questionId = 0, releaseResultCaps = false))
2022-09-09T18:11:14Z DBG rpcconnect: rx (return = (answerId = 1, releaseParamCaps = false, results = (content = <opaque pointer>, capTable = [])))
2022-09-09T18:11:14Z INF Connection 203508cb-61a2-4288-acd4-471f24c2d231 registered connIndex=1 ip=198.41.192.27 location=ATL
2022-09-09T18:11:14Z DBG rpcconnect: tx (finish = (questionId = 1, releaseResultCaps = false))
2022-09-09T18:11:15Z DBG edgediscovery - GetAddr: Giving connection its new address connIndex=2 ip=198.41.200.63
2022-09-09T18:11:15Z DBG rpcconnect: tx (bootstrap = (questionId = 0, deprecatedObjectId = <opaque pointer>))
2022-09-09T18:11:15Z DBG rpcconnect: tx (call = (questionId = 1, target = (promisedAnswer = (questionId = 0, transform = [])), interfaceId = 17804583019846587543, methodId = 0, allowThirdPartyTailCall = false, params = (content = <opaque pointer>, capTable = []), sendResultsTo = (caller = void)))
2022-09-09T18:11:15Z DBG rpcconnect: rx (return = (answerId = 0, releaseParamCaps = false, results = (content = <opaque pointer>, capTable = [(senderHosted = 0)])))
2022-09-09T18:11:15Z DBG rpcconnect: tx (finish = (questionId = 0, releaseResultCaps = false))
2022-09-09T18:11:15Z DBG rpcconnect: rx (return = (answerId = 1, releaseParamCaps = false, results = (content = <opaque pointer>, capTable = [])))
2022-09-09T18:11:15Z INF Connection 8b1ff7b1-bd8a-44a0-9553-bd51ee397da6 registered connIndex=2 ip=198.41.200.63 location=ATL
2022-09-09T18:11:15Z DBG rpcconnect: tx (finish = (questionId = 1, releaseResultCaps = false))
2022-09-09T18:11:16Z DBG edgediscovery - GetAddr: Giving connection its new address connIndex=3 ip=198.41.192.167
2022-09-09T18:11:16Z DBG rpcconnect: tx (bootstrap = (questionId = 0, deprecatedObjectId = <opaque pointer>))
2022-09-09T18:11:16Z DBG rpcconnect: tx (call = (questionId = 1, target = (promisedAnswer = (questionId = 0, transform = [])), interfaceId = 17804583019846587543, methodId = 0, allowThirdPartyTailCall = false, params = (content = <opaque pointer>, capTable = []), sendResultsTo = (caller = void)))
2022-09-09T18:11:16Z DBG rpcconnect: rx (return = (answerId = 0, releaseParamCaps = false, results = (content = <opaque pointer>, capTable = [(senderHosted = 0)])))
2022-09-09T18:11:16Z DBG rpcconnect: tx (finish = (questionId = 0, releaseResultCaps = false))
2022-09-09T18:11:16Z DBG rpcconnect: rx (return = (answerId = 1, releaseParamCaps = false, results = (content = <opaque pointer>, capTable = [])))
2022-09-09T18:11:16Z INF Connection 102743f8-c552-484f-9498-b080d7ff60b6 registered connIndex=3 ip=198.41.192.167 location=ATL
2022-09-09T18:11:16Z DBG rpcconnect: tx (finish = (questionId = 1, releaseResultCaps = false))
2022-09-09T18:11:56Z DBG CF-RAY: 7481e3a338486dbf-MIA GET http://whoami.mysmarthome.network/ HTTP/1.1
2022-09-09T18:11:56Z DBG Inbound request CF-RAY=7481e3a338486dbf-MIA Header="map[Accept:[*/*] Accept-Encoding:[gzip] Cdn-Loop:[cloudflare] Cf-Connecting-Ip:[99.153.136.108] Cf-Ipcountry:[US] Cf-Ray:[7481e3a338486dbf-MIA] Cf-Visitor:[{\"scheme\":\"http\"}] Cf-Warp-Tag-Id:[1a4cca4d-f1e1-49dc-ae7a-b4cd40eeff8e] User-Agent:[curl/7.74.0] X-Forwarded-For:[99.153.136.108] X-Forwarded-Proto:[http]]" host=whoami.mysmarthome.network path=/ rule=1
2022-09-09T18:11:56Z DBG CF-RAY: 7481e3a338486dbf-MIA Request content length 0
2022-09-09T18:11:56Z ERR  error="Unable to reach the origin service. The service may be down or it may not be responding to traffic from cloudflared: remote error: tls: internal error" cfRay=7481e3a338486dbf-MIA ingressRule=1 originService=https://caddy:443
2022-09-09T18:11:56Z ERR Request failed error="Unable to reach the origin service. The service may be down or it may not be responding to traffic from cloudflared: remote error: tls: internal error" connIndex=3 dest=http://whoami.mysmarthome.network/ ip=198.41.192.167 type=http
❯ \cat /docker-services/caddy/logs/caddy.log
{"level":"info","ts":1662747100.4659154,"logger":"admin","msg":"admin endpoint started","address":"localhost:2019","enforce_origin":false,"origins":["//127.0.0.1:2019","//localhost:2019","//[::1]:2019"]}
{"level":"info","ts":1662747100.466212,"msg":"autosaved config (load with --resume flag)","file":"/config/caddy/autosave.json"}
{"level":"info","ts":1662747100.4662218,"logger":"admin.api","msg":"load complete"}
{"level":"info","ts":1662747100.467395,"logger":"admin","msg":"stopped previous server","address":"localhost:2019"}
{"level":"debug","ts":1662747111.3112214,"logger":"docker-proxy","msg":"Skipping default Caddyfile because no path is set"}
{"level":"debug","ts":1662747111.3129804,"logger":"docker-proxy","msg":"Swarm service","service":"cloudflared_cloudflared"}
{"level":"debug","ts":1662747111.3130057,"logger":"docker-proxy","msg":"Swarm service","service":"whoami_whoami"}
{"level":"debug","ts":1662747111.3131058,"logger":"docker-proxy","msg":"Swarm service","service":"caddy_caddy"}
{"level":"info","ts":1662747111.3140552,"logger":"docker-proxy","msg":"New Caddyfile","caddyfile":"{\n\tacme_dns cloudflare {env.CF_API_TOKEN}\n\tdebug\n\temail {env.EMAIL}\n\tlog {\n\t\toutput file /var/log/caddy/caddy.log\n\t}\n}\nwhoami.mysmarthome.network {\n\tlog {\n\t\toutput file /var/log/caddy/whoami.log\n\t}\n\treverse_proxy http://whoami:8000 {\n\t\ttrusted_proxies 10.0.0.0/24 192.168.99.0/24 192.168.200.0/24 173.245.48.0/20 103.21.244.0/22 103.22.200.0/22 103.31.4.0/22 141.101.64.0/18 108.162.192.0/18 190.93.240.0/20 188.114.96.0/20 197.234.240.0/22 198.41.128.0/17 162.158.0.0/15 104.16.0.0/13 104.24.0.0/14 172.64.0.0/13 131.0.72.0/22 2400:cb00::/32 2606:4700::/32 2803:f800::/32 2405:b500::/32 2405:8100::/32 2a06:98c0::/29 2c0f:f248::/32\n\t}\n\ttls {\n\t\tresolvers 172.64.36.1, 172.64.36.2\n\t}\n}\n"}
{"level":"info","ts":1662747111.3143327,"logger":"docker-proxy","msg":"New Config JSON","json":"{\"logging\":{\"logs\":{\"default\":{\"writer\":{\"filename\":\"/var/log/caddy/caddy.log\",\"output\":\"file\"},\"level\":\"DEBUG\",\"exclude\":[\"http.log.access.log0\"]},\"log0\":{\"writer\":{\"filename\":\"/var/log/caddy/whoami.log\",\"output\":\"file\"},\"level\":\"DEBUG\",\"include\":[\"http.log.access.log0\"]}}},\"apps\":{\"http\":{\"servers\":{\"srv0\":{\"listen\":[\":443\"],\"routes\":[{\"match\":[{\"host\":[\"whoami.mysmarthome.network\"]}],\"handle\":[{\"handler\":\"subroute\",\"routes\":[{\"handle\":[{\"handler\":\"reverse_proxy\",\"trusted_proxies\":[\"10.0.0.0/24\",\"192.168.99.0/24\",\"192.168.200.0/24\",\"173.245.48.0/20\",\"103.21.244.0/22\",\"103.22.200.0/22\",\"103.31.4.0/22\",\"141.101.64.0/18\",\"108.162.192.0/18\",\"190.93.240.0/20\",\"188.114.96.0/20\",\"197.234.240.0/22\",\"198.41.128.0/17\",\"162.158.0.0/15\",\"104.16.0.0/13\",\"104.24.0.0/14\",\"172.64.0.0/13\",\"131.0.72.0/22\",\"2400:cb00::/32\",\"2606:4700::/32\",\"2803:f800::/32\",\"2405:b500::/32\",\"2405:8100::/32\",\"2a06:98c0::/29\",\"2c0f:f248::/32\"],\"upstreams\":[{\"dial\":\"whoami:8000\"}]}]}]}],\"terminal\":true}],\"logs\":{\"logger_names\":{\"whoami.mysmarthome.network\":\"log0\"}}}}},\"tls\":{\"automation\":{\"policies\":[{\"subjects\":[\"whoami.mysmarthome.network\"],\"issuers\":[{\"challenges\":{\"dns\":{\"resolvers\":[\"172.64.36.1,\",\"172.64.36.2\"]}},\"email\":\"{env.EMAIL}\",\"module\":\"acme\"},{\"challenges\":{\"dns\":{\"resolvers\":[\"172.64.36.1,\",\"172.64.36.2\"]}},\"email\":\"{env.EMAIL}\",\"module\":\"zerossl\"}]}]}}}}"}
{"level":"info","ts":1662747111.3143702,"logger":"docker-proxy","msg":"Sending configuration to","server":"localhost"}
{"level":"info","ts":1662747111.3147678,"logger":"admin.api","msg":"received request","method":"POST","host":"localhost:2019","uri":"/load","remote_ip":"127.0.0.1","remote_port":"38742","headers":{"Accept-Encoding":["gzip"],"Content-Length":["1395"],"Content-Type":["application/json"],"User-Agent":["Go-http-client/1.1"]}}
{"level":"info","ts":1662747111.3151286,"logger":"admin","msg":"admin endpoint started","address":"localhost:2019","enforce_origin":false,"origins":["//localhost:2019","//[::1]:2019","//127.0.0.1:2019"]}
{"level":"info","ts":1662747111.3153539,"logger":"http","msg":"server is listening only on the HTTPS port but has no TLS connection policies; adding one to enable TLS","server_name":"srv0","https_port":443}
{"level":"info","ts":1662747111.3154116,"logger":"http","msg":"enabling automatic HTTP->HTTPS redirects","server_name":"srv0"}
{"level":"info","ts":1662747111.3153915,"logger":"tls.cache.maintenance","msg":"started background certificate maintenance","cache":"0xc00018d880"}
{"level":"info","ts":1662747111.3157737,"logger":"http","msg":"enabling HTTP/3 listener","addr":":443"}
{"level":"debug","ts":1662747111.3158784,"logger":"http","msg":"starting server loop","address":"[::]:443","tls":true,"http3":true}
{"level":"info","ts":1662747111.3158872,"logger":"http.log","msg":"server running","name":"srv0","protocols":["h1","h2","h3"]}
{"level":"debug","ts":1662747111.31591,"logger":"http","msg":"starting server loop","address":"[::]:80","tls":false,"http3":false}
{"level":"info","ts":1662747111.3159168,"logger":"http.log","msg":"server running","name":"remaining_auto_https_redirects","protocols":["h1","h2","h3"]}
{"level":"info","ts":1662747111.3159215,"logger":"http","msg":"enabling automatic TLS certificate management","domains":["whoami.mysmarthome.network"]}
{"level":"debug","ts":1662747111.3161337,"logger":"tls","msg":"loading managed certificate","domain":"whoami.mysmarthome.network","expiration":1668643200,"issuer_key":"acme.zerossl.com-v2-DV90","storage":"FileStorage:/data/caddy"}
{"level":"debug","ts":1662747111.3162796,"logger":"tls.cache","msg":"added certificate to cache","subjects":["whoami.mysmarthome.network"],"expiration":1668643200,"managed":true,"issuer_key":"acme.zerossl.com-v2-DV90","hash":"e6de9710f5c28bb1c6cc61af0f16f550dc9b55d1bc98cffc3e7e77649f7c065e","cache_size":1,"cache_capacity":10000}
{"level":"debug","ts":1662747111.3163033,"logger":"events","msg":"event","name":"cached_managed_cert","id":"39e9015c-2a60-484c-8fde-b3fbb55435cc","origin":"tls","data":{"sans":["whoami.mysmarthome.network"]}}
{"level":"info","ts":1662747111.3163767,"logger":"tls","msg":"cleaning storage unit","description":"FileStorage:/data/caddy"}
{"level":"info","ts":1662747111.316401,"msg":"autosaved config (load with --resume flag)","file":"/config/caddy/autosave.json"}
{"level":"info","ts":1662747111.3167105,"logger":"admin.api","msg":"load complete"}
{"level":"info","ts":1662747111.316799,"logger":"docker-proxy","msg":"Successfully configured","server":"localhost"}
{"level":"info","ts":1662747111.3182921,"logger":"tls","msg":"finished cleaning storage units"}
{"level":"info","ts":1662747111.3186343,"logger":"admin","msg":"stopped previous server","address":"localhost:2019"}
{"level":"debug","ts":1662747111.5592558,"logger":"docker-proxy","msg":"Skipping default Caddyfile because no path is set"}
{"level":"debug","ts":1662747111.5610723,"logger":"docker-proxy","msg":"Swarm service","service":"cloudflared_cloudflared"}
{"level":"debug","ts":1662747111.5610874,"logger":"docker-proxy","msg":"Swarm service","service":"whoami_whoami"}
{"level":"debug","ts":1662747111.5611908,"logger":"docker-proxy","msg":"Swarm service","service":"caddy_caddy"}
{"level":"debug","ts":1662747111.9919753,"logger":"docker-proxy","msg":"Skipping default Caddyfile because no path is set"}
{"level":"debug","ts":1662747111.993846,"logger":"docker-proxy","msg":"Swarm service","service":"cloudflared_cloudflared"}
{"level":"debug","ts":1662747111.993859,"logger":"docker-proxy","msg":"Swarm service","service":"whoami_whoami"}
{"level":"debug","ts":1662747111.993932,"logger":"docker-proxy","msg":"Swarm service","service":"caddy_caddy"}
{"level":"debug","ts":1662747116.1168952,"logger":"events","msg":"event","name":"tls_get_certificate","id":"45009c2a-95fa-4b9f-b961-e9ca8aed848c","origin":"tls","data":{"client_hello":{"CipherSuites":[52393,52392,49195,49199,49196,49200,49161,49171,49162,49172,156,157,47,53,49170,10,4867,4865,4866],"ServerName":"caddy","SupportedCurves":[29,23,24,25],"SupportedPoints":"AA==","SignatureSchemes":[2052,1027,2055,2053,2054,1025,1281,1537,1283,1539,513,515],"SupportedProtos":null,"SupportedVersions":[772,771],"Conn":{}}}}
{"level":"debug","ts":1662747116.1169837,"logger":"tls.handshake","msg":"no matching certificates and no custom selection logic","identifier":"caddy"}
{"level":"debug","ts":1662747116.1169913,"logger":"tls.handshake","msg":"no matching certificates and no custom selection logic","identifier":"*"}
{"level":"debug","ts":1662747116.116996,"logger":"tls.handshake","msg":"all external certificate managers yielded no certificates and no errors","remote_ip":"10.0.1.165","remote_port":"52588","sni":"caddy"}
{"level":"debug","ts":1662747116.1170015,"logger":"tls.handshake","msg":"no certificate matching TLS ClientHello","remote_ip":"10.0.1.165","remote_port":"52588","server_name":"caddy","remote":"10.0.1.165:52588","identifier":"caddy","cipher_suites":[52393,52392,49195,49199,49196,49200,49161,49171,49162,49172,156,157,47,53,49170,10,4867,4865,4866],"cert_cache_fill":0.0001,"load_if_necessary":true,"obtain_if_necessary":true,"on_demand":false}
{"level":"debug","ts":1662747116.117071,"logger":"http.stdlib","msg":"http: TLS handshake error from 10.0.1.165:52588: no certificate available for 'caddy'"}
{"level":"debug","ts":1662747141.992616,"logger":"docker-proxy","msg":"Skipping default Caddyfile because no path is set"}
{"level":"debug","ts":1662747141.9949396,"logger":"docker-proxy","msg":"Swarm service","service":"cloudflared_cloudflared"}
{"level":"debug","ts":1662747141.9949546,"logger":"docker-proxy","msg":"Swarm service","service":"whoami_whoami"}
{"level":"debug","ts":1662747141.9950364,"logger":"docker-proxy","msg":"Swarm service","service":"caddy_caddy"}
{"level":"debug","ts":1662747171.999186,"logger":"docker-proxy","msg":"Skipping default Caddyfile because no path is set"}
{"level":"debug","ts":1662747172.000841,"logger":"docker-proxy","msg":"Swarm service","service":"cloudflared_cloudflared"}
{"level":"debug","ts":1662747172.0008545,"logger":"docker-proxy","msg":"Swarm service","service":"whoami_whoami"}
{"level":"debug","ts":1662747172.0009308,"logger":"docker-proxy","msg":"Swarm service","service":"caddy_caddy"}

The 10.0.1.165 address in the TLS handshake error is the proxy endpoint on my proxy overlay network.

It doesn’t look like Caddy can get out through the tunnel. Or at least it doesn’t seem to be able to get a cert when I launch a new clone of whoami with a different subdomain (whoami431):

https://zerobin.net/?0c456664ea902077#DjZrRd3+s/VYGs6P2cee7A7AmdUY28uCCRhmOHvRgQ0=

Googled for anything with Caddy + Docker Swarm + Cloudflare Tunnel and didn’t find much, but there are a couple of interesting bits that I need to work through.

One suggests that I may need to use host mode networking (https://socialgrep.com/search?query=post%3Ax0tkfz#comments). I’m not sure what limitations that might impose.

Another is a very interesting app that uses Caddy as a reverse proxy but doesn’t expose it directly (Affluences / open-source / docker-over-argo · GitLab). It opens up a tunnel for each service. I don’t know if this is the solution I want; I’m not sure about the idea of running a bunch of tunnels. But there may be some solution in that code somewhere.

Or I could go back to running open on ports 80/443 without the tunnel. That was working. But I don’t understand the underlying networking(?) issue, and I worry that if I don’t figure it out now, it might still pop up in a different place later.

Don’t know yet. I’m hoping to find a clean solution along the path I’m on now. I appreciate any insights and am extremely grateful for the time you’ve already given. I seem to be on a path less chosen with Swarm.

I think I had similar sorts of problems. Reverse proxy redirecting to internal url - Help - Caddy Community

The only way I could get it initially working was setting the Cloudflare HTTP header to the Caddy reverse proxy name. But after 24 hours or so it would start redirecting me to the internal proxy name, which might be fine for you; it was not what I wanted.

I ended up reverting to using the HTTP names from Cloudflare, as the tunnel is there to do its job, and it’s theoretically handling the SSL as well. The only bit I’m not getting is encryption from CF to my service, meaning technically they can read the traffic.

Yes, that is what Caddy will do if you don’t have any trusted_proxies. Caddy will always append the remote_ip to X-Forwarded-For, because that is the design of X-Forwarded-For; trusted_proxies just determines whether Caddy keeps what was originally supplied or not.

It helps to think of a request flow; at each point along the chain, a request is received and then another request is made.

The client makes a request to CF; CF receives that, puts it on hold, and makes a request through the tunnel to Caddy; Caddy receives that, puts it on hold, and makes a request to whoami.

At each step of the chain, there’s a new request, and each step needs to be configured to make the right kind of request to the next step.

That includes sending the correct SNI, which possibly wasn’t happening; to isolate/eliminate incorrect SNI as a possible issue, I recommended configuring the tunnel to connect directly to whoami instead of via Caddy, just to get a working tunnel as a proof of concept of sorts.

Narrowing down from:

Client → CF → Tunnel → Caddy → whoami

to

Client → CF → Tunnel → whoami

Just to be clear on terminology and request flow here: it’s not a matter of Caddy getting “out” through the CF tunnel, since all it needs to do is respond to requests from the tunnel. The tunnel acts as a client for Cloudflare, making requests to Caddy like any other incoming request would.

As for not getting a cert: there are three ways to solve challenges for LetsEncrypt:

  1. Respond to LetsEncrypt making a well-known request over HTTP
  2. Respond to LetsEncrypt during the TLS handshake (TLS-ALPN)
  3. Set a DNS record for LetsEncrypt to check

Number 2 can’t work, because LetsEncrypt can’t make a TLS connection to your Caddy server and never will be able to with a proxy in the way; this is because clients negotiate TLS with the proxy, which then (as described above, re: request flow) makes another request to Caddy. That means LetsEncrypt can’t negotiate TLS-ALPN challenges, so knock that one out.

Number 1 is normally our fallback, and normally it would work fine even behind Cloudflare. That’s because during normal “orange cloud” operation, Cloudflare connects to the origin server using the scheme the client connected with: if the client connects with HTTP, the request will go through to Caddy over HTTP. That allows LetsEncrypt and Caddy to originate brand new certificates using HTTP-01 challenges before Caddy can even allow HTTPS connections (which it won’t, normally, if it has no certificate to begin with).

However, a CF tunnel is a different story. Since you explicitly configure it for a single port, it won’t respect the scheme the client connected with; if it’s configured for HTTPS, it will always try HTTPS, which will always fail until Caddy has a certificate, but Caddy needs a certificate to allow the connection, which makes this a bit of a chicken-and-egg problem.

So with options 1 and 2 out, to originate a brand new certificate, you’re going to need DNS validation for a Caddy behind a CF tunnel. That said, you have it:
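
caddy.acme_dns: "cloudflare {env.CF_API_TOKEN}"

(from your Caddy deploy labels)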

So this is all a moot point. The actual problem remains:

{"level":"debug","ts":1662747116.116996,"logger":"tls.handshake","msg":"all external certificate managers yielded no certificates and no errors","remote_ip":"10.0.1.165","remote_port":"52588","sni":"caddy"}

You need the proxy to be correctly signalling SNI. This is what’s holding us up. You said you’ve got the tunnel working directly with whoami but not with Caddy. SNI is the problem.

If we can’t get the Swarm proxy to sort its SNI out, there’s one other option you could go for: configure Caddy for HTTP only, no HTTPS at all. This isn’t such a bad idea in this case, because Caddy won’t ever be accessible to the open internet; it’s purely receiving requests from the CF tunnel to proxy onwards to apps, and providing a layer of forward authentication to secure them via Authelia. If you use Caddy for HTTP only, we skip all possible SNI issues.
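
Roughly, that would look like this (a sketch, untested on my end): prefix the site address so Caddy serves plain HTTP, and point the tunnel at port 80.

caddy: http://whoami.mysmarthome.network
caddy.reverse_proxy: "{{upstreams 8000}}"

And in the tunnel ingress:

  - hostname: "*.mysmarthome.network"
    service: http://caddy:80

With no TLS at Caddy, there’s no ClientHello to match, so the sni: "caddy" failure can’t occur.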

3 Likes

For reference, here are some relevant parts of my setup.

A Caddy+cloudflared configuration on my Authelia host
  cloudflared:
    image: cloudflare/cloudflared:2022.6.3
    restart: unless-stopped
    command: tunnel run
    environment:
      NO_AUTOUPDATE: "true"
      TUNNEL_TOKEN: "[snip]"

  caddy:
    build: .
    restart: unless-stopped
    command: caddy docker-proxy -ingress-networks docker_default
    ports:
      - "80:80"
      - "443:443"
    volumes:
      - caddy-data:/data
      - /var/run/docker.sock:/var/run/docker.sock
    labels:
      caddy_0.acme_dns: "cloudflare [snip]"
      caddy_0.email: "athena@whitestrake.net"
      caddy_1: "(secure)"
      caddy_1.forward_auth: "authelia:9091"
      caddy_1.forward_auth.uri: "/api/verify?rd=https://auth.whitestrake.net"
      caddy_1.forward_auth.copy_headers: "Remote-User Remote-Groups Remote-Name Remote-Email"
Authelia
  authelia:
    image: authelia/authelia:latest
    depends_on:
      - db-authelia
      - redis-authelia
    volumes:
      - ./config:/config
    environment:
      AUTHELIA_JWT_SECRET_FILE: /config/secrets/jwt
      AUTHELIA_SESSION_SECRET_FILE: /config/secrets/session
      AUTHELIA_NOTIFIER_SMTP_PASSWORD_FILE: /config/secrets/smtp
      AUTHELIA_STORAGE_ENCRYPTION_KEY_FILE: /config/secrets/encryption
      AUTHELIA_STORAGE_POSTGRES_PASSWORD_FILE: /config/secrets/postgres
      AUTHELIA_SESSION_REDIS_PASSWORD_FILE: /config/secrets/redis
    labels:
      caddy: "auth.whitestrake.net"
      caddy.reverse_proxy: "{{upstreams 9091}}"
    restart: unless-stopped

Although there’s actually one more piece of configuration that I have purposefully removed from the above:

caddy.reverse_proxy.trusted_proxies: "0.0.0.0/0"

I’ve removed it from the config above to ensure people don’t copy-paste the whole thing without understanding what they’re doing. I have it because I have proxies on other servers, and I don’t know which IP addresses those other Caddies will use when proxying forward authentication to Authelia via this Caddy. It most likely will not be necessary for you, whoever is reading this.

Changedetection
  changedetection:
    image: ghcr.io/dgtlmoon/changedetection.io
    restart: unless-stopped
    environment:
      BASE_URL: https://cd.whitestrake.net
      PLAYWRIGHT_DRIVER_URL: ws://chrome:3000/?stealth=1&--disable-web-security=true
    volumes:
      - ./config:/datastore
    labels:
      caddy: "cd.whitestrake.net"
      caddy.import: "secure"
      caddy.reverse_proxy: "{{upstreams 5000}}"
      com.centurylinklabs.watchtower.enable: "true"

And then on another host:

Caddy
  caddy:
    build: .
    restart: unless-stopped
    command: caddy docker-proxy -ingress-networks docker_default
    volumes:
      - caddy-data:/data
      - /var/run/docker.sock:/var/run/docker.sock
    labels:
      caddy_0.acme_dns: "cloudflare [snip]"
      caddy_0.email: "omnius@whitestrake.net"
      caddy_1: "(secure)"
      caddy_1.forward_auth: "https://auth.whitestrake.net"
      caddy_1.forward_auth.uri: "/api/verify?rd=https://auth.whitestrake.net"
      caddy_1.forward_auth.copy_headers: "Remote-User Remote-Groups Remote-Name Remote-Email"
      caddy_1.forward_auth.header_up: "Host {upstream_hostport}"
Tdarr
  tdarr:
    image: ghcr.io/haveagitgat/tdarr:latest
    restart: unless-stopped
    mem_limit: 2.5G
    memswap_limit: 2.5G
    environment:
      TZ: "Australia/Brisbane"
      PUID: "1000"
      PGID: "1000"
      UMASK_SET: "002"
      serverIP: "0.0.0.0"
      serverPort: "8266"
      webUIPort: "8265"
      #internalNode: "true"
      #nodeID: "omnius"
    volumes:
      - ./tdarr/server:/app/server
      - ./tdarr/configs:/app/configs
      - ./tdarr/logs:/app/logs
      - /mnt/cadmus/media:/media
      - /mnt/downloads:/transcode
    devices:
      - /dev/dri/renderD128:/dev/dri/renderD128
    labels:
      caddy: "tdarr.whitestrake.net"
      caddy.import: "secure"
      caddy.reverse_proxy: "{{upstreams 8265}}"
      com.centurylinklabs.watchtower.monitor-only: "true"

My scenario is obviously different from yours, having exposed Authelia directly to the internet and opted for distributed forward authentication rather than using a Swarm overlay. My complication was solving the forward-auth Host header and having to blindly trust all proxies, because I want to be able to prop up a secured VPS without having to manage trusted IPs, and because I know for a fact I will not be using the network address as an authentication mechanism. You don’t have that issue, but it seems like SNI is the problem in your case.

3 Likes

Haven’t had a chance to do much further testing… I don’t get a lot of time to spend on side projects (but still crucial projects) like this. I still have a list of things I want to try, including diving deeper into Swarm networking and the related issues here. But I didn’t want to leave this hanging without saying thanks for the posts above: they’ve given me a lot to work with, and I am working through it. I’ll post back when I work through it some more and get it going… or exhaust my options.

2 Likes

You’re welcome to drop by in the Authelia Matrix / Discord for help too.

4 Likes

This topic was automatically closed after 30 days. New replies are no longer allowed.