Caddy binary vs Docker container

1. The problem I’m having:

Not really a problem. I understand that I can install Caddy using the static binary or run it in Docker, and I’m torn as to which method to use; are there any benefits to either approach? One reason I’m leaning away from Docker is the requirement to put all containers on one network if you want to reverse proxy them (is there any way to get rid of this requirement?)

Any help would be appreciated :slight_smile:

2. Error messages and/or full log output:

N/A


3. Caddy version:

N/A

4. How I installed and ran Caddy:

N/A

a. System environment:

Ubuntu 22.04, ARM64, 4 cores, 24 GB RAM

b. Command:

N/A

c. Service/unit/compose file:

N/A

d. My complete Caddy config:

N/A

5. Links to relevant resources:

I’m throwing this out here since nobody else has replied, yet.

If you want to reverse proxy things, connections to Caddy will always come in on port 80 or 443. There’s not really any way around that, as far as I’m aware.

If you want different services to connect to Caddy without being able to see or communicate with the others, that’s possible in Docker. You’d give every container its own network and make sure Caddy is also on each of those networks. You could then make Caddy the only container with an outward-facing network, so it’s the only one able to communicate with the host network.
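
A rough sketch of that layout in a single compose file (the service and network names here are just placeholders, not from any real setup):

services:
  caddy:
    image: caddy:latest
    ports:
      - "80:80"
      - "443:443"
    networks:
      - app1_net
      - app2_net    # Caddy joins every per-service network

  app1:
    image: example/app1    # hypothetical service
    networks:
      - app1_net           # app1 can reach Caddy, but not app2

  app2:
    image: example/app2    # hypothetical service
    networks:
      - app2_net

networks:
  app1_net:
  app2_net: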

Thanks for the reply. I have chosen the Docker container (Docker Compose) route because I want everything containerized rather than on bare metal. That aside, I’m intrigued by your comment about each container getting its own network, but also adding Caddy to each of those.

I have seen examples of other Docker Compose files where they have Caddy plus four or five other services all together. I’d like to avoid coupling unrelated services if possible.

As an example, I have Vaultwarden, Jellyfin, and BookStack all containerized and running on one host. They are currently unsecured, and I would like to use Caddy as a reverse proxy to access them while I’m physically away from my network.

Based on your reply, the approach would be to add a network to each of the existing services (Docker Compose files) and then also add Caddy to each of those networks? Would that look like the set of four services/files below?

caddy service

networks:
  jellyfin:
  bookstack:
  vaultwarden:
  
services:
  caddy:
    container_name: caddy
    image: caddy:2.8.4
    ports:
      - "80:80"
    volumes:
      - ./Caddyfile:/etc/caddy/Caddyfile
    networks:
      - jellyfin
      - bookstack
      - vaultwarden

jellyfin service

services:
  jellyfin:
    image: jellyfin/jellyfin:10.8.13-1-amd64
    container_name: jellyfin
    group_add:
      - "105"
    network_mode: 'host'
    volumes:
      - /path/to/media2:/media2:ro
    networks:
      - jellyfin

bookstack service

services:
  bookstack:
    image: linuxserver/bookstack
    container_name: bookstack
    volumes:
      - ./bookstack_app_data:/config
    ports:
      - 6875:80
    depends_on:
      - bookstack_db
    networks:
      - bookstack

vaultwarden service

services:
  vaultwarden:
    container_name: vaultwarden
    image: vaultwarden/server:latest
    volumes:
    ports:
      - 80:80
    networks:
      - vaultwarden

I possibly replied too soon. A little more digging and I found this project. Is this what you might have been referring to?

This is Caddy’s official Docker image. I’ll share my setup below, as I use some of the same services as you. I’ll also explain some concepts.

Any hash (#) comments out everything after it on that line, so whatever has been commented is not part of the parameters being set. Lots of these comments come from the image’s example config, since they expect you to change or remove things as needed. I’ve also put my own comments in to help you understand a bit more.
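
For example:

services:
  vaultwarden:
    restart: unless-stopped  # everything after the # on this line is ignored
#   ports:
#     - 8080:80              # these two lines are commented out, so no port is published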

Here’s my compose.yaml, annotated so you can understand what I’ve done.

services:

  vaultwarden:
    image: vaultwarden/server:latest
    container_name: vaultwarden
    restart: unless-stopped
    environment:
### environment: changes the configuration of any container that supports it. There are always default parameters set; this just sets them differently from the defaults.
      DOMAIN: "https://vaultwarden.4rknm.duckdns.org"
      YUBICO_CLIENT_ID: "REDACTED" ### I use a hardware key, YUBICO parameters are for them.
      YUBICO_SECRET_KEY: "REDACTED"
### Volumes mount the host machine's directories and/or files into the container. This is how data stays persistent when updating or removing containers. The part before the colon is the host path, and the part after the colon is where it will be mounted in the container.
    volumes:
      - /srv/vaultwarden/vw-data:/data
    networks:
      vaultwarden_net: {}


### I haven't set Jellyfin in reverse-proxy mode because I don't need this container running yet. It's uncommented for your understanding.
  jellyfin:
    image: jellyfin/jellyfin
    container_name: jellyfin
    ports:
      - 8096:8096/tcp # Http webUI
#      - 8920:8920 # for HTTPS
      - 7359:7359/udp # Allows clients to discover Jellyfin on the local network
      - 1900:1900/udp # Service discovery used by DLNA and clients
    networks:
      jellyfin_net:
    user: 1000:1000
    group_add:
      - "122" # Change this to match your "render" host group id and remove this comment
    volumes:
      - /srv/jellyfin:/config
      - /srv/jellyfin/cache:/cache
      - /dev/shm:/data/transcode # Offload transcoding to RAM if you have enough RAM
      - type: bind
        source: /srv/jellyfin/media
        target: /media
      - type: bind
        source: /srv/jellyfin/media2
        target: /media2
        read_only: true
    restart: 'unless-stopped'
    # Optional - alternative address used for autodiscovery
    environment:
      - TZ=America/Boise
      - JELLYFIN_PublishedServerUrl=192.168.1.60
    # Optional - may be necessary for docker healthcheck to pass if running in host network mode
#    extra_hosts:
#      - 'host.docker.internal:host-gateway'

  caddy:
    image: caddy:latest
    container_name: caddy
    restart: unless-stopped
    ports:
      - 80:80
      - 443:443
      - 443:443/udp # Needed for HTTP/3.
    volumes:
      - /srv/caddy/caddy:/usr/bin/caddy  # Your custom build of Caddy.
      - /srv/caddy/Caddyfile:/etc/caddy/Caddyfile:ro
      - /srv/caddy/caddy-config:/config
      - /srv/caddy/caddy-data:/data
    environment:
      EMAIL: "REDACTED@REDACTED.com"    # The email address to use for ACME registration.
      DUCKDNS_TOKEN: "REDACTED"     # Your Duck DNS token.
      LOG_FILE: "/data/access.log"
    networks:
      caddy_net: {}
      vaultwarden_net:
      jellyfin_net:


### This is the important part. This is what actually creates the Docker networks; listing them under a container's networks: means that container will use only the networks listed there.

networks:
  caddy_net:
    external: true   ### external: true tells Compose this network already exists (created outside this file, e.g. with docker network create) rather than being created by Compose. This is the outward-facing network in my setup; only the Caddy container should join it, and Caddy's published ports are what make it reachable from the host/home network.
### These networks have no explicit parameters set, which means Docker will handle all the subnetting, IPv4 addressing, and so on. Each one only needs to connect the containers that use it, so I don't need any specifics.
  vaultwarden_net:
  jellyfin_net:

If you have any questions, I can answer. It sounds to me like you’re just getting into Docker and basic networking, so it can be overwhelming without some simple guidance. There’s plenty of documentation for images and Docker’s functionality, but prerequisite knowledge is needed to understand a lot of it.

As for the Caddyfile, this is what it looks like. All of the directives in this Caddyfile are covered in more detail in the Caddy docs.

#LOCAL SERVICES
### The asterisk before the (sub)domain means it's a wildcard: any subdomain of 4rknm.duckdns.org is covered by the single wildcard ACME certificate.
*.4rknm.duckdns.org {
        log {
                level INFO
                output file {$LOG_FILE} {
                        roll_size 10MB
                        roll_keep 10
                }
        }

### This is what enables HTTPS for the services.
        tls {
              issuer acme {
                      dns duckdns {$DUCKDNS_TOKEN} ### The {$...} syntax pulls the value from the container's environment variable. It can make things simpler, but you can also paste the value in directly without the braces.
              }
        }

        @vaultwarden host vaultwarden.4rknm.duckdns.org
        handle @vaultwarden {
            reverse_proxy vaultwarden:80
        }

#        @jellyfin host jellyfin.4rknm.duckdns.org
#        handle @jellyfin {
#            reverse_proxy jellyfin:80
#        }


        # Fallback for otherwise unhandled domains
        handle {
                abort
        }
        # This setting may have compatibility issues with some browsers
        # (e.g., attachment downloading on Firefox). Try disabling this
        # if you encounter issues.
        encode zstd gzip
}

Thanks for the reply. I have three containers (including caddy) contained within two compose files. They are using a network created just for them. Each container and each compose file has the network attached.

  caddy:
    container_name: caddy
    ports:
      - "80:80"
      - "443:443"
      - "443:443/udp"
    volumes:
      #....
    networks:
      - my-caddy-network
   
networks:
  my-caddy-network:
    external: true

services:
  whoami:
    image: traefik/whoami
    container_name: app1
    command:
    # It tells whoami to start listening on 2001 instead of 80
    - --port=2001
    ports:
      - "2001:2001"
    networks:
      - my-caddy-network

  whoami2:
    image: traefik/whoami
    container_name: app2
    command:
    # It tells whoami to start listening on 2002 instead of 80
    - --port=2002
    ports:
      - "2002:2002"
    networks:
      - my-caddy-network

networks:
  my-caddy-network:
    external: true

Next I have this Caddyfile implementation. On the host machine I can curl each app, and they respond with the Caddy welcome page. However, from a browser on a laptop, they don’t resolve. I suspect I need something more in the Caddyfile, or need to do something based on the Caddy welcome page?

Caddyfile implementation:

http://app1.localhost {
    reverse_proxy app1:2001
}

http://app2.localhost {
    reverse_proxy app2:2002
}

[screenshot: caddy_works]

When resolving localhost, that will only work on the host machine. There are two other ways I know of to access it from outside the host:

  1. Use the host machine’s IP address instead of localhost.
  2. Use a DNS service to get a domain and access it using a domain instead of an IP address.
  • Option 1
    There are adjustments to be made in both the compose.yaml and the Caddyfile. The port must be 80 for whoami to work; unless there’s an option to change the HTTP listening port in whoami, you cannot change this. Since you want to access it from other computers on the local network, you have to change the site address from app*.localhost to ip.of.host.machine:host-port

It would look like this for your traefik/whoami compose.yaml:

services:
  whoami:
    image: traefik/whoami
    container_name: app1
    ports:
      - "2001:80"
    networks:
      caddy_net:

  whoami2:
    image: traefik/whoami
    container_name: app2
    command:
    ports:
      - "2002:80"
    networks:
      caddy_net:

You would then change your Caddyfile to:

192.168.1.60:2001 {
    reverse_proxy app1:80
}

192.168.1.60:2002 {
    reverse_proxy app2:80
}
  • Option 2
    I highly recommend HTTPS for security purposes. It prevents eavesdropping, and with Vaultwarden containing numerous items of sensitive information, I don’t see why you would want to leave it insecure. It’s pretty easy to get an HTTPS certificate; Caddy’s automatic certificate management (CertMagic) is part of the charm of using it as a web server.
    There are a few different ways of doing this, so I’ll leave it up to you (see the sketch below). If you need guidance, though, I can help.
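
As a rough sketch of option 2: if the domain’s A/AAAA records point at your host and ports 80/443 are forwarded to Caddy, a site block like this (the domain and upstream name are placeholders) gets a publicly trusted certificate automatically:

app1.yourdomain.com {
    reverse_proxy app1:80
}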

Also, that command override is redundant. I would delete the command configuration and leave only the “ports:” configuration.

If you need anything answered, I’d be happy to help.

Just chiming in here a bit as well. I use Caddy both ways, basically as outlined above by @TheRettom: if I’m using it to direct other services, I use Docker, but when I’m using just the reverse proxy functionality, I use the binary.

Right now I serve a monitoring suite using caddy where the backend uses grafana/prometheus/alertmanager/etc. This is all done using docker-compose and just connects together like a dream.

One thing to add - you can still use networks with caddy, though this might be flirting with “thinking outside the box” functionality.

Here’s how I only serve specific ports to my vpn address space:

:9090 {
    @vpn {
        remote_ip 10.0.0.0/20
    }
    reverse_proxy @vpn localhost:9090
}

I made the changes and option 1 works. However, I’d certainly prefer the HTTPS approach, as that was the original goal, and Vaultwarden won’t let you log in unless the connection is secure.

I have a domain, fancydomain.com, and I’d like to use that. I assume this is where I need to involve the A, AAAA, CNAME configuration? I also need my public IP, I believe.

Is your VPN a commercial offering, like Nord or Mullvad? Are you obtaining that 10.0.0.0/20 subnet from a server provided by the VPN service?

When proxying from Caddy inside a container to another container, you don’t need ports at all. That’s only needed if you want those other containers to be accessible directly from the outside world. If Caddy is proxying to them, Caddy makes the connection through the Docker network. ports: only binds the port on the host so that connections can come in from outside Docker.
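
For instance, a backend that only Caddy talks to doesn’t need a ports: mapping at all; a sketch reusing your my-caddy-network name:

services:
  whoami:
    image: traefik/whoami
    container_name: app1
    # no ports: needed; Caddy reaches it as app1:80 over the shared Docker network
    networks:
      - my-caddy-network

networks:
  my-caddy-network:
    external: true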

This would work, but you would need to make the request from the same machine your Docker/Caddy stack is on, because localhost only connects to 127.0.0.1.

You don’t need this, though; you can have two different containers listening on the same port, as long as they’re not trying to bind that port on the host with ports:. So you could proxy to app1:80 and app2:80.
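
As a sketch, keeping your earlier app*.localhost site addresses, the Caddyfile could then proxy both apps on their default port 80:

http://app1.localhost {
    reverse_proxy app1:80
}

http://app2.localhost {
    reverse_proxy app2:80
}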

This would actually enable HTTPS on those ports, using Caddy’s internal CA to issue a certificate for that IP. This means no client will trust connections with Caddy by default, which isn’t very useful. You can set up trust with these instructions: Keep Caddy Running — Caddy Documentation, but it’s annoying to manage. (Prefixing the address with http:// would turn off HTTPS; see Caddyfile Concepts — Caddy Documentation.)
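
For example, just as a sketch using the IP and port from the earlier suggestion:

# Without a scheme, 192.168.1.60:2001 would get a certificate from Caddy's
# internal CA; prefixing http:// serves plain HTTP on that port instead:
http://192.168.1.60:2001 {
    reverse_proxy app1:80
}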

A real domain should be used if you plan to expose this publicly (so you can connect from outside your home network), to get a publicly trusted certificate from Let’s Encrypt or ZeroSSL.


EDIT: I found through an old post that I have to create a new binary using xcaddy and Go in order to actually use DuckDNS? Then the new binary can be built into a Docker image and used to create a Docker container within my environment?

I believe I’m much further along; however, I’m having issues using DuckDNS. I found the repo for the provider and added it to my Caddyfile, but I’m getting this error when starting Caddy. I’ve also provided the relevant snippet from my Caddyfile.

adapting config using caddyfile: parsing caddyfile tokens for 'tls': getting module named 'dns.providers.duckdns': module not registered: dns.providers.duckdns, at Caddyfile:14
Error: caddy process exited with error: exit status 1

#LOCAL SERVICES
### The asterisk before the (sub)domain means it's a wildcard. Any (sub)subdomain will effectively use the ACME certificate for example.duckdns.org
example.duckdns.org {
        log {
                level INFO
                output file /etc/caddy/logs.txt {
                        roll_size 10MB
                        roll_keep 10
                }
        }

### This is what enables HTTPS for the services.
        tls {
                dns duckdns {$secret_token_from_duck_dns} {
                        # I've tried with this override_domain in play and commented out
                        # override_domain example.duckdns.org
                }
        }

       @app_1 host app_1.example.duckdns.org
       handle @app_1 {
           reverse_proxy app_1:80
       }
}

You can write a Dockerfile to make a custom build of Caddy. See Build from source — Caddy Documentation.


OK, perfect. I believe I have it working (I had to figure out some Ansible magic on my end), and now I have docker-compose standing up a container using the DuckDNS plugin (is there any way, other than not seeing errors, to confirm this module is present?).

Now I believe this last piece is where my knowledge falls short. My domain is in Cloudflare.

Is this where I need the A & AAAA records pointing to *.example? Am I supposed to be using DuckDNS with Cloudflare? Sorry if this is out of scope; I’ve really hijacked the OG thread here.

error after issuing caddy start:

tls.obtain could not get certificate from issuer {"identifier": "example.duckdns.org", "issuer": "acme-v02.api.letsencrypt.org-directory", "error": "[example.duckdns.org] solving challenges: waiting for solver certmagic.solverWrapper to be ready: checking DNS propagation of \"_acme-challenge.example.duckdns.org.\" (relative=_acme-challenge.example zone=duckdns.org. resolvers=[127.0.0.11:53]): querying authoritative nameservers: dial tcp 35.183.157.249:53: i/o timeout (order=https://acme-v02.api.letsencrypt.org/acme/order/1841195617/287995133477) (ca=https://acme-v02.api.letsencrypt.org/directory)"}

working Dockerfile

FROM caddy:2.8.4-builder AS builder

RUN xcaddy build \
    --with github.com/caddy-dns/duckdns

FROM caddy:2.8.4

COPY --from=builder /usr/bin/caddy /usr/bin/caddy

If you’re using a real domain in Cloudflare, you shouldn’t be using the DuckDNS plugin. I’m confused about what you’re doing. You only use the DuckDNS plugin if you’re using a DuckDNS domain.

Are you sure you actually need a wildcard certificate? If not, then you don’t need a DNS plugin.

Please elaborate on what you’re trying to achieve here.


Essentially, I want to use Caddy as a reverse proxy for as many Docker containers running on my host as possible; they’re just applications like Jellyfin, Immich, BookStack, etc. I want to access them while on and away from my home network, using HTTPS. I was hoping to take advantage of a wildcard certificate so URLs can resolve to jellyfin.fancydomain.com, immich.fancydomain.com, etc.

I see Cloudflare has a plugin as well. Perhaps that’s what I need instead?

Yes, you should be using the Cloudflare plugin. The DuckDNS module is only for subdomains registered on duckdns.org.

You don’t need a wildcard certificate for that. You can have multiple site blocks in your Caddyfile, and Caddy will issue certs for each of those subdomains. That’s not a problem.
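
For example, a rough sketch with one site block per service (container ports taken from the compose files earlier in this thread); Caddy will manage a separate certificate for each:

jellyfin.fancydomain.com {
    reverse_proxy jellyfin:8096
}

bookstack.fancydomain.com {
    reverse_proxy bookstack:80
}

vaultwarden.fancydomain.com {
    reverse_proxy vaultwarden:80
}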


This was really just intended as an example, but yeah, we choose our own subnet space. I just chose 10.0.0.0 because 10.0.0.0/8 is a reserved private subnet range.

OK, I switched to using only the Cloudflare plugin after the suggestion. I believe the proper certificates are being issued, but I’m having issues accessing the sites. I ended up following the steps from this blog to get a little further, but I’m guessing I’m having issues because of a lack of port (443 & 80) exposure.

Ultimately when browsing to my sites I see the CloudFlare error 521.

Is this just a matter of choosing to expose ports, not necessarily 443 or 80?

Logs from caddy start inside container:

2024/07/18 23:01:45.981	INFO	http.auto_https	enabling automatic HTTP->HTTPS redirects	{"server_name": "srv0"}
2024/07/18 23:01:45.981	INFO	tls.cache.maintenance	started background certificate maintenance	{"cache": "0xc000120f80"}
2024/07/18 23:01:45.981	INFO	http	enabling HTTP/3 listener	{"addr": ":443"}
2024/07/18 23:01:45.981	INFO	failed to sufficiently increase receive buffer size (was: 208 kiB, wanted: 7168 kiB, got: 416 kiB). See https://github.com/quic-go/quic-go/wiki/UDP-Buffer-Sizes for details.
2024/07/18 23:01:45.981	INFO	http.log	server running	{"name": "srv0", "protocols": ["h1", "h2", "h3"]}
2024/07/18 23:01:45.982	INFO	http.log	server running	{"name": "remaining_auto_https_redirects", "protocols": ["h1", "h2", "h3"]}
2024/07/18 23:01:45.982	INFO	http	enabling automatic TLS certificate management	{"domains": ["vaultwarden.example.com", "app1.example.com", "app2.example.com"]}
2024/07/18 23:01:45.983	INFO	autosaved config (load with --resume flag)	{"file": "/config/caddy/autosave.json"}
2024/07/18 23:01:45.983	INFO	serving initial configuration
Successfully started Caddy (pid=29) - Caddy is running in the background
2024/07/18 23:01:45.983	INFO	tls	storage cleaning happened too recently; skipping for now	{"storage": "FileStorage:/data/caddy", "instance": "0f01db59-910c-4e10-bd40-984e7b00e63c", "try_again": "2024/07/19 23:01:45.983", "try_again_in": 86399.999999548}
2024/07/18 23:01:45.983	INFO	tls	finished cleaning storage units

docker logs caddy output:

{"level":"info","ts":1721343664.5588794,"logger":"admin","msg":"admin endpoint started","address":"localhost:2019","enforce_origin":false,"origins":["//localhost:2019","//[::1]:2019","//127.0.0.1:2019"]}
{"level":"info","ts":1721343664.559036,"logger":"tls.cache.maintenance","msg":"started background certificate maintenance","cache":"0xc0000baf00"}
{"level":"info","ts":1721343664.559062,"logger":"http.auto_https","msg":"server is listening only on the HTTPS port but has no TLS connection policies; adding one to enable TLS","server_name":"srv0","https_port":443}
{"level":"info","ts":1721343664.5590956,"logger":"http.auto_https","msg":"enabling automatic HTTP->HTTPS redirects","server_name":"srv0"}
{"level":"info","ts":1721343664.559458,"logger":"http","msg":"enabling HTTP/3 listener","addr":":443"}
{"level":"info","ts":1721343664.5595055,"msg":"failed to sufficiently increase receive buffer size (was: 208 kiB, wanted: 7168 kiB, got: 416 kiB). See https://github.com/quic-go/quic-go/wiki/UDP-Buffer-Sizes for details."}
{"level":"info","ts":1721343664.5595841,"logger":"http.log","msg":"server running","name":"srv0","protocols":["h1","h2","h3"]}
{"level":"info","ts":1721343664.5596397,"logger":"http.log","msg":"server running","name":"remaining_auto_https_redirects","protocols":["h1","h2","h3"]}
{"level":"info","ts":1721343664.5596592,"logger":"http","msg":"enabling automatic TLS certificate management","domains":["vaultwarden.example.com","app1.example.com","app2.example.com"]}
{"level":"info","ts":1721343664.5642488,"msg":"autosaved config (load with --resume flag)","file":"/config/caddy/autosave.json"}
{"level":"info","ts":1721343664.5642936,"msg":"serving initial configuration"}
{"level":"info","ts":1721343664.5679274,"logger":"tls","msg":"cleaning storage unit","storage":"FileStorage:/data/caddy"}
{"level":"info","ts":1721343664.5702736,"logger":"tls","msg":"finished cleaning storage units"}
{"level":"info","ts":1721343664.7855651,"logger":"tls.issuance.acme.acme_client","msg":"got renewal info","names":["vaultwarden.example.com"],"window_start":1726349205,"window_end":1726522005,"selected_time":1726391728,"recheck_after":1721365264.7855573,"explanation_url":""}
{"level":"info","ts":1721343664.786398,"logger":"tls","msg":"updated ACME renewal information","identifiers":["vaultwarden.example.com"],"cert_hash":"dd43dd41d40606f14918f06c511da94a341d7ebdbcd269db6b5549fbf32db173","ari_unique_id":"nytfzzwhT50Et-0rLMTGcIvS1w0.AyEftawVLi1hg1ZBk-4ebQGt","cert_expiry":1729026405,"selected_time":1726465510,"next_update":1721365264.7855573,"explanation_url":""}
{"level":"info","ts":1721343664.8101387,"logger":"tls.issuance.acme.acme_client","msg":"got renewal info","names":["app1.example.com"],"window_start":1726348681.3333333,"window_end":1726521481.3333333,"selected_time":1726364505,"recheck_after":1721365264.8101313,"explanation_url":""}
{"level":"info","ts":1721343664.8109071,"logger":"tls","msg":"updated ACME renewal information","identifiers":["app1.example.com"],"cert_hash":"3182fa32b6d6c1691ee2ca0471bc03937087f36278f7aede682b8844b535e1de","ari_unique_id":"nytfzzwhT50Et-0rLMTGcIvS1w0.BEHzXl1HmNh58Onrnq_K3ZWn","cert_expiry":1729025881,"selected_time":1726493682,"next_update":1721365264.8101313,"explanation_url":""}
{"level":"info","ts":1721343664.8761413,"logger":"tls.issuance.acme.acme_client","msg":"got renewal info","names":["app2.example.com"],"window_start":1726349183.3333333,"window_end":1726521983.3333333,"selected_time":1726497360,"recheck_after":1721365264.8761332,"explanation_url":""}
{"level":"info","ts":1721343664.8770852,"logger":"tls","msg":"updated ACME renewal information","identifiers":["app2.example.com"],"cert_hash":"57736abed0517f6c010462d1cecd567507ad3b603821ed9597a09aaa647cbec3","ari_unique_id":"kydGmAOpUWiOmNbEQkjbI79YlNI.A_BskbzTlMOY_rkDLwU80atT","cert_expiry":1729026383,"selected_time":1726505572,"next_update":1721365264.8761332,"explanation_url":""}

updated Dockerfile

FROM caddy:2.8.4-builder AS builder

RUN xcaddy build \
    --with github.com/caddy-dns/cloudflare

FROM caddy:2.8.4

COPY --from=builder /usr/bin/caddy /usr/bin/caddy

updated docker-compose for caddy

services:
  caddy:
    build:
      context: ./
      dockerfile: Dockerfile
    image: caddy:cloudflare
    container_name: caddy
    environment:
      CLOUDFLARE_API_TOKEN: "secret-secret"
    networks:
      - my-caddy-backend
      - my-caddy-frontend
    ports:
      - "80:80"
      - "443:443"
      - "443:443/udp"
    volumes:
      - ./site:/srv
      - data:/data
      - config:/config
    restart: unless-stopped

volumes:
  data:
  config:

networks:
  my-caddy-backend:
  my-caddy-frontend:

updated docker-compose for sample app

  whoami2:
    image: traefik/whoami
    container_name: app2
    expose:
      - 80
    networks:
      - my-caddy-backend
    restart: unless-stopped

networks:
  my-caddy-backend:

updated Caddyfile - I’ve tried both reverse proxy implementations for vaultwarden and app2.

app1.example.com {
        tls {
                dns cloudflare {
                        api_token {env.CLOUDFLARE_API_TOKEN}
                }
        }
        reverse_proxy app2:80
}

vaultwarden.example.com {
        tls {
                dns cloudflare {
                        api_token {env.CLOUDFLARE_API_TOKEN}
                }
        }
        @vaultwarden host vaultwarden.example.com
        handle @vaultwarden {
            reverse_proxy vaultwarden:80
        }
}