Bind Caddy to IPv6 only, via docker-compose

1. The problem I’m having:

I’m new to Caddy and IPv6, so…

I have a dual-stack machine. It is currently running a web service which is bound to its only IPv4 address.
I’m looking to bring up a second web server (uptime-kuma) which binds to the IPv6 address.

I have the following docker compose file:

version: '3'
networks:
  default:  
    name: 'proxy_network'
services:
  uptime-kuma:
    image: louislam/uptime-kuma:1
    restart: unless-stopped
    volumes:  
      - /opt/kuma-monitor/kumadata:/app/data
    ports:
      - 2052:3001
    labels:   
      caddy: status.onepub.dev
      caddy.reverse_proxy: "* {{upstreams 2052}}"
  caddy:
    image: "lucaslorentz/caddy-docker-proxy:ci-alpine"
    ports:    
      - " [::]:80:80"
      - " [::]:443:443"
        #      - "80:80" 
        #- "443:443"
    volumes:  
      - /var/run/docker.sock:/var/run/docker.sock:ro
      - /opt/kuma-monitor/cadydata/:/data
    restart: unless-stopped
    environment:
      - CADDY_INGRESS_NETWORKS=proxy_network

The port mapping was suggested by ChatGPT, but it doesn’t work:

services.caddy.ports contains an invalid type, it should be a number, or an object
services.caddy.ports contains an invalid type, it should be a number, or an object

So what do I need in my docker-compose file, and do I need to do any special Caddy configuration outside the docker-compose file?
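
Edit: for reference, the long-form Compose port syntax is host_ip:host_port:container_port. Here is a sketch of what I was aiming for, assuming the stray space after the opening quote is part of what trips the parser:

    ports:
      # hypothetical corrected mapping - publish only on the host's IPv6 wildcard
      - "[::]:80:80"
      - "[::]:443:443"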

2. Error messages and/or full log output:

 docker-compose up
ERROR: The Compose file './docker-compose.yaml' is invalid because:
services.caddy.ports contains an invalid type, it should be a number, or an object
services.caddy.ports contains an invalid type, it should be a number, or an object

3. Caddy version:

I’m using Docker Compose. The suggested command docker-compose exec caddy caddy version outputs a blank line.

The docker-compose file references the image ‘lucaslorentz/caddy-docker-proxy:ci-alpine’.
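
One way to get a version out of an image whose container never starts (a sketch relying on the /bin/caddy entrypoint visible in the docker inspect output below) is to override the command:

docker run --rm lucaslorentz/caddy-docker-proxy:ci-alpine version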

4. How I installed and ran Caddy:

Using the above noted docker-compose file.

a. System environment:

lsb_release -a
No LSB modules are available.
Distributor ID:	Ubuntu
Description:	Ubuntu 22.04.4 LTS
Release:	22.04
Codename:	jammy

docker --version
Docker version 24.0.5, build 24.0.5-0ubuntu1~22.04.1

docker-compose --version
docker-compose version 1.29.2, build unknown

b. Command:

docker-compose up

c. Service/unit/compose file:

version: '3'
networks:
  default:  
    name: 'proxy_network'
services:
  uptime-kuma:
    image: louislam/uptime-kuma:1
    restart: unless-stopped
    volumes:  
      - /opt/kuma-monitor/kumadata:/app/data
    ports:
      - 2052:3001
    labels:   
      caddy: status.onepub.dev
      caddy.reverse_proxy: "* {{upstreams 2052}}"
  caddy:
    image: "lucaslorentz/caddy-docker-proxy:ci-alpine"
    ports:    
      - " [::]:80:80"
      - " [::]:443:443"
        #      - "80:80" 
        #- "443:443"
    volumes:  
      - /var/run/docker.sock:/var/run/docker.sock:ro
      - /opt/kuma-monitor/cadydata/:/data
    restart: unless-stopped
    environment:
      - CADDY_INGRESS_NETWORKS=proxy_network

d. My complete Caddy config:

I’m using an image (as noted above) with no changes to the config.

Here is the output from docker inspect:

docker inspect kuma-monitor_caddy_1 
[
    {
        "Id": "280bab62b42df49a406601e58c15c5e4a26fb417ec42922f31d21f6cafb10379",
        "Created": "2024-03-04T05:11:50.415384304Z",
        "Path": "/bin/caddy",
        "Args": [
            "docker-proxy"
        ],
        "State": {
            "Status": "created",
            "Running": false,
            "Paused": false,
            "Restarting": false,
            "OOMKilled": false,
            "Dead": false,
            "Pid": 0,
            "ExitCode": 128,
            "Error": "driver failed programming external connectivity on endpoint kuma-monitor_caddy_1 (b1a9d04753823021c4ade09d43187b1f8b3f6d360e19142421f545c81ef27b2b): Error starting userland proxy: listen tcp4 0.0.0.0:443: bind: address already in use",
            "StartedAt": "0001-01-01T00:00:00Z",
            "FinishedAt": "0001-01-01T00:00:00Z"
        },
        "Image": "sha256:4a210a0f9d3e579c4c21558bf3f7eb6a4bd9559f06b63b92610ac8c24fb21386",
        "ResolvConfPath": "/var/lib/docker/containers/280bab62b42df49a406601e58c15c5e4a26fb417ec42922f31d21f6cafb10379/resolv.conf",
        "HostnamePath": "",
        "HostsPath": "/var/lib/docker/containers/280bab62b42df49a406601e58c15c5e4a26fb417ec42922f31d21f6cafb10379/hosts",
        "LogPath": "",
        "Name": "/kuma-monitor_caddy_1",
        "RestartCount": 0,
        "Driver": "overlay2",
        "Platform": "linux",
        "MountLabel": "",
        "ProcessLabel": "",
        "AppArmorProfile": "",
        "ExecIDs": null,
        "HostConfig": {
            "Binds": [
                "/var/run/docker.sock:/var/run/docker.sock:ro",
                "/opt/kuma-monitor/cadydata:/data:rw"
            ],
            "ContainerIDFile": "",
            "LogConfig": {
                "Type": "json-file",
                "Config": {}
            },
            "NetworkMode": "proxy_network",
            "PortBindings": {
                "443/tcp": [
                    {
                        "HostIp": "",
                        "HostPort": "443"
                    }
                ],
                "80/tcp": [
                    {
                        "HostIp": "",
                        "HostPort": "80"
                    }
                ]
            },
            "RestartPolicy": {
                "Name": "unless-stopped",
                "MaximumRetryCount": 0
            },
            "AutoRemove": false,
            "VolumeDriver": "",
            "VolumesFrom": [],
            "ConsoleSize": [
                0,
                0
            ],
            "CapAdd": null,
            "CapDrop": null,
            "CgroupnsMode": "private",
            "Dns": [],
            "DnsOptions": [],
            "DnsSearch": [],
            "ExtraHosts": null,
            "GroupAdd": null,
            "IpcMode": "private",
            "Cgroup": "",
            "Links": null,
            "OomScoreAdj": 0,
            "PidMode": "",
            "Privileged": false,
            "PublishAllPorts": false,
            "ReadonlyRootfs": false,
            "SecurityOpt": null,
            "UTSMode": "",
            "UsernsMode": "",
            "ShmSize": 67108864,
            "Runtime": "runc",
            "Isolation": "",
            "CpuShares": 0,
            "Memory": 0,
            "NanoCpus": 0,
            "CgroupParent": "",
            "BlkioWeight": 0,
            "BlkioWeightDevice": null,
            "BlkioDeviceReadBps": null,
            "BlkioDeviceWriteBps": null,
            "BlkioDeviceReadIOps": null,
            "BlkioDeviceWriteIOps": null,
            "CpuPeriod": 0,
            "CpuQuota": 0,
            "CpuRealtimePeriod": 0,
            "CpuRealtimeRuntime": 0,
            "CpusetCpus": "",
            "CpusetMems": "",
            "Devices": null,
            "DeviceCgroupRules": null,
            "DeviceRequests": null,
            "MemoryReservation": 0,
            "MemorySwap": 0,
            "MemorySwappiness": null,
            "OomKillDisable": null,
            "PidsLimit": null,
            "Ulimits": null,
            "CpuCount": 0,
            "CpuPercent": 0,
            "IOMaximumIOps": 0,
            "IOMaximumBandwidth": 0,
            "MaskedPaths": [
                "/proc/asound",
                "/proc/acpi",
                "/proc/kcore",
                "/proc/keys",
                "/proc/latency_stats",
                "/proc/timer_list",
                "/proc/timer_stats",
                "/proc/sched_debug",
                "/proc/scsi",
                "/sys/firmware"
            ],
            "ReadonlyPaths": [
                "/proc/bus",
                "/proc/fs",
                "/proc/irq",
                "/proc/sys",
                "/proc/sysrq-trigger"
            ]
        },
        "GraphDriver": {
            "Data": {
                "LowerDir": "/var/lib/docker/overlay2/1fa46d5554f26a3bee736eefba34b501637555760588cde52748bc99bb0e523b-init/diff:/var/lib/docker/overlay2/e121c47518595472abe13eacd93392d3c92477ebc2b103bdce9aea74c356bd4a/diff:/var/lib/docker/overlay2/e720a151e44c7bfba51b9b04f2c52029b5b981e2cb785459d0e6d705d33c4b9b/diff:/var/lib/docker/overlay2/158d17841fa3b05713ade9d89047c89608ead8652e5707005142f79d3d74afea/diff",
                "MergedDir": "/var/lib/docker/overlay2/1fa46d5554f26a3bee736eefba34b501637555760588cde52748bc99bb0e523b/merged",
                "UpperDir": "/var/lib/docker/overlay2/1fa46d5554f26a3bee736eefba34b501637555760588cde52748bc99bb0e523b/diff",
                "WorkDir": "/var/lib/docker/overlay2/1fa46d5554f26a3bee736eefba34b501637555760588cde52748bc99bb0e523b/work"
            },
            "Name": "overlay2"
        },
        "Mounts": [
            {
                "Type": "bind",
                "Source": "/opt/kuma-monitor/cadydata",
                "Destination": "/data",
                "Mode": "rw",
                "RW": true,
                "Propagation": "rprivate"
            },
            {
                "Type": "bind",
                "Source": "/var/run/docker.sock",
                "Destination": "/var/run/docker.sock",
                "Mode": "ro",
                "RW": false,
                "Propagation": "rprivate"
            }
        ],
        "Config": {
            "Hostname": "280bab62b42d",
            "Domainname": "",
            "User": "",
            "AttachStdin": false,
            "AttachStdout": false,
            "AttachStderr": false,
            "ExposedPorts": {
                "2019/tcp": {},
                "443/tcp": {},
                "80/tcp": {}
            },
            "Tty": false,
            "OpenStdin": false,
            "StdinOnce": false,
            "Env": [
                "CADDY_INGRESS_NETWORKS=proxy_network",
                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin",
                "XDG_CONFIG_HOME=/config",
                "XDG_DATA_HOME=/data"
            ],
            "Cmd": [
                "docker-proxy"
            ],
            "Image": "lucaslorentz/caddy-docker-proxy:ci-alpine",
            "Volumes": {
                "/data": {},
                "/var/run/docker.sock": {}
            },
            "WorkingDir": "",
            "Entrypoint": [
                "/bin/caddy"
            ],
            "OnBuild": null,
            "Labels": {
                "com.docker.compose.config-hash": "3a8f635a31e4b58e232d48cb384434bd5c800bfbcb2354060c184f7758613218",
                "com.docker.compose.container-number": "1",
                "com.docker.compose.oneoff": "False",
                "com.docker.compose.project": "kuma-monitor",
                "com.docker.compose.project.config_files": "docker-compose.yaml",
                "com.docker.compose.project.working_dir": "/opt/kuma-monitor",
                "com.docker.compose.service": "caddy",
                "com.docker.compose.version": "1.29.2",
                "maintainer": "Lucas Lorentz <lucaslorentzlara@hotmail.com>"
            }
        },
        "NetworkSettings": {
            "Bridge": "",
            "SandboxID": "d6278cff3dd3b3484711cb70769fbf16b4012dce8bca781f4af0412f0f440cad",
            "HairpinMode": false,
            "LinkLocalIPv6Address": "",
            "LinkLocalIPv6PrefixLen": 0,
            "Ports": {},
            "SandboxKey": "/var/run/docker/netns/d6278cff3dd3",
            "SecondaryIPAddresses": null,
            "SecondaryIPv6Addresses": null,
            "EndpointID": "",
            "Gateway": "",
            "GlobalIPv6Address": "",
            "GlobalIPv6PrefixLen": 0,
            "IPAddress": "",
            "IPPrefixLen": 0,
            "IPv6Gateway": "",
            "MacAddress": "",
            "Networks": {
                "proxy_network": {
                    "IPAMConfig": null,
                    "Links": null,
                    "Aliases": [
                        "caddy",
                        "280bab62b42d"
                    ],
                    "NetworkID": "ad44255a0712950d2b5ea4e85bc9d6dcc74603308d672a0112bcd8b1ede7a0f6",
                    "EndpointID": "",
                    "Gateway": "",
                    "IPAddress": "",
                    "IPPrefixLen": 0,
                    "IPv6Gateway": "",
                    "GlobalIPv6Address": "",
                    "GlobalIPv6PrefixLen": 0,
                    "MacAddress": "",
                    "DriverOpts": null
                }
            }
        }
    }
]

5. Links to relevant resources:

So I’ve found something that looks a little better than what ChatGPT offered for the docker-compose.yaml:

version: '3'
networks:
  default:
    name: 'proxy_network'
services:
  uptime-kuma:
    image: louislam/uptime-kuma:1
    restart: unless-stopped
    volumes:
      - /opt/kuma-monitor/kumadata:/app/data
    ports:
      - 2052:3001
    labels:
      caddy: status.onepub.dev
      caddy.reverse_proxy: "* {{upstreams 2052}}"
  caddy:
    image: "lucaslorentz/caddy-docker-proxy:ci-alpine"
    ports:
      - "80:80"
      - "443:443"
    networks:
      - ip6net
    volumes:
      - /var/run/docker.sock:/var/run/docker.sock:ro
      - /opt/kuma-monitor/cadydata/:/data
    restart: unless-stopped
    environment:
      - CADDY_INGRESS_NETWORKS=proxy_network

networks:
   ip6net:
     enable_ipv6: true
     ipam:
       config:
         #- subnet: 2401:da80:1000:2::/64        
         - subnet: 2600:1900:4180:bfa7:0:0:0:0/64

I now get the following error:

 docker-compose up
Creating network "kuma-monitor_default" with the default driver
Creating network "kuma-monitor_ip6net" with the default driver
Recreating kuma-monitor_caddy_1 ... 
Recreating kuma-monitor_caddy_1       ... error
WARNING: Host is already in use by another container

ERROR: for kuma-monitor_caddy_1  Cannot start service caddy: driver failed programming external connectivity on endpoint kuma-monitor_caddy_1 (4ba40Recreating kuma-monitor_uptime-kuma_1 ... done

ERROR: for caddy  Cannot start service caddy: driver failed programming external connectivity on endpoint kuma-monitor_caddy_1 (4ba406b811375dea6cd837e76c6105840f991130627e9cd597b66458128c6463): Error starting userland proxy: listen tcp4 0.0.0.0:443: bind: address already in use
ERROR: Encountered errors while bringing up the project.

So it’s clearly still trying to bind to IPv4 addresses. I guess the docker-compose file doesn’t actually give it any instruction not to bind to the IPv4 address.

In Caddy, you need to use the bind directive to change the bind address. You can use [::], I think, to bind to only IPv6.

You can use the default_bind global option to set it for all sites in your config at once.
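
A minimal sketch (example.com is just a placeholder):

{
	default_bind [::]
}

example.com {
	# per-site override, same effect as default_bind here
	bind [::]
}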

Thanks for the response.

So I added an additional mount for /etc/caddy/Caddyfile:

 - /opt/kuma-monitor/Caddyfile:/etc/caddy/Caddyfile

I then tried creating a Caddyfile and starting Docker.

Caddyfile content:

{
  default_bind [::]
}

docker-compose volume mapping:

 caddy:
    image: "lucaslorentz/caddy-docker-proxy:ci-alpine"
    ports:    
      - "80:80" 
      - "443:443"
      - "443:443/udp" # I assume for quic
    networks:
      - ip6net
    volumes:  
      - /var/run/docker.sock:/var/run/docker.sock:ro
      - /opt/kuma-monitor/caddydata/:/data
      - /opt/kuma-monitor/caddyconfig/:/config
      - /opt/kuma-monitor/Caddyfile:/etc/caddy/Caddyfile

When I tried to start Docker, I got the following error:

docker-compose up
Removing kuma-monitor_caddy_1
kuma-monitor_uptime-kuma_1 is up-to-date
Recreating 280bab62b42d_kuma-monitor_caddy_1 ... 
Recreating 280bab62b42d_kuma-monitor_caddy_1 ... error

ERROR: for 280bab62b42d_kuma-monitor_caddy_1  Cannot start service caddy: driver failed programming external connectivity on endpoint kuma-monitor_caddy_1 (544506352e2df81561b29eccc0e2c3e25506f568bd23440df56b92aa79e257bc): Error starting userland proxy: listen tcp4 0.0.0.0:443: bind: address already in use

ERROR: for caddy  Cannot start service caddy: driver failed programming external connectivity on endpoint kuma-monitor_caddy_1 (544506352e2df81561b29eccc0e2c3e25506f568bd23440df56b92aa79e257bc): Error starting userland proxy: listen tcp4 0.0.0.0:443: bind: address already in use
ERROR: Encountered errors while bringing up the project.

I assume this means the content of my file is wrong, or Docker isn’t reading the Caddyfile?

The Caddyfile directory is owned by the ‘docker’ group.


-rw-rw-r-- 1 yyyyyy docker       24 Mar  5 09:16 Caddyfile
drwxrwxr-x 2 yyyyyy docker     4096 Mar  5 09:20 caddyconfig/
drwxrwxr-x 2 yyyyyy docker     4096 Mar  4 16:03 caddydata/
-rw-rw-r-- 1 yyyyyy yyyyyy     995 Mar  5 09:21 docker-compose.yaml

The mapped data directory is also empty, but maybe Caddy doesn’t write to it until it gets further into the startup process.

See the caddy-docker-proxy repo on GitHub (https://github.com/lucaslorentz/caddy-docker-proxy); you can set global options by adding a label to your Caddy container itself.

The CDP docs also explain how you can set up a base Caddyfile if you prefer; you need to set an env var for it to get loaded.
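
Roughly like this - a sketch from memory, so verify the label and env var names against the CDP docs:

  caddy:
    image: "lucaslorentz/caddy-docker-proxy:ci-alpine"
    labels:
      # global options can be set via labels on the Caddy container itself
      caddy.default_bind: "[::]"
    environment:
      # or point CDP at a base Caddyfile that it merges with the generated one
      - CADDY_DOCKER_CADDYFILE_PATH=/etc/caddy/Caddyfile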

So I now have Caddy loading my Caddyfile.

The problem is that it still appears to be binding to IPv4.

netstat -tnap
tcp        0      0 0.0.0.0:80              0.0.0.0:*               LISTEN      -                   
tcp        0      0 0.0.0.0:443             0.0.0.0:*               LISTEN      -                   
tcp6       0      0 :::80                   :::*                    LISTEN      -                   
tcp6       0      0 :::443                  :::*                    LISTEN      -                   

When I shut down Caddy I see:

netstat -tnap

Note: I removed lines showing other ports (22, 53).

My current Caddyfile is:

{
	default_bind [::]
}

status.onepub.dev {
	bind [::]
	respond "hello, world"
}

logs:

docker-compose up
WARNING: Found orphan containers (kuma) for this project. If you removed or renamed this service in your compose file, you can run this command with the --remove-orphans flag to clean it up.
Recreating caddy ... done
Attaching to caddy
caddy    | {"level":"info","ts":1709771204.8538752,"msg":"using provided configuration","config_file":"/etc/caddy/Caddyfile","config_adapter":""}
caddy    | {"level":"info","ts":1709771204.8561149,"logger":"admin","msg":"admin endpoint started","address":"localhost:2019","enforce_origin":false,"origins":["//localhost:2019","//[::1]:2019","//127.0.0.1:2019"]}
caddy    | {"level":"info","ts":1709771204.8564017,"logger":"http.auto_https","msg":"server is listening only on the HTTPS port but has no TLS connection policies; adding one to enable TLS","server_name":"srv0","https_port":443}
caddy    | {"level":"info","ts":1709771204.8564548,"logger":"http.auto_https","msg":"enabling automatic HTTP->HTTPS redirects","server_name":"srv0"}
caddy    | {"level":"info","ts":1709771204.8564532,"logger":"tls.cache.maintenance","msg":"started background certificate maintenance","cache":"0xc00016fb00"}
caddy    | {"level":"info","ts":1709771204.8567996,"logger":"http","msg":"enabling HTTP/3 listener","addr":"127.0.0.2:443"}
caddy    | {"level":"info","ts":1709771204.8569274,"msg":"failed to sufficiently increase receive buffer size (was: 208 kiB, wanted: 2048 kiB, got: 416 kiB). See https://github.com/quic-go/quic-go/wiki/UDP-Buffer-Sizes for details."}
caddy    | {"level":"info","ts":1709771204.857245,"logger":"http","msg":"enabling HTTP/3 listener","addr":"[::]:443"}
caddy    | {"level":"info","ts":1709771204.8574195,"logger":"http.log","msg":"server running","name":"srv0","protocols":["h1","h2","h3"]}
caddy    | {"level":"info","ts":1709771204.8575099,"logger":"http.log","msg":"server running","name":"remaining_auto_https_redirects","protocols":["h1","h2","h3"]}
caddy    | {"level":"info","ts":1709771204.857529,"logger":"http","msg":"enabling automatic TLS certificate management","domains":["status.onepub.dev"]}
caddy    | {"level":"info","ts":1709771204.8585439,"msg":"autosaved config (load with --resume flag)","file":"/config/caddy/autosave.json"}
caddy    | {"level":"info","ts":1709771204.8585625,"msg":"serving initial configuration"}
caddy    | {"level":"warn","ts":1709771204.860624,"logger":"tls","msg":"storage cleaning happened too recently; skipping for now","storage":"FileStorage:/data/caddy","instance":"295fd264-38cf-4f7b-a3c5-3ca6486a7163","try_again":1709857604.8606222,"try_again_in":86399.999999443}
caddy    | {"level":"info","ts":1709771204.8606982,"logger":"tls","msg":"finished cleaning storage units"}

So I tried to trick Caddy into moving to 127.0.0.2:

{
	default_bind [::]
}

status.onepub.dev {
	bind 127.0.0.2 [::]
	respond "hello, world"
}

This resulted in the following log message:

caddy    | {"level":"info","ts":1709771516.758325,"logger":"http","msg":"enabling HTTP/3 listener","addr":"127.0.0.2:443"}

With the following ports being listened on:

netstat -unap
udp        0      0 0.0.0.0:443             0.0.0.0:*                           -                   
udp6       0      0 :::443                  :::*                                -                   
udp6       0      0 fe80::4001:aff:fe00:546 :::*                                -                   
udp6       0      0 fe80::4001:aff:fe00:546 :::*                                -       

and for tcp

netstat -tnap
tcp        0      0 0.0.0.0:2019            0.0.0.0:*               LISTEN      -                   
tcp        0      0 0.0.0.0:80              0.0.0.0:*               LISTEN      -                   
tcp        0      0 0.0.0.0:443             0.0.0.0:*               LISTEN      -                   
tcp6       0      0 :::2019                 :::*                    LISTEN      -                   
tcp6       0      0 :::22                   :::*                    LISTEN      -                   
tcp6       0      0 :::80                   :::*                    LISTEN      -                   
tcp6       0      0 :::443                  :::*                    LISTEN      - 

(Note: I’ve now added a Docker port mapping for 2019.)

So even though the log says it will be listening on udp 127.0.0.2:443, we are still seeing it listen on 0.0.0.0:443.

Note: I also tried this without the default_bind, just in case that was causing Caddy to bind to the wildcard address by default - but alas, no difference.

I’m pretty sure netstat lies, it doesn’t accurately separate IPv4 from IPv6, it bundles them together and whatnot.

Try to actually connect to Caddy, see what happens. Don’t trust netstat as being authoritative.
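
Something like this (using the site from your config):

curl -4 -v https://status.onepub.dev   # force IPv4; should fail if the IPv6-only bind worked
curl -6 -v https://status.onepub.dev   # force IPv6; should return the "hello, world" response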

I’m pretty sure netstat lies, it doesn’t accurately separate IPv4 from IPv6, it bundles them together and whatnot.

I’m pretty dubious about this statement. The Debian netstat bug list doesn’t show anything, and my test results agree with netstat - if I have another app listening on a 127.x address then Caddy won’t start.
netstat is a core networking tool; if this was actually an issue then there would be bug reports.

ERROR: for caddy  Cannot start service caddy: driver failed programming external connectivity on endpoint caddy (7ac0245fd763135324c75cf6381006145e94d5c8889a45bc0db9d7959cd0a367): Error starting userland proxy: listen tcp4 0.0.0.0:443: bind: address already in use
status.onepub.dev {
	bind [::]
	respond "hello, world"
}

If I’ve understood the bind directive correctly, Caddy shouldn’t be making any attempt to listen on IPv4.

This feels like it’s a bug in Caddy!

It interested me so I tried it out:

# Global Options
{
        storage file_system {
                root /usr/local/etc/caddy
        }

        default_bind 2003:a:1704:63aa:5054:ff:fef8:7236
}
# Sites
:80 {
        bind 2003:a:1704:63aa:5054:ff:fef8:7236
}
root@freebsd:/usr/local/etc/caddy # sockstat -6 -l
USER     COMMAND    PID   FD PROTO  LOCAL ADDRESS         FOREIGN ADDRESS      
root     caddy      23750 8  tcp6   2003:a:1704:63aa:5054:ff:fef8:7236:443 *:*
root     caddy      23750 9  udp6   2003:a:1704:63aa:5054:ff:fef8:7236:443 *:*
root     caddy      23750 11 tcp6   2003:a:1704:63aa:5054:ff:fef8:7236:80 *:*

Trying the same with :: gave me different results:

# Global Options
{
        storage file_system {
                root /usr/local/etc/caddy
        }

        default_bind ::
}
# Sites
:80 {
        bind ::
}
root@freebsd:/usr/local/etc/caddy # sockstat -6 -l
USER     COMMAND    PID   FD PROTO  LOCAL ADDRESS         FOREIGN ADDRESS      
root     caddy      58968 8  tcp46  *:443                 *:*
root     caddy      58968 9  udp46  *:443                 *:*

So it seems like Caddy will bind to only IPv6 if you specify a real GUA, but with :: it will bind to the “any” interface. It’s probably due to dual stack in the operating system’s network stack and the standard support for IPv4-mapped IPv6 addresses or something.
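
If I have it right, this is the IPV6_V6ONLY socket option at work: a socket bound to a specific address is purely IPv6, while one bound to the :: wildcard may also accept IPv4-mapped connections, and Go (which Caddy is built with) asks for a dual-stack socket on the wildcard. Linux exposes the system-wide default as a sysctl, though note that programs setting the option per-socket still override it:

# Linux: make :: binds IPv6-only by default (system-wide; per-socket
# IPV6_V6ONLY settings, like Go's dual-stack wildcard listeners, still win)
sysctl -w net.ipv6.bindv6only=1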
