Caddy's preference: ipv4 vs ipv6

1. The problem I’m having:

When there is a choice, how does caddy decide between using ipv4 and ipv6? For example, during a reverse_proxy myhost where myhost resolves to both an ipv4 and an ipv6 address.

Does it do something similar to web browsers and use the happy eyeballs algorithm? Or does it always default to ipv4, or what?

2. Error messages and/or full log output:

N/A

3. Caddy version:

v2.10.0

4. How I installed and ran Caddy:

a. System environment:

Fedora 41, x86_64: docker

d. My complete Caddy config:

An example config for this problem could be:

mysite.example.com {
    reverse_proxy myhost:1234
}

5. Links to relevant resources:

Discussed and answered here


What would happen if (as described in the post) the first ip address (ipv6) fails and the second ip address (ipv4) succeeds, but an hour later the second one also fails for whatever reason? Would it continue to retry only the second IP address (ipv4)? Or would it try the ipv6 again? Or would it just invoke getaddrinfo again and go through the same process (I assume it is not this one)?

With the simplistic config given in your top post, Caddy does not remember the last successful IP address for a given hostname. It’ll keep asking the system DNS resolver (unless customized with the resolver subdirective) for resolution. If you want Caddy to remember the last successful upstream, you’ll have to add multiple upstreams and set an appropriate lb_policy.
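For instance, a sketch of what that could look like (the addresses here are placeholders from the documentation ranges, and the values are illustrative, not recommendations): with the first policy, Caddy always prefers the earliest listed upstream that is currently considered healthy, and a failed upstream is skipped for fail_duration before being tried again.

```caddyfile
mysite.example.com {
	reverse_proxy 192.0.2.4:1234 [2001:db8::4]:1234 {
		# "first" always picks the first available upstream in the
		# order listed; an upstream that fails is marked unhealthy
		# for fail_duration and skipped until that window expires.
		lb_policy first
		fail_duration 10s
	}
}
```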

Load balancing isn’t exactly what I’m looking for in this case, though perhaps it could be fiddled with to fit the problem?

My goal here is basically to ensure that during a single run of caddy (without any knowledge of previous runs) a reverse proxy can ‘come back up’ ASAP even when network conditions are unstable. I am not sure if you meant ‘remember the last successful ip address’ to mean from a previous run, but in this case I am only interested in a single isolated invocation of the caddy CLI command.


For example, consider the following scenario:

  • myhost resolves to 1.2.3.4 and ab:cd:ef:: (one ipv4 and one ipv6), and my reverse proxy is configured with reverse_proxy myhost:5678
  • At time t=0, ipv4 internet access is down temporarily. As such, caddy’s attempt to connect to the first IP address fails, so it tries the second IP address, the ipv6; this works.
  • At time t=60, the ipv4 connection is restored. Presumably nothing changes, since the ipv6 connection is still working.
  • At time t=120, the ipv6 connection drops.

What happens next?

  1. Caddy re-requests DNS info from getaddrinfo, once more gets 1.2.3.4 and ab:cd:ef::, tries the first address (the ipv4) again, and that works. In this case, the reverse proxy only lost its connection for a moment when ipv6 connectivity dropped, as it was able to use the ipv4 that it originally failed to reach.
  2. Caddy knows the ipv6 address it was connected to a moment ago sometimes works, so it keeps retrying it. In this case, the reverse proxy loses its connection until ipv6 connectivity is restored, even if ipv4 connectivity is present (and myhost still resolves to 1.2.3.4).
  3. Something else?

I am inclined to say it is (1) based on your response, but if it is not, perhaps I could finagle this into a load balancing policy, though that seems like overkill for this use case. Trying to restart caddy just to force it to re-check all the addresses given by getaddrinfo seems worse, though. Also: thanks so much for your help!

You won’t need to restart Caddy. Unlike some other servers, Caddy doesn’t resolve the upstreams just once at boot time; the upstream address is resolved continuously.

Without thinking deeply about your scenario or experimenting, I think you may get closer to your goal of robustness by utilizing lb_try_duration and lb_try_interval, possibly together with lb_retries.
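Something like this, for example (the values are illustrative, not recommendations), keeps retrying failed dials for a while before Caddy gives up and returns an error to the client:

```caddyfile
mysite.example.com {
	reverse_proxy myhost:1234 {
		# keep retrying failed connection attempts for up to 30s,
		# pausing 250ms between attempts, with at most 10 retries
		lb_try_duration 30s
		lb_try_interval 250ms
		lb_retries 10
	}
}
```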

Something similar to what Simon describes in his TIL, except your goal isn’t to replace the backend but rather to have a longer retry duration:

https://til.simonwillison.net/caddy/pause-retry-traffic


Perfect. I made a toy example that curl’d from myhost every 2 seconds, and it seemed to switch instantly between ipv4 and ipv6 as each network individually went down while the other was up. If the delay is ever an issue, I’ll look into what you recommended to remedy that. Thanks 🙂
