What is the correct way to serve multiple server IPs and several DNS records for one application?

Hi, guys.
I've got a question about my JSON config.
My goal is to have one Caddy instance serve two DNS records and forward requests to either of the two IP addresses where my application is running.

I'm trying to use the JSON config shown below, but it seems to be incorrect: when I power off one node, I get 502 dial tcp i/o timeout errors in the Caddy logs. I expected Caddy to forward requests to the second IP, which is still alive.

I'm sure I'm doing something wrong or missing something; please correct me or push my thoughts in the right direction.

1. Caddy version (caddy version):

A custom Docker image based on Caddy 2.4.1 with the certmagic-s3 and format-encoder modules

2. How I run Caddy:

a. System environment:

Ubuntu 18.04.5 LTS

b. Command:

/usr/bin/docker run --rm --name='proxy' \
--mount type=bind,source=/etc/caddy_config.json,destination=/caddy_config.json \
--publish 80:80 \
--publish 443:443 \
artemius/caddy:2.4.1 caddy run --config /caddy_config.json

c. Dockerfile:

FROM caddy:2.4.1-builder AS builder
RUN xcaddy build --output ./caddy \
    --with github.com/ss098/certmagic-s3 \
    --with github.com/caddyserver/format-encoder

FROM caddy:2.4.1
COPY --from=builder /usr/bin/caddy /usr/bin/caddy

d. My current ‘apps’ part of JSON config:

  "apps": {
    "tls": {
      "certificates": {
        "automate": [ "dev-art.domain.com", "dev-art-2.domain.com" ]
    "http": {
      "servers": {
        "dev-art": {
          "listen": [ ":443" ],
          "routes": [
              "match": [
                { "host": [ "dev-art.domain.com", "dev-art-2.domain.com" ] }
              "handle": [
                  "handler": "reverse_proxy",
                  "upstreams": [
                    { "dial": "" },
                    { "dial": "" }

Sorry, because of an NDA the main domain was changed to domain.com (it contained the company name). The other parts and the structure are as-is. The dial addresses are also redacted.

Please upgrade to v2.4.3!

What you’re asking about is called “load balancing” or “failover” depending on how you want to do it.

Reading the Caddyfile reverse_proxy docs helps explain how this works.

Caddy’s default behaviour is to randomly choose a backend. You need to enable at least active or passive health checking for Caddy to mark specific upstreams as unhealthy, which changes which server it will choose.
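In JSON, health checking is configured under the reverse_proxy handler's health_checks field. A minimal sketch of an active check (the upstream addresses, /healthz path, and timings here are placeholders, not taken from your config):

```json
{
  "handler": "reverse_proxy",
  "upstreams": [
    { "dial": "10.0.0.1:8080" },
    { "dial": "10.0.0.2:8080" }
  ],
  "health_checks": {
    "active": {
      "uri": "/healthz",
      "interval": "10s",
      "timeout": "5s"
    }
  }
}
```

With this, Caddy probes each upstream on its own schedule and marks one unhealthy when the probe fails, so requests only go to backends that passed the last check.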

If you don’t enable health checking, the other option is to add retry logic. You could try a config like this:

reverse_proxy {
	transport http {
		dial_timeout 3s
	}
	lb_try_duration 5s
}

What this will do is make sure that dial timeouts take at most 3 seconds, and Caddy will try for up to 5 seconds to find a backend it can connect to for the request. This means that if the first attempt randomly picked an unavailable backend, it will retry with another one.
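Since you're using a JSON config, the same two settings map onto the reverse_proxy handler's transport and load_balancing fields. A sketch with placeholder upstream addresses:

```json
{
  "handler": "reverse_proxy",
  "upstreams": [
    { "dial": "10.0.0.1:8080" },
    { "dial": "10.0.0.2:8080" }
  ],
  "transport": {
    "protocol": "http",
    "dial_timeout": "3s"
  },
  "load_balancing": {
    "try_duration": "5s"
  }
}
```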


Great and detailed response. Thanks for help Francis!

Seems like health_checks/active is what I need.

