Hi, guys.
I've got a question about my JSON config.
My goal is to have one Caddy instance serve two DNS records and forward requests to either of the two IP addresses where my application is running.
I'm trying to use the JSON config shown below, but it seems it isn't correct, because when I power off the 135.181.24.1 node I get a 502 and dial tcp 135.181.24.1:3000: i/o timeout in the Caddy logs.
I expected Caddy to forward the request to the second IP, which is still alive.
I'm sure I'm doing something wrong or missing something; please correct me or push my thoughts in the right direction (at the end of the post I've added the kind of load-balancing/health-check settings I suspect I might be missing).
1. Caddy version (caddy version):
A custom Docker image based on Caddy 2.4.1 with the certmagic-s3 and format-encoder modules
2. How I run Caddy:
a. System environment:
Ubuntu 18.04.5 LTS
b. Command:
/usr/bin/docker run --rm --name='proxy' \
--mount type=bind,source=/etc/caddy_config.json,destination=/caddy_config.json \
--publish 80:80 \
--publish 443:443 \
artemius/caddy:2.4.1 caddy run --config /caddy_config.json
c. Dockerfile:
FROM caddy:2.4.1-builder AS builder
RUN xcaddy build --output ./caddy \
--with github.com/ss098/certmagic-s3 \
--with github.com/caddyserver/format-encoder
FROM caddy:2.4.1
COPY --from=builder /usr/bin/caddy /usr/bin/caddy
d. My current ‘apps’ part of the JSON config:
{
  "apps": {
    "tls": {
      "certificates": {
        "automate": [ "dev-art.domain.com", "dev-art-2.domain.com" ]
      }
    },
    "http": {
      "servers": {
        "dev-art": {
          "listen": [ ":443" ],
          "routes": [
            {
              "match": [
                { "host": [ "dev-art.domain.com", "dev-art-2.domain.com" ] }
              ],
              "handle": [
                {
                  "handler": "reverse_proxy",
                  "upstreams": [
                    { "dial": "135.181.24.1:3000" },
                    { "dial": "135.181.24.2:3000" }
                  ]
                }
              ]
            }
          ]
        }
      }
    }
  }
}
Sorry, because of an NDA the main domain was changed to domain.com; it contained the company name. Other parts and the structure are as-is.
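What I was thinking of trying next is adding load-balancing and health-check options to the reverse_proxy handler, something like the sketch below. The field names are what I found in the JSON docs for http.handlers.reverse_proxy, and the concrete values (durations, max_fails) are just guesses on my side, so please tell me if this is the wrong direction:

{
  "handler": "reverse_proxy",
  "load_balancing": {
    "selection_policy": { "policy": "round_robin" },
    "try_duration": "5s",
    "try_interval": "250ms"
  },
  "health_checks": {
    "passive": {
      "max_fails": 1,
      "fail_duration": "30s"
    }
  },
  "upstreams": [
    { "dial": "135.181.24.1:3000" },
    { "dial": "135.181.24.2:3000" }
  ]
}

My understanding is that try_duration would let Caddy retry the other upstream when the first dial fails, and the passive health check would mark the dead node as unhealthy for fail_duration, but I'm not sure I'm reading the docs correctly.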