Multiple Docker hosts, multiple Caddy instances - controlling DNS and redirection

1. Caddy version (caddy version):

v2

2. How I run Caddy:

caddy-docker-proxy using Docker labels

a. System environment:

Host 1, home assistant
Host 2, main docker host

b. Command:

n/a

c. Service/unit/compose file:

n/a

d. My complete Caddyfile or JSON config:

n/a

3. The problem I’m having:

So here’s what I’m trying to do - initially just internally; I’ll work out external access later on.
I’m moving from Caddy v1 to v2, and using the caddy-docker-proxy plugin to automatically configure my Caddy instance from running Docker containers.

Now, I don’t run a Swarm. So using this method, I’d technically need a Caddy instance on each of the hosts, as you can’t (as far as I know) interrogate a remote Docker instance.

So how might I tell DNS that when I go to hass.internal.example.com (for Home Assistant on host1) it should use host1, and when I go to unifi.internal.example.com it should use host2?

I know I could update dnsmasq with an entry for each subdomain pointing at each host, but is there something else I should look at - or do I bite the bullet and go Swarm?

I guess what I’m after is: is there something I could put on a single Caddy instance that would let it be the initial entry point, farming requests out to each of the hosts?
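
Something like this on the single front instance, maybe (hostnames and ports are made up):

    hass.internal.example.com {
        # hypothetical: hand off to the Caddy on host1
        reverse_proxy host1.lan:80
    }

    unifi.internal.example.com {
        # hypothetical: hand off to the Caddy on host2
        reverse_proxy host2.lan:80
    }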

4. Error messages and/or full log output:

n/a

5. What I already tried:

I was thinking of just having host1.example.com and host2.example.com, and then updating dnsmasq so that hass = hass.host1.example.com and unifi = unifi.host2.example.com.
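
In dnsmasq terms, something along these lines (IPs are made up):

    # everything under a host's subdomain resolves to that host
    address=/host1.example.com/192.168.1.10
    address=/host2.example.com/192.168.1.20
    # plus one entry per service, pointing at whichever host runs it
    address=/hass.example.com/192.168.1.10
    address=/unifi.example.com/192.168.1.20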

But I was hoping there was something a little less hard coded.

6. Links to relevant resources: Multiple Docker Hosts, multiple Caddys - controlling dns and redirection

Could I use load balancing between 2-3 Caddy instances?

    reverse_proxy node1:80 node2:80 node3:80

If node1 has the hass subdomain, node2 has radarr, and node3 has unifi, then obviously round robin and random wouldn’t work, as I’d want each request to go to the right one. Or would round robin fail to find an endpoint and just move on to the next?
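
The closest thing I can see in the docs is the first policy plus active health checks, something like this (an untested sketch):

    reverse_proxy node1:80 node2:80 node3:80 {
        # 'first' always prefers the earliest upstream that is healthy
        lb_policy first
        # active health checks are what mark an upstream as unhealthy
        health_uri /
        health_interval 10s
        # caveat: a node that runs Caddy but not the app would still answer
        # the health check, so this can't tell which node has which subdomain
    }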

So I’m trying this, but not quite sure what I’m doing.

I have 3 Caddy instances: ‘host1’, ‘host2’ and ‘hostmain’.
host1 has services.
host2 has services (more than likely different from host1).
hostmain is the controller.

host1 has tt-rss, so in Caddy I have
reader.host1.domain:443
reverse proxying to the tt-rss service. I’ve confirmed this works by using the service. This is auto-configured by caddy-docker-proxy.

So what I’m trying to do now is make it so I can call reader.domain and use load balancing to figure out which of host1 and host2 has the reader service.

By putting an explicit directive in hostmain:

    @reader host reader.domain
    reverse_proxy @reader reader.host1.domain:443 reader.host2.domain:443 {
        lb_policy ip_hash
        transport http {
            tls_insecure_skip_verify
        }
    }

I could see it choosing host1 (correctly) and sticking with it. But in my laziness, I’d prefer not to have to explicitly put entries in hostmain for each of the services.

Can I (writing pseudocode here):

    reverse_proxy <anything> <anything>.host1.domain <anything>.host2.domain

where the subdomain of the request is passed through to the two sub-hosts without alteration… e.g. back to reader: asking for reader.domain would then go to reader.host1.domain.
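
For what it’s worth, I imagine a wildcard site with a host-label placeholder could express that, assuming placeholders are allowed in the upstream address (untested, and it only covers host1):

    *.domain {
        # assumption: internally-issued cert for the made-up wildcard domain
        tls internal
        # {labels.1} is the leftmost label of a two-label host like reader.domain,
        # so this forwards reader.domain -> reader.host1.domain
        reverse_proxy {labels.1}.host1.domain:443 {
            transport http {
                tls_insecure_skip_verify
            }
        }
    }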

You can load balance, but it doesn’t make sense to do it this way if those are entirely different apps. Load balancing is for distributing incoming requests to different instances of the same app to spread out the processing/resources.

That’s not what load balancing is for, as I said.

I’m not really sure how to answer your questions though, because I don’t feel I have a good picture of what exactly you’re trying to do. It sounds extremely complicated :thinking:

Could you draw out a diagram to show what you’re trying to achieve? You could use something like https://excalidraw.com/

Not sure it’s a great image.

So I will have a few Docker hosts using caddy-docker-proxy (https://github.com/lucaslorentz/caddy-docker-proxy), where each of the containers reports back to the Caddy instance where it is.

I don’t use Swarm. I know I probably could, but I’m not sure I trust it.

But then how do I get to sonarr on host1? Something in dnsmasq (which my router uses) needs to know how to get to host1 to forward the request.

So I was thinking of using another Caddy instance with load balancing, where requests come in and it tries the host1 and host2 endpoints until it finds the service. I found that lb_policy ip_hash would stick with the right host once it found it.

But I’m not sure what else I can use.

By this, are you talking about the labels? Because the containers aren’t “reporting” to CDP. Rather, CDP scans the running containers to grab the labels and turns them into Caddyfile config.
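
For example, labels like these on a container (a hypothetical compose snippet; image and domain are made up) are what CDP reads and turns into a site block:

    services:
      tt-rss:
        image: example/tt-rss   # made-up image name
        labels:
          caddy: reader.host1.domain
          caddy.reverse_proxy: "{{upstreams 80}}"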

That’s the job of DNS. The DNS server will resolve sonarr.example.com to an IP address. You should make sure that resolves to the IP address of your host1 server.

Ah, so this is the bit that I’m trying to get my head around. Without manually entering an entry for sonarr = host1 and homeassistant = host2, how can I make it a bit more flexible?

I kind of want the Swarm overlay network without the Swarm bit. One entry point (dnsmasq will support a wildcard for a domain), but it will know where to go.
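
(The dnsmasq wildcard being a single entry like this, with a made-up IP:)

    # resolve every name under internal.example.com to the hostmain box
    address=/internal.example.com/192.168.1.5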

The more I go round on this, the more I think I should just put in Swarm and constrain deployment to specific servers.

Why is this a problem? Why can’t you just do that? That’s the point of DNS.

I guess because I’m lazy. Well, that’s not quite the right term - I’d like it to be automatic.

You could use a different DNS server like CoreDNS maybe, which might help you make things more “automatic” there.

Either way, you need something somewhere for your clients to know which server to reach out to. That’s what DNS does.

Yeah, that’s how I was going to use the load balancing, since it seemed to work.

I had another instance that allowed anything for the domain (this is all internal stuff), and then it would try the host1 or host2 endpoints. With round robin, obviously every second request would get nothing. But I think ip_hash was a little stickier.

But obviously that’s not its intended purpose. I was just curious if there was anything else in the Caddy toolbox that might have helped.

So I guess the other option is a smarter DNS, which I think is what you were trying to point me at, @francislavoie.

Ideally one that can be updated from labels in the same way as caddy-docker-proxy, so that the running containers can tell DNS where they are.

This topic was automatically closed after 30 days. New replies are no longer allowed.