So what I’m trying to do - initially just internally; I’ll work out external access later on.
I’m moving to Caddy v2 from v1 - and using the caddy-docker-proxy plugin to automatically configure my Caddy instance from running Docker containers.
Now, I don’t run a swarm. So using this method, I’d technically need a Caddy Docker instance on each of the hosts, as you can’t (as far as I know) interrogate a remote Docker instance.
I know I could update dnsmasq with each subdomain and point it at each host, but is there something else I should look at - or do I bite the bullet and go Swarm?
I guess what I’m after is: is there something I could put on a single Caddy instance that would allow it to be the initial entry point, farming requests out to each of the hosts?
But I was hoping there was something a little less hard coded.
Links to relevant resources: Multiple Docker Hosts, multiple Caddys - controlling dns and redirection
If node1 has the hass subdomain, node2 has radarr, and node3 has unifi, then obviously round robin and random wouldn’t work, as I’d want requests to go to the right one. Or would round robin fail to find an endpoint and move on to the next?
So I’m trying this, but not quite sure what I’m doing.
I have 3 Caddy instances: ‘host1’, ‘host2’ and ‘hostmain’.
Host1 has services
Host2 has services (more than likely different from host 1)
Hostmain is the controller.
Host1 has tt-rss, so in Caddy I have reader.host1.domain:443
reverse proxying to the tt-rss service. I’ve confirmed this works by using the service. This is auto-configured by caddy-docker-proxy.
So what I’m trying to do now is make it so I can call reader.domain and use load balancing to figure out which of host1 and host2 has the reader service.
By putting an explicit directive in hostmain,
I could see it choosing host1 (correctly) and sticking with it. But in my laziness, I’d prefer not to have to explicitly put entries in hostmain for each of the services.
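For reference, the kind of explicit entry I mean in hostmain’s Caddyfile would look something like this (a sketch, not my exact config - the domain names are placeholders, `lb_policy first` just sticks to the first available upstream, and the `header_up` line is there because the downstream Caddys match on their own hostnames):

```
reader.domain {
	reverse_proxy https://reader.host1.domain https://reader.host2.domain {
		lb_policy first
		header_up Host {upstream_hostport}
	}
}
```

With internal-only certificates you’d likely also need to tell the proxy transport to trust them (or skip verification), which I’ve left out here.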
Ideally the subdomain of the request would be handed to the two sub-hosts without alteration… e.g. back to reader: asking for reader.domain would then go to reader.host1.domain.
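A less hard-coded sketch of that idea (untested assumption on my part): a wildcard site that rebuilds the upstream hostname from the request’s leftmost label via Caddy’s placeholders. For reader.example.com, `{http.request.host.labels.2}` would be `reader` (labels count from the right, so the index depends on how many labels your base domain has):

```
*.example.com {
	tls internal
	reverse_proxy https://{http.request.host.labels.2}.host1.example.com {
		header_up Host {upstream_hostport}
	}
}
```

Note this only points at host1 - it doesn’t solve working out *which* host has the service.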
You can load balance, but it doesn’t make sense to do it this way if those are entirely different apps. Load balancing is for distributing incoming requests to different instances of the same app to spread out the processing/resources.
That’s not what load balancing is for, as I said.
I’m not really sure how to answer your questions though, because I don’t feel I have a good picture of what exactly you’re trying to do. It sounds extremely complicated.
Could you draw out a diagram to show what you’re trying to achieve? You could use something like https://excalidraw.com/
I don’t use Swarm. I know I probably could, but I’m not sure I trust it.
But then how do I get to sonarr on host1? Something in dnsmasq (which my router uses) needs to know how to get to host1 to forward the request.
So I was thinking of using another Caddy instance with load balancing, where requests come in and it tries host1 and host2 until it finds the service. I found the lb_policy ip_hash might be a little sticky once it found it.
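On the “tries host1 and host2 until it finds it” point: as I understand it, Caddy’s reverse_proxy can retry another upstream within `lb_try_duration`, and passive health checks (`fail_duration`) can temporarily skip an upstream - but only for connection-level failures, not for a host that answers fine and simply doesn’t have that service. A sketch of what I was experimenting with (hostnames illustrative):

```
reader.domain {
	reverse_proxy https://reader.host1.domain https://reader.host2.domain {
		lb_policy round_robin
		lb_try_duration 5s
		fail_duration 30s
	}
}
```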
By this, are you talking about the labels? Because the containers aren’t “reporting” to CDP. Rather, CDP scans the running containers to grab the labels and turns them into Caddyfile config.
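For context, the labels in question look something like this in a compose file - CDP (lucaslorentz/caddy-docker-proxy) scans them and generates the equivalent Caddyfile site (image name and domain are illustrative):

```yaml
services:
  tt-rss:
    image: my-ttrss-image  # illustrative, not a real image name
    labels:
      caddy: reader.host1.example.com
      caddy.reverse_proxy: "{{upstreams 80}}"
```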
That’s the job of DNS. The DNS server will resolve sonarr.example.com to an IP address. You should make sure that resolves to the IP address of your host1 server.
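In dnsmasq.conf terms, that could be per-service entries pointing each name at the right box, or a wildcard sending everything under the domain to one entry point - a sketch with illustrative IPs (dnsmasq’s `address=/…/` form covers the name and everything under it, and the most specific match should win):

```
# per-service entries pointing at the right host
address=/sonarr.example.com/192.168.1.11
address=/hass.example.com/192.168.1.12

# or a wildcard: everything under example.com goes to hostmain
address=/example.com/192.168.1.10
```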
Ah, so this is the bit that I’m trying to get my head around. Without manually entering an entry for sonarr = host1 and home assistant = host2 - how can I make it a bit more flexible?
I kind of want the Swarm overlay network without the Swarm bit. One entry point (dnsmasq will support a wildcard for a domain) - but it will know where to go.
The more I go round on this, the more I think I should just put in Swarm and constrain deployment to specific servers.
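If I did go that way, constraining a Swarm service to a specific node looks fairly painless in a v3 compose file (sketch; the hostname is illustrative):

```yaml
services:
  sonarr:
    image: linuxserver/sonarr
    deploy:
      placement:
        constraints:
          - node.hostname == host1
```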
yeah that’s how I was going to use the load balancing, since it seemed to work.
I had another instance that allowed anything for the domain (this is all internal stuff), and then it would try the host1 or host2 endpoints. With round robin, obviously every second time it would get nothing. But I think ip_hash was a little stickier.
But obviously that’s not its intended purpose. I was just curious if there was anything else in the Caddy toolbox that might have helped.