Best practice for a multi-tenant, multi-domain HTTPS server?

Hello!

I wonder how other Caddy users are dealing with multiple SSL sites on a single server/IP address, isolating each “tenant”.

My use case is replacing an aging cPanel+Apache “multiple reseller” self-hosting solution with a more modern, secure, Dockerized approach with TLS 1.2 and HTTP/2.

Right now I have a working setup: several personal sites served over HTTPS just fine by one Caddy instance in an internet-exposed container, acting as the frontend server for all domains. This instance does all the Let’s Encrypt magic for all domains, then proxies each one to an internal Caddy instance, one per tenant, each with its own container, Caddyfile and isolated filesystem.

So far, that works like a charm. They are personal low-traffic sites, but I do not foresee any scalability issues.

But I am running into trouble when enabling some advanced features on the internal sites, like an http.git webhook, as the remote cannot reach into the internal server (the frontend rejects the POST from Bitbucket with a 403, as it does not know anything about the internal site’s git webhook).

Am I over-complicating this? Is there another way to handle multi-domain, multi-tenant setups?
Not planning to start a huge hosting business, just to migrate a 15-year-old server to the new cloud + infrastructure-as-code + DevOps way of doing things.

This is the Caddyfile for the proxy:

www.tenantone.com, tenantone.com {
  tls webmaster@tenantone.com
  log stdout
  errors stdout
  proxy / web-tenantone:8000
}

www.tenanttwo.org, tenanttwo.org {
  tls webmaster@tenanttwo.org
  log stdout
  errors stdout
  proxy / web-tenanttwo:8001
}

And this is a typical Caddyfile for a tenant:

:8000 {
  tls off
  log stdout
  errors stdout
  root /data/content
  ext .html
  rewrite / /old/index.html
  rewrite {
    to {uri} /old/{uri}
  }
  rewrite {
    to {uri} /new/{uri}
  }
}

Thanks in advance for any insight!
///Pablo

I’m not sure I follow; in order to have a git webhook endpoint, you would have a git directive somewhere in the Caddyfile, wouldn’t you? This approach you have should work pretty well.

If you have a git directive on a tenant Caddy instance, and the front-end Caddy is faithfully proxying all requests to it, the webhooks should function just fine. I’ve done this myself in the past. If you can give us a real example of one of your attempts that didn’t work, maybe we can take a look at it?


If I were going to set up a massively multi-tenanted fully-HTTPS shared hosting service, I would probably put Caddy in front with a really simple file:

:80, :443 {
  tls {
    max_certs [some large number]
  }
  proxy / http://haproxy:80 {
    transparent
  }
}

This would make startup pretty fast. I expect I would set [some large number] to the weekly rate limit of Let’s Encrypt and restart Caddy once a week.

Then I would use jwilder/docker-gen to template out HAProxy’s configuration and do graceful reloads on the fly in reaction to me (or my client management portal) spinning up docker containers. I wouldn’t need to write a single scrap of tenant-specific code that way. They could probably even write their own Caddyfiles (or I could template that too based on some options they could select from the portal).
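
For illustration, the HAProxy config that docker-gen renders might look roughly like this - just a sketch, with backend names and tenant containers borrowed from the earlier example:

frontend web
  bind *:80
  mode http
  # route on the Host header to the right tenant backend
  acl is_tenantone hdr(host) -i tenantone.com www.tenantone.com
  acl is_tenanttwo hdr(host) -i tenanttwo.org www.tenanttwo.org
  use_backend tenantone if is_tenantone
  use_backend tenanttwo if is_tenanttwo

backend tenantone
  mode http
  server web web-tenantone:8000

backend tenanttwo
  mode http
  server web web-tenanttwo:8001

docker-gen would regenerate the acl/use_backend/backend entries from container metadata and trigger a graceful HAProxy reload whenever a tenant container comes or goes.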

I’d love other people’s opinions on this concept, too.


Yes, I have the git directive in one of the internal servers, set like this:

web-one/Caddyfile

:8000 {
  tls off
  log stdout
  errors stdout
  git {
    repo        git@bitbucket.org:organization/one-website.git
    path        /data
    branch      production
    key         /data/bitbucket_one-website_id_rsa
    interval    3600
    hook        /webhook
    hook_type   bitbucket
  }
  root /data/content
  ext .html
  rewrite / /old/index.html
  rewrite / {
     to {uri} /new/{uri} /old/{uri}
  }
}

Then, the proxy Caddy instance is set with:

proxy/Caddyfile

www.siteone.com.ar, siteone.com.ar {
  tls one@siteone.com.ar
  log stdout
  errors stdout
  proxy / web-one:8000
}

www.sitetwo.org, sitetwo.org {
  tls two@sitetwo.org
  log stdout
  errors stdout
  proxy / web-two:8001
}

Here are the annotated log snippets; maybe this provides enough info to get to the bottom of it.
This is personal 🙂

# A simple web request to web-one root, https://www.siteone.com.ar/
Jun 21 19:46:06 proxy proxy_proxy_1: 190.136.126.164 - [21/Jun/2017:22:46:06 +0000] "GET / HTTP/2.0" 304 0
Jun 21 19:46:06 web-one proxy_web-one_1: 172.18.0.3 - [21/Jun/2017:22:46:06 +0000] "GET /old/index.html HTTP/1.1" 304 0

# Just to show how the rewrite rules work:
# If there is no content at /, try /new then /old
Jun 21 19:46:06 proxy proxy_proxy_1: 190.136.126.164 - [21/Jun/2017:22:46:06 +0000] "GET /rox.css HTTP/2.0" 304 0
Jun 21 19:46:06 web-one proxy_web-one_1: 172.18.0.3 - [21/Jun/2017:22:46:06 +0000] "GET /old/rox.css HTTP/1.1" 304 0
Jun 21 19:46:06 proxy proxy_proxy_1: 190.136.126.164 - [21/Jun/2017:22:46:06 +0000] "GET /images/set092-chica.jpg HTTP/2.0" 304 0
Jun 21 19:46:06 web-one proxy_web-one_1: 172.18.0.3 - [21/Jun/2017:22:46:06 +0000] "GET /old/images/set092-chica.jpg HTTP/1.1" 304 0
# If the request path already has /old, then it is simply sent as-is.
Jun 21 19:46:20 proxy proxy_proxy_1: 190.136.126.164 - [21/Jun/2017:22:46:20 +0000] "GET /old/index.html HTTP/2.0" 200 8661
Jun 21 19:46:20 web-one proxy_web-one_1: 172.18.0.3 - [21/Jun/2017:22:46:20 +0000] "GET /old/index.html HTTP/1.1" 200 8661
Jun 21 19:46:21 proxy proxy_proxy_1: 190.136.126.164 - [21/Jun/2017:22:46:21 +0000] "GET /old/images/set092-chica.jpg HTTP/2.0" 200 22856
Jun 21 19:46:21 web-one proxy_web-one_1: 172.18.0.3 - [21/Jun/2017:22:46:21 +0000] "GET /old/images/set092-chica.jpg HTTP/1.1" 200 22856
Jun 21 19:46:21 web-one proxy_web-one_1: 172.18.0.3 - [21/Jun/2017:22:46:21 +0000] "GET /old/rox.css HTTP/1.1" 200 1864
# Content that is at /new gets returned fine.
Jun 21 19:46:28 proxy proxy_proxy_1: 190.136.126.164 - [21/Jun/2017:22:46:28 +0000] "GET /new/new.html HTTP/2.0" 200 249
Jun 21 19:46:28 web-one proxy_web-one_1: 172.18.0.3 - [21/Jun/2017:22:46:28 +0000] "GET /new/new.html HTTP/1.1" 200 249 

# Here is the Bitbucket webhook call. It does not look like it was proxied to web-one at all
Jun 21 19:40:10 proxy proxy_proxy_1: 104.192.143.193 - [21/Jun/2017:22:40:10 +0000] "POST /webhook HTTP/1.1" 403 14

# This is me, doing a `curl --data-binary @bitbucket-webhook-payload https://www.siteone.com.ar/webhook`
# where bitbucket-webhook-payload is the JSON that Bitbucket was sending. I am sure the format is not appropriate, just posting something.
Jun 21 19:45:05 proxy proxy_proxy_1: 190.136.126.164 - [21/Jun/2017:22:45:05 +0000] "POST /webhook HTTP/2.0" 400 16

# Here is the same `curl` to a non-existing webhook path: `curl --data-binary @bitbucket-webhook-payload https://www.siteone.com.ar/nonexisting-webhook`
# The funny thing is that here web-one does get a request, and responds with 404 as expected.
# Note all three web-one `rewrite` destinations were tried (the web-one 404 comes from /old/, the last one)
Jun 21 19:45:28 proxy proxy_proxy_1: 190.136.126.164 - [21/Jun/2017:22:45:28 +0000] "POST /nonexisting-webhook HTTP/2.0" 404 14
Jun 21 19:45:28 web-one proxy_web-one_1: 172.18.0.3 - [21/Jun/2017:22:45:28 +0000] "POST /old/nonexisting-webhook HTTP/1.1" 404 14

I suspect `http.git` and `rewrite` may be interacting in a funny way?
I will set up a similar git webhook on one of the other sites, one with no rewrite rules on it - something like the sketch below.
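
Here the git block is reused without the rewrites (a sketch only; the repo name, key and port are illustrative):

:8001 {
  tls off
  log stdout
  errors stdout
  git {
    repo        git@bitbucket.org:organization/two-website.git
    path        /data
    branch      production
    key         /data/bitbucket_two-website_id_rsa
    interval    3600
    hook        /webhook
    hook_type   bitbucket
  }
  root /data/content
}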

That is a good idea.
My setup is not intended to serve more than two or three personal sites, so HAProxy may be too much. But for massive hosting, it makes a lot of sense.

HAProxy doesn’t have to be complicated 🙂

In my example it’s purely there so that the edge Caddy has a single point to route to, letting me go “hands-off” on its configuration once it’s set up, and template out an HAProxy conf instead, which simply routes to the appropriate Docker container based on hostname. It’s not really used for high availability in this case, merely convenience.

# Here is the Bitbucket webhook call. It does not look like it was proxied to web-one at all
Jun 21 19:40:10 proxy proxy_proxy_1: 104.192.143.193 - [21/Jun/2017:22:40:10 +0000] "POST /webhook HTTP/1.1" 403 14

This part looks odd to me. For Bitbucket hooks, http.git tests to make sure the remote IP comes from Bitbucket, setting http.StatusForbidden if it doesn’t. But 104.192.143.193 should be in the list of allowed IPs - so you shouldn’t be getting any 403s… as far as I know from a quick look over the code.

I can’t see anything else that might issue a 403.

Any thoughts on that @abiosoft?

Do you mean the proxy is making that assessment?

Shouldn’t the actual web-one server, the one with the git webhook configuration, be making that check? What I find odd is that the request does not seem to reach the internal server…

I don’t think your edge/proxy server has any cause for issuing a 403, no. The only 403 I can see in the chain, anywhere, is the Bitbucket webhook logic on your internal upstream, so while it doesn’t seem to be logging this request, I can only assume the 403 must be coming from upstream somehow.

Oh! Jeez. Can’t believe I didn’t think of this.

Add `transparent` to your edge proxy. Without it, requests to the internal server look like they’re coming from 172.18.0.3, not 104.192.143.193, so the webhooks fail IP validation against the Bitbucket CIDR block and get 403’d.

I’m pretty confident that this is the issue and the fact that the internal server isn’t logging this is some kind of other issue.
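
For reference, `transparent` in the proxy directive is (per the Caddy docs, roughly) shorthand for setting the usual forwarding headers on the upstream request:

proxy / web-one:8000 {
  # what the transparent preset expands to:
  header_upstream Host {host}
  header_upstream X-Real-IP {remote}
  header_upstream X-Forwarded-Proto {scheme}
}

(Go’s reverse proxy appends X-Forwarded-For on its own.)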

OK, I’m almost sure I tried it, though maybe not on the whole proxy stanza but on a specific one I created for the webhook path. Anyway, I tried changing the proxy in proxy/Caddyfile to

  proxy / web-one:8000 {
    transparent
  }

and I see no change. Just in case, I rebuilt the Docker image to fetch the latest version.

I still see the internal Docker IP in the request log on the web-one server. Is that correct?

Jun 21 22:14:48 proxy proxy_proxy_1: 190.136.126.164 - - [22/Jun/2017:01:14:48 +0000] "GET /new/new.html HTTP/2.0" 200 249
Jun 21 22:14:48 web-one proxy_web-one_1: 172.18.0.3 - - [22/Jun/2017:01:14:48 +0000] "GET /new/new.html HTTP/1.1" 200 249

It is, because Caddy logs the actual remote IP, not the transparently forwarded client’s IP.

But, I’ve just realised that the Bitbucket webhook logic tests against r.RemoteAddr, which does not account for {>X-Real-IP} or {>X-Forwarded-For}. You’d need to have your webhook listening on the edge server, or it will always fail IP validation.

Sounds like a feature request for http.git, maybe? What do you think?

I think so. It’s worth mentioning that I still can’t definitively rule out that the edge server is giving out the 403 for some reason - which might explain why the internal server isn’t logging the request - but I can’t see how or why, and the Bitbucket-specific IP validation WILL be an issue for reverse-proxied webhook listeners. Definitely worth bringing up at abiosoft/caddy-git.


Ahh… I do not know how I missed this issue. A slightly different use case, but probably the same root cause.

The last comment on that issue is a reasonable workaround:

Adding

  realip {
    from 172.18.0.0/16
  }

to the internal server was enough to make the webhook work!
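
For the record, the tenant Caddyfile now looks like this - the earlier web-one config plus the realip block, where 172.18.0.0/16 is my Docker network:

:8000 {
  tls off
  log stdout
  errors stdout
  # restore the real client IP from the edge proxy's forwarding headers
  realip {
    from 172.18.0.0/16
  }
  git {
    repo        git@bitbucket.org:organization/one-website.git
    path        /data
    branch      production
    key         /data/bitbucket_one-website_id_rsa
    interval    3600
    hook        /webhook
    hook_type   bitbucket
  }
  root /data/content
  ext .html
  rewrite / /old/index.html
  rewrite / {
    to {uri} /new/{uri} /old/{uri}
  }
}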

Nothing like a good night’s sleep to revisit an issue 🙂

