Routing by subdomain with auto_https

1. Caddy version (caddy version):

v2.3.0 h1:fnrqJLa3G5vfxcxmOH/+kJOcunPLhSBnjgIvjXV/QTA=

This build includes the route53 module, which currently isn't being used. That's the next step. :slight_smile:

2. How I run Caddy:

sudo ./caddy run --config=new_caddy.json

a. System environment:

macOS 10.15.7 right now
Eventually Ubuntu 20.14

b. Command:

sudo ./caddy run --config=new_caddy.json

c. Service/unit/compose file:

N/A

d. My complete Caddyfile or JSON config:

{
  "apps": {
    "http": {
      "servers": {
        "srv0": {
          "listen": [
            ":443"
          ],
          "logs": {
            "default_logger_name": "log0"
          },
          "routes": [
            {
              "group":"api",
              "match": [
                {
                  "host": [
                    "api.*"
                  ]
                }
              ],
              "handle": [
                {
                  "handler": "reverse_proxy",
                  "upstreams": [
                    {
                      "dial": "127.0.0.1:8000"
                    }
                  ]
                }
              ]
            },
            {
              "group":"app",
              "match": [
                {
                  "host": [
                    "*"
                  ]
                }
              ],
              "handle": [
                {
                  "handler": "reverse_proxy",
                  "upstreams": [
                    {
                      "dial": "127.0.0.1:5000"
                    }
                  ]
                }
              ]
            }
          ]
        }
      }
    },
    "tls": {
      "automation": {
        "policies": [
          {
            "issuers": [
              {
                "module": "internal"
              }
            ],
            "on_demand": true
          }
        ]
      }
    }
  },
  "logging": {
    "logs": {
      "default": {
        "level": "DEBUG"
      },
      "log0": {
        "include": [
          "http.log.access.log0"
        ],
        "level": "DEBUG",
        "writer": {
          "filename": "/tmp/caddy_access.log",
          "output": "file"
        }
      }
    }
  }
}

3. The problem I’m having:

I run a SaaS where customers bring their own domains. For each of their domains I need HTTPS set up automatically, plus three routes:

api.$domain.com → 127.0.0.1:8000 (our API server)
www.$domain.com → 127.0.0.1:5000 (our APP server)
$domain.com → redirect to www.$domain.com
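The JSON config above covers the first two routes but not the bare-domain redirect. A hedged sketch of what that third route could look like in the same JSON style, using Caddy's static_response handler and its standard {http.request.host}/{http.request.uri} placeholders (status code and exact placement are up to you):

```
{
  "match": [
    { "host": ["*"] }
  ],
  "handle": [
    {
      "handler": "static_response",
      "status_code": 308,
      "headers": {
        "Location": ["https://www.{http.request.host}{http.request.uri}"]
      }
    }
  ]
}
```

Placed last in the routes array, it only fires for requests the api.* and www.* routes didn't already handle.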

I'm struggling to figure out how to route to the two backends based on the subdomain. From the log output, it looks like I might need TLS connection policies to get this to work. I'm a bit confused, though, since I didn't need a connection policy with a single route and no match group; at least, when I originally used a Caddyfile and adapted it to JSON, the policy wasn't explicitly stated.

4. Error messages and/or full log output:

2021/05/05 15:18:49.575	INFO	using provided configuration	{"config_file": "/var/www/new_caddy.json", "config_adapter": ""}
2021/05/05 15:18:49.579	INFO	admin	admin endpoint started	{"address": "tcp/localhost:2019", "enforce_origin": false, "origins": ["localhost:2019", "[::1]:2019", "127.0.0.1:2019"]}
2021/05/05 15:18:49.581	INFO	tls.cache.maintenance	started background certificate maintenance	{"cache": "0xc000340a10"}
2021/05/05 15:18:49.592	INFO	http	server is listening only on the HTTPS port but has no TLS connection policies; adding one to enable TLS	{"server_name": "srv0", "https_port": 443}
2021/05/05 15:18:49.592	INFO	http	enabling automatic HTTP->HTTPS redirects	{"server_name": "srv0"}
2021/05/05 15:18:49.601	DEBUG	http	starting server loop	{"address": "[::]:443", "http3": false, "tls": true}
2021/05/05 15:18:49.601	DEBUG	http	starting server loop	{"address": "[::]:80", "http3": false, "tls": false}
2021/05/05 15:18:49.671	INFO	tls	cleaned up storage units
2021/05/05 15:18:49.788	INFO	pki.ca.local	root certificate is already trusted by system	{"path": "storage:pki/authorities/local/root.crt"}
2021/05/05 15:18:49.788	INFO	autosaved config	{"file": "/Users/josh/Library/Application Support/Caddy/autosave.json"}
2021/05/05 15:18:49.788	INFO	serving initial configuration

5. What I already tried:

A simple glob was my first attempt:

"host": [
  "api.*"
]

I also tried a placeholder:

"host": [
  "api.{http.request.host}"
]

I considered the following, but it seemed like it would break on mydomain.co.uk:

"host": [
  "api.*.*"
]

6. Links to relevant resources:

Placeholders:

Matching on routes:

The usual pattern for On-Demand TLS is something like this. (Note that setting up an ask endpoint is very important; otherwise you're open to abuse, e.g. a potential denial of service from someone forcing your server to continually issue certificates for any domain they request.)

{
	on_demand_tls {
		ask http://localhost:5000/ask
	}
}

:443 {
	tls {
		on_demand
	}

	reverse_proxy 127.0.0.1:5000
}

I think the problem you’re hitting is that the host matcher is set up so that * only matches a single label (domain segment between dots). What you could do instead is match by the Host header using the header or header_regexp matcher which doesn’t have this constraint.

@api header Host api.*
handle @api {
	reverse_proxy 127.0.0.1:8000
}

@app header Host www.*
handle @app {
	reverse_proxy 127.0.0.1:5000
}

handle {
	redir https://www.{host}{uri}
}

I think you mean this log line ("server is listening only on the HTTPS port but has no TLS connection policies; adding one to enable TLS"), right?

If you read it carefully, this is Caddy saying "your config didn't have a TLS connection policy, so I'm adding one for you, because you're listening on port 443". Nothing to do here; it's just letting you know that it did something automatically.
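For reference, the implicitly added policy is roughly equivalent to an empty entry in the server's tls_connection_policies array (a sketch of the idea, not necessarily the exact JSON Caddy generates internally):

```
"srv0": {
  "listen": [":443"],
  "tls_connection_policies": [
    {}
  ]
}
```

An empty policy simply enables TLS on that server with all defaults; you only need to spell one out when you want non-default behavior, such as matching specific SNI names.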

@francislavoie Thank you so much!

For posterity, here’s the full Caddyfile

:443 {

    @app header Host www.*
    handle @app {
        reverse_proxy 127.0.0.1:5000
    }

    @api header Host api.*
    handle @api {
        reverse_proxy 127.0.0.1:8000
    }

    handle {
        redir https://www.{host}{uri}
    }

    tls internal {
        on_demand
    }

    log {
        output file /tmp/caddy_access.log
    }
}

That runs as is.

I do have an ask endpoint set up in my DNS-based experiment. I'll come back later and add the final config, including the ask with route53, once I have it all working. Hopefully it'll help someone else out.
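For anyone following along, here is a minimal sketch of what an ask endpoint can look like. The allowlist, port, and path are hypothetical; the contract with Caddy is that it sends the requested hostname as a `domain` query parameter and issues a certificate only if the endpoint returns a 2xx status:

```python
from http.server import BaseHTTPRequestHandler, HTTPServer
from urllib.parse import parse_qs, urlparse

# Hypothetical allowlist; in practice this would be a lookup against the
# domains your customers have registered with your SaaS.
ALLOWED_DOMAINS = {"somedomainiown.com"}

def is_allowed(domain: str) -> bool:
    """Allow an allowed domain itself, or any subdomain of it."""
    domain = domain.lower().rstrip(".")
    return any(domain == d or domain.endswith("." + d) for d in ALLOWED_DOMAINS)

class AskHandler(BaseHTTPRequestHandler):
    def do_GET(self):
        # Caddy calls e.g. GET /_domain_check?domain=www.somedomainiown.com
        query = parse_qs(urlparse(self.path).query)
        domain = query.get("domain", [""])[0]
        # A 200 tells Caddy it may issue a certificate; anything else denies it.
        self.send_response(200 if is_allowed(domain) else 403)
        self.end_headers()

# To serve (blocks forever):
#   HTTPServer(("127.0.0.1", 5000), AskHandler).serve_forever()
```

The subdomain check uses an explicit leading dot so that look-alike domains such as notsomedomainiown.com are rejected.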

@francislavoie a follow up question if you don’t mind.

I'm good now on everything except automatic DNS-challenge certificates with route53.

{
    debug
    on_demand_tls {
        ask http://127.0.0.1:5000/_domain_check
    }
}

:443 {

    # variables
    @app header Host www.*
    @api header Host api.*

    handle @app {
        file_server /static/* {
            root /var/www/cb/app/
        }
    }

    handle @app {
        reverse_proxy 127.0.0.1:5000
    }

    handle @api {
        reverse_proxy 127.0.0.1:8000
    }

    tls caddy@changed_for_spam.org {
        dns route53 {
            max_retries 10
            aws_profile "default"
        }
    }

    handle {
        redir https://www.{host}{uri}
    }

    log {
        output file /tmp/caddy_access.log
    }
}

I have the route53 DNS module installed:

$ caddy list-modules
admin.api.load
admin.api.metrics
caddy.adapters.caddyfile
caddy.listeners.tls
caddy.logging.encoders.console
caddy.logging.encoders.filter
caddy.logging.encoders.filter.delete
caddy.logging.encoders.filter.ip_mask
caddy.logging.encoders.json
caddy.logging.encoders.logfmt
caddy.logging.encoders.single_field
caddy.logging.writers.discard
caddy.logging.writers.file
caddy.logging.writers.net
caddy.logging.writers.stderr
caddy.logging.writers.stdout
caddy.storage.file_system
dns.providers.route53
.. truncated

The log output:

2021/05/05 20:55:40.218	INFO	using provided configuration	{"config_file": "/var/www/cb/ops/server/needs_auto_dns", "config_adapter": "caddyfile"}
2021/05/05 20:55:40.220	INFO	admin	admin endpoint started	{"address": "tcp/localhost:2019", "enforce_origin": false, "origins": ["localhost:2019", "[::1]:2019", "127.0.0.1:2019"]}
2021/05/05 20:55:40.221	INFO	tls.cache.maintenance	started background certificate maintenance	{"cache": "0xc000357b20"}
2021/05/05 20:55:40.221	INFO	http	server is listening only on the HTTPS port but has no TLS connection policies; adding one to enable TLS	{"server_name": "srv0", "https_port": 443}
2021/05/05 20:55:40.221	INFO	http	enabling automatic HTTP->HTTPS redirects	{"server_name": "srv0"}
2021/05/05 20:55:40.221	DEBUG	http	starting server loop	{"address": "[::]:80", "http3": false, "tls": false}
2021/05/05 20:55:40.221	DEBUG	http	starting server loop	{"address": "[::]:443", "http3": false, "tls": true}
2021/05/05 20:55:40.222	INFO	autosaved config	{"file": "/Users/josh/Library/Application Support/Caddy/autosave.json"}
2021/05/05 20:55:40.222	INFO	serving initial configuration
2021/05/05 20:55:40.235	INFO	tls	cleaned up storage units
2021/05/05 20:55:47.549	DEBUG	http.stdlib	http: TLS handshake error from 127.0.0.1:52910: no certificate available for 'www.somedomainiown.com'
2021/05/05 20:55:47.550	DEBUG	http.stdlib	http: TLS handshake error from 127.0.0.1:52911: no certificate available for 'www.somedomainiown.com'

I previously had the route53 module working but with a specific domain. So I’m pretty sure the .aws config is good to go.

To clarify my issue: what am I missing to get route53 certs working with dynamic domains?
Relatedly, is there a better way to dig into this myself beyond reading the source code? Is there some extra debug output I can turn on?

Thanks again!

You can simplify all this (matchers aren’t variables, by the way):

    @app header Host www.*
    handle @app {
        @notStatic not path /static/*
        reverse_proxy @notStatic 127.0.0.1:5000

        root * /var/www/cb/app/
        file_server
    }

    @api header Host api.*
    handle @api {
        reverse_proxy 127.0.0.1:8000
    }

Do you actually control all the domains under your AWS account that you plan to issue certificates for? If not, then you can’t use the DNS challenge. The point of the DNS challenge is to prove that you have control of those DNS zones, which is a proof of ownership.

The purpose of On-Demand TLS is to allow you to issue certificates on demand for domains that you haven’t specifically configured Caddy to handle. Caddy doesn’t use the DNS challenge for on-demand, because it doesn’t really make sense. (Well technically it could, but the situations where you’d use on-demand and the situations where you need the DNS challenge are orthogonal).

If a client can connect to your server publicly, with a domain Caddy doesn’t yet have a certificate for, then it means that your server is accessible over port 443 (and probably 80 as well), so the HTTP or TLS-ALPN challenges are sufficient.

The times where you need the DNS challenge are when either you need a wildcard certificate (because you might have unlimited subdomains under a given domain which you know ahead of time) or because your server is not publicly accessible over port 80 or 443 (so HTTP or TLS-ALPN challenges can’t be solved).

So really, I think you should just stick with on-demand and not worry about the DNS challenge, probably.

Thanks @francislavoie !

We have the customer domains pointing to our name servers, so DNS challenge works. I initially went with DNS challenge since it allowed for local development with some /etc/hosts entries for test domains. The plan was to build the DNS challenge out locally then reliably deploy the code from local to prod.

In production TLS-ALPN should work fine though. I’ll give this a try. Thanks again!

Typically, SaaS setups with Caddy have customers use a CNAME record to point their domain at one of yours, which in turn points to your server. I find it strange to have customers change name servers, given how much trust that requires, but :man_shrugging:
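For illustration (all names here are made up), the CNAME approach looks something like this in the customer's own DNS zone, with the customer keeping their existing name servers:

```
; in the customer's zone (illustrative names)
www.customerdomain.com.  CNAME  ssl.yoursaas.example.
api.customerdomain.com.  CNAME  ssl.yoursaas.example.
```

One caveat: a zone apex (the bare domain) generally can't be a CNAME per the DNS specs, which is one reason the www redirect pattern is common; providers typically offer ALIAS/ANAME-style records or a plain A record for the apex.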

This topic was automatically closed after 30 days. New replies are no longer allowed.