Possible to opt out of tls on_demand per route?

1. Caddy version (caddy version):


2. How I run Caddy:

Caddy API as a systemd service

a. System environment:

Ubuntu 20.04

b. Command:

service caddy-api start

c. Service/unit/compose file:


d. My complete Caddyfile or JSON config:

  "admin": {
    "disabled": false,
    "listen": "localhost:2019"
  "apps": {
    "http": {
      "servers": {
        "srv0": {
          "listen": [
          "routes": []
    "tls": {
      "automation": {
        "on_demand": {
          "ask": "https://mydomain.com/check-domain"
  "logging": {
    "logs": {
      "default": {
        "level": "DEBUG"

3. The problem I’m having:

Mostly wondering if this is already possible. I want to turn off TLS on_demand for specific routes that will be behind a CDN anyway. Otherwise Caddy keeps trying to generate a cert and failing, and I don’t need one for those routes.

4. Error messages and/or full log output:

No error log on this one

5. What I already tried:

I’ve been looking at the docs, and I see that I could potentially set the subjects in an automation policy in the JSON config, but I’m hoping there’s an easier way to do it within the config for a route - or some other way that doesn’t require explicitly setting and maintaining a list of hostnames. Instead I’d rather have it on by default and opt out per route or hostname (if possible).
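For reference, my understanding is that an automation policy with an explicit subjects list would look roughly like this (domain names here are placeholders, not my real config):

```json
{
  "apps": {
    "tls": {
      "automation": {
        "policies": [
          {
            "subjects": ["app1.example.com", "app2.example.com"]
          }
        ]
      }
    }
  }
}
```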

6. Links to relevant resources:

What do you mean by “specific routes”? Do you mean for specific domains? In that case, that’s what the ask endpoint is for.


Thanks Francis, I appreciate the answer, and yes I meant a specific route in the sense of having a match for a domain/subdomain.

The ask endpoint makes sense, I hadn’t considered that as an option. I’d only been thinking of it as a security feature, not as a use case for this.

Would it need to hit the ask endpoint on every request then, or does it remember the result?

If it remembers it, is there any way to change that later, in case I want to turn on_demand back on for that domain later? Not that I’d expect that very often, but it would be good to know if that might be tricky.

Well the ask endpoint tells Caddy whether it should have a TLS cert issued for the domain in the current request, if Caddy doesn’t already have one for that domain. If Caddy doesn’t issue one, then it’ll just fail the TLS handshake and the browser will present an error to the user. There’s no opportunity to “route” if you don’t have a valid certificate, if the request hits Caddy.
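To illustrate how an ask endpoint can work: Caddy sends a GET request to the configured URL with the candidate hostname in the `domain` query parameter, and a 200 response means "go ahead and issue a cert" while any other status denies it. This is just a hypothetical sketch (the allow-list and port are made up, and a real endpoint might check a database instead):

```python
from http.server import BaseHTTPRequestHandler, HTTPServer
from urllib.parse import urlparse, parse_qs

# Hypothetical allow-list; in practice this could be a database lookup.
ALLOWED_DOMAINS = {"app1.example.com", "app2.example.com"}

def domain_allowed(domain: str) -> bool:
    """Return True if Caddy should obtain a certificate for this domain."""
    return domain in ALLOWED_DOMAINS

class AskHandler(BaseHTTPRequestHandler):
    def do_GET(self):
        # Caddy calls e.g. GET /check-domain?domain=app1.example.com
        query = parse_qs(urlparse(self.path).query)
        domain = query.get("domain", [""])[0]
        # 200 tells Caddy to issue the cert; any other status denies it.
        self.send_response(200 if domain_allowed(domain) else 403)
        self.end_headers()

if __name__ == "__main__":
    HTTPServer(("127.0.0.1", 8080), AskHandler).serve_forever()
```

Removing a domain from the allow-list would stop new issuance for it, which fits the "turn it off later" case (though it doesn't revoke a cert Caddy already holds).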

I’m not sure if Caddy caches the ask result, but ideally it should for a short time to avoid repeats (I’d say something like caching for 1 to 5 minutes would be plenty). I’ll look into it.

I still don’t understand what you mean by “route”. If you want something else to handle those domains, then point the DNS to the IP address of your other server, and not to your Caddy server.

Thanks, Francis. After writing the rest of this reply, I realized I should probably be serving over TLS from Caddy in all cases anyway - even if a CDN is in front - so that there’s no leg of the journey that isn’t secured.

So I guess the point is fairly moot, and doesn’t really need an answer anymore, but I’ll post the rest of my reply just because it’s already written:

Sorry, I haven’t explained this very well.

By route I mean the routes in the JSON config, which for my use case will each be configured with a reverse proxy handler. On nginx I’d call it a virtual host.
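Concretely, I'm picturing routes along these lines - a sketch with placeholder hostname and upstream address:

```json
{
  "match": [{ "host": ["app1.example.com"] }],
  "handle": [
    {
      "handler": "reverse_proxy",
      "upstreams": [{ "dial": "localhost:9000" }]
    }
  ]
}
```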

I want to have a CDN pointed at my Caddy server because it’s what serves my applications to the web. It could be reverse proxying to a local application on the same server (using an internal port for instance) or it might be from another server (perhaps over a private network), but either way it’s about how Caddy serves it to the web. The DNS would be pointed at the CDN in this case.

My goal is to have TLS on_demand turned on in general for the Caddy server, since I’d want most routes to be served over HTTPS with certs generated on first request.

Some of those routes, though, I’ll want to serve over plain HTTP, without creating or renewing certs for them. The two cases I can think of are:

  1. Switching a route from HTTPS to HTTP after a cert has already been created.
    -For cases where it wasn’t behind a CDN at first, but I put a CDN in front of it later
    -And if I could do this, would Caddy still keep trying to renew the cert when it expires?

  2. Creating new routes that serve over HTTP from the start and never generate a cert with Caddy.
    -For cases where I already have a CDN set up and pointed at the Caddy server for that domain/subdomain

If you plan to serve things differently over HTTP than over HTTPS, then you should separate your config into two servers, i.e. srv0 and srv1, one for HTTP (port :80) and the other for HTTPS (port :443). On-demand config is server-wide (as in srv0) so you need to split up the servers (and it doesn’t make sense to have on-demand on a server only serving HTTP, anyways).
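A minimal sketch of that split might look like this (listen ports are the conventional ones; routes are left empty as placeholders, and depending on your setup you may also need to adjust `automatic_https` for the HTTP-only server):

```json
{
  "apps": {
    "http": {
      "servers": {
        "srv0": {
          "listen": [":443"],
          "routes": []
        },
        "srv1": {
          "listen": [":80"],
          "routes": []
        }
      }
    }
  }
}
```

HTTPS routes go in srv0, and the plain-HTTP (CDN-fronted) routes go in srv1.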

If you’ll be using a CDN, are you sure you need on-demand? The purpose of on-demand is to solve the problem where you don’t know the domains that you’ll be serving upfront. If you do know them, then you should let Caddy manage certificates for them automatically (Automatic HTTPS), i.e. not on-demand. But be aware that for ACME challenges to be solved, whatever CDN you use must proxy through HTTP requests, and/or they must not terminate TLS to solve the ALPN challenge. Or use the DNS challenge to avoid this problem.
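For the DNS challenge approach, an automation policy might look roughly like this - assuming a Caddy build that includes a DNS provider plugin (Cloudflare is used here purely as an example; the provider name and token are placeholders):

```json
{
  "subjects": ["app1.example.com"],
  "issuers": [
    {
      "module": "acme",
      "challenges": {
        "dns": {
          "provider": {
            "name": "cloudflare",
            "api_token": "{env.CLOUDFLARE_API_TOKEN}"
          }
        }
      }
    }
  ]
}
```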

That’s a great point about having two servers in the config - that makes much more sense than what I’d originally been thinking. I suppose I don’t need on-demand for the routes that will be behind the CDN, so I can move those to the server config without on-demand.

I still want requests to go browser <=> CDN <=> Caddy <=> app/server, because then I still get a lot of the benefits of Caddy, like health checks, header transforms, etc. - things I could do with CDNs, but that are often more difficult to work with or more expensive. Plus, having Caddy in there adds very little overhead, especially when most requests will be cached at the CDN edge.

Thanks for the help!

This topic was automatically closed after 30 days. New replies are no longer allowed.