Possible to configure caddy to serve bare IPs with internal wildcard?

1. The problem I’m having:

The basic idea here is: I want to set up a typical reverse proxy, but I also want to be able to serve TLS-terminating requests that arrive destined for anywhere on :443. The use case would be Caddy sitting in between a cloud load balancer and an application backend: I want the load balancer to be able to hit the instance’s routable IP address, but the load balancer may not know what hostname the backend instance is serving; it just wants to see a response (either HTTP or HTTPS) after making the request.

I strongly suspect this is SNI-related, since I can invoke curl with --resolve and get a response back for any hostname - but the hostname has to be there first.
I can also get an internal wildcard cert for the IP address - as long as it’s listed in a host matcher. But the trick here is that the instance doesn’t know how the upstream load balancer is going to hit it.

Things that I expected might make this work, but haven’t so far:

  • Using default_sni, fallback_sni, or strict_sni_host insecure_off
  • Ensuring that a host * matcher is set

2. Error messages and/or full log output:

What I get back from curl:

$ curl -k https://127.0.0.1
curl: (35) OpenSSL/3.0.14: error:0A000438:SSL routines::tlsv1 alert internal error

What I want to see - note I can trick this into working by using --resolve, I assume because of SNI?

$ curl --resolve foo:443:127.0.0.1 -k https://foo
Hi

3. Caddy version:

v2.8.4

4. How I installed and ran Caddy:

I’m isolating this behavior into a local xcaddy build (because my environment also ships the rate-limiter module).

a. System environment:

NixOS 24.05, though the smallest-reproducible-config is pretty simple and my OS/systemd/docker versions don’t directly apply here.

b. Command:

caddy run

c. Service/unit/compose file:

N/A

d. My complete Caddy config:

A few settings changed, including some SNI-related settings I tried that didn’t do much:

{
    default_sni example.com
    fallback_sni example.com
    servers {
      strict_sni_host insecure_off
    }
}

*:443 {
    tls internal
    respond "Hi"
}

Howdy @tylerjl,

You probably want some kind of TLS On-Demand. Caddy still wants to get some kind of certificate to try to match what you’re requesting.

I’m not sure why foo worked for you - maybe you already had an internal cert for foo - but it didn’t work on mine, not at first.

➜ curl --resolve foo:443:127.0.0.1 -k https://foo
curl: (35) OpenSSL/3.3.1: error:0A000438:SSL routines::tlsv1 alert internal error

When I added an on_demand config to the HTTPS catch-all it went and made an internal cert, no problem.

➜ curl --resolve foo:443:127.0.0.1 -k https://foo
2024/07/18 00:47:26.379	INFO	tls.on_demand	obtaining new certificate	{"remote_ip": "127.0.0.1", "remote_port": "63343", "server_name": "foo"}
2024/07/18 00:47:26.380	INFO	tls.obtain	acquiring lock	{"identifier": "foo"}
2024/07/18 00:47:26.386	INFO	tls.obtain	lock acquired	{"identifier": "foo"}
2024/07/18 00:47:26.386	INFO	tls.obtain	obtaining certificate	{"identifier": "foo"}
2024/07/18 00:47:26.389	INFO	tls.obtain	certificate obtained successfully	{"identifier": "foo", "issuer": "local"}
2024/07/18 00:47:26.389	INFO	tls.obtain	releasing lock	{"identifier": "foo"}
2024/07/18 00:47:26.389	WARN	tls	stapling OCSP	{"error": "no OCSP stapling for [foo]: no OCSP server specified in certificate", "identifiers": ["foo"]}
Hello⏎

It also went and got certs for example.com when I tried to grab an IP address with default_sni example.com enabled:

➜ curl -k https://127.0.0.1
2024/07/18 00:47:46.185	INFO	tls.on_demand	obtaining new certificate	{"remote_ip": "127.0.0.1", "remote_port": "63359", "server_name": "example.com"}
2024/07/18 00:47:46.186	INFO	tls.obtain	acquiring lock	{"identifier": "example.com"}
2024/07/18 00:47:46.191	INFO	tls.obtain	lock acquired	{"identifier": "example.com"}
2024/07/18 00:47:46.192	INFO	tls.obtain	obtaining certificate	{"identifier": "example.com"}
2024/07/18 00:47:46.193	INFO	tls.obtain	certificate obtained successfully	{"identifier": "example.com", "issuer": "local"}
2024/07/18 00:47:46.193	INFO	tls.obtain	releasing lock	{"identifier": "example.com"}
2024/07/18 00:47:46.194	WARN	tls	stapling OCSP	{"error": "no OCSP stapling for [example.com]: no OCSP server specified in certificate", "identifiers": ["example.com"]}
Hello⏎

And with default_sni disabled it went and got one for the IP address itself:

➜ curl -k https://127.0.0.1
2024/07/18 00:49:54.496	INFO	tls.on_demand	obtaining new certificate	{"remote_ip": "127.0.0.1", "remote_port": "63460", "server_name": "127.0.0.1"}
2024/07/18 00:49:54.497	INFO	tls.obtain	acquiring lock	{"identifier": "127.0.0.1"}
2024/07/18 00:49:54.504	INFO	tls.obtain	lock acquired	{"identifier": "127.0.0.1"}
2024/07/18 00:49:54.504	INFO	tls.obtain	obtaining certificate	{"identifier": "127.0.0.1"}
2024/07/18 00:49:54.506	INFO	tls.obtain	certificate obtained successfully	{"identifier": "127.0.0.1", "issuer": "local"}
2024/07/18 00:49:54.506	INFO	tls.obtain	releasing lock	{"identifier": "127.0.0.1"}
2024/07/18 00:49:54.506	WARN	tls	stapling OCSP	{"error": "no OCSP stapling for [127.0.0.1]: no OCSP server specified in certificate", "identifiers": ["127.0.0.1"]}
Hello⏎

Without an on_demand policy, or a valid hostname in the site address, there is no way for Caddy to requisition a certificate. You either grab that cert at startup or grab it when a request comes in; if you do neither, Caddy simply doesn’t have a cert to provide at all.

➜ caddy version
v2.8.4 h1:q3pe0wpBj1OcHFZ3n/1nl4V4bxBrYoSoab7rL9BMYNk=

➜ cat Caddyfile
{
  default_sni example.com
  servers {
    strict_sni_host insecure_off
  }
}

https:// {
  tls internal {
    on_demand
  }
  respond "Hello"
}

Note: don’t configure on_demand on a production server without ask.
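For example, gating issuance with an ask endpoint might look something like this (a sketch; http://localhost:9123/ok stands in for a real permission service you run):

```
{
  on_demand_tls {
    ask http://localhost:9123/ok
  }
}

https:// {
  tls internal {
    on_demand
  }
  respond "Hello"
}
```

Caddy makes a request to the ask URL with the candidate name before issuing, and only proceeds if it gets a 200 back.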

2 Likes

Alternatively, I suppose you could have a https://, https://example.com site block with the fallback_sni example.com global option, without on_demand. I imagine that would basically have Caddy serve the example.com cert for all requests, since it wouldn’t have certs for anything else, but it would maintain the internal cert for example.com since it’s a specified host.
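As a sketch of that alternative (untested, with example.com standing in for the real hostname):

```
{
  fallback_sni example.com
}

https://, https://example.com {
  tls internal
  respond "Hello"
}
```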

1 Like

Thanks for the suggestion, @Whitestrake. Your answer led me down a path that I think might work slightly better for my particular use case, so I’m going to document here in case 1) it helps someone else or 2) somebody sees something egregiously wrong with the strategy.

I’m using the JSON API a fair bit so when it comes time to enable TLS and flip the switch, I can POST this:

POST :2019/config/apps/tls
{
  "automation": {
    "policies": [
      {"issuers": [{"module": "acme"}], "subjects": [hostname]},
      {"issuers": [{"module": "internal"}], "subjects": ["*"]}
    ]
  }
}

Which - please correct me if I’m wrong - will essentially do what automatic TLS does but also ask Caddy to provision an internal wildcard certificate as well.

Then later on, as long as 1) I have a server listening on :443 and 2) my matchers are permissive (i.e., don’t enforce a host foo.bar anywhere), it seems to pull the wildcard as a fallback. Note that in my case I have two top-level server routes: one matches the hostname I’m provisioning from ACME, and the second is the permissive one that seems to pull in the wildcard.

The only thing I’m not sure about with this strategy is whether that policy is 1:1 with the automatic TLS defaults that Caddy ships with, since my goal is basically to get the same out-of-the-box defaults and just also ask for an internal wildcard to use in special cases. But it seems to work so far.

Gotcha.

I’m a little surprised since I didn’t think you’d be able to make use of a * wildcard very effectively.

Wildcard certificates can only have one glob, which only covers a single level, and must be the leftmost element of the domain. That means that * would cover localhost but it wouldn’t cover foo.local, example.com, or 127.0.0.1. You could have a *.example.com wildcard, or a *.foo.example.com wildcard, but you can’t have a *.*.example.com or a bar.*.example.com wildcard, to my knowledge.
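The single-label rule can be sketched roughly like this (a simplified illustration of the convention, not Caddy’s actual matching code):

```go
package main

import (
	"fmt"
	"strings"
)

// matchesWildcard reports whether name is covered by pattern under the
// usual certificate wildcard convention: a "*" may appear only as the
// leftmost label and matches exactly one label.
func matchesWildcard(pattern, name string) bool {
	p := strings.Split(pattern, ".")
	n := strings.Split(name, ".")
	if len(p) != len(n) {
		return false // the glob never spans multiple labels
	}
	for i := range p {
		if i == 0 && p[i] == "*" {
			continue // leftmost glob covers exactly one label
		}
		if p[i] != n[i] {
			return false
		}
	}
	return true
}

func main() {
	fmt.Println(matchesWildcard("*", "localhost"))                   // true
	fmt.Println(matchesWildcard("*", "foo.local"))                   // false
	fmt.Println(matchesWildcard("*.example.com", "foo.example.com")) // true
	fmt.Println(matchesWildcard("*.example.com", "a.b.example.com")) // false
}
```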

I figured you’d probably need fallback_sni to get Caddy to use it since I didn’t think Caddy would pick it for an invalid server name - or on_demand to make internal certs as required. But if it seems to be working, I’m glad.

1 Like

I appreciate your comments here @Whitestrake since they’re helping me ensure I actually understand what’s happening, and, uh, I actually didn’t understand what was happening here :melting_face:

Indeed, my configuration was insufficient to provide both a single-label wildcard (for example, localhost) and a fallthrough certificate for bare IP TLS termination. I’ve since revised my setup to use the following, which I’m currently testing by running it for a while and evaluating whether certificates renew properly, etc.

{
  "tls": {
    "automation": {
      "policies": [
        {
          "issuers": [
            {
              "module": "acme"
            }
          ],
          "subjects": [
            "my.example.com"
          ]
        },
        {
          "issuers": [
            {
              "module": "internal"
            }
          ],
          "subjects": [
            "*"
          ]
        },
        {
          "issuers": [
            {
              "module": "internal"
            }
          ],
          "on_demand": true
        }
      ]
    }
  }
}

This is paired with a few match statements, some that declare host and some that don’t (to catch incoming requests that lack SNI hints, like bare IPs).

{
  "match": [
    {
      "host": [
        "*" # Catches things like localhost
      ]
    },
    ... # Other matchers have no `host` matches to catch bare IPs
  ]
}

The primary change is to add a final TLS policy that doesn’t have any subject filter and includes on_demand, as you noted. The only things I’m still not 100% sure on are these:

  1. I understand that on_demand should usually be paired with a functional ask endpoint to avoid overprovisioning for too many names, names that aren’t those I expect, etc. But if I’ve paired on_demand with internal in order to just create self-signed certificates in the fallthrough scenario - that is, requests without SNI hints or a Host header - that should be okay, right? It’s not as if I could accidentally self-sign a cert for google.com because my second policy doesn’t match (two labels) and the third doesn’t provision a cert for Google since it’s not keying off any names once it provisions a “nameless” cert for bare IP requests. I’ve tested with a fake SNI like curl --resolve google.com:443:127.0.0.1 -v -k https://google.com/ and Caddy uses the “nameless” cert it creates in the third policy without any appearance of google.com in its names.
  2. My first policy is intended to emulate the default ACME behavior, but I’m not sure whether this is exactly what Caddy does. Does it look right? Or does the default behavior allocate a ZeroSSL ACME issuer as well? I didn’t find a way to emit the default config behavior (I have hate to mention him here since I know he’s busy, but maybe @matt knows)

(I tried to ninja-edit that post to strikethrough “have” to “hate” but this markdown format doesn’t support it; I don’t assume I have your ear @matt, I know you’re busy and don’t demand your attention if you’re busy :sweat_smile: )

strikethrough uses double ~, ~~like this~~

2 Likes

That’s a bad idea. You should only do that if your server is not publicly accessible. The attack vector is anyone could pump infinite TLS connections to your server, forcing your server to fill up storage with garbage certs & keys until you run out of storage space, causing a DDoS. Using ask gives you a way to prevent that by only accepting to issue certs for names you know about.

Yes, if you configure an email (since v2.8.0) then ZeroSSL is also enabled as a default issuer. See the release notes, which explain this: Release v2.8.0 · caddyserver/caddy · GitHub

I didn’t read the whole thread, but I’m pretty sure this only matches single-level, so like only localhost but not foo.localhost.

1 Like

Hmm, that makes sense @francislavoie. I went back to the drawing board to see if there was some way I was overlooking “provision an ACME cert for this chosen domain and always use it”, but still can’t think of a way to do that. So I think I’ll write a small REST sidecar to perform some basic checks and lean into on_demand a bit more.

I think logic that checks “is domain one of my assigned IP addresses?” should be sufficient here: it avoids 1) allocating names I don’t own and 2) a DoS, since the set of valid domains is constrained to only addresses assigned to the host.

Perhaps you can use the acme_server with an IP address policy?

1 Like

Thanks @Mohammed90, I wasn’t aware of that directive either. I’d already written the /ok endpoint which seems to work, so maybe I’ll reference that later.

At this point I think my problem is resolved with the key being to leverage on_demand as @Whitestrake suggested paired with an ask endpoint that allowlists only addresses that the host has assigned (my handler does a simple iteration across network interfaces and returns 200 if domain’s value matches any address).
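For reference, that kind of ask handler can be sketched in Go roughly like this (the /ok path and port are placeholders; Caddy sends the candidate name in the domain query parameter and treats HTTP 200 as permission to issue):

```go
package main

import (
	"fmt"
	"net"
	"net/http"
)

// isLocalAddress reports whether host parses as an IP address that is
// assigned to one of this machine's network interfaces.
func isLocalAddress(host string) bool {
	ip := net.ParseIP(host)
	if ip == nil {
		return false
	}
	addrs, err := net.InterfaceAddrs()
	if err != nil {
		return false
	}
	for _, addr := range addrs {
		if ipNet, ok := addr.(*net.IPNet); ok && ipNet.IP.Equal(ip) {
			return true
		}
	}
	return false
}

// askHandler answers Caddy's ask check: 200 permits issuance,
// anything else denies it.
func askHandler(w http.ResponseWriter, r *http.Request) {
	if isLocalAddress(r.URL.Query().Get("domain")) {
		w.WriteHeader(http.StatusOK)
		return
	}
	w.WriteHeader(http.StatusForbidden)
}

func main() {
	// Quick self-check; to actually serve, wire it up with:
	//   http.HandleFunc("/ok", askHandler)
	//   http.ListenAndServe("127.0.0.1:9123", nil)
	fmt.Println(isLocalAddress("8.8.8.8")) // false unless that address is actually assigned here
}
```

The on_demand_tls global option would then point its ask URL at this service.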

As an aside: does the feature “always provision IP-like certificates for addresses a host has assigned to it” seem like something useful to upstream Caddy? It’s a relatively niche use case but 1) the implementation seems simple enough that even I could PR something and 2) Caddy’s hallmarks are good defaults and great quality-of-life, so maybe this could be something like

{
  "on_demand": {
    "permission": {
      "module": "local_addresses"
    }
  }
}

But maybe that’s not as useful as I think it is to others.

Not really, considering it’s easy to provide as a permission plugin like you did (or via a small standalone HTTP server). You can make an account on Download Caddy to add your plugin there for anyone else to use.

1 Like

This topic was automatically closed 30 days after the last reply. New replies are no longer allowed.