Local valid SSL cert setup

1. The problem I’m having:

Hi All,

I’d like to use Caddy for what I think is a fairly simple setup. I want valid local SSL certificates for my home lab services. I absolutely do not want any exposure to the internet or external access, for security reasons; internal usage only.

I have found guides for doing this using NGINX Proxy Manager, wildcard subdomains, and my Cloudflare domain name with my AdGuard Home + Unbound DNS setup on OPNsense. I would like to incorporate the OPNsense Caddy plugin instead, to keep it all on the same platform.

I have not yet been able to find a similar guide that I can follow to achieve my goal using Caddy. Can anybody help me through this setup process, or suggest a guide to follow?

Thanks for your help, it is much appreciated!

2. Error messages and/or full log output:

NA

3. Caddy version:

NA

4. How I installed and ran Caddy:

NA

a. System environment:

OPNsense

b. Command:

NA

c. Service/unit/compose file:

NA

d. My complete Caddy config:

NA

5. Links to relevant resources:

NA

Hey there, you can do this easily with the Caddy plugin by using the access list feature.

You can also add basic auth or forward auth to further enhance security.

Here is the current guide; following it will naturally lead you to the access list feature.

https://docs.opnsense.org/manual/how-tos/caddy.html

When you use the DNS-01 challenge, you don’t even need to open any ports on WAN. It can stay entirely local, with only firewall rules on the LAN.


Thanks for your advice and for your great work on this project!

I had looked at that guide a while back but was concerned by one of the initial steps:

" Creating a Simple Reverse Proxy

The domain has to be externally resolvable. Create an A-Record with an external DNS Provider that points to the external IP Address of the OPNsense."

My OPNsense router is behind an ISP router, does not have a static external IP address, and in any case I do not want any access to it from the external internet. I had interpreted that step to mean that I cannot use my local OPNsense router IP address. Are you indicating that I can use the local IP address as an A record in Cloudflare, and that only I will be able to use this internally?

Regarding access lists, I am wondering what role exactly they play in my setup. If I can set up with a local IP, does this automatically mean that nobody can externally access my services? Or does external access get enabled by default somehow, so that I do need to additionally limit access to internal IP addresses using the method in the guide? If so, I would not want this to fail, or to misconfigure it and introduce a security vulnerability into my network.

Another question I have regards the basic or forward auth that you suggest. I like the idea of adding security in this way. Am I correct, though, in thinking that the security only gets applied when resolving the server using the domain name? If I just type the IP address and port into a web browser as normal, does this bypass the auth setup? Perhaps this is less important since I am only running it on my local network, and it may cause complications for automated scripts that communicate between my servers, but I would like to add security wherever possible.

Sorry for all the questions but I am very keen to learn and use this. Thanks for your time!

You can have public or private DNS; whichever the connecting client hits first is what resolves the domain to an IP.

Your DNS records can point to any IP, even a private-range one. If someone tried to connect from a system on a different network, they would be directed to that same private IP, but on their own subnet, not yours.

Some may choose to resolve to a public IP that can reach their private network (such as via router or via VPN), allowing access from public networks. That often comes with additional setup to restrict access which can be done in a variety of ways.

If you’ll only be connecting from systems on your private network, you can just resolve to the appropriate private IP that Caddy runs on.


The benefit of the DNS challenge with public DNS services like Cloudflare is getting LetsEncrypt certs, whose root CA is already in the trust stores of most systems.

Without that, you could provision your own certificates internally with a private CA instead of LetsEncrypt, but then each connecting client needs to add that CA cert into their trust store; otherwise browsers will present a warning about insecure HTTPS, and other software like curl will require skipping verification (chain of trust).
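
For illustration, a minimal sketch of that private CA route in Caddyfile terms (the hostname and upstream address are placeholders to adjust): tls internal has Caddy issue the cert from its own local CA, and each client then needs that root CA cert in its trust store.

app.home.arpa {
    tls internal
    reverse_proxy 192.168.1.20:8080
}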


If I can setup with a local IP, does this automatically mean that nobody can externally access my services?

If you were to go the private CA route, you can use your own local/private DNS and manage domains with reserved TLDs for private use (.localhost for the same system, .test, .home.arpa, and I think also .internal; do not use .local, which is for mDNS). These will not be resolved by public DNS services, so it may provide some additional comfort vs domains with TLDs that could be registered on the public internet by someone else (now or in future).

That, similar to the private CA trust store concern, means each connecting device should be using that private DNS. When that’s managed by the router that provides the devices’ network connection, there’s a little less friction.

So this can all work completely offline. The DNS challenge with LetsEncrypt cert and public DNS to your private IP is easier.

It does have some privacy concerns IIRC:

  • Obviously public DNS can be queried, so you do reveal a bit of context to an attacker about which IP on your network is used for this, and that the domain is being used for internal self-hosted services. This all gets scraped by sites like Shodan, so the history is logged and can be searched/filtered.
  • When you provision certificates through LetsEncrypt, there is a public log that can also have historic records searched. If the cert is not for a wildcard domain, this may reveal more information about services you are running (since you might have a subdomain reflecting each service name).
  • Usual privacy concerns with traffic leaving your system/network.

There’s also split DNS: running private DNS for the same public DNS domain you own (if you do this and don’t mirror/sync the records that you don’t override with local IPs, the public records missing from your private DNS won’t resolve, which might cause some failures).


If you have a reverse proxy like Caddy, along with the certificates for HTTPS, your inbound traffic only comes in via the HTTP/HTTPS ports (80/443), there’s no need for the other services to have ports exposed/open on interfaces outside of that host system unless they need to be connected to directly (like Caddy directing the traffic to a service running on another system in your network).

So provided you can only reach services indirectly through Caddy, there’s no concern there with IP-based access. Any HTTP connections get routed to HTTPS (the default behaviour in Caddy for a site address with no scheme prefix), and then the auth requirement is handled.

Your auth can be as simple as Caddy’s own basic auth directive, prompting for username and password. Or leverage forward_auth for a common login portal that protects multiple sites (instead of individual basic auth for each); Authelia is good for that.
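
As a rough sketch of the basic auth option (the hostname, upstream, and bcrypt hash are placeholders; the directive is spelled basicauth on older Caddy versions, basic_auth on newer ones):

app.example.com {
    basic_auth {
        # placeholder hash; generate a real one with: caddy hash-password
        alice $2a$14$REPLACE_WITH_REAL_HASH
    }
    reverse_proxy 192.168.1.10:8080
}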

You also have the option of mutual TLS, which requires a client to provide their own certificate you’ve provisioned to authenticate with, instead of username/password credentials (or you can use both).
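
A hedged sketch of what mutual TLS looks like in a Caddyfile (the hostname, upstream, and CA cert path are placeholders; clients that can’t present a cert signed by that CA are rejected during the handshake):

app.example.com {
    tls {
        client_auth {
            mode require_and_verify
            trusted_ca_cert_file /path/to/client-ca.crt
        }
    }
    reverse_proxy 192.168.1.10:8080
}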

Don’t just take my word for it though; once you get a basic setup going, check for yourself that your concerns about connecting via IP+port aren’t viable. Sometimes there are configuration mistakes or caveats in the technology used. For example, some setups unintentionally lose the original client IP, which instead appears as your network’s gateway IP or the reverse proxy IP; that may carry a higher level of trust if you have any software configured to relax security when an IP seems more trustworthy. Docker has been known for this problem (IPv6 client connections replaced with the IPv4 Docker network gateway IP when not configured correctly), as have setups where public DNS points to a router forwarding a port to another system.


Thanks for the detailed response, great to learn from people who know this stuff in such depth.

The private CA option does sound quite appealing. I guess it’s not the biggest data leak for people to know that I use internal services and perhaps some of their names (I think I would want a wildcard approach for convenience, which slightly decreases the problem), but I would of course rather avoid it.

So I guess it comes down to just how difficult/time consuming it is to set up and use compared to the public DNS approach. Can you go into more detail on how this would work and how I can set it up using OPNsense? I currently use Unbound as DNS for all my local services. Are there any other downsides to this approach to be aware of, or is it straight up preferred for my relatively simple use case?

For the most part I want to access services running on other servers from one central access VM server. So does that mean I potentially only need to set up one certificate and store it in that one trust store? E.g. for accessing the Proxmox web admin GUI running on another server.

Regarding the basic vs forward auth question:

To clarify, I only want to access services from within the same local network from home. No access is required from the internet, so I don’t want to open/forward any ports. My question was more around the need for auth at the Caddy level at all. I had the impression that I could still type the IP and port into my web browser as usual and bypass Caddy’s domain name resolution stuff, and hence its basic auth too. But I think you are suggesting that is not the case?

In either case, I do like the idea of an added security layer to stop attackers moving within my network. Most services have their own authentication (e.g. the Proxmox admin page), but it could be good to have another layer, e.g. with Authelia as you suggest. I would just need to work out if that would complicate future automation attempts. Mostly I imagine I can handle those jobs via SSH communication between servers, rather than needing direct service access, so it’s probably all good!

Thanks for your help!

I don’t have such a setup myself with a router.

I typically just run all services on one system, but I can expose those to the local network for a device like my phone to connect to. For me it’s often just disposable dev/test environments that I use to reproduce and investigate bug reports, or for learning new concepts in an isolated environment.

My understanding though is that devices on your network don’t need DNS configured explicitly (like I would on my phone), as the router connection handles that. If you have no need for equivalent access to those services on public networks, no DNS concerns come to mind, as you only manage the DNS internally.

For private cert provisioning instead of a public service like LetsEncrypt, you can use Smallstep CA (or its simpler CLI program, step-cli) to generate a root CA. Caddy actually uses that project under the hood itself when you use the local_certs global config or the tls internal site block directive; that will create a root CA cert in the Caddy data directory at pki/authorities/local/root.crt.

On the system that runs Caddy, this will typically also attempt to install that root cert into your system trust store (there are also caddy trust / caddy untrust commands you can look at in the docs). step-cli has similar trust management commands in its docs if you prefer to manually create the certificates instead of having Caddy automatically manage them.
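
If you manage the config yourself rather than through a GUI, a minimal sketch of that (the hostname and upstream are placeholders):

{
    local_certs
    # skip_install_trust   # opt out of installing the root CA into the trust store
}

service.home.arpa {
    reverse_proxy 127.0.0.1:3000
}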

As for your question, the VM running Caddy needs the root CA private key to provision new certificates, but all clients that connect will need a copy of the root CA cert in their trust stores to verify the chain of trust. You don’t have to worry about this with LetsEncrypt, as it’s already bundled in your trust store by the OS. It’s only an issue on very old devices, like some smart TVs, where the trust store was no longer updated and the old root CA for LetsEncrypt expired; normally your systems receive updates for stuff like this to keep everything running smoothly :+1:

If it’s not possible to reach your services via public internet connections because your router forbids it (standard), then this is only possible on your private network.

Caddy listens on ports 80 / 443, so if you connect via IP only, there’s no FQDN to do SNI routing, IIRC. You might have configured Caddy to serve something on :80 / :443, or for one of the specific IPs that Caddy is listening on, and that could serve something… but other than that, Caddy needs the FQDN (my-service.example.com) to match a site address you’ve configured Caddy to serve.
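
If you want to make the IP-only / unknown-host case explicit rather than an unmatched TLS handshake, one option is a catch-all block; this is just a sketch, and on_demand here lets Caddy’s internal CA mint a cert for whatever name (or IP) the client used:

https:// {
    tls internal {
        on_demand
    }
    respond "No such site" 404
}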

You can take Caddy out of this setup for a moment. If you have any service that is reachable within your network via IP + port, Caddy isn’t going to magically prevent that.

If all your services are running on the same system like that VM, you should be able to configure them to only bind/listen to loopback interface via 127.0.0.1 / localhost. This way nothing outside of the VM can reach those services. Caddy would need to bind to other network interfaces in the VM to be reachable via port 80 + 443.

How your network is set up with the VM affects the next concern with access. It might only be on a bridge network, keeping it private to the hypervisor host running it. If that is the case, even if your other services were not configured to bind to 127.0.0.1, you could reach them via the host the network is bridged to, but you shouldn’t be able to reach them from other systems on your network (which would need to go through the host running the VM, and thus would need the ports forwarded from one interface to another to hop into the VM’s internal network, IIRC). There are other networking configurations I’m less familiar with, which assign the VM an IP on the same subnet your router serves every other device on the local network from.

I haven’t used VMs in a while as I mostly use Docker these days. There are some scenarios where other systems on a network (L2 switch, I think, was the term) could indirectly route from their localhost to the Docker host system’s LAN IP, forwarding that localhost request to the Docker host, which then routes the traffic into a Docker network that was presumed private and inaccessible :man_shrugging: (this was specific to Linux IIRC; I reproduced it with VMs and some cloud providers where you can share a private network between multiple VPS instances. It was avoidable via a kernel tunable IIRC, or using firewalld with zones; I don’t think UFW had a way to properly isolate it).

So uhh… better that you just verify queries like this yourself as it may vary. Just get some basic deployment going first before adding your real services/config. Should be easy enough to double check.

Ideally your services are not reachable directly, and there is a reverse proxy like Caddy sitting in front for any incoming connections. If you had services across several systems/VMs, you might have more than one instance of Caddy to act as the entrypoint to those services. If there is more than one proxy between the client and a service, you’ll want to look at the docs about trusted_proxies, as this layering otherwise risks losing the original client IP.
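
Configuring that is just a global option; a sketch, assuming a placeholder address range for the proxy sitting in front:

{
    servers {
        # replace with the address(es) of the proxy in front of this Caddy instance
        trusted_proxies static 192.168.1.0/24
    }
}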

There’s a variety of ways to layer on authentication. Authelia is nice for protecting access to web services that lack any login process, but it can also provide SSO via OIDC/OAuth2 when your services can integrate with that, helping to centralize account management.

For automation, you could also consider mTLS (the Smallstep website has some good articles on this, including with SSH certificates, but you’ll also find some material here on the Caddy forum). mTLS has the client provide their own certificate to authenticate that they’re a trustworthy connection; those certificates can be short-lived when issued (such as by Smallstep CA) for improved security.


Thanks again for the great overview!

To clarify further, I am using an OPNsense router and want to run as much of my networking on that device as possible. Currently my servers are accessible within the local network via their IP and port, which seems to be the default approach as I’ve been setting things up, e.g. the Portainer admin panel (i.e. VMs have a local IP assigned by OPNsense DHCP). I would be interested in removing access in that way and forcing it through a central Authelia + reverse proxy + domain name setup, but I currently don’t know how to do that.

So with either the local CA approach or the public DNS approach, I would want it running on my OPNsense. I have looked into your Smallstep CA suggestion now and it does look like a cool project! I have not yet found a way to set it up as an OPNsense plugin, though. OPNsense does support ACME, which seems related, but I don’t yet have a way of getting this all to work. I do like the idea of mTLS, which I think is an advantage of the Smallstep CA approach.

Regarding the privacy concern of public DNS leaking information about my local services:
I have read that using wildcard subdomains does not actually leak information. Do you know if this is the case? This would also be appealing since I then don’t seem to have to create and distribute a new certificate for each service, which seems easier. I have also seen guides that don’t require A records to be created on Cloudflare, i.e. no internal IP address needs to be specified since they use DNS-01, which seems nice.

So maybe the public DNS approach can actually be private and just less hassle in the end?

I guess it comes down to whether you know of a way to run Smallstep CA in a robust way from OPNsense, or whether your experience agrees with the potential for avoiding privacy leaks using public DNS with certain internal setups? Very open to suggestions/guides you may be aware of, thanks!

You can just create a self-signed CA and server certificates on the OPNsense and use them in the Caddy plugin. There are dedicated menus for everything.

Just look at things like “System - Trust” and experiment a bit.

Import the CA you create in the browser of your devices.


Thanks for the tips! I’ve now tried to get that working as follows:

I am following this guide to create self signed certificates in OPNsense:

https://docs.opnsense.org/manual/how-tos/self-signed-chain.html

I am confused by how Caddy comes into play in your suggestion. The guide seems to suggest that I have to follow this process of creating 3 certificates (and storing the intermediate in the web browser) for each service I want to protect. It does not mention IP and port number, so I’m guessing I have to do some sort of override or something in my Unbound DNS from the port and IPs to the local domain specified in the certificates for each service.

This is a bit of work for every new service, and I’m also not sure about the security implications of just leaving these certificates lying around in my filesystem, as they seem to be in that guide…

Am I correct that you are suggesting the following more convenient alternative?

  1. Set up a single wildcard certificate chain in OPNsense
  2. Connect this to Caddy somehow, and then Caddy acts as a reverse proxy where I can configure each subdomain according to the wildcard subdomains section of your previous guide: Caddy: Reverse Proxy — OPNsense documentation
  3. I guess I will need to point my local Unbound DNS to the Caddy IP and port as upstream to get the DNS names to resolve?
  4. Put the intermediate certificate in any browser I want this to work for

If that is the case, and I can work out how to actually do it, then maybe this does strike a good balance of security and ease of use.

I still wonder about my previous question though. If I use a wildcard subdomain for my A record on Cloudflare, is it true that it doesn’t actually expose what services I’m using it for locally, and so can’t be logged publicly? This would alleviate a major concern of mine with the DNS-01 approach that you originally suggested. If that way has no issues, it may be even more convenient to use, and perhaps has the advantage that I don’t need to worry about storing certificates in my filesystem where they could be stolen in a breach?

I am very new to this stuff and am finding it quite confusing, so any help is much appreciated thanks!

Focus on just the Caddy setup and TLS certs for now. Once you are happy with that, then look at Authelia.

Authelia has an integration docs example for Caddy; you don’t have to use the OIDC/OAuth2 support it has (for your services to delegate login through), but you can use it with the Caddy directive forward_auth for a central login process as an auth gateway (if you’re familiar with basic auth, this would be an improvement on that).
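
The shape of that integration is roughly this sketch (the authelia:9091 upstream, hostnames, and endpoint path are placeholders; the exact uri depends on your Authelia version, so check their Caddy integration docs):

app.example.com {
    forward_auth authelia:9091 {
        uri /api/authz/forward-auth
        copy_headers Remote-User Remote-Groups Remote-Name Remote-Email
    }
    reverse_proxy 192.168.1.10:8080
}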

They have a Discord community where you can get assistance if you need any. Authelia is configured with YAML, while you can find alternatives that have CLI or Web UIs / APIs to use instead.


Yeah it’s nice, but again, just try to keep a narrow focus on your basic requirements. Get that working and then explore these nice-to-haves. I sometimes talk about them like they’re a walk in the park, and while they’re fairly easy once you’ve invested the time to grok them, I was quite slow to adopt each of these myself due to how unfamiliar I was, and concerns with doing so securely/correctly :sweat_smile:

Yeah, CT (Certificate Transparency) logs the publicly provisioned certs. So a wildcard is useful there when your FQDN would reveal insights into what services you’re running, which, due to the historic and searchable logs, an attacker could query to get a broader picture of your infrastructure. It’s not usually a significant vector to be concerned about; wildcard certs do have their own disadvantage if an attacker compromised the private key for that, too. Depends on the trade-off you’re comfortable with.

I’m not really a target of notable value, so my threat model to protect against is simpler. If you think about it from the attacker’s POV, they are usually automated or going for low-hanging fruit (scanning for exploits and the like); they might get involved more directly when that automation identifies such opportunities, but otherwise you’re just one of many and they only have so many resources to budget.

Anyone carrying out a targeted attack may have more resources (time, money, etc), but they’ll likewise have a cut off point too - unless they are very determined in which case they’ll probably be successful (either by skills, or other affordable avenues of attack).

You only need the root CA cert in the client trust stores. Beyond that you have Caddy managing the leaf certs for each service (or a wildcard leaf cert), which the client receives to verify trust against before establishing TLS.

Thus for internal usage with a private CA, it wouldn’t matter too much AFAIK. If you want the convenience of clients using an already trusted CA like LetsEncrypt you can go the public route instead and prefer a wildcard.

In Caddy this may be a little more annoying to manage, with handle directives as a workaround (see this comment, which talks about the same topic, for more info).
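
A sketch of that workaround with placeholder names: one wildcard site block, with a host matcher + handle per subdomain, so only the single wildcard cert is provisioned:

*.example.com {
    tls {
        dns cloudflare {env.CF_API_TOKEN}
    }

    @app1 host app1.example.com
    handle @app1 {
        reverse_proxy 192.168.1.10:8080
    }

    @app2 host app2.example.com
    handle @app2 {
        reverse_proxy 192.168.1.11:8080
    }

    # fallback for any other subdomain
    handle {
        abort
    }
}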

This is regarding the LetsEncrypt approach instead of managing private root CA certs (Smallstep or Caddy generated)?

You can rely on public DNS to point to an internal IP if you like. It’s a little bit of information that leaks, I guess, but it’s a private range and likely very common. If an attacker had access to your network they could do some port scanning anyway; it’s unlikely to be too big of a concern.

If you don’t need that convenience of public DNS, because the clients are all connected to your router and get your own local DNS queried without needing additional configuration, then that’s fine too, and you won’t need public DNS records, as your clients go through your private DNS service first anyway.

If you do go with LetsEncrypt and the Cloudflare DNS challenge however, then it’s a public TLD. Not an issue, since you own the domain. If you choose to go fully private/internal, use a special-use TLD (like .home.arpa, or possibly the recently approved .internal; definitely not .local), and these should not risk leaking to public DNS (.internal might take a while for broader adoption of rejecting it, not sure).

There are pros/cons to whatever approach you decide to go with, and that varies based on your requirements/comfort.

I can’t comment on OPNsense, I am aware of it only by name. Along with others like PiHole, it’s just not something I’ve dabbled with.

My usage for self-hosted services is still very much ephemeral, so I have it all configured internally to bring up/down. I used Smallstep CA in the past with ACME for provisioning certs for a mail server, but usually just having Caddy manage the certs works for me, or I provide Caddy with a cert I created externally via step-cli.

If you’d like guidance with how I use step-cli, I gave some examples here: 2 commands to provision root + leaf certs, 1 command to add the root CA to the trust store, then add the tls directive in your Caddyfile for the leaf cert (wildcard).
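
The Caddyfile side of that is just pointing the tls directive at the files (the paths and addresses are placeholders for wherever you put the step-cli output):

*.home.arpa {
    tls /etc/caddy/certs/wildcard.crt /etc/caddy/certs/wildcard.key
    reverse_proxy 192.168.1.10:8080
}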

The cert is a public key; it’s not a secret and is used for HTTPS / TLS. The private key is a secret that should be kept safe. You’ll want to do that regardless of how it’s provisioned, as anyone with that private key is able to claim to be the server the certificate was granted for (the explicit FQDN or wildcard).

The root CA (and any intermediates) are similar, but they tend to have longer expiry dates and are used to provision your leaf certs. Intermediates aren’t required to provision a leaf; they’re just a best practice so that the root CA (whose cert is what clients have in their trust stores) can more safely secure its private key, which is then only used to renew intermediate certs that in turn provision the leaf certs.

Look at the link I provided for step-cli examples if it helps. You can technically throw away the private key for the root CA if you don’t intend to sign anything with it again (which is fine for manual provisioning if you’re comfortable updating each client trust store again). Since my usage is typically disposable setups, I don’t really need to proactively protect this secret, as I don’t have external clients to update.

In a proper production scenario you’d have automated renewals, and your root CA cert is not something you want to update on each client frequently, so intermediates are worthwhile. Should an attacker gain access to the private key on the filesystem for the intermediate, you’ve probably got bigger worries with your security :sweat_smile:

Now that I’ve probably clarified more about cons with the fully private approach, and your own experiences trying to pursue that… you can see why many prefer to just go with LetsEncrypt and public DNS :stuck_out_tongue:

Your wildcard cert would still have its private key stored on the filesystem; you can look at the Caddy docs for where that’ll be stored. If an attacker could compromise access to the private key for that, they’d likely be able to do the same without a wildcard cert when you have it all managed by the same Caddy instance, so that disadvantage of wildcard certs is less relevant.

Likewise, the wildcard leaf cert or the root/intermediate certs in your case aren’t likely to have much difference in risk for the same reasons if it’s all centralized, especially if you only provision for that single cert.

Try LetsEncrypt with Cloudflare; it’s fairly cheap to lease a domain annually and will likely be less of a headache for you :sunglasses: You can still have your clients resolve DNS from your router for your setup; Cloudflare only needs to manage the DNS challenge for LetsEncrypt with the domain you register.

Thanks again for the great summary!

I am inspired to try fully private certificates… but for now I have opted to try the DNS-01 public DNS approach, since both seem worth learning anyway, and this method seems to avoid the privacy concerns by using wildcard DNS.

I am now trying to debug why it is not working and was hoping you could provide some insights based on what I have done to set it up:

  1. I created a Domain Override in Unbound (my local recursive DNS for all servers) to send the domain name I want to use to reach my service of interest (X.domainname.com) to the IP address of the OPNsense box, which is running the Caddy plugin.
  2. I have enabled the Caddy plugin, gave it an ACME email, specified Cloudflare as my DNS provider, and gave it an API key
  3. Created a wildcard (*) A record in Cloudflare pointing to the OPNsense internal IP address
  4. Created a domain and subdomain based on this domain in the Reverse Proxy tab of Caddy.
  5. Created an HTTP handler pointing upstream to the IP and port of the service that I want to securely connect to.

When I type the target domain into my Firefox browser (X.domainname.com) I get the error:

Can’t connect to the server. Having trouble finding that site.

So I seem to have messed something up. Do you see any obvious issues/things to improve in my approach?

Thanks for your time!

Caddy: Reverse Proxy — Help Nothing Works

This will most likely answer most of your questions and help you to troubleshoot this.

My advice here is still: do the /normal/ bog-standard setup first, and harden security later, once you have verified it working once.

Doing it your way will give you a lot of unnecessary pain figuring things out the first time.


Thanks, I will start going through that guide section to try and troubleshoot!

I agree it is best to start simple and build complexity from there. I figured that’s what I was doing by not worrying about access lists, Authelia, CrowdSec, etc. yet. I got the impression that just setting up the DNS-01 reverse proxy was meant to be simple in Caddy, and was following your link for this in your earlier post. Perhaps I misinterpreted your advice and there is an even simpler setup I should be trying first?

In either case, I have gone through this guide, checking and trying various things, and am still not able to reach my services via their domain names (they still work if I type their IP:port into the web browser as normal).

It would be great to get feedback on the steps I have followed above and see if there are any glaring mistakes. Some additional details:

  1. I am trying to use wildcard domains and subdomains, so I just specified ‘*’ for the A record in Cloudflare and pointed it at the internal IP of my OPNsense router.
  2. I have tried various combinations, though none work, e.g. using *.subname.domainname.com vs subname.domainname.com as Domains in Caddy.
  3. I have tried both with a Domain Override for subname.domainname.com pointing to the OPNsense IP address, and without one. I initially thought this would be needed, but on reading that guide further I am now feeling like it might not be? E.g. my Unbound recursive DNS will directly resolve the Cloudflare domain name, and the A record will point it straight at the OPNsense? From there, Caddy acts as a reverse proxy and matches the subdomain name to the HTTP handler pointing to my service IP and port number. Is that right?

As a test I used nslookup on my subdomain.domainname.com and get the following results:

with the Unbound override turned on:

** server can’t find subdomain.domainname.com: NXDOMAIN

With it turned off:

Non-authoritative answer:
*** Can’t find subdomain.domainname.com: No answer

Hopefully this helps to debug what is going on and it is just some simple error I have made, as I am feeling quite stuck right now!

UPDATE:

I think I was using the wrong overrides in Unbound. I swapped to using a wildcard Host Override for my domain name (rather than a Domain Override, which I think is for a different purpose entirely…).

I can now get a padlock (verified by LetsEncrypt) with the format subdomain.domainname.com by setting up each record in Caddy. This even works if I remove my A record in Cloudflare and just use the local record in Unbound DNS. I wonder if this is just a changeover delay and it will soon stop working, or whether I actually don’t need the public record at all? If the latter, it is a shame that I set up the old one, which leaked my private IP address, but not the biggest error I suppose in the grand scheme of things.

What is still not working, though, is the format subdomain1.subdomain2.domainname.com. Going to these in the web browser does show the padlock (secured), but I am presented with a white/blank page rather than my service page.

Any thoughts on this latest problem that I could try out? Thanks!

You should start simple with getting a minimal config going and expand from there:

*.example.localhost {
    tls internal
    respond "Hello world!"
}

You can run that locally on your system, and you’ll be able to access any subdomain for that via the browser with HTTPS.

  • If you let Caddy add the root CA into the trust store, there’ll be no warning about the page being potentially insecure either.
  • If you don’t want to let Caddy install its own root CA cert, you can opt out via the global option skip_install_trust. You can always undo the trust regardless with the caddy untrust CLI command.

You can then update the config to go with the ACME DNS challenge and provision an actual LetsEncrypt wildcard cert with Cloudflare (using Caddy with the cloudflare dns plugin and token as an ENV):

*.example.com {
    tls {
        dns cloudflare {env.CF_API_TOKEN}
        resolvers 1.1.1.1
    }

    respond "Hello world!"
}

You can optionally use the LetsEncrypt staging endpoint for testing. Its root CA isn’t trusted IIRC, so you will get the browser warning, but it also avoids logging the cert registration, I think? So you can explore with full FQDNs instead of wildcard certs too if you need to test that.

As a reminder, using a site address with the FQDN itself will provision separate certs instead of using an existing wildcard. You’d have to use the handle directive workaround mentioned previously with a wildcard site address to manage its subdomains (unless you use tls /path/to/cert /path/to/key in each separate site block, where that refers to your wildcard cert files).


By adapting the Caddyfile from the above, then deploying elsewhere (like on your router) with other environment changes (like using Unbound for DNS), you can better isolate where your problems are coming from.

As you’ve already made progress, feel free to ignore the advice above but it’s usually a good way to troubleshoot by simplifying the problem, breaking away the different layers so that you can pinpoint what works and what changes break it.


DNS challenge uses temporary TXT records. You don’t need anything in public DNS pointing to your private network / router if your clients are getting their DNS information elsewhere.

Caddy by default will send a 200 OK response if it’s working. That will look like a blank white page and can be confusing, hence my suggestion to test with a simple site block with the respond directive.

After you got that confirmation of it working, you can add in your reverse_proxy directive or anything else you have going on to troubleshoot that further.

Thanks a lot for all your help! I tried your simpler setup and I reckon I have a better understanding overall now; for example, I realised that by creating multiple domains for my testing, I was actually leaking public certificate records, findable on e.g. crt.sh.

I now have a reliable setup using *.domainname.com, which gives me SSL for all my services, which is really great. For some reason I still have not found a way to get *.subdomain.domainname.com format SSL certs to work; I am still just stuck on the white page with padlock output. If Caddy is meant to work with such a setup, then I probably just have something misconfigured somewhere (all done via the OPNsense GUI).

For now I guess I will leave it in this working configuration, but I would hope to fix it in future if possible. Still very open to suggestions if you do think of some obvious thing I could try out.

I will next try the other method of SSL certs that you described, by creating my own CA in a VM. This seems handy for my standalone workstations that aren’t connected to my router, which currently handles the LetsEncrypt DNS-01 certs.

Thanks again for your patience and great help!

Yes, that is what I was referring to earlier with Certificate Transparency.

A historic log of all certs provisioned for any FQDN under that domain is kept, so without wildcard certs, if an FQDN reveals anything about a service, that information (and how long it’s been available) becomes available to search engines.

Not much you can do about that now.

Take a look at the Caddy data directory; it should have the certificates it acquired/created stored there. If you find the one for that wildcard, you can then manually load it with tls /path/to/cert /path/to/key in a site block to ensure that it’s being loaded (or, on the white/blank page that loaded, try inspecting the certificate; you could do similar with curl --verbose).

If the certificate is correct, HTTPS is working; it’s just something about your site block or the service that is misconfigured, such that the response sent to the browser is the default Caddy 200 status (with no body).

With the certificate files I mentioned, you could run your own Caddyfile config with caddy locally on the same system, without the router involved if that would be quicker to experiment/troubleshoot.

I would advise clearing out the whole site block temporarily, to just use the respond "Hello!" approach I showed you earlier. That way, if it is working correctly, you’ll get some text instead of a blank page; then it’s just troubleshooting your site block or service for misconfiguration.

FWIW, Caddy will probably fulfill that need too. At least as a private CA that automates cert creation and renewal for you.

It can also be configured to be an ACME service, for other software to request certificates from.
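
A sketch of that, with placeholder names, using the pki global options and the acme_server directive:

{
    pki {
        ca home {
            name "Home Lab CA"
        }
    }
}

ca.home.arpa {
    tls internal
    acme_server {
        ca home
    }
}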

Not sure about some of the other features Smallstep CA has, I haven’t explored that too much myself.

You’re welcome! Glad you were successful :partying_face:

Yeah, that was a mistake on my part. I didn’t expose any sensitive names at least so at worst I guess it’s just a bit of noise added. The main thing is I think I understand it now so it shouldn’t happen again.

I’m not currently sure how to test these suggestions in OPNsense, since the Caddyfile seems to be autogenerated based on my GUI selections, and the docs recommend not editing it. I expect there is a way to override it manually though, so I will look into this once I’ve worked it out. It is weird that the single-level wildcard domain works but not nested ones.

In the mean time, I’d like to setup the local CA tls internal method with Caddy and handles based on the discussion you posted earlier:

I would like to do this all in Docker, for a server on a different network (but local-only usage within that network). So hopefully I can find some Docker Compose files for getting Unbound DNS (to point local hostname calls to Caddy) and Caddy running together. I hope not to have applications in that same Compose file, and to just rely on their link to Unbound and then to Caddy from other servers on that local network. I will put the Caddy root certificate into the OS trust store and web browser store for each client on that network that I want this working for.

Am I right in thinking that in such a setup I would need no further intervention for it to work, e.g. Caddy handles auto-renewal of certificates? Or maybe it’s just a matter of setting a hundred-year expiry for the certificates? Or do I need the added ACME functionality that you suggested?

Thanks again!
