Behavior of automatic HTTPS on failure to generate cert

I’m using virtual hosts and automatic HTTPS. When I add a new virtual host config and then send Caddy a SIGUSR1 to reload its config, Caddy picks up the new host and tries to get a cert for it, which is awesome and exactly the behavior I hope for (whether it happens at config reload time or the first time the site is accessed doesn’t matter to me).

Here’s my scenario: two vhosts have been added, and the DNS for one of them (let’s say the first one Caddy looks at) has not yet propagated. Let’s Encrypt rightly says “no way, pal” (actually more like “Incorrect validation certificate…”). It then seems like Caddy stops trying, and the second vhost never gets processed.

Is this correct? (Rather, is this desired/defined behavior?) Would it be simple to let Caddy try to obtain certs for all the new virtual hosts, even if one fails?

Thanks!

1 Like

Yes. :slight_smile:

Yes, but it would be incorrect. :wink: If there’s an error, especially one related to security, you’d better make sure it’s fixed!

See issue #642 for a related discussion.

Matt, thank you for your response - I can see from #642 that you’ve gone over this ground (probably more than you cared to) and I’m not trying to rehash it - please bear with me just for a minute :slight_smile:

My use case is the following: I’m doing dynamic virtual hosts, and people can point their domains at the virtual hosts my system creates. For me, it’s absolutely not a security issue if DNS has not yet resolved for one or more of those domains; I would be happy to simply not serve their vhost until DNS resolves and LE gives me/Caddy the cert.

Originally, I was going to write the code to do the LE cert generation and management, then I found Caddy so I didn’t have to. @robertp REJOICES

But given Caddy’s (intended, thought-out) behavior, it doesn’t quite work out that way. So, my questions:

  • Is this behavior something that could be implemented in a Caddy plugin? (I haven’t written Go, but it looks reasonable; I just don’t know whether plugins can meddle in that part of Caddy’s TLS lifecycle.)
  • If it could, would it be unwelcome to provide this workaround as a plugin? If not, I’d be willing to dig in and work on it.

In either event, what I’ll have to do in the meantime is check whether DNS has resolved before trying to add the vhost. Not terrible, just a bit more bookkeeping, so to speak, and not a worry for you :slight_smile:

Thanks,
Robert

Thanks for checking out that discussion!

I don’t recommend using Caddy as a certificate manager. Use it as a web server.

Well, it is a security issue, because HTTP is insecure, and you can’t use HTTPS until the DNS changes are applied by your provider.

That’s fine, but again, Caddy is not a certificate manager. (I’m working on one of those on the side as a separate project.)

For your case, I highly recommend this solution instead. Less work for me, less stress on Caddy, and your business logic is confined to your space. Sounds like an all-around win to me. :slight_smile:

So - what about my plugin questions?

Sorry, your questions weren’t really clear to me.

What behavior exactly? Setting DNS records?

TLDR: When serving virtual hosts, I’d like to be able to tell Caddy to keep trying to obtain certs (for other vhosts) if obtaining a cert for one vhost fails. I’d like to do this through configuration. I’m wondering if that (meddling in the TLS workflow) is something one would achieve with a plugin.

If there are 99 sites/vhosts that are fine and 1 whose DNS hasn’t resolved yet, and that 1 is the first one Caddy attempts, the 99 valid sites don’t get certs and don’t get served. I’d like to be able to choose (e.g., via configuration) whether Caddy will continue attempting to obtain certs when an attempt fails for one of the vhosts.

For my specific use case, serving dynamic vhosts, it’s not unusual that DNS won’t have resolved for one of them yet, and it is not a security concern for any of the other vhosts. Well, probably.

With ACME/LE, every major HTTP server will shortly do as you’ve done and build in automagic HTTPS (I’ve already read something about nginx). I would guess that Apache, nginx, or fooHTTP will build in this kind of configurability to support this use case.

I understand and agree in principle with your separation of concerns between cert management and serving HTTP. My opinion, however, is that with ACME and CAs that support it, cert management rather becomes a detail of serving HTTP; at least, that’s how, e.g., a devops mindset would view it. Again, just my opinion.

(I also think that to compete with LE, Verisign et al will start offering free renewable 90-day certs via ACME)

(I also think that TLDR should go at the top, not the bottom)

Cheers,
Robert

I think what you need is a layer above Caddy; Caddy serves sites you tell it to serve. If it can’t, it won’t. If your site isn’t ready to serve yet, don’t tell Caddy to serve it.

Message received, Matt. The DNS-resolution check is super simple, and it’s already done and deployed. Thanks for Caddy.

1 Like

Awesome. That’s definitely a better, easier way to fix your problem. :slight_smile:

Hi,

I recently discovered Caddy and am working on a use case very similar to Robert’s.

And Caddy refusing to start at all when just one website fails due to “DNS not being ready” is what led me to this conversation :).

@matt I totally understand that HTTP is not secure and that it’s OK not to serve a site over HTTP if it was intended/configured to run with HTTPS.

Yet, couldn’t we just stop that one site from being served rather than causing the whole Caddy server to fail?

As an example, we’ll be hosting 100+ sites on a Caddy server, and if one of those sites experiences a DNS issue or its domain expires, the whole Caddy server won’t be able to start. Can’t we just not start/serve that one problematic site?

And I’m so amazed by Caddy; unbelievable job and effort. Much appreciated.

Like I was suggesting above, you’re looking for functionality that lives about one or two layers above Caddy. Caddy is a tool, a web server that serves what you tell it to serve, and it does so securely. You are welcome to write a program that checks for DNS errors before you tell Caddy to serve a site, but Caddy absolutely needs DNS in place before it will be able to properly serve your site over a secure connection. If Caddy just starts without serving some sites, the web server is broken. NGINX is the same way: if you tell it to bind to a hostname that can’t resolve, it doesn’t start.
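
For illustration only, here is a minimal sketch of that kind of pre-check in Go (the program name, the server IP, and the exit behavior are placeholders; adapt it to your own provisioning layer):

// dnsready is a hypothetical pre-check run by your provisioning layer
// before a new vhost is added to the Caddyfile and Caddy is told to reload.
// It answers one question: does the domain already resolve to this
// server's public IP?
package main

import (
    "fmt"
    "net"
    "os"
)

// serverIP is a placeholder for the public IP address Caddy listens on.
const serverIP = "203.0.113.10"

func resolvesToUs(domain string) bool {
    addrs, err := net.LookupHost(domain)
    if err != nil {
        // NXDOMAIN, timeout, etc.: DNS is not ready yet.
        return false
    }
    for _, addr := range addrs {
        if addr == serverIP {
            return true
        }
    }
    return false
}

func main() {
    if len(os.Args) != 2 {
        fmt.Fprintln(os.Stderr, "usage: dnsready <domain>")
        os.Exit(2)
    }
    domain := os.Args[1]
    if !resolvesToUs(domain) {
        fmt.Printf("%s does not resolve here yet; leave it out of the Caddyfile for now\n", domain)
        os.Exit(1)
    }
    fmt.Printf("%s looks ready; add the vhost and reload Caddy\n", domain)
}

Only when a check like that passes would your tooling add the site block to the Caddyfile and signal Caddy to reload (e.g. with SIGUSR1, as mentioned earlier in the thread); the failure stays in your business logic instead of in the web server.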

Here’s a related thread: How to have bad domain in config skipped

Anyway, glad you like Caddy. :slight_smile: Thanks for sharing your appreciation. Means a lot.

1 Like

Hi Matt,

Thanks very much for the reply. Totally understand and can’t argue with the “strict starting policy”.

I was thinking that this could be an option for users who prefer to run websites where control of the DNS is not in their hands.

Thanks again and have a great day.

This is not safe. :confused: Do not do that.

Thanks! You too. :slight_smile:

This is not safe. :confused: Do not do that.

That is how the whole shared hosting system works.

I may be missing a serious point, but what makes it unsafe to host a customer’s website where the customer can change its DNS address (other than the site failing to start)?

1 Like

Ah, a customer’s website; I thought by “user” you were referring to the site owner. It’s dangerous for a site owner not to control their DNS.

:slight_smile: Thanks very much again.

Just adding an update in case anyone else is looking for a workaround for this:

defining the websites as “wildcards” and using:

tls {
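    # max_certs caps how many certificates Caddy will obtain on-demand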
    max_certs 10
}

forces Caddy to skip obtaining certificates at startup (it actually switches to on-demand issuance, getting each certificate at the first TLS handshake that needs it).

Hope it helps.

2 Likes

I first applied the DNS-checking-in-advance workaround, and it worked fine.

After that, I saw your workaround suggestion about using wildcards here: Support for OnDemand even for non-wildcard domains - #2 by matt. I applied it, it also works great, and I’m now using it in production for thousands of sites.

Thanks very much.

1 Like

This is awesome!
I am going to study this solution.

Thank you guys!


My server just got blocked by Let’s Encrypt (over 20 domains a week) :joy:
I am trying to use the DNS challenge:

tls { dns linode }

But it didn’t work.
It just hung on:

2017/05/31 23:25:12 [INFO][tarot.larvata.tw] Checking DNS record propagation using [106.187.35.20:53 106.187.36.20:53 106.187.90.5:53]

I had read that post, and I thought this configuration would be the best workaround for me:

abc.larvata.tw {
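   # per-domain on-demand TLS: the cert is obtained at the first handshake instead of at startup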
   tls { max_certs 10 } 
}

xyz.larvata.tw {
   tls { max_certs 10 } 
}

Not a wildcard domain, but every domain gets its own max_certs config.