Internal/external acme+dns setup

Some of these issues are a continuation of the discussion in this thread. Here I want to focus on the general ACME DNS setup I want to build. First, let me describe my setup and what I want to achieve:

  • I have a domain example.com with its DNS managed by the domain registrar.
  • On the internal network I am overriding example.com with local addresses and adding local-only records. I have a few reasons to do so:
    • Retain publicly trusted certificates for some services. For example, Python in some virtualenvs does not trust the system certificate store, and this is the best workaround for now.
    • Avoid routing traffic out to the router and back when talking to local services.
    • Allow SSH and similar access from local-network IPs without opening the ports publicly (port 22), and/or when some ports cannot be opened to the public at all (port 25).
  • I have a few non-HTTP services (SMTP/IMAP etc.) for which I want Caddy to handle automatic certificate management. Some should be issued by the local Caddy CA, some by public Let's Encrypt, and some by a local step-ca.
  • I want to set up client certificate authentication, which requires: exposing a CA server (locally), issuing a server certificate from this CA (hopefully managed by Caddy), and issuing client certificates from the same CA (each local user creates their own). Branching a CA off the same root CA as the one managed by Caddy would be ideal here.
  • Bonus: expose the Caddy API on the internal network.

Some of the issues I have so far:

  • Can't obtain publicly signed certificates via the DNS challenge while the local DNS override is active.
    • Probably because it cannot check whether the records written by the plugin have propagated.
    • If it's possible to define which DNS server to use for the public setup, that would solve half of the issues.
  • I'd like to solve DNS challenges for A records that are only present in local DNS, but have the challenge checked against public DNS, i.e. only the TXT record is verified.
  • It's unclear what the global option acme_dns <provider> is for.
  • Is it possible to set up tls.dns <provider_name> [<params...>] for the whole domain? Not necessarily wildcard TLS, just not having to specify the provider and API credentials for each subdomain.
  • Cannot connect to the acme_server via the step CLI.

Some of the ACME issues would be solved by using a separate step-ca, where I can specify the DNS server and handle certificate renewal for the non-HTTP services separately; I'll try to set that up next.

Any advice on how to make this setup work better?

Yep, this is pretty typical. This is usually called “split DNS”.

I do this for my home server using CoreDNS

Just use the admin global option to change the admin API listen address. By default, it’s localhost:2019, which is only accessible on the same machine, but you could change this to :2019, which would allow connecting from any source. Make sure not to port forward this port, though, because otherwise you would be exposing the admin API to the public, and bad actors could do bad things.
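For example, in the global options block at the top of your Caddyfile (a sketch; pick whatever listen address suits your network):

```Caddyfile
{
   # listen on all interfaces instead of the default localhost:2019
   admin :2019
}
```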

That’s not a problem. The DNS challenge is validated by the ACME issuer, which knows nothing of your internal network. You don’t need to worry about split DNS here, as long as you’re able to use a DNS provider module that can update the public DNS records. You haven’t said which DNS provider you’ll use, though.

That’s a way to configure the DNS challenge globally, for all relevant issuers. Unfortunately there’s a known issue with it in v2.3.0 which will be fixed in the next release (already fixed on the master branch), so I’d recommend using the tls subdirective for now.
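In the meantime, the per-site config looks something like this (a sketch using the gandi provider from your setup; the env variable name is just an example):

```Caddyfile
example.com {
   tls {
      dns gandi {env.GANDI_API_TOKEN}
   }
}
```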

You can make use of the Caddyfile’s “snippets” feature to reuse config in multiple site blocks:
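Something along these lines (a sketch; the site names and upstream ports are placeholders, and the gandi provider is taken from your setup):

```Caddyfile
(dns_tls) {
   tls {
      dns gandi {env.GANDI_API_TOKEN}
   }
}

sub1.example.com {
   import dns_tls
   reverse_proxy localhost:8001
}

sub2.example.com {
   import dns_tls
   reverse_proxy localhost:8002
}
```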

I don’t know what you mean. You haven’t shared your config or logs, so I can’t do much to help.


I will get back to the rest of the problems when I’m home and can gather the error logs. For now, the ones I can respond to:

The setup I’m currently testing on is caddy version v2.3.0 h1:fnrqJLa3G5vfxcxmOH/+kJOcunPLhSBnjgIvjXV/QTA=; however, I had similar issues on a head build made with xcaddy.

I’ve tried this in combination with origins and enforce_origin, but then even systemctl reload caddy fails. The syntax I used is:

   admin :2019 {
      origins 127.0.0./8 192.168.0.0/16
      enforce_origin
   }

And the error I get:

"error":"client is not allowed to access from origin :2019"

Or, when I access localhost:2019/config for example:

{"error":"missing required Origin header"}

I want to route and reverse_proxy with something like this, but still have Caddy block unwanted origin IPs:

caddy.example.com {
   handle_path /server1/* {
      reverse_proxy server1.example.com:2019
   }
   handle_path /server2/* {
      reverse_proxy server2.example.com:2019
   }
#   reverse_proxy :2019
} 

For the split-DNS issue, I am using tls dns gandi for external, and I will work on getting tls dns powerdns working for local as well. I will have to get back to my home setup to collect the error logs for this issue, but basically, if the local machine has nameserver 127.0.0.1 first in /etc/resolv.conf, tls dns gandi fails no matter how the internal and external A records are set.

For the acme_server issue the Caddyfile I use is like this:

acme.example.com {
   acme_server
}

Accessing https://acme.example.com/acme/local/directory works well, but when I try to run a step ca command, e.g.:

step ca certificate test.example2.com test.crt test.key --ca-url https://acme.example.com/acme --root ./cady_root.crt 

I get issues like:

error reading https://acme.example.com/provisioners?limit=100: EOF
error reading https://acme.example.com/roots: EOF
....

Indeed, those paths are not served by the acme_server. Ideally I would want to be able to create child CAs from acme.example.com in order to separate SMTP client certificates from other services. I’m not sure if there is any support in Caddy for running step ca init and step ca bootstrap to create additional child CAs, or how to chain different ACME federations across multiple Caddy servers with acme_server.

That’s because Caddy uses the admin API to perform config reloads, and because:

The origins option doesn’t support CIDRs; it matches against the Host and Origin headers (plus you seem to have a typo in there, missing a number before /8, not that it would work anyway).

Use the remote_ip matcher to handle requests you wish to block:
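For example (a minimal sketch; adjust the CIDRs to your network):

```Caddyfile
caddy.example.com {
   @denied not remote_ip 127.0.0.0/8 192.168.0.0/16
   respond @denied "Forbidden" 403

   # only local clients get this far
   reverse_proxy server1.example.com:2019
}
```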

I think PowerDNS supports RFC2136, right? I started work on a plugin to support that, but it’s untested (it’s also missing the caddy-dns piece, but that’s trivial). I don’t really plan on testing it or using it myself; I just felt like taking a crack at implementing it the other day. Feel free to try it out, or let me know if you need help.

Yeah, logs will be needed. I don’t really understand the problem.

I think you have a typo here, cady_root.crt :thinking:

But anyways, @matt should be able to answer that bit re acme_server.

I have tested with 127.0.0.1 and all of the local IP addresses, and it still gives the same error. I guess the best resolution to this is to firewall port 2019 instead.

I didn’t know this was already available. Works like a charm, although is it possible to simplify the configuration, e.g. by nesting the matchers like @server1 {@ips...}? My current config:

caddy.example.com {
   @server1 {
      path /server1/*
      remote_ip 127.0.0.0/8 192.168.0.0/16
   }
   handle @server1 {
      uri strip_prefix /server1
      reverse_proxy server1.example.com:2019
   }
   @server2 {
      path /server2/*
      remote_ip 127.0.0.0/8 192.168.0.0/16
   }
   handle @server2 {
      uri strip_prefix /server2
      reverse_proxy server2.example.com:2019
   }
#   reverse_proxy :2019
} 

Indeed so, and it also has a simple API which I was planning to target, as well as third-party admin tools like PowerDNS-Admin which would offer the appropriate access control. Thanks for the RFC2136 approach; I will use it to test the overall framework until I learn enough Go to hack together a plugin for the other interface methods (DynDNS2, to be exact).

Most of my typos here are from anonymizing my setup. If it were the wrong .crt, I would get a more obvious error.

Did you actually make the request with the Origin header set?

Yep:

handle_path /server1/* {
	@blocked not remote_ip 127.0.0.0/8 192.168.0.0/16
	respond @blocked "Nope" 403

	reverse_proxy server1.example.com:2019
}

handle {
	# Fallback for any otherwise unhandled requests
}

You can also make use of snippets to avoid duplication:
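For example (a sketch based on your config; the snippet name is arbitrary):

```Caddyfile
(block_external) {
   @blocked not remote_ip 127.0.0.0/8 192.168.0.0/16
   respond @blocked "Nope" 403
}

handle_path /server1/* {
   import block_external
   reverse_proxy server1.example.com:2019
}

handle_path /server2/* {
   import block_external
   reverse_proxy server2.example.com:2019
}
```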

I’ve tried with systemctl reload caddy, set up from this part of the documentation:

ExecReload=/usr/bin/caddy reload --config /etc/caddy/Caddyfile

I’m not sure whether the Origin header is set in that case.

Will the CIDR format be implemented for origins? If not, remote_ip might be sufficient, although I’m not sure what the security implications are in either case.

You’ll likely need to reload like this instead:

curl -X POST "http://localhost:2019/load" \
	-H "Content-Type: text/caddyfile" \
	-H "Origin: whatever" \
	--data-binary @/etc/caddy/Caddyfile

But I don’t think what you’re trying to do makes sense (see below).

Those are two separate concepts altogether. One is for the admin API, and the other is for your actual sites you’re serving.

There are no existing plans for augmenting the enforce_origin feature, but Caddy v2.4.0 will introduce new admin features that may be what you’re hoping for:

What are you looking to do with the API though? Because generally it’s either-or on Caddyfile vs JSON+API. Since Caddyfile to JSON is one-way, any time you make config changes via the JSON API, those changes will be lost the next time you reload from the Caddyfile.

(Although Caddy does persist an autosave.json which can be paired with the --resume option to make Caddy load from that on initial startup instead – you should probably use the caddy-api service instead of caddy if you’re going that route.)

So either go all-in on JSON (you may use the Caddyfile adapter as a basis for your initial JSON config), or go all-in on the Caddyfile and limit yourself to the functionality it provides.

My plan is to have a basic Caddyfile configuration for initial deployment and tests, then switch to JSON+API and implement the remaining features from there, like the non-HTTP certificate management. I think I’ll eventually transition to a JSON only approach, and hopefully the caddy web-gui or TygerCaddy will be fully developed by then.

The new features in v2.4 look interesting. Maybe I’ll be able to configure one Caddy server to start from another and serve them as a cluster. Will it be possible to save the JSON configuration in a SQL or Redis database?

Yeah, but you’ll probably want to save it there yourself (e.g. whenever you make an API change, call GET /config/ to get the whole thing and save it to your DB). Then, for Caddy startup, you could either implement an API endpoint that reads the config out of the database, or implement a config loader module like caddy.config_loaders.redis if you want to go more direct.

Tyger seems dead; it’s been archived: GitHub - morph1904/Tyger2: A Reverse Proxy Application

Yes, I’ve checked the official site https://tygercaddy.co.uk/ and the linked GitLab, but can’t find any info on the development either. The Docker container was updated a month ago, so I can only hope there will be new updates some day.

Hopefully this is the relevant part of the log for the DNS challenge issue. I’ve edited some minor details.

Summary
Feb 15 21:40:58 HostName systemd[1]: Started Caddy.
Feb 15 21:40:58 HostName caddy[80323]: caddy.HomeDir=/var/lib/caddy
Feb 15 21:40:58 HostName caddy[80323]: caddy.AppDataDir=/var/lib/caddy/.local/share/caddy
Feb 15 21:40:58 HostName caddy[80323]: caddy.AppConfigDir=/var/lib/caddy/.config/caddy
Feb 15 21:40:58 HostName caddy[80323]: caddy.ConfigAutosavePath=/var/lib/caddy/.config/caddy/autosave.json
Feb 15 21:40:58 HostName caddy[80323]: caddy.Version=v2.3.0
Feb 15 21:40:58 HostName caddy[80323]: runtime.GOOS=linux
Feb 15 21:40:58 HostName caddy[80323]: runtime.GOARCH=amd64
Feb 15 21:40:58 HostName caddy[80323]: runtime.Compiler=gc
Feb 15 21:40:58 HostName caddy[80323]: runtime.NumCPU=4
Feb 15 21:40:58 HostName caddy[80323]: runtime.GOMAXPROCS=4
Feb 15 21:40:58 HostName caddy[80323]: runtime.Version=go1.15.5
Feb 15 21:40:58 HostName caddy[80323]: os.Getwd=/
Feb 15 21:40:58 HostName caddy[80323]: LANG=en_US.UTF-8
Feb 15 21:40:58 HostName caddy[80323]: PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin
Feb 15 21:40:58 HostName caddy[80323]: HOME=/var/lib/caddy
Feb 15 21:40:58 HostName caddy[80323]: LOGNAME=caddy
Feb 15 21:40:58 HostName caddy[80323]: USER=caddy
Feb 15 21:40:58 HostName caddy[80323]: INVOCATION_ID=e74bae9163984a54bc5f7103b748174d
Feb 15 21:40:58 HostName caddy[80323]: JOURNAL_STREAM=8:2452170
Feb 15 21:40:58 HostName caddy[80323]: GANDI_API_TOKEN=***************
Feb 15 21:40:58 HostName caddy[80323]: {"level":"info","ts":1613392858.4652934,"msg":"using provided configuration","config_file":"/etc/caddy/Caddyfile","config_adapter":""}
Feb 15 21:40:58 HostName caddy[80323]: {"level":"info","ts":1613392858.4680827,"logger":"admin","msg":"admin endpoint started","address":"tcp/localhost:2019","enforce_origin":false,"origins":["localhost:2019","[::1]:2019","127.0.0.1:2019"]}
Feb 15 21:40:58 HostName caddy[80323]: {"level":"info","ts":1613392858.4686565,"logger":"http","msg":"server is listening only on the HTTPS port but has no TLS connection policies; adding one to enable TLS","server_name":"srv0","https_port":443}
Feb 15 21:40:58 HostName caddy[80323]: {"level":"info","ts":1613392858.4686894,"logger":"http","msg":"enabling automatic HTTP->HTTPS redirects","server_name":"srv0"}
Feb 15 21:40:58 HostName caddy[80323]: {"level":"info","ts":1613392858.4696753,"logger":"http","msg":"enabling automatic TLS certificate management","domains":["test-www6.example.com"]}
Feb 15 21:40:58 HostName caddy[80323]: {"level":"info","ts":1613392858.4701457,"msg":"autosaved config","file":"/var/lib/caddy/.config/caddy/autosave.json"}
Feb 15 21:40:58 HostName caddy[80323]: {"level":"info","ts":1613392858.4701688,"msg":"serving initial configuration"}
Feb 15 21:40:58 HostName caddy[80323]: {"level":"info","ts":1613392858.4705825,"logger":"tls.obtain","msg":"acquiring lock","identifier":"test-www6.example.com"}
Feb 15 21:40:58 HostName caddy[80323]: {"level":"info","ts":1613392858.4709985,"logger":"tls.obtain","msg":"lock acquired","identifier":"test-www6.example.com"}
Feb 15 21:40:58 HostName caddy[80323]: {"level":"info","ts":1613392858.4720163,"logger":"tls.cache.maintenance","msg":"started background certificate maintenance","cache":"0xc000402fc0"}
Feb 15 21:40:58 HostName caddy[80323]: {"level":"info","ts":1613392858.4897351,"logger":"tls","msg":"cleaned up storage units"}
Feb 15 21:40:58 HostName caddy[80323]: {"level":"info","ts":1613392858.5096207,"logger":"tls.issuance.acme","msg":"waiting on internal rate limiter","identifiers":["test-www6.example.com"]}
Feb 15 21:40:58 HostName caddy[80323]: {"level":"info","ts":1613392858.5096762,"logger":"tls.issuance.acme","msg":"done waiting on internal rate limiter","identifiers":["test-www6.example.com"]}
Feb 15 21:41:00 HostName caddy[80323]: {"level":"info","ts":1613392860.1417341,"logger":"tls.issuance.acme.acme_client","msg":"trying to solve challenge","identifier":"test-www6.example.com","challenge_type":"dns-01","ca":"https://acme-v02.api.letsencrypt.org/directory"}
Feb 15 21:43:06 HostName caddy[80323]: {"level":"error","ts":1613392986.1164048,"logger":"tls.obtain","msg":"will retry","error":"[test-www6.example.com] Obtain: [test-www6.example.com] solving challenges: waiting for solver *certmagic.DNS01Solver to be ready: timed out waiting for record to fully propagate; verify DNS provider configuration is correct - last error: <nil> (order=https://acme-v02.api.letsencrypt.org/acme/order/92406927/7923469431) (ca=https://acme-v02.api.letsencrypt.org/directory)","attempt":1,"retrying_in":60,"elapsed":127.645385583,"max_duration":2592000}
Feb 15 21:44:07 HostName caddy[80323]: {"level":"info","ts":1613393047.4827397,"logger":"tls.issuance.acme.acme_client","msg":"trying to solve challenge","identifier":"test-www6.example.com","challenge_type":"dns-01","ca":"https://acme-staging-v02.api.letsencrypt.org/directory"}
Feb 15 21:46:13 HostName caddy[80323]: {"level":"error","ts":1613393173.1980343,"logger":"tls.obtain","msg":"will retry","error":"[test-www6.example.com] Obtain: [test-www6.example.com] solving challenges: waiting for solver *certmagic.DNS01Solver to be ready: timed out waiting for record to fully propagate; verify DNS provider configuration is correct - last error: <nil> (order=https://acme-staging-v02.api.letsencrypt.org/acme/order/15223924/240897156) (ca=https://acme-staging-v02.api.letsencrypt.org/directory)","attempt":2,"retrying_in":120,"elapsed":314.727015398,"max_duration":2592000}
Feb 15 21:48:14 HostName caddy[80323]: {"level":"info","ts":1613393294.5295618,"logger":"tls.issuance.acme.acme_client","msg":"trying to solve challenge","identifier":"test-www6.example.com","challenge_type":"dns-01","ca":"https://acme-staging-v02.api.letsencrypt.org/directory"}

Running the same setup, but this time with the local DNS disabled, everything works fine:

Summary
Feb 15 21:59:20 HostName systemd[1]: Started Caddy.
Feb 15 21:59:20 HostName caddy[83550]: caddy.HomeDir=/var/lib/caddy
Feb 15 21:59:20 HostName caddy[83550]: caddy.AppDataDir=/var/lib/caddy/.local/share/caddy
Feb 15 21:59:20 HostName caddy[83550]: caddy.AppConfigDir=/var/lib/caddy/.config/caddy
Feb 15 21:59:20 HostName caddy[83550]: caddy.ConfigAutosavePath=/var/lib/caddy/.config/caddy/autosave.json
Feb 15 21:59:20 HostName caddy[83550]: caddy.Version=v2.3.0
Feb 15 21:59:20 HostName caddy[83550]: runtime.GOOS=linux
Feb 15 21:59:20 HostName caddy[83550]: runtime.GOARCH=amd64
Feb 15 21:59:20 HostName caddy[83550]: runtime.Compiler=gc
Feb 15 21:59:20 HostName caddy[83550]: runtime.NumCPU=4
Feb 15 21:59:20 HostName caddy[83550]: runtime.GOMAXPROCS=4
Feb 15 21:59:20 HostName caddy[83550]: runtime.Version=go1.15.5
Feb 15 21:59:20 HostName caddy[83550]: os.Getwd=/
Feb 15 21:59:20 HostName caddy[83550]: LANG=en_US.UTF-8
Feb 15 21:59:20 HostName caddy[83550]: PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin
Feb 15 21:59:20 HostName caddy[83550]: HOME=/var/lib/caddy
Feb 15 21:59:20 HostName caddy[83550]: LOGNAME=caddy
Feb 15 21:59:20 HostName caddy[83550]: USER=caddy
Feb 15 21:59:20 HostName caddy[83550]: INVOCATION_ID=4b152798ecdd4d51b881525160c5f6bf
Feb 15 21:59:20 HostName caddy[83550]: JOURNAL_STREAM=8:3450425
Feb 15 21:59:20 HostName caddy[83550]: GANDI_API_TOKEN=*********
Feb 15 21:59:20 HostName caddy[83550]: {"level":"info","ts":1613393960.6904345,"msg":"using provided configuration","config_file":"/etc/caddy/Caddyfile","config_adapter":""}
Feb 15 21:59:20 HostName caddy[83550]: {"level":"info","ts":1613393960.69307,"logger":"admin","msg":"admin endpoint started","address":"tcp/localhost:2019","enforce_origin":false,"origins":["localhost:2019","[::1]:2019","127.0.0.1:2019"]}
Feb 15 21:59:20 HostName caddy[83550]: {"level":"info","ts":1613393960.6934047,"logger":"tls.cache.maintenance","msg":"started background certificate maintenance","cache":"0xc0004a5d50"}
Feb 15 21:59:20 HostName caddy[83550]: {"level":"info","ts":1613393960.6935573,"logger":"http","msg":"server is listening only on the HTTPS port but has no TLS connection policies; adding one to enable TLS","server_name":"srv0","https_port":443}
Feb 15 21:59:20 HostName caddy[83550]: {"level":"info","ts":1613393960.6935859,"logger":"http","msg":"enabling automatic HTTP->HTTPS redirects","server_name":"srv0"}
Feb 15 21:59:20 HostName caddy[83550]: {"level":"info","ts":1613393960.6944902,"logger":"http","msg":"enabling automatic TLS certificate management","domains":["test-www7.example.com"]}
Feb 15 21:59:20 HostName caddy[83550]: {"level":"info","ts":1613393960.6948476,"msg":"autosaved config","file":"/var/lib/caddy/.config/caddy/autosave.json"}
Feb 15 21:59:20 HostName caddy[83550]: {"level":"info","ts":1613393960.6948757,"msg":"serving initial configuration"}
Feb 15 21:59:20 HostName caddy[83550]: {"level":"info","ts":1613393960.6949959,"logger":"tls.obtain","msg":"acquiring lock","identifier":"test-www7.example.com"}
Feb 15 21:59:20 HostName caddy[83550]: {"level":"info","ts":1613393960.6953661,"logger":"tls.obtain","msg":"lock acquired","identifier":"test-www7.example.com"}
Feb 15 21:59:20 HostName caddy[83550]: {"level":"info","ts":1613393960.7131727,"logger":"tls","msg":"cleaned up storage units"}
Feb 15 21:59:20 HostName caddy[83550]: {"level":"info","ts":1613393960.7340736,"logger":"tls.issuance.acme","msg":"waiting on internal rate limiter","identifiers":["test-www7.example.com"]}
Feb 15 21:59:20 HostName caddy[83550]: {"level":"info","ts":1613393960.7341263,"logger":"tls.issuance.acme","msg":"done waiting on internal rate limiter","identifiers":["test-www7.example.com"]}
Feb 15 21:59:22 HostName caddy[83550]: {"level":"info","ts":1613393962.3324714,"logger":"tls.issuance.acme.acme_client","msg":"trying to solve challenge","identifier":"test-www7.example.com","challenge_type":"dns-01","ca":"https://acme-v02.api.letsencrypt.org/directory"}
Feb 15 21:59:44 HostName caddy[83550]: {"level":"info","ts":1613393984.2012053,"logger":"tls.issuance.acme.acme_client","msg":"validations succeeded; finalizing order","order":"https://acme-v02.api.letsencrypt.org/acme/order/92406927/7923712404"}
Feb 15 21:59:45 HostName caddy[83550]: {"level":"info","ts":1613393985.1136372,"logger":"tls.issuance.acme.acme_client","msg":"successfully downloaded available certificate chains","count":2,"first_url":"https://acme-v02.api.letsencrypt.org/acme/cert/04bfd7580f0a05ac3f33da0663a51f9eb71d"}
Feb 15 21:59:45 HostName caddy[83550]: {"level":"info","ts":1613393985.1143544,"logger":"tls.obtain","msg":"certificate obtained successfully","identifier":"test-www7.example.com"}
Feb 15 21:59:45 HostName caddy[83550]: {"level":"info","ts":1613393985.1143756,"logger":"tls.obtain","msg":"releasing lock","identifier":"test-www7.example.com"}

I did check that all the records had propagated before starting Caddy each time.

That seems like a truncated paste; it should be https://acme.example.com/acme/local/directory like you posted directly above it.

I did try all sensible endpoints, but as you can see in the error log, it strips the entire URI and looks for /roots, /provisioners, etc. at the root URL.

Solved the DNS challenge issue with this snippet:

(public_tls) {
   tls {
      issuer acme {
         dns gandi {env.GANDI_API_TOKEN}
         resolvers 1.1.1.1
      }
   }
}

I’d appreciate it if this could be set globally, including the resolver, as well.
