Local IP address creates SSL error

1. Caddy version (caddy version):

v2.3.0

2. How I run Caddy:

a. System environment:

Caddy Docker image

b. Command:

Unsure; the default command the Docker image uses.

c. Service/unit/compose file:

version: "3"
services:
  caddy:
    image: caddy
    restart: always
    ports:
      - 80:80
      - 443:443
      - 8000-8100:8000-8100
    volumes:
      - ./config/Caddyfile:/etc/caddy/Caddyfile
      - ./persistent/caddy_pages:/pages
      - ./non_backup_data/caddy_data:/data
      - ./non_backup_data/caddy_config:/config

d. My complete Caddyfile or JSON config:

‘simon’ is the name my server has been given.
Usually my Caddyfile is much longer, but I swear this is all there is in my Caddyfile right now for testing!
I know the template states not to redact, but I’m not comfortable sharing my domain, and it’s not the issue here.

https://redacted.com https://192.168.1.25:8012 https://simon.local:8012 {
  reverse_proxy home_assistant:8123
  log {
    output stdout
  }
}

3. The problem I’m having:

When I connect to my .com domain or the .local address, the webpage is presented. So the local Caddy CA works and the Let’s Encrypt CA works.
However, when I connect to the IP address, I am presented with an SSL_ERROR_INTERNAL_ERROR_ALERT in Firefox, and an ERR_SSL_PROTOCOL_ERROR in Chrome.
The page does not get displayed.

4. Error messages and/or full log output:

Log from startup of the container:

caddy_1        | {"level":"info","ts":1612108050.2341197,"msg":"using provided configuration","config_file":"/etc/caddy/Caddyfile","config_adapter":"caddyfile"}
caddy_1        | {"level":"info","ts":1612108050.238755,"logger":"admin","msg":"admin endpoint started","address":"tcp/localhost:2019","enforce_origin":false,"origins":["[::1]:2019","127.0.0.1:2019","localhost:2019"]}
caddy_1        | {"level":"info","ts":1612108050.2398407,"logger":"http","msg":"server is listening only on the HTTPS port but has no TLS connection policies; adding one to enable TLS","server_name":"srv0","https_port":443}
caddy_1        | {"level":"info","ts":1612108050.240863,"logger":"http","msg":"enabling automatic HTTP->HTTPS redirects","server_name":"srv0"}
caddy_1        | {"level":"info","ts":1612108050.2409132,"logger":"http","msg":"enabling automatic HTTP->HTTPS redirects","server_name":"srv1"}
caddy_1        | {"level":"info","ts":1612108050.240822,"logger":"tls.cache.maintenance","msg":"started background certificate maintenance","cache":"0xc000373570"}
caddy_1        | {"level":"info","ts":1612108050.2810113,"logger":"tls","msg":"cleaned up storage units"}
caddy_1        | {"level":"info","ts":1612108050.3017972,"logger":"pki.ca.local","msg":"root certificate is already trusted by system","path":"storage:pki/authorities/local/root.crt"}
caddy_1        | {"level":"info","ts":1612108050.3030016,"logger":"http","msg":"enabling automatic TLS certificate management","domains":["simon.local","redacted.nl","192.168.1.25"]}
caddy_1        | {"level":"warn","ts":1612108050.304018,"logger":"tls","msg":"stapling OCSP","error":"no OCSP stapling for [simon.local]: no OCSP server specified in certificate"}
caddy_1        | {"level":"warn","ts":1612108050.3045423,"logger":"tls","msg":"stapling OCSP","error":"no OCSP stapling for [192.168.1.25]: no OCSP server specified in certificate"}
caddy_1        | {"level":"info","ts":1612108050.305942,"msg":"autosaved config","file":"/config/caddy/autosave.json"}
caddy_1        | {"level":"info","ts":1612108050.305992,"msg":"serving initial configuration"}

Then, when I try to connect with https://simon.local:8012 a couple of the following log entries are created:

caddy_1        | {"level":"info","ts":1612108215.9996169,"logger":"http.log.access.log0","msg":"handled request","request":{"remote_addr":"192.168.1.170:37796","proto":"HTTP/2.0","method":"GET","host":"simon.local:8012","uri":"/","headers":{"Accept":["text/html,application/xhtml+xml,application/xml;q=0.9,image/webp,*/*;q=0.8"],"Accept-Language":["en-GB,en;q=0.5"],"Accept-Encoding":["gzip, deflate, br"],"Upgrade-Insecure-Requests":["1"],"Dnt":["1"],"Cookie":["default-theme=ngax; type=user; google2fa_token={redacted_long_string}; XSRF-TOKEN={redacted_long_string}"],"Te":["trailers"],"User-Agent":["Mozilla/5.0 (X11; Linux x86_64; rv:85.0) Gecko/20100101 Firefox/85.0"]},"tls":{"resumed":false,"version":772,"cipher_suite":4865,"proto":"h2","proto_mutual":true,"server_name":"simon.local"}},"common_log":"192.168.1.170 - - [31/Jan/2021:15:50:15 +0000] \"GET / HTTP/2.0\" 200 3386","duration":0.002423514,"size":3386,"status":200,"resp_headers":{"Server":["Caddy","Python/3.8 aiohttp/3.7.3"],"Content-Type":["text/html; charset=utf-8"],"Content-Length":["3386"],"Date":["Sun, 31 Jan 2021 15:50:15 GMT"]}}

However, when I connect using the IP address https://192.168.1.25:8012, no log is created at all.

5. What I already tried:

Tried to decipher the logs, and tried to find any logs at all.
Searched the forum and the documentation.
Searched Google for solutions.

However, I have not found this specific problem anywhere. Many topics cover issues with the local CA or accepting certificates, but here there seems to be an error in the handling of IP addresses.

If I try the following Caddyfile (to ensure there is no conflict between the IP and the .local address), the issue remains.

https://192.168.1.25:8012 {
  reverse_proxy home_assistant:8123
  log {
    output stdout
  }
}

I have also tried a simple file server, with the same result:

https://192.168.1.25:8012 {
  root * /pages/
  file_server
  log {
    output stdout
  }
}

6. Links to relevant resources:

What shows in the logs when you enable debug mode?

Thanks for your reply!
Nothing extra; still no log is created.
Actually, I have also switched to logging to a file:

https://redacted.com https://192.168.1.25:8012 https://simon.local:8012 {
  reverse_proxy home_assistant:8123
  log {
    level DEBUG
    output file /pages/log/caddy_test.log
  }
}

Because I thought maybe something was amiss with the stdout logging.

No log file is created on startup of Caddy (which is to be expected, I assume).
No log file is created when I browse to https://192.168.1.25:8012.
When I browse to https://simon.local:8012, a log file is created and filled with entries similar to those I pasted in item 4 of my original post.

Could something in the routing be wrong? It seems Caddy does not recognise at all that the request should be handled by this block in the Caddyfile.

Yeah, if Caddy isn’t logging anything in debug mode when you make a request to it, then you’re not actually reaching Caddy.

Thanks for thinking with me on this issue.

Do you have any idea where I could look to troubleshoot this?
I’m not sure why it wouldn’t reach Caddy.
Disabling HTTPS simply works, so the IP and port are correct (see config below). The SSL port and CA setup also work, as HTTPS to the .local address works.

http://192.168.1.25:8012 {
  reverse_proxy home_assistant:8123
}

The log directive configures HTTP access logs only, not other logs. If you can’t even connect, you won’t get any access logs. Other debug logs will be written to stderr. Use the global option to enable debug mode at the top of the Caddyfile instead.

Although it wasn’t immediately obvious how, I enabled debug logging. Turns out it was as simple as:

{
  debug
}

https://redacted.com https://192.168.1.25:8012 https://simon.local:8012 {
  reverse_proxy home_assistant:8123
}

This results in the following, rather brief, log line being spat out.

caddy_1        | {"level":"debug","ts":1612168776.013658,"logger":"http.stdlib","msg":"http: TLS handshake error from 192.168.1.164:53180: no certificate available for '172.19.0.8'"}

This points to the Docker abstraction as the culprit. Caddy is creating a certificate for 192.168.1.25, but the address that’s ‘requested’ is 172.19.0.8, an internal Docker address.
This post seems to support this:

It doesn’t provide an answer, other than configuring a domain name (which, of course, I am already doing via the .local domain).

I have not been able to find any solution so far.


I found another thread discussing this problem.
Again, it doesn’t supply a solution; it states the issue is resolved by using a domain name, and that’s that.
I would like to be able to use the internal IP address too.

I found another thread with exactly the same issue as in this thread, but the issue was not resolved there either, and the thread simply died.

Does anyone know how to resolve this?

Yep, so, if that is indeed an identifier by which your server is known, you need to add 172.19.0.8 to your site’s addresses so Caddy knows to manage a certificate for that IP. It will be a locally-trusted cert of course, not one that you can get from a free public CA.

Hmm, so there would not be a way to instruct Caddy to ‘listen’ for the IP address of the server itself?

I know I could use network_mode: "host" to force the docker container to assume the IP address (and other network properties) of the host, but I’d prefer not to do so.
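For reference, that approach would look something like this in the compose file (just the relevant lines, and not what I’m actually running):

```yaml
services:
  caddy:
    image: caddy
    restart: always
    # Share the host's network stack: the container uses the host's IP
    # directly, and any ports: mappings are ignored.
    network_mode: "host"
```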
I did try to add the Docker IP to the Caddyfile:

{
  debug
}

https://redacted.com https://172.19.0.12:8012 https://simon.local:8012 {
  reverse_proxy home_assistant:8123
}

While this works, it does present a warning: SSL_ERROR_BAD_CERT_DOMAIN in Firefox.
Understandable, because Firefox requested 192.168.1.25 but received a certificate for 172.19.0.12 instead.
This is okay for now, as connecting to the local IP is only a fallback for me. I would only need it if mDNS, and thus the .local domain name, failed to resolve for some reason.

The problem with listing the Docker container’s IP address in the Caddyfile is that it can change. The IP address of the container stated in post 7 has since changed from 172.19.0.8 to 172.19.0.12, simply by restarting.


I was trying to resolve the IP randomness by somehow setting an environment variable to the current Docker container address. While searching on Google, I ‘accidentally’ stumbled upon this thread:

It could provide a solution, but I’m scratching my head configuring it.
The problem is that I know where the request is coming from: a .com or .local domain, or a random Docker IP. How do I combine both known and unknown sites, while enforcing HTTPS everywhere?
What I’m currently using:

https://redacted.com {
  reverse_proxy home_assistant:8123
}
:8012 {
  tls internal {
    on_demand
  }
  reverse_proxy home_assistant:8123
}

But this results in HTTP also working, something which I was trying to prevent with this new configuration.

https://172.19.0.*:8012 https://simon.local:8012 {
  tls internal {
    on_demand
  }
  reverse_proxy home_assistant:8123
}

Interestingly, this configuration also does not work, and results in the SSL_ERROR_INTERNAL_ERROR_ALERT message in Firefox, along with the following debug log:

{"level":"debug","ts":1612453283.9875581,"logger":"http.stdlib","msg":"http: TLS handshake error from 192.168.1.164:33018: no certificate available for '172.19.0.12'"}

You can configure static IPs in your docker-compose network.
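For example, something like this (an untested sketch; the network name, subnet, and address are placeholders, so pick ones matching your environment):

```yaml
services:
  caddy:
    image: caddy
    networks:
      caddy_net:
        # Pin the container to a fixed address on the bridge network
        ipv4_address: 172.19.0.100

networks:
  caddy_net:
    driver: bridge
    ipam:
      config:
        - subnet: 172.19.0.0/16
```

That way, the container IP you list in your Caddyfile won’t change between restarts.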

Another option is setting up On-Demand TLS which uses your local CA, which would work for serving IP certs on the fly, but it would also open the door for abuse.

If I were you, I’d just run a local DNS server (something like CoreDNS possibly) that resolves whatever your redacted.com is, to the LAN IP of your machine running Docker/Caddy. That way, requests from within your LAN will resolve to the LAN IP, and requests from outside will use the WAN IP.

A CoreDNS Corefile might look like this:


. {
    hosts {
        192.168.1.25 redacted.com
        fallthrough
    }
    forward . 8.8.8.8
}

So it would resolve redacted.com to 192.168.1.25, and anything else will be forwarded to 8.8.8.8, i.e. Google’s DNS. You can change this however you like.

Then configure your router or your individual machines to use the IP of whatever machine you run CoreDNS on, as their DNS server instead.


Probably because 172.19.0.* is not a valid wildcard identifier. According to the spec, wildcards must be left-most domain labels (see RFC 6125 - Representation and Verification of Domain-Based Application Service Identity within Internet Public Key Infrastructure Using X.509 (PKIX) Certificates in the Context of Transport Layer Security (TLS) and RFC 2818 - HTTP Over TLS), and you cannot have wildcards for IP addresses at all (though Caddy does not make this distinction, since from a text string alone, domain names can sometimes look like IP addresses).

By getting rid of the host portion of the address, i.e. leaving only https://, you could get it to work. But do what Francis says and restrict which names are allowed to get certificates if your server is public-facing. For simple cases like yours, you can actually do it completely within your Caddy config: define an internally-accessible-only route that matches on the IP ranges and domain names you want to allow certificates for and replies with 200, and otherwise reply with 4xx.
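A rough sketch of that idea (untested; port 5555, the /check path, and the allowed values are placeholders to adjust, and I’m assuming an empty-host address like https://:8012 for your port):

```
{
  on_demand_tls {
    # Caddy calls this endpoint with ?domain=<name> before issuing a cert
    ask http://localhost:5555/check
  }
}

http://localhost:5555 {
  # Allow only these requested names; multiple query pairs are OR'ed
  @allowed query domain=simon.local domain=192.168.1.25
  respond @allowed 200
  respond 403
}

https://:8012 {
  tls internal {
    on_demand
  }
  reverse_proxy home_assistant:8123
}
```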


Thanks for both of your replies and suggestions.

The wildcard at the end is indeed not valid. Later I found some complaints about this configuration in my logs.

I ended up giving the Caddy container a static IP address in docker. This works great, apart from the remaining SSL_ERROR_BAD_CERT_DOMAIN error.

Maybe eventually I will configure a DNS server, but I’m not comfortable running it on the same server as Caddy and Home Assistant: if I ever turn that server off for maintenance, it would cause problems.
Eventually I will probably get extra hardware or a better router.

Thanks a lot again for helping me out!

That’s why routers etc. let you configure both a primary and a secondary DNS server! If your first one is down, it should use the secondary without trouble.

But fair enough.

Ah, that would make sense of course.
I have spotted an option in my router’s configuration to set up custom DNS addresses.

If I were to use a custom DNS server, I could indeed point my external domain name directly to my server’s address. But what would I do for other services that are not available externally?
Then I would simply be setting the simon.local address to the IP address of my server, and I’d be replicating what mDNS is for.

This topic was automatically closed after 30 days. New replies are no longer allowed.