Redirect from a bare IP over HTTPS

1. Caddy version (caddy version):

v2.4.3 h1:Y1FaV2N4WO3rBqxSYA8UZsZTQdN+PwcoOcAiZTM8C0I=

2. How I run Caddy:

a. System environment:

$ uname -a
Linux ip-172-31-31-101 5.4.0-1045-aws #47-Ubuntu SMP Tue Apr 13 07:04:23 UTC 2021 aarch64 aarch64 aarch64 GNU/Linux

$ docker version
Client: Docker Engine - Community
 Version:           20.10.7
 API version:       1.41
 Go version:        go1.13.15
 Git commit:        f0df350
 Built:             Wed Jun  2 11:57:03 2021
 OS/Arch:           linux/arm64
 Context:           default
 Experimental:      true

Server: Docker Engine - Community
 Engine:
  Version:          20.10.7
  API version:      1.41 (minimum version 1.12)
  Go version:       go1.13.15
  Git commit:       b0f5bc3
  Built:            Wed Jun  2 11:55:14 2021
  OS/Arch:          linux/arm64
  Experimental:     false
 containerd:
  Version:          1.4.6
  GitCommit:        d71fcd7d8303cbf684402823e425e9dd2e99285d
 runc:
  Version:          1.0.0-rc95
  GitCommit:        b9ee9c6314599f1b4a7f497e1f1f856fe433d3b7
 docker-init:
  Version:          0.19.0
  GitCommit:        de40ad0

b. Command:

export DOMAIN=$(curl http://169.254.169.254/latest/meta-data/public-hostname)
export IP=$(curl http://169.254.169.254/latest/meta-data/public-ipv4)
docker run --rm -e DOMAIN -e IP --name caddy -d -p 80:80 -p 443:443 \
  -v /home/ubuntu/Caddyfile:/etc/caddy/Caddyfile \
  -v /srv:/srv -v caddy_data:/data \
  -v caddy_config:/config caddy

c. Service/unit/compose file:

None

d. My complete Caddyfile or JSON config:

{$DOMAIN} {
	root * /srv
	file_server browse
}
# Avoid invalid SSL errors.
{$IP}:80 {
	redir https://{$DOMAIN}
}

3. The problem I’m having:

I have an EC2 instance running on AWS. It has a public IP and a public DNS name assigned to it. I want to be able to reach it via any combination without errors. What does work:

curl DNS
curl http://IP

with both being redirected to https://DNS. However, when I call curl -v https://IP I get a TLS error.

☺  curl -v 13.57.16.173
*   Trying 13.57.16.173...
* TCP_NODELAY set
* Connected to 13.57.16.173 (13.57.16.173) port 80 (#0)
> GET / HTTP/1.1
> Host: 13.57.16.173
> User-Agent: curl/7.64.1
> Accept: */*
> 
< HTTP/1.1 302 Found
< Location: https://ec2-13-57-16-173.us-west-1.compute.amazonaws.com
< Server: Caddy
< Date: Sat, 17 Jul 2021 17:30:21 GMT
< Content-Length: 0
< 
* Connection #0 to host 13.57.16.173 left intact
* Closing connection 0

~/Developer/projects/openremote/openremote
☺  curl -v https://13.57.16.173
*   Trying 13.57.16.173...
* TCP_NODELAY set
* Connected to 13.57.16.173 (13.57.16.173) port 443 (#0)
* ALPN, offering h2
* ALPN, offering http/1.1
* successfully set certificate verify locations:
*   CAfile: /etc/ssl/cert.pem
  CApath: none
* TLSv1.2 (OUT), TLS handshake, Client hello (1):
* TLSv1.2 (IN), TLS alert, internal error (592):
* error:14004438:SSL routines:CONNECT_CR_SRVR_HELLO:tlsv1 alert internal error
* Closing connection 0
curl: (35) error:14004438:SSL routines:CONNECT_CR_SRVR_HELLO:tlsv1 alert internal error

Ideally I would like to have the same response for both, i.e. redirection to https://DNS. Is it possible?

4. Error messages and/or full log output:

curl: (35) error:14004438:SSL routines:CONNECT_CR_SRVR_HELLO:tlsv1 alert internal error

5. What I already tried:

I’ve already tried different redirect configurations in the Caddyfile, and the one I’ve shown above works best. The only combination that gives an error is curl https://IP. The problem is that exactly this call is embedded in the AWS EC2 web console, so anyone who clicks it will run into this SSL error, which is a bad user experience.

6. Links to relevant resources:

Not really, no. ACME CAs don’t issue certificates for IP addresses. A valid, trusted certificate is necessary to complete the TLS handshake. If the handshake doesn’t complete, then no request handlers can run because it’s not a trusted connection.

You could make a site block with tls internal, which would have Caddy issue certificates signed by its internal CA. But those aren’t publicly trusted, and they require additional setup to establish trust (installing Caddy’s root CA cert into the trust stores of whatever browser/system you’re connecting from), which is probably way too much effort for what you’re trying to do.

Thank you for the tip. Serving an untrusted cert would definitely be better than the SSL error I have right now; at least I’d be able to use the --insecure flag to fetch the page. So I tried your suggestion and made a site block with tls internal. Unfortunately, I was unable to get rid of the SSL error. I’m probably doing something wrong. My Caddyfile is now:

{$DOMAIN} {
	root * /srv
	file_server browse
}
# Avoid invalid SSL errors.
{$IP}:80 {
	redir https://{$DOMAIN}
}
https://{$IP} {
	tls internal
	redir https://{$DOMAIN}
}

How can I get it working?

What’s the error? What’s in your logs? That should be fine, as long as you make your client ignore certificate verification (which kinda defeats the purpose of TLS).

It is the same as above. Here is a fresh run for your convenience:

% curl -vL https://54.176.185.152
*   Trying 54.176.185.152...
* TCP_NODELAY set
* Connected to 54.176.185.152 (54.176.185.152) port 443 (#0)
* ALPN, offering h2
* ALPN, offering http/1.1
* successfully set certificate verify locations:
*   CAfile: /etc/ssl/cert.pem
  CApath: none
* TLSv1.2 (OUT), TLS handshake, Client hello (1):
* TLSv1.2 (IN), TLS alert, internal error (592):
* error:14004438:SSL routines:CONNECT_CR_SRVR_HELLO:tlsv1 alert internal error
* Closing connection 0
curl: (35) error:14004438:SSL routines:CONNECT_CR_SRVR_HELLO:tlsv1 alert internal error

There is nothing about this in the Caddy logs.

I agree that using --insecure is a bad thing; however, returning an error from the TLS handshake is even worse, as it gives you no chance at all to reach the site.

You’re not telling curl to ignore verification in that command. You need the -k or --insecure option.

If you enable debug logs (via the debug global option) you should see some mention of failed TLS handshakes.

I still don’t understand what practical use case you have for this. When would you actually make a request to the IP address instead of the domain? And why would you make that request over HTTPS at all?

With -k it is the same response. After enabling debug logging I see:

{"level":"debug","ts":1626712551.0203524,"logger":"http.stdlib","msg":"http: TLS handshake error from 95.241.8.200:57783: no certificate available for '172.17.0.2'"}

Now, 172.17.0.2 is an internal Docker address, as my docker0 interface is 172.17.0.1. I’ve changed my Caddyfile to:

{
  debug
}
{$DOMAIN} {
  root * /srv
  file_server browse
}
# Avoid invalid SSL errors.
{$IP}:80 {
  redir https://{$DOMAIN}
}
:443 {
  tls internal
  redir https://{$DOMAIN}
}

But it does not help. Still the same error.

The use case is as follows: on the AWS EC2 instance dashboard there are two links to click, one with the bare IP and a second one with the AWS-generated URL. Both are https because the AWS portal itself is served over https, so no http link can be present there. A user can click either of these; clicking the URL works fine, but clicking the IP produces a nasty error page. I would prefer a browser warning here that the cert is not trusted instead of this error, because the error is terminal, while an untrusted cert gives the option of accepting the risk and moving forward.

If you’re running Docker in swarm mode, then this is a longstanding known issue: the original client IP is not preserved.

And curl doesn’t set the IP address in SNI, since that’s disallowed by RFC 3546.

If you do this, then you’d need to enable on_demand for Caddy to issue a certificate on the fly for requests to hostnames not explicitly configured. By default, Caddy only manages certificates for names actually present in the config.

:443 {
  tls internal {
    on_demand
  }
  redir https://{$DOMAIN}
}

You are my hero now! :slight_smile: Thank you very much; it works as expected. This is the best I can get in this situation.

Be aware that turning on on_demand without anything to limit it is an avenue for abuse and potential DDoS. A bad actor could repeatedly make requests with a wildcard domain they control and force your server to issue new certificates for each one, ad infinitum. You could run out of storage space from the issued certificates, etc.
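Caddy does ship a guard for exactly this: the on_demand_tls global option with an ask endpoint. Before issuing a certificate on demand, Caddy sends a request to that endpoint with the requested name in a domain query parameter, and only proceeds if it gets a 200 response. A sketch of how it could be combined with the config above; the http://localhost:9123/allowed endpoint is hypothetical and would be a small service you run yourself:

```
{
	debug
	on_demand_tls {
		ask http://localhost:9123/allowed
	}
}

:443 {
	tls internal {
		on_demand
	}
	redir https://{$DOMAIN}
}
```

The ask endpoint can be as simple as something that returns 200 only when the queried name equals your instance’s public IP, which caps issuance at the one certificate you actually want.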

Good point indeed. It’s not relevant in this particular application, but I must keep it in mind.

This topic was automatically closed after 30 days. New replies are no longer allowed.