Public and Internal DNS using same port (443)?

1. Caddy version (caddy version):

2.5.1

2. How I run Caddy:

Currently using Caddy to expose Plex publicly under my own domain, and to expose other services internally only, using a self-signed cert.

a. System environment:

Docker v20.10.17

b. Command:

docker-compose up -d

c. Service/unit/compose file:

services:
  caddy:
    image: pleasestopasking/caddy:2.5.1_02
    hostname: caddy
    container_name: caddy
    restart: always
    ports:
      - 80:80
      - 443:443
    volumes:
      - $PWD/docker-configs/caddy/Caddyfile:/etc/caddy/Caddyfile
      - caddy_data:/data
      - caddy_config:/config

volumes:
  caddy_data:
    name: caddy_data
  caddy_config:
    name: caddy_config

d. My complete Caddyfile or JSON config:

(baseline-headers) {
	header {
		Permissions-Policy "interest-cohort=(), camera=(), geolocation=(), microphone=(), payment=(), usb=(), vr=()"
		Access-Control-Allow-Methods "GET, OPTIONS, PUT"
		Access-Control-Max-Age 100
		Cross-Origin-Opener-Policy same-origin
		Cross-Origin-Resource-Policy same-site
		-Server # deletes the Server header; field names take no trailing colon
		Vary Origin
		X-Forwarded-Proto https
		Strict-Transport-Security "max-age=31536000; includeSubDomains; preload"
		X-Frame-Options DENY
		X-Content-Type-Options nosniff
		X-XSS-Protection "1; mode=block"
		Referrer-Policy strict-origin-when-cross-origin
		X-Robots-Tag "none,noarchive,nosnippet,notranslate,noimageindex"
	}
}

(plex-headers) {
	header {
		Content-Security-Policy "default-src 'self'; base-uri 'self'; script-src 'self' 'unsafe-eval' 'sha256-nJQTRKTrsNC7POCKq7aJgohAiPwBISLvR7aJylcnMeE=' 'sha256-pKO/nNgeauDINvYfxdygP3mGssdVQRpRNxaF7uPRoGM=' 'sha256-WbMRMEGI/3b4tpLvamts9byuWeI9lP2bKREKq08ujJU=' 'sha256-Jak/x3IyEWydZxb+2CUd6rJQgpjhTbdymhm4fpt8EVQ=' 'sha256-4yWHSc589xcanc7GAAy3++M4EvUxNtUsJySeoYEE6z8=' 'sha256-9YWnVu29Ew4LEW4tEiPWEdcHvzlbbwpiazu4PZR3oTY='; style-src 'self' 'unsafe-hashes' 'sha256-ZdHxw9eWtnxUb3mk6tBS+gIiVUPE3pGM470keHPDFlE='; img-src 'self' https://provider-static.plex.tv data: blob:; font-src 'self' data:; connect-src 'self' https://analytics.plex.tv https://metadata.provider.plex.tv https://together.plex.tv https://plex.tv https://*.plex.direct:* wss://*.plex.direct:* wss://pubsub.plex.tv; media-src 'self' https://*.plex.direct:*; object-src 'self'; child-src 'none'; frame-src 'none'; frame-ancestors 'none'; form-action 'self'; upgrade-insecure-requests; block-all-mixed-content"
	}
}

*.geordi.lan:443 {
	tls internal

	@omv host omv.geordi.lan
	handle @omv {
		reverse_proxy 192.168.4.46:6080
		encode gzip
		import baseline-headers
		header {
			Access-Control-Allow-Origin https://omv.geordi.lan
		}
	}

	@homer host homer.geordi.lan
	handle @homer {
		reverse_proxy homer:8080
		encode gzip
		import baseline-headers
		header {
			Access-Control-Allow-Origin https://homer.geordi.lan
		}
	}

	@adguard host adguard.geordi.lan
	handle @adguard {
		reverse_proxy 192.168.4.71
		encode gzip
		import baseline-headers
		header {
			Access-Control-Allow-Origin https://adguard.geordi.lan
		}
	}

	@grafana host grafana.geordi.lan
	handle @grafana {
		reverse_proxy grafana:3000
		encode gzip
		import baseline-headers
		header {
			Access-Control-Allow-Origin https://grafana.geordi.lan
		}
	}

	@prometheus host prometheus.geordi.lan
	handle @prometheus {
		reverse_proxy prometheus:9090
		encode gzip
		import baseline-headers
		header {
			Access-Control-Allow-Origin https://prometheus.geordi.lan
		}
	}

	# Fallback for otherwise unhandled domains
	handle {
		abort
	}
}

plex.htchr.dev:443 {
	reverse_proxy plex:32400
	encode gzip
	import baseline-headers
	import plex-headers
	header {
		Access-Control-Allow-Origin https://plex.htchr.dev
	}
}

3. The problem I’m having:

My current implementation is working just fine, but I would like to move my internal services from *.geordi.lan to *.internal.htchr.dev while ensuring they are never exposed publicly, and ALSO use LE certs. I could switch the internal services from port 443 to some other port that I am not forwarding from my router, but I would like to stick with the standard 443.

After reading through the documentation and others’ implementations, I believe I have come to the conclusion that the only way to properly do this is to purchase another domain name, such as htchr2.dev, and use it for internal services: obtain certificates via DNS validation but never set any A/CNAME records pointing to my public IP.

I am hoping someone can provide feedback on my idea and whether or not I am on the right path here.

4. Error messages and/or full log output:

None at this time

5. What I already tried:

I have tried the following config, and while it does successfully grab a wildcard certificate, my DNS records are publicly resolvable, and the only real security is the @denied matcher that aborts requests not coming from a private CIDR range.

Currently using Hover as my registrar, which does have a DNS management module available; I have also used DuckDNS delegation to obtain the wildcard certificate, which does work.

*.internal.htchr.dev:443 {
	tls {
		dns duckdns {
			api_token <redacted>
			override_domain htchr-internal.duckdns.org
		}
	}

	@denied not remote_ip private_ranges
	abort @denied

	respond "You have been granted access!"
}

6. Links to relevant resources:

None

Hi :wave:

There are quite a few ways to achieve what you are trying to do :slight_smile:

vhosts/hostnames that obtain their TLS certificates via the DNS challenge don’t need A/AAAA records pointing at their IP.
The A record can still be set, or point to the wrong IP entirely; the DNS challenge only cares about the TXT record at _acme-challenge.<hostname> (e.g. TXT _acme-challenge.internal.htchr.dev)
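As a side note for sanity-checking this: the record name the CA queries can be derived from the hostname alone, because a leading wildcard label is stripped first. A quick sketch in Python (the function name is mine; the hostnames are just the ones from this thread):

```python
def acme_challenge_record(hostname: str) -> str:
    """Name of the DNS TXT record an ACME CA queries for the dns-01 challenge.

    A leading wildcard label is stripped first: the challenge for
    *.example.com is published at _acme-challenge.example.com.
    """
    if hostname.startswith("*."):
        hostname = hostname[2:]
    return f"_acme-challenge.{hostname}"

print(acme_challenge_record("internal.htchr.dev"))    # _acme-challenge.internal.htchr.dev
print(acme_challenge_record("*.internal.htchr.dev"))  # _acme-challenge.internal.htchr.dev
```

While a challenge is pending, you can see exactly what the CA sees with `dig TXT _acme-challenge.internal.htchr.dev`.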

Your *.geordi.lan sites are currently actually reachable from outside your network via a pretty trivial request manipulation.
It’s not really a secret, and it’s a pretty common “issue” among other webservers too, but I don’t want to explain it right now and risk backstabbing you.
(Though I will edit this in after this has been solved, or might end up writing a wiki post about it [that would be the third one planned, with none actually done so far :sweat_smile:].)

Since you run in docker and are manually setting up port forwards into your router, a pretty funny way would be to run two separate caddy containers, with different ports exposed via docker:

  1. “Internal Caddy” reachable only within your intranet on port :80 and :443
    ports:
      - 80:80
      - 443:443
    
  2. “Public Caddy” reachable only from outside your intranet on port :80 and :443 but your router is forwarding your public IP’s port :80 and :443 to something else, for example:
    ports:
      - 8080:80
      - 8443:443
    

Your *.internal.htchr.dev A record would then point to the actual intranet IP (e.g. 192.168.0.2).

Another way would be to separate internal and public vhosts by binding them to different network interfaces or IPs via bind (Caddyfile directive) — Caddy Documentation, but that isn’t really practical with Docker.

Oh, and FYI: you could pass both Caddys the same data directory without them interfering with each other :slight_smile:


PS: The :443 you append to each of your vhosts is actually redundant (but no harm keeping it!). Caddy will only serve your vhost on :443 by default (and redirect any http://:80 to https://:443). See Automatic HTTPS — Caddy Documentation


I have gone ahead and moved my internal vhosts to a different port for now, so please feel free to elaborate on the “issue” you mentioned, as I am interested in that.

Aside from that, it sounds like deploying two (2) Caddy instances might be the best solution, but I thought best practice was to never use internal/private IP ranges in public DNS records. That is more information for malicious actors, though I do understand that obscurity is not security.

If an attacker accesses your private network, it’s already game over. Addresses are not secrets.

Ok, here is what I have implemented as of today. It seems to be working as expected, and it also solves another issue that had been plaguing me for a bit (more on that below). As suggested by @IndeedNotJames, I moved to using two (2) Caddy containers.

I started by setting up my CNAMEs in Hover to point to DuckDNS as necessary, making sure to use the internal IP for the internal record on the DuckDNS side. I then forwarded port 443 from my router to port 4443 on the host running Caddy.

I finally deployed my two Caddy instances with the following docker-compose file, along with a separate Caddyfile for each instance. Unless I am missing something here, this solves my issue.

As I mentioned above, this also solved another issue I had been seeing with *.geordi.lan when connecting from outside my network over Tailscale. I believe that issue was caused by the use of a self-signed CA, but now that I am leveraging LE certs, it appears to work just fine with *.int.htchr.dev.

services:
  caddy-ext:
    image: pleasestopasking/caddy:2.5.1_02
    hostname: caddy-ext
    container_name: caddy-ext
    restart: always
    environment:
      - DUCKDNS_TOKEN=${duckdns_token}
    ports:
      - 4443:443
    volumes:
      - $PWD/docker-configs/caddy/external/Caddyfile:/etc/caddy/Caddyfile
      - caddy_data:/data
      - caddy_config:/config

  caddy-int:
    image: pleasestopasking/caddy:2.5.1_02
    hostname: caddy-int
    container_name: caddy-int
    restart: always
    environment:
      - DUCKDNS_TOKEN=${duckdns_token}
    ports:
      - 443:443
    volumes:
      - $PWD/docker-configs/caddy/internal/Caddyfile:/etc/caddy/Caddyfile
      - caddy_data:/data
      - caddy_config:/config

# External Caddyfile
{
	acme_dns duckdns {
		api_token {$DUCKDNS_TOKEN}
		override_domain htchr.duckdns.org
	}
}

(baseline-headers) {
	header {
		Permissions-Policy "interest-cohort=(), camera=(), geolocation=(), microphone=(), payment=(), usb=(), vr=()"
		Access-Control-Allow-Methods "GET, OPTIONS, PUT"
		Access-Control-Max-Age 100
		Cross-Origin-Opener-Policy same-origin
		Cross-Origin-Resource-Policy same-site
		-Server
		Vary Origin
		X-Forwarded-Proto https
		Strict-Transport-Security "max-age=31536000; includeSubDomains; preload"
		X-Frame-Options DENY
		X-Content-Type-Options nosniff
		X-XSS-Protection "1; mode=block"
		Referrer-Policy strict-origin-when-cross-origin
		X-Robots-Tag "none,noarchive,nosnippet,notranslate,noimageindex"
	}
}

(plex-headers) {
	header {
		Content-Security-Policy "default-src 'self'; base-uri 'self'; script-src 'self' 'unsafe-eval' 'sha256-nJQTRKTrsNC7POCKq7aJgohAiPwBISLvR7aJylcnMeE=' 'sha256-pKO/nNgeauDINvYfxdygP3mGssdVQRpRNxaF7uPRoGM=' 'sha256-WbMRMEGI/3b4tpLvamts9byuWeI9lP2bKREKq08ujJU=' 'sha256-Jak/x3IyEWydZxb+2CUd6rJQgpjhTbdymhm4fpt8EVQ=' 'sha256-4yWHSc589xcanc7GAAy3++M4EvUxNtUsJySeoYEE6z8=' 'sha256-9YWnVu29Ew4LEW4tEiPWEdcHvzlbbwpiazu4PZR3oTY='; style-src 'self' 'unsafe-hashes' 'sha256-ZdHxw9eWtnxUb3mk6tBS+gIiVUPE3pGM470keHPDFlE='; img-src 'self' https://provider-static.plex.tv data: blob:; font-src 'self' data:; connect-src 'self' https://analytics.plex.tv https://metadata.provider.plex.tv https://together.plex.tv https://plex.tv https://*.plex.direct:* wss://*.plex.direct:* wss://pubsub.plex.tv; media-src 'self' https://*.plex.direct:*; object-src 'self'; child-src 'none'; frame-src 'none'; frame-ancestors 'none'; form-action 'self'; upgrade-insecure-requests; block-all-mixed-content"
	}
}

plex.htchr.dev:443 {
	reverse_proxy plex:32400
	encode gzip
	import baseline-headers
	import plex-headers
	header {
		Access-Control-Allow-Origin https://plex.htchr.dev
	}
}

# Internal Caddyfile
{
	acme_dns duckdns {
		api_token {$DUCKDNS_TOKEN}
		override_domain htchr-int.duckdns.org
	}
}

(baseline-headers) {
	header {
		Permissions-Policy "interest-cohort=(), camera=(), geolocation=(), microphone=(), payment=(), usb=(), vr=()"
		Access-Control-Allow-Methods "GET, OPTIONS, PUT"
		Access-Control-Max-Age 100
		Cross-Origin-Opener-Policy same-origin
		Cross-Origin-Resource-Policy same-site
		-Server
		Vary Origin
		X-Forwarded-Proto https
		Strict-Transport-Security "max-age=31536000; includeSubDomains; preload"
		X-Frame-Options DENY
		X-Content-Type-Options nosniff
		X-XSS-Protection "1; mode=block"
		Referrer-Policy strict-origin-when-cross-origin
		X-Robots-Tag "none,noarchive,nosnippet,notranslate,noimageindex"
	}
}

*.int.htchr.dev:443 {
	@omv host omv.int.htchr.dev
	handle @omv {
		reverse_proxy 192.168.4.46:6080
		encode gzip
		import baseline-headers
		header {
			Access-Control-Allow-Origin https://omv.int.htchr.dev
		}
	}

	@homer host homer.int.htchr.dev
	handle @homer {
		reverse_proxy homer:8080
		encode gzip
		import baseline-headers
		header {
			Access-Control-Allow-Origin https://homer.int.htchr.dev
		}
	}

	@adguard host adguard.int.htchr.dev
	handle @adguard {
		reverse_proxy 192.168.4.71
		encode gzip
		import baseline-headers
		header {
			Access-Control-Allow-Origin https://adguard.int.htchr.dev
		}
	}

	@grafana host grafana.int.htchr.dev
	handle @grafana {
		reverse_proxy grafana:3000
		encode gzip
		import baseline-headers
		header {
			Access-Control-Allow-Origin https://grafana.int.htchr.dev
		}
	}

	# Fallback for otherwise unhandled domains
	handle {
		abort
	}
}

Glad to hear everything works now :party:

Caddy, like other webservers, looks at the client’s HTTP Host header to decide which vhost a request should be routed to.

This header can be freely changed. Perhaps the easiest way to make your browser (and actually any other program on your computer) connect to a different IP for some hostname is the /etc/hosts file.

Windows has one too.

plex.htchr.dev from your original Caddyfile gave away the public IP of your server (via ping, dig, nslookup, etc.), and all one had to do was pick one of your private geordi.lan subdomains and map it in /etc/hosts to that known public IP.

E.g.

71.71.228.213 grafana.geordi.lan

Now one could just open https://grafana.geordi.lan in a browser and access the page.
Due to the entry in /etc/hosts, grafana.geordi.lan resolves to 71.71.228.213; the browser thinks “hey, this is just like any other page!” and sets Host: grafana.geordi.lan when connecting to 71.71.228.213.


Another rather simple example uses curl and perhaps explains it with fewer words:

curl https://grafana.geordi.lan --resolve grafana.geordi.lan:443:71.71.228.213 -k

Note: The -k flag only skips TLS certificate verification, because you used self-signed certificates for *.geordi.lan.

If it had been plain http://, one could have also used

curl http://71.71.228.213 -H "Host: grafana.geordi.lan"
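The same plain-http trick can be reproduced end-to-end in a few lines of Python, without touching any real server: a toy name-based vhost server that routes purely on the Host header, and a client that connects to a raw IP while claiming an “internal” hostname (the hostname is just the example from above):

```python
import threading
import urllib.request
from http.server import BaseHTTPRequestHandler, HTTPServer

class VhostHandler(BaseHTTPRequestHandler):
    """Toy name-based vhost: routing depends only on the Host header."""

    def do_GET(self):
        body = f"vhost: {self.headers.get('Host', '')}".encode()
        self.send_response(200)
        self.send_header("Content-Length", str(len(body)))
        self.end_headers()
        self.wfile.write(body)

    def log_message(self, *_):  # keep the demo output clean
        pass

server = HTTPServer(("127.0.0.1", 0), VhostHandler)  # port 0 = any free port
threading.Thread(target=server.serve_forever, daemon=True).start()

# Connect to the bare IP, but claim to be visiting an internal hostname.
# The server has no idea what DNS (if any) the client used.
req = urllib.request.Request(
    f"http://127.0.0.1:{server.server_address[1]}/",
    headers={"Host": "grafana.geordi.lan"},
)
with urllib.request.urlopen(req) as resp:
    answer = resp.read().decode()
server.shutdown()

print(answer)  # vhost: grafana.geordi.lan
```

The server never sees which IP or name the client “really” resolved; the Host header is the only routing input it has.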

The https:// example is a bit longer, because besides the Host header, it also has to set the freely changeable TLS SNI.
A quick web search for “TLS SNI” should provide plenty of explanations if you are interested :sweat_smile:
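For completeness, the SNI half of that is visible from Python’s ssl module too: the name sent in the TLS ClientHello is whatever the client chooses, completely independent of the IP it later connects to. A minimal sketch (no handshake is performed here; the hostname is the example from above):

```python
import socket
import ssl

ctx = ssl.create_default_context()
ctx.check_hostname = False
ctx.verify_mode = ssl.CERT_NONE  # the equivalent of curl's -k

# The SNI is fixed at wrap time and sent during the handshake later --
# it has no connection to whatever IP we eventually connect() to.
tls = ctx.wrap_socket(socket.socket(), server_hostname="grafana.geordi.lan")
sni = tls.server_hostname
tls.close()

print(sni)  # grafana.geordi.lan
```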


Both workarounds I proposed (running two Caddys, or using the bind directive) essentially separate the listeners (different IP or different port), where one listener serves the externally reachable sites and the other only the internal ones.
You essentially move the decision one layer up, so the “internal or public” routing no longer happens within Caddy.
There just aren’t any additional details from which a webserver could figure out which vhost you are trying to reach.

Except for one thing: the client’s IP!
If you had used the

@denied not remote_ip private_ranges
abort @denied

in your *.geordi.lan:443 site block, then your internal pages would not have been accessible from any outside IPs :slight_smile:
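Under the hood that matcher is just a CIDR membership test on the peer address. A Python sketch of the idea; the range list below is my assumption of what the `private_ranges` shorthand covers (RFC 1918 plus loopback and link-local, for IPv4 and IPv6):

```python
import ipaddress

# Assumed expansion of Caddy's `private_ranges` shorthand:
# RFC 1918, loopback, and link-local blocks.
PRIVATE_RANGES = [ipaddress.ip_network(n) for n in (
    "10.0.0.0/8", "172.16.0.0/12", "192.168.0.0/16",
    "127.0.0.0/8", "169.254.0.0/16",
    "::1/128", "fc00::/7", "fe80::/10",
)]

def is_private(remote_ip: str) -> bool:
    """Mimic `remote_ip private_ranges`: does the peer IP fall in any block?"""
    addr = ipaddress.ip_address(remote_ip)
    return any(addr in net for net in PRIVATE_RANGES)

print(is_private("192.168.4.46"))   # True  -> request passes
print(is_private("71.71.228.213"))  # False -> @denied matches, abort
```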

I could have sworn I mentioned that in my original response, but looks like I didn’t :sweat_smile:
Sorry for that, my bad^^


Funnily enough, I did initially start testing the remote_ip deny logic, but never followed through, as it somehow felt like the wrong solution.

I plan on digging into the feasibility of moving back to a single Caddy instance: expose multiple ports on the container, have my router forward only public port 443 to port 4443 on the host running Caddy, and then define my public and private sites on different ports within Caddy. Not sure if the below will work; it is just something I am pondering at this point.

  caddy:
    image: pleasestopasking/caddy:2.5.1_02
    hostname: caddy
    container_name: caddy
    restart: always
    environment:
      - DUCKDNS_TOKEN=${duckdns_token}
    ports:
      - 443:443
      - 4443:4443
    volumes:
      - $PWD/docker-configs/caddy/Caddyfile:/etc/caddy/Caddyfile
      - caddy_data:/data
      - caddy_config:/config

(baseline-headers) {
	header {
		Permissions-Policy "interest-cohort=(), camera=(), geolocation=(), microphone=(), payment=(), usb=(), vr=()"
		Access-Control-Allow-Methods "GET, OPTIONS, PUT"
		Access-Control-Max-Age 100
		Cross-Origin-Opener-Policy same-origin
		Cross-Origin-Resource-Policy same-site
		-Server
		Vary Origin
		X-Forwarded-Proto https
		Strict-Transport-Security "max-age=31536000; includeSubDomains; preload"
		X-Frame-Options DENY
		X-Content-Type-Options nosniff
		X-XSS-Protection "1; mode=block"
		Referrer-Policy strict-origin-when-cross-origin
		X-Robots-Tag "none,noarchive,nosnippet,notranslate,noimageindex"
	}
}

(plex-headers) {
	header {
		Content-Security-Policy "default-src 'self'; base-uri 'self'; script-src 'self' 'unsafe-eval' 'sha256-nJQTRKTrsNC7POCKq7aJgohAiPwBISLvR7aJylcnMeE=' 'sha256-pKO/nNgeauDINvYfxdygP3mGssdVQRpRNxaF7uPRoGM=' 'sha256-WbMRMEGI/3b4tpLvamts9byuWeI9lP2bKREKq08ujJU=' 'sha256-Jak/x3IyEWydZxb+2CUd6rJQgpjhTbdymhm4fpt8EVQ=' 'sha256-4yWHSc589xcanc7GAAy3++M4EvUxNtUsJySeoYEE6z8=' 'sha256-9YWnVu29Ew4LEW4tEiPWEdcHvzlbbwpiazu4PZR3oTY='; style-src 'self' 'unsafe-hashes' 'sha256-ZdHxw9eWtnxUb3mk6tBS+gIiVUPE3pGM470keHPDFlE='; img-src 'self' https://provider-static.plex.tv data: blob:; font-src 'self' data:; connect-src 'self' https://analytics.plex.tv https://metadata.provider.plex.tv https://together.plex.tv https://plex.tv https://*.plex.direct:* wss://*.plex.direct:* wss://pubsub.plex.tv; media-src 'self' https://*.plex.direct:*; object-src 'self'; child-src 'none'; frame-src 'none'; frame-ancestors 'none'; form-action 'self'; upgrade-insecure-requests; block-all-mixed-content"
	}
}

:2020 {
	metrics
}

plex.htchr.dev:4443 {
	# acme_dns is a global option; inside a site block, the tls directive is used
	tls {
		dns duckdns {
			api_token {$DUCKDNS_TOKEN}
			override_domain htchr.duckdns.org
		}
	}

	reverse_proxy plex:32400
	encode gzip
	import baseline-headers
	import plex-headers
	header {
		Access-Control-Allow-Origin https://plex.htchr.dev
	}
}

*.int.htchr.dev:443 {
	tls {
		dns duckdns {
			api_token {$DUCKDNS_TOKEN}
			override_domain htchr-int.duckdns.org
		}
	}

	@homer host homer.int.htchr.dev
	handle @homer {
		reverse_proxy homer:8080
		encode gzip
		import baseline-headers
		header {
			Access-Control-Allow-Origin https://homer.int.htchr.dev
		}
	}

	# Fallback for otherwise unhandled domains
	handle {
		abort
	}
}

This topic was automatically closed after 30 days. New replies are no longer allowed.