Immich, duckdns and local network access

1. The problem I’m having:

I want to access my ghi.duckdns.org hostname (example name) from my local network. Since the Immich app can only store one server URL, the setup is useless at the moment.

I can access Immich from outside my network via ghi.duckdns.org, using mobile internet for example. If I'm on my local network, I can't reach Immich through ghi.duckdns.org. Result:

Immich is only reachable via its local IP address (192.168.1.118:2283) when I'm connected to my local network.

I heard something about hairpin NAT. I do have a DD-WRT router. Pi-hole is also available; it's currently only used for ad blocking by clients that use its IP address as their DNS server.

I don't fully understand how to set that up, though. Or would a double reverse proxy be possible?

2. Error messages and/or full log output:

INF ts=1732470770.3918643 msg=shutting down apps, then terminating signal=SIGTERM

WRN ts=1732470770.3919299 msg=exiting; byeee!! 👋 signal=SIGTERM

INF ts=1732470770.3919897 logger=http msg=servers shutting down with eternal grace period

INF ts=1732470770.3923128 logger=admin msg=stopped previous server address=localhost:2019

INF ts=1732470770.3923397 msg=shutdown complete signal=SIGTERM exit_code=0

INF ts=1732470770.9323874 msg=using config from file file=/etc/caddy/Caddyfile

INF ts=1732470770.934202 msg=adapted config to JSON adapter=caddyfile

WRN ts=1732470770.934216 msg=Caddyfile input is not formatted; run 'caddy fmt --overwrite' to fix inconsistencies adapter=caddyfile file=/etc/caddy/Caddyfile line=2

INF ts=1732470770.9352982 logger=admin msg=admin endpoint started address=localhost:2019 enforce_origin=false origins=["//localhost:2019","//[::1]:2019","//127.0.0.1:2019"]

INF ts=1732470770.935483 logger=http.auto_https msg=server is listening only on the HTTPS port but has no TLS connection policies; adding one to enable TLS server_name=srv0 https_port=443

INF ts=1732470770.9354994 logger=http.auto_https msg=enabling automatic HTTP->HTTPS redirects server_name=srv0

INF ts=1732470770.9355443 logger=tls.cache.maintenance msg=started background certificate maintenance cache=0xc0001cb480

INF ts=1732470770.935939 logger=http msg=enabling HTTP/3 listener addr=:443

INF ts=1732470770.9360428 msg=failed to sufficiently increase receive buffer size (was: 208 kiB, wanted: 7168 kiB, got: 416 kiB). See https://github.com/quic-go/quic-go/wiki/UDP-Buffer-Sizes for details.

INF ts=1732470770.9361987 logger=http.log msg=server running name=srv0 protocols=["h1","h2","h3"]

INF ts=1732470770.9362736 logger=http.log msg=server running name=remaining_auto_https_redirects protocols=["h1","h2","h3"]

INF ts=1732470770.9362867 logger=http msg=enabling automatic TLS certificate management domains=["abc.duckdns.org","def.duckdns.org","ghi.duckdns.org"]

INF ts=1732470770.9404688 msg=autosaved config (load with --resume flag) file=/config/caddy/autosave.json

INF ts=1732470770.9404778 msg=serving initial configuration


{"level":"info","ts":1732470770.9448776,"logger":"tls","msg":"storage cleaning happened too recently; skipping for now","storage":"FileStorage:/data/caddy","instance":"5e3d6eab-6eeb-40f0-8c89-334c52d95c80","try_again":1732557170.9448757,"try_again_in":86399.999999583}

INF ts=1732470770.9449556 logger=tls msg=finished cleaning storage units

3. Caddy version:

Version 2.8.4

4. How I installed and ran Caddy:

So now I have Caddy installed in Docker, using Portainer:

version: "3.7"
services:
  caddy:
    image: serfriz/caddy-duckdns:latest  # replace with the desired Caddy build name
    container_name: caddy  # feel free to choose your own container name
    restart: "unless-stopped"  # run container unless stopped by user (optional) 
    ports:
      - "80:80"  # HTTP port
      - "443:443"  # HTTPS port
      - "443:443/udp"  # HTTP/3 port (optional)
    volumes:
      - /storage/Caddy/caddy-data:/data  # volume mount for certificates data
      - /storage/Caddy/caddy-config:/config  # volume mount for configuration data
      - $PWD/storage/Caddyfile:/etc/caddy/Caddyfile  # to use your own Caddyfile
      - /storage/Caddy/$PWD/log:/var/log  # bind mount for the log directory (optional)
      - /storage/Caddy/$PWD/srv:/srv  # bind mount to serve static sites or files (optional)
    environment:
#      - CLOUDFLARE_API_TOKEN=<token-value>  # Cloudflare API token (if applicable)
      - DUCKDNS_API_TOKEN=my_token  # DuckDNS API token (if applicable)
#      - CROWDSEC_API_KEY=<key-value>  # CrowdSec API key (if applicable)
#      - GANDI_BEARER_TOKEN=<token-value>  # Gandi API token (if applicable)
#      - NETCUP_CUSTOMER_NUMBER=<number-value>  # Netcup customer number (if applicable)
#      - NETCUP_API_KEY=<key-value>  # Netcup API key (if applicable)
#      - NETCUP_API_PASSWORD=<password-value>  # Netcup API password (if applicable)
#      - PORKBUN_API_KEY=<key-value>  # Porkbun API key (if applicable)
#      - PORKBUN_API_SECRET_KEY=<secret-key-value>  # Porkbun API secret key (if applicable)
#      - OVH_ENDPOINT=<endpoint-value>  # OVH endpoint (if applicable)
#      - OVH_APPLICATION_KEY=<application-value>  # OVH application key (if applicable)
#      - OVH_APPLICATION_SECRET=<secret-value>  # OVH application secret (if applicable)
#      - OVH_CONSUMER_KEY=<consumer-key-value>  # OVH consumer key (if applicable)
volumes:
  caddy-data:
    external: true
  caddy-config:

a. System environment:

LibreELEC and Docker on an N100 mini PC.

d. My complete Caddy config:

Caddyfile:

abc.duckdns.org {
        tls {
                dns duckdns token
        }
        reverse_proxy 192.168.1.118:xxxx
}
def.duckdns.org {
        tls {
                dns duckdns token
        }
        reverse_proxy 192.168.1.118:xxxx
}
ghi.duckdns.org {
        tls {
                dns duckdns token
        }
        reverse_proxy 192.168.1.118:2283
}

So the last one is Immich.

Howdy @CypherMK, welcome to the Caddy community.

It sounds like you're looking for help configuring your network; this is almost certainly not a Caddy-specific issue. We might be able to give you some guidance in getting your network sorted, but before anyone advises you, I want to home in on this part:

I don't want to make any assumptions about what's going on. It might be the issue you've heard something about, or it might be something else entirely. This description of the problem is nowhere near enough for us to positively identify what's happening and what needs to be done to resolve it - we really need you to explain, in detail, EXACTLY how that part is failing: what you see in your browser when you try it, exact error messages, etc.

Run caddy version. With Compose, you might use docker compose exec <servicename> caddy version.


I updated my opening post to clarify everything. Hopefully there is a workaround.

Just to clarify, do you have your duckdns address pointed at your own external IP address? And there’s no split DNS in play? The duckdns address resolves to the same IP both on and off the LAN?

We had another user very recently with DD-WRT and DNS rebind protection in action preventing them from accessing their site, but it looked like a different result.

Specifically, they got a webpage, served with a DD-WRT certificate, that indicated it rejected a request from a LAN IP to its public interface - check it out: Caddy-docker-proxy: Connection Refused: "type":"urn:ietf:params:acme:error:connection" - #11 by the-bort-the

Your ERR_CONNECTION_RESET looks like a different symptom.
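One quick way to tell DNS rebind protection or split DNS apart from a hairpin NAT problem is to compare what the LAN resolver and a public resolver return for the name. A diagnostic sketch, run from a machine inside the LAN (ghi.duckdns.org is the example name from this thread; 1.1.1.1 is just one public resolver you could query):

```shell
# Ask whatever resolver the LAN client normally uses (router or Pi-hole)
nslookup ghi.duckdns.org

# For comparison, ask a public resolver directly
nslookup ghi.duckdns.org 1.1.1.1
```

If the two answers differ, or the first query returns no answer at all, the router or Pi-hole is rewriting or blocking the record, which points at DNS rebind protection rather than a NAT issue.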

On the duckdns.org website I've pointed my ghi.duckdns.org entry at my external IP address. I don't know what split DNS is, so I probably don't have that. My DD-WRT router doesn't have anything fancy configured, just a few open ports.
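For reference, since a Pi-hole is already on the network, split DNS is one possible workaround here: a local DNS record makes the duckdns names resolve to the server's LAN IP for LAN clients, so their traffic never has to hairpin through the router's WAN side. A sketch, assuming the Pi-hole is the DNS server for every LAN client and Caddy is listening on 192.168.1.118:

```
# Pi-hole web UI: Local DNS -> DNS Records
# (equivalently, entries in /etc/pihole/custom.list)
192.168.1.118 abc.duckdns.org
192.168.1.118 def.duckdns.org
192.168.1.118 ghi.duckdns.org
```

Certificates keep working because Caddy serves the same duckdns certificate regardless of which IP the client connected through, and the DNS challenge used for issuance doesn't depend on inbound connections.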

You could probably give a shot at the loopback NAT option I mentioned in that thread and see if that helps you.

I've heard of that option, but I don't know how to set it up in DD-WRT.
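For what it's worth, DD-WRT lets you add custom firewall rules under Administration -> Commands (Save Firewall). A hairpin NAT sketch, assuming a 192.168.1.0/24 LAN, the server at 192.168.1.118, and port 443 forwarded (repeat for port 80 if needed); nvram get wan_ipaddr reads the router's current WAN address:

```shell
WANIP=$(nvram get wan_ipaddr)

# Redirect LAN clients that connect to the WAN IP back to the internal server
iptables -t nat -A PREROUTING -s 192.168.1.0/24 -d $WANIP \
  -p tcp --dport 443 -j DNAT --to-destination 192.168.1.118:443

# Masquerade so replies flow back through the router instead of going
# directly to the client (which would break the connection)
iptables -t nat -A POSTROUTING -s 192.168.1.0/24 -d 192.168.1.118 \
  -p tcp --dport 443 -j MASQUERADE
```

This is a router configuration fragment, not something to run on the server itself; the exact interface and subnet values depend on your setup.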

I don’t really know either, sorry - I’m using OPNsense, myself.

Unless someone else here on these forums can nudge you in the right direction, you might have to look to DD-WRT-specific communities for help instead.