The problem I am having:
I have Docker Desktop installed on Windows 11, with Nextcloud AIO running in it, and it is working quite well. I also have Jellyfin installed directly on Windows, and it has been working for about three years with no issues.
I would like to access Jellyfin from outside my home and give my friends access. I decided to use Caddy to do this securely. It works quite well if I install and run Caddy on Windows. The only issue is that Nextcloud in Docker is then no longer reachable from outside my home. When I kill Caddy running on Windows, Nextcloud becomes reachable on the WAN again.
Both (Caddy on Windows and Nextcloud in Docker) cannot run at the same time, as they both fight over ports 80 and 443.
I then tried installing Caddy in Docker instead and thought that would work, but the same port issues arise. I have been at this for a week trying to resolve it, and finally found this community. I would appreciate any help offered. I am very new to this, so please be patient.
Error messages and/or full log output:
Port is already in use. Same error for 80 and 443.
Caddy version:
2.9.1-builder-alpine
How I installed and ran Caddy:
Directly in Windows 11 and via Docker Compose.
a. System environment: Windows and Docker Desktop for Windows.
You can’t have two processes/servers listening on the same port. Suppose a request is received on that port: how should Windows know which process/server should handle it? Keep in mind that Windows cannot know the address of the request; it just sees TCP/UDP packets.
I recommend you run Nextcloud on a different port and use Caddy to reverse-proxy to Nextcloud. In this setup, Caddy also has to manage the certificates for Nextcloud.
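As a minimal sketch of that setup (assuming Nextcloud's web server is moved to a hypothetical local port 11000, and `nextcloud.example.com` stands in for your real domain), the Caddyfile site block could look like:

```
nextcloud.example.com {
	# Caddy listens on 80/443, terminates TLS, and manages the
	# certificate itself, then forwards plain HTTP to Nextcloud
	# on its new port
	reverse_proxy localhost:11000
}
```

With this, only Caddy binds ports 80 and 443, so the conflict with other services goes away.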
I sort of figured this might be the solution, but unfortunately it would involve redoing my entire Nextcloud setup from scratch, as I haven’t found a way to run it on another port after it’s already been installed. Thank you for your input.
I figured out what I was doing wrong and how to move forward. The only issue I have now is that the websites are not loading over HTTPS, only HTTP.
I don’t think Caddy is creating certificates.
I made some changes in the docker-compose files.
ports:
  - "8096:443"
  - "8181:443/udp"
Any idea what I can do to fix this?
Caddy can only do HTTPS over port 443 by default. Using a different port in your hostname will stop HTTPS from working unless the service itself is generating a certificate. You may be able to have Caddy act as a certificate authority instead of using Let’s Encrypt, but those certificates have to be manually installed on all devices.
Instead, you could have Jellyfin inside of a container. If you do run a container, and you don’t need DLNA, you can stop binding your port and just have Caddy reverse_proxy the container’s (internal) port. If you do need DLNA, you could use a macvlan configuration.
An alternative is to use WireGuard to access your home network externally. Then you wouldn’t have to publicly expose the Jellyfin service, only WireGuard.
If you need help with any of these options, let me know.
Doing this only changed which host ports are bound to Caddy’s container. Host port 8096 now forwards to the Caddy container’s port 443, and host port 8181 forwards to the container’s UDP port 443 (HTTP/3). Basically, unless traffic is being redirected from your host’s port 443 to 8096/8181, this just makes Caddy unable to reverse-proxy properly.
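If you want Caddy to own the standard web ports directly, the conventional mapping is the straight one. A sketch of just the ports section of the Caddy service:

```yaml
ports:
  - "80:80"        # HTTP (needed for the ACME HTTP challenge and redirects)
  - "443:443"      # HTTPS
  - "443:443/udp"  # HTTP/3 (QUIC)
```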
I wouldn’t mind moving Jellyfin to a container within Docker, and would love to try this method, since all it will involve is recreating the users and pointing Jellyfin at the media folders. Better than redoing the entire Nextcloud configuration. I am open to whatever you can guide me through and teach me, whenever you are ready. Thank you.
I’ll paste my old Docker compose.yaml, modified for basic use. You can use a few different images, but I use the developer’s one.
jellyfin:
  image: jellyfin/jellyfin
  container_name: jellyfin
  # network_mode: 'host' # if you do this, the port configuration is not necessary
  ports:
    - 8096:8096/tcp # HTTP web UI
    - 7359:7359/udp # Allows clients to discover Jellyfin on the local network
    - 1900:1900/udp # Service discovery used by DLNA and clients
  volumes:
    - C:/yourpath:/config
    - C:/yourpath:/cache
    - type: bind
      source: C:/yourpathto/media
      target: /media
    - type: bind
      source: C:/yourpathto/media2
      target: /media2
      read_only: true
  restart: 'unless-stopped'
  environment:
    - TZ=Your/Timezone
    # Optional - alternative address used for autodiscovery
    - JELLYFIN_PublishedServerUrl=https://example.duckdns.org
  # Optional - may be necessary for docker healthcheck to pass if running in host network mode
  #extra_hosts:
  #  - 'host.docker.internal:host-gateway'
Your Caddyfile should work as is if you’re using host mode. Otherwise, add the Jellyfin container to the default network by adding this to its service definition:
networks:
  - default
and change the reverse_proxy target from the local IP to the hostname jellyfin.
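Put together, the relevant site block might look like this (a sketch, using `example.duckdns.org` as a stand-in for your real hostname):

```
example.duckdns.org {
	# "jellyfin" resolves via Docker's internal DNS because both
	# containers share a network; 8096 is Jellyfin's HTTP port
	reverse_proxy jellyfin:8096
}
```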
Do you mean like this? I have replaced the IP address with jellyfin
Here are the jellyfin-compose.yml and updated docker-compose.yml for Caddy.
In the caddy docker compose.yml, should I change the name of the network to host? Or leave it as is?
Kindly let me know if there are things I need to correct before creating and running these.
Yep, that looks fine to me. You don’t need to use network_mode: 'host' if you have ports exposed. That means it uses your computer’s network instead of a Docker network. However, if your clients are having a hard time discovering your Jellyfin server, you may need to use it. In your current Caddyfile, you’d need to remove host networking and put the jellyfin service in the network damionixnet. That’s the only way it can resolve by the hostname jellyfin. If you decide to keep it on host, then use the local IP like you were before.
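For reference, joining both services to the same user-defined network could look like this (a sketch; damionixnet is the network name from your setup, and the service bodies are elided):

```yaml
services:
  caddy:
    # ... image, ports, volumes ...
    networks:
      - damionixnet
  jellyfin:
    # ... image, volumes, environment ...
    networks:
      - damionixnet

networks:
  damionixnet:
    driver: bridge
```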
I have added networks: damionixnet to the Jellyfin compose.yml and removed network_mode: 'host'.
I am so new to all this, and learning as I go.
I have left the Caddy docker compose and all else as is.
Also, in the Caddy compose.yml I kept ports 80 and 443; will the same error come up again about the ports already being listened on?
Thanks.
Quick summary: Let’s Encrypt has to check that you own the domain name (like damionix.duckdns.org) that you are trying to get a certificate for. The default way is that it connects over HTTP, which means that Caddy has to be on port 80 and you can’t use that port for something else like Nextcloud.
The alternative way is that you put a special token that they give you into the DNS, which in your case is managed via duckdns.org, to prove that you own that particular domain name.
It’s a bit more complicated to set up though because you need an extension for Caddy to support the DNS provider you’re using. It’s covered in that link at the start of my reply.
To add to what @hmoffatt said, the DNS provider you are using is DuckDNS. It looks like you are using certificates generated somewhere else. In your case, you can have Caddy take care of all the certificates without you needing to touch them. If you want to do that, you can remove the - C:/Caddy/certs:/certs volume mount, as Caddy already has a default folder in the container for certificate storage. You already have the DuckDNS module, so that’s taken care of. If you are curious about other ways,
I've listed them here.
The module can be downloaded, built into the image with a Dockerfile as you have already done, or built with xcaddy in the CLI (Command Prompt on Windows).
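For reference, the Dockerfile approach follows the standard builder pattern from the Caddy Docker image documentation; the version tag here is an assumption, pin whichever release you run:

```dockerfile
# Build a custom caddy binary with the DuckDNS DNS module
FROM caddy:2.9.1-builder AS builder
RUN xcaddy build --with github.com/caddy-dns/duckdns

# Copy the custom binary into the plain runtime image
FROM caddy:2.9.1
COPY --from=builder /usr/bin/caddy /usr/bin/caddy
```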
If you downloaded it instead of building it with the Docker image, you’d add/replace the caddy executable; you could probably put it in C:/Caddy under the name caddy. If you did that, you would add the following to your volume mounts in the Caddy service’s compose.yaml:
- C:/Caddy/caddy:/usr/bin/caddy
Restart Caddy, and the caddy executable will immediately take effect.
All you need to do is add this to your global config in Caddyfile:
{
acme_dns duckdns your-token-here
}
A block starts with { and ends with }. The global block is the first block in your Caddyfile.
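Putting it together, a minimal Caddyfile might look like this (a sketch; the token is a placeholder, and the site block assumes the jellyfin container setup from earlier in this thread):

```
{
	# Use the DNS challenge via DuckDNS for all sites,
	# so port 80 is not needed for certificate validation
	acme_dns duckdns your-token-here
}

damionix.duckdns.org {
	reverse_proxy jellyfin:8096
}
```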