Thank you for your guide. It really helped me deploy Caddy with Nextcloud and Collabora. However, I do have a small issue with Collabora.
First, I don’t use a frontend/backend scenario. I only have one Caddy instance, which points to my domain and forwards the requests.
I went through the checking steps you mentioned above, and one of them fails: when I try to access https://nextcloud.wonkypaw.org/loleaflet/dist/admin/admin.html, I get a 400 error.
Looking at the Collabora Docker logs, I see the error below. For some reason, it doesn’t recognise my domain.
# Nextcloud and Collabora
nextcloud.wonkypaw.org {
	encode gzip

	@collabora {
		path /loleaflet/* # Loleaflet is the client part of LibreOffice Online
		path /hosting/discovery # WOPI discovery URL
		path /hosting/capabilities # Show capabilities as JSON
		path /lool/* # Main websocket, uploads/downloads, presentations
	}
	reverse_proxy @collabora https://127.0.0.1:9980 {
		header_up Host "nextcloud.wonkypaw.org"
		transport http {
			tls_insecure_skip_verify
		}
	}

	tls {
		dns cloudflare {env.CLOUDFLARE_API_TOKEN}
	}

	file_server
	root * /var/www/html
	php_fastcgi 127.0.0.1:9080 {
		env front_controller_active true # Remove index.php from URL
	}
	reverse_proxy http://127.0.0.1:9080 {
		header_up X-Forwarded-Host {host}
	}

	header {
		Strict-Transport-Security max-age=31536000; # enable HSTS
	}

	# .htaccess / data / config / ... shouldn't be accessible from outside
	@forbidden {
		path /.htaccess
		path /data/*
		path /config/*
		path /db_structure
		path /.xml
		path /README
		path /3rdparty/*
		path /lib/*
		path /templates/*
		path /occ
		path /console.php
	}
	respond @forbidden 404
}
What am I doing wrong? I only added the TLS configuration and the reverse proxy for my Nextcloud instance. The rest of the configuration is the same as yours.
The docker-compose setup is the same: I deploy all three containers from a single docker-compose file.
If anyone is interested in how to deploy Nextcloud with Collabora behind Caddy, this is my final Caddyfile configuration. The configuration below is for Nextcloud 23+.
In the latest version, two paths were renamed: loleaflet became browser and lool became cool.
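For anyone adapting the old matcher, the rename only affects the two wildcard paths; here's a sketch of the updated matcher (assuming a recent Collabora image, with my domain kept from above):

```
@collabora {
	path /browser/* # formerly /loleaflet/*
	path /hosting/discovery # WOPI discovery URL
	path /hosting/capabilities # Show capabilities as JSON
	path /cool/* # formerly /lool/*
}
reverse_proxy @collabora https://127.0.0.1:9980 {
	header_up Host "nextcloud.wonkypaw.org"
	transport http {
		tls_insecure_skip_verify
	}
}
```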
For docker-compose configuration follow the tutorial above, nothing much changed there.
If Collabora is not able to resolve the DNS of your domain, add the corresponding entries to the /etc/hosts file on the machine running the server. In my case, I added two entries pointing to the server IP where the Docker instances are running. I wasn’t expecting this, since I am running my own Pi-hole DNS server and these records are already in Pi-hole.
Docker is also configured to use my Pi-hole DNS server to resolve names, but for some weird reason it is not able to resolve them.
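For reference, the workaround entries look something like this (the IP is a placeholder for the Docker host's LAN address, and the second hostname is hypothetical; use whatever domains Collabora needs to resolve):

```
# /etc/hosts on the Docker host; 192.168.1.10 stands in for the server's LAN IP
192.168.1.10	nextcloud.wonkypaw.org
192.168.1.10	collabora.wonkypaw.org	# hypothetical second hostname
```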
Also: does Nextcloud + Caddy work for you as expected?
I see major performance issues, to the point that it becomes basically unusable due to 3896 - Frequent hangs when using http/2 push, and it seems there’s no way to disable HTTP/2 (to fall back to HTTP/1.1) in Caddy v2 (in Caddy v1, -http2=false used to work).
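For what it’s worth, newer Caddy v2 releases (2.5.0 and later, if I remember right) did add a global servers option that can restrict the accepted protocols, which effectively disables HTTP/2; a sketch, not available on older v2 builds:

```
{
	servers {
		protocols h1 # accept HTTP/1.1 only; the default is h1 h2 h3
	}
}
```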
The only major difference between our setups: I’m using PHP-FPM’s AF_UNIX socket instead of AF_INET on 127.0.0.1.
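In case it helps to compare notes, the unix-socket variant of the directive only changes the address; a sketch assuming a typical socket path (yours may differ):

```
# php_fastcgi over an AF_UNIX socket instead of 127.0.0.1
php_fastcgi unix//run/php/php-fpm.sock {
	env front_controller_active true
}
```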
I just finished deploying Nextcloud; so far I don’t see any issues with performance. It works as expected.
I am using the default Docker image, which, from what I understood, runs an Apache server behind it.
I tried using the FPM image but didn’t have any luck with it; I kept getting a 502 error when trying to log in. I’m new to this setup, by the way, so if I get something wrong, please let me know.
Thanks for pointing out the typo. I tried editing the post but I can’t do it anymore. Weird!
I might give a non-FPM setup a try to see whether it behaves differently.
Right now, I’m about to switch from php_fastcgi to reverse_proxy to see whether this helps in any way, or whether the problem is not about the backend connection and the HTTP/2 bug really is about the client <> Caddy connection.
My setup is Docker with Rancher Server 1.6 to manage my containers. Good luck with your migration; hopefully it will go better.
Speaking of performance: I configured the client to connect to my server, and because I am using virtual files, it takes a while to scan all my files from the SMB share. I did notice that the client connection dropped for about 2–3 minutes. I am using openSUSE Tumbleweed, and the virtual files feature is still under development on Linux, so maybe this has nothing to do with the server and is more about the client. I will keep an eye on it and let you know.
I want to ask you a question: is Nextcloud always slow on the initial sync?
I am syncing around 5.6 GB of data and it took about 3 hours, whereas something like Resilio Sync takes minutes. Everything runs inside the local network at 1 Gbit/s, so there’s no external connectivity involved.
This kind of puts me off. Is this the performance issue you were talking about?
I referred mostly to the web UI’s/API performance, which seems to have quite some room for improvement.
Can’t really speak in terms of file sync performance, as I use it only for a few smaller files so far. I only recently synced around 10 GiB of photos for a few functional tests, and while I didn’t closely monitor it, the performance seemed reasonable for a cloud sync (non-local network).
For me, syncing internally was slow on the initial sync. Maybe this is just how Nextcloud works: it needs to scan all the files first before syncing them.