Slow Transfer Speed

1. The problem I’m having:

I would like to express my sincere gratitude to the community for the invaluable support provided throughout my journey.

Currently, I am facing an issue with slow transfer speeds on my Caddy and Nextcloud setup. To provide some context, I am running two Ubuntu cloud-init VMs on Proxmox, each residing in a separate VLAN and equipped with Docker. One VM hosts a Caddy container with the QUIC protocol configured, while the other hosts a Nextcloud container. I keep them in separate VMs so that, in case I ever have to restore my Nextcloud VM, all of my site redirects/services will keep working while I'm fixing Nextcloud.

Interestingly, individual speed tests conducted on both VMs demonstrate the full utilization of the 1 Gig bandwidth. Moreover, remote clients, equipped with a 1 Gig internet connection, report similar speed test results to mine.

However, during actual transfers to clients, the speed does not exceed 120 Mbps.

It’s worth noting that I utilize Cloudflare for DNS management, operating in DNS mode only without proxying for my domains.

Regarding firewall configurations, there are no speed or bandwidth restrictions imposed on the Nextcloud and Caddy VMs. Specifically, the firewall settings allow the Nextcloud VM to communicate with the Caddy VM over ports 443 and 80, and the Caddy VM can communicate with Nextcloud on all ports.

I would appreciate insights on whether any unnecessary configurations exist in my Caddyfile or Nextcloud setup, and if there are recommendations for optimizing the container setup or composing a more efficient configuration.

2. Caddy version: v2.7.6

3. How I installed and ran Caddy: Docker

a. System environment:

VM in a separate VLAN on Proxmox

b. My complete Caddy config:

test.example.com, test2.example.com, test1.example.com {
    reverse_proxy 192.168.1.16:8443
    root * /var/www/html
    file_server
    encode zstd gzip

    redir /.well-known/carddav /remote.php/dav/ 301
    redir /.well-known/caldav /remote.php/dav/ 301
    redir /.well-known/webfinger /index.php/.well-known/webfinger 301
    redir /.well-known/nodeinfo /index.php/.well-known/nodeinfo 301

    header {
        # disable FLoC tracking
        Permissions-Policy interest-cohort=()

        # enable HSTS
        Strict-Transport-Security max-age=31536000;

        # keep referrer data off of HTTP connections
        Referrer-Policy no-referrer-when-downgrade
    }

    # .htaccess / data / config / ... shouldn't be accessible from outside
    @forbidden {
        path /.htaccess
        path /data/*
        path /config/*
        path /db_structure
        path /.xml
        path /README
        path /3rdparty/*
        path /lib/*
        path /templates/*
        path /occ
        path /console.php
    }
    handle @forbidden {
        respond 404
    }
}

c. Caddy Docker compose file:

version: "3.8"
services:
  caddy:
    image: caddy:latest
    container_name: caddy
    restart: unless-stopped
    ports:
      - 80:80
      - 443:443
      - 443:443/udp
    volumes:
      - /home/ubuntu/docker/caddy/Caddyfile:/etc/caddy/Caddyfile
      - /home/ubuntu/docker/caddy/site:/srv
      - /home/ubuntu/docker/caddy/caddy_data:/data
      - /home/ubuntu/docker/caddy/caddy_config:/config

    labels:
      - com.centurylinklabs.watchtower.monitor-only=true
    network_mode: host

d. My Nextcloud Docker Compose:

version: '3'

services:
  db:
    image: mariadb:11.3.2
    container_name: nc_db
    restart: unless-stopped
    command: --transaction-isolation=READ-COMMITTED --binlog-format=ROW --innodb-file-per-table=1 --innodb-read-only-compressed=OFF

    volumes:
      - /home/nc/docker/nextcloud/nextca/db:/var/lib/mysql
    ports:
      - 3306:3306
      
    environment:
      - MYSQL_ROOT_PASSWORD=password
      - MYSQL_PASSWORD=password
      - MYSQL_DATABASE=db
      - MYSQL_USER=nc
    labels:
      - com.centurylinklabs.watchtower.monitor-only=true
    networks:
      - dmz_net
  redis:
    image: redis:7.2.4
    container_name: nc_redis
    restart: unless-stopped
    command: redis-server --requirepass password
   # ports:
    #  - 6378:6378
    labels:
      - com.centurylinklabs.watchtower.monitor-only=true
    networks:
      - dmz_net 
  app:
    image: nextcloud:28.0.4
    container_name: nc
    restart: unless-stopped
    ports:
      - 8443:80

    links:
      - db
      - redis
    volumes:
      - /home/nc/docker/nextcloud/nextca/data:/var/www/html
      - /home/nc/cloud:/ext_next
      - /home/nc/docker/nextcloud/nextca/nextcloud-apache.conf:/etc/apache2/conf-enabled/nextcloud-apache.conf:ro

    
    environment:
      - MYSQL_PASSWORD=password
      - MYSQL_DATABASE=db
      - MYSQL_USER=nc
      - MYSQL_HOST=nc_db
      - REDIS_HOST=nc_redis
      - REDIS_HOST_PASSWORD=password
      - NEXTCLOUD_INIT_HTACCESS=true
      - TZ=America/New_York

      
    labels:
      - com.centurylinklabs.watchtower.monitor-only=true
    extra_hosts:
        - test.example.com:192.168.60.60 #host and ip
  
    depends_on:
      - db
      - redis
    networks:
      - dmz_net

networks:
  dmz_net:
    external: true

This doesn’t really make sense – reverse_proxy has a higher directive order than file_server, so it “shadows” file_server (i.e. it never gets run).

What are you trying to do exactly?

You need to use request matchers to tell Caddy how to split up the traffic (i.e. which traffic to send to the file server, and what to proxy).
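As a sketch of that kind of split using a request matcher (the `/static/*` path here is purely hypothetical, not taken from the original config — Nextcloud itself serves its own files, so this pattern would only apply if you genuinely want Caddy to serve some files directly):

```
example.com {
    root * /var/www/html

    # serve only /static/* from disk
    @static path /static/*
    handle @static {
        file_server
    }

    # everything else goes to the backend
    handle {
        reverse_proxy 192.168.1.16:8443
    }
}
```

With `handle` blocks, only one branch runs per request, so `file_server` and `reverse_proxy` no longer shadow each other.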

I’m confused by this, the port number 8443 implies that this is probably an HTTPS endpoint (443 is the HTTPS port, so 8443 implies HTTPS).

You should use a different port number like 8080 or something like that which doesn’t have the implication of HTTPS. Less confusing.

Why are you running Caddy in host mode?

The recommended way to use Caddy as a proxy is to have the containers you want to proxy to share a Docker network with Caddy, then you can simply proxy to the container using its container name. So you’d do reverse_proxy nc:80 for example. Then you don’t need to expose a port for Nextcloud on the host (i.e. you can remove ports:).

The advantage of that is you guarantee that the only way to reach the service is through Caddy, with TLS. Having the app bind to the host means any machine in the same network can reach your app directly without TLS.
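Assuming both containers ran on the same Docker host and joined a shared external network (the network name `web` below is an assumption, as is reusing the container name `nc`), a sketch of that setup could look like:

```yaml
# Sketch: Caddy and Nextcloud on one shared external Docker network,
# so Caddy reaches the app by container name and no host port is exposed.
# Create the network once with: docker network create web
services:
  caddy:
    image: caddy:latest
    ports:
      - "80:80"
      - "443:443"
      - "443:443/udp"
    networks:
      - web

  app:
    image: nextcloud:28.0.4
    container_name: nc
    # no "ports:" — reachable only through the "web" network, via Caddy
    networks:
      - web

networks:
  web:
    external: true
```

The Caddyfile would then use `reverse_proxy nc:80` instead of an IP and port. Note this only works when both containers share a Docker network on the same host; across two VMs (as in this thread), Caddy still has to proxy to the other VM's IP.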

You probably shouldn’t be exposing your database to the host. That’s a security vulnerability because now any machine on your network can directly access the database, bypassing app authentication etc. (Obviously they still need the DB user/pass, but it’s still a big risk).
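A minimal sketch of the `db` service with the host port mapping dropped (other settings from the original compose file elided):

```yaml
  db:
    image: mariadb:11.3.2
    container_name: nc_db
    # "ports: - 3306:3306" removed — MariaDB remains reachable on 3306
    # from containers on the same Docker network (e.g. the app service
    # via MYSQL_HOST=nc_db), but not from other machines on the LAN
    networks:
      - dmz_net
```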

Sorry, I saw this in another config in a post and thought to use it in mine because Nextcloud is a file server. I'll remove that. Or, if you know the config to make it work, in case it's needed to help with speed, let me know.

Sorry about that, I put a random port off the top of my head so as not to show my real port. I have a different port, but we can say it's 8080.

I read that if run in host network mode it will run faster. But I have Caddy and Nextcloud in separate VMs so that if something happens to my Nextcloud and I have to roll it back, I don't lose my redirects for my other sites and services.

I see the value in your solution. Can I run 2 Caddy instances to remedy this, by copying over the Caddy data and config and using one instance only to reverse proxy my Nextcloud, while the other Caddy instance on my VM hosts my redirects and other services?

I had an error with the database not being able to connect to Nextcloud, and this fixed it, but I'm not sure why it didn't work without the exposed port, especially since they're in the same Docker network. (I just removed them and it works now, thank you!)

Besides these issues any other reason my transfers are slow?

Port numbers are not secret information. You don’t need to obfuscate that, it doesn’t matter.

Domains are not secret either. All TLS certs that are issued get publicized via Certificate Transparency logs (e.g. https://crt.sh/). Omitting that information only makes it harder for us to help because we can’t do testing from our end to confirm/debug the problems you’re facing.

Not really. If there’s any difference, it’s on the order of 1% or less.

Ah, then possibly there’s added latency from the VM’s networking. Ideally you’re using very low overhead VM software.

That wouldn’t help with performance.

OK, thank you. Below is a link to my Nextcloud instance; if you have time, try uploading a file to it and you can see the speed. The port is 8189.

Test link

It's also in a separate VLAN as well. Any other reason performance may be taking such a hit?

(It was working perfectly when I had it on bare metal. I moved to Proxmox so I'm able to restore if there are any issues, especially with new Nextcloud updates.)

Just want to say thank you again @francislavoie for your time and for helping me clean up my configs. I found the issue: the client moved to a different site. I was used to their connection being 1G up and down, but the new location only downloads at 1G and doesn't upload at the full 1G, so there was the problem. When they downloaded from my server all was well, but when they had to upload, there was a cap on the speed, and it was an issue on their end.
