Assets reload unexpectedly when using a reverse proxy, unless in "private browsing"

1. The problem I’m having:

I run two Caddy servers: one serves nextcloud-fpm, and the other acts as a reverse proxy in front of the first. If I connect to Nextcloud via the reverse proxy, some images are reloaded on every refresh, same as in this reddit thread. Note how the cloudy background image gets reloaded every time.
If I instead connect directly to the Nextcloud server's Caddy, OR if I go via the reverse proxy but in a "private browsing" tab, the problem doesn't occur.
I'm seeing this behavior in both Firefox and Chrome.

2. Error messages and/or full log output:

I'm not sure which logs, or under which circumstances, would be useful.
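If it helps, I can enable access logging on either Caddy instance. A minimal sketch of what I would add (the log path is just a placeholder):

{
    debug  # global option: raise runtime log verbosity
}

nc.local {
    log {
        output file /var/log/caddy/access.log  # any writable path inside the container
    }
    # ... rest of the site block unchanged
}

That should at least show whether the asset requests reach Caddy on every refresh.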

3. Caddy version:

reverse proxy:

v2.9.1 h1:OEYiZ7DbCzAWVb6TNEkjRcSCRGHVoZsJinoDR/n9oaY=

server:

v2.9.1 h1:OEYiZ7DbCzAWVb6TNEkjRcSCRGHVoZsJinoDR/n9oaY=

4. How I installed and ran Caddy:

a. System environment:

Docker

b. Command:

docker compose up -d

c. Service/unit/compose file:

reverse-proxy:

services:

  reverse-proxy:
    container_name: reverse-proxy
    image: caddy:alpine
    restart: unless-stopped
    ports:
      - 80:80
      - 443:443
    security_opt:
      - no-new-privileges
    environment:
      - TZ=Europe/Berlin
    networks:
      - reverse_proxy
    volumes:
      - ./Caddyfile:/etc/caddy/Caddyfile
      - ./caddy/data:/data
      - ./caddy/config:/config
      - ./certs:/certs


networks:

  reverse_proxy:
    external: true

nextcloud:

secrets:

  nextcloud_db_passwd:
    file: ./nextcloud_db_passwd
  nextcloud_redis_passwd:
    file: ./nextcloud_redis_passwd
  nextcloud_admin_passwd:
    file: ./nextcloud_admin_passwd


services:

  db:
    container_name: nc_db
    image: postgres:17-alpine
    restart: unless-stopped
    security_opt:
      - no-new-privileges
    secrets:
      - nextcloud_db_passwd
    healthcheck:
      test: ["CMD-SHELL", "pg_isready -d postgres -U $${POSTGRES_USER}"]
      start_period: 10s
      interval: 30s
      retries: 5
      timeout: 5s
    environment:
      - TZ=Europe/Berlin
      - POSTGRES_USER=postgres
      - POSTGRES_PASSWORD_FILE=/run/secrets/nextcloud_db_passwd
    networks:
      - internal
    volumes:
      - ./dbdata:/var/lib/postgresql/data

  redis:
    container_name: nc_redis
    image: docker.io/valkey/valkey:8-alpine
    restart: unless-stopped
    security_opt:
      - no-new-privileges
    secrets:
      - nextcloud_redis_passwd
    healthcheck:
      test: ["CMD-SHELL", "redis-cli", "-a", "$$(cat /run/secrets/paperless_redis_passwd)", "--raw", "incr", "ping"]
      start_period: 10s
      interval: 30s
      retries: 5
      timeout: 3s
    environment:
      - TZ=Europe/Berlin
    command: sh -c 'valkey-server --requirepass "$$(cat /run/secrets/nextcloud_redis_passwd)"'
    networks:
      - internal

  server:
    container_name: nc_server
    image: caddy:alpine
    restart: unless-stopped
    networks:
      - reverse_proxy
      - internal
    security_opt:
      - no-new-privileges
    environment:
      - TZ=Europe/Berlin
    volumes:
      - ./Caddyfile:/etc/caddy/Caddyfile
      - ./caddy/data:/data
      - ./caddy/config:/config
      - ./ncdata:/var/www/html
      - /mnt/md0/ncuserdata:/var/www/html/data
    depends_on:
      - nextcloud

  nextcloud:
    container_name: nc_app
    image: nextcloud:stable-fpm-alpine
    restart: unless-stopped
    user: 82:82
    security_opt:
      - no-new-privileges
    depends_on:
      db:
        condition: service_healthy
      redis:
        condition: service_healthy
    secrets:
      - nextcloud_db_passwd
      - nextcloud_redis_passwd
      - nextcloud_admin_passwd
    environment:
      - TZ=Europe/Berlin
      - REDIS_HOST=redis
      - REDIS_HOST_PASSWORD_FILE=/run/secrets/nextcloud_redis_passwd
      - POSTGRES_HOST=db
      - POSTGRES_DB=postgres
      - POSTGRES_USER=postgres
      - POSTGRES_PASSWORD_FILE=/run/secrets/nextcloud_db_passwd
      - NEXTCLOUD_ADMIN_USER=s
      - NEXTCLOUD_ADMIN_PASSWORD_FILE=/run/secrets/nextcloud_admin_passwd
      - NEXTCLOUD_TRUSTED_DOMAINS=nc_server nc.local  # nc_server is for self-connect
      - TRUSTED_PROXIES=172.19.0.0/24 172.27.0.0/24
    networks:
      internal:
        ipv4_address: 172.21.0.10  # for ufw rule
    volumes:
      - ./ncdata:/var/www/html
      - /mnt/md0/ncuserdata:/var/www/html/data
      - ./nextcloud-php-fpm.conf:/usr/local/etc/php-fpm.d/zzz-custom.conf:ro  # php-fpm custom conf


networks:

  internal:
    driver: bridge
    ipam:
      config:
        - subnet: 172.21.0.0/16
          gateway: 172.21.0.1

  reverse_proxy:
    external: true

d. My complete Caddy config:

reverse-proxy:

{

#    log {
#        level DEBUG
#        output file /var/log/access.log
#    }

}

nc.local {

    tls /certs/nc.local.crt /certs/nc.local.key


    redir /.well-known/carddav /remote.php/dav/ 301
    redir /.well-known/caldav /remote.php/dav/ 301

    reverse_proxy nc_server:443 {
        transport http {
            tls_insecure_skip_verify  # because local cert
        }
    }

}

nc_server:

{

#    log {
#        level DEBUG
#        output file /var/log/access.log
#    }

}

(nc) {

    request_body {
        max_size 10G
    }

    # Enable gzip but do not remove ETag headers
    encode {
        zstd  # 'fastest', 'better', 'best', 'default'
        gzip

        minimum_length 256

        match {
                header Content-Type application/atom+xml
                header Content-Type application/javascript
                header Content-Type application/json
                header Content-Type application/ld+json
                header Content-Type application/manifest+json
                header Content-Type application/rss+xml
                header Content-Type application/vnd.geo+json
                header Content-Type application/vnd.ms-fontobject
                header Content-Type application/wasm
                header Content-Type application/x-font-ttf
                header Content-Type application/x-web-app-manifest+json
                header Content-Type application/xhtml+xml
                header Content-Type application/xml
                header Content-Type font/opentype
                header Content-Type image/bmp
                header Content-Type image/svg+xml
                header Content-Type image/x-icon
                header Content-Type text/cache-manifest
                header Content-Type text/css
                header Content-Type text/plain
                header Content-Type text/vcard
                header Content-Type text/vnd.rim.location.xloc
                header Content-Type text/vtt
                header Content-Type text/x-component
                header Content-Type text/x-cross-domain-policy
         }
    }

    header {
        Strict-Transport-Security "max-age=15768000; includeSubDomains"
        Referrer-Policy no-referrer
        X-Content-Type-Options nosniff
        X-Download-Options noopen
        X-Frame-Options SAMEORIGIN
        X-Permitted-Cross-Domain-Policies none
        X-Robots-Tag noindex,nofollow
        X-XSS-Protection "1; mode=block"
    }

    root * /var/www/html

    route {

        route /robots.txt {
            log_skip
            file_server
        }


        # Add exception for `/.well-known` so that clients can still access it
        # despite the existence of the `error @internal 404` rule which would
        # otherwise handle requests for `/.well-known` below
        route /.well-known/* {

            redir /.well-known/carddav /remote.php/dav/ permanent
            redir /.well-known/caldav /remote.php/dav/ permanent

            @well-known-static path \
                /.well-known/acme-challenge /.well-known/acme-challenge/* \
                /.well-known/pki-validation /.well-known/pki-validation/*

            route @well-known-static {
                try_files {path} {path}/ =404
                file_server
            }

            redir * /index.php{path} permanent
        }


        @internal path \
            /build /build/* \
            /tests /tests/* \
            /config /config/* \
            /lib /lib/* \
            /3rdparty /3rdparty/* \
            /templates /templates/* \
            /data /data/* \
            \
            /.* \
            /autotest* \
            /occ* \
            /issue* \
            /indie* \
            /db_* \
            /console*
        error @internal 404

        @assets {
            path *.css *.js *.mjs *.svg *.gif *.png *.jpg *.jpeg *.webp *.ico *.wasm *.tflite *.woff2
            file {path}  # Only if requested file exists on disk, otherwise /index.php will take care of it
        }
        route @assets {
            header /*       Cache-Control "max-age=15552000"   # Cache-Control policy borrowed from `.htaccess`
            header /*.woff2 Cache-Control "max-age=604800"     # Cache-Control policy borrowed from `.htaccess`
            log_skip                                           # Optional: Don't log access to assets
            file_server {
                precompressed gzip
            }
        }

        # Rule borrowed from `.htaccess`
        redir /remote/* /remote.php{path} permanent

        # required for legacy support
        @notlegacy {
                path *.php *.php/
                not path /index*
                not path /remote*
                not path /public*
                not path /cron*
                not path /core/ajax/update*
                not path /status*
                not path /ocs/v1*
                not path /ocs/v2*
                not path /ocs-provider/*
                not path /updater/*
                not path */richdocumentscode/proxy*
        }
        rewrite @notlegacy /index.php{uri}

        # Serve found static files, continuing to the PHP default handler below if not found
        try_files {path} {path}/
        @notphpordir not path /*.php /*.php/* / /*/
        file_server @notphpordir {
            pass_thru
        }

        # Let everything else be handled by the PHP-FPM component
        php_fastcgi nextcloud:9000 {
            env modHeadersAvailable true         # Avoid sending the security headers twice
            env front_controller_active true     # Enable pretty urls
        }

    }

}


:80 {
    import nc
}

:443 {
    tls internal {
        on_demand
    }
    import nc
}

5. Links to relevant resources:

If I understand your problem, you are getting an image flash under certain reverse-proxy configurations.

A couple of debugging thoughts:

  1. Network tab: are the assets fetched or cached differently under the different setups? Are all your assets fetched as you expect under both conditions?
  2. You have a few matchers doing a number of specific things. It may be worth verifying that requests go through the paths you expect. I do this by liberally putting in respond directives that respond with a string expanding to some variable values, so I can see what is going on. An example from one of my Caddyfiles:
# respond "s3Get {host} {http.vars.root}{path} auth: 0.0.0.0:{core_port}/auth{http.request.uri}"

I can easily turn the response on by uncommenting it. I have these sprinkled throughout my Caddyfile to verify all the paths I expect.
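For instance, here is a runnable sketch of the same trick applied to your asset paths (the matcher name and message text are arbitrary):

@imgdebug path *.webp *.png *.jpg
respond @imgdebug "asset hit: {host}{path} via {scheme}" 200

If the flashing image suddenly comes back as that text, you know exactly which route the request took; comment the respond out again when you are done.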

Hi, thank you for your input.

First, to clarify: with the same setup (reverse-proxy Caddy → Nextcloud Caddy), I'm experiencing two different behaviors. When I access Nextcloud in a normal browser window, some things, most notably the background image, are reloaded every time I switch the view within Nextcloud (e.g., going from "Files" to "Settings"). This causes the static background color to show briefly before the background image loads in again on top, producing the flashing effect.
If I instead open Nextcloud the same way in a "Private Browsing" window, I get what I assume to be the intended behavior, where the background image persists between views, which feels much smoother.
I'm not sure what Firefox's Private Browsing does that would cause this difference. I've cleared browser cookies and site data to the best of my knowledge between attempts to make sure it isn't just that.


Looking at the Network tab: in the PB window, the background image (jenna-kim-the-globe-dark.webp in my case) appears on the initial page load or if I hard-refresh via Ctrl+F5 (two entries, as in the above image). Afterwards, when switching views or refreshing via plain F5, this particular image no longer appears in the Network tab.
In the normal browser window, both entries for the background image appear every time, but both say "cached" under the "Transferred" column.
The two cases also show different "requests" and "MB transferred" totals in the Network tab on a page refresh. I'm not sure what else to look for here that might help debug.
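In case it helps narrow things down, one thing I could try is a marker header with a timestamp, so the Network tab shows whether a response actually hit the proxy (a sketch; X-Debug-Via is a made-up header name):

# in the reverse-proxy site block for nc.local
header X-Debug-Via "reverse-proxy {time.now.unix}"

If the header's value stays frozen across refreshes, the browser served the image from its cache without ever contacting Caddy.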

Regarding the Caddyfiles, I've reduced the matchers. Here are the current Caddyfiles, which still show the same behavior:

nc.local {

    tls /certs/nc.local.crt /certs/nc.local.key

    reverse_proxy nc_server:443 {
        transport http {
            tls_insecure_skip_verify  # because local cert
        }
    }

}
(nc) {

    root * /var/www/html

    file_server
    php_fastcgi nextcloud:9000

}


:80 {
    import nc
}

:443 {
    tls internal {
        on_demand
    }
    import nc
}

Edit: I just did a fresh install of Chromium and everything works as expected there. Not sure what to make of that.

That is progress…

  1. Why is your image being fetched over both the http:// and https:// protocols? I assume you only want/need it from https://?
  • What is the URL you are fetching? Which protocol?
  • If you are using the http:// protocol, it is possible that the browser is automatically upgrading it to https://.
  2. When your page fetches images from its cache, it is not going to your Caddy server at all. So your failing case happens because your images are in the cache; it is not likely a Caddy issue (see the sketch after this list).
  3. Why do you even serve data over http://?
  4. Is there a reason you are using local certs? Caddy will serve up your certs for you, assuming you have a domain.
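To test the browser-cache theory from point 2, you could temporarily force the suspect asset type to skip the cache (a sketch; the path pattern is just an example matching your background image):

# inside your (nc) snippet
@bg path *.webp
header @bg Cache-Control "no-store"

If the flash behavior changes once the image is refetched on every view, the cache is what differs between your normal and private windows.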

I know nothing about Nextcloud. Have you asked these questions of a decent AI? It has saved my bacon multiple times, especially with Caddy configuration (a sometimes dark art). Back up, explain what you're trying to do, and the AI will often pull in an approach I had not thought of.

  1. Why is your image being fetched over both the http:// and https:// protocols? I assume you only want/need it from https://?
  3. Why do you even serve data over http://?

I'm not sure. That's the output as it shows up in the Network tab when I visit the site via https://nc.local, which my local DNS points at the device's IP on the local network. It could be a Nextcloud thing or an issue with the configurations above.

Is there a reason you are using local certs? Caddy will serve up your certs for you, assuming you have a domain.

No domain. I set up local certs for learning, and it's working fine for me otherwise, so I just kept it. It gets rid of the "insecure connection" nags :sweat_smile:

No success with google-fu or chatbots, unfortunately. Also, as per my edit, I've tried two new cases: 1) a fresh Chromium install on the same machine works as expected, i.e., no flashing images; 2) a fresh Firefox install on a different machine immediately shows the flashing behavior as well. So I'm guessing it's maybe a Firefox thing. It's just very curious to me, as a layman, why a Firefox "Private Browsing" tab behaves differently.

OK, here are a couple of thoughts:

  1. Your public site is nc.local, and you are handling certs for that.
  2. You are reverse proxying to the 443 port. That is not necessary for this internal connection.
  3. Currently you have two Caddy servers listening on port 443, and they will both process those requests.

Try this change:

Reverse Proxy caddy:

nc.local {

    tls /certs/nc.local.crt /certs/nc.local.key

    reverse_proxy nc_server:4001

}

nextcloud caddy:

(nc) {

    root * /var/www/html

    file_server
    php_fastcgi nextcloud:9000

}

:4001 {
    import nc
}

Then the Caddy reverse proxy is the only server listening on 443.

Of course, you could collapse these into one Caddy server, too. There is no need to run two servers in this case.
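Something like this, reusing your (nc) snippet directly (a sketch; cert paths as in your current config):

nc.local {
    tls /certs/nc.local.crt /certs/nc.local.key
    import nc  # serve Nextcloud directly, no second hop
}

One container, one Caddyfile, and the internal TLS hop disappears entirely.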

Your public site is nc.local, and you are handling certs for that.

It seems to be generally working, as far as I can see. That said, if I disable https and go http all the way, I don't experience the flashing problem. I do have access to another Nextcloud instance that uses a proper certificate, and that instance also has the flashing issue, so I would assume it's not a local-cert problem.

You are reverse proxying to the 443 port. That is not necessary for this internal connection.

True. Fwiw, the Nextcloud Caddy had a :80 site block because Nextcloud has a self-diagnosis check (are certain paths accessible that should/shouldn't be open? Are the recommended headers set? etc.) that did not work correctly until I added the :80 block. I've removed it now for the purposes of debugging this problem.

Currently you have two Caddy servers listening on port 443, and they will both process those requests.

I was hoping to avoid problems there, because only the reverse-proxy container publishes ports 443 and 80 on the host. Within the internal Docker network, the reverse proxy's site address (nc.local) should avoid any overlap.

Try this change:

The flashing didn't change, unfortunately.

Of course, you could collapse these into one Caddy server, too. There is no need to run two servers in this case.

Usually, the reverse-proxy Caddy is reverse proxying a bunch of other services. I could still collapse them, but architecturally I would prefer to keep the services separate.

Again, thank you for your time.

I've replaced Caddy with nginx for testing and am seeing exactly the same behavior, so I'm guessing it's more of a Nextcloud or Firefox thing and less something that Caddy should work around.