PeerTube Video Platform: Nginx → Caddy

Hi all! This is my first post. :wave:

I’m Robert and I work in the EU institutions on some digitalisation projects. We want to use PeerTube for our videos, and I wonder whether we can get a slimmer setup if we replace nginx with Caddy.

I do not have a bug; I would just appreciate your feedback. I plan to blog about the final config and possibly contribute it upstream to PeerTube.

1. Caddy version (caddy version):

I am using docker with the image caddy:2-alpine. Caddy version: v2.4.6 h1:HGkGICFGvyrodcqOOclHKfvJC0qTU7vny/7FhYp9hNw=

2. How I run Caddy:

In the PeerTube Docker Compose file, I replaced nginx with caddy:2-alpine. No problems here either.

d. My complete Caddyfile or JSON config:

The idea is to rewrite PeerTube’s nginx config for Caddy; the original can be found in the PeerTube repository. Here is my attempt:

# kate: indent-width 8; space-indent on;
{
        # Global options block. Entirely optional, https is on by default
        # Optional email key for lets encrypt
        email {$LETS_ENCRYPT_EMAIL}
        # Optional staging lets encrypt for testing. Comment out for production.
        # acme_ca https://acme-staging-v02.api.letsencrypt.org/directory

        # admin off

        debug

        servers {
                timeouts {
                        idle 1d
                }
        }
}

# https://gist.github.com/yukimochi/bb7c90cbe628f216f821e835df1aeac1?permalink_comment_id=3607303#gistcomment-3607303

{$PEERTUBE_DOMAIN} {
        log {
                level debug
                # format single_field common_log
                output file /logs/access.log
        }

        encode gzip

        root * /var/www/peertube

        @upload_video {
                method POST HEAD
                path /api/v1/videos/upload
        }

        @upload_assets {
                path_regexp ^/api/v1/(videos|video-playlists|video-channels|users/me)
        }

        @download_assets_overrides {
                path_regexp ^/client/(assets/images/(icons/icon-36x36\.png|icons/icon-48x48\.png|icons/icon-72x72\.png|icons/icon-96x96\.png|icons/icon-144x144\.png|icons/icon-192x192\.png|icons/icon-512x512\.png|logo\.svg|favicon\.png|default-playlist\.jpg|default-avatar-account\.png|default-avatar-account-48x48\.png|default-avatar-video-channel\.png|default-avatar-video-channel-48x48\.png))$

                file /storage/client-overrides/{path} /peertube-latest/{path}
        }

        @download_assets {
                path_regexp ^/client/(.*\.(js|css|png|svg|woff2|otf|ttf|woff|eot))$
        }

        @tracker {
                path /tracker/socket*
        }

        @static {
                file /storage/{path}
        }

        handle @download_assets_overrides {
                header Cache-Control "public, max-age=31536000, immutable"
                try_files file /storage/client-overrides/{path} /peertube-latest/{path}
                file_server
        }

        handle @download_assets {
                header Cache-Control "public, max-age=31536000, immutable"
                rewrite * /peertube-latest/{path}
                file_server
        }

        handle @static {
                file_server
        }

        handle {
                request_body @upload_video {
                        max_size 12GB
                }
                header @upload_video X-File-Maximum-Size 8G

                request_body @upload_assets {
                        max_size 6MB
                }
                header @upload_assets X-File-Maximum-Size 4M

                reverse_proxy @tracker peertube:9000 {
                        transport http {
                                response_header_timeout 15m
                        }
                }

                reverse_proxy peertube:9000 {
                        transport http {
                                response_header_timeout 10m
                        }
                }
        }
}

3. The problem I’m having:

I would appreciate your feedback on

  1. how I can better replicate the nginx setup from the PeerTube repository (Chocobozzz/PeerTube on GitHub), and
  2. how I can make my Caddyfile above more DRY and concise.

Thanks for your consideration!

/ Robert


I’d recommend moving your matchers to just above the thing that uses the matcher. It’s easier to read top-down what is being done. Like, in code, you wouldn’t tend to write all your conditions… THEN all your if statements.
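
For example, taking the assets matcher and its handler from your config and keeping them together (same behaviour, just reordered):

        @download_assets {
                path_regexp ^/client/(.*\.(js|css|png|svg|woff2|otf|ttf|woff|eot))$
        }

        handle @download_assets {
                header Cache-Control "public, max-age=31536000, immutable"
                rewrite * /peertube-latest/{path}
                file_server
        }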

Also, you can make use of Caddy’s single-line named matcher syntax, so matchers like this:

        @static {
                file /storage/{path}
        }

Can just be:

        @static file /storage/{path}

Using try_files in your @download_assets_overrides handler is redundant, because you’ve already matched with file. So instead, you probably want to use rewrite to actually perform the rewrite. Using try_files causes double the filesystem lookups.

Use this instead of try_files. This placeholder is then filled with the path that the file matcher matched.

rewrite {http.matchers.file.relative}
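
Applied to your overrides handler, that would look roughly like this (a sketch based on the config above):

        handle @download_assets_overrides {
                header Cache-Control "public, max-age=31536000, immutable"
                rewrite {http.matchers.file.relative}
                file_server
        }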

Also FYI, I’m not sure if the order of your matchers matters here, but the order of your handle blocks isn’t guaranteed when they don’t involve a path matcher. You can run caddy adapt --pretty to see the order they end up in, in the final JSON config.

If you need to guarantee their order, you can wrap the handles with a route.
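
For example, a sketch with just two of your handlers:

        route {
                handle @static {
                        file_server
                }

                handle {
                        reverse_proxy peertube:9000
                }
        }

Inside a route, directives are evaluated in the order they appear rather than being re-sorted.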

That regex in your @download_assets_overrides matcher seems insanely long. Is that really necessary? :scream:

FYI, the common_log field (from the single_field log format you’ve commented out) is getting removed completely in v2.5.0, so don’t rely on it.

I think this X-File-Maximum-Size header is supposed to be passed with header_up to the upstream, not sent back to the client. The header directive writes a response header back to the client; the reverse_proxy directive has a header_up option to modify the request headers before sending them upstream.
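
If that really were the intent, it would look something like this (just a sketch):

        reverse_proxy peertube:9000 {
                header_up X-File-Maximum-Size 4M
        }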

I haven’t closely compared with the nginx config, there’s a lot going on in there and I don’t have the time to dig too deep. But I hope this helps.


For the tracker socket, I tried to reproduce the original behaviour from nginx:

  location /tracker/socket {
    # Peers send a message to the tracker every 15 minutes
    # Don't close the websocket before then
    proxy_read_timeout 15m; # default is 60s

    try_files /dev/null @api_websocket;
  }
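
In my Caddyfile above, this became:

        reverse_proxy @tracker peertube:9000 {
                transport http {
                        response_header_timeout 15m
                }
        }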

Does this make sense?

The nginx config also has:

  ssl_session_timeout       1d; # defaults to 5m
  ssl_session_cache         shared:SSL:10m; # estimated to 40k sessions
  ssl_session_tickets       off;
  ssl_stapling              on;
  ssl_stapling_verify       on;

I have translated this with the following in the global options block:

        servers {
                timeouts {
                        idle 1d
                }
        }


I know about header_up from another project. The nginx config has this:

  location ~ ^/api/v1/(videos|video-playlists|video-channels|users/me) {
    client_max_body_size                      6M; # default is 1M
    add_header            X-File-Maximum-Size 4M always; # inform backend of the set value in bytes before mime-encoding (x * 1.4 >= client_max_body_size)

    try_files /dev/null @api;
  }

If it were meant for the backend, nginx would use proxy_set_header instead of add_header. Here the web server informs the client-side app in the browser of the upload limit configured on the web server.
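
For reference, that’s this line in my Caddyfile above:

        header @upload_assets X-File-Maximum-Size 4M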

It’s probably fine I guess, I just find it weird to configure such long timeouts in general. But I suppose it might just be a requirement for this type of software.

I’m not sure that timeout is the same, but sure :man_shrugging:

I guess so. The comment in the nginx config seems misleading. It says “inform backend”, and usually the backend is the upstream app being proxied to.
