Caddy container 500 MB RAM within ~10h, 1 GB at some point

1. Caddy version (caddy version):

Docker image 2.2.0-alpine

(Note: I had the same problem with previous 2.x versions as well)

2. How I run Caddy:

  • I run Caddy on several servers behind Cloudflare
  • I only use Caddy as a reverse proxy (so with the reverse_proxy directive for all domains)
  • I use Cloudflare’s certs for the tls directives for all domains
  • I run it as Docker container in --network host mode
  • I mount the Caddyfile and data as volumes. I don’t mount config.
  • The downstream services on the servers are Go services that return JSON and set HTTP cache headers to limit the number of requests clients make

Regarding some metrics (these are very rough estimates, I don’t know how accurate they are!):

  • From looking at the Cloudflare stats (the “Analytics” dashboard) I’d say the low end on a day is 1 req/s, the high end is 3 req/s (per server). I’m not sure about this metric.
  • Per day there are 20,000 different clients sending requests through Cloudflare to one server. Not sure if Cloudflare keeps connections open for each of them.
  • Most of the traffic (number of requests) goes to one reverse-proxied domain, bar.mydomain.com, shown in the Caddyfile below
  • Most of the data transfer volume goes to another reverse-proxied domain, baz.mydomain.com, shown in the Caddyfile below
  • Most of the computation (and the longest response times, up to 2 s) goes to foo.mydomain.com, shown in the Caddyfile below

Again: I’m not sure if these metrics are accurate!

a. System environment:

  • Ubuntu 18.04
  • Docker 19.03
  • 2 core CPU, 2 GB RAM, fairly cheap VPS

b. Command:

docker run -d --name caddy --restart unless-stopped -v /home/john/docker/caddy/Caddyfile:/etc/caddy/Caddyfile -v /home/john/docker/caddy/_vol:/data --network host caddy:2.2.0-alpine

c. Service/unit/compose file:

I don’t use Docker Compose.

d. My complete Caddyfile or JSON config:

foo.mydomain.com 1.mydomain.com {
        reverse_proxy localhost:8080
        tls /data/mydomain.com.pem /data/mydomain.com.key
        basicauth /private* {
                john ENCODED-PW
        }
}

bar.mydomain.com {
        reverse_proxy localhost:8081
        tls /data/mydomain.com.pem /data/mydomain.com.key
}

baz.mydomain.com {
        reverse_proxy localhost:8082
        tls /data/mydomain.com.pem /data/mydomain.com.key
}

3. The problem I’m having:

When I stop, remove, and restart the Caddy container, it doesn’t take long (around 10 hours) before docker stats reports it using 500 MB of RAM. After a week or so it’s at 1 GB.

The servers I’m running this on have 2 GB RAM, so it works, but there’s not much room left for running actual applications.

4. Error messages and/or full log output:

No errors, just very high memory usage.

5. What I already tried:

I found this thread: Caddy memory usage over 300MB... any clue?

It mentions that Cloudflare keeps a lot of connections open to downstream servers. @matt suggested setting the keep-alive to a low value, but also said that it would become the default soon. The Caddy version mentioned in that thread is really old, so I assume 2.2.0 has this low setting by default by now? That’s why I didn’t add it to the Caddyfile yet.

6. Links to relevant resources:

/

Could you try with v2.1.1 to compare? I’m wondering if this was a regression, since we haven’t gotten other reports of memory issues recently.

I’m thinking it might be related to the new metrics feature introduced in 2.2.0.

Not sure why that’d happen - the metrics are pretty lightweight.

However, using those metrics would be a good way of determining what’s going on :wink:

@golfer20 If you poll http://localhost:2019/metrics inside the container you’ll see some metrics that start with go_memstats which can be useful to look at over time. If you want to gather these automatically you can run Prometheus and you’ll be able to query it to show you memory growth over time.
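If you go that route, a minimal scrape config would be something along these lines (just a sketch; it assumes Prometheus can reach the admin endpoint on port 2019 and uses the default /metrics path):

# prometheus.yml (minimal sketch)
scrape_configs:
  - job_name: caddy
    scrape_interval: 15s
    static_configs:
      - targets: ['localhost:2019']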

It would also be useful to do some profiling with pprof - if you’re not familiar with pprof this is a good introduction: Profiling Go programs with pprof - what you’re looking for specifically is to profile the heap. A command like go tool pprof -http=127.0.0.1:8080 127.0.0.1:2019/debug/pprof/heap can help with that.
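If installing Go inside the container is a hassle, another option (assuming the admin endpoint is reachable from wherever you run these commands) is to save a snapshot of the heap profile first and open it with a Go toolchain elsewhere, roughly:

# save a heap profile snapshot, then inspect it in the pprof web UI
curl -s -o heap.out http://127.0.0.1:2019/debug/pprof/heap
go tool pprof -http=127.0.0.1:8080 heap.out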

Oops, sorry, I missed this line the first time I read through :man_facepalming:

But yeah, please try what @hairyhenderson suggested :slightly_smiling_face:

Note that since you’re running in Docker, localhost:2019 is only accessible from inside the container. You would need to rebind the admin endpoint to 0.0.0.0:2019 for it to be accessible outside of the container.
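If you do want to rebind it, a minimal sketch would be a global options block at the top of your Caddyfile like:

{
        admin 0.0.0.0:2019
}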

Oh also, as for the idle timeout setting: I just checked, and for Caddy v2 there is no default. For the file_server module it defaults to 30s, but you’re using reverse_proxy, where IMO it’d be dangerous to set a default timeout.

If you use the JSON config format instead of a Caddyfile (just use caddy adapt to get started by converting your existing Caddyfile), you can set the idle_timeout property on the server entry:

{
  "apps": {
    "http": {
      "servers": {
        "srv0": {
          "idle_timeout": "5m",
          "listen": [
            ":443"
          ],
          "routes": [
            {
              "match": [
                {
                  "host": [
                    "foo.mydomain.com",
                    "1.mydomain.com"
                  ]
                }
              ],
              "handle": [
                {
                  "handler": "subroute",
                  "routes": [
                    {
                      "handle": [
                        {
                          "handler": "authentication",
                          "providers": {
                            "http_basic": {
                              "accounts": [
                                {
                                  "password": "ENCODED-PW",
                                  "username": "john"
                                }
                              ],
                              "hash": {
                                "algorithm": "bcrypt"
                              },
                              "hash_cache": {}
                            }
                          }
                        }
                      ],
                      "match": [
                        {
                          "path": [
                            "/private*"
                          ]
                        }
                      ]
                    },
                    {
                      "handle": [
                        {
                          "handler": "reverse_proxy",
                          "upstreams": [
                            {
                              "dial": "localhost:8080"
                            }
                          ]
                        }
                      ]
                    }
                  ]
                }
              ],
              "terminal": true
            },
            {
              "match": [
                {
                  "host": [
                    "bar.mydomain.com"
                  ]
                }
              ],
              "handle": [
                {
                  "handler": "subroute",
                  "routes": [
                    {
                      "handle": [
                        {
                          "handler": "reverse_proxy",
                          "upstreams": [
                            {
                              "dial": "localhost:8081"
                            }
                          ]
                        }
                      ]
                    }
                  ]
                }
              ],
              "terminal": true
            },
            {
              "match": [
                {
                  "host": [
                    "baz.mydomain.com"
                  ]
                }
              ],
              "handle": [
                {
                  "handler": "subroute",
                  "routes": [
                    {
                      "handle": [
                        {
                          "handler": "reverse_proxy",
                          "upstreams": [
                            {
                              "dial": "localhost:8082"
                            }
                          ]
                        }
                      ]
                    }
                  ]
                }
              ],
              "terminal": true
            }
          ],
          "tls_connection_policies": [
            {
              "match": {
                "sni": [
                  "foo.mydomain.com",
                  "1.mydomain.com"
                ]
              },
              "certificate_selection": {
                "any_tag": [
                  "cert0"
                ]
              }
            },
            {
              "match": {
                "sni": [
                  "bar.mydomain.com"
                ]
              },
              "certificate_selection": {
                "any_tag": [
                  "cert0"
                ]
              }
            },
            {
              "match": {
                "sni": [
                  "baz.mydomain.com"
                ]
              },
              "certificate_selection": {
                "any_tag": [
                  "cert0"
                ]
              }
            },
            {}
          ]
        }
      }
    },
    "tls": {
      "certificates": {
        "load_files": [
          {
            "certificate": "/data/mydomain.com.pem",
            "key": "/data/mydomain.com.key",
            "tags": [
              "cert0"
            ]
          }
        ]
      }
    }
  }
}
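For reference, the JSON above is essentially what caddy adapt produces from your Caddyfile, plus the idle_timeout added by hand. With your container running, something like this should give you a starting point to edit (paths assumed from your docker run command):

docker exec caddy caddy adapt --config /etc/caddy/Caddyfile --pretty > caddy.json

You’d then add idle_timeout, mount the edited JSON into the container, and override the container command with something like caddy run --config /etc/caddy/caddy.json (hypothetical path).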

Thanks for your replies Francis and Dave!

Container accessibility is less of an issue, as I’m running it in --network host mode anyway.

Regarding the idle timeout, I’d prefer to keep using a Caddyfile; the JSON equivalent is not very user-friendly in comparison.

So I followed your advice to check some stats:

pprof

$ docker exec -it caddy sh
$ apk add --update go
$ go tool pprof  127.0.0.1:2019/debug/pprof/heap
(pprof) top

Showing nodes accounting for 426.85MB, 90.56% of 471.35MB total
Dropped 57 nodes (cum <= 2.36MB)
Showing top 10 nodes out of 97
      flat  flat%   sum%        cum   cum%
  177.89MB 37.74% 37.74%   177.89MB 37.74%  crypto/tls.(*halfConn).encrypt
  140.84MB 29.88% 67.62%   285.17MB 60.50%  net/http.(*http2Framer).WriteDataPadded
   32.52MB  6.90% 74.52%    32.52MB  6.90%  vendor/golang.org/x/net/http2/hpack.(*headerFieldTable).addEntry
   19.02MB  4.04% 78.56%    19.02MB  4.04%  bytes.makeSlice
   16.50MB  3.50% 82.06%    41.51MB  8.81%  net/http.(*http2Server).ServeConn
   12.01MB  2.55% 84.61%    12.01MB  2.55%  crypto/tls.Server
    7.53MB  1.60% 86.20%     7.53MB  1.60%  bufio.NewReaderSize
       7MB  1.49% 87.69%     7.50MB  1.59%  net/http.(*http2serverConn).canonicalHeader
       7MB  1.49% 89.17%        7MB  1.49%  runtime.malg
    6.53MB  1.38% 90.56%     6.53MB  1.38%  bufio.NewWriterSize

Note that it reports 471.35 MB total, while docker stats already shows more than 800 MB for the Caddy container (started ~24 h ago). Or does the profile only cover allocations from a limited duration, like 10 s or something?

Another one:

(pprof) top 20 -cum

Showing nodes accounting for 337.73MB, 71.65% of 471.35MB total
Dropped 57 nodes (cum <= 2.36MB)
Showing top 20 nodes out of 97
      flat  flat%   sum%        cum   cum%
         0     0%     0%   296.17MB 62.83%  net/http.(*http2serverConn).writeFrameAsync
         0     0%     0%   285.17MB 60.50%  net/http.(*http2Framer).WriteData (inline)
  140.84MB 29.88% 29.88%   285.17MB 60.50%  net/http.(*http2Framer).WriteDataPadded
         0     0% 29.88%   285.17MB 60.50%  net/http.(*http2writeData).writeFrame
         0     0% 29.88%   177.89MB 37.74%  crypto/tls.(*Conn).writeRecordLocked
  177.89MB 37.74% 67.62%   177.89MB 37.74%  crypto/tls.(*halfConn).encrypt
         0     0% 67.62%   177.39MB 37.63%  bufio.(*Writer).Write
         0     0% 67.62%   177.39MB 37.63%  crypto/tls.(*Conn).Write
         0     0% 67.62%   144.33MB 30.62%  net/http.(*http2Framer).endWrite
         0     0% 67.62%   144.33MB 30.62%  net/http.(*http2bufferedWriter).Write
    1.50MB  0.32% 67.94%    87.74MB 18.61%  net/http.(*conn).serve
   16.50MB  3.50% 71.44%    41.51MB  8.81%  net/http.(*http2Server).ServeConn
         0     0% 71.44%    41.51MB  8.81%  net/http.http2ConfigureServer.func1
       1MB  0.21% 71.65%    38.53MB  8.17%  net/http.(*http2serverConn).readFrames
         0     0% 71.65%    37.53MB  7.96%  net/http.(*http2Framer).ReadFrame
         0     0% 71.65%    34.06MB  7.23%  github.com/caddyserver/caddy/v2/modules/caddyhttp.(*Server).enforcementHandler
         0     0% 71.65%    34.06MB  7.23%  github.com/caddyserver/caddy/v2/modules/caddyhttp.(*Server).wrapPrimaryRoute.func1
         0     0% 71.65%    34.06MB  7.23%  github.com/caddyserver/caddy/v2/modules/caddyhttp.(*Subroute).ServeHTTP
         0     0% 71.65%    34.06MB  7.23%  github.com/caddyserver/caddy/v2/modules/caddyhttp.(*metricsInstrumentedHandler).ServeHTTP
         0     0% 71.65%    34.06MB  7.23%  github.com/caddyserver/caddy/v2/modules/caddyhttp.HandlerFunc.ServeHTTP

Caddy metrics

I’m not sure what I personally would learn from periodically polling the metrics endpoint, other than that the memory usage is constantly increasing, but maybe a snapshot from just now shows you something interesting?

$ docker exec -it caddy sh
$ apk add --update curl
$ curl "localhost:2019/metrics"

# HELP go_build_info Build information about the main Go module.
# TYPE go_build_info gauge
go_build_info{checksum="",path="caddy",version="(devel)"} 1
# HELP go_gc_duration_seconds A summary of the pause duration of garbage collection cycles.
# TYPE go_gc_duration_seconds summary
go_gc_duration_seconds{quantile="0"} 0.000874646
go_gc_duration_seconds{quantile="0.25"} 0.001226682
go_gc_duration_seconds{quantile="0.5"} 0.001450993
go_gc_duration_seconds{quantile="0.75"} 0.00208208
go_gc_duration_seconds{quantile="1"} 0.015850969
go_gc_duration_seconds_sum 1.610284799
go_gc_duration_seconds_count 791
# HELP go_goroutines Number of goroutines that currently exist.
# TYPE go_goroutines gauge
go_goroutines 21619
# HELP go_info Information about the Go environment.
# TYPE go_info gauge
go_info{version="go1.15.2"} 1
# HELP go_memstats_alloc_bytes Number of bytes allocated and still in use.
# TYPE go_memstats_alloc_bytes gauge
go_memstats_alloc_bytes 5.19931992e+08
# HELP go_memstats_alloc_bytes_total Total number of bytes allocated, even if freed.
# TYPE go_memstats_alloc_bytes_total counter
go_memstats_alloc_bytes_total 2.0389939544e+10
# HELP go_memstats_buck_hash_sys_bytes Number of bytes used by the profiling bucket hash table.
# TYPE go_memstats_buck_hash_sys_bytes gauge
go_memstats_buck_hash_sys_bytes 1.779018e+06
# HELP go_memstats_frees_total Total number of frees.
# TYPE go_memstats_frees_total counter
go_memstats_frees_total 1.58289168e+08
# HELP go_memstats_gc_cpu_fraction The fraction of this program's available CPU time used by the GC since the program started.
# TYPE go_memstats_gc_cpu_fraction gauge
go_memstats_gc_cpu_fraction 0.0007892310368506903
# HELP go_memstats_gc_sys_bytes Number of bytes used for garbage collection system metadata.
# TYPE go_memstats_gc_sys_bytes gauge
go_memstats_gc_sys_bytes 3.3228432e+07
# HELP go_memstats_heap_alloc_bytes Number of heap bytes allocated and still in use.
# TYPE go_memstats_heap_alloc_bytes gauge
go_memstats_heap_alloc_bytes 5.19931992e+08
# HELP go_memstats_heap_idle_bytes Number of heap bytes waiting to be used.
# TYPE go_memstats_heap_idle_bytes gauge
go_memstats_heap_idle_bytes 6.3791104e+07
# HELP go_memstats_heap_inuse_bytes Number of heap bytes that are in use.
# TYPE go_memstats_heap_inuse_bytes gauge
go_memstats_heap_inuse_bytes 5.31832832e+08
# HELP go_memstats_heap_objects Number of allocated objects.
# TYPE go_memstats_heap_objects gauge
go_memstats_heap_objects 1.48127e+06
# HELP go_memstats_heap_released_bytes Number of heap bytes released to OS.
# TYPE go_memstats_heap_released_bytes gauge
go_memstats_heap_released_bytes 5.8384384e+07
# HELP go_memstats_heap_sys_bytes Number of heap bytes obtained from system.
# TYPE go_memstats_heap_sys_bytes gauge
go_memstats_heap_sys_bytes 5.95623936e+08
# HELP go_memstats_last_gc_time_seconds Number of seconds since 1970 of last garbage collection.
# TYPE go_memstats_last_gc_time_seconds gauge
go_memstats_last_gc_time_seconds 1.6024421545144086e+09
# HELP go_memstats_lookups_total Total number of pointer lookups.
# TYPE go_memstats_lookups_total counter
go_memstats_lookups_total 0
# HELP go_memstats_mallocs_total Total number of mallocs.
# TYPE go_memstats_mallocs_total counter
go_memstats_mallocs_total 1.59770438e+08
# HELP go_memstats_mcache_inuse_bytes Number of bytes in use by mcache structures.
# TYPE go_memstats_mcache_inuse_bytes gauge
go_memstats_mcache_inuse_bytes 3472
# HELP go_memstats_mcache_sys_bytes Number of bytes used for mcache structures obtained from system.
# TYPE go_memstats_mcache_sys_bytes gauge
go_memstats_mcache_sys_bytes 16384
# HELP go_memstats_mspan_inuse_bytes Number of bytes in use by mspan structures.
# TYPE go_memstats_mspan_inuse_bytes gauge
go_memstats_mspan_inuse_bytes 5.2734e+06
# HELP go_memstats_mspan_sys_bytes Number of bytes used for mspan structures obtained from system.
# TYPE go_memstats_mspan_sys_bytes gauge
go_memstats_mspan_sys_bytes 5.40672e+06
# HELP go_memstats_next_gc_bytes Number of heap bytes when next garbage collection will take place.
# TYPE go_memstats_next_gc_bytes gauge
go_memstats_next_gc_bytes 9.902528e+08
# HELP go_memstats_other_sys_bytes Number of bytes used for other system allocations.
# TYPE go_memstats_other_sys_bytes gauge
go_memstats_other_sys_bytes 4.19239e+06
# HELP go_memstats_stack_inuse_bytes Number of bytes in use by the stack allocator.
# TYPE go_memstats_stack_inuse_bytes gauge
go_memstats_stack_inuse_bytes 2.76791296e+08
# HELP go_memstats_stack_sys_bytes Number of bytes obtained from system for stack allocator.
# TYPE go_memstats_stack_sys_bytes gauge
go_memstats_stack_sys_bytes 2.76791296e+08
# HELP go_memstats_sys_bytes Number of bytes obtained from system.
# TYPE go_memstats_sys_bytes gauge
go_memstats_sys_bytes 9.17038176e+08
# HELP go_threads Number of OS threads created.
# TYPE go_threads gauge
go_threads 12
# HELP process_cpu_seconds_total Total user and system CPU time spent in seconds.
# TYPE process_cpu_seconds_total counter
process_cpu_seconds_total 2590.51
# HELP process_max_fds Maximum number of open file descriptors.
# TYPE process_max_fds gauge
process_max_fds 1.048576e+06
# HELP process_open_fds Number of open file descriptors.
# TYPE process_open_fds gauge
process_open_fds 11997
# HELP process_resident_memory_bytes Resident memory size in bytes.
# TYPE process_resident_memory_bytes gauge
process_resident_memory_bytes 8.80504832e+08
# HELP process_start_time_seconds Start time of the process since unix epoch in seconds.
# TYPE process_start_time_seconds gauge
process_start_time_seconds 1.60234782852e+09
# HELP process_virtual_memory_bytes Virtual memory size in bytes.
# TYPE process_virtual_memory_bytes gauge
process_virtual_memory_bytes 1.597468672e+09
# HELP process_virtual_memory_max_bytes Maximum amount of virtual memory available in bytes.
# TYPE process_virtual_memory_max_bytes gauge
process_virtual_memory_max_bytes -1
# HELP promhttp_metric_handler_requests_in_flight Current number of scrapes being served.
# TYPE promhttp_metric_handler_requests_in_flight gauge
promhttp_metric_handler_requests_in_flight 1
# HELP promhttp_metric_handler_requests_total Total number of scrapes by HTTP status code.
# TYPE promhttp_metric_handler_requests_total counter
promhttp_metric_handler_requests_total{code="200"} 1
promhttp_metric_handler_requests_total{code="500"} 0
promhttp_metric_handler_requests_total{code="503"} 0
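If it helps, I could watch a few of the interesting ones (heap in use, goroutines, open file descriptors) over time with a simple loop like this (just a sketch):

while true; do
  date
  curl -s localhost:2019/metrics | grep -E '^(go_memstats_heap_inuse_bytes|go_goroutines|process_open_fds) '
  sleep 60
done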

Guess

From the two profiles, one thing stands out: the HTTP/2 TLS encryption and data writing. Or does that look normal?

Let me know if I can provide any other stats or run any specific commands to get more info on why the memory usage is so high!

For example, how do I find out how many connections there are from Cloudflare to Caddy?

Unfortunately it’s not possible to set idle timeouts from the Caddyfile, because site blocks don’t map 1:1 to servers, while timeouts must be set per server. The Caddyfile does some magic to consolidate sites into servers as appropriate based on the listener addresses. It’s a tricky problem to solve without ambiguity.

I did write up a proposal for this, but it’s still under discussion:


Ok I see. But given that Dave considers it dangerous to set a default timeout for a reverse_proxy, what other options do I have?

  • Does the Go profile show anything suspicious or even obvious?
  • Or the metrics?
  • How do I find out how many connections there are from Cloudflare to Caddy?

The Caddy container that was at 800 MB 24h ago is now at almost 1.2 GB.

I’m a bit lost here and am relying on your ideas to find the root cause of this very high memory usage. I don’t see how I’m doing anything unusual with Caddy; it seems like a very typical use case, so I’m sure finding and fixing the issue would bring a lot of value to many Caddy users. I could even try to contribute to that fix, but first I need to know what the root cause is.

Or would you say it’s the normal behavior when Caddy runs behind Cloudflare and there are lots of users?


Dave was talking about Caddy itself having a non-zero default, not you setting a different timeout value.

Ah sorry I misunderstood that!

OK, so I’ll give that a try and report back whether it helped. And if it does, we have a lead: the handling of many open connections in Caddy could be looked at in detail, with an eye on allocations?

I haven’t gotten around to converting the Caddyfile yet to add the reduced idle timeout, but I checked the number of connections on the server with netstat -an | grep ESTABLISHED | grep -w 443 | wc -l, which yielded between 6100 and 7400 on the different servers. I don’t know if that is an unusual number; I’m just mentioning it here in case someone goes “Aha, Caddy is designed to work with only 200-300 max” :slight_smile:.
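For what it’s worth, an ss equivalent of that count would presumably be something like this (I haven’t compared the numbers):

ss -Htn state established '( sport = :443 )' | wc -l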


OK, I converted the Caddyfile to the native JSON format on one of the servers and added the idle_timeout setting.

After 2 days the Caddy container is at 85 MB instead of 800 MB; that’s great!

netstat -an|grep ESTABLISHED | grep -w 443 | wc -l shows 650 connections on that server (instead of 6500).

Now:

  • Why does Caddy need so much memory when there are a few thousand open connections (or maybe HTTP/2 connections in particular)? Is that something you’d consider an issue worth fixing?
  • Do you know how NGINX, Envoy, and Traefik (maybe particularly interesting, as it’s written in Go as well) compare to Caddy in this regard?
  • Given that running Caddy as a reverse proxy behind Cloudflare is a really typical use case, and that the Caddyfile format is really popular (and the JSON equivalent really ugly), are there plans to add the idle_timeout setting to the Caddyfile format?
    • I could contribute to this. If I understand correctly, it would be a global option like these, and the parsing and setting of the value would happen on the object that’s created here, right?

Ah, only now do I remember that @francislavoie already said it’s not easily possible to add this to the Caddyfile, so I guess my previous suggestion / offer to contribute is obsolete. On the other hand, the proposal sounds like a general per-server config improvement. Why not make it a global option first? I’m sure many (if not most) people would be fine with having the same idle timeout for all servers.

Looking at how others deal with this, it seems Traefik limits the number of idle connections to 2 by default: Overview - Traefik


Interesting. It turns out Caddy doesn’t make that configurable right now, but it inherits Go’s stdlib default of 2 automatically, because the default is not overridden.

I’ll wire that up to be configurable (no harm in doing so; it just exposes a stdlib option) and make it available via the Caddyfile as well (which also made me notice that another field wasn’t Caddyfile-configurable either).

Edit: Here’s the PR:

We could, but it’ll be deprecated later anyway. I’d rather just implement it correctly from the start instead of making our lives harder later on, because we’d need to check “if the standalone property is set and the new config is used, then reject the config” and keep that logic in there basically forever (until v3 or whatever), which doesn’t seem fun.


Ah, I guess I misinterpreted the Traefik config: it’s just for the connections from Traefik to the reverse-proxied backends, and not for the connections from, say, Cloudflare to Traefik, right? And the same for the change you added to Caddy? It’s great to have that configurable, so thanks! But if I understood correctly, it doesn’t help with Caddy’s memory usage for a high number of connections, right?

We could, but it’ll be deprecated later anyway. I’d rather just implement it correctly from the start

Makes sense!


There’s one thing that hasn’t been addressed very well in this thread so far: the root cause of the memory issue. Limiting the number of connections definitely helps with the memory usage, and for my use case it’s a good enough solution, but it’s still a bit of a band-aid. Could there be a way to actually reduce the memory usage per connection?

Good news, I spent a bunch of time yesterday implementing the server config stuff:

Yeah. Same for Caddy. The fix for you will still be the server idle timeout rather than the proxy ones.

We’ve since had some more similar reports, so we’re considering setting a default idle timeout to something like 5 minutes (turns out Caddy v1 did this).
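Once the servers config PR lands, the Caddyfile side should look roughly like this (a sketch only; the exact option names may still change while the PR is under review):

{
        servers {
                timeouts {
                        idle 5m
                }
        }
}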


Both your work on server configurability via the Caddyfile and the consideration of a 5-minute default idle timeout are very good news! Thank you very much!

Looking forward to trying that out. Let me know if there’s a revision / beta / RC I can test.


On GitHub you can click the “watch” button in the top-right of the repo and choose “releases” to get notified as soon as a new release is tagged.

If you only care to try the new servers config feature, you can click the “checks” tab on the PR and click the Test group in the sidebar to find the CI build artifacts, i.e. testing builds for that PR.

