Lightning Fast WordPress: Caddy+Varnish+PHP-FPM

Hi folks,

I recently migrated a WordPress 4.7 site to a new VPS and thought I’d throw Varnish in front of it and run the whole thing under Caddy. I’m extremely happy with the final solution and it’s quirky enough that I figured it’d be worth documenting and sharing with others. There’s a lot of information in this gist (it should probably be a blog post and/or full-blown repo) but hopefully it helps the next poor soul trying to figure out how to do it!
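
At a high level, the layering boils down to something like the simplified Caddyfile sketch below; the hostname, ports, and paths are placeholders (the gist has the full details):

    # Caddyfile (simplified sketch; directive names from the Caddy 0.x era)
    example.com {
        # Public HTTPS front end: Caddy manages the certificate and
        # proxies every request to Varnish.
        proxy / localhost:6081 {
            transparent
        }
    }

    http://localhost:2020 {
        # Internal HTTP back end that Varnish pulls from: serves static
        # files and hands PHP off to PHP-FPM.
        root /var/www/wordpress
        fastcgi / 127.0.0.1:9000 php
    }

Varnish sits in the middle on its own port, caching whatever the back end marks as cacheable.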

You can see it in action here. My next task is cleaning up all the JavaScript…

Let me know here or in the gist comments if you have any questions or feedback. Thanks!

Mike

4 Likes

That’s great. Caddy’s website will soon be up to date (and will eventually be totally new), so you will be able to update that part of your gist. :wink: Thanks for sharing!

1 Like

Thanks Matt, and thank you for all your hard work on Caddy! It’s a great product and I look forward to seeing it deployed more widely for more diverse use-cases.

1 Like

@mwpastore can you help me replicate your setup?

Would you kindly take a look at my attempt and tell me where I’m going wrong?

@Nirjhor It doesn’t look like you’re following my gist at all, so I guess my first suggestion would be to read through it and follow the instructions. I just updated it for Caddy 0.10. Let me know if you’re still stuck.

1 Like

@mwpastore not stuck anymore; I blogged the process here with credit to you: Loading...

1 Like

Have you actually tested performance with/without Varnish Cache yet?

Oh yes. YMMV depending on the complexity of your WordPress site, the underlying hardware, database, benchmark configuration, etc., and it’s hard to get an apples-to-apples comparison without completely reconfiguring everything, but on a $10/mo 2GB Linode VPS*, it’s approximately 550 req/sec with Varnish (https**) vs. 25 req/sec without (http***) when retrieving the home page of my blog, a speedup of roughly 22X.

* Ubuntu 16.04.3 LTS, Linux 4.9.36, Docker 17.06.2-ce, Caddy 0.10.9, PHP-FPM 7.1.9 (dockerized), Varnish 5.2.0 (partitioned), MariaDB 10.2.8, WordPress 4.8.1
** wrk -t12 -c400 -d30s https://perlkour.pl
*** wrk -t12 -c400 -d30s http://localhost:2020 -H 'Host: perlkour.pl'

1 Like

Cheers @mwpastore, looks good. I’ll definitely test this out myself (CentOS 7 though). I wonder how much difference there would be compared to the usual Varnish-recommended SSL proxy (Hitch) terminating HTTPS connections in front of Varnish? Caddy is definitely easier to set up than adding another piece to the puzzle for Hitch install/configuration, and Caddy handles the SSL certs :slight_smile: The two chains I have in mind are listed below, with a rough Hitch sketch after the list.

  • Caddy HTTPS 443 > Varnish 6081 > Caddy HTTP backend
  • Hitch HTTPS 443 > Varnish 6081 > Caddy HTTP backend
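
That second chain, as a minimal sketch, might look something like the hitch.conf below; the PEM path, worker count, and Varnish port are assumptions for illustration, not taken from the gist:

    # /etc/hitch/hitch.conf (hypothetical sketch)
    # Terminate HTTPS on 443 and hand decrypted HTTP traffic to Varnish on 6081.
    frontend = "[*]:443"
    backend  = "[127.0.0.1]:6081"
    # Combined private key + certificate bundle for the site.
    pem-file = "/etc/hitch/example.com.pem"
    workers  = 2

Unlike Caddy, Hitch only terminates TLS, so the certificates still have to be obtained and renewed by something else.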

Oh, absolutely. There are a million pieces you can plug in and swap around: HAProxy, Hitch, Pound, etc. You can do the whole thing in Nginx, or just SSL termination, or just caching and compression, or just serving the static content and FastCGI. You can use Squid instead of Varnish. And so on.

My go-to is usually HAProxy+Varnish. Caddy was attractive to me because of its simplicity and certificate management features, and its feature set seemed to complement Varnish’s well. But it’s by no means the end-all, be-all solution. Interestingly, since I wrote this gist, Caddy has added its own caching plugin, which I need to find time to play with.
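
For what it’s worth, that HAProxy+Varnish pairing usually boils down to something like the sketch below; the certificate path, ports, and timeouts are placeholders rather than anything from my gist:

    # haproxy.cfg (hypothetical sketch)
    global
        maxconn 256

    defaults
        mode http
        timeout connect 5s
        timeout client  30s
        timeout server  30s

    # Terminate TLS here and pass plain HTTP to Varnish.
    frontend https_in
        bind :443 ssl crt /etc/haproxy/certs/example.com.pem
        default_backend varnish

    backend varnish
        server varnish1 127.0.0.1:6081 check

The catch is that HAProxy won’t obtain or renew that certificate for you, which is a big part of why Caddy’s certificate management was attractive here.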

1 Like

Yup, I tested the Caddy http.cache plugin here: Announce: New http cache plugin - #26 by eva2000

@mwpastore you might want to test this setup’s HTTP/2 HTTPS loads as well, since Varnish Cache might have bugs with HTTP/2: Caddy 0.10.9 + Varnish Cache 5.2 HTTP/2 thread starvation bug

I have no doubt it’d show the same results as your benchmarks. I had some issues with the http2 feature when I tried it in this configuration with Varnish 5.0 and never revisited it. Now I have good reason not to!

What I really want is Caddy to support PROXY protocol up- and down-stream.

1 Like

Yeah, Varnish Cache’s HTTP/2 implementation still needs more work, it seems!

1 Like

I re-ran this benchmark with the Accept-Encoding: gzip header (this makes more sense because Varnish is serving the gzipped content straight from memory without gunzipping it as it would for an older client) and the number is closer to 900 req/sec.
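
In other words, presumably something like the earlier wrk command with the extra header (the exact invocation isn’t spelled out above):

    wrk -t12 -c400 -d30s -H 'Accept-Encoding: gzip' https://perlkour.pl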

1 Like

I guess for HTTP/1.1-based HTTPS with wrk load testing, it looks good at least :slight_smile:

I ran some benchmarks with h2load too and they look good! Just not really comparable, apples-to-apples, to the wrk http/1.1 numbers. I’ll keep playing with it.

1 Like

Ah interesting… look forward to your results :slight_smile:

FYI, for h2load I am using -m100 for max concurrent streams, as h2load defaults to max concurrent streams = 1 if not set: h2load - HTTP/2 benchmarking tool - HOW-TO — nghttp2 1.47.0-DEV documentation.
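
So a roughly comparable h2load run would look something like the line below; only the -m100 part comes from the note above, the other numbers are just an illustration:

    h2load -t12 -c100 -m100 -D30 https://perlkour.pl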