My overall experience with Caddy + Some ideas

Hi there!

I was an avid Apache user for 5 years, but I’ve been using Caddy for a couple of months now, and I’m overall satisfied! I haven’t looked back since :slight_smile:
I love how easy it is to set up, understand, and get TLS certificates working (Apache + Let’s Encrypt was a real hassle). I’ve had a very good experience, and I would like to thank Matt Holt for starting the project and the 100+ contributors on GitHub: you are awesome, guys!
I’m starting to learn Go because of it, and I’ll be contributing soon (I hope ;)).

But there are some features missing for me that I would consider core:

  • Dynamic loading of Caddyfiles per domain.
    Wouldn’t it be nice if, instead of having to edit a single Caddyfile, Caddy loaded Caddyfiles dynamically, on demand? (It wouldn’t strictly need to be on demand; they could be cached.)
    Imagine that you have 3 folders: domain1.com, domain2.com and domain3.com, each with its own Caddyfile. You could add new domains without having to restart/reload the Caddy service at all, and get the Let’s Encrypt certificate via the new DNS challenge feature shipped in the 0.9 release.

  • Background processing of new certificates
    If you don’t like the idea above (I can understand why), let me contribute another idea. Right now, service caddy restart just loops over every domain, and if there is a new domain, it has to retrieve the Let’s Encrypt certificate (going through the whole ACME process) before it deploys the server… meanwhile having the rest of the domains halted, waiting for the new one to finish. (This is what I see from the SSH terminal; I don’t know if this is how it actually works.) It would be nice if it used the DNS challenge to get certificates for the new domains without interrupting the other domains, or at least offered a CLI option for that.

  • CLI option to not halt on invalid domain
    Okay, this caught me off guard yesterday, and I had to post it here. I admit that it’s my fault, a human mistake. I moved one of my servers and changed every single DNS record but one. The result? Whoops, Caddy won’t start because of it. And I spent about 2 hours trying to figure out what I did wrong in the migration. I would appreciate an option for Caddy to just ignore the invalid domains and log them in /var/log/ or wherever we point the -log flag. This would be excellent for automated environments, and it would help people avoid mistakes.

  • Resource limit per domain in Caddyfile
    Having directives like cpu_limit, mem_limit and bandwidth_limit would be great for mitigating DDoS attacks and having fine-grained control over every domain. I know this is possible using proxies, but keeping it simple within Caddy would be nice to have.

What’s your opinion about these ideas?

Have a great day,
Ryan.

Hey Ryan, thanks for your detailed feedback and the kind words!

Dynamic loading of Caddyfiles per domain.

What is the benefit of this? Or, in other words, what is wrong with signaling Caddy?

Personally I don’t find the file system itself to be a reliable configuration manager.

Background processing of new certificates

service caddy restart isn’t something I recognize. What is it?

meanwhile having the rest of the domains halted, waiting for the new one to finish

Yikes, are you sure? Caddy reloads are graceful; even obtaining new certificates shouldn’t block requests to sites. I’ve tested this before, and it all checks out. How can I see what you’re seeing? (Please try it without any fancy init scripts, just raw USR1 would be good.)

CLI option to not halt on invalid domain

Nope nope nope. Sorry. :slight_smile: As you said, this is user error. Fix it. :wink:

Resource limit per domain in Caddyfile

You can limit CPU use for the whole process (-cpu), but I’m not sure what the Go runtime allows us to do at any finer level of detail.
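For reference, the -cpu flag accepts either a percentage of available CPU or a number of cores (this matches the 0.9-era CLI; check caddy -help on your version):

```
caddy -conf /etc/caddy/Caddyfile -cpu 50%   # cap Caddy at 50% of CPU
caddy -conf /etc/caddy/Caddyfile -cpu 2     # or at 2 cores
```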


Resource limit per domain in Caddyfile

Could you provide a use case for this feature?
Most of the time your web server won’t be the CPU hog; the dynamic language interpreter behind it (like PHP) will be.

Let’s stick with the PHP example for a second:
Caddy and the PHP Interpreter are two different, autonomous processes. So you can limit your PHP Workers without changes to Caddy. You can even control the memory limit of PHP Workers directly from your Caddyfile via env PHP_VALUE "memory_limit = 512M"

example.org {
    root /var/docker/www/example.org/web
    fastcgi / 127.0.0.1:9002 php {
        env PHP_VALUE "display_errors=off
                        error_log=/var/docker/www/example.org/fpm-php.www.log
                        memory_limit = 512M
                        post_max_size=256M
                        log_errors=on
                        upload_max_filesize=256M
                        expose_php=off"
    }
    header / {
        Strict-Transport-Security "max-age=63072000"
        X-Backend-Server {hostname}
        -Server
    }
    log /var/docker/www/example.org/std.log
    errors /var/docker/www/example.org/err.log
    rewrite / {
        to {path} {path}/ /maintenance.html /index.php
    }
}

Hi Matt! You’re welcome :slight_smile:

What is the benefit of this? Or, in other words, what is wrong with signaling Caddy?

Caddy takes between 3 and 10 seconds to completely restart with my configuration of 17 domains (even when it does not need to pull any new certificates from Let’s Encrypt), on a DigitalOcean droplet. (I can see the “Request rejected” message in the browser if I try to reload a page right after signaling Caddy.) I have Caddy + MariaDB + PHP 7.0-FPM running… and nothing else.

Personally I don’t find the file system itself to be a reliable configuration manager.

Can you expand on that point? I’m interested to know why, since the Caddyfile is a file in the filesystem anyway.

service caddy restart isn’t something I recognize. What is it?

Whoops, I should have explained that. I took Caddy and made it a service via systemd on Ubuntu 16.04.1 LTS (and Xubuntu 16.04.1 LTS on my local machine, but it’s the same thing apart from the desktop GUI), following this configuration guide (and got it running perfectly). service caddy reload doesn’t work for me (the way service apache2 reload did, I mean).
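For context, a minimal systemd unit along those lines might look like this (paths and user are assumptions, not from the guide; the key detail is that ExecReload sends USR1, so a reload is graceful rather than a full restart):

```
[Unit]
Description=Caddy HTTP/2 web server
After=network-online.target

[Service]
User=www-data
ExecStart=/usr/local/bin/caddy -conf /etc/caddy/Caddyfile -agree
; USR1 tells Caddy to re-read the Caddyfile without dropping connections
ExecReload=/bin/kill -USR1 $MAINPID
Restart=on-failure

[Install]
WantedBy=multi-user.target
```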

Yikes, are you sure? Caddy reloads are graceful; even obtaining new certificates shouldn’t block requests to sites. I’ve tested this before, and it all checks out. How can I see what you’re seeing? (Please try it without any fancy init scripts, just raw USR1 would be good.)

Yep, I’m sure. Caddy is, for some reason, blocking requests to existing domains while getting new certificates. I don’t have any fancy init scripts apart from systemd.

Nope nope nope. Sorry. :slight_smile: As you said, this is user error. Fix it. :wink:

Ok :wink:

You can limit CPU use for the whole process (-cpu), but I’m not sure what the Go runtime allows us to do at any finer level of detail.

Yep, I was aware of this option, but what I was asking for is having it in the Caddyfile (since there are domains I want to allocate more resources to, and others I want to limit). Would that be possible?

Thanks for your answers, Matt!

Hi Robert!

Could you provide a use case for this feature?
Most of the time your web server won’t be the CPU hog; the dynamic language interpreter behind it (like PHP) will be.

This is Caddy eating my CPU by itself on a single-core VPS I use as my local testing server (it has an Intel® Xeon® E5-2620 inside), running Ubuntu 16.04.1 LTS:

This is why I need to limit it per domain. I’m not using any CMS (such as WordPress, Joomla, etc.) or framework (CakePHP, CodeIgniter, etc.); everything is custom PHP code made from scratch (and no, it’s not resource-intensive).

Let’s stick with the PHP example for a second:
Caddy and the PHP Interpreter are two different, autonomous processes. So you can limit your PHP Workers without changes to Caddy. You can even control the memory limit of PHP Workers directly from your Caddyfile via env PHP_VALUE "memory_limit = 512M"

Is memory_limit per worker, or total memory limit of PHP for that domain? Do I need to make a new PHP-FPM pool per domain to apply your solution?

Thanks for your answer, I’ll test that.

Well… that’s odd :neutral_face:

This is a setup with MySQL/Percona 5.7, PHP 5.6 (stupid dependency on ionCube Loader) inside a Docker container, and Caddy v0.8.3.
It’s plain Caddy without any plugins, and the Caddyfile is pretty straightforward, as you can see in my previous post.
Could you give us some info about your setup? caddy -version? Plugins? Caddyfile?
Are you using complex regexes in your Caddyfile?

memory_limit limits the memory usage per PHP script.

memory_limit integer
This sets the maximum amount of memory in bytes that a script is allowed to allocate.
http://php.net/manual/en/ini.core.php

If you set env PHP_VALUE "memory_limit=128M" in your Caddyfile for the domain example1.org, only requests for that domain will be limited, not requests for example2.org.

Please note that this parameter limits memory PER REQUEST! So in my case, with a memory limit of 512 MB, four requests may(!) take up to 2 gigabytes of RAM.

If you want domain-wide memory limits, I would suggest one PHP worker pool per domain.
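If you go the pool-per-domain route, a dedicated FPM pool might look roughly like this (the file path, socket, and numbers are placeholders, not a recommendation); pm.max_children × memory_limit then bounds the worst-case memory for that domain:

```
; e.g. /etc/php/fpm/pool.d/example1.org.conf (hypothetical path)
[example1.org]
user = www-data
group = www-data
listen = 127.0.0.1:9002       ; the address your fastcgi directive points at
pm = dynamic
pm.max_children = 4           ; worst case: 4 x memory_limit of RAM
pm.start_servers = 1
pm.min_spare_servers = 1
pm.max_spare_servers = 2
php_admin_value[memory_limit] = 512M
```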

I’m using the same version you have, Caddy v0.8.3, with the CORS plugin (which I’ll ditch for plain header rules very soon, so I can upgrade Caddy without any dependencies). PHP 7 + MariaDB and nothing else is running on the server, as I stated in my answer to Matt.

My Caddyfile is nothing special (formatting is difficult in Discourse):

# Remember that I'm in localhost, so no HTTPS
http://server7.dev {
    root /var/www/dev
    errors /var/www/dev/.logs/error.log

    fastcgi / 127.0.0.1:9000 php

    gzip
    cors
    rewrite / {
        if {file} not_has .js
        if {file} not_has .css
        if {file} not_has .woff
        if {file} not_has .svg
        if {file} not_has .eot
        if {file} not_has .jpg
        if {file} not_has .jpeg
        if {file} not_has .gif
        if {file} not_has .png
        if {file} not_has .webp
        if {file} not_has .pdf
        if {file} not_has .ttf
        if {file} not_has .otf
        if {file} not_has .ico

        r (.*)
        to /index.php
    }
}

Thanks for the insight and advice about memory_limit. I’ll test it later.

A graceful reload waits for connections to terminate, with a cap of 5 seconds by default (this is a command line option you can customize). Any extra time is spent obtaining certificates as needed, and indeed only one can be obtained at a time (currently). But that’s a one-time cost.
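For reference, that cap is the -grace flag on the caddy binary (assuming the 0.9-era CLI), e.g.:

```
caddy -conf /etc/caddy/Caddyfile -grace 10s
```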

Blocking requests for the new domains?

Not to my knowledge.

I noticed that if one of my domains is not set up properly (in the Caddy configuration), none of them end up working. Is that possibly what you are referring to? It would be nice to have all the other sites/domains that can work fine keep working. I experienced this when a domain had not received its Let’s Encrypt certificate yet: all my domains/sites stopped working until I restarted Caddy and it was able to get the certificate.

Possibly a way to revert a site to tls off if certificate retrieval fails, and keep running?

Did you use signal USR1? That keeps your current sites up while Caddy gets new certificates.
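For anyone following along, that means sending the signal straight to the running process, e.g. kill -USR1 $(pidof caddy). The mechanism itself is just a signal trap; here is a toy, self-contained sketch of the idea (NOT Caddy’s actual code):

```shell
# Toy illustration of the USR1 reload mechanism (not Caddy itself):
# the process traps USR1 and re-reads its config instead of exiting,
# so requests to already-running sites are never dropped.
RELOADED=0
on_usr1() { RELOADED=$((RELOADED + 1)); echo "reload requested"; }
trap on_usr1 USR1

kill -USR1 $$   # for a real server: kill -USR1 "$(pidof caddy)"
sleep 0         # give the shell a chance to run the trap
echo "still serving; reloads so far: $RELOADED"
```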

No.

I just wanted to expand on this, since I’m seeing very similar requests here and wanted to add my own for the record.

Dynamic loading of Caddyfiles per domain.

What is the benefit of this? Or, in other words, what is wrong with signaling Caddy?

For those familiar with Apache .htaccess override files, this is what multiple folks are requesting: the ability to customize configuration for a certain portion of the site without editing the main config file. This is useful in shared hosting scenarios (where you don’t want multiple users editing the main config), and it’s very similar to per-domain config, since some sites are hosted off the same file system and different sites merely reside in different folders.

Would you allow any user of a shared web hosting system to edit the main Caddy config file and affect all other sites/users? Would you allow any user to send signals to the Caddy instance, with the same consequence?

One way this can be accomplished is described here by me:

You could optionally check the filesystem upon each request for the presence of override files to the main config, augment the in-memory config tree (or config hash, or whatever the data structure is), and present the new config structure to any subsequently invoked extensions, thus minimizing the impact on existing extensions. As far as the extensions can tell, the Caddy server was simply loaded with that config from the start.

This feature would be optionally enabled, alleviating any performance concerns for those who don’t care for it.

service caddy restart isn’t something I recognize. What is it?

Most Linux web server packages come with startup scripts for ease of administration. Some OSes use utilities like service or systemctl to start/stop/enable-at-boot various services; service is just a wrapper utility that redirects to init/systemd scripts.

Personally I don’t find the file system itself to be a reliable configuration manager.

You can import as many different config files as you want.
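To illustrate: with the import directive you can keep one small file per domain and pull them all in from the main Caddyfile (I believe glob patterns in import work as of the 0.9 line; you still have to signal Caddy after adding a file, but the main Caddyfile never changes). The paths below are just a hypothetical layout:

```
# /etc/caddy/Caddyfile
import vhosts/*.conf

# /etc/caddy/vhosts/domain1.com.conf  (one file per domain)
domain1.com {
    root /var/www/domain1.com
}
```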

I know, but there are no official init/systemd scripts for Caddy, so when somebody reports a problem with one here, it’s something that I hope somebody else can answer, because I don’t know how. :disappointed:

