Fun times! Anyway, looking into both of the problems, I guess…
The standard lib should not be panicking…
@mattiasgeniar Was that your first config? If not, how are you reloading it?
Fun fact, the static file server does not use code 302 (“Found”) to redirect. In fact, searching the code base here, I’m not finding anywhere in Caddy that emits a 302.
Anything else sitting in front of your server?
Also @mattiasgeniar, what is the log output? It should be in your journal; the full log would be helpful in trying to reproduce this. Thanks!
FWIW, we can do that ourselves since the Caddyfile was posted (I have done that in an attempt to reproduce the behavior.)
Hm, I figured out my problem and, as a result, changed the systemd startup script on my machine. Not sure if others should too, but for a Caddyfile user this makes more sense:
- ExecStart=/usr/local/bin/caddy2 run --config /etc/caddy2/Caddyfile --resume --environ
+ ExecStart=/usr/local/bin/caddy2 run --config /etc/caddy2/Caddyfile
--resume uses the last loaded configuration. This flag is useful primarily in API-heavy deployments, and overrides --config if a saved config exists.
In other words: I was modifying my Caddyfile contents, but the changes weren’t taking effect because Caddy had an (old?) saved version of the config on disk. Would that make sense?
It would also explain why none of my changes appeared to be coming through. The redirect loop might be an old issue from a v1 config that I hadn’t rewritten yet.
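For anyone who bumps into the same thing: assuming the admin API is still on its default address, you can check which config is actually live with something like:

curl localhost:2019/config/

If that output doesn’t match your Caddyfile, Caddy is probably running a resumed (autosaved) config rather than the file on disk.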
At this point: everything seems to be running smoothly on v2 rc1!
Ah, indeed, that’s why I was wondering if this was your first config and if not, how you reloaded it.
Restarting the server does not replace its configuration – that is what reloading is for.
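For a Caddyfile setup like yours, that would be something along the lines of (binary and path taken from your unit above):

/usr/local/bin/caddy2 reload --config /etc/caddy2/Caddyfile

or wired into systemd as an ExecReload= line, so that systemctl reload picks up edits to the file.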
The --resume flag is necessary to prevent data/config loss if the machine gets rebooted or the process gets restarted for some reason. If we take that flag out and people use the config API independently of their config file, and the machine is rebooted, it would result in data loss!
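For example, if someone has been pushing config changes through the admin API like this (config.json here is just a stand-in for whatever JSON they’d be pushing), none of that lives in the Caddyfile, so a reboot without --resume would silently throw it all away:

curl localhost:2019/load -H "Content-Type: application/json" -d @config.json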
Anyway, that’s why we tell people to make sure the command is correct before using it.
Still, a head-scratch is better than data loss.
(Also, I typically recommend leaving the --environ flag enabled, because it can be useful when troubleshooting later.)
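In other words, keeping the flag in the unit, something like:

ExecStart=/usr/local/bin/caddy2 run --config /etc/caddy2/Caddyfile --environ

All it does is print the process environment at startup, which makes it much easier to spot things like a wrong HOME or a missing variable in the logs later.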
Glad you figured it out!! Thanks for using Caddy. Stay tuned for when we release Caddy 2.0!
This is the part that bit me; I’m from the old-school Linux camp, where a restart usually loads the new config.
For Caddyfile usage that perhaps hasn’t seen an API call just yet (a counter at 0?), wouldn’t it make more sense to load the Caddyfile config on startup?
I can imagine many users who are upgrading would bump into this.
Yes, this is not an ideal situation, but as of right now, the two known viable defaults are:
Data loss (without --resume)
Scratching heads (with --resume unnecessarily)
Obviously, data loss is unacceptable, as we must guarantee a durable system.
I’m open to ideas for an alternative default that strikes a happy balance between the two and has good guarantees!
Hmm, I don’t think that guarantees durability in the situation where an initial config file is loaded, then the config file undergoes changes but is not intended to be reloaded yet, then the machine loses power and is brought back up. Because the counter is at 0, Caddy would unintentionally load a broken/unintended configuration instead of resuming its previous, intended one.
True, but this is essentially how any Linux daemon/service has worked for the last 10+ years. For Caddy to reach wider adoption, I think it might make sense to follow the traditional methods rather than invent a different strategy?
Mind you: this might just be a documentation issue instead. Because Caddy has two very powerful ways of configuring it (Caddyfile + API), and it doesn’t seem like a particularly good idea to start mixing them, the install instructions could simply differ for Caddyfile users vs. API users?
Those who do want to mix both methods are probably advanced users and might use a different systemd config altogether?
Traditional methods are broken, but we’ve reached a compromise by offering two service files to choose from instead of one, even though the only difference is basically that one flag.
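Roughly, the split looks like this (unit names and paths are illustrative, reusing the ones from earlier in this thread; check the dist files in the repo for the canonical versions):

# Caddyfile users: config comes from the file on disk
ExecStart=/usr/local/bin/caddy2 run --config /etc/caddy2/Caddyfile --environ
ExecReload=/usr/local/bin/caddy2 reload --config /etc/caddy2/Caddyfile

# API users: resume whatever config was last loaded via the API
ExecStart=/usr/local/bin/caddy2 run --resume --environ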