V2: getting infinite loop redirects to itself

1. My Caddy version (caddy version):

v2.0.0-rc.1 h1:DxUlg4kMisXwXVnWND7KEPl1f+vjFpIOzYpKpfmwyj8=

2. How I run Caddy:

The recommended systemd unit file as-is.

a. System environment:

Ubuntu 18.04.3, systemd 237

b. Command:

From the systemd unit file:

/usr/local/bin/caddy run --config /etc/caddy/Caddyfile --resume --environ

c. Service/unit/compose file:

[Unit]
Description=Caddy 2 Web Server
Documentation=https://caddyserver.com/docs/
After=network.target

[Service]
User=www-data
Group=www-data
ExecStart=/usr/local/bin/caddy run --config /etc/caddy/Caddyfile --resume --environ
ExecReload=/usr/local/bin/caddy reload --config /etc/caddy/Caddyfile
TimeoutStopSec=5s
LimitNOFILE=1048576
LimitNPROC=512
PrivateTmp=true
ProtectSystem=full
AmbientCapabilities=CAP_NET_BIND_SERVICE

[Install]
WantedBy=multi-user.target

d. My complete Caddyfile or JSON config:

/etc/caddy/Caddyfile

import /etc/caddy/conf.d/*.conf

The only vhost:

ma.ttias.be {
	root * /var/www/html/ma.ttias.be/public/
	file_server
	encode gzip

	log {
		output file         /var/www/html/ma.ttias.be/logs/access.log
		format single_field common_log
	}
}

3. The problem I’m having:

This config keeps redirecting the site to itself:

$ curl -I https://ma.ttias.be/
HTTP/2 302
location: https://ma.ttias.be/
server: Caddy
date: Fri, 03 Apr 2020 21:20:29 GMT

4. Error messages and/or full log output:

None; for the curl output, see above.

5. What I already tried:

The config has been trimmed down to its bare essentials; I’m not sure what else I can modify.

You should have stdout logs from Caddy; check the systemd journal (journalctl).


Ah, that covers any kind of HTTP log too; all I get is confirmation that the 302s are indeed being served:

84.197.60.218 - - [03/Apr/2020:21:28:16 +0000] "GET / HTTP/2.0" 302 0
84.197.60.218 - - [03/Apr/2020:21:28:16 +0000] "GET / HTTP/2.0" 302 0
84.197.60.218 - - [03/Apr/2020:21:28:16 +0000] "GET / HTTP/2.0" 302 0
84.197.60.218 - - [03/Apr/2020:21:28:16 +0000] "GET / HTTP/2.0" 302 0

Hmm. That’s pretty strange.

Could you run caddy adapt --config /etc/caddy/Caddyfile --pretty and paste the output?

Since your config might be multiple files, that’ll merge them into a JSON config and it’ll be easier to understand how it actually looks.

Also, you must have Caddy startup logs; Caddy prints to stdout when the service starts. Maybe sudo journalctl -u caddy?
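(For reference: systemd captures Caddy’s stdout in the journal. Assuming the service unit is named caddy, something like this shows the recent startup output:)

```shell
# View the last 100 log lines for the caddy unit
# (unit name "caddy" assumed here):
sudo journalctl -u caddy --no-pager -n 100

# Follow new log lines live while reproducing the redirect:
sudo journalctl -u caddy -f
```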

Random guess, I haven’t verified this, but maybe the trailing / is the issue? Try:

root * /var/www/html/ma.ttias.be/public

While trying to reproduce this, I encountered a panic in the standard library:

panic: runtime error: invalid memory address or nil pointer dereference
        panic: runtime error: invalid memory address or nil pointer dereference
[signal SIGSEGV: segmentation violation code=0x1 addr=0x28 pc=0x1340b1d]

goroutine 64 [running]:
net/http.(*onceCloseListener).close(...)
        /usr/local/go/src/net/http/server.go:3337
sync.(*Once).doSlow(0xc000a08010, 0xc00008cc18)
        /usr/local/go/src/sync/once.go:66 +0xec
sync.(*Once).Do(...)
        /usr/local/go/src/sync/once.go:57
net/http.(*onceCloseListener).Close(0xc000a08000, 0x0, 0x0)
        /usr/local/go/src/net/http/server.go:3333 +0x77
panic(0x200a160, 0x30126b0)
        /usr/local/go/src/runtime/panic.go:967 +0x166
net/http.(*onceCloseListener).Accept(0xc000a08000, 0xc000134018, 0x1feaa60, 0x30125e0, 0x2194580)
        <autogenerated>:1 +0x32
net/http.(*Server).Serve(0xc0009ae380, 0x0, 0x0, 0x0, 0x0)
        /usr/local/go/src/net/http/server.go:2901 +0x25d
...

Fun times! Anyway, looking into both of the problems, I guess…

The standard lib should not be panicking…

@mattiasgeniar Was that your first config? If not, how are you reloading it?

Fun fact, the static file server does not use code 302 (“Found”) to redirect. In fact, searching the code base here, I’m not finding anywhere in Caddy that emits a 302.

Anything else sitting in front of your server?

Also @mattiasgeniar what is the log output? It should be in your journal, the full log would be helpful in trying to reproduce this. Thanks!

FWIW, we can do that ourselves since the Caddyfile was posted :slight_smile: (I have done that in an attempt to reproduce the behavior.)

Hm, I figured out my problem, and as a result changed the systemd startup script on my machine. Not sure if others should do the same, but for a Caddyfile user this makes more sense:

- ExecStart=/usr/local/bin/caddy2 run --config /etc/caddy2/Caddyfile  --resume --environ
+ ExecStart=/usr/local/bin/caddy2 run --config /etc/caddy2/Caddyfile

What bit me was the --resume option, which the docs describe as follows:

--resume uses the last loaded configuration. This flag is useful primarily in API-heavy deployments, and overrides --config if a saved config exists.

In other words: I was modifying my Caddyfile contents, but the changes weren’t taking any effect because Caddy was loading an (old?) saved version of the config from disk. Would that make sense?

It would also explain why none of my changes appeared to be coming through. The redirect loop might be an old issue with a v1 config that I hadn’t rewritten then.
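(One way to verify this theory: Caddy persists its last active config as autosave.json in its config directory, and --resume prefers that file over the Caddyfile. A rough check — the exact path depends on the home directory of the user the service runs as; $XDG_CONFIG_HOME/caddy/autosave.json is the default:)

```shell
# Does a saved config exist? If so, --resume loads it
# instead of the Caddyfile passed via --config:
ls -l ~/.config/caddy/autosave.json

# Inspect it to see which (possibly stale) config was actually running:
cat ~/.config/caddy/autosave.json
```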

At this point: everything seems to be running smoothly on v2 rc1!


Ah, indeed, that’s why I was wondering if this was your first config and if not, how you reloaded it.

Restarting the server does not replace its configuration – that is what reloading is for. :slight_smile:
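(In practice, that means applying Caddyfile edits with a reload rather than a restart. With the unit file above, either of these should work:)

```shell
# Via systemd (this maps to the ExecReload= line in the unit file):
sudo systemctl reload caddy

# Or by invoking Caddy directly:
caddy reload --config /etc/caddy/Caddyfile
```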

The --resume flag is necessary to prevent data/config loss if the machine gets rebooted or the process gets restarted for some reason. If we take that flag out and people use the config API independently of their config file, and the machine is rebooted, it would result in data loss! :scream:

Anyway, that’s why we tell people to make sure the command is correct before using it. :+1:

Still, a head-scratch is better than data loss.

(Also, I typically recommend leaving the --environ flag enabled, because it can be useful when troubleshooting later.)

Glad you figured it out!! Thanks for using Caddy. Stay tuned for when we release Caddy 2.0!


This is the part that bit me, I’m from the old-school Linux camp where a restart usually loads a new config. :slight_smile:

For Caddyfile usage that perhaps hasn’t seen an API call just yet (a counter at 0?), wouldn’t it make more sense to load the Caddyfile config on startup?

I can imagine many users that are upgrading would bump into this.


Yes, this is not an ideal situation, but as of right now, the two known viable defaults are:

  • Data loss (without --resume)
  • Scratching heads (with --resume unnecessarily)

Obviously, data loss is unacceptable, as we must guarantee a durable system.

I’m open to ideas for an alternative default that strikes a happy balance between the two and has good guarantees!

Hmm, I don’t think that guarantees durability in the situation where an initial config file is loaded, then the config file undergoes changes but is not intended to be reloaded yet, then the machine loses power and is brought back up. Because the counter is at 0, Caddy would unintentionally load a broken/unintended configuration instead of resuming its previous, intended one.

True, but this is essentially how any Linux daemon/service has worked for the last 10+ years. I think for Caddy to reach wider adoption, it might make sense to follow the traditional methods rather than invent a different strategy?

Mind you: this might just be a documentation issue instead. Because Caddy has 2 very powerful ways of configuring (Caddyfile + API), and it doesn’t seem like a particularly good idea to start mixing them, the install instructions could simply differ for Caddyfile users vs. API users?

Those that do want to mix both methods are probably advanced users and might take a different systemd config altogether?

Traditional methods are broken, but we’ve reached a compromise by offering 2 service files to choose from, instead of 1, even though the only difference is basically that one flag.

The docs do go over that in our introductory guide: Getting Started — Caddy Documentation

Thanks for using Caddy, Mattias! Can’t wait to get 2.0 out the door :slight_smile:

This topic was automatically closed 30 days after the last reply. New replies are no longer allowed.