I am using Caddy to redirect multiple old domains and/or paths.
However, I see that while the HTTPS part is working well, the HTTP requests just get served by my catch-all rule (which redirects to an error page).
If I replace each plain domain name in the config with `http://domain, https://domain`, everything works fine.
I was somehow sure that a plain domain block is equivalent to `http://domain, https://domain`. Unfortunately, I was not able to find a clear explanation of why and how they differ.
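A minimal sketch of what I mean (domain names are placeholders, not my real config):

```
# Plain address block -- I assumed this covers both HTTP and HTTPS:
old1.example.local {
	redir * https://error-pages.newdom.local 301
}

# The variant that actually handles plain HTTP requests for me:
http://old1.example.local, https://old1.example.local {
	redir * https://error-pages.newdom.local 301
}
```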
2. Error messages and/or full log output:
No error logs
3. Caddy version:
2.7.5
4. How I installed and ran Caddy:
built and ran a docker image
a. System environment:
Linux, docker.
b. Command:
caddy run --config /etc/caddy/Caddyfile --adapter caddyfile
c. Service/unit/compose file:
d. My complete Caddy config:
This config works properly over HTTPS, but over HTTP it only serves the error pages defined in my :80 block.
The order of the routes in the HTTP server when HTTP->HTTPS redirects are enabled is:
1. User-defined sites starting with `http://` (e.g. `http://example.com`)
2. HTTP->HTTPS redirects for HTTPS site addresses (e.g. `example.com`)
3. User-defined catch-all site (e.g. `http://` or `:80`)
4. Always-included fallback catch-all redirecting HTTP traffic to HTTPS using the incoming Host header
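For illustration, a Caddyfile like this (made-up addresses) contributes one entry to each of the first three categories:

```
# (1) user-defined http:// site
http://plain.example.com {
	respond "served over HTTP"
}

# (2) an HTTPS site address, which gets an automatic HTTP->HTTPS redirect
secure.example.com {
	respond "served over HTTPS"
}

# (3) user-defined catch-all for the HTTP server
:80 {
	respond "catch-all" 404
}

# (4) is added by Caddy itself; there is nothing to write for it.
```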
In this case, you have no (1), your two HTTPS sites fall under (2), your catch-all is a user-defined (3), and (4) is always included.
Caddy will first serve a redirect from http://old1.example.local/ to https://old1.example.local/, and then, after the client connects over HTTPS, it will be served the redirect from https://old1.example.local/ to https://error-pages.newdom.local/.
Make sure when testing that you use `curl -vL` (`-L` tells curl to follow `Location` headers, i.e. redirects). This is working as intended.
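For example, with the hostnames from above:

```
# -s silences progress output, -v shows the headers of every hop,
# -L follows each Location header so you see the whole redirect chain
curl -svL http://old1.example.local/
```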
Also, you probably want to add `{uri}` at the end of all your redirect targets to preserve the request URI; otherwise it gets dropped completely from the request. For example:
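A sketch, reusing the redirect from the old1 block:

```
old1.example.local {
	# {uri} appends the original path and query string to the target
	redir * https://error-pages.newdom.local{uri} 301
}
```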
Well, simple examples work, until it gets a bit more complicated: adding multiple domains and subdomains per address block makes the HTTP requests fall through. I was not able to find any pattern here.
The only consistent behaviour I found was to replace the plain domain name in the address block with `http://domain, https://domain`. In that case there is no fall-through, but there is also no redirect to HTTPS.
When things “don’t work”, the JSON config has no routes for the HTTP listener except the fallback one.
It’s because Automatic HTTPS adds the routes at runtime, after the config is loaded. If you enable the `debug` global option, you’ll see an `http.auto_https` log line, `adjusted config`, which outputs the transformed config (it doesn’t encode the new catch-all routes correctly and they just show up as `{}` in the HTTP server’s routes, but they do work correctly).
You haven’t shown a reproducible case. The config you showed earlier seems to work as intended, so I’m not sure what to tell you.
I’m only using `curl -IX GET` to check the Location header.
Yes, thank you. I am using it where needed.
For simple configurations it does, until it doesn’t. Since the config is completely under my control (no external imports to deal with, etc.), I’ll probably just create two address blocks for every domain: the first for `http://` with the redirect to HTTPS, and the second as before.
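In sketch form (placeholder domains), that pair of blocks would look like this; the 308 code mirrors what Caddy’s automatic redirect would have done, but any redirect code works:

```
# Explicit HTTP block: takes the place of the automatic redirect
http://old1.example.local {
	redir * https://old1.example.local{uri} 308
}

# HTTPS block, unchanged from before
old1.example.local {
	redir * https://error-pages.newdom.local 301
}
```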
Should do everything I need.
I would still be interested to hear what happens to the routes when I add “something extra”.
I cannot post the actual config (I know, I am sorry), but I would still be interested in your opinion, @francislavoie, on why removing just one address block makes it work fine, while adding it back (in any position in the config, even with changed domain names, even re-typed by hand, although I generate the whole thing with a Jinja template) makes all HTTP routes besides the fallback disappear.
That’ll only show you the first redirect, but not the second one. When you request HTTP, it’ll get redirected to HTTPS, and then from HTTPS->HTTPS using your configured routes. You need to use -L to actually see the full chain.
My point was that the first curl already goes off the rails. I did just try with -L; there’s no turning back. The second request just shows the headers coming from GitHub Pages (where the fallback error pages are hosted).
For reference, this is the main part of the template:
Yeah, it is strange. I have a 6K source.txt with the domains and the redirects. Only one of them is problematic, and there is nothing special about that line. I have more lines like it, but only that one has issues, even with the domain names changed on that line, even moved around in the file. I’ve tried everything :))
Thank you for the 1-2-3-4 explanation. I missed (1). I’ll make sure it won’t happen in these cases.
When the config “does not work”, the autosave.json looks like this: there is a single route on the port 80 listener, which is the default “catch-all” redirect to the error pages:
Okay so you’re saying that your http:// catch-all is being used instead of the automatic HTTP->HTTPS redirect for that domain?
In your example in the OP you had `redir * https://error-pages.newdom.local 301` in the old1 block. Are you sure that’s not just working as intended?
I need a minimal reproducible example. Can you try to replicate it using `*.localhost` domains? (curl and browsers automatically handle `*.localhost` correctly, resolving it to 127.0.0.1.)
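Something along these lines, as a guess at a minimal starting point (all names are placeholders):

```
old1.localhost {
	redir * https://error.localhost 301
}

old2.localhost, sub.old2.localhost {
	redir * https://error.localhost 301
}

:80 {
	redir * https://error.localhost 301
}
```

Then compare what `curl -vL http://old1.localhost` does against the `https://` variant.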