Setting up Caddy with Abiosoft's Docker - Help needed

Hello, all! I’m new to Caddy; I’ve only been using Nginx for the past few months, so bear with me, please!

I’m trying to move a WordPress installation I’ve been running with Nginx over to Caddy. The Docker image I’m currently using is this one:

I used the php variant because I don’t know whether WordPress uses PHP or not (or whether I need it in the first place), but it seemed like a good idea because I’m also going to set up _h5ai the same way later.

What I’m having trouble with is the configuration of the Caddyfile. In the past (Nginx), I used this configuration:

# Main server block (for wordpress)
server {
        listen 443 ssl;

        root /config/www/wordpress;
        index index.html index.htm index.php;

        server_name example.com www.example.com;

        ssl_certificate /config/keys/fullchain.pem;
        ssl_certificate_key /config/keys/privkey.pem;
        ssl_dhparam /config/nginx/dhparams.pem;
	ssl_ciphers 'ECDHE-RSA-AES128-GCM-SHA256:ECDHE-ECDSA-AES128-GCM-SHA256:ECDHE-RSA-AES256-GCM-SHA384:ECDHE-ECDSA-AES256-GCM-SHA384:DHE-RSA-AES128-GCM-SHA256:DHE-DSS-AES128-GCM-SHA256:kEDH+AESGCM:ECDHE-RSA-AES128-SHA256:ECDHE-ECDSA-AES128-SHA256:ECDHE-RSA-AES128-SHA:ECDHE-ECDSA-AES128-SHA:ECDHE-RSA-AES256-SHA384:ECDHE-ECDSA-AES256-SHA384:ECDHE-RSA-AES256-SHA:ECDHE-ECDSA-AES256-SHA:DHE-RSA-AES128-SHA256:DHE-RSA-AES128-SHA:DHE-DSS-AES128-SHA256:DHE-RSA-AES256-SHA256:DHE-DSS-AES256-SHA:DHE-RSA-AES256-SHA:AES128-GCM-SHA256:AES256-GCM-SHA384:AES128-SHA256:AES256-SHA256:AES128-SHA:AES256-SHA:AES:CAMELLIA:DES-CBC3-SHA:!aNULL:!eNULL:!EXPORT:!DES:!RC4:!MD5:!PSK:!aECDH:!EDH-DSS-DES-CBC3-SHA:!EDH-RSA-DES-CBC3-SHA:!KRB5-DES-CBC3-SHA';
        ssl_prefer_server_ciphers on;

        client_max_body_size 0;

        location / {
                try_files $uri $uri/ /index.html /index.php?$args =404;
        }

        location ~ \.php$ {
                fastcgi_split_path_info ^(.+\.php)(/.+)$;
                fastcgi_pass unix:/var/run/php5-fpm.sock;
                fastcgi_index index.php;
                fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name;
                include fastcgi_params;
        }
}

I read up on some examples and managed to put together this Caddyfile, but I’m not sure if it’s correct.

example.com www.example.com {
    tls example@gmail.com
    root ./wordpress
    log ./storage/logs/caddy-access.log
    errors ./storage/logs/caddy-error.log

    fastcgi / unix:/var/run/php5-fpm.sock {
        index index.php
    }

    rewrite {
        to {path} {path}/ /index.php?{query}
    }
}

This is the error I see in the Docker logs. Note that I’m using UnRAID with a customized Docker template.

2016/11/24 07:40:14 get directory at 'https://acme-v01.api.letsencrypt.org/directory': failed to get "https://acme-v01.api.letsencrypt.org/directory": Get https://acme-v01.api.letsencrypt.org/directory: x509: failed to load system roots and no roots provided

Activating privacy features...

It spits this out, then shuts down. When I try to restart it, the above log repeats, and it shuts down.

I have another question as well. With Nginx and Let’s Encrypt, I was unable to use CloudFlare because Let’s Encrypt wouldn’t give me a certificate; the IP didn’t match or something. Can I use CloudFlare with this Docker image?

How do I configure Caddy? What am I doing wrong? Can I use CF? Thanks in advance.

Not a bad effort! In Abiosoft’s example Caddyfile for the PHP tag you’ll note the line startup php-fpm; in the GitHub readme, he explains this is necessary to have in your Caddyfile so that PHP is actually running.

You’ll also note his fastcgi directive is a little different: he uses localhost on port 9000, and he uses the php preset of the fastcgi directive to simplify things.
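Put together, a minimal Caddyfile along those lines would look something like this. This is only a sketch: the domain, email, and root path are placeholders, and it assumes the PHP process inside the container is named php-fpm, as in the readme’s example.

```
example.com {
    tls you@example.com
    root /srv

    # Launch php-fpm when Caddy starts; the readme says this is required
    # for the :php image, since nothing else starts PHP in the container
    startup php-fpm

    # The php preset sets index.php and the .php path-splitting for you
    fastcgi / 127.0.0.1:9000 php

    # Send non-file requests to WordPress's front controller
    rewrite {
        to {path} {path}/ /index.php?{query}
    }
}
```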

I also use Abiosoft’s caddy:php container on unRAID:

But I’m afraid I haven’t encountered your ACME error. It looks like Caddy thinks you have no root certificates installed, maybe? I’m not sure how that could be the case given you’re using a Docker container.

What volumes and ports have you given the Caddy container in unRAID?

P.S. WordPress very much requires PHP! :smiley: The whole platform is based on it. You’re also going to need to set up MySQL or MariaDB. You might be better served, actually, by having an official WordPress container with a paired MariaDB container and simply proxying to your WP via Caddy. It’d probably be simpler.
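If you went the official-container route, the Caddy side ends up being just a few lines. A rough sketch, where the upstream address and port are placeholders for wherever the WordPress container’s HTTP port gets published on the host:

```
example.com www.example.com {
    tls you@example.com

    # Hand everything to the WordPress container;
    # transparent passes Host and X-Forwarded-* headers upstream
    proxy / 192.168.0.100:8080 {
        transparent
    }
}
```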

First off, sorry for the late reply. I was somewhere without a computer and couldn’t answer fast enough. Thrilled to meet another UnRAID user :slight_smile:

To start off, before I read your answer, my configuration went something like this:

Stupid me. This is what I changed my configuration to, based on yours. I didn’t add port 2015 because I thought that was for testing purposes. Here’s the new config:

This is the new Caddyfile I wrote based on your recommendation:
ideaman924.com www.ideaman924.com {
    tls ideaman924@gmail.com
    root ./wordpress
    log ./storage/logs/caddy-access.log
    errors ./storage/logs/caddy-error.log

    fastcgi / 127.0.0.1:9000 php {
        index index.php
    }

    startup php-fpm

    rewrite {
        to {path} {path}/ /index.php?{query}
    }
}

Same results. The folders in appdata were clean, with the exception of the www directory. After the run, the only change was that three files had been added to /root/.caddy: hostname, hosts, and resolv.conf. All three files are empty.

EDIT: How did you add the icon for Caddy? Looks damn good.

EDIT2: I’m not that good with network administration or web development. I don’t even know how to write HTML files, so all the PHP stuff is just killing me. I do have MariaDB and Nextcloud set up in other Docker containers; once I’m past this initial barrier, I can just add those as reverse proxies. I will need help with that too, though. I don’t know how to set up reverse proxies in Caddy.

EDIT3: I noticed the ‘Privileged’ option in UnRAID, and checked Nginx. It seems like it’s on, so I came back to Caddy and enabled it. No difference.

Likewise! :smiley:

Yeah, I just point some sites at it for testing. It doesn’t hurt to leave it up in unRAID. I close it off at the edge (router); that’s less effort than editing the Docker config.

That’s weird. I’ve got two folders there: acme/ and ocsp/. /root/.caddy should contain certificates, not resolver files. This seems very odd, and I don’t think it can be explained just by my container having been in place longer than yours.

Your repo is definitely set to abiosoft/caddy:php? Looks like you’re on the latest. What’s your unRAID version?

I cheated! I inspected the source of this very page, lifted the link to the icon (https://caddy-forum-uploads.s3.amazonaws.com/original/1X/347413cfd8716670149d5ba1f9923f3485745fd6.png), and pasted it in the Icon URL field in the Advanced View of the Update Container page :smiley:

Easy as. Check my Caddyfile - it used to be a bit more complicated, complete with a git-middleware-served site, gzip, and fastcgi, but I recently moved those sites to AWS and just proxy my home apps now. Keep in mind, though, that while I’ve put them in subfolders, it only works because each app is configured to expect that subfolder as its URI base. Most apps aren’t as accommodating, and you’ll most likely have to give each one a subdomain instead.
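As a rough illustration of what I mean (the app path, address, and port are made up):

```
example.com {
    # Only works if the app is configured to expect /app as its URI base
    proxy /app 192.168.0.100:8081 {
        transparent
    }
}
```

If the app can’t be told about the subfolder, the proxy directive does have a without /app subdirective to strip the prefix before forwarding, but apps that generate absolute links will still break, which is why a subdomain is usually the safer bet.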

Does pretty much nothing Caddy needs. Leave it off.

Definitely. I checked, just in case. Docker URL is set to:
https://hub.docker.com/r/abiosoft/caddy/~/dockerfile/
And Docker Repository is set to:
abiosoft/caddy:php

Is it worth mentioning that this was automatically imported from Community Applications? I did all the path and port setup manually, so this probably shouldn’t be an issue.

My UnRAID is on the latest version, 6.2.4.

Did the same thing. Thanks!

I always use subdomains :slight_smile: So probably shouldn’t matter.

Turned it off.

So this is really stumping me. All I can speculate is that Caddy is probing a directory it shouldn’t be in and freaking out when it finds weird files there. But I’ve searched everywhere, and there are no files where Caddy goes.

OH GOD. I found out what my f**k-up was. I’m really really sorry, I didn’t think this would cause an issue.

Turns out the Caddy Docker container is very sensitive to /etc. I had it mapped as /etc → /mnt/user/appdata/Caddy/caddy, which was preventing Caddy from working. I changed it to match your config and it’s all working fine now.

My website is finally back up. Now I just need to add _h5ai and Nextcloud…

I have one more question, though. I also use Cloudflare, does this docker image support adding the Cloudflare DNS provider or something? How do I add it?

Of course, but I don’t think it’s the issue. I did the same (before making my changes, of course).

I’m technically still on 6.2.3, pending restart. Starting to wonder if I should not restart :confused:

Next step if I were you would be to SSH into the unRAID box and docker exec -it caddy /bin/sh and poke around, check that you’re in the /srv directory as expected, check the /root directory and see what’s going on, etc. Run Caddy from shell once you’re inside and see what it spits out.

P.S. noticed this question I meant to address from your first post but forgot:

Yes, this issue will pop up when you “orange cloud” the DNS record in CloudFlare. CF is a reverse proxy in itself, and a DNS record that is orange-clouded doesn’t actually point to your server; it points to CF’s edge, and CF then proxies onwards to your server instead. When the ACME server receives a request from 1.1.1.1 for domain.com, goes to grab domain.com/.well-known/acme-challenge/ and is told to get it from 2.2.2.2 instead, it fails out. A grey-cloud A record is a traditional, non-proxied record and will work for Let’s Encrypt’s challenge. Alternately, configure DNS validation (it’s pretty cool, albeit a bit slower).

I think you missed my last post :wink: Got that solved. You can reboot your UnRAID server, rest assured. It was a mess-up on my part.

So how do I set up DNS validation? It says I need to ‘add’ a provider, but I don’t know how to do that.

Bahahahahahahahahahaha

Haha

Nice.

See my PS in my long-winded and now unnecessary previous post :smiley: We’re posting a little out of sync.

You can say that again :yum:

So I’ve been searching around. On the official site, there’s a checkbox to add CF, but since I’m using a Docker container, I don’t think that’s the correct way, right?

How do I set up the DNS validation within the Docker container?

You’ve got a few options. You can use the container you’ve got, docker exec -it caddy /bin/sh in manually and replace the binary with an updated one from caddyserver.com. That will persist until the container updates.

However, it’s probably easier to look for another Docker container that already has this taken care of. Check out this one by zzrot which has all available middleware. Some volume tweaking may be necessary.
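For reference, once you have a build with the CloudFlare provider plugin included, the Caddyfile side of DNS validation is tiny. A sketch, assuming the provider reads the usual CLOUDFLARE_EMAIL and CLOUDFLARE_API_KEY environment variables, which you’d add to the container in unRAID’s container settings:

```
example.com {
    # Solve the ACME challenge via CloudFlare DNS records
    # instead of serving a file over HTTP
    tls {
        dns cloudflare
    }
}
```

This only works if the binary was compiled with that DNS provider; the stock abiosoft image won’t have it unless you swap the binary or use an image that does.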

It occurs to me that this thread has definitely strayed beyond Caddy support and well into unRAID and Docker support, eh.

Going back over this part, you can also remove the /root/.ssh → keys mapping. I put some deploy keys in there to take advantage of the git middleware over ssh from a private repo. Unimportant unless you plan to deploy from git.

And this part - this is important. /etc keeps a lot more than just the stuff Caddy uses. For example… your ca-certificates. Without these root certificates, no TLS will work as the system has nobody to trust. Always be wary of volume mounting over the top of things, and know exactly what you’re leaving behind. This would have brought any container to its knees, more than likely, not just this one.

OK. So I have successfully set up WordPress and _h5ai within Caddy. Now I need to set up one reverse proxy and then I will be done.

So this is what I came up with before with Nginx:

server {
        listen 443 ssl;

        server_name cloud.example.com www.cloud.example.com;

        location / {
                include /config/nginx/proxy.conf;
                proxy_pass https://192.168.0.100:9000;
        }
}

So host port 9000 maps to the container’s port 443. If I pass a plain HTTP request, it fails. So HTTPS only.

This is the proxy.conf mentioned above:

client_max_body_size 10m;
client_body_buffer_size 128k;

#Timeout if the real server is dead
proxy_next_upstream error timeout invalid_header http_500 http_502 http_503;

# Advanced Proxy Config
send_timeout 5m;
proxy_read_timeout 240;
proxy_send_timeout 240;
proxy_connect_timeout 240;

# Basic Proxy Config
proxy_set_header Host $host:$server_port;
proxy_set_header X-Real-IP $remote_addr;
proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
proxy_set_header X-Forwarded-Proto https;
proxy_redirect  http://  $scheme://;
proxy_http_version 1.1;
proxy_set_header Connection "";
proxy_cache_bypass $cookie_session;
proxy_no_cache $cookie_session;
proxy_buffers 32 4k;

So this is what I came up with with Caddyfile:

cloud.example.com www.cloud.example.com {
    tls example@gmail.com

    proxy / https://192.168.0.100:9000 {
        transparent
    }
}

However, the response is a 502 bad gateway. Presumably there’s a step I missed. Connecting directly via IP works, and I know Nginx worked before, so the problem lies in my Caddyfile configuration. Help me out, please? Thanks.

You shouldn’t have a default trusted certificate for a private IP, so Caddy is probably trying to negotiate TLS on port 9000 with the private IP address and failing. Unless you want to manually sign a certificate for 192.168.0.100 and manually add it to the trusted certificates in the Caddy container, try without the https:// -

    proxy / 192.168.0.100:9000 {
        transparent
    }

I told you, it errors. The Nextcloud container is configured with Nginx, and it gives a status code of 400, “The plain HTTP request was sent to HTTPS port”. I can’t open any HTTP ports and I don’t want to, because I’d like to send everything via 443. How can I ask Caddy not to make an SSL certificate for the internal IP?

I mean, Caddy’s not trying to make an SSL cert, because it can’t make a trusted one for a private IP. It’s not Caddy’s responsibility anyway; the issue is the trust level of the certificate provided by the upstream server you want to proxy to (192.168.0.100:9000).

It’s the nginx server that needs to make and provide the trusted certificate if you want to only accept HTTPS on your upstream. There’s very little point in that, though, because:

  • You can’t have working TLS without a trusted certificate, and;
  • You can’t get a CA signed certificate for a private IP address, which means;
  • To make this work you must generate the certificate on the server (nginx) and add it directly to the client’s (Caddy’s) trusted certificate store (this might be possible with some clever Docker volume usage between the nginx container and the Caddy container)

Your other option is to use insecure_skip_verify in your proxy directive which entirely defeats the point of using HTTPS in the first place - you’re literally using HTTP over HTTPS in this case.

May I ask what you’d like your end result to look like? It seems to me that if you simply want to disallow HTTP connections to your site, you’re already good if you have Caddy in front proxying upstream to a non-public-facing port. Caddy will automatically redirect HTTP requests to the HTTPS site, or you can override this behaviour via the Caddyfile and simply not allow HTTP requests.

I think I’ll use the proxy directive.

What I want the end result to be: all connections between the public internet and Caddy use HTTPS with TLS (done automatically). I don’t care if Caddy talks plain HTTP on the internal network because a) the Docker containers are hosted on the same UnRAID server, and b) even if that weren’t the case, the server is at home and I can trust people not to monitor network connections. (They have trouble connecting to Wi-Fi, so I can rest assured they can’t monitor anything.)

So adding the proxy directive like you mentioned should be enough, right? Don’t want to go through the hackery of adding certificates, because I don’t need them.

Given what I understand of the issue, I’m pretty confident it will stop the Caddy 502s / nginx 400s.

I guess I don’t understand why it’s necessary. You’ve got two options to get Caddy and nginx to talk together nicely, and they have the same effect:

  1. Add insecure_skip_verify to the Caddyfile
  2. Remove ssl from the nginx config

And option 2 will probably yield an (infinitesimal) improvement, because nginx won’t bother trying to present a certificate that Caddy will simply ignore. It just doesn’t make sense to me to add ssl to the nginx config in the first place, only to add insecure_skip_verify to the Caddyfile to counteract what you’ve just done. It’s more complex than required, one careless edit could easily break this weird configuration, and it has zero effect on the end result.
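What option 2 looks like, roughly: drop the ssl keyword (and the certificate directives) from the listening server block inside the Nextcloud container’s nginx, so it answers plain HTTP on the same internal port. A sketch of the change, not your actual config:

```
server {
    # was: listen 443 ssl;  - drop "ssl" so nginx speaks plain HTTP here;
    # Caddy terminates TLS in front of this
    listen 443;

    # the ssl_certificate / ssl_certificate_key lines can go away too

    # ... rest of the server block unchanged ...
}
```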

I guess it’s a really minor thing, though, so if it works for you, that’s what it’s there for. :thumbsup:

Oh… I didn’t know about the ssl thing. (I think you either didn’t say it, or you did and I just didn’t understand.) I can certainly do that.

The guy who made the container has that set to default in the first place, so don’t judge me. :stuck_out_tongue: I will do the second option.

Heh, I wouldn’t be surprised if it’s designed to get its own LE certificates too, if you pointed to it directly. It’s so easy to put Certbot or some other ACME script in your container.