Can't run Caddy after apt install that did not seem to have errors

1. Caddy version (caddy version):

v2.3.0

2. How I run Caddy:

Ubuntu 20 server
I just installed it following the instructions on the main Caddy website, but it doesn't run like it said it would.

a. System environment:

Ubuntu 20 server on an RPI

b. Command:

caddy start
sudo systemctl start caddy
sudo systemctl disable caddy
sudo systemctl enable caddy
caddy stop
caddy start
etc.


c. Service/unit/compose file:


d. My complete Caddyfile or JSON config:

# The Caddyfile is an easy way to configure your Caddy web server.
#
# Unless the file starts with a global options block, the first
# uncommented line is always the address of your site.
#
# To use your own domain name (with automatic HTTPS), first make
# sure your domain's A/AAAA DNS records are properly pointed to
# this machine's public IP, then replace the line below with your
# domain name.


https://shinobi.kaveman.tech {
        reverse_proxy localhost:8080
}

https://pihole.kaveman.tech {
        reverse_proxy localhost:8081/admin/index.php
}

# Set this path to your site's directory.
root * /usr/share/caddy

# Enable the static file server.
file_server

# Another common task is to set up a reverse proxy:
# reverse_proxy localhost:8080


# Or serve a PHP site through php-fpm:
# php_fastcgi localhost:9000

# Refer to the Caddy docs for more information:
# https://caddyserver.com/docs/caddyfile

3. The problem I’m having:

After playing with the config a lot, I realized Caddy was not actually running in the first place. I noticed some error messages suggesting 443 was already occupied, but the only app on that port was Caddy. I saw some issues other people had where systemd runs one instance while you try to start Caddy through the CLI, so I tried the systemctl commands above. The YouTube tutorial I watched had this up and running in 5 minutes, lol; I am lost, however. Nothing I've tried gives back the output that seems to be expected for other folks with similar issues.

4. Error messages and/or full log output:

● caddy.service - Caddy
Loaded: loaded (/lib/systemd/system/caddy.service; enabled; vendor preset: enabled)
Active: failed (Result: exit-code) since Sat 2021-01-30 17:11:22 CST; 4s ago
Docs: https://caddyserver.com/docs/
Process: 74355 ExecStart=/usr/bin/caddy run --environ --config /etc/caddy/Caddyfile (code=exited, status=1/FAILURE)
Main PID: 74355 (code=exited, status=1/FAILURE)
########################################################
kevin@RP1:/etc/caddy$ caddy start
2021/01/30 23:22:56.669 INFO using adjacent Caddyfile
run: adapting config using caddyfile: server listening on [:443] is configured for HTTPS and cannot natively multiplex HTTP and HTTPS: /usr/share/caddy (try specifying https:// in the address)
start: caddy process exited with error: exit status 1

5. What I already tried:

A ton of stuff that possibly messed Caddy right up; I may be better off purging it at this point, I guess.

6. Links to relevant resources:

You should remove or comment out these lines. Caddy is parsing these as site addresses.

This is not valid syntax; Caddy v2 does not support paths in proxy addresses. I'm not sure what you're trying to do here, but I think you may be looking for the rewrite directive, depending on your goal.
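For example, if the goal is for pihole.kaveman.tech to land on the Pi-hole admin page, one common sketch is to redirect the bare path and proxy everything else unchanged (the /admin/ target is an assumption based on the original config, not a confirmed requirement):

```
https://pihole.kaveman.tech {
	# Send visitors hitting the root straight to Pi-hole's admin UI
	redir / /admin/ 302
	# Proxy all requests (including /admin/...) to the backend as-is
	reverse_proxy localhost:8081
}
```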


ty! I did not understand the Caddyfile until that ^

now on caddy reload i get the following:

2021/01/31 00:18:22.553 INFO using adjacent Caddyfile
reload: sending configuration to instance: performing request: Post "http://localhost:2019/load": dial tcp: lookup localhost on 1.1.1.1:53: no such host

New error message from systemctl status:
I don't get it though, as both reverse proxy domains I'm using have https in the config and localhost for the target.

● caddy.service - Caddy
Loaded: loaded (/lib/systemd/system/caddy.service; enabled; vendor preset: enabled)
Active: failed (Result: exit-code) since Sat 2021-01-30 17:11:22 CST; 2h 28min ago
Docs: https://caddyserver.com/docs/
Process: 74355 ExecStart=/usr/bin/caddy run --environ --config /etc/caddy/Caddyfile (code=exited, status=1/FAILURE)
Main PID: 74355 (code=exited, status=1/FAILURE)

Jan 30 17:11:22 RP1 caddy[74355]: PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/snap/bin
Jan 30 17:11:22 RP1 caddy[74355]: HOME=/var/lib/caddy
Jan 30 17:11:22 RP1 caddy[74355]: LOGNAME=caddy
Jan 30 17:11:22 RP1 caddy[74355]: USER=caddy
Jan 30 17:11:22 RP1 caddy[74355]: INVOCATION_ID=b66d6cf6234d4612926349353a574779
Jan 30 17:11:22 RP1 caddy[74355]: JOURNAL_STREAM=9:312417
Jan 30 17:11:22 RP1 caddy[74355]: {"level":"info","ts":1612048282.8718786,"msg":"using provided configuration","config_file":"/etc/caddy/Caddyfile","config_adapter":""}
Jan 30 17:11:22 RP1 caddy[74355]: run: adapting config using caddyfile: server listening on [:443] is configured for HTTPS and cannot natively multiplex HTTP and HTTPS: /usr/share/caddy (try specifying https:>
Jan 30 17:11:22 RP1 systemd[1]: caddy.service: Main process exited, code=exited, status=1/FAILURE
Jan 30 17:11:22 RP1 systemd[1]: caddy.service: Failed with result 'exit-code'.
My current Caddyfile:

https://shinobi.kaveman.tech {
        reverse_proxy localhost:8080
}

https://pihole.kaveman.tech {
        reverse_proxy localhost:8081
}

# Set this path to your site's directory.
# root * /usr/share/caddy

# Enable the static file server.
# file_server

# Another common task is to set up a reverse proxy:
# reverse_proxy localhost:8080

# Or serve a PHP site through php-fpm:
# php_fastcgi localhost:9000

Please use ``` on the lines before and after your config to use code formatting. You used block quoting instead which does not preserve formatting of your config, making it very difficult to read.

As for the error about resolving localhost, that’s very strange because it looks like your system is not resolving it to ::1 or 127.0.0.1 as it should but instead reaching out to Cloudflare’s DNS servers (i.e. 1.1.1.1) to resolve it, which won’t work.

The reason Caddy tries to resolve localhost is because it attempts to start up its admin API endpoint at localhost:2019, which is necessary to make features like caddy reload work.

1 Like

Is there a way to manually set the DNS it will use? I'm running Pi-hole and I think Cloudflare is one of the failover DNS servers, but I haven't heard of this coming up from Pi-hole use.

PS: I have set 127.0.0.1 instead of "localhost" in my Caddyfile, but I think there is a config somewhere that has "localhost" listed; maybe I can solve this by setting 127.0.0.1 wherever that is?

PPS:
would it be resolved if I set “127.0.0.1:2019” as the listen value for the admin section of the config like below? Where can I find the caddy config file?
"admin": { "disabled": false, "listen": "127.0.0.1:2019", "enforce_origin": false, "origins": [""], "config": { "persist": false

You can change the admin listen address using global options in the Caddyfile. But that won't solve the underlying issue, which is that your system is somehow misconfigured to not resolve localhost as expected; resolving localhost is standard, expected behaviour of Linux systems by default, so something must've broken it. I haven't the slightest idea what might've caused that.
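For reference, that global options block would look something like this at the very top of the Caddyfile (a sketch, using the address from your question):

```
{
	# Global options block: must come before any site blocks
	admin 127.0.0.1:2019
}
```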

But like I said, please post your Caddyfile again, using proper code formatting so that we can read it correctly.


I have Pi-hole running on the same machine I'm trying to set up Caddy on; could that cause the issue?

Current Caddyfile:

# The Caddyfile is an easy way to configure your Caddy web server.
#
# Unless the file starts with a global options block, the first
# uncommented line is always the address of your site.
#
# To use your own domain name (with automatic HTTPS), first make
# sure your domain's A/AAAA DNS records are properly pointed to
# this machine's public IP, then replace the line below with your
# domain name.


https://shinobi.kaveman.tech {
        reverse_proxy 127.0.0.1:8080
}

https://pihole.kaveman.tech {
        reverse_proxy 127.0.0.1:8081
}

# Set this path to your site's directory.
# root * /usr/share/caddy

# Enable the static file server.
# file_server

# Another common task is to set up a reverse proxy:
# reverse_proxy localhost:8080


# Or serve a PHP site through php-fpm:
# php_fastcgi localhost:9000

# Refer to the Caddy docs for more information:
# https://caddyserver.com/docs/caddyfile

Most likely, yes. If you disable Pi-hole and the problem goes away, then it's Pi-hole meddling where it shouldn't.

I tried disabling Pi-hole and seem to get the same results. I did notice, though, that I have 1.1.1.1 set up as the backup DNS on my modem for cases where my Pi doesn't answer, so the Pi itself seems to be calling the backup DNS on my modem rather than itself for resolving domains, I guess? Not sure how to resolve this.

current systemctl status

● caddy.service - Caddy
     Loaded: loaded (/lib/systemd/system/caddy.service; enabled; vendor preset: enabled)
     Active: failed (Result: exit-code) since Sat 2021-01-30 17:11:22 CST; 20h ago
       Docs: https://caddyserver.com/docs/
    Process: 74355 ExecStart=/usr/bin/caddy run --environ --config /etc/caddy/Caddyfile (code=exited, status=1/FAILURE)
   Main PID: 74355 (code=exited, status=1/FAILURE)

Jan 30 17:11:22 RP1 caddy[74355]: PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/snap/bin
Jan 30 17:11:22 RP1 caddy[74355]: HOME=/var/lib/caddy
Jan 30 17:11:22 RP1 caddy[74355]: LOGNAME=caddy
Jan 30 17:11:22 RP1 caddy[74355]: USER=caddy
Jan 30 17:11:22 RP1 caddy[74355]: INVOCATION_ID=b66d6cf6234d4612926349353a574779
Jan 30 17:11:22 RP1 caddy[74355]: JOURNAL_STREAM=9:312417
Jan 30 17:11:22 RP1 caddy[74355]: {"level":"info","ts":1612048282.8718786,"msg":"using provided configuration","config_file":"/etc/caddy/Caddyfile","config_adapter":""}
Jan 30 17:11:22 RP1 caddy[74355]: run: adapting config using caddyfile: server listening on [:443] is configured for HTTPS and cannot natively multiplex HTTP and HTTPS: /usr/share/caddy (try specifying https:>
Jan 30 17:11:22 RP1 systemd[1]: caddy.service: Main process exited, code=exited, status=1/FAILURE
Jan 30 17:11:22 RP1 systemd[1]: caddy.service: Failed with result 'exit-code'.


Can you check the content of /etc/hosts?

2 Likes

This gave me a hint, and I've reconfigured my hosts file the way I think it's supposed to be. Here is my hosts file along with the new systemctl status output.

systemctl status:

     Loaded: loaded (/lib/systemd/system/caddy.service; enabled; vendor preset: enabled)
     Active: active (running) since Sun 2021-01-31 19:03:28 CST; 4min 48s ago
       Docs: https://caddyserver.com/docs/
   Main PID: 1740 (caddy)
      Tasks: 9 (limit: 2102)
     CGroup: /system.slice/caddy.service
             └─1740 /usr/bin/caddy run --environ --config /etc/caddy/Caddyfile

Jan 31 19:03:50 RP1 caddy[1740]: {"level":"warn","ts":1612141430.684181,"logger":"tls.issuance.zerossl","msg":"missing email address for ZeroSSL; it is strongly recommended to set one for next time"}
Jan 31 19:03:50 RP1 caddy[1740]: {"level":"warn","ts":1612141430.8393412,"logger":"tls.issuance.zerossl","msg":"missing email address for ZeroSSL; it is strongly recommended to set one for next time"}
Jan 31 19:03:51 RP1 caddy[1740]: {"level":"info","ts":1612141431.7457292,"logger":"tls.issuance.zerossl","msg":"generated EAB credentials","key_id":"wnww70BGLoCtbYf1vmUcZg"}
Jan 31 19:03:51 RP1 caddy[1740]: {"level":"info","ts":1612141431.8362978,"logger":"tls.issuance.zerossl","msg":"generated EAB credentials","key_id":"GIqarvWeMDjDt-PvhrC0ow"}
Jan 31 19:03:53 RP1 caddy[1740]: {"level":"info","ts":1612141433.3484256,"logger":"tls.issuance.acme","msg":"waiting on internal rate limiter","identifiers":["shinobi.kaveman.tech"]}
Jan 31 19:03:53 RP1 caddy[1740]: {"level":"info","ts":1612141433.3485186,"logger":"tls.issuance.acme","msg":"done waiting on internal rate limiter","identifiers":["shinobi.kaveman.tech"]}
Jan 31 19:03:53 RP1 caddy[1740]: {"level":"info","ts":1612141433.5452144,"logger":"tls.issuance.acme","msg":"waiting on internal rate limiter","identifiers":["pihole.kaveman.tech"]}
Jan 31 19:03:53 RP1 caddy[1740]: {"level":"info","ts":1612141433.5453763,"logger":"tls.issuance.acme","msg":"done waiting on internal rate limiter","identifiers":["pihole.kaveman.tech"]}
Jan 31 19:03:54 RP1 caddy[1740]: {"level":"info","ts":1612141434.201923,"logger":"tls.issuance.acme.acme_client","msg":"trying to solve challenge","identifier":"shinobi.kaveman.tech","challenge_type":"http-01>
Jan 31 19:03:54 RP1 caddy[1740]: {"level":"info","ts":1612141434.4072847,"logger":"tls.issuance.acme.acme_client","msg":"trying to solve challenge","identifier":"pihole.kaveman.tech","challenge_type":"http-01>

when I use caddy reload:

kevin@RP1:/etc/caddy$ caddy reload
2021/02/01 01:09:53.035 INFO    using adjacent Caddyfile
reload: sending configuration to instance: performing request: Post "http://localhost:2019/load": dial tcp: lookup localhost on 1.1.1.1:53: no such host

current caddyfile:

# The Caddyfile is an easy way to configure your Caddy web server.
#
# Unless the file starts with a global options block, the first
# uncommented line is always the address of your site.
#
# To use your own domain name (with automatic HTTPS), first make
# sure your domain's A/AAAA DNS records are properly pointed to
# this machine's public IP, then replace the line below with your
# domain name.


https://shinobi.kaveman.tech {
        reverse_proxy 127.0.0.1:8080
}

https://pihole.kaveman.tech {
        reverse_proxy 127.0.0.1:8081
}

# Set this path to your site's directory.
# root * /usr/share/caddy

# Enable the static file server.
# file_server

# Another common task is to set up a reverse proxy:
# reverse_proxy localhost:8080


# Or serve a PHP site through php-fpm:
# php_fastcgi localhost:9000

# Refer to the Caddy docs for more information:
# https://caddyserver.com/docs/caddyfile

Current Hosts file:

127.0.0.1 kaveman.tech kaveman

# The following lines are desirable for IPv6 capable hosts
::1 ip6-localhost ip6-loopback
fe00::0 ip6-localnet
ff00::0 ip6-mcastprefix
ff02::1 ip6-allnodes
ff02::2 ip6-allrouters
ff02::3 ip6-allhosts

PS: maybe my DNS is messed up. Should I have A records that have "shinobi" as the name, or should I have CNAME records that say "shinobi" and point to "kaveman.tech"?

Typically there should be entries in /etc/hosts pointing localhost to the loopback addresses, like so:

127.0.0.1 localhost
::1 localhost

I’m not sure whether PiHole manipulated your /etc/hosts file or not, so this is up to you to either investigate PiHole or just edit the file directly. Given your /etc/hosts file content, here’s what happened:

You issue a caddy reload, which prompts Caddy to send a POST HTTP request to http://localhost:2019/load. Because the host part of the URL isn't an IP address, your system tries to look it up in the /etc/hosts file first. It couldn't find any entry for "localhost", so it sent the query to the configured DNS nameserver, which happens to be 1.1.1.1, as set by, presumably, the DHCP server. Of course Cloudflare has no info on "localhost", so it probably returned NXDOMAIN or SERVFAIL; either way, it replied with a failure.
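You can sanity-check how the system resolves localhost with standard NSS tools; this is a generic Linux check, not specific to Caddy:

```shell
# getent consults /etc/hosts (via nsswitch) before any DNS server,
# so on a healthy system this prints a loopback address for localhost.
getent hosts localhost

# Anything NOT found in /etc/hosts falls through to the nameservers listed here:
cat /etc/resolv.conf
```

If getent prints nothing for localhost, the /etc/hosts loopback entries are missing and every lookup goes out to DNS.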

As for your PS comment, the semantics of CNAME vs. A records are a bit different, and it depends on whether you want "shinobi" to be equivalent in essence to plain "kaveman.tech" (it's been a while since I reviewed DNS theory, so I might be a little off, but not too far). Typically you'd want to go with an A record.
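In zone-file notation, the two options look roughly like this (203.0.113.10 is a documentation placeholder, not your real IP):

```
; Option 1: a direct A record for the subdomain
shinobi.kaveman.tech.    IN  A      203.0.113.10

; Option 2: an alias that follows whatever kaveman.tech resolves to
shinobi.kaveman.tech.    IN  CNAME  kaveman.tech.
```

Note that most DNS control panels only ask for the left-most label, i.e. just "shinobi", and fill in the rest of the zone name for you.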


OK, I think that has Caddy working properly. Now I'm pretty sure the errors I'm getting are due to bad DNS config.

My goal is to reach the ports/LAN IPs in my Caddyfile from those domains. At this point I've got kaveman.tech pointing at the IP, and Caddy has 80 and 443 open from there. I used to have CNAME records pointing at the same IP with the name being only the first part,

i.e. x.kaveman.tech

^ should the additional A records I make for each subdomain be the full domain or only the x part?

validate:

kevin@RP1:/etc/caddy$ caddy validate
2021/02/01 02:27:11.026 INFO    using adjacent Caddyfile
2021/02/01 02:27:11.029 INFO    http    server is listening only on the HTTPS port but has no TLS connection policies; adding one to enable TLS {"server_name": "srv0", "https_port": 443}
2021/02/01 02:27:11.030 INFO    http    enabling automatic HTTP->HTTPS redirects        {"server_name": "srv0"}
2021/02/01 02:27:11.029 INFO    tls.cache.maintenance   started background certificate maintenance      {"cache": "0x40003f84d0"}
2021/02/01 02:27:11.031 INFO    tls.cache.maintenance   stopped background certificate maintenance      {"cache": "0x40003f84d0"}
Valid configuration
Jan 31 19:50:46 kaveman caddy[1740]: {"level":"error","ts":1612144246.0955596,"logger":"tls.issuance.acme.acme_client","msg":"validating authorization","identifier":"shinobi.kaveman.tech","error":"authorizati>
Jan 31 19:50:48 kaveman caddy[1740]: {"level":"info","ts":1612144248.879432,"logger":"tls.issuance.acme.acme_client","msg":"trying to solve challenge","identifier":"shinobi.kaveman.tech","challenge_type":"htt>
Jan 31 19:50:51 kaveman caddy[1740]: {"level":"error","ts":1612144251.797428,"logger":"tls.issuance.acme.acme_client","msg":"challenge failed","identifier":"pihole.kaveman.tech","challenge_type":"tls-alpn-01">
Jan 31 19:50:51 kaveman caddy[1740]: {"level":"error","ts":1612144251.797561,"logger":"tls.issuance.acme.acme_client","msg":"validating authorization","identifier":"pihole.kaveman.tech","error":"authorization>
Jan 31 19:50:54 kaveman caddy[1740]: {"level":"info","ts":1612144254.537563,"logger":"tls.issuance.acme.acme_client","msg":"trying to solve challenge","identifier":"pihole.kaveman.tech","challenge_type":"http>
Jan 31 19:55:49 kaveman caddy[1740]: {"level":"error","ts":1612144549.8892455,"logger":"tls.obtain","msg":"will retry","error":"[shinobi.kaveman.tech] Obtain: [shinobi.kaveman.tech] solving challenges: [shino>
Jan 31 19:55:55 kaveman caddy[1740]: {"level":"error","ts":1612144555.9340043,"logger":"tls.obtain","msg":"will retry","error":"[pihole.kaveman.tech] Obtain: [pihole.kaveman.tech] solving challenges: [pihole.>
Jan 31 20:10:19 kaveman caddy[1740]: {"level":"info","ts":1612145419.0356784,"logger":"admin.api","msg":"received request","method":"POST","host":"localhost:2019","uri":"/load","remote_addr":"127.0.0.1:54898">
Jan 31 20:10:19 kaveman caddy[1740]: {"level":"info","ts":1612145419.0363846,"logger":"admin.api","msg":"config is unchanged"}
Jan 31 20:10:19 kaveman caddy[1740]: {"level":"info","ts":1612145419.0364573,"logger":"admin.api","msg":"load complete"}

Please use journalctl -u caddy --no-pager | less to look at your logs. Your current logs are truncated, see the > at the end of each line.

As you can see, Caddy isn’t able to solve the ACME challenges - this means your server likely isn’t publicly reachable on ports 80 and 443, likely due to misconfigured DNS. Make sure your A record points to your server’s IP address, and that you have ports 80 and 443 open and port forwarded to your RPI.


Thanks very much to both of you. I think it's resolved now, and at this point I will continue configuration.

The remaining issue was with my DNS setup

This topic was automatically closed after 30 days. New replies are no longer allowed.