Server not reachable after a few days

Not my case. It stays broken until I restart one of the containers. Yesterday I restarted the PHP container and it was available again, so it is not clear that this is a Caddy problem. But again, I didn’t have this issue with Nginx.

I will paste the curl -v response when it does the same thing again. Here is the JSON that gets logged when it’s down or not responding.

{
  "level": "info",
  "ts": 1639460479.0482252,
  "logger": "http.log.access.log0",
  "msg": "handled request",
  "request": {
    "remote_addr": "na",
    "proto": "HTTP/2.0",
    "method": "GET",
    "host": "na",
    "uri": "/",
    "headers": {
      "Accept-Language": ["en-US,en;q=0.9"],
      "Accept-Encoding": ["gzip, deflate, br"],
      "User-Agent": [
        "Mozilla/5.0 (iPhone; CPU iPhone OS 15_1_1 like Mac OS X) AppleWebKit/605.1.15 (KHTML, like Gecko) Version/15.1 Mobile/15E148 Safari/604.1"
      ],
      "Accept": [
        "text/html,application/xhtml+xml,application/xml;q=0.9,*/*;q=0.8"
      ]
    },
    "tls": {
      "resumed": false,
      "version": 772,
      "cipher_suite": 4865,
      "proto": "h2",
      "proto_mutual": true,
      "server_name": "na"
    }
  },
  "common_log": "Upstream IP - - [14/Dec/2021:05:41:19 +0000] \"GET / HTTP/2.0\" 0 0",
  "duration": 0.62777269,
  "size": 0,
  "status": 0,
  "resp_headers": { "Server": ["Caddy"] }
}

My configuration is simple: it listens on port 443, with a reverse proxy and on-demand TLS. This is how it looks:

{
    on_demand_tls {
        ask https://askendpoint
        interval 1m
        burst    5
    }
}
:443 {
    tls email {
        on_demand
    }
    reverse_proxy upstream_hostname {
        header_up Host hostname
        header_up Domain {host}
        header_up X-Forwarded-Port {server_port}
        health_timeout 5s
    }
    log {
        output file /var/log/caddy/access.log {
                roll_size 1gb
                roll_keep 4
                roll_keep_for 720h

                format filter {
                        wrap console
                        fields {
                                request>headers>Authorization delete
                                request>resp_headers delete
                                request>headers>Cookie delete
                        }
                }
        }
    }
}
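
For context, the ask endpoint used by on_demand_tls is just an HTTP endpoint that Caddy queries with the requested hostname in a domain query parameter; a 2xx response allows the certificate to be issued, anything else denies it. A minimal sketch of such an endpoint in Go (the allow list here is made up, standing in for however customer domains are actually tracked):

package main

import (
	"log"
	"net/http"
)

// allowed is a made-up allow list; in practice this would be a database
// lookup or an API call against your customer records.
var allowed = map[string]bool{
	"customer1.example.com": true,
	"customer2.example.com": true,
}

func askHandler(w http.ResponseWriter, r *http.Request) {
	domain := r.URL.Query().Get("domain")
	if allowed[domain] {
		w.WriteHeader(http.StatusOK) // 2xx tells Caddy to go ahead and issue a cert
		return
	}
	http.Error(w, "unknown domain", http.StatusForbidden) // non-2xx denies issuance
}

func main() {
	http.HandleFunc("/", askHandler)
	log.Fatal(http.ListenAndServe(":8081", nil))
}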

I have confirmed that when the Caddy server does not respond, the upstream server is up and responding fine. There is no problem with the upstream server. This is the curl output @matt

It happened again. I was able to run curl -v. Downdetector reported that the connection timed out after 25 seconds. The curl -v output is listed below:

curl -X GET customerdomain -v
Note: Unnecessary use of -X or --request, GET is already inferred.
*   Trying caddyserverip:443...
* TCP_NODELAY set
* Connected to customerdomain (caddyserverip) port 443 (#0)
* ALPN, offering h2
* ALPN, offering http/1.1
* successfully set certificate verify locations:
*   CAfile: /etc/ssl/certs/ca-certificates.crt
  CApath: /etc/ssl/certs
* TLSv1.3 (OUT), TLS handshake, Client hello (1):
* TLSv1.3 (IN), TLS handshake, Server hello (2):
* TLSv1.3 (IN), TLS handshake, Encrypted Extensions (8):
* TLSv1.3 (IN), TLS handshake, Certificate (11):
* TLSv1.3 (IN), TLS handshake, CERT verify (15):
* TLSv1.3 (IN), TLS handshake, Finished (20):
* TLSv1.3 (OUT), TLS change cipher, Change cipher spec (1):
* TLSv1.3 (OUT), TLS handshake, Finished (20):
* SSL connection using TLSv1.3 / TLS_AES_128_GCM_SHA256
* ALPN, server accepted to use h2
* Server certificate:
*  subject: CN=customerdomain
*  start date: Jan 13 08:58:11 2022 GMT
*  expire date: Apr 13 08:58:10 2022 GMT
*  subjectAltName: host "customerdomain" matched cert's "customerdomain"
*  issuer: C=US; O=Let's Encrypt; CN=R3
*  SSL certificate verify ok.
* Using HTTP2, server supports multi-use
* Connection state changed (HTTP/2 confirmed)
* Copying HTTP/2 data in stream buffer to connection buffer after upgrade: len=0
* Using Stream ID: 1 (easy handle 0x55a8c0cabe30)
> GET / HTTP/2
> Host: customerdomain
> user-agent: curl/7.68.0
> accept: */*
> 
* TLSv1.3 (IN), TLS handshake, Newsession Ticket (4):
* Connection state changed (MAX_CONCURRENT_STREAMS == 250)!



* Empty reply from server
* Connection #0 to host customerdomain left intact
curl: (52) Empty reply from server

Hmm, okay, from those last logs it looks like it does connect, but it seems like you didn’t even receive any response headers? For example, I don’t see Server: Caddy in there.

It looks like you didn’t have the debug global option turned on here, though. I think we’ll need to see those logs to have an idea of what Caddy is doing. Caddy prints its additional logs to stdout/stderr (I’m not talking about access logs here, I’m talking about runtime logs; the log directive configures access logs).

If Caddy does get to the point that it writes out the access logs, I have to think that it did attempt to write a response, because that happens at the end of the request.
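
As a rough mental model, access logging is middleware that wraps the rest of the handler chain, so the entry is only written after the inner handlers have returned. A minimal sketch of that pattern in Go (illustrative only, not Caddy’s actual code):

package main

import (
	"log"
	"net/http"
	"time"
)

// statusRecorder remembers the status code the inner handler wrote (0 if none).
type statusRecorder struct {
	http.ResponseWriter
	status int
}

func (r *statusRecorder) WriteHeader(code int) {
	r.status = code
	r.ResponseWriter.WriteHeader(code)
}

// withAccessLog wraps a handler and only logs after the handler returns,
// i.e. after the response has been attempted (or has failed).
func withAccessLog(next http.Handler) http.Handler {
	return http.HandlerFunc(func(w http.ResponseWriter, req *http.Request) {
		rec := &statusRecorder{ResponseWriter: w}
		start := time.Now()
		next.ServeHTTP(rec, req) // the actual response handling happens in here
		log.Printf("handled request method=%s uri=%s status=%d duration=%s",
			req.Method, req.URL.Path, rec.status, time.Since(start))
	})
}

func main() {
	hello := http.HandlerFunc(func(w http.ResponseWriter, r *http.Request) {
		w.Write([]byte("ok")) // implicitly writes status 200
	})
	log.Fatal(http.ListenAndServe(":8080", withAccessLog(hello)))
}

In this sketch, a logged status of 0 just means the inner handler never managed to write a response header, which is consistent with the entries pasted above.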

Yes, it’s not turned on; it’s a production server. I will try to add it to provide additional info. I don’t know exactly when the problem happens, so I’ll turn it on and see how many logs Caddy generates.

@francislavoie I added debug above the on_demand_tls, like this:

{
    debug
    on_demand_tls {
        ask https://askendpoint
        interval 1m
        burst    5
    }
}

But I don’t see any log in access.log with level debug. Is that fine?

Yes, access logs don’t currently write debug-level logs; you’ll see them in your regular / process logs (stderr by default).
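
Since Caddy runs in Docker here, those runtime logs are simply whatever the container writes to stdout/stderr; something like docker logs -f followed by your Caddy container name (whatever your Compose setup calls it) is usually the quickest way to follow them.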

What does this mean precisely, though? Please post the full command and its full output (use curl -v).

Yes, I will when the server goes down again. It usually takes a few days. What I meant was that I have built an API on another DO droplet, which is also experiencing the same problem. A WordPress plugin that I’ve made fetches data from that droplet using curl in PHP. The plugin doesn’t get any response from the API server when this problem occurs. But that is probably not the same as using curl -v. I’ll get back with details when my server is down again.

@matt @francislavoie

It happened again just now; Caddy didn’t respond to any request. I got the debug logs. Here is the pattern for one URL.

{
    "level": "debug",
    "ts": 1642596149.0729373,
    "logger": "http.handlers.reverse_proxy",
    "msg": "upstream roundtrip",
    "upstream": "upstreamserver:443",
    "request": {
      "remote_addr": "REMOTEADDRESS",
      "proto": "HTTP/2.0",
      "method": "GET",
      "host": "upstreamserver",
      "uri": "/",
      "headers": {
        "Sec-Ch-Ua-Mobile": ["?0"],
        "Sec-Fetch-User": ["?1"],
        "Sec-Ch-Ua-Platform": ["\"Linux\""],
        "User-Agent": [
          "Mozilla/5.0 (X11; Linux x86_64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/96.0.4664.55 Safari/537.36 Edg/96.0.1054.43"
        ],
        "Sec-Fetch-Dest": ["document"],
        "Accept-Encoding": ["gzip, deflate, br"],
        "X-Forwarded-For": ["XFORWARDEDFOR"],
        "Accept-Language": ["en-GB,en;q=0.9,en-US;q=0.8"],
        "Upgrade-Insecure-Requests": ["1"],
        "Sec-Fetch-Mode": ["navigate"],
        "Sec-Ch-Ua": [
          "\" Not A;Brand\";v=\"99\", \"Chromium\";v=\"96\", \"Microsoft Edge\";v=\"96\""
        ],        
        "Domain": ["userdomain"],
        "X-Forwarded-Port": [""],
        "X-Forwarded-Proto": ["https"],
        "Accept": [
          "text/html,application/xhtml+xml,application/xml;q=0.9,image/webp,image/apng,*/*;q=0.8,application/signed-exchange;v=b3;q=0.9"
        ],
        "Sec-Fetch-Site": ["none"]
      },
      "tls": {
        "resumed": true,
        "version": 772,
        "cipher_suite": 4865,
        "proto": "h2",
        "proto_mutual": true,
        "server_name": "userdomain"
      }
    },
    "duration": 45.448787874,
    "error": "context canceled"
  },
  {
    "level": "info",
    "ts": 1642596149.0730722,
    "logger": "http.log.access.log0",
    "msg": "handled request",
    "request": {
      "remote_addr": "REMOTEADDRESS",
      "proto": "HTTP/2.0",
      "method": "GET",
      "host": "userdomain",
      "uri": "/",
      "headers": {
        "Accept-Encoding": ["gzip, deflate, br"],
        "Sec-Ch-Ua-Platform": ["\"Linux\""],
        "Upgrade-Insecure-Requests": ["1"],
        "User-Agent": [
          "Mozilla/5.0 (X11; Linux x86_64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/96.0.4664.55 Safari/537.36 Edg/96.0.1054.43"
        ],
        "Accept": [
          "text/html,application/xhtml+xml,application/xml;q=0.9,image/webp,image/apng,*/*;q=0.8,application/signed-exchange;v=b3;q=0.9"
        ],
        "Sec-Fetch-Mode": ["navigate"],
        "Sec-Fetch-Dest": ["document"],
        "Sec-Ch-Ua": [
          "\" Not A;Brand\";v=\"99\", \"Chromium\";v=\"96\", \"Microsoft Edge\";v=\"96\""
        ],
        "Sec-Ch-Ua-Mobile": ["?0"],
        "Sec-Fetch-Site": ["none"],
        "Sec-Fetch-User": ["?1"],
        "Accept-Language": ["en-GB,en;q=0.9,en-US;q=0.8"],        
      },
      "tls": {
        "resumed": true,
        "version": 772,
        "cipher_suite": 4865,
        "proto": "h2",
        "proto_mutual": true,
        "server_name": "userdomain"
      }
    },
    "common_log": "XFORWARDEDFOR - - [19/Jan/2022:12:42:29 +0000] \"GET / HTTP/2.0\" 0 0",
    "duration": 45.449036952,
    "size": 0,
    "status": 0,
    "resp_headers": { "Server": ["Caddy"] }
  },

In the debug log it says “context canceled”. What does it mean? Can you please help? I restarted the server immediately and it started processing requests again.

The “context canceled” error, when it happens in reverse_proxy, is often because the client (browser) canceled the request before it could finish.

What “context” means here: in Go, there’s an object called the “Context” which is passed from high-level code down into lower-level code. It acts like a handle on the current operation (in this case the HTTP request), so that if at any point the operation needs to be canceled, it can be done gracefully (there are some other benefits, but they’re off-topic).
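
As a rough sketch of how that plays out in a proxy-style handler (illustrative Go, not Caddy’s implementation; the upstream URL is a placeholder):

package main

import (
	"context"
	"errors"
	"io"
	"log"
	"net/http"
)

// proxyish forwards the request to a placeholder upstream using the client's
// request context, so the upstream round trip is canceled if the client goes away.
func proxyish(w http.ResponseWriter, r *http.Request) {
	req, err := http.NewRequestWithContext(r.Context(), http.MethodGet,
		"http://upstream.internal:8080/", nil) // placeholder upstream URL
	if err != nil {
		http.Error(w, err.Error(), http.StatusInternalServerError)
		return
	}

	resp, err := http.DefaultClient.Do(req)
	if err != nil {
		// If the browser or curl gave up while we were still waiting on the
		// upstream, the error chain typically contains context.Canceled.
		if errors.Is(err, context.Canceled) {
			log.Printf("upstream roundtrip aborted by client: %v", err)
			return // no one left to respond to
		}
		http.Error(w, "upstream error", http.StatusBadGateway)
		return
	}
	defer resp.Body.Close()

	w.WriteHeader(resp.StatusCode)
	io.Copy(w, resp.Body) // stream the upstream response back to the client
}

func main() {
	http.HandleFunc("/", proxyish)
	log.Fatal(http.ListenAndServe(":8080", nil))
}

The key point is that the upstream round trip inherits the client’s request context, so when the client gives up, the in-flight upstream request is what surfaces the “context canceled” error.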

What we do see, though, is that the last request took very long: 45 seconds. This makes me think that the upstream just hung or died while Caddy was waiting for a response back. But what actually canceled it was the browser canceling the request, i.e. signaling “never mind, you took too long, I’m done waiting”.

That’s all I can read from these logs.

Yep, seconding this, it looks like the client is simply canceling the request.

If you want us to look into it further, we need a curl -v command like we’ve been asking for, with its output, and ideally a way to reproduce it as minimally as possible.

But so far, with the information given, it looks like everything is working fine.

(To be clear, Matt means “everything is working fine” in Caddy; the problem is likely elsewhere.)

I will have another look at this. I already pasted the curl -v output from when it happens. I will monitor the upstream to look for errors. One thing which is not clear: once I restart the server, all requests start working fine; there is no hang-up. The calls are not API requests, it’s a direct page load, and I think the timeout will be longer for URL loading in browsers.

Thanks for mentioning that; it looks like it was edited into a previous post instead of being appended as a new post, so I didn’t see the edit.

“Empty reply from server” is interesting; maybe the proxied backend is doing something funky with the request stream (when proxying, there are more ways to cancel the context than just the client aborting the request).

The proxied server is a Vercel Next.js project.

Hello again. So now, for the first time since I posted this thread, my website is down and unreachable. I will keep it down for now so I’m able to debug. Usually I can do a docker stop caddy followed by a docker start caddy to get the site up again.

I’m able to log into my server through SSH, and from there I’ve tried a curl -v.

curl -v https://bonsy.se

*   Trying 64.225.66.142:443...
* TCP_NODELAY set
* Connected to bonsy.se (64.225.66.142) port 443 (#0)
* ALPN, offering h2
* ALPN, offering http/1.1
* successfully set certificate verify locations:
*   CAfile: /etc/ssl/certs/ca-certificates.crt
  CApath: /etc/ssl/certs
* TLSv1.3 (OUT), TLS handshake, Client hello (1):
* TLSv1.3 (IN), TLS handshake, Server hello (2):
* TLSv1.3 (IN), TLS handshake, Encrypted Extensions (8):
* TLSv1.3 (IN), TLS handshake, Certificate (11):
* TLSv1.3 (IN), TLS handshake, CERT verify (15):
* TLSv1.3 (IN), TLS handshake, Finished (20):
* TLSv1.3 (OUT), TLS change cipher, Change cipher spec (1):
* TLSv1.3 (OUT), TLS handshake, Finished (20):
* SSL connection using TLSv1.3 / TLS_AES_128_GCM_SHA256
* ALPN, server accepted to use h2
* Server certificate:
*  subject: CN=bonsy.se
*  start date: Feb  4 10:55:59 2022 GMT
*  expire date: May  5 10:55:58 2022 GMT
*  subjectAltName: host "bonsy.se" matched cert's "bonsy.se"
*  issuer: C=US; O=Let's Encrypt; CN=R3
*  SSL certificate verify ok.
* Using HTTP2, server supports multi-use
* Connection state changed (HTTP/2 confirmed)
* Copying HTTP/2 data in stream buffer to connection buffer after upgrade: len=0
* Using Stream ID: 1 (easy handle 0x558a6690a880)
> GET / HTTP/2
> Host: bonsy.se
> user-agent: curl/7.68.0
> accept: */*
> 
* TLSv1.3 (IN), TLS handshake, Newsession Ticket (4):
* Connection state changed (MAX_CONCURRENT_STREAMS == 250)!
< HTTP/2 502 
< server: Caddy
< content-length: 0
< date: Tue, 08 Feb 2022 10:10:57 GMT
< 
* Connection #0 to host bonsy.se left intact

Some debug logs from the same time in Docker:

{
   "level":"debug",
   "ts":1644314928.1362245,
   "logger":"tls.handshake",
   "msg":"choosing certificate",
   "identifier":"bonsy.se",
   "num_choices":1
}
{
   "level":"debug",
   "ts":1644314928.136262,
   "logger":"tls.handshake",
   "msg":"default certificate selection results",
   "identifier":"bonsy.se",
   "subjects":[
      "bonsy.se"
   ],
   "managed":true,
   "issuer_key":"acme-v02.api.letsencrypt.org-directory",
   "hash":"bee36bfe4698584977f2c81c8c4b4212a0a2a9ad767fdac937c044a0eae82928"
}
{
   "level":"debug",
   "ts":1644314928.1362693,
   "logger":"tls.handshake",
   "msg":"matched certificate in cache",
   "subjects":[
      "bonsy.se"
   ],
   "managed":true,
   "expiration":1651748158,
   "hash":"bee36bfe4698584977f2c81c8c4b4212a0a2a9ad767fdac937c044a0eae82928"
}
{
   "level":"debug",
   "ts":1644314928.1393726,
   "logger":"http.handlers.rewrite",
   "msg":"rewrote request",
   "request":{
      "remote_addr":"64.225.66.142:50098",
      "proto":"HTTP/2.0",
      "method":"GET",
      "host":"bonsy.se",
      "uri":"/",
      "headers":{
         "User-Agent":[
            "curl/7.68.0"
         ],
         "Accept":[
            "*/*"
         ]
      },
      "tls":{
         "resumed":false,
         "version":772,
         "cipher_suite":4865,
         "proto":"h2",
         "proto_mutual":true,
         "server_name":"bonsy.se"
      }
   },
   "method":"GET",
   "uri":"/index.php"
}
{
   "level":"debug",
   "ts":1644314928.1394913,
   "logger":"http.reverse_proxy.transport.fastcgi",
   "msg":"roundtrip",
   "request":{
      "remote_addr":"64.225.66.142:50098",
      "proto":"HTTP/2.0",
      "method":"GET",
      "host":"bonsy.se",
      "uri":"/index.php",
      "headers":{
         "X-Forwarded-For":[
            "64.225.66.142"
         ],
         "X-Forwarded-Proto":[
            "https"
         ],
         "User-Agent":[
            "curl/7.68.0"
         ],
         "Accept":[
            "*/*"
         ]
      },
      "tls":{
         "resumed":false,
         "version":772,
         "cipher_suite":4865,
         "proto":"h2",
         "proto_mutual":true,
         "server_name":"bonsy.se"
      }
   },
   "dial":"php:9000",
   "env":{
      "AUTH_TYPE":"",
      "CONTENT_LENGTH":"",
      "CONTENT_TYPE":"",
      "DOCUMENT_ROOT":"/var/www/public",
      "DOCUMENT_URI":"/index.php",
      "GATEWAY_INTERFACE":"CGI/1.1",
      "HTTPS":"on",
      "HTTP_ACCEPT":"*/*",
      "HTTP_HOST":"bonsy.se",
      "HTTP_USER_AGENT":"curl/7.68.0",
      "HTTP_X_FORWARDED_FOR":"64.225.66.142",
      "HTTP_X_FORWARDED_PROTO":"https",
      "PATH_INFO":"",
      "QUERY_STRING":"",
      "REMOTE_ADDR":"64.225.66.142",
      "REMOTE_HOST":"64.225.66.142",
      "REMOTE_IDENT":"",
      "REMOTE_PORT":"50098",
      "REMOTE_USER":"",
      "REQUEST_METHOD":"GET",
      "REQUEST_SCHEME":"https",
      "REQUEST_URI":"/",
      "SCRIPT_FILENAME":"/var/www/public/index.php",
      "SCRIPT_NAME":"/index.php",
      "SERVER_NAME":"bonsy.se",
      "SERVER_PROTOCOL":"HTTP/2.0",
      "SERVER_SOFTWARE":"Caddy/v2.4.6",
      "SSL_CIPHER":"TLS_AES_128_GCM_SHA256",
      "SSL_PROTOCOL":"TLSv1.3"
   }
}
{
   "level":"debug",
   "ts":1644315057.5020938,
   "logger":"http.handlers.reverse_proxy",
   "msg":"upstream roundtrip",
   "upstream":"php:9000",
   "duration":129.362634004,
   "request":{
      "remote_addr":"64.225.66.142:50098",
      "proto":"HTTP/2.0",
      "method":"GET",
      "host":"bonsy.se",
      "uri":"/index.php",
      "headers":{
         "User-Agent":[
            "curl/7.68.0"
         ],
         "Accept":[
            "*/*"
         ],
         "X-Forwarded-For":[
            "64.225.66.142"
         ],
         "X-Forwarded-Proto":[
            "https"
         ]
      },
      "tls":{
         "resumed":false,
         "version":772,
         "cipher_suite":4865,
         "proto":"h2",
         "proto_mutual":true,
         "server_name":"bonsy.se"
      }
   },
   "error":"dialing backend: dial tcp 172.19.0.3:9000: connect: connection timed out"
}
{
   "level":"error",
   "ts":1644315057.5021753,
   "logger":"http.log.error",
   "msg":"dialing backend: dial tcp 172.19.0.3:9000: connect: connection timed out",
   "request":{
      "remote_addr":"64.225.66.142:50098",
      "proto":"HTTP/2.0",
      "method":"GET",
      "host":"bonsy.se",
      "uri":"/",
      "headers":{
         "User-Agent":[
            "curl/7.68.0"
         ],
         "Accept":[
            "*/*"
         ]
      },
      "tls":{
         "resumed":false,
         "version":772,
         "cipher_suite":4865,
         "proto":"h2",
         "proto_mutual":true,
         "server_name":"bonsy.se"
      }
   },
   "duration":129.362957129,
   "status":502,
   "err_id":"yxibd6cuj",
   "err_trace":"reverseproxy.statusError (reverseproxy.go:886)"
}

I would really appreciate any guidance/help to continue debugging so I’m able to fix this issue.

I seem to get the same result if I curl -v the IP address as well.

The PHP logs only show this (from yesterday, when things stopped working):

php_1      | 172.19.0.4 -  07/Feb/2022:06:50:08 +0000 "POST /index.php" 200
php_1      | 172.19.0.4 -  07/Feb/2022:06:50:08 +0000 "POST /index.php" 200

I can verify that my PHP server is running

.....$ docker-compose exec php php -v
PHP 8.1.0 (cli) (built: Nov 30 2021 06:20:01) (NTS)
Copyright (c) The PHP Group
Zend Engine v4.1.0, Copyright (c) Zend Technologies

IP address inspection:

# docker inspect -f '{{range.NetworkSettings.Networks}}{{.IPAddress}}{{end}}' bonsy_php
172.19.0.3
# docker inspect -f '{{range.NetworkSettings.Networks}}{{.IPAddress}}{{end}}' bonsy_caddy
172.19.0.4

That doesn’t prove that the container is working, though. You ran the PHP CLI there, while the actual app is PHP-FPM, which is another process. You should try to monitor PHP-FPM’s status. See Real-time PHP-FPM Status · GitHub for example.

I don’t see any evidence of a problem with Caddy itself here; it looks like a problem with PHP-FPM no longer accepting connections. You might need to tweak PHP-FPM’s settings to allow more connections, maybe. I dunno.
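
(If it helps as a starting point, and assuming a fairly stock PHP-FPM setup, the usual knobs live in the pool config, e.g. pm, pm.max_children, and request_terminate_timeout; if all children are busy or stuck, FPM can stop accepting new connections, which from Caddy’s side could look a lot like the dial timeout in the logs above.)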
