Context Deadline Exceeded on Admin API Call

1. My Caddy version (caddy version):

2.0rc3

2. How I run Caddy:

Service File

a. System environment:

Systemd, Ubuntu 18.04

b. Command:

/usr/bin/caddy run --environ --resume

c. Service/unit/compose file:

# caddy-api.service
#
# For using Caddy with its API.
#
# This unit is "durable" in that it will automatically resume
# the last active configuration if the service is restarted.

[Unit]
Description=Caddy
Documentation=https://caddyserver.com/docs/
After=network.target

[Service]
User=ubuntu
Group=www-data
ExecStart=/usr/bin/caddy run --environ --resume
TimeoutStopSec=5s
LimitNOFILE=1048576
LimitNPROC=512
PrivateTmp=true
ProtectSystem=full
AmbientCapabilities=CAP_NET_BIND_SERVICE

[Install]
WantedBy=multi-user.target

d. My complete Caddyfile or JSON config:

{
    "apps": {
        "http": {
            "servers": {
                "myserver": {
                    "listen": [":443"],
                    "routes": [
                        {
                            "match": [
                                {
                                    "host": ["domains.mylodocs.com"]
                                }
                            ],
                            "handle": [
                                {
                                    "handler": "static_response",
                                    "body": "Hello World!"
                                }
                            ]
                        },
                        {
                            "match": [
                                {
                                    "host": ["admin.mylodocs.com"]
                                }
                            ],
                            "handle": [
                                {
                                    "handler": "static_response",
                                    "body": "Hello World!"
                                }
                            ]
                        }
                    ]
                },
                "adminapi": {
                    "listen": ["0.0.0.0:2008"],
                    "routes": [
                        {
                            "match": [
                                {
                                    "host": ["admin.mylodocs.com"]
                                }
                            ],
                            "handle": [
                                {
                                    "handler": "reverse_proxy",
                                    "upstreams": [
                                        {
                                            "dial": "localhost:2019"
                                        }
                                    ],
                                    "headers": {
                                        "request": {
                                            "set": {
                                                "Host": [""]
                                            },
                                            "delete": ["Origin"]
                                        }
                                    }
                                }
                            ]
                        }
                    ]
                }
            }
        },
        "tls": {
            "certificates": {
                "load_files": [
                    {
                        "certificate": "/etc/ssl/caddy/mylodocs-dev.com/cert.pem",
                        "key": "/etc/ssl/caddy/mylodocs-dev.com/privkey.pem"
                    }
                ]
            }
        }
    }
}

3. The problem I’m having:

I'm using a SaaS app written in Python to update the admin API.

import json
import requests
from contextlib import closing

jsonBody = json.loads('{"match":[{"host":["' + customDomain + '"]}],"handle":[{"handler":"rewrite","uri":"' + path + '{http.request.uri}"},{"handler":"encode","encodings":{"gzip":{"level":0}}},{"handler":"reverse_proxy","upstreams":[{"dial":"mylodocs.s3-website-us-west-2.amazonaws.com:80"}],"headers":{"request":{"set":{"Host":["mylodocs.s3-website-us-west-2.amazonaws.com"]}}}}]}')
print(json.dumps(jsonBody))
headers = {'content-type': 'application/json'}  # requests also sets this automatically when json= is used
with closing(requests.post("https://admin.mylodocs.com:2008/config/apps/http/servers/myserver/routes", json=jsonBody, headers=headers)) as r:
    print(r.content)

I don't get a response, but when I look at the logs for my Caddy service, I can see it did receive the POST; then it gives me this error.

{"level":"error","ts":1587571718.9594,"logger":"admin","msg":"stopping current admin endpoint","error":"shutting down admin server: context deadline exceeded"}

I can't make any more calls to the admin API after that; they just hang. I did notice that the admin API restarts itself after I kill my SaaS app.

4. Error messages and/or full log output:

{"level":"error","ts":1587571718.9594,"logger":"admin","msg":"stopping current admin endpoint","error":"shutting down admin server: context deadline exceeded"}

EDIT

When trying via curl, I get an unexpected EOF:

curl -X POST -H "Content-Type: application/json" -d '{"match": [{"host": ["www.mylopod.com"]}], "handle": [{"handler": "rewrite", "uri": "/mdennett/masonite{http.request.uri}"}, {"handler": "encode", "encodings": {"gzip": {"level": 0}}}, {"handler": "reverse_proxy", "upstreams": [{"dial": "mylodocs.s3-website-us-west-2.amazonaws.com:80"}], "headers": {"request": {"set": {"Host": ["mylodocs.s3-website-us-west-2.amazonaws.com"]}}}}]}' "https://admin.mylodocs.com:2008/config/apps/http/servers/myserver/routes/"
curl: (56) Unexpected EOF

Pretty sure yours is the same issue as: v2: add one server route config will effect all server route · Issue #2938 · caddyserver/caddy · GitHub

Your HTTP server reverse-proxies to the admin endpoint. The admin endpoint is commanded to load a new configuration, so it shuts down the HTTP reverse proxy (gracefully). But the reverse proxy is waiting for its upstream request (the very request that triggered the reload) to complete, and that request is waiting for the proxy to shut down first. So it's a circular dependency: the admin endpoint and the reverse proxy are each waiting for the other to stop.
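
A quick way to confirm: a request made on the Caddy host itself, straight to the default admin address, shouldn't hang, because it never passes through the HTTP server that's being reconfigured. A minimal sketch (the route body is just an example taken from your config above):

# Sketch: post a route directly to Caddy's default admin listener on
# localhost:2019, bypassing the reverse proxy on :2008 entirely.
import requests

route = {
    "match": [{"host": ["domains.mylodocs.com"]}],
    "handle": [{"handler": "static_response", "body": "Hello World!"}],
}

# POSTing to a .../routes path appends to the existing routes array.
r = requests.post(
    "http://localhost:2019/config/apps/http/servers/myserver/routes",
    json=route,
)
print(r.status_code, r.text)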


Ahh, OK, that makes sense. So is there a better way to do what I'm trying to do?

Or do I need to stand up two Caddy servers to achieve this?

For clarity, what I'm trying to do is access the admin API from another server.

At the moment I suppose you'll need another server. I'm not really sure of a good way to break the cycle. I suppose we could terminate the connection forcefully, but then your client would get an invalid/empty response…

PS. You can bind the admin API to a non-localhost interface (so you don’t need two servers running on the same machine) – just make sure you know what you’re doing with your network configuration to keep it secure.

    "admin" : {
        "listen":"0.0.0.0:2008",
        "enforce_origin": true,
        "origins" : ["someoriginorkey"]
    }
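
With enforce_origin on, your client just sends a matching Origin header (a sketch, assuming the header is checked against the "origins" list; note the admin listener speaks plain HTTP, it doesn't terminate TLS):

# Sketch only: the admin address below is an assumption; use whatever
# VPC-internal address your Caddy machine answers on for port 2008.
import requests

CADDY_ADMIN = "http://10.0.0.10:2008"  # hypothetical VPC-internal address

route = {
    "match": [{"host": ["www.mylopod.com"]}],
    "handle": [{"handler": "static_response", "body": "Hello World!"}],
}

r = requests.post(
    f"{CADDY_ADMIN}/config/apps/http/servers/myserver/routes",
    json=route,
    headers={"Origin": "someoriginorkey"},  # must match an entry in "origins"
)
print(r.status_code, r.text)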

This basically achieves the same thing, I think. This machine is in a VPC, and port 2008 is not open outside of that VPC. This should be as secure as what I was doing before?


Yeah as long as you trust all network interfaces that machine is exposed to, that’s pretty decent security tbh. :+1:

With time we will probably improve on the situation, but for now that’ll have to do.


Yeah, there are a total of three machines in that VPC. The Caddy server is the only one accessible from outside, through 443. So I think it's pretty good.
