Unrecognized directive: header_up

1. Caddy version (caddy version):

2.2.0-alpine

2. How I run Caddy:

a. System environment:

Docker; with systemd in production

b. Command:

docker-compose up

c. Service/unit/compose file:

None in testing; just docker-compose up

d. My complete Caddyfile or JSON config:

{
    #auto_https off
    #auto_https disable_redirects
    #email   email@email.com # For letsencrypt certificates
}

# REVERSE PROXY
# Values from dockerconfig.env
# Change PROXY_HTTPS to true if you have a domain name for auto-HTTPS to work.

#domain.com, www.domain.com {
http://{$PROXY_SERVER_NAME} {
    reverse_proxy {$INTERNAL_SERVER_NAME} {
        header_up Host {$MAIN_UPSTREAM_HOST}
    }
}
#check for any docker image updates
http://localhost:3000 {
    reverse_proxy wud:3000
}

3. The problem I’m having:

In development, I have no problems. When I move to AWS EC2, I get this error:

caddyserver    | {"level":"info","ts":1604075873.6606507,"msg":"using provided configuration","config_file":"/etc/caddy/Caddyfile","config_adapter":"caddyfile"}
caddyserver    | run: adapting config using caddyfile: /etc/caddy/Caddyfile:14: unrecognized directive: header_up
caddyserver exited with code 1

It is strange: the config loads fine if I remove the header_up directive.

4. Error messages and/or full log output:

caddyserver    | {"level":"info","ts":1604075873.6606507,"msg":"using provided configuration","config_file":"/etc/caddy/Caddyfile","config_adapter":"caddyfile"}
caddyserver    | run: adapting config using caddyfile: /etc/caddy/Caddyfile:14: unrecognized directive: header_up
caddyserver exited with code 1

5. What I already tried:

Removing the directive works.

6. Links to relevant resources:

none

Your config works for me. I suspect you’re not actually using the config file you think you are? Or you have changed it before posting it in the forum (which is against our rules, FYI, please be careful of that so we can help you).

It is the same config file. I know it works because it works locally. I will delete the volumes and try again.
Could that error be caused by the header values and not necessarily the directive?

I doubt it; “unrecognized directive” usually implies a structural (and/or syntax) error with the Caddyfile.

The env vars might have expanded to something that broke parsing. Try replacing the env vars with the real values and see what happens.

Are your env vars actually set?
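One quick way to rule out an unset or empty variable before Caddy ever sees it is a fail-fast check in whatever script starts the stack. A minimal sketch (the variable name mirrors the one in the Caddyfile; the value is only an example):

```shell
# Example value; in practice this would come from the env file.
INTERNAL_SERVER_NAME=awszipperv2:8000

# ${VAR:?message} aborts with the message if VAR is unset or empty,
# so a misconfigured env file fails loudly instead of expanding to nothing.
: "${INTERNAL_SERVER_NAME:?INTERNAL_SERVER_NAME is not set}"
echo "$INTERNAL_SERVER_NAME"
```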

I have tried hard-coding everything. It doesn’t seem to work. I am using Ubuntu 20.04 on EC2. Here is the complete log from start to failure.
I don’t necessarily need header_up for this project, but most of my projects need it to work. I also tried different Caddy versions with the same results.

ubuntu@ip-172-31-71-121:~/go$ docker-compose up
Pulling wud (fmartinou/whats-up-docker:2.3.1)...
2.3.1: Pulling from fmartinou/whats-up-docker
cbdbe7a5bc2a: Pull complete
d22524cae38d: Pull complete
a31d6a083b03: Pull complete
38017abc27d5: Pull complete
af7f7e07b114: Pull complete
a8e4dd34f0d8: Pull complete
f1c601f9c59a: Pull complete
27afcb483a17: Pull complete
5a18b20028ac: Pull complete
Digest: sha256:e3da52de5f45e95712a16e72b1203fc24ab93d539f40e9951c7ce48082203d3c
Status: Downloaded newer image for fmartinou/whats-up-docker:2.3.1
Pulling caddyserver (caddy:2.1.1-alpine)...
2.1.1-alpine: Pulling from library/caddy
188c0c94c7c5: Pull complete
da2e2d825895: Pull complete
792a8e397063: Pull complete
e121fa2dd673: Pull complete
02cabd94ff7a: Pull complete
Digest: sha256:acf4730fd996e56a713b99d6f1892b87bf9f95af4ebc5fed53e942986ffbc2cf
Status: Downloaded newer image for caddy:2.1.1-alpine
Pulling mongo (mongo:)...
latest: Pulling from library/mongo
171857c49d0f: Pull complete
419640447d26: Pull complete
61e52f862619: Pull complete
892787ca4521: Pull complete
06e2d54757a5: Pull complete
e2f7d90822f3: Pull complete
f518d3776320: Pull complete
feb8e9d469d8: Pull complete
69705b632494: Pull complete
c7daea26376d: Pull complete
13d1f9e1fc77: Pull complete
f87e65fe7ffd: Pull complete
Digest: sha256:efc408845bc917d0b7fd97a8590e9c8d3c314f58cee651bd3030c9cf2ce9032d
Status: Downloaded newer image for mongo:latest
Pulling awszipperv2 (elgonlabs/awszipperv2:1.1.0)...
1.1.0: Pulling from elgonlabs/awszipperv2
d72e567cc804: Pull complete
0f3630e5ff08: Pull complete
b6a83d81d1f4: Pull complete
b5a2fa28fbe0: Pull complete
1249a702857a: Pull complete
a2a506e47a80: Pull complete
40a437284db2: Pull complete
Digest: sha256:571358228088917ee16045105d42deeccc8723e8de55cd2b3c228cb630e43d12
Status: Downloaded newer image for elgonlabs/awszipperv2:1.1.0
Creating caddyserver      ... done
Creating wud         ... done
Creating go_mongo_1  ... done
Creating go_awszipperv2_1 ... done
Attaching to wud, go_mongo_1, caddyserver, go_awszipperv2_1
caddyserver    | {"level":"info","ts":1604088101.655409,"msg":"using provided configuration","config_file":"/etc/caddy/Caddyfile","config_adapter":"caddyfile"}
caddyserver    | run: adapting config using caddyfile: /etc/caddy/Caddyfile:16: unrecognized directive: header_up
mongo_1        | {"t":{"$date":"2020-10-30T20:01:42.449+00:00"},"s":"I",  "c":"CONTROL",  "id":23285,   "ctx":"main","msg":"Automatically disabling TLS 1.0, to force-enable TLS 1.0 specify --sslDisabledProtocols 'none'"}
mongo_1        | {"t":{"$date":"2020-10-30T20:01:42.498+00:00"},"s":"W",  "c":"ASIO",     "id":22601,   "ctx":"main","msg":"No TransportLayer configured during NetworkInterface startup"}
mongo_1        | {"t":{"$date":"2020-10-30T20:01:42.499+00:00"},"s":"I",  "c":"NETWORK",  "id":4648601, "ctx":"main","msg":"Implicit TCP FastOpen unavailable. If TCP FastOpen is required, set tcpFastOpenServer, tcpFastOpenClient, and tcpFastOpenQueueSize."}
mongo_1        | {"t":{"$date":"2020-10-30T20:01:42.501+00:00"},"s":"I",  "c":"STORAGE",  "id":4615611, "ctx":"initandlisten","msg":"MongoDB starting","attr":{"pid":1,"port":27017,"dbPath":"/data/db","architecture":"64-bit","host":"56590eab74bb"}}
mongo_1        | {"t":{"$date":"2020-10-30T20:01:42.501+00:00"},"s":"I",  "c":"CONTROL",  "id":23403,   "ctx":"initandlisten","msg":"Build Info","attr":{"buildInfo":{"version":"4.4.1","gitVersion":"ad91a93a5a31e175f5cbf8c69561e788bbc55ce1","openSSLVersion":"OpenSSL 1.1.1  11 Sep 2018","modules":[],"allocator":"tcmalloc","environment":{"distmod":"ubuntu1804","distarch":"x86_64","target_arch":"x86_64"}}}}
mongo_1        | {"t":{"$date":"2020-10-30T20:01:42.502+00:00"},"s":"I",  "c":"CONTROL",  "id":51765,   "ctx":"initandlisten","msg":"Operating System","attr":{"os":{"name":"Ubuntu","version":"18.04"}}}
mongo_1        | {"t":{"$date":"2020-10-30T20:01:42.502+00:00"},"s":"I",  "c":"CONTROL",  "id":21951,   "ctx":"initandlisten","msg":"Options set by command line","attr":{"options":{"net":{"bindIp":"*"}}}}
mongo_1        | {"t":{"$date":"2020-10-30T20:01:42.511+00:00"},"s":"I",  "c":"STORAGE",  "id":22270,   "ctx":"initandlisten","msg":"Storage engine to use detected by data files","attr":{"dbpath":"/data/db","storageEngine":"wiredTiger"}}
mongo_1        | {"t":{"$date":"2020-10-30T20:01:42.512+00:00"},"s":"I",  "c":"STORAGE",  "id":22297,   "ctx":"initandlisten","msg":"Using the XFS filesystem is strongly recommended with the WiredTiger storage engine. See http://dochub.mongodb.org/core/prodnotes-filesystem","tags":["startupWarnings"]}
mongo_1        | {"t":{"$date":"2020-10-30T20:01:42.512+00:00"},"s":"I",  "c":"STORAGE",  "id":22315,   "ctx":"initandlisten","msg":"Opening WiredTiger","attr":{"config":"create,cache_size=256M,session_max=33000,eviction=(threads_min=4,threads_max=4),config_base=false,statistics=(fast),log=(enabled=true,archive=true,path=journal,compressor=snappy),file_manager=(close_idle_time=100000,close_scan_interval=10,close_handle_minimum=250),statistics_log=(wait=0),verbose=[recovery_progress,checkpoint_progress,compact_progress],"}}
caddyserver exited with code 1

Can you have your deploy script or whatever it is print out the config before it runs Caddy?

Well, I tried it almost verbatim (just replacing the env vars with some other value) and it works fine for me:

$ cat Caddyfile
{
    #auto_https off
    #auto_https disable_redirects
    #email   email@email.com # For letsencrypt certificates
}

# REVERSE PROXY
# Values from dockerconfig.env
# Change PROXY_HTTPS to true if you have a domain name for auto-HTTPS to work.

#domain.com, www.domain.com {
http://example.com {
	reverse_proxy foobar.com:1234 {
		header_up Host foobar.com
	}
}
#check for any docker image updates
http://localhost:3000 {
	reverse_proxy wud:3000
}
$ caddy adapt --config Caddyfile
{"apps":{"http":{"servers":{"srv0":{"listen":[":80"],"routes":[{"match":[{"host":["example.com"]}],"handle":[{"handler":"subroute","routes":[{"handle":[{"handler":"reverse_proxy","headers":{"request":{"set":{"Host":["foobar.com"]}}},"upstreams":[{"dial":"foobar.com:1234"}]}]}]}],"terminal":true}],"automatic_https":{"skip":["example.com"]}},"srv1":{"listen":[":3000"],"routes":[{"match":[{"host":["localhost"]}],"handle":[{"handler":"subroute","routes":[{"handle":[{"handler":"reverse_proxy","upstreams":[{"dial":"wud:3000"}]}]}]}],"terminal":true}],"automatic_https":{"skip":["localhost"]}}}}}}

I am trying to figure this out. It is just a docker compose file.

@Ed_Siror you mention in the original post that you are using Caddy 2.2.0-alpine. However, it looks as if the build portion in Docker is using Caddy 2.1.1-alpine. Perhaps that is the issue? Are you using a docker-compose.yaml version for development and then a Dockerfile to build? It might need a version change in the Dockerfile if so.

From your Docker build output:

Digest: sha256:acf4730fd996e56a713b99d6f1892b87bf9f95af4ebc5fed53e942986ffbc2cf
Status: Downloaded newer image for caddy:2.1.1-alpine
Pulling mongo (mongo:)...
latest: Pulling from library/mongo

I am using 2.2.0-alpine now. Same problem. I don’t think it is a Caddy issue because the script works locally. I think something else is causing it.

Some good news: it looks like no directive inside that reverse_proxy block is recognized. I tried max_fails, fail_duration, and header_down with the same results.

I don’t know why this did not work last time.
It looks like {$INTERNAL_SERVER_NAME} environment variable was the culprit.
Hard-coding it fixed the issue.
It is working now.

Upon further investigation, it looks like an inline comment in the Docker env file caused this.
Initial environment:
INTERNAL_SERVER_NAME=awszipperv2:8000 #Name has to include port if needed

Now working without comment:
INTERNAL_SERVER_NAME=awszipperv2:8000

The environment variables work without comments.
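For reference, moving the comment to its own line (standard env-file convention) should also keep the value clean while preserving the note:

```
# Name has to include port if needed
INTERNAL_SERVER_NAME=awszipperv2:8000
```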

Ah, yeah, I don’t think env vars can have comments on the same line, but you can put the comment on the line just before. Docker passed the # and everything after it to Caddy as part of the variable’s value; after expansion, Caddy parsed the # as the start of a comment, which also commented out the { on the same line, causing the parse error.
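To illustrate (a hypothetical reconstruction from the env line posted above), after expansion Caddy would effectively have seen:

```
reverse_proxy awszipperv2:8000 #Name has to include port if needed {
	header_up Host {$MAIN_UPSTREAM_HOST}
}
```

Everything from # to the end of the line is a Caddyfile comment, including the opening {, so the next line’s header_up is parsed as a top-level directive rather than a reverse_proxy subdirective: hence “unrecognized directive: header_up”.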

That makes sense. But all my environment variables are commented on the same line locally and they still work on my ubuntu laptop. Why the discrepancy?

Docker’s env parser might work differently than whatever you use locally.
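A quick way to see the difference: a POSIX shell strips an unquoted inline comment during assignment, while Docker’s env-file parser (at least in the compose versions of that era) keeps everything after the = as a literal value. A small sketch of the shell side:

```shell
# In a shell, an unquoted '#' preceded by whitespace starts a comment,
# so the assignment captures only the value before it:
INTERNAL_SERVER_NAME=awszipperv2:8000 #Name has to include port if needed
echo "$INTERNAL_SERVER_NAME"
```

That is why the same env line behaves fine when sourced by a shell on a laptop but breaks when Docker hands the raw value straight to Caddy.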

🤷

This topic was automatically closed after 30 days. New replies are no longer allowed.