1. The problem I’m having:
Trying to run Caddy as a proxy in front of a private S3 bucket. My Kubernetes environment doesn’t allow running as root or adding capabilities (e.g. CAP_NET_BIND_SERVICE to let a non-root process bind privileged ports).
So my first attempt was to add a non-root user in the Dockerfile (see below). I then stripped the file capability from the Caddy binary, because before that the container failed with "exec /usr/bin/caddy: operation not permitted".
Now I am stuck with the log below.
I think my Caddyfile is correct: running it locally with Docker works. However, I’m not sure how to mimic my Kubernetes environment’s behavior of prohibiting a bind to port 80.
This is my successful docker run:
docker run --rm \
  --cap-drop=ALL \
  -e AWS_ACCESS_KEY_ID \
  -e AWS_SECRET_ACCESS_KEY \
  -e PORT=3000 \
  -p 3030:3000 \
  $(IMAGE)
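To reproduce the cluster’s restriction locally, my understanding (an assumption worth verifying) is that Docker sets net.ipv4.ip_unprivileged_port_start to 0 inside containers, so even a non-root process can bind :80 there. Raising that sysctl back to the kernel default of 1024 and forcing the non-root UID should make the local run fail the same way the pod does:

```shell
# Sketch of a local repro: restore the "ports < 1024 are privileged"
# rule that Docker normally relaxes, and run as the non-root UID from
# the Dockerfile. With these flags, a bind to :80 should fail with
# "permission denied", as in the pod.
docker run --rm \
  --cap-drop=ALL \
  --user 65534:65534 \
  --sysctl net.ipv4.ip_unprivileged_port_start=1024 \
  -e AWS_ACCESS_KEY_ID \
  -e AWS_SECRET_ACCESS_KEY \
  -e PORT=3000 \
  -p 3030:3000 \
  $(IMAGE)
```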
2. Error messages and/or full log output:
The following is the log of the pod - basically a restart loop. I don’t understand why it is still trying to bind to 80. I’ve traced through the code a bit — best guess I have is that it’s loading defaults before applying my configuration.
{"level":"info","ts":1774643485.4812043,"msg":"maxprocs: Updating GOMAXPROCS=1: using minimum allowed GOMAXPROCS"}
{"level":"info","ts":1774643485.4812267,"msg":"GOMEMLIMIT is updated","GOMEMLIMIT":120795955,"previous":9223372036854775807}
{"level":"info","ts":1774643485.4812307,"msg":"using config from file","file":"/etc/caddy/Caddyfile"}
{"level":"info","ts":1774643485.481233,"msg":"adapted config to JSON","adapter":"caddyfile"}
{"level":"info","ts":1774643485.4835198,"logger":"admin","msg":"admin endpoint started","address":"localhost:2019","enforce_origin":false,"origins":["//localhost:2019","//[::1]:2019","//127.0.0.1:2019"]}
{"level":"warn","ts":1774643485.4836476,"logger":"http.auto_https","msg":"server is listening only on the HTTP port, so no automatic HTTPS will be applied to this server","server_name":"srv0","http_port":80}
{"level":"info","ts":1774643485.4875364,"logger":"tls","msg":"cleaning storage unit","storage":"FileStorage:/data/caddy"}
{"level":"info","ts":1774643485.4901228,"logger":"tls","msg":"finished cleaning storage units"}
{"level":"info","ts":1774643485.4902031,"logger":"tls.cache.maintenance","msg":"started background certificate maintenance","cache":"0x3e906c01af80"}
{"level":"info","ts":1774643485.490218,"logger":"tls.cache.maintenance","msg":"stopped background certificate maintenance","cache":"0x3e906c01af80"}
{"level":"info","ts":1774643485.4902267,"logger":"http","msg":"servers shutting down with eternal grace period"}
Error: loading initial config: loading new config: http app module: start: listening on :80: listen tcp :80: bind: permission denied
3. Caddy version:
v2.11.2 h1:iOlpsSiSKqEW+SIXrcZsZ/NO74SzB/ycqqvAIEfIm64=
4. How I installed and ran Caddy:
ARG CADDY_FS_VERSION=v0.12.2

FROM caddy:2.11.2-builder AS builder

# An ARG declared before the first FROM is only in scope for FROM lines;
# it must be redeclared inside the stage to be visible to RUN.
ARG CADDY_FS_VERSION
RUN xcaddy build \
    --with github.com/sagikazarmark/caddy-fs-s3@${CADDY_FS_VERSION}

FROM caddy:2.11.2

ARG RUNID=65534

COPY --chown=${RUNID}:${RUNID} --from=builder /usr/bin/caddy /usr/bin/caddy
ADD --chown=${RUNID}:${RUNID} . /

RUN apk add --no-cache libcap \
    && setcap -r /usr/bin/caddy 2>/dev/null || true \
    && chown -R ${RUNID}:${RUNID} \
        /data/caddy/ \
        /config/caddy/ \
        /etc/caddy \
        /srv

EXPOSE 3000

USER ${RUNID}:${RUNID}
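As a sanity check, getcap (from the libcap package installed above) can confirm that no file capabilities remain on the binary in the final image. The image tag here is a placeholder:

```shell
# "my-caddy-image" is a hypothetical tag for the image built above.
# Empty output means the binary carries no file capabilities.
docker run --rm --entrypoint getcap my-caddy-image /usr/bin/caddy
```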
(For the ADD . / above:) next to the Dockerfile, I have a file tree like this:
rootfs
└── etc
└── caddy
└── Caddyfile
3 directories, 1 file
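Since the log says Caddy is using /etc/caddy/Caddyfile, one check worth doing (image tag is a placeholder) is printing the file that actually ends up at that path in the built image. As far as I know, the base caddy image ships its own stock Caddyfile there, which serves on :80:

```shell
# "my-caddy-image" is a hypothetical tag. If this prints the base
# image's stock Caddyfile instead of mine, then ADD put my copy
# somewhere else in the filesystem.
docker run --rm --entrypoint cat my-caddy-image /etc/caddy/Caddyfile
```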
a. System environment:
Kubernetes
b. Command:
CMD ["caddy", "run", "--config", "/etc/caddy/Caddyfile", "--adapter", "caddyfile"]
c. Service/unit/compose file:
apiVersion: v1
kind: Pod
metadata:
  name: caddy
  namespace: ns
spec:
  containers:
    - name: caddy
      image: image
      env: []
      imagePullPolicy: Always
      securityContext:
        capabilities:
          drop:
            - ALL
        runAsNonRoot: true
        readOnlyRootFilesystem: false
        allowPrivilegeEscalation: false
  restartPolicy: Always
  automountServiceAccountToken: false
  securityContext:
    runAsNonRoot: true
    seccompProfile:
      type: RuntimeDefault
  imagePullSecrets:
    - name: secret
d. My complete Caddy config:
{
	debug
	admin off
	auto_https off
	http_port 3000
	https_port 3443

	filesystem myfs s3 {
		bucket bucket
		region us-east-1
		endpoint https://s3.server
		use_path_style
	}
}

:{$PORT} {
	bind 0.0.0.0

	file_server {
		fs myfs
		browse
	}
}
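One detail worth flagging: the pod spec above has env: [], so {$PORT} would expand to nothing in the cluster. If I understand the Caddyfile docs correctly, environment placeholders accept a default after a colon, so the site block could fall back to the same port as the http_port global:

```
:{$PORT:3000} {
	bind 0.0.0.0

	file_server {
		fs myfs
		browse
	}
}
```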