1. Caddy version: unknown. Really, that is the output of caddy -version.
2. How I run Caddy:
Caddyfile:
:80 :443 {
    # used to accept LetsEncrypt agreement
    tls /etc/caddy/cert.pem /etc/caddy/key.pem

    # provisioned by docker-compose, all logs are routed to stdout
    log /api stdout "{remote} {method} {uri} {proto} {status} {size} {latency_ms}"
    log /share stdout "{remote} {method} {uri} {proto} {status} {size} {latency_ms}"
    errors

    # see Dockerfile for how static html/js is "built" and copied here
    root /var/www/html
    gzip

    # in docker-compose context, the backend node api is known as "server"
    proxy /api server:9000 {
        without /api
        transparent
    }

    # in docker-compose context, the backend node api is known as "express"
    proxy /api express:3000 {
        without /api
        transparent
    }
    proxy /share express:3000/share {
        without /share
        transparent
    }

    header / {
        # don't expose server name to clients
        -Server
        # don't allow serving from iframes
        X-Frame-Options "DENY"
        # HSTS - force clients to always connect via HTTPS
        Strict-Transport-Security "max-age=63072000; includeSubDomains; preload"
        # Add caching
        Cache-Control "max-age=259200"
    }

    header /api -Cache-Control
    header /share -Cache-Control
}
Hi
When running the Docker Compose setup above on an EC2 Amazon Linux 2 instance, everything is fine: accessing the system from the browser works, and the health check (a POST endpoint) responds fine via Postman.
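For reference, the same check can be reproduced from the instance itself with curl; the /health path below is a placeholder for the actual endpoint, and -k is needed because the certificate is issued for the public hostname rather than localhost:

# placeholder path; substitute the real health check endpoint
curl -ik -X POST https://localhost/health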
I have an AWS Application Load Balancer with 2 listeners:
on HTTP port 80, with the following default rule:
    redirect to HTTPS://#{host}:443/#{path}?#{query}
on HTTPS port 443, forwarding to a target group with:
    Target type: instance (the registered target is the instance ID)
    Protocol / port: HTTPS 443
    Protocol version: HTTP1
    IP address type: IPv4
The problem is that the target health status is Target.FailedHealthChecks. I have read the relevant docs but cannot find an explanation for this.
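One way to get more detail than the console shows is the AWS CLI, which reports the reason code and description for each target (the target group ARN below is a placeholder). Note also that ALB health checks are plain HTTP GET requests, so a POST-only endpoint cannot serve as the health check path:

# placeholder ARN; substitute your target group's ARN
aws elbv2 describe-target-health \
    --target-group-arn arn:aws:elasticloadbalancing:region:account:targetgroup/name/id \
    --query 'TargetHealthDescriptions[].TargetHealth'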
I then tried updating the Caddyfile to Caddy v2 syntax:

:80 :443 {
    # used to accept LetsEncrypt agreement
    tls /etc/caddy/cert.pem /etc/caddy/key.pem

    # provisioned by docker-compose, all logs are routed to stdout
    log /api stdout "{remote} {method} {uri} {proto} {status} {size} {latency_ms}"
    log /share stdout "{remote} {method} {uri} {proto} {status} {size} {latency_ms}"
    handle_errors

    # see Dockerfile for how static html/js is "built" and copied here
    root * /var/www/html
    encode gzip

    # in docker-compose context, the backend node api is known as "server"
    reverse_proxy /api server:9000 {
        without /api
        transparent
    }

    # in docker-compose context, the backend node api is known as "express"
    reverse_proxy /share express:3000/share {
        without /share
        transparent
    }

    header / {
        # don't expose server name to clients
        -Server
        # don't allow serving from iframes
        X-Frame-Options "DENY"
        # HSTS - force clients to always connect via HTTPS
        Strict-Transport-Security "max-age=63072000; includeSubDomains; preload"
        # Add caching
        Cache-Control "max-age=259200"
    }

    header /share Cache-Control
    header /api Cache-Control
}
The Caddy install is done in the Dockerfile:
### STAGE 1: Build ###
FROM node:12-alpine as base
RUN apk add --no-cache \
    git

# make a directory for our application
WORKDIR /app

# angular
RUN npm install -g --unsafe-perm @angular/cli

# dependencies
COPY package.json .
COPY patch.js .
RUN npm cache clean --force
RUN npm install --unsafe-perm

COPY src src
COPY angular.json .
COPY tsconfig.json .
COPY tsconfig.base.json .
COPY tsconfig.worker.json .
RUN ng build --prod

### STAGE 2: Setup ###
# caddy server
FROM alpine:3.10

# caddy is installed from the alpine package repository;
# bash and curl are pulled in alongside it
RUN apk add --no-cache \
    bash \
    curl \
    caddy

# copy the built angular app from the build stage
COPY --from=base /app/dist /var/www/html

# the caddy config file
COPY Caddyfile /etc/Caddyfile
COPY cert.pem /etc/caddy/cert.pem
COPY key.pem /etc/caddy/key.pem

# for fun and logs
RUN caddy -version

# DOCUMENT ports
EXPOSE 443

CMD ["caddy", "--conf", "/etc/Caddyfile", "--agree=true"]
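Note that --conf and --agree are Caddy v1 flags. If the installed binary were Caddy v2, the equivalent invocation would be along these lines (v2 moved to subcommands, and the Caddyfile adapter has to be named explicitly for a config outside /etc/caddy/Caddyfile):

# Caddy v2 equivalent of the CMD above
CMD ["caddy", "run", "--config", "/etc/Caddyfile", "--adapter", "caddyfile"]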
After these changes, the containers restart in an endless loop.
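The container log is the quickest way to see the startup error causing the loop; the service name caddy below is an assumption about the docker-compose file:

# assuming the caddy service is named "caddy" in docker-compose.yml
docker-compose logs --tail=50 caddy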
I’m having trouble running caddy from within the “client” image.
Using the existing Dockerfile, after running docker exec into the container, I see the following:
bash-5.0# cat /etc/os-release
NAME="Alpine Linux"
ID=alpine
VERSION_ID=3.10.9
PRETTY_NAME="Alpine Linux v3.10"
HOME_URL="https://alpinelinux.org/"
BUG_REPORT_URL="https://bugs.alpinelinux.org/"
bash-5.0# apk add caddy
(1/1) Installing caddy (1.0.0-r0)
Executing caddy-1.0.0-r0.pre-install
Executing busybox-1.30.1-r5.trigger
OK: 29 MiB in 23 packages
bash-5.0# caddy -version
unknown
bash-5.0# caddy
Activating privacy features... done.
Serving HTTP on port 2015
http://:2015
WARNING: File descriptor limit 1024 is too low for production servers. At least 8192 is recommended. Fix with `ulimit -n 8192`.
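That output is characteristic of Caddy v1: the default port 2015 and the “Activating privacy features” banner. The unknown version string is typical of a v1 binary built without version metadata. On a v2 binary, the version is printed with a subcommand instead of a flag:

# Caddy v1 style
caddy -version
# Caddy v2 style; prints something like v2.x.x on a v2 binary
caddy version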
The apk docs say it should install Caddy v2, but it installs v1 for some reason. Is there no way to apk add Caddy v2?
You’re using an old version of Alpine. Caddy v2 doesn’t exist in the Alpine repos for that version.
Again though, I strongly recommend you use the official Docker image instead. It’s properly configured to run Caddy correctly out of the box. There are some extra setup steps you’d need to do otherwise.
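For reference, the official-image route would look roughly like this as a replacement for stage 2 (a sketch; the stage name and paths follow the Dockerfile above):

### STAGE 2: Setup, using the official image ###
FROM caddy:2-alpine

# the official image expects its config at /etc/caddy/Caddyfile
COPY Caddyfile /etc/caddy/Caddyfile
COPY cert.pem /etc/caddy/cert.pem
COPY key.pem /etc/caddy/key.pem

# built angular app from the build stage
COPY --from=base /app/dist /var/www/html

# no CMD needed: the image already runs
# caddy run --config /etc/caddy/Caddyfile --adapter caddyfile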
By updating the Alpine version in the Dockerfile, I was able to update Caddy to v2 (the devel package).
Now the Caddyfile looks like this:
{$CADDY_SUBDOMAIN}.com:443 {
    # used to accept LetsEncrypt agreement
    tls /etc/caddy/cert.pem /etc/caddy/key.pem

    #log /api stdout "{remote} {method} {uri} {proto} {status} {size} {latency_ms}"
    #log /share stdout "{remote} {method} {uri} {proto} {status} {size} {latency_ms}"
    #errors

    # see Dockerfile for how static html/js is "built" and copied here
    root * /var/www/html
    encode gzip

    # in docker-compose context, the backend node api is known as "server"
    #reverse_proxy /api server:9000 {
    #    without /api
    #    transparent
    #}

    reverse_proxy /agroapi express:3000
    #{
    #    without /agroapi
    #    transparent
    #}
    reverse_proxy /share express:3000/share
    #{
    #    without /share
    #    transparent
    #}

    header {
        # don't expose server name to clients
        -Server
        # don't allow serving from iframes
        X-Frame-Options "DENY"
        # HSTS - force clients to always connect via HTTPS
        Strict-Transport-Security "max-age=63072000; includeSubDomains; preload"
        # Add caching
        Cache-Control "max-age=259200"
    }

    header /agroapi ?Cache-Control
    header /share ?Cache-Control
    header /api ?Cache-Control
}
Now the Docker containers are running, but in the browser I get an empty document instead of the website. Is the reverse_proxy config wrong?
Thanks for your time and effort!
Path matching in Caddy is exact, so those path matchers will only match requests to exactly those paths and nothing else. Please see the request matching docs:
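In practice that means /agroapi matches only the literal path /agroapi, never /agroapi/anything. A trailing wildcard makes the matcher cover the whole subtree; a sketch based on the config above:

# match /agroapi and everything under it, passing the path through unchanged
reverse_proxy /agroapi* express:3000

# if the upstream should NOT see the /agroapi prefix (what `without` did in v1),
# handle_path strips the matched prefix before proxying
handle_path /agroapi* {
    reverse_proxy express:3000
}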
Thank you, now the app itself works as it did when using Caddy v1.
Now I’m returning to my original question about working with the AWS ALB. Again, the current issue is that the target group health check fails on the same path that returns status 200 in the browser.
WHAT I HAVE:
An EC2 instance with a static IP and an A record pointing to it. My updated Caddyfile has the private IP of that instance, as follows:
{$CADDY_SUBDOMAIN}.companyname.com:443 {
When typing the subdomain in the browser, I receive the website.
An AWS ALB, with a CNAME pointing to its DNS name. The ALB has 2 listeners: one for port 80, redirecting to port 443, and the other for port 443, forwarding to an AWS target group. The target group’s registered target is the same EC2 instance mentioned above. Here is the problem: the target group health check fails, and to my understanding it is due to Caddy.
Please help me approach the issue correctly.
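One thing stands out from the config above: the site block is keyed to {$CADDY_SUBDOMAIN}.companyname.com, but ALB health checks are sent to the target’s IP address without a matching Host header or SNI, so Caddy has no site that matches them. A sketch of a catch-all site that answers health checks regardless of hostname; the /health path is a placeholder and must match the target group’s configured health check path (ALB health checks do not validate the certificate, so the name mismatch on cert.pem is tolerated):

:443 {
    tls /etc/caddy/cert.pem /etc/caddy/key.pem
    # placeholder path; must match the target group's health check path
    respond /health 200
}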
I would write your config like this; it’s a bit clearer how things are being handled. I’m not sure I understand what you’re trying to do with the header lines, though; that doesn’t make sense to me. I don’t think those do anything useful for you here.
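As a sketch only (not necessarily the exact config that followed), a cleaned-up version of the site above might look like this, keeping the same upstreams and using wildcard matchers:

{$CADDY_SUBDOMAIN}.companyname.com {
    tls /etc/caddy/cert.pem /etc/caddy/key.pem

    root * /var/www/html
    encode gzip

    reverse_proxy /agroapi* express:3000
    reverse_proxy /share* express:3000

    header {
        -Server
        X-Frame-Options "DENY"
        Strict-Transport-Security "max-age=63072000; includeSubDomains; preload"
        Cache-Control "max-age=259200"
    }
}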