Caddy behind AWS Application Load Balancer

1. Caddy version (caddy version):

unknown. Really, that is the output of caddy -version.

2. How I run Caddy:

Caddyfile:

   :80 :443 {

        # used to accept LetsEncrypt agreement
        tls     /etc/caddy/cert.pem /etc/caddy/key.pem

        # provisioned by docker-compose, all logs are routed to stdout
        

        log     /api    stdout "{remote} {method} {uri} {proto} {status} {size} {latency_ms}"

        log     /share    stdout "{remote} {method} {uri} {proto} {status} {size} {latency_ms}"


        errors

        # see Dockerfile for how static html/js is "built" and copied here
        root    /var/www/html
        gzip

        # in docker-compose context, the backend node api is known as "server"
        proxy   /api    server:9000 {
            without     /api
            transparent
        }

        # in docker-compose context, the backend node api is known as "express"
        proxy   /api    express:3000 {
            without     /api
            transparent
        }

        proxy   /share    express:3000/share {
            without     /share
            transparent
        }

        header / {
            # don't expose server name to clients
            -Server

            # don't allow serving from iframes
            X-Frame-Options "DENY"

            # HSTS - force clients to always connect via HTTPS
            Strict-Transport-Security "max-age=63072000; includeSubDomains; preload"
            #Add caching
            Cache-Control "max-age=259200"
        }
        
        header /api -Cache-Control
        header /share -Cache-Control

    }

a. System environment:

Amazon Linux 2
Docker Compose

c. Service/unit/compose file:

version: '3'

services:

  client:
    container_name: client
    build:
      context: frontend
    environment:
      - CADDY_SUBDOMAIN=test2     
    restart: always
    ports:
      - "80:80"
      - "443:443"
    links:
      - express
    volumes:
      - /home/opc/persist/.caddy:/root/.caddy

  server:
    container_name: server
    build:
      context: backend    
    restart: always

  express:
    container_name: express
    build: express
    environment:     
      - NODE_ENV=automation
    restart: always

3. The problem I’m having:

Hi
When running the Docker Compose setup above on an EC2 Amazon Linux 2 instance, everything is fine: accessing the system from the browser works, and the health check (a POST endpoint) works via Postman.
I have an AWS Application Load Balancer with 2 listeners:

  1. On HTTP port 80, with the following default rule:

     redirecting to HTTPS://#{host}:443/#{path}?#{query}

  2. On HTTPS port 443, forwarding to a target group with:
     Target type: instance (registered target is the instance ID)
     Protocol and port: HTTPS 443
     Protocol version: HTTP1
     IP address type: IPv4

The problem is that the target health status is Target.FailedHealthChecks. I’ve read the relevant docs, but cannot find an explanation for this.
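
(For reference, the target health output shown in section 4 can be pulled with the AWS CLI; the target group ARN below is just a placeholder.)

    # query target health for the ALB's target group
    aws elbv2 describe-target-health \
        --target-group-arn arn:aws:elasticloadbalancing:eu-west-1:123456789012:targetgroup/my-tg/0123456789abcdef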

4. Error messages and/or full log output:

 "TargetHealth": {
                "State": "unhealthy",
                "Reason": "Target.FailedHealthChecks",
                "Description": "Health checks failed"
            }

5. What I already tried:

Played with the Caddyfile, adding and removing the CADDY_SUBDOMAIN, and have been doing general tryouts.

THANK YOU

That doesn’t make any sense.

From your config though, it seems like you’re probably still using Caddy v1.

Caddy v1 is EOL, and no longer supported. The last release was almost 2 years ago.

Please upgrade to Caddy v2.
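
As a rough sketch (not a full conversion of your config), a v1 block like your proxy /api server:9000 { without /api; transparent } maps to handle_path plus reverse_proxy in v2:

    # handle_path strips the matched /api prefix before proxying,
    # and v2's reverse_proxy already passes the original Host header,
    # so "transparent" is no longer needed
    handle_path /api/* {
        reverse_proxy server:9000
    }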


Hi thanks for the quick response.

I’ve updated my Caddyfile to this:

    :80 :443 {

        # used to accept LetsEncrypt agreement
        tls     /etc/caddy/cert.pem /etc/caddy/key.pem

        # provisioned by docker-compose, all logs are routed to stdout

        log     /api    stdout "{remote} {method} {uri} {proto} {status} {size} {latency_ms}"

        log     /share    stdout "{remote} {method} {uri} {proto} {status} {size} {latency_ms}"


        handle_errors 

        # see Dockerfile for how static html/js is "built" and copied here
        root   * /var/www/html
        encode gzip

        # in docker-compose context, the backend node api is known as "server"
        reverse_proxy   /api    server:9000 {
            without     /api
            transparent
        }

        # in docker-compose context, the backend node api is known as "express"
  

        reverse_proxy   /share    express:3000/share {
            without     /share
            transparent
        }

        header / {
            # don't expose server name to clients
            -Server

            # don't allow serving from iframes
            X-Frame-Options "DENY"

            # HSTS - force clients to always connect via HTTPS
            Strict-Transport-Security "max-age=63072000; includeSubDomains; preload"
            #Add caching
            Cache-Control "max-age=259200"
        }
        
        header /share Cache-Control
        header /api Cache-Control



    }

The Caddy install is done in the Dockerfile:

### STAGE 1: Build ###
FROM node:12-alpine as base

RUN apk add --no-cache \
    git

# make a directory for our application
WORKDIR /app

# angular
RUN npm install -g --unsafe-perm @angular/cli

# dependencies
COPY package.json .
COPY patch.js .
RUN npm cache clean --force
RUN npm install --unsafe-perm


COPY src src
COPY angular.json .
COPY tsconfig.json .
COPY tsconfig.base.json .
COPY tsconfig.worker.json .
RUN ng build --prod 

# STAGE 2: Setup ###

# caddy server
FROM alpine:3.10

# we are installing caddy using curl + bash
# so here come the dependencies
RUN apk add --no-cache \
    bash \
    curl \
    caddy

# Install latest version of caddy

COPY --from=base /app/dist /var/www/html

# the caddy config file
COPY Caddyfile /etc/Caddyfile
COPY cert.pem /etc/caddy/cert.pem
COPY key.pem /etc/caddy/key.pem

# for fun and logs
RUN caddy -version

# DOCUMENT ports
EXPOSE 443
CMD ["caddy", "--conf", "/etc/Caddyfile", "--agree=true"]

After these changes, the containers are restarting in an endless loop.

What is missing here?

Isn’t

RUN apk add --no-cache \
    bash \
    curl \
    caddy

supposed to install the latest Caddy?

Thanks for your time and effort!

You haven’t properly updated your config to Caddy v2.

Also, I strongly recommend using our official Docker image instead. See it on Docker Hub

The command line interface has changed from v1. Caddy v2 was a complete rewrite from the ground up.
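
For example, the second stage of your Dockerfile could be based on the official image, roughly like this (a sketch reusing your existing file locations):

# STAGE 2 sketch: official Caddy v2 image
FROM caddy:2-alpine

# static assets built in the node stage
COPY --from=base /app/dist /var/www/html

# the official image already runs: caddy run --config /etc/caddy/Caddyfile --adapter caddyfile
COPY Caddyfile /etc/caddy/Caddyfile
COPY cert.pem /etc/caddy/cert.pem
COPY key.pem /etc/caddy/key.pem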

I’m having trouble running Caddy from within the “client” image.
Using the existing Dockerfile, after a docker exec into the container, I get the following:

bash-5.0# cat /etc/os-release
NAME="Alpine Linux"
ID=alpine
VERSION_ID=3.10.9
PRETTY_NAME="Alpine Linux v3.10"
HOME_URL="https://alpinelinux.org/"
BUG_REPORT_URL="https://bugs.alpinelinux.org/"
bash-5.0# apk add caddy
(1/1) Installing caddy (1.0.0-r0)
Executing caddy-1.0.0-r0.pre-install
Executing busybox-1.30.1-r5.trigger
OK: 29 MiB in 23 packages
bash-5.0# caddy -version
unknown
bash-5.0# caddy
Activating privacy features... done.

Serving HTTP on port 2015
http://:2015

WARNING: File descriptor limit 1024 is too low for production servers. At least 8192 is recommended. Fix with `ulimit -n 8192`.

The apk docs say it should install Caddy 2, but it installs v1 for some reason. Is there no way to install Caddy 2 via apk add?

Thanks

You’re using an old version of Alpine. Caddy v2 doesn’t exist in the Alpine repos for that version.

Again though, strongly recommend you use the official docker image instead. It’s properly configured to run Caddy correctly out of the box. There’s some extra setup steps you’d need to do otherwise.
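
If you use the official image, the main extra step is persisting Caddy's /data and /config directories, for example in compose (service and volume names here are just examples):

  client:
    image: caddy:2-alpine
    ports:
      - "80:80"
      - "443:443"
    volumes:
      - ./Caddyfile:/etc/caddy/Caddyfile
      - caddy_data:/data        # certificates and other state
      - caddy_config:/config

volumes:
  caddy_data:
  caddy_config: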

Hi

After updating Alpine in the Dockerfile, I was able to update Caddy to the devel version.
Now the Caddyfile looks like this:

     {$CADDY_SUBDOMAIN}.com:443 {

        # used to accept LetsEncrypt agreement
        tls     /etc/caddy/cert.pem /etc/caddy/key.pem

        #log     /api    stdout "{remote} {method} {uri} {proto} {status} {size} {latency_ms}"

        #log     /share    stdout "{remote} {method} {uri} {proto} {status} {size} {latency_ms}"

        #errors

        # see Dockerfile for how static html/js is "built" and copied here
        root * /var/www/html
        encode gzip

        # in docker-compose context, the backend node api is known as "server"
        #reverse_proxy   /api    server:9000 {
        #    without     /api
        #    transparent
        #}

        reverse_proxy   /agroapi    express:3000
        #{
        #    without     /agroapi
        #    transparent
        #}

        reverse_proxy   /share    express:3000/share
        #{
        #    without     /share
        #    transparent
        #}

        header  {
            # don't expose server name to clients
            -Server

            # don't allow serving from iframes
            X-Frame-Options "DENY"

            # HSTS - force clients to always connect via HTTPS
            Strict-Transport-Security "max-age=63072000; includeSubDomains; preload"
            #Add caching
            Cache-Control "max-age=259200"
        }

        header /agroapi ?Cache-Control
        header /share ?Cache-Control
        header /api ?Cache-Control

    }

Now the containers are running, but in the browser I get an empty document instead of the website. Is the reverse_proxy config wrong?
Thanks for your time and effort!

Path matching in Caddy is exact, so those path matchers will only match requests to exactly those paths and nothing else. Please see the request matching docs.
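
For example, with the paths from your config:

    # exact match: only requests to exactly /share
    reverse_proxy /share  express:3000

    # wildcard match: /share, /share/, /share/anything (any path starting with /share)
    reverse_proxy /share* express:3000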


Thank you, now the app itself works as it did when using Caddy v1 🙂

Now I’m returning to my original question about working with the AWS ALB. Again, the current issue is that the target group health check fails on the same path that returns status 200 in the browser.
WHAT I HAVE:

  1. An EC2 instance with a static IP and an A record pointing to it. My updated Caddyfile uses the subdomain pointing to that instance as follows:
       {$CADDY_SUBDOMAIN}.companyname.com:443 {

     When typing the subdomain in the browser, I get the website.

  2. An AWS ALB, with a CNAME pointing to its DNS name. The ALB has 2 listeners: one for port 80 redirecting to port 443, the other for port 443 forwarding to an AWS target group. The TG’s registered target is the same EC2 instance mentioned above. Here is the problem: the TG health check fails, and to my understanding it is due to Caddy.
     Please help me approach the issue correctly.

My updated caddy file:

    {$CADDY_SUBDOMAIN}.companyname.com:443 {

        tls     /etc/caddy/cert.pem /etc/caddy/key.pem
        
        log

        root *  /var/www/html
        file_server

        encode gzip

        route  /agroapi* {

           uri strip_prefix  /agroapi
           reverse_proxy  express:3000 {
                  header_up  Strict-Transport-Security "max-age=63072000; includeSubDomains; preload"

           }

        }

        reverse_proxy /share/*  express:3000/share {
                  header_up  Strict-Transport-Security "max-age=63072000; includeSubDomains; preload"
        }
            

        header /agroapi ?Cache-Control
        header /share ?Cache-Control
}

Thanks for your time and effort!

I would write your config like this – it’s a bit clearer how things are being handled. I’m not sure I understand what you’re trying to do with the header lines though, that doesn’t make sense to me. I don’t think those do anything useful for you here.

{$CADDY_SUBDOMAIN}.companyname.com:443 {
	tls /etc/caddy/cert.pem /etc/caddy/key.pem
	log

	encode gzip

	header /agroapi ?Cache-Control
	header /share ?Cache-Control

	header Strict-Transport-Security "max-age=63072000; includeSubDomains; preload"

	handle_path /agroapi* {
		reverse_proxy express:3000
	}

	handle /share/* {
		reverse_proxy express:3000
	}

	handle {
		root * /var/www/html
		file_server
	}
}

I have nothing to go on here. Make a request with curl -v to mimic the kind of health check requests AWS is making. Show what’s in Caddy’s logs.
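
Something along these lines, run from the EC2 instance itself (the IP and hostname below are placeholders):

    # mimic the ALB health check: hit the instance directly, no real hostname
    curl -kv https://10.0.0.10:443/

    # compare with a request that sends the expected hostname/SNI
    curl -kv --resolve test2.companyname.com:443:10.0.0.10 https://test2.companyname.com/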


This topic was automatically closed after 30 days. New replies are no longer allowed.