Caddy proxying to another Caddy file server?

I’m trying to use Caddy in conjunction with a statically built HTML website that is deployed using Docker Compose, but I think I’ve set it up in a really convoluted manner.

1. Output of caddy version:

Caddy 2.6

2. How I run Caddy:

a. System environment:

Debian v11 Bullseye
Docker Compose / Docker v20

b. Command:

docker compose up --build

c. Service/unit/compose file:

A docker-compose file that contains:

caddy:  
  image: caddy:2.6-alpine  
  restart: unless-stopped  
  ports:  
    - "80:80"  
    - "443:443"  
    - "443:443/udp"  
  volumes:  
    - ./Caddyfile:/etc/caddy/Caddyfile  

client:  
  build:  
    context: ./client  
  ports:  
    - "3000:80"

The Docker Compose-level Caddyfile:

http://frontend.webdomaintests.com {
	# certificate recovery commented out
	# tls my.address@inbox.com

	# Another common task is to set up a reverse proxy:
	reverse_proxy client:3000
}

I’m just trying to get this to work with regular http before I move everything over to SSL.

The Dockerfile contains nothing special; it just copies over static HTML files:

# Earlier stages (base, production-deps, build) are defined above; elided here
FROM base

WORKDIR /myapp

COPY --from=production-deps /myapp/node_modules /myapp/node_modules

COPY --from=build /myapp/dist /myapp/dist
ADD . .

FROM caddy:2.6-alpine

COPY Caddyfile /etc/caddy/Caddyfile
COPY --from=build /myapp/dist /srv

This single Docker image also has its own Caddyfile that looks like this:

:3000 {
	# certificate recovery
	# tls my.address@inbox.com

	# Set this path to your site's directory.
	root * /srv

	# Enable the static file server.
	file_server

	encode gzip

	@static {
		file
		path *.ico *.css *.js *.gif *.jpg *.jpeg *.png *.svg *.woff *.pdf *.webp
	}
}

3. The problem I’m having:

I’ve tested this locally and can hit the domain http://frontend.webdomaintests.com without any issues, but it feels weird to have to run Caddy as a service at the compose level, and then run Caddy again inside the client Dockerfile.

What happens when I create several Docker Compose projects (representing different apps)? Is it possible to not put Caddy in Docker (Compose or regular Docker) at all, and just have it installed on the VPS itself?

Just use a single Caddy instance; you don’t need multiple. It’s just added overhead in this case. Unless I misunderstand your goals.

You don’t need to copy the Caddyfile into your container if you prefer not to; you can just use a volume (as you’ve done for the other one).

That @static matcher isn’t doing anything. You’re not using it anywhere, so it gets dropped.

The email address isn’t for “recovery”. It’s just to give ACME issuers a contact in case they need to alert you of any issues with your certificate.
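
If you do want to give ACME issuers a contact address, the usual place is the email global option rather than a per-site tls directive. A minimal sketch (the address is a placeholder):

{
	email my.address@inbox.com
}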

Absolutely. Running Caddy directly on the VPS would work fine.
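
For reference, installing Caddy on Debian is just a few commands. This sketch follows the official apt repository instructions at the time of writing (double-check the current install docs before copying):

sudo apt install -y debian-keyring debian-archive-keyring apt-transport-https
curl -1sLf 'https://dl.cloudsmith.io/public/caddy/stable/gpg.key' | sudo gpg --dearmor -o /usr/share/keyrings/caddy-stable-archive-keyring.gpg
curl -1sLf 'https://dl.cloudsmith.io/public/caddy/stable/debian.deb.txt' | sudo tee /etc/apt/sources.list.d/caddy-stable.list
sudo apt update
sudo apt install caddy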

Or you could use something like caddy-docker-proxy (https://github.com/lucaslorentz/caddy-docker-proxy), which can automatically configure itself from Docker labels you put on your containers, as long as they all share the same Docker network.


That makes sense. But if I only have the Caddyfile at the docker-compose level, how does Caddy reach the site directory (since the site directory is a path internal to the client container)?

For example, if I modify the docker-compose file it now has this:

http://frontend.webdomaintests.com {
        
        # Another common task is to set up a reverse proxy:
        reverse_proxy client:3000

        # Set this path to your site's directory. 
        root * /srv

        # Enable the static file server.
        file_server

        encode gzip
 
        @static {
            file
            path *.ico *.css *.js *.gif *.jpg *.jpeg *.png *.svg *.woff *.pdf *.webp
        }

        header @static Cache-Control max-age=5184000
}

How can the line root * /srv know about /srv, which is a directory inside the client container that contains my index.html file?

I’m used to running Docker containers that have their own built-in server (Python Flask or Node Express), but this container (client) only contains static HTML files. That’s why I tried adding Caddy twice: I didn’t know how to route into a Docker container.

I’m saying you should have just a single service in your docker-compose file. It can be built from your Dockerfile to copy in your static site:

caddy:  
  build:  
    context: ./client  
  restart: unless-stopped  
  ports:  
    - "80:80"  
    - "443:443"  
    - "443:443/udp"  
  volumes:  
    - ./Caddyfile:/etc/caddy/Caddyfile
    - caddy_data:/data

And your Caddyfile can be just this:

http://frontend.webdomaintests.com {
        root * /srv
 
        @static {
            file
            path *.ico *.css *.js *.gif *.jpg *.jpeg *.png *.svg *.woff *.pdf *.webp
        }
        header @static Cache-Control max-age=5184000

        encode gzip

        file_server
}

See the docker-compose example we have in the docs.


Thanks @francislavoie.

I think my misunderstanding comes from the fact that the full app is a compose project consisting of a backend (Node Express) and a frontend (static HTML). The problem is that the container whose Dockerfile compiles/builds/bundles the website exits at the end, since it’s just a bunch of static files in /srv.

This is a visual of my project (diagram omitted).

To “fix” this, I added FROM caddy:2.6-alpine to my client Dockerfile. That way the container wouldn’t exit after finishing building the /srv assets.

However, I still need Caddy to also route my backend which is why I added it to my docker-compose like this:

  caddy:
    image: caddy:2.6-alpine
    restart: unless-stopped
    ports:
      - "80:80"
      - "443:443"
      - "443:443/udp"
    volumes:
      - $PWD/Caddyfile:/etc/caddy/Caddyfile

  client:
    build:
      context: ./client
    ports:
      - "3000:80"

  server:
    build:
      context: ./server
    ports:
      - "4000:4000"

I’m guessing that I have no choice in this situation: I’ll need to keep Caddy in my client Dockerfile, and also keep Caddy in my docker-compose so it can handle regular routing to my backend. Since the backend is actively listening using an Express server, it doesn’t need its own Caddy.

Does that make sense?

But you can have a single Caddy instance that does both those things. It doesn’t need to be separate.

Your config in that case might look like this, proxying to your NodeJS server container. You’d only have two containers: one with your API server, and the other with Caddy plus your built frontend. I’m assuming your API uses /api as a route prefix, so you can match those requests and route them to your API, while everything else goes to your frontend file server.

http://frontend.webdomaintests.com {
	handle /api* {
		reverse_proxy server:4000
	}

	handle {
		root * /srv

		@static path *.ico *.css *.js *.gif *.jpg *.jpeg *.png *.svg *.woff *.pdf *.webp
		header @static Cache-Control max-age=5184000

		encode gzip
		file_server
	}
}

Huh, that’s pretty neat. The backend was actually a separate subdomain, backend.*.com vs frontend.*.com. But… I guess there’s no technical reason for me to do that.
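
For what it’s worth, the separate-subdomain setup would just be a second site block. A minimal sketch (the hostname is hypothetical, matching the earlier naming):

http://backend.webdomaintests.com {
	reverse_proxy server:4000
}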

Multiple Docker Compose App Problem

That’s got it working, thanks! My only issue now is that I put together another Docker Compose project (basically identical), and then changed the Caddyfile from frontend.webdomaintests.com to frontend.webdomaintests2.com. Unfortunately, once I try to spin up the second compose project, it throws a “port is already allocated” error:

Error response from daemon: driver failed programming external connectivity on endpoint deploy-test-2-caddy-and-client-1 (4ca530e32a698c5792cddef0dc3129d59634133019d513404e66853577f6a927): Bind for 0.0.0.0:443 failed: port is already allocated

Each app (built with Docker Compose) is served on a different domain. The problem is that the first compose project grabs the HTTP(S) ports:

version: "3.7"
services:

  caddy-and-client:
    build:
      context: ./client
    restart: unless-stopped
    ports:
      - "80:80"
      - "443:443"
      - "443:443/udp"
    volumes:
      # When client docker spins up Caddy, this will give it access to the Caddyfile
      - ./Caddyfile:/etc/caddy/Caddyfile
      - caddy_data:/data

  server:
    build:
      context: ./server
#    ports:
#      - "4000:4000"

volumes:
  caddy_data:

I can’t run the second “identical” compose file. I think I’m going to have to build a dedicated compose project just for Caddy, which will reverse proxy every domain.

ATTEMPTED FIX:

Okay, I’ve written a dedicated docker compose for Caddy. I also have both the APP compose and the CADDY compose on the declared network:

networks:
  all-apps-network:
    external: true

Unfortunately, the “Caddy compose” can’t seem to resolve named containers from other compose files…

http://frontend.webdomaintests.com {
	handle {
		reverse_proxy myclient:3000
	}
}

Since the container myclient is part of the APP compose, I thought putting them all on the same external network would do the trick, but attempting to hit the frontend results in the Caddy compose logging:

{"level":"error","ts":1668310788.325823,"logger":"http.log.error","msg":"dial tcp: lookup myclient on 127.0.0.11:53: no such host","request":{"remote_ip":"172.22.0.1","remote_port":"55688","proto":"HTTP/1.1","method":"GET","host":"frontend.webdomaintests.com","uri":"/","headers":{"User-Agent":["curl/7.79.1"],"Accept":["*/*"]}},"duration":0.011544834,"status":502,"err_id":"utsiy7kmn","err_trace":"reverseproxy.statusError (reverseproxy.go:1272)"}

The Caddy compose has:

http://frontend.webdomaintests.com {
	handle {
		reverse_proxy myclient:3000
	}
}

The APP compose has:

version: "3.7"
services:

  client:
    container_name: myclient
    build:
      context: ./client
    ports:
      - "3000:3000"
 
networks:
  all-apps-network:
    external: true

Finally, the client container’s Caddy has:

:3000 {
	root * /srv
	encode gzip
	file_server
}

I can hit it if I curl port 3000 directly, but curl against http://frontend.webdomaintests.com gives 502 Bad Gateway.

@francislavoie Just wanted to let you know that, for some reason, when I inspected my Caddy compose it showed the default compose network, which is why it couldn’t communicate with the other compose.

After I changed the yaml from:

networks:
  all-apps-network:
    external: true

to:

networks:
  default:
    external: true
    name: all-apps-network

It seemed to join the correct one. I have no idea why. :confused:

Thanks for all your help, I’m really happy to be using Caddy having switched over from Apache.

This is where caddy-docker-proxy (https://github.com/lucaslorentz/caddy-docker-proxy) would be a good fit for you. It sets up a Caddy instance with a plugin that watches for containers starting up in the same network, and configures Caddy from the Docker labels on those containers. If you have many separate apps you want to run, this is probably the best approach to keep things relatively separated.
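
As a rough sketch of how that looks (the label syntax is from the caddy-docker-proxy README; the service name, domain, and network name are assumptions):

services:
  client:
    build:
      context: ./client
    networks:
      - caddy
    labels:
      caddy: frontend.webdomaintests.com
      caddy.reverse_proxy: "{{upstreams 80}}"

networks:
  caddy:
    external: true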

Comment on the config – you don’t need a handle here, since you only have this one route.

You need to add the network to the client service as well. The networks section at the end just declares which networks are available and their settings; you still need to say which service uses which network. By default, services only join the default network. If you want a service on an external network, you need to put that network on the service too.
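
For example, a minimal sketch using the names from your compose file:

services:
  client:
    container_name: myclient
    build:
      context: ./client
    networks:
      - all-apps-network

networks:
  all-apps-network:
    external: true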

Here you’re just redefining what default is. That’s not really the right solution.


Thanks @francislavoie, I reverted the default network per your advice. Sorry to resurrect this. I messed around with caddy-docker-proxy, but I still think that ultimately, no matter what, I’ll need two Caddy instances.

This represents the current system for two theoretical web apps (diagram omitted).

Why? Because each static website (mysite1.com, mysite2.com, etc.) is represented by a separate container, each bundled/built using Node and Vite, which is handled seamlessly by their Dockerfiles. The issue, of course, is that those website containers immediately die after finishing the build. The static assets exist in each container’s internal /srv folder, but that doesn’t matter if nothing is serving them.

To fix that issue, I use a “tiny Caddyfile” in each project’s docker compose, which serves the files on an internal port that the top-level primary Caddy proxies to:

:3000 {
	root * /srv
	encode gzip
	file_server
}

I don’t completely hate this. To your knowledge, would proxying from the top-level Caddy:

http://frontend.mysite1.com {
	handle {
		reverse_proxy mysite-client:3000
	}
}

to another Caddy:

:3000 {
	root * /srv
	encode gzip
	file_server
}

slow things down?

As far as I can tell, the ONLY way I could manage having a single primary-level Caddy would be if the static HTML Dockerfile copied its dist folder OUT to the host. That way the HTML files would be available to the top-level Caddy. This is apparently possible using Docker’s custom build outputs.
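
If you went that route, the build command might look something like this (a sketch; the --target stage name and paths are assumptions based on the earlier Dockerfile):

docker buildx build --target build --output type=local,dest=./client-dist ./client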

I’m actually surprised this has been so difficult to figure out. I would have thought that a VPS serving multiple sites, each with a Docker container for a backend and a static frontend, would be common.

I’m almost tempted to throw in the towel completely on static sites, and use NextJS so that the docker container that serves the frontend is just a Node Express wrapper.

It feels like this will continue to happen, particularly if I start to add Hugo, Jekyll, or other systems where the Dockerfile is really doing the building/compiling rather than the actual serving.


That’s not quite true. There are ways to have only one. Maybe it’s not as convenient with CDP if you specifically want to manage separate apps… separately, but it is possible.

You don’t need to have Caddy in the containers with your frontend code. You could just use a dummy command like CMD sleep infinity or CMD tail -f /dev/null in the Dockerfile, which keeps the container running, and then use volumes_from on your Caddy container to pull in the files from those running containers and serve them.
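
A minimal sketch of that kind of “file holder” Dockerfile (the build stage, base images, and paths are assumptions based on the earlier snippets):

FROM node:18-alpine AS build
WORKDIR /myapp
COPY . .
RUN npm ci && npm run build

FROM alpine
# Copy the built assets, then idle so the container (and its filesystem) stays up
COPY --from=build /myapp/dist /srv
CMD ["tail", "-f", "/dev/null"]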

This is probably impractical for you to set up, though, because using volumes from a container in an entirely different compose file isn’t ideal. Plus, volumes can’t be added dynamically to the main Caddy container, because volumes need to be set up when the container is created, not at runtime.

Nah, this approach is just fine. You’ll be able to serve tons of concurrent requests even with the proxy added. It’s obviously less efficient than having the front Caddy serve the files directly, but it’s never going to be enough of a difference for you unless you get to a scale where you’d need to load balance across multiple servers anyway.

You don’t need a handle there. You can remove that to save a couple lines of config.

I’m probably stating the obvious, but don’t forget to remove http:// when you go to production.

That would certainly be simpler, since with NextJS you could just have one container per site. And you get the advantage of SSR, which can help users have better initial page load performance since the initial render is handled by the server. But it depends on what kind of apps/sites you’re building and whether you consider that a help or not.

Maybe consider Astro? https://astro.build/ It’s a pretty slick tool for building static-ish sites. The “Why Astro?” page in their docs explains the philosophy.

