Super-sized Image Build Size for Caddy

1. The problem I’m having:

I’m trying to build Caddy 2.6.4 using Docker and xcaddy (with the Cloudflare DNS module). The build works, but the image is more than double the size of the customized build you can download from the website (86MB vs 40MB).

I’m trying to learn how to run servers, use Docker/Podman, and a bunch of other things all at once. If this belongs in a Docker or xcaddy forum, please feel free to delete. I’m still new to all of this. I received great help a couple of weeks ago in this forum so I thought I would start here to see if anyone could help me understand what I’m missing.

2. Error messages and/or full log output:

No error messages. The full image build log can be viewed on GitHub.

3. Caddy version:

v2.6.4 h1:2hwYqiRwk1tf3VruhMpLcYTg+11fCdr8S3jhNAdnPy8=

4. How I installed and ran Caddy:

I used the following multi-stage Dockerfile to keep the file size down (theoretically):

FROM caddy:2.6.4-builder-alpine AS build

RUN XCADDY_SKIP_CLEANUP=0 xcaddy build --with github.com/caddy-dns/cloudflare

FROM caddy:2.6.4-alpine AS final

COPY --from=build /usr/bin/caddy /usr/bin/caddy

a. System environment:

  • Linode with Fedora 37 image—all software and security updates installed
  • AMD EPYC 7713 64-Core Processor
  • Podman: v4.4.2 w/ Go v1.19.6 for OS/Arch linux/amd64

b. Command:

I use GitHub Actions to automatically build and tag my Docker images. This is the GitHub Action that builds the image:

name: Docker Image

on:
  push:
    branches:
      - main
      - seed

    tags:
      - v*

  pull_request:

env:
  IMAGE_NAME: caddy-cf

jobs:
  push:
    runs-on: ubuntu-latest
    permissions:
      packages: write
      contents: read

    steps:
      - uses: actions/checkout@v3

      - name: Build image
        run: docker build . --file Dockerfile --tag $IMAGE_NAME --label "runnumber=${GITHUB_RUN_ID}"

      - name: Log in to registry
        run: echo "${{ secrets.GITHUB_TOKEN }}" | docker login ghcr.io -u ${{ github.actor }} --password-stdin

      - name: Push image
        run: |
          IMAGE_ID=ghcr.io/${{ github.repository_owner }}/$IMAGE_NAME

          IMAGE_ID=$(echo $IMAGE_ID | tr '[A-Z]' '[a-z]')
          VERSION=$(echo "${{ github.ref }}" | sed -e 's,.*/\(.*\),\1,')
          [[ "${{ github.ref }}" == "refs/tags/"* ]] && VERSION=$(echo $VERSION | sed -e 's/^v//')
          [ "$VERSION" == "main" ] && VERSION=latest
          echo IMAGE_ID=$IMAGE_ID
          echo VERSION=$VERSION
          docker tag $IMAGE_NAME $IMAGE_ID:$VERSION
          docker push $IMAGE_ID:$VERSION
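The tag-derivation logic in the `Push image` step can be exercised on its own. Here’s a small sketch (the `derive_version` helper is just for illustration; it mirrors the sed/test lines above):

```shell
# Hypothetical helper mirroring the workflow's VERSION logic
derive_version() {
  ref="$1"
  # Strip everything up to the last '/'
  VERSION=$(echo "$ref" | sed -e 's,.*/\(.*\),\1,')
  # For tag refs, drop a leading 'v'
  case "$ref" in
    refs/tags/*) VERSION=$(echo "$VERSION" | sed -e 's/^v//') ;;
  esac
  # The main branch publishes as 'latest'
  [ "$VERSION" = "main" ] && VERSION=latest
  echo "$VERSION"
}

derive_version "refs/heads/main"   # -> latest
derive_version "refs/tags/v2.6.4"  # -> 2.6.4
derive_version "refs/heads/seed"   # -> seed
```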

This builds the image, and then I pull it and create a container. It runs great; I’m just curious about the file-size difference.

podman pull ghcr.io/jaredhowland/caddy-cf:2.6.4

# Run with the following command:
caddy run --config /etc/caddy/Caddyfile --adapter caddyfile

c. Service/unit/compose file:

Included above

d. My complete Caddy config:

Not relevant, as Caddy runs perfectly (I can include it if it would really help). The file-size difference exists before I even spin up the container, so that is my only problem. It’s more Caddy-adjacent than necessarily a Caddy issue, but I don’t know enough to say for sure, so I’m starting here.

5. Links to relevant resources:

All of this can be seen in my Github repository where I’m trying to create this:

https://github.com/jaredhowland/caddy-cloudflare

I know there are probably better ways of doing this, and several other projects that already do this, or similar things. But I’m trying to learn so that I can expand on this later and do things I haven’t seen in other projects.

It’s because of this part of your Dockerfile. The original caddy:2.6.4-alpine image has a vanilla Caddy binary, and then the COPY creates a new image layer on top with the custom binary.

Docker layering means that both binaries exist in the layer chain. You could squash the layers with the --squash flag when building the image, and this would flatten all the layers so you only end up with a single layer, and the vanilla binary would be gone.
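For reference, the squash flags differ between engines; a hedged sketch (Docker’s `--squash` is an experimental daemon feature, while Podman builds it in):

```shell
# Docker: requires the daemon's experimental features to be enabled
docker build --squash -t caddy-cf .

# Podman: --squash merges only the layers this build adds;
# --squash-all also flattens the base image's layers into one
podman build --squash-all -t caddy-cf .
```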

What we could do as maintainers of the official image is provide a variant of the base image without /usr/bin/caddy in it, so that this pattern wouldn’t include the original vanilla binary first. We haven’t done that yet, because a 40MB difference is typically not enough of a concern, and it’s kind of weird to ship a non-functional image (I’m not sure the Docker official images team would be happy with it).

So, I’m assuming that if I changed

FROM caddy:2.6.4-alpine AS final

to something like

FROM alpine:3.17.3 AS final

it wouldn’t work, because the base image has other customizations in more locations than /usr/bin/caddy. Is that what’s going on? If so, I think I understand the problem and will just live with the extra 40MB.

What would happen if I changed it to the following?

FROM caddy:2.6.4-builder-alpine AS build

RUN XCADDY_SKIP_CLEANUP=0 xcaddy build --with github.com/caddy-dns/cloudflare

FROM caddy:2.6.4-alpine AS final

# NEW LINE TO REMOVE `/usr/bin/caddy`
RUN rm -f /usr/bin/caddy

COPY --from=build /usr/bin/caddy /usr/bin/caddy

Would that remove the base image’s binary and just replace it with the custom one, or would it mess up other parts of the base image that I need for it to work properly?

Essentially, yeah. But it’s not that much; you could copy it into your own Dockerfile if you want, I guess. See:

https://github.com/caddyserver/caddy-docker/blob/master/2.6/alpine/Dockerfile
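Adapting that into your own plain-alpine final stage would look roughly like this sketch (the paths and env vars are modeled on the linked Dockerfile from memory; check that file for the exact contents before relying on it):

```dockerfile
FROM alpine:3.17 AS final

RUN apk add --no-cache ca-certificates

# Directories the official image creates; Caddy's XDG env vars point at them
RUN mkdir -p /config/caddy /data/caddy /etc/caddy /usr/share/caddy

ENV XDG_CONFIG_HOME=/config \
    XDG_DATA_HOME=/data

COPY --from=build /usr/bin/caddy /usr/bin/caddy

EXPOSE 80 443 443/udp 2019

CMD ["caddy", "run", "--config", "/etc/caddy/Caddyfile", "--adapter", "caddyfile"]
```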

This would just add another layer. It wouldn’t help at all.

I suggest you do some reading about how Docker’s layering works.
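A quick way to see this copy-on-write behavior for yourself (a hypothetical demo, unrelated to Caddy):

```dockerfile
FROM alpine:3.17
# Layer 1: create a ~40MB file
RUN dd if=/dev/zero of=/big bs=1M count=40
# Layer 2: deleting it only records a "whiteout" entry;
# the 40MB still lives in layer 1, so the image stays big
RUN rm /big
```

Running `docker history` (or `podman history`) on the resulting image shows how much each layer contributes to the total size.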

Perfect, thank you! This helps me understand a little better. I will read up on how Docker layering works.
