Continuing the discussion from Docker Image for caddy server with all the addons:
I’m starting to think this is a bad idea. We really don’t want everyone building images via `curl | bash`.
I propose instead that we make an official image that builds the binary with a list of plugins passed in via an environment variable. It will create a larger image and be slow to start up the first time, but it will be more Docker-like.
curl getcaddy.com/docker | bash -s git,hugo
This would build an image called `local_caddy` using the Caddyfile in the current directory and the plugins git and hugo (or whatever plugins you specify).
What do you think?
- This is a good idea
- This is a horrible idea
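For concreteness, the script could generate a Dockerfile along these lines. This is purely a sketch: the base image, the `enable-plugins.sh` helper, and the paths are all assumptions, not the script’s real output.

```dockerfile
FROM golang:alpine

# Plugins requested on the command line (e.g. "git,hugo") are baked in
# at build time via a build arg; Caddy is statically compiled, so the
# plugin list must be fixed here.
ARG plugins=""

RUN apk add --no-cache git && go get -d github.com/mholt/caddy/caddy

# enable-plugins.sh is a hypothetical helper that would rewrite Caddy's
# plugin imports before compiling.
COPY enable-plugins.sh /usr/local/bin/
RUN enable-plugins.sh ${plugins} && go install github.com/mholt/caddy/caddy

# The Caddyfile from the current directory gets baked into the image.
COPY Caddyfile /etc/Caddyfile
EXPOSE 80 443
ENTRYPOINT ["/go/bin/caddy", "--conf", "/etc/Caddyfile"]
```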
Justification for making this
Caddy is statically compiled, so there is no way to add or remove plugins at runtime. This would allow users to take advantage of Docker with minimal knowledge of creating containers. Local Docker image builds are preferable because a prebuilt image would otherwise have to ship with all plugins, no plugins, or some arbitrary subset.
One other idea I just had would be to create an image on Docker Hub that uses the build server to download Caddy with plugins based on an environment variable passed using `--env`.
The problem with this is that if you don’t use `--volume` to expose `/usr/local/bin`, then it would grab a new binary from the build server with every restart (which is bad for multiple reasons).
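Roughly, the entrypoint for such an image would look something like the following. This is a sketch; the build-server URL format and the `PLUGINS` variable name are assumptions.

```shell
#!/bin/sh
# Hypothetical entrypoint: fetch a caddy binary with the plugins named
# in $PLUGINS from the build server, unless one is already present.
# Without a --volume on /usr/local/bin, the cached binary is lost when
# the container is recreated and gets downloaded again.
set -e
if [ ! -x /usr/local/bin/caddy ]; then
    curl -fsSL "https://caddyserver.com/download/build?plugins=${PLUGINS}" \
        -o /usr/local/bin/caddy
    chmod +x /usr/local/bin/caddy
fi
exec /usr/local/bin/caddy --conf /etc/Caddyfile
```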
I have created a caddy revproxy image with docker-gen, and it would be great if there were supported images available for common use cases.
Would you prefer an image that builds the binary locally whenever it’s started, or a script to build an image locally? The latter will start much faster after the initial build, while the former will delay a few seconds to compile the binary every time. (Although Go builds pretty fast, so this may not be an issue.)
Include all source code for Caddy+plugins and compile the caddy binary when you first start the container.
This will increase the image size but allow pulling the image straight from Docker Hub.
- This is better
- This is worse
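One way to sketch this compile-on-first-start variant (all names here are assumptions, not a real image): ship the Go toolchain plus the Caddy and plugin sources in the image, and have the entrypoint compile only if no binary exists yet.

```dockerfile
FROM golang:alpine
# Bake in the source for Caddy and its plugins; this is what makes the
# image large, but it can be pulled straight from Docker Hub.
RUN apk add --no-cache git && go get -d github.com/mholt/caddy/...

COPY docker-entrypoint.sh /usr/local/bin/
ENTRYPOINT ["docker-entrypoint.sh"]

# docker-entrypoint.sh (hypothetical) would do roughly:
#   [ -x /go/bin/caddy ] || go install github.com/mholt/caddy/caddy
#   exec /go/bin/caddy --conf /etc/Caddyfile
# so only the very first start pays the compile cost; restarts reuse
# the binary already in the container's filesystem.
```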
We should create some common, maintained images and push them to Docker Hub. It’s the easiest way to deploy Caddy with Docker. Then we can just link to the official Caddy Docker images?
Containers should start fast, so downloading Caddy on each start wouldn’t be the best approach.
The container would only have to compile it on the initial `docker run`; `docker restart` wouldn’t have to do it because the container already has the binary.
We don’t want to officially support an install method that doesn’t allow easy customization of plugins.
Yes, one compile during create would be OK, but the image tag/version should match the Caddy version inside.
That should be easier to understand and use. Support should also be easier, because each official build is the same.
It should be pretty easy to tag an image with a specific Caddy version.
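Tagging could be as simple as threading the version through a build arg. A sketch; the `CADDY_VERSION` arg name (and the version number used here) are assumptions.

```shell
# Build the image against a specific Caddy release and tag it to match,
# so the image tag always reflects the Caddy version inside.
docker build --build-arg CADDY_VERSION=0.9.3 -t caddy:0.9.3 .
```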
If we made that, do you think there would still be any use for `curl getcaddy.com/docker | bash` for building the image locally?
I like this idea, and it doesn’t seem all that hard. Here’s my take on how this could work (if you are going to build the image on the client) with docker-compose, a Dockerfile, and the ARG instruction (see the Dockerfile reference in the Docker documentation):
$ curl getcaddy.com/docker | bash -s git,hugo
1. Download a standard Dockerfile.
2. Create a docker-compose.yml with a build config that uses the current working directory as the build context; the `-s git,hugo` list gets passed to the Dockerfile via the `args` config in docker-compose.
3. At this point, let the user customize the docker-compose.yml, or just run `$ docker-compose up -d`.
4. Let docker-compose build the image and run it. Yeah, the initial build will take a couple of minutes.
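The generated compose file could look roughly like this. A sketch: the service name and the `plugins` arg name are assumptions.

```yaml
# Hypothetical docker-compose.yml written by the script: the current
# directory is the build context, and the plugin list from
# "bash -s git,hugo" is forwarded to the Dockerfile as a build arg.
version: "2"
services:
  caddy:
    build:
      context: .
      args:
        plugins: "git,hugo"
    ports:
      - "80:80"
      - "443:443"
```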
Do we really want to require docker-compose though? For that we should probably just have a doc showing an example.
I don’t think people should be using a script to set up a docker-compose.yml.
In fact, I’m not entirely sure building an image via `curl | bash` is a good idea anymore either. I think we would be better off making an image that builds the binary when it’s first started.
If I understand you correctly, this would be similar to how a lot of Docker maintainers provide images for different versions of their software. In other words, we’d have tags for different add-ons (e.g., `docker pull x/caddy:git`, `docker pull x/caddy:jwt`). This would be easier for users, but it would require quite a few tags to cover all add-ons and combinations, so I don’t think managing this would work.
I’m not a fan of building at container start. It goes against everything Docker was meant to be. The whole idea is to build before distribution.
I think different tags would be sufficient, supporting common configurations. Perhaps you could use some analytics from the Caddy download page to determine what the most common add-on combinations are.
Docker isn’t really a place where users would expect to customize the build at runtime like this anyways – rather, a Docker user would just make their own Dockerfile that suits their needs if they aren’t satisfied with the available options. To better serve users who want a customized build, we could create a Caddy base image to facilitate this (i.e. a Go environment with the Caddy source ready to go, and instructions on how to pull in the desired plugins and perform the build).
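Such a base image might be little more than a Go environment with the Caddy source checked out. A sketch; the paths are assumptions.

```dockerfile
# Hypothetical Caddy base image: a Go toolchain with the Caddy source
# ready to go. It builds nothing itself.
FROM golang:alpine
RUN apk add --no-cache git && go get -d github.com/mholt/caddy/caddy
WORKDIR /go/src/github.com/mholt/caddy

# A derived image would "go get" its desired plugins, add their import
# lines to Caddy's plugin registry, and run
#   go install github.com/mholt/caddy/caddy
# to produce its customized binary.
```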
Tl;dr I feel that we should use Docker in an idiomatic way.
We could tinker around with the `ONBUILD` instruction in the Caddy base image, and have users define what plugins they want in their directory and then compile. This is how the Rails onbuild starter image works, and I think it would be a really good best-of-both-worlds solution here.
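With `ONBUILD`, the base image defers the plugin wiring and the compile to the child build. A sketch; the image name, the plugins.txt convention, and the `enable-plugins.sh` helper are all assumptions.

```dockerfile
# Hypothetical caddy:onbuild base image. When a child image says
# "FROM caddy:onbuild", the ONBUILD instructions below run as the first
# steps of the child's build, not of this one.
FROM golang:alpine
RUN apk add --no-cache git && go get -d github.com/mholt/caddy/...

# The child's directory supplies the plugin list and the Caddyfile.
ONBUILD COPY plugins.txt /tmp/plugins.txt
# enable-plugins.sh is a hypothetical helper that registers each plugin
# named in plugins.txt before compiling.
ONBUILD RUN enable-plugins.sh /tmp/plugins.txt \
    && go install github.com/mholt/caddy/caddy
ONBUILD COPY Caddyfile /etc/Caddyfile

ENTRYPOINT ["/go/bin/caddy", "--conf", "/etc/Caddyfile"]
```

Under those assumptions, a user’s Dockerfile would shrink to a single `FROM caddy:onbuild` line sitting next to a plugins.txt and a Caddyfile.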
I just don’t like taking away the ability to easily customize plugins. If you put out a handful of images with common plugins, then users won’t get the most out of Caddy.
I’m starting to think we should just put an example Dockerfile in the docs and call it done.
Well, there’s still the onbuild thing, which is interesting and could solve everything.
Just as a test to see if it’d be fast enough I’ll start working on in-image building of Caddy in this repo.
If nothing else, it’d be neat.
No, I mean the `ONBUILD` Docker directive I linked to above.
Oh my bad! Hey that actually looks cool.
What would a user’s Dockerfile look like if they used that image?