How should an official Docker image work?

Continuing the discussion from Would you use getcaddy.com/docker to quickly building a local Caddy Docker image?:

After discussing the idea of using curl | bash to build a Docker image locally, I'm starting to think it's a bad idea.

So instead, I propose we create an official Docker image to do the same thing.

Proposal

Create an official image containing the Go compiler, Caddy source code, and source code for all of the plugins.

When the image is first started via docker run, it will build Caddy using the plugins defined in an environment variable: -e PLUGINS=git,hugo

The downside is this will create a larger image, but it's the only way I can see of officially supporting a Docker image for Caddy. With Alpine it shouldn't be TOO big, though. Edit: Just checked, and I think it would be around 30 MB.
What do you all think?
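To make the proposal concrete, here's a minimal sketch of what the image's entrypoint could look like, using the PLUGINS variable from above. The binary path and the plugin-wiring steps are placeholders, not a real build process:

```shell
#!/bin/sh
# Hypothetical entrypoint for the proposed image: build Caddy on the first
# `docker run` using the plugins named in $PLUGINS (e.g. -e PLUGINS=git,hugo),
# then reuse the binary on later restarts.
set -e

BIN=/usr/local/bin/caddy
PLUGINS="${PLUGINS:-}"

if [ ! -x "$BIN" ]; then
    # Split the comma-separated list into one name per iteration.
    for p in $(printf '%s' "$PLUGINS" | tr ',' ' '); do
        echo "enabling plugin: $p"
        # Placeholder: the real step would inject the plugin's import into
        # a generated main.go before building.
    done
    echo "building caddy -> $BIN"
    # go build -o "$BIN" .    # placeholder build command
fi

echo "starting caddy"
# exec "$BIN" "$@"            # a real entrypoint would exec the binary here
```

Since the binary lands inside the container's filesystem, a plain docker restart would skip the build and start immediately.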

Another downside I see is that it will increase docker run startup time, since the binary has to be built. Would the image itself contain all the source for all the plugins? As in, go get commands for all plugins, or just the Caddy source?

It would contain all plugin source so it's completely self-contained; the source for the current 11 plugins is <2 MB. Docker images are best when they include all their dependencies.

docker run would be slower, but what is the alternative?

Maybe some sort of caching? Save the binary to a volume and restore it when running with the same plugins.

Is that needed?
Compilation will probably take <15 seconds, and Go 1.7 should cut that in half.

Why would it need caching? Are people going to be removing the container very often? The only time you would need to destroy the container is if you change the plugins. The built binary remains inside the container until you run docker rm, and it will still be there after a docker restart.

But you're right, you could let the user mount --volume /usr/local/bin; I just think it's unnecessary.
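If volume caching were wanted, one way to restore the binary only when the plugin set matches is to key the cache on a normalized plugin list. A sketch, where the volume path comes from the thread but the key scheme is hypothetical:

```shell
#!/bin/sh
# Sketch: derive a cache key from the requested plugin list so a cached
# binary in a mounted volume is only reused for the same plugin set.
set -e

PLUGINS="${PLUGINS:-git,hugo}"

# Normalize: split on commas, sort, rejoin, so "hugo,git" == "git,hugo".
plugin_key() {
    printf '%s' "$1" | tr ',' '\n' | sort | paste -sd, -
}

KEY="$(plugin_key "$PLUGINS")"
CACHED="/usr/local/bin/caddy-$KEY"

if [ -x "$CACHED" ]; then
    echo "reusing cached binary for plugins: $KEY"
else
    echo "would build Caddy with plugins: $KEY and save to $CACHED"
fi
```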

Oh, and Go caches compilation of imported packages, so we might be able to do some optimization hacks around compiling.
For example: compile Caddy with all plugins included when the image is built, then at docker run time remove the ones that weren't requested and recompile. That should only take a couple of seconds.

It seems Alpine has some weird DNS behavior. I just realized my Caddyfile was silently failing because it was proxying to a DNS name instead of an IP. I'll investigate a solution tomorrow.

Since it doesn't add much to the image anyway, I'd just have everything built into the image. The ZZROTDesign/alpine-caddy project on GitHub (an Alpine Linux Docker container running Caddy) does exactly that.

That’s an awesome project! I will definitely be looking into how they did it later.

The only issue I see with how that one is done is it includes all plugins, which isn’t a very good default IMO.
@matt am I wrong in thinking this? Does 0.9 make including all plugins a non-issue?

Some plugins will increase the size of the binary/image significantly; others have external dependencies (e.g. the git plugin runs the git command).

The biggest question is that of trust. Right now, there are only a few plugins and we can look at each one individually. What happens when there are 100? Not just trusting a package not to have malicious code, but to not have significant bugs either.

This side-steps into a separate topic that we don’t have to get into here, but these are just a few things to consider.

Good point. So if we do builds in-container, we would have to include those external dependencies in the image as well.
I'm starting to see why there isn't an official image on Docker Hub yet. Not the simplest problem to solve!

I think it depends on the goals of using Caddy in Docker.

I still don’t see Docker use as relevant/necessary. I see it as an added, unnecessary complexity. So I’m afraid I won’t be able to be very helpful in figuring this out, at least from an advice standpoint. But I can help clarify some workings of Caddy if it helps…


Caddy in Docker is really about simplicity of use.
I'm running two large apps (GitLab and Discourse) on a single server, each in its own container. Being able to do the same with Caddy is really convenient: I can make sure it always restarts on crash or boot just by specifying --restart always, with no messing with a bunch of systemd files for each service.

Plus, I know if I completely mess things up I don't have to rebuild the entire VPS, just stop the containers.

Now I just need to learn docker-compose because it will probably make what I’m doing now seem way too complex.

tl;dr
I can run a bunch of different apps easily on a single server because of Docker.
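The setup described above (Caddy alongside app containers, with a restart policy) could be sketched in docker-compose roughly like this. The image name is an assumption, since no official image exists yet:

```yaml
# Hypothetical docker-compose.yml; "caddy" as the image name and the
# Caddyfile path are assumptions.
version: "2"
services:
  caddy:
    image: caddy
    restart: always        # same effect as `docker run --restart always`
    ports:
      - "80:80"
      - "443:443"
    volumes:
      - ./Caddyfile:/etc/Caddyfile
```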

An official Docker image sounds more like an official Caddy repository on Docker Hub.

That’s the idea, if we can come up with a good one that Matt is okay with labeling ‘official.’

I mentioned in another thread that there could be tags for all the common configurations, plus an onbuild tag similar to Rails, which might expect a user-defined list of desired plugins to be present and build a custom container based on that.
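The onbuild idea could look something like this sketch, mirroring how rails:onbuild works. The tag name, the plugins.txt convention, and the build-caddy helper are all hypothetical:

```dockerfile
# Hypothetical caddy:onbuild image. The ONBUILD triggers fire when a user
# builds FROM this image, picking up their plugin list.
FROM alpine:3.4
# (the base would also carry the Go toolchain and Caddy source)

ONBUILD COPY plugins.txt /caddy/plugins.txt
ONBUILD RUN build-caddy --plugins-from /caddy/plugins.txt   # hypothetical helper

# A user's entire Dockerfile could then be just:
#   FROM caddy:onbuild
# with a plugins.txt next to it listing one plugin per line.
```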

I think the use case would be identical to Nginx, which people use in docker all the time. It has a similar set of problems as well – Nginx can only have plugins added at compile time as well. If none of the official Nginx tags work for someone then they use a community image or make their own. All of this is very O.K., in my opinion.

I don’t think we should have an official Docker image. We should just have an example Dockerfile in the docs.

There just isn’t a good way to distribute via Docker without gimping the ability for users to pick and choose plugins.
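For what it's worth, the example Dockerfile in the docs could be as small as this sketch; the base image, package names, and build steps here are assumptions, and plugin selection is left to the user to wire in:

```dockerfile
# Hypothetical example Dockerfile a user could copy and extend with plugins.
FROM alpine:3.4
RUN apk add --no-cache go git ca-certificates
ENV GOPATH=/go
# Fetch and build Caddy from source; plugin imports would be added here.
RUN go get github.com/mholt/caddy/caddy
EXPOSE 80 443
ENTRYPOINT ["/go/bin/caddy"]
```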

There's no point in not putting out a Docker image… the Dockerfile for it is the example Dockerfile, which people can copy to make their own if need be.

What's the advantage of having no official image? What's the disadvantage of having one?

I’m having a hard time understanding this all or nothing mentality.

I’m just imagining a lot of people using Caddy via a Docker image which has a limited set of plugins and never taking advantage of all Caddy has to offer.

Is this problem only in my head? I overthink stuff =)