Docker Registry behind Caddy reverse proxy giving "EOF" error

1. My Caddy version (caddy version):

Via docker (caddy:latest): v2.0.0-rc.3 h1:z2H/QnaRscip6aZJxwTbghu3zhC88Vo8l/K57WUce4Q=

2. How I run Caddy:

a. System environment:

Ubuntu Server 20.04
Docker version 19.03.8, build afacb8b7f0

Pushing from client:
Pop_OS 20.04 (basically Ubuntu 20.04)
Docker version 19.03.8, build afacb8b7f0

b. Command:

I run all my docker containers via docker-compose:

docker-compose up -d

c. Service/unit/compose file:

The docker registry is also started via a Docker compose file:

version: '3'

networks:
  web:
    external: true

services:
  registry:
    image: registry:2
    restart: always
    volumes:
      - "./registry:/var/lib/registry"
    networks:
      - default
      - web
version: '3'

networks:
  web:
    external: true

services:
  caddy:
    image: caddy
    restart: always
    ports:
      - "80:80"
      - "443:443"
    volumes:
      - "./Caddyfile:/etc/caddy/Caddyfile:ro"
      - "./www:/www:ro"
      - "./config:/config"
      - "./data:/data"
    networks:
      - web

d. My complete Caddyfile or JSON config:

(auth) {
	basicauth {
		admin redacted
	}
}

redacted {
	import auth
	reverse_proxy /v2* http://registry_registry_1:5000
	reverse_proxy http://registry_registry-ui_1
}

3. The problem I’m having:

I’m trying to push a docker image to my private registry, like this:

docker push

This keeps failing, however. (Details below)

Note that this used to work fine when I was using Traefik instead of Caddy, which is why I’m fairly positive this is an issue with Caddy, somehow. (Or my configuration, of course.)

4. Error messages and/or full log output:

❯ docker push
The push refers to repository []
7fdb0a6602d3: Pushing [==================================================>]  3.703MB/3.703MB

Before it says “EOF”, it tells me “Retrying in X seconds” until eventually it gives up.

Running dockerd --debug in another window when trying to do the same thing doesn’t show me much interesting information either:

DEBU[2020-05-04T22:08:09.445125407+02:00] Calling HEAD /_ping                          
DEBU[2020-05-04T22:08:09.449518419+02:00] Calling POST /v1.40/images/ 
DEBU[2020-05-04T22:08:09.453669652+02:00] hostDir: /etc/docker/certs.d/ 
DEBU[2020-05-04T22:08:09.453776652+02:00] Trying to push to v2 
DEBU[2020-05-04T22:08:09.468914587+02:00] Pushing repository: 
DEBU[2020-05-04T22:08:09.469253576+02:00] Pushing layer: sha256:7fdb0a6602d30a4309a789b993cf5c363ba2a0262cfc5a449015682546943beb 
DEBU[2020-05-04T22:08:09.643584506+02:00] Assembling tar data for 9657298eea8b858d1a08a87db8051a0259a7cd11fad3e7ba4039b33df41dfef2 
ERRO[2020-05-04T22:08:09.653373300+02:00] Upload failed, retrying: EOF

The caddy log itself shows nothing when this happens.

The registry log shows this, which doesn’t look abnormal to me:

registry_1     | time="2020-05-04T20:08:09.640562023Z" level=info msg="response completed" go.version=go1.11.2 http.request.method=POST http.request.remoteaddr=redacted http.request.uri="/v2/ip/blobs/uploads/" http.request.useragent="docker/19.03.8 go/go1.13.8 git-commit/afacb8b7f0 kernel/5.4.0-7626-generic os/linux arch/amd64 UpstreamClient(Docker-Client/19.03.8 \(linux\))" http.response.duration=55.810875ms http.response.status=202 http.response.written=0 
registry_1     | - - [04/May/2020:20:08:09 +0000] "POST /v2/ip/blobs/uploads/ HTTP/1.1" 202 0 "" "docker/19.03.8 go/go1.13.8 git-commit/afacb8b7f0 kernel/5.4.0-7626-generic os/linux arch/amd64 UpstreamClient(Docker-Client/19.03.8 \\(linux\\))"

5. What I already tried:

I tried setting REGISTRY_STORAGE_REDIRECT_DISABLE=true in the registry configuration as recommended here. This made no difference.
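For reference, that setting is applied as an environment variable on the registry service; in a compose file it would look roughly like this (service name assumed from the config above):

```yaml
services:
  registry:
    image: registry:2
    environment:
      # Maps to storage.redirect.disable in the registry config;
      # stops the registry from redirecting clients to backend storage
      - REGISTRY_STORAGE_REDIRECT_DISABLE=true
```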

I tried pushing both a very small image (3.7 MB) and a big image (567 MB) with the same result.

6. Links to relevant resources:

PS: Apologies for redacting my domain name as well as my IP address; I assume that's fine.


This does not help but for the record: I have exactly the same problem, in the same context (moving from Traefik to Caddy).

Hmm. As a guess, maybe you could try setting the flush_interval subdirective of the reverse_proxy directive to -1?
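In Caddyfile terms, that suggestion would look like this (matcher and upstream taken from the config above):

```
reverse_proxy /v2* http://registry_registry_1:5000 {
	# -1 disables response buffering, flushing to the client immediately
	flush_interval -1
}
```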

That unfortunately did not help.

After I posted this thread I decided to do some snooping around with wireshark, and I did notice that Docker is seemingly sending the request with the PATCH method, but in the registry log it’s showing as a POST method. Could be a weird discrepancy in the logs though, but if Caddy is somehow “converting” the request to a different http method I suppose that could be a problem.

You can also add the following at the top of your Caddyfile to help with debugging:

{
	debug
}

It should add some useful information about the proxy request.

I went through the reverse_proxy code, I’m not seeing anything that would transform the method.

I’m using this Caddyfile:

redacted {
    reverse_proxy /v2/* web:5000 {
        header_up Docker-Distribution-Api-Version "registry/2.0"
        header_up X-Forwarded-Proto "https"
    }

    tls /etc/caddy/registry-cert.pem /etc/caddy/registry-privatekey.pem
}

The registry and Caddy are part of a docker-compose.yml-file (setting up GitLab behind Caddy)

This is running on an intranet server without direct internet access and using a cert signed by a local CA.

FYI @jok it’s unnecessary to set X-Forwarded-Proto in the latest version of Caddy, it’s now done for you automatically.
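Since Caddy now sets X-Forwarded-Proto on proxied requests itself, the block above could presumably be trimmed to something like this (upstream name kept from the config above):

```
reverse_proxy /v2/* web:5000 {
	header_up Docker-Distribution-Api-Version "registry/2.0"
}
```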

To clarify, are you saying that setting the Docker-Distribution-Api-Version header fixes the issue?

This fixed it for me! In particular, this header_up line did it:

reverse_proxy /v2/* http://registry_registry_1:5000 {
	header_up X-Forwarded-Proto "https"
}
I was able to omit the Docker-Distribution-Api-Version header and only use the X-Forwarded-Proto one. So perhaps there’s a bug there?


Ah, I see you’re still using v2.0.0-rc.3. Run docker pull caddy to get the latest and you shouldn’t need the X-Forwarded-Proto header anymore!

Ahh, I see. That would explain it! I’m actually doing docker pull but it’s not fetching the latest version, so I thought I was on the latest one already!

I don’t get the 2.0.0 release version of the image. I tried it from two different hosts, one Ubuntu 18.04 with docker-ce 19.03.8 and one with Centos 7 also with 19.03.8.
And different internet connectivity (one a vps on a cloud hoster, one on our intranet behind a proxy).

Gah. We’re looking into it, thanks. Looks like only some of the architectures got the v2 stable tag on Docker Hub.

I’ll let you know when it’s resolved!


Okay I think we’re good now!

$ docker pull caddy
Using default tag: latest
latest: Pulling from library/caddy
cbdbe7a5bc2a: Already exists 
b2c92d2df695: Already exists 
7d779d752c63: Already exists 
4dee063568bd: Pull complete 
Digest: sha256:3d56a37db3aec3ea117077d52d384bdce5fd0fe801a1971f5c5cd3b609a51b24
Status: Downloaded newer image for caddy:latest

$ docker run --rm caddy caddy version
v2.0.0 h1:pQSaIJGFluFvu8KDGDODV8u4/QRED/OPyIR+MWYYse8=

Sorry about that, I think it just took extra time to propagate.


Confirmed! I updated and tried pushing without the header_up line and it works great!


This topic was automatically closed 30 days after the last reply. New replies are no longer allowed.