Caddy, Docker, localhost and Windows

1. Caddy version:

v2.6.3 h1:QRVBNIqfpqZ1eJacY44I6eUC1OcxQ8D04EKImzpj7S8=

2. How I installed, and run Caddy:

a. System environment:

Docker

b. Command:

docker-compose up

c. Service/unit/compose file:

version: "3.8"
services:
  go:
    build: ./
    ports:
      - "3000"
    volumes:
      - ./:/telos
  caddy:
    image: caddy:latest
    ports:
      - "2080:80"
      - "2015:443"
    volumes:
      - ./Caddyfile:/etc/caddy/Caddyfile
      - ./tmp/caddy/pki:/data/caddy/pki
      - ./public:/public

d. My complete Caddy config:

:2015 {
	encode zstd gzip
	log

	handle /api/* {
		reverse_proxy localhost:3000
	}

	handle {
		root * public
		try_files {path} index.html
		file_server
	}
}

e. My complete Dockerfile:

FROM golang:1.19

WORKDIR /telos

ADD public /public

RUN go install github.com/cosmtrek/air@latest

COPY go.mod ./
RUN go mod download

CMD ["air", "-c", ".air.toml"]

3. The problem I’m having:

Howdy! I’m new to Caddy and Docker, and I’ve researched this as best I could. I have a Go server/API that works fine, but I want to take advantage of Caddy’s full-featured server and automatic HTTPS capability. I created the Caddyfile included here and everything works great. However, it would be nice if I could bundle everything up for team members and take advantage of live Go rebuilding, so I started with the linked resource, which was apparently written for some Linux variant, as it offers this command as a way to get Caddy to trust certificates: XDG_DATA_HOME=$(pwd)/tmp caddy trust, something Windows doesn’t understand. I have seen many replies from @francislavoie saying there is no automatic way to get Caddy and Docker to handle this trust issue; if so, I can just walk away and tell my team that Docker can’t be involved in the development of this project.

At this point I have 2 problems:

  1. Caddy won’t even serve a non-HTTPS version of the site. As you can see, I both copy the static file folder using ADD in the Dockerfile and include ./public:/public in my docker-compose.yml, but browsing to http://localhost:2080/ results in this error page:
This page isn’t working
localhost didn’t send any data.
ERR_EMPTY_RESPONSE
  2. Once I get Caddy in Docker to serve my files, I don’t know how to handle the HTTPS situation. I’d prefer something automatic, but if I have to tell my team to copy some files to some place, I need to know what and where.

4. Error messages and/or full log output:

2023-02-11 00:26:33 {"level":"info","ts":1676093193.210777,"msg":"using provided configuration","config_file":"/etc/caddy/Caddyfile","config_adapter":"caddyfile"}
2023-02-11 00:26:33 {"level":"info","ts":1676093193.2168117,"logger":"admin","msg":"admin endpoint started","address":"localhost:2019","enforce_origin":false,"origins":["//localhost:2019","//[::1]:2019","//127.0.0.1:2019"]}
2023-02-11 00:26:33 {"level":"info","ts":1676093193.2213476,"logger":"tls.cache.maintenance","msg":"started background certificate maintenance","cache":"0xc000033180"}
2023-02-11 00:26:33 {"level":"info","ts":1676093193.2218623,"logger":"tls","msg":"cleaning storage unit","description":"FileStorage:/data/caddy"}
2023-02-11 00:26:33 {"level":"info","ts":1676093193.2218924,"logger":"http.log","msg":"server running","name":"srv0","protocols":["h1","h2","h3"]}
2023-02-11 00:26:33 {"level":"info","ts":1676093193.2219105,"logger":"tls","msg":"finished cleaning storage units"}
2023-02-11 00:26:33 {"level":"info","ts":1676093193.2259758,"msg":"autosaved config (load with --resume flag)","file":"/config/caddy/autosave.json"}
2023-02-11 00:26:33 {"level":"info","ts":1676093193.2260005,"msg":"serving initial configuration"}
2023-02-11 00:28:05 {"level":"info","ts":1676093285.5991693,"msg":"shutting down apps, then terminating","signal":"SIGTERM"}
2023-02-11 00:28:05 {"level":"warn","ts":1676093285.599221,"msg":"exiting; byeee!! 👋","signal":"SIGTERM"}
2023-02-11 00:28:05 {"level":"info","ts":1676093285.600214,"logger":"tls.cache.maintenance","msg":"stopped background certificate maintenance","cache":"0xc000033180"}
2023-02-11 00:28:05 {"level":"info","ts":1676093285.6004004,"logger":"admin","msg":"stopped previous server","address":"localhost:2019"}
2023-02-11 00:28:05 {"level":"info","ts":1676093285.600463,"msg":"shutdown complete","signal":"SIGTERM","exit_code":0}
2023-02-11 00:28:13 {"level":"info","ts":1676093293.152982,"msg":"using provided configuration","config_file":"/etc/caddy/Caddyfile","config_adapter":"caddyfile"}
2023-02-11 00:28:13 {"level":"info","ts":1676093293.15863,"logger":"admin","msg":"admin endpoint started","address":"localhost:2019","enforce_origin":false,"origins":["//localhost:2019","//[::1]:2019","//127.0.0.1:2019"]}
2023-02-11 00:28:13 {"level":"info","ts":1676093293.1634111,"logger":"tls.cache.maintenance","msg":"started background certificate maintenance","cache":"0xc0003fec40"}
2023-02-11 00:28:13 {"level":"info","ts":1676093293.165817,"logger":"http.log","msg":"server running","name":"srv0","protocols":["h1","h2","h3"]}
2023-02-11 00:28:13 {"level":"info","ts":1676093293.1658442,"logger":"tls","msg":"cleaning storage unit","description":"FileStorage:/data/caddy"}
2023-02-11 00:28:13 {"level":"info","ts":1676093293.1658866,"logger":"tls","msg":"finished cleaning storage units"}
2023-02-11 00:28:13 {"level":"info","ts":1676093293.1661632,"msg":"autosaved config (load with --resume flag)","file":"/config/caddy/autosave.json"}
2023-02-11 00:28:13 {"level":"info","ts":1676093293.1661751,"msg":"serving initial configuration"}
2023-02-11 00:29:23 {"level":"info","ts":1676093363.8447692,"msg":"shutting down apps, then terminating","signal":"SIGTERM"}
2023-02-11 00:29:23 {"level":"warn","ts":1676093363.8447964,"msg":"exiting; byeee!! 👋","signal":"SIGTERM"}
2023-02-11 00:29:23 {"level":"info","ts":1676093363.8458045,"logger":"tls.cache.maintenance","msg":"stopped background certificate maintenance","cache":"0xc0003fec40"}
2023-02-11 00:29:23 {"level":"info","ts":1676093363.8458636,"logger":"admin","msg":"stopped previous server","address":"localhost:2019"}
2023-02-11 00:29:23 {"level":"info","ts":1676093363.8458705,"msg":"shutdown complete","signal":"SIGTERM","exit_code":0}
2023-02-11 00:30:06 {"level":"info","ts":1676093406.8709257,"msg":"using provided configuration","config_file":"/etc/caddy/Caddyfile","config_adapter":"caddyfile"}
2023-02-11 00:30:06 {"level":"info","ts":1676093406.8780737,"logger":"admin","msg":"admin endpoint started","address":"localhost:2019","enforce_origin":false,"origins":["//[::1]:2019","//127.0.0.1:2019","//localhost:2019"]}
2023-02-11 00:30:06 {"level":"info","ts":1676093406.8811905,"logger":"tls.cache.maintenance","msg":"started background certificate maintenance","cache":"0xc0000f4d20"}
2023-02-11 00:30:06 {"level":"info","ts":1676093406.8816216,"logger":"tls","msg":"cleaning storage unit","description":"FileStorage:/data/caddy"}
2023-02-11 00:30:06 {"level":"info","ts":1676093406.8818042,"logger":"tls","msg":"finished cleaning storage units"}
2023-02-11 00:30:06 {"level":"info","ts":1676093406.8816645,"logger":"http.log","msg":"server running","name":"srv0","protocols":["h1","h2","h3"]}
2023-02-11 00:30:06 {"level":"info","ts":1676093406.8848062,"msg":"autosaved config (load with --resume flag)","file":"/config/caddy/autosave.json"}
2023-02-11 00:30:06 {"level":"info","ts":1676093406.8849547,"msg":"serving initial configuration"}
2023-02-11 00:31:09 {"level":"info","ts":1676093469.900885,"msg":"shutting down apps, then terminating","signal":"SIGTERM"}
2023-02-11 00:31:09 {"level":"warn","ts":1676093469.900938,"msg":"exiting; byeee!! 👋","signal":"SIGTERM"}
2023-02-11 00:31:09 {"level":"info","ts":1676093469.9018898,"logger":"tls.cache.maintenance","msg":"stopped background certificate maintenance","cache":"0xc0000f4d20"}
2023-02-11 00:31:09 {"level":"info","ts":1676093469.9020283,"logger":"admin","msg":"stopped previous server","address":"localhost:2019"}
2023-02-11 00:31:09 {"level":"info","ts":1676093469.9020677,"msg":"shutdown complete","signal":"SIGTERM","exit_code":0}
2023-02-11 00:42:14 {"level":"info","ts":1676094134.6171691,"msg":"using provided configuration","config_file":"/etc/caddy/Caddyfile","config_adapter":"caddyfile"}
2023-02-11 00:42:14 {"level":"info","ts":1676094134.6218672,"logger":"admin","msg":"admin endpoint started","address":"localhost:2019","enforce_origin":false,"origins":["//localhost:2019","//[::1]:2019","//127.0.0.1:2019"]}
2023-02-11 00:42:14 {"level":"info","ts":1676094134.6225202,"logger":"tls.cache.maintenance","msg":"started background certificate maintenance","cache":"0xc0002b8a10"}
2023-02-11 00:42:14 {"level":"info","ts":1676094134.6229713,"logger":"http.log","msg":"server running","name":"srv0","protocols":["h1","h2","h3"]}
2023-02-11 00:42:14 {"level":"info","ts":1676094134.6230016,"logger":"tls","msg":"cleaning storage unit","description":"FileStorage:/data/caddy"}
2023-02-11 00:42:14 {"level":"info","ts":1676094134.6230485,"logger":"tls","msg":"finished cleaning storage units"}
2023-02-11 00:42:14 {"level":"info","ts":1676094134.623403,"msg":"autosaved config (load with --resume flag)","file":"/config/caddy/autosave.json"}
2023-02-11 00:42:14 {"level":"info","ts":1676094134.6234171,"msg":"serving initial configuration"}

5. What I already tried:

Other than searching this site and other support communities, nothing. Caddy runs fine with this configuration under Windows, but fails under Docker, and I have no idea what those logs mean.

6. Links to relevant resources:

There’s a mismatch here: you’re listening on port 2015 in Caddy, but you’re only publishing ports 80 and 443 from the container to the host (as different host ports), so nothing on the host reaches the port Caddy is actually listening on.

This is also a mismatch. You probably meant to use root * /public. The distinction between an absolute and a relative path is important: Caddy’s default working directory is /srv, so with public (relative) the root resolves to /srv/public, which isn’t right.

This isn’t a good idea; your volume should be for /data as a whole, not just the pki subdirectory. Everything in there should be persisted.

The caddy trust command was rewritten a few versions ago to read the root CA cert from Caddy’s admin API instead of from a file. This was done because the old behavior depended on the HOME of the current user, which made it too easy to use the command incorrectly.

So the way to do it now would be to bind the admin endpoint to the host with - 127.0.0.1:2019:2019 and then change the admin endpoint in your config to admin :2019. Then you can run caddy trust on the host and it should be able to read from the API (which defaults to localhost:2019) to get the root CA cert.
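Putting those pieces together, a minimal sketch (the exact port mapping and flag usage here are one reading of the advice above, not a tested recipe):

```Caddyfile
# docker-compose.yml side (sketch): publish the admin port to localhost only:
#   ports:
#     - "127.0.0.1:2019:2019"

# Caddyfile global options: bind the admin endpoint to all interfaces
# inside the container so the published port can reach it:
{
	admin :2019
}

# Then on the host, with a local caddy binary:
#   caddy trust --address localhost:2019
```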


Thanks so much for your response. The quick changes I made since my original post, plus your insights, have led me to a docker-compose.yml that looks like this:

version: "3.8"
services:
  go:
    build: ./
    ports:
      - "3000"
    volumes:
      - ./:/telos
  caddy:
    image: caddy:latest
    ports:
      - "2015:2015"
    volumes:
      - ./Caddyfile:/etc/caddy/Caddyfile
      - ./public:/srv
      - caddy_data:/data
      - caddy_config:/config
volumes:
  caddy_data:
    external: true
  caddy_config:

and a Caddyfile that looks like this:

:2015 {
	encode zstd gzip
	log

	handle /api/* {
		reverse_proxy localhost:3000
	}

	handle {
		root * /srv
		try_files {path} /index.html
		file_server
	}
}

This gets me a working Caddy in terms of HTTP. It doesn’t do anything HTTPS-wise, and I’m afraid you need to talk to me like a 5-year-old, because

So the way to do it now would be to bind the admin endpoint to the host with - 127.0.0.1:2019:2019 and then change the admin endpoint in your config to admin :2019. Then you can run caddy trust on the host and it should be able to read from the API (which defaults to localhost:2019) to get the root CA cert.

I don’t understand at all. I am on Windows as a host and as I mentioned Caddy runs fine there (first launch I got a dialog to trust and that was it) so any additional help is greatly appreciated.

That looks much better :+1:

Keep in mind this is only allowing HTTP. If you want HTTPS, then you’ll probably want to bind ports 80 and 443 to the host. Is there any particular reason you can’t bind those ports?

I’m honestly not sure where to start. But uh, here goes?

For clients on the host machine to trust connections to Caddy, they need to have Caddy’s root CA cert installed in the system’s trust store. When running Caddy on the host, this is really simple because Caddy can attempt to install certs on its own (and you get that dialog/prompt to accept).

When Caddy is running inside of Docker, since it is an isolated process, it can’t ask Windows to install the root CA cert. So to automate installation, you would have to run the caddy trust CLI command on the host and have that Caddy process communicate via the admin API with the Caddy running inside the container; the CLI command can then perform the installation. To make that happen, you’d have to configure the Caddy inside Docker to expose its admin API to the host, because by default the admin API is not accessible from outside the container itself.

Your other option is manually installing the root CA cert, which is easy enough: find it in the data volume at /data/caddy/pki/authorities/local/root.crt and install it into the various trust stores (remember that modern browsers now have their own trust stores separate from the host OS, so you might have to install it into those manually as well).
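A sketch of that manual route (the service name `caddy` and use of `certutil` are assumptions; the in-container path follows the volume layout above, and `docker compose cp` needs a reasonably recent Compose):

```shell
# Copy the root CA cert out of the running caddy container:
docker compose cp caddy:/data/caddy/pki/authorities/local/root.crt .

# Install it into the Windows system root store (elevated prompt):
certutil -addstore -f "ROOT" root.crt
```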

I’m not sure what other information you’re missing or not understanding, so you’ll need to ask more specific questions.


I host 6 sites through IIS on this server already, and each is bound to 80 and 443 for its particular domain name. If I go to plain old localhost on the machine I get HTTP Error 404. The requested resource is not found., and if I go to localhost:443 I get This site can’t be reached with ERR_CONNECTION_RESET, which I assume are being generated by IIS somehow despite all of my sites having bindings. Bottom line: I use high port numbers for development, and Caddy runs fine on my system until Docker gets involved. I understand that if I could bind Docker to a domain this would probably be easier, but I’m in a shared development environment, and while we could go the fake-domain-in-hosts route, I figured there must be a localhost-on-any-port solution. When I run Caddy with my :2015 Caddyfile under Windows, all is well with HTTPS, so I was hoping to do the same in Docker with a minimal amount of hoop jumping for the other team members.

:stuck_out_tongue:

I added another port to the docker-compose.yml hoping that would give me the access you’re referring to:

version: "3.8"
services:
  go:
    build: ./
    ports:
      - "3000"
    volumes:
      - ./:/telos
  caddy:
    image: caddy:latest
    ports:
      - "2015:2015"
      - "2019:2019"
    volumes:
      - ./Caddyfile:/etc/caddy/Caddyfile
      - ./public:/srv
      - caddy_data:/data
      - caddy_config:/config
volumes:
  caddy_data:
    external: true
  caddy_config:

But if I try curl localhost:2019/config/ I get curl: (52) Empty reply from server.
I tried adding

{
	admin 0.0.0.0:2019
}

to the top of my Caddyfile but then curl 0.0.0.0:2019/config/ returned curl: (7) Failed to connect to 0.0.0.0 port 2019 after 1 ms: Address not available.
I tried another suggestion, using “host” networking in Docker:

...
    image: caddy:latest
    network_mode: "host"
...

Now curl localhost:2019/config/ got me curl: (7) Failed to connect to localhost port 2019 after 1202 ms: Connection refused. And the previously working :2015 was broken.
So I give up and just have to ask: how do I make the admin port work on the host?

I cannot find this directory structure. If I go to Docker Desktop, choose Volumes, then caddy_data, it shows a folder icon next to “caddy” that does nothing when expanded. So how are you supposed to get to this data? If I search my drive, I find my Caddy certificates (from outside of Docker, I assume) at C:\Users\Administrator\AppData\Roaming\Caddy\pki\authorities\local:

02/10/2023  02:41 PM               680 intermediate.crt
02/10/2023  02:41 PM               227 intermediate.key
02/10/2023  02:41 PM               631 root.crt
02/10/2023  02:41 PM               227 root.key

Are these the files? I don’t think this is the way, given that you said this process requires “various trust stores”. I’d prefer help getting the admin interface accessible, plus the steps needed to accomplish the certificate installation you mentioned, first.
I apologize for all of these Docker questions; this is a Caddy community, and Caddy runs fine for me outside of Docker. But I’d really like to solve this, as the web is filled with broken or non-localhost articles on this subject, and out of the 20+ I tried, not a single one works for me.

That’s because you never actually configured Caddy inside Docker to use HTTPS, so Caddy never had a reason to generate its local CA.

Like I said earlier, be careful about binding 2019 fully on the host, it’s better to use 127.0.0.1:2019:2019 to make sure only localhost can access the admin API.

What’s in your Caddy container’s logs? It probably failed to start up.

That’s the storage for your Caddy running on the host machine (not the one in Docker).

I tried this change to the docker-compose.yml file

version: "3.8"
services:
  go:
    build: ./
    ports:
      - "3000"
    volumes:
      - ./:/telos
  caddy:
    image: caddy:latest
    restart: on-failure
    ports:
      - "2015:2015"
      - "127.0.0.1:2019:2019"
    volumes:
      - ./Caddyfile:/etc/caddy/Caddyfile
      - ./public:/srv
      - caddy_data:/data
      - caddy_config:/config
volumes:
  caddy_data:
    external: true
  caddy_config:

and both curl localhost:2019/config/ and curl 127.0.0.1:2019/config/ returned curl: (52) Empty reply from server.
If that’s not the right way to specify 127.0.0.1:2019:2019, then where do I put it? This is my working Caddyfile, which does work over HTTP at :2015:

:2015 {
	encode zstd gzip
	log

	handle /api/* {
		reverse_proxy localhost:3000
	}

	handle {
		root * /srv
		try_files {path} /index.html
		file_server
	}
}

and returns this startup logging:

purdy-caddy-1  | {"level":"info","ts":1676234130.0888047,"msg":"using provided configuration","config_file":"/etc/caddy/Caddyfile","config_adapter":"caddyfile"}
purdy-caddy-1  | {"level":"info","ts":1676234130.0946658,"logger":"admin","msg":"admin endpoint started","address":"localhost:2019","enforce_origin":false,"origins":["//localhost:2019","//[::1]:2019","//127.0.0.1:2019"]}
purdy-caddy-1  | {"level":"info","ts":1676234130.0955236,"logger":"http.log","msg":"server running","name":"srv0","protocols":["h1","h2","h3"]}
purdy-caddy-1  | {"level":"info","ts":1676234130.0957098,"logger":"tls.cache.maintenance","msg":"started background certificate maintenance","cache":"0xc0001552d0"}
purdy-caddy-1  | {"level":"info","ts":1676234130.0958946,"logger":"tls","msg":"cleaning storage unit","description":"FileStorage:/data/caddy"}
purdy-caddy-1  | {"level":"info","ts":1676234130.095915,"logger":"tls","msg":"finished cleaning storage units"}
purdy-caddy-1  | {"level":"info","ts":1676234130.096069,"msg":"autosaved config (load with --resume flag)","file":"/config/caddy/autosave.json"}
purdy-caddy-1  | {"level":"info","ts":1676234130.09608,"msg":"serving initial configuration"}

You’re running this on the host machine? Use curl -v, it should show some more detail.

I’m not sure if this is a weird interaction with networking for Docker on Windows. But in general I don’t trust Docker on Windows because it’s a Linux tool, and on Windows it’s virtualized.

I erased a rant about Docker :grinning: and the -v output from my old configuration, as I’ve made a change that gets me admin access.

Caddyfile:

{
    admin 0.0.0.0:2019
}
:2015 {
	encode zstd gzip
	log

	handle /api/* {
		reverse_proxy localhost:3000
	}

	handle {
		root * /srv
		try_files {path} /index.html
		file_server
	}
}

docker-compose.yml:

version: "3.8"
services:
  go:
    build: ./
    ports:
      - "3000"
    volumes:
      - ./:/telos
  caddy:
    image: caddy:latest
    restart: on-failure
    ports:
      - "2015:2015"
      - "2019:2019"
    volumes:
      - ./Caddyfile:/etc/caddy/Caddyfile
      - ./public:/srv
      - caddy_data:/data
      - caddy_config:/config
volumes:
  caddy_data:
    external: true
  caddy_config:

curl localhost:2019/config/ -v

*   Trying 127.0.0.1:2019...
* Connected to localhost (127.0.0.1) port 2019 (#0)
> GET /config/ HTTP/1.1
> Host: localhost:2019
> User-Agent: curl/7.83.1
> Accept: */*
>
* Mark bundle as not supporting multiuse
< HTTP/1.1 200 OK
< Content-Type: application/json
< Trailer: ETag
< Date: Mon, 13 Feb 2023 04:35:44 GMT
< Transfer-Encoding: chunked
<
{"admin":{"listen":"0.0.0.0:2019"},"apps":{"http":{"servers":{"srv0":{"listen":[":2015"],"logs":{},"routes":[{"handle":[{"encodings":{"gzip":{},"zstd":{}},"handler":"encode","prefer":["zstd","gzip"]}]},{"group":"group2","handle":[{"handler":"subroute","routes":[{"handle":[{"handler":"reverse_proxy","upstreams":[{"dial":"localhost:3000"}]}]}]}],"match":[{"path":["/api/*"]}]},{"group":"group2","handle":[{"handler":"subroute","routes":[{"handle":[{"handler":"vars","root":"/srv"}]},{"handle":[{"handler":"rewrite","uri":"{http.matchers.file.relative}"}],"match":[{"file":{"try_files":["{http.request.uri.path}","/index.html"]}}]},{"handle":[{"handler":"file_server","hide":["/etc/caddy/Caddyfile"]}]}]}]}]}}}}}
* Connection #0 to host localhost left intact

Caddy log:

purdy-caddy-1  | {"level":"info","ts":1676262941.9825013,"msg":"using provided configuration","config_file":"/etc/caddy/Caddyfile","config_adapter":"caddyfile"}
purdy-caddy-1  | {"level":"warn","ts":1676262941.9883637,"msg":"Caddyfile input is not formatted; run the 'caddy fmt' command to fix inconsistencies","adapter":"caddyfile","file":"/etc/caddy/Caddyfile","line":2}
purdy-caddy-1  | {"level":"info","ts":1676262941.9890082,"logger":"admin","msg":"admin endpoint started","address":"0.0.0.0:2019","enforce_origin":false,"origins":["//0.0.0.0:2019"]}
purdy-caddy-1  | {"level":"warn","ts":1676262941.9890442,"logger":"admin","msg":"admin endpoint on open interface; host checking disabled","address":"0.0.0.0:2019"}
purdy-caddy-1  | {"level":"info","ts":1676262941.9894726,"logger":"tls.cache.maintenance","msg":"started background certificate maintenance","cache":"0xc000193960"}
purdy-caddy-1  | {"level":"info","ts":1676262941.9898036,"logger":"http.log","msg":"server running","name":"srv0","protocols":["h1","h2","h3"]}
purdy-caddy-1  | {"level":"info","ts":1676262941.989997,"logger":"tls","msg":"cleaning storage unit","description":"FileStorage:/data/caddy"}
purdy-caddy-1  | {"level":"info","ts":1676262941.9900398,"msg":"autosaved config (load with --resume flag)","file":"/config/caddy/autosave.json"}
purdy-caddy-1  | {"level":"info","ts":1676262941.990137,"msg":"serving initial configuration"}
purdy-caddy-1  | {"level":"info","ts":1676262941.9904072,"logger":"tls","msg":"finished cleaning storage units"}
purdy-caddy-1  | {"level":"info","ts":1676262944.7720034,"logger":"admin.api","msg":"received request","method":"GET","host":"localhost:2019","uri":"/config/","remote_ip":"172.18.0.1","remote_port":"39546","headers":{"Accept":["*/*"],"User-Agent":["curl/7.83.1"]}}

So now hopefully we’re at the point where we can implement this:

I tried caddy trust --address localhost:2019 which caused this dialog:

(screenshot: the Windows security dialog asking to trust the Caddy root certificate)

which I accepted and that produced:

2023/02/13 04:53:33.486 WARN    installing root certificate (you might be prompted for password)        {"path": "localhost:2019/pki/ca/local"}
2023/02/13 04:53:33.487 INFO    note: NSS support is not available on your platform
2023/02/13 04:53:33.487 INFO    define JAVA_HOME environment variable to use the Java trust
2023/02/13 04:54:01.655 INFO    certificate installed properly in windows trusts

and this entry in the log:

purdy-caddy-1  | {"level":"info","ts":1676264013.4642842,"logger":"admin.api","msg":"received request","method":"GET","host":"localhost:2019","uri":"/pki/ca/local","remote_ip":"172.18.0.1","remote_port":"57984","headers":{"Accept-Encoding":["gzip"],"Origin":["http://localhost:2019"],"User-Agent":["Go-http-client/1.1"]}}

and now my Docker volume has a path caddy/pki/authorities/local/ with intermediate and root .crt and .key files.
Refreshing localhost:2015 gets me the HTTP site only; entering https://localhost:2015/ returns:

This site can’t provide a secure connection
localhost sent an invalid response.
Try running Windows Network Diagnostics.
ERR_SSL_PROTOCOL_ERROR

If I try curl https://localhost:2015 -v I get:

*   Trying 127.0.0.1:2015...
* Connected to localhost (127.0.0.1) port 2015 (#0)
* schannel: disabled automatic use of client certificate
* ALPN: offers http/1.1
* schannel: next InitializeSecurityContext failed: SEC_E_INVALID_TOKEN (0x80090308) - The token supplied to the function is invalid
* Closing connection 0
curl: (35) schannel: next InitializeSecurityContext failed: SEC_E_INVALID_TOKEN (0x80090308) - The token supplied to the function is invalid

I think we’re close. What changes to my Caddyfile or docker-compose.yml are needed to wrap things up?

Don’t get me wrong, I love Docker. But I love it on Linux. On Windows, it’s a mess.

Oh, so you are getting the config back. That looks correct. I don’t think there’s any problem, then.

Yeah, you still haven’t configured Caddy to serve HTTPS.

You could change your site address from :2015 to https://localhost:2015 I guess.

That will tell Caddy to issue a cert for localhost, which it will use its local CA to do since localhost is known as a local-only domain.
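For reference, a sketch of that change, leaving the rest of the config as-is:

```Caddyfile
# Only the site address changes; naming localhost explicitly tells Caddy
# to issue a certificate for it from its local CA:
https://localhost:2015 {
	# ... same encode/log/handle blocks as before ...
}
```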


That did it! I’m now using Caddy over HTTPS through Docker, which is also running my API through the reverse proxy, and that API is live-updated in the container.

I’m going to do this process over from scratch and write everything up for those of us cursed to use Docker under Windows :crazy_face: and then come back here to post a link (plus anything else that might arise in case I mess up following this thread). I will then try the whole thing one more time, transitioning the container to a live environment.

One last question to make a tutorial complete: if I were to write access and error logs to disk, can I use the external caddy_data volume we’ve already set up, or should I make a different external volume? And either way, how would I specify the file names for those logs when using a Docker volume?

Thanks again for all your help!


Oops, not so fast… these fixes worked for the main site, but now the API does not work.

If I run my API by itself, without Docker and Caddy, it still works, so something about the changes we made is preventing the API passthrough that worked within Docker/Caddy before HTTPS was involved.

I stripped my full API down to a tiny file just in case the problem lies within the Go code:

package main

import (
	"fmt"
	"log"
	"net/http"
)

func main() {
	// One test route behind the /api/* prefix that Caddy proxies.
	http.HandleFunc("/api/jimbo", func(w http.ResponseWriter, r *http.Request) {
		fmt.Fprint(w, "<h1>Yo Jimbo!</h1>")
	})
	// Listen on port 3000 on all interfaces inside the container.
	log.Fatal(http.ListenAndServe(":3000", nil))
}

If I try to access the API from within the served site, for example with fetch('/api/jimbo'), I get this log error:

purdy-caddy-1  | {"level":"error","ts":1676268784.1590133,"logger":"http.log.error","msg":"dial tcp 127.0.0.1:3000: connect: connection refused","request":{"remote_ip":"172.18.0.1","remote_port":"43936","proto":"HTTP/2.0","method":"GET","host":"localhost:2015","uri":"/api/jimbo","headers":{"Sec-Fetch-Site":["none"],"Accept-Encoding":["gzip, deflate, br"],"Sec-Ch-Ua":["\"Not_A Brand\";v=\"99\", \"Google Chrome\";v=\"109\", \"Chromium\";v=\"109\""],"Sec-Ch-Ua-Platform":["\"Windows\""],"Accept":["text/html,application/xhtml+xml,application/xml;q=0.9,image/avif,image/webp,image/apng,*/*;q=0.8,application/signed-exchange;v=b3;q=0.9"],"Sec-Fetch-Mode":["navigate"],"Sec-Fetch-User":["?1"],"Sec-Fetch-Dest":["document"],"Accept-Language":["en-US,en;q=0.9"],"Sec-Ch-Ua-Mobile":["?0"],"Upgrade-Insecure-Requests":["1"],"User-Agent":["Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/109.0.0.0 Safari/537.36"]},"tls":{"resumed":true,"version":772,"cipher_suite":4865,"proto":"h2","server_name":"localhost"}},"duration":0.0004729,"status":502,"err_id":"gw4kw59np","err_trace":"reverseproxy.statusError (reverseproxy.go:1281)"}

If I try from the browser with https://localhost:2015/api/jimbo (which I assume shouldn’t work as the API is protected from the outside by the reverse proxy, right?) I get this log message (access instead of error):

purdy-caddy-1  | {"level":"error","ts":1676268784.1590352,"logger":"http.log.access","msg":"handled request","request":{"remote_ip":"172.18.0.1","remote_port":"43936","proto":"HTTP/2.0","method":"GET","host":"localhost:2015","uri":"/api/jimbo","headers":{"Accept":["text/html,application/xhtml+xml,application/xml;q=0.9,image/avif,image/webp,image/apng,*/*;q=0.8,application/signed-exchange;v=b3;q=0.9"],"Sec-Fetch-Site":["none"],"Accept-Encoding":["gzip, deflate, br"],"Sec-Ch-Ua":["\"Not_A Brand\";v=\"99\", \"Google Chrome\";v=\"109\", \"Chromium\";v=\"109\""],"Sec-Ch-Ua-Platform":["\"Windows\""],"User-Agent":["Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/109.0.0.0 Safari/537.36"],"Sec-Fetch-Mode":["navigate"],"Sec-Fetch-User":["?1"],"Sec-Fetch-Dest":["document"],"Accept-Language":["en-US,en;q=0.9"],"Sec-Ch-Ua-Mobile":["?0"],"Upgrade-Insecure-Requests":["1"]},"tls":{"resumed":true,"version":772,"cipher_suite":4865,"proto":"h2","server_name":"localhost"}},"user_id":"","duration":0.0004729,"size":0,"status":502,"resp_headers":{"Server":["Caddy"],"Alt-Svc":["h3=\":2015\"; ma=2592000"]}}

Man, I thought we had it!

I’d recommend making a separate volume. The data volume should be assumed to be owned by the Caddy process.

You could make a /var/log/caddy volume or something, have logs written out to there.
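A sketch of what that might look like, assuming a mount such as `- caddy_logs:/var/log/caddy` (a hypothetical named volume) is added to the caddy service:

```Caddyfile
https://localhost:2015 {
	# Write the access log to a file inside the mounted log directory:
	log {
		output file /var/log/caddy/access.log
	}
	# ... rest of the site block unchanged ...
}
```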

Inside Docker, localhost means this container. Nothing inside the Caddy container is listening on port 3000.

Where’s the API running? If it’s in another container, use that container’s name instead. If it’s on the host, that’s more complicated, you’ll need to set up the host-gateway to reach the host (google it, plenty of info about that).
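For the on-the-host case, a common sketch (not what this thread ends up using, since the API here runs in the `go` container) is Docker’s host-gateway alias:

```yaml
services:
  caddy:
    image: caddy:latest
    extra_hosts:
      # Resolves host.docker.internal to the host from inside the container:
      - "host.docker.internal:host-gateway"
# The Caddyfile would then use: reverse_proxy host.docker.internal:3000
```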


Thanks, changing my Caddyfile from

	handle /api/* {
		reverse_proxy localhost:3000
	}

to

	handle /api/* {
		reverse_proxy go:3000
	}

fixed it. I think I’m set; I’m still going to build everything from scratch before signing off and writing a walkthrough.


This topic was automatically closed 30 days after the last reply. New replies are no longer allowed.