Mobile phone testing

1. Caddy version (caddy version):

v2.4.5

2. How I run Caddy:

a. System environment:

Docker version 20.10.8, build 3967b7d

b. Command:

docker-compose up -d ssl

c. Service/unit/compose file:

  ssl:
    image: caddy
    container_name: lf-ssl
    ports:
      - 80:80
      - 443:443
    depends_on:
      - app
    volumes:
      - lf-caddy-data:/data
      - lf-caddy-config:/config
    # https://caddyserver.com/docs/command-line
    command: caddy reverse-proxy --from localhost --to app
    restart: unless-stopped

volumes:
  lf-caddy-config:
  lf-caddy-data:

d. My complete Caddyfile or JSON config:

I don’t supply one at this time; I’m using the reverse-proxy subcommand instead (see the command above).

3. The problem I’m having:

I’m trying to access my locally running app from a phone using my internal IP address on the same home network, e.g., https://192.168.86.249/

4. Error messages and/or full log output:

curl -v https://192.168.86.249/

*   Trying 192.168.86.249...
* TCP_NODELAY set
* Connected to 192.168.86.249 (192.168.86.249) port 443 (#0)
* ALPN, offering h2
* ALPN, offering http/1.1
* successfully set certificate verify locations:
*   CAfile: /etc/ssl/cert.pem
  CApath: none
* TLSv1.2 (OUT), TLS handshake, Client hello (1):
* TLSv1.2 (IN), TLS alert, internal error (592):
* error:14004438:SSL routines:CONNECT_CR_SRVR_HELLO:tlsv1 alert internal error
* Closing connection 0
curl: (35) error:14004438:SSL routines:CONNECT_CR_SRVR_HELLO:tlsv1 alert internal error
dc logs ssl    # dc is my alias for docker-compose

lf-ssl  | {"level":"warn","ts":1631658643.6995146,"logger":"admin","msg":"admin endpoint disabled"}
lf-ssl  | {"level":"info","ts":1631658643.6997552,"logger":"http","msg":"server is listening only on the HTTPS port but has no TLS connection policies; adding one to enable TLS","server_name":"proxy","https_port":443}
lf-ssl  | {"level":"info","ts":1631658643.6998103,"logger":"http","msg":"enabling automatic HTTP->HTTPS redirects","server_name":"proxy"}
lf-ssl  | {"level":"info","ts":1631658643.6998482,"logger":"tls.cache.maintenance","msg":"started background certificate maintenance","cache":"0x40003a6ee0"}
lf-ssl  | {"level":"info","ts":1631658643.7009673,"logger":"tls","msg":"cleaning storage unit","description":"FileStorage:/data/caddy"}
lf-ssl  | {"level":"info","ts":1631658643.700988,"logger":"tls","msg":"finished cleaning storage units"}
lf-ssl  | {"level":"warn","ts":1631658643.810716,"logger":"pki.ca.local","msg":"installing root certificate (you might be prompted for password)","path":"storage:pki/authorities/local/root.crt"}
lf-ssl  | 2021/09/14 22:30:43 Warning: "certutil" is not available, install "certutil" with "apt install libnss3-tools" or "yum install nss-tools" and try again
lf-ssl  | 2021/09/14 22:30:43 define JAVA_HOME environment variable to use the Java trust
lf-ssl  | 2021/09/14 22:30:43 certificate installed properly in linux trusts
lf-ssl  | {"level":"info","ts":1631658643.8413696,"logger":"http","msg":"enabling automatic TLS certificate management","domains":["localhost"]}
lf-ssl  | {"level":"info","ts":1631658643.841631,"msg":"autosaved config (load with --resume flag)","file":"/config/caddy/autosave.json"}
lf-ssl  | Caddy proxying https://localhost -> http://app
lf-ssl  | {"level":"info","ts":1631658643.8418415,"logger":"tls.obtain","msg":"acquiring lock","identifier":"localhost"}
lf-ssl  | {"level":"info","ts":1631658643.880806,"logger":"tls.obtain","msg":"lock acquired","identifier":"localhost"}
lf-ssl  | {"level":"info","ts":1631658643.881728,"logger":"tls.obtain","msg":"certificate obtained successfully","identifier":"localhost"}
lf-ssl  | {"level":"info","ts":1631658643.8817406,"logger":"tls.obtain","msg":"releasing lock","identifier":"localhost"}
lf-ssl  | {"level":"warn","ts":1631658643.8820071,"logger":"tls","msg":"stapling OCSP","error":"no OCSP stapling for [localhost]: no OCSP server specified in certificate"}

I don’t currently supply a Caddyfile, so I haven’t turned on debug mode. I suspect this is a common problem with a simple solution that I’ve just missed in the docs; sorry.

5. What I already tried:

I tried changing the command line from localhost to 192.168.86.249:
command: caddy reverse-proxy --from 192.168.86.249 --to app

6. Links to relevant resources:

Your command is proxying from localhost to app, not from 192.168.86.249. Caddy is thus serving on localhost and not on the network.

I thought changing the command to caddy reverse-proxy --from 192.168.86.249 --to app would resolve that, though… am I doing something wrong in this alternative approach of supplying the IP address?

dc logs ssl

lf-ssl  | {"level":"warn","ts":1631661426.429585,"logger":"admin","msg":"admin endpoint disabled"}
lf-ssl  | {"level":"info","ts":1631661426.4297934,"logger":"http","msg":"server is listening only on the HTTPS port but has no TLS connection policies; adding one to enable TLS","server_name":"proxy","https_port":443}
lf-ssl  | {"level":"info","ts":1631661426.4298158,"logger":"http","msg":"enabling automatic HTTP->HTTPS redirects","server_name":"proxy"}
lf-ssl  | {"level":"info","ts":1631661426.4308066,"logger":"http","msg":"enabling automatic TLS certificate management","domains":["192.168.86.249"]}
lf-ssl  | {"level":"info","ts":1631661426.4311757,"logger":"tls.cache.maintenance","msg":"started background certificate maintenance","cache":"0x40001a72d0"}
lf-ssl  | {"level":"info","ts":1631661426.4312656,"logger":"tls","msg":"cleaning storage unit","description":"FileStorage:/data/caddy"}
lf-ssl  | {"level":"info","ts":1631661426.4312809,"logger":"tls","msg":"finished cleaning storage units"}
lf-ssl  | {"level":"info","ts":1631661426.431831,"logger":"tls.obtain","msg":"acquiring lock","identifier":"192.168.86.249"}
lf-ssl  | {"level":"warn","ts":1631661426.440442,"logger":"pki.ca.local","msg":"installing root certificate (you might be prompted for password)","path":"storage:pki/authorities/local/root.crt"}
lf-ssl  | 2021/09/14 23:17:06 Warning: "certutil" is not available, install "certutil" with "apt install libnss3-tools" or "yum install nss-tools" and try again
lf-ssl  | 2021/09/14 23:17:06 define JAVA_HOME environment variable to use the Java trust
lf-ssl  | {"level":"info","ts":1631661426.4706397,"logger":"tls.obtain","msg":"lock acquired","identifier":"192.168.86.249"}
lf-ssl  | {"level":"info","ts":1631661426.4741924,"logger":"tls.obtain","msg":"certificate obtained successfully","identifier":"192.168.86.249"}
lf-ssl  | {"level":"info","ts":1631661426.4742043,"logger":"tls.obtain","msg":"releasing lock","identifier":"192.168.86.249"}
lf-ssl  | {"level":"warn","ts":1631661426.4748063,"logger":"tls","msg":"stapling OCSP","error":"no OCSP stapling for [192.168.86.249]: no OCSP server specified in certificate"}
lf-ssl  | 2021/09/14 23:17:06 certificate installed properly in linux trusts
lf-ssl  | {"level":"info","ts":1631661426.4953368,"msg":"autosaved config (load with --resume flag)","file":"/config/caddy/autosave.json"}
lf-ssl  | Caddy proxying https://192.168.86.249 -> http://app

curl -v https://192.168.86.249/

*   Trying 192.168.86.249...
* TCP_NODELAY set
* Connected to 192.168.86.249 (192.168.86.249) port 443 (#0)
* ALPN, offering h2
* ALPN, offering http/1.1
* successfully set certificate verify locations:
*   CAfile: /etc/ssl/cert.pem
  CApath: none
* TLSv1.2 (OUT), TLS handshake, Client hello (1):
* TLSv1.2 (IN), TLS alert, internal error (592):
* error:14004438:SSL routines:CONNECT_CR_SRVR_HELLO:tlsv1 alert internal error
* Closing connection 0
curl: (35) error:14004438:SSL routines:CONNECT_CR_SRVR_HELLO:tlsv1 alert internal error

Not sure if this is helpful, but it reassured me that my volume wasn’t keeping around something it shouldn’t have:

dc exec ssl cat /data/caddy/certificates/local/192.168.86.249/192.168.86.249.json
{
	"sans": [
		"192.168.86.249"
	]
}
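
One more diagnostic that might be useful (a sketch I haven’t run here; adjust the -servername value to compare what certificate gets presented with and without a matching SNI):

openssl s_client -connect 192.168.86.249:443 -servername localhost </dev/null 2>/dev/null \
	| openssl x509 -noout -text | grep -A1 'Subject Alternative Name'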

If you use --from localhost, then Caddy is expecting a request with the hostname localhost in it, which wouldn’t be the case if you’re accessing it by IP address.

I strongly recommend using a Caddyfile instead of overriding the command of the image. It offers much more flexibility.
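
For example (just a sketch, using the "app" service name and IP address from your posts): mount a Caddyfile into the container,

    volumes:
      - ./Caddyfile:/etc/caddy/Caddyfile

then list every name you want Caddy to answer for as site addresses:

localhost, 192.168.86.249 {
	reverse_proxy app:80
}

The caddy image loads /etc/caddy/Caddyfile by default, so no command override is needed.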

The other issue with using IP addresses for HTTPS, especially in Docker, is that Caddy uses the remote address of the TCP connection as the hostname for certificate matching, but Docker sometimes enables a userland proxy which makes incoming requests look like they came from Docker itself rather than from the original host.
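
If the userland proxy turns out to be the culprit, it can be disabled daemon-wide via /etc/docker/daemon.json (a sketch; note this affects all containers and requires restarting the Docker daemon):

{
	"userland-proxy": false
}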

Best to use a domain you own (for example a DuckDNS domain, which is free) and make that resolve to your local IP address. Something like this:

home.myname.duckdns.org {
	reverse_proxy app:80
}
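
One caveat: a name that resolves to a private LAN address can’t be validated with the HTTP or TLS-ALPN challenges (the CA can’t reach you), so you’d use the DNS challenge instead. A sketch, assuming a custom Caddy build that includes the caddy-dns/duckdns module, with your token in a DUCKDNS_TOKEN environment variable:

home.myname.duckdns.org {
	tls {
		dns duckdns {$DUCKDNS_TOKEN}
	}
	reverse_proxy app:80
}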

Although I’m pretty sure the IP address should work. I don’t know whether curl reports an internal-error alert for an untrusted certificate (that’s almost certainly what’s happening when accessing from a different device), or whether it’s something else. A way to enable debug mode with the reverse-proxy subcommand would be helpful here.

Thank you for taking the time to discuss this; I really appreciate it! I’ll switch over to using a Caddyfile, which will let me turn on debugging, and I’ll post here once I do unless I figure it out on my own. I might also look at disabling auto_https while I’m at it, so I can eliminate the possible cert issues altogether.

I’ve been able to configure things so I get what I want here.

Devs can use http://localhost locally and expect to be redirected to https://localhost, and Caddy will hand off to the app container so the app can serve its resources.

Additionally, devs can now access the app from another device on the same network using the machine’s current IP address, e.g., (from my phone) http://192.168.161.99, and use the app from their phone.

Here’s the relevant snippet from my docker-compose.yml:

  ssl:
    image: caddy
    container_name: lf-ssl
    ports:
      - 80:80
      - 443:443
    restart: unless-stopped
    depends_on:
      - app
    volumes:
      - lf-caddy-data:/data
      - lf-caddy-config:/config

      # for developer convenience
      - ./ssl/Caddyfile:/etc/caddy/Caddyfile

And here’s my Caddyfile:

# https://caddyserver.com/docs/caddyfile
{
	#debug
}

localhost {
	# NOTE: "app" here is the name of the service found in the docker-compose.yml
	reverse_proxy app:80
}

:80 {
	# NOTE: "app" here is the name of the service found in the docker-compose.yml
	reverse_proxy app:80
}
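
With this, http://localhost redirects to https://localhost with a certificate from Caddy’s local CA (trusted on the dev machine), while other devices on the network use plain HTTP through the :80 block, which sidesteps certificate trust on phones entirely.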

FWIW, I wanted this ability because I needed to test with my phone’s keyboard while using my app, and I’m not aware of an easy way to emulate that on the machine where the app is running.

While I’m very happy to have the ability to test locally now, I’m mildly disappointed that I have to reference the app service name in the Caddyfile when that service name is actually defined in docker-compose.yml. I try to keep any “service name references” in docker-compose.yml, but I didn’t see a way to do that here. I’ve added a comment to help mitigate the potential maintenance issue, but I wish there were a better way.

Thank you so much for your help with this, gentlemen.

I don’t see the problem. This is very standard: your Caddyfile is paired with your docker-compose.yml; they’re shipped together.

You could use environment variables if you want, though. Something like this:

reverse_proxy {$APP_ADDRESS}
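
A sketch of the compose side (APP_ADDRESS is just a made-up name; use whatever you like):

  ssl:
    image: caddy
    environment:
      - APP_ADDRESS=app:80

Caddy substitutes {$APP_ADDRESS} when it parses the Caddyfile, so the service name lives only in docker-compose.yml.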

What is the difference between specifying the app service name in the command line versus the Caddyfile? What’s the downside?

It’s a very stylistic thing for me (definitely bikeshedding), nothing to get hung up on in any way. My “personal” preference is that any references made to services in the docker-compose file also appear in that same file. The hope is that if someone wanted to change a service name for some reason, they could just search-and-replace in the one file. It also helps make it clear when configs are utilizing the built-in local Docker networking conveniences.

Our docker-compose file (https://github.com/sillsdev/web-languageforge/blob/issue/1113-disable-intial-capital-for-some-fields/docker/docker-compose.yml) utilizes these “service name” references in a few places:

  db:
    image: mongo:4.0
...
  test-e2e:
  ...
    environment:
      - WAIT_HOSTS=db:27017, mail:25, selenium:4444, app-for-e2e:80
...
  app:
  ...
    environment:
      - WAIT_HOSTS=db:27017, mail:25
      ...
      - MONGODB_CONN=mongodb://db:27017

IMHO, since the service name is defined in the docker-compose.yml, references to any services should be made in the same file.

Like I said, this is a very personal ideal for me, not worth any heartburn and not a hill anyone needs to die on :wink:

The {$APP_ADDRESS} approach would be a fine way to address it; thanks for that suggestion!
