Issues with private IP and gRPC

1. The problem I’m having:

I am unable to connect to a gRPC service behind a Caddy reverse proxy over plain HTTP using a private IP address. Using HTTPS works just as expected.

It is most likely an issue on my side or a misunderstanding of how Caddy handles gRPC, but Caddy is not logging the requests at all.

To validate things: if I bypass Caddy entirely, the client connects as expected.

2. Error messages and/or full log output:

No error or request logs from Caddy’s output:

getting-started-caddy-1  | {"level":"info","ts":1690876430.9027994,"msg":"using provided configuration","config_file":"/etc/caddy/Caddyfile","config_adapter":"caddyfile"}
getting-started-caddy-1  | {"level":"warn","ts":1690876430.9044518,"msg":"Caddyfile input is not formatted; run the 'caddy fmt' command to fix inconsistencies","adapter":"caddyfile","file":"/etc/caddy/Caddyfile","line":2}
getting-started-caddy-1  | {"level":"info","ts":1690876430.90604,"logger":"admin","msg":"admin endpoint started","address":"localhost:2019","enforce_origin":false,"origins":["//localhost:2019","//[::1]:2019","//127.0.0.1:2019"]}
getting-started-caddy-1  | {"level":"warn","ts":1690876430.9064245,"logger":"http","msg":"automatic HTTPS is completely disabled for server","server_name":"srv0"}
getting-started-caddy-1  | {"level":"warn","ts":1690876430.9064693,"logger":"http","msg":"automatic HTTPS is completely disabled for server","server_name":"srv1"}
getting-started-caddy-1  | {"level":"info","ts":1690876430.9071064,"logger":"tls.cache.maintenance","msg":"started background certificate maintenance","cache":"0x40001fb420"}
getting-started-caddy-1  | {"level":"info","ts":1690876430.907149,"logger":"tls","msg":"cleaning storage unit","description":"FileStorage:/data/caddy"}
getting-started-caddy-1  | {"level":"info","ts":1690876430.907417,"logger":"tls","msg":"finished cleaning storage units"}
getting-started-caddy-1  | {"level":"debug","ts":1690876430.9074674,"logger":"http","msg":"starting server loop","address":"[::]:80","tls":false,"http3":false}
getting-started-caddy-1  | {"level":"info","ts":1690876430.9074967,"logger":"http.log","msg":"server running","name":"srv0","protocols":["h1","h2","h3"]}
getting-started-caddy-1  | {"level":"debug","ts":1690876430.9078667,"logger":"http","msg":"starting server loop","address":"[::]:8080","tls":false,"http3":false}
getting-started-caddy-1  | {"level":"info","ts":1690876430.9080462,"logger":"http.log","msg":"server running","name":"srv1","protocols":["h1","h2","h3"]}
getting-started-caddy-1  | {"level":"info","ts":1690876430.9084826,"msg":"autosaved config (

From our gRPC client:

2023/08/01 10:05:50 INFO: [transport] [client-transport 0x140000a6b40] Closing: connection error: desc = "error reading server preface: http2: frame too large"
2023/08/01 10:05:50 INFO: [core] Creating new client transport to "{\n  \"Addr\": \"192.168.65.1:80\",\n  \"ServerName\": \"192.168.65.1:80\",\n  \"Attributes\": null,\n  \"BalancerAttributes\": null,\n  \"Type\": 0,\n  \"Metadata\": null\n}": connection error: desc = "error reading server preface: http2: frame too large"
2023/08/01 10:05:50 WARNING: [core] [Channel #1 SubChannel #2] grpc: addrConn.createTransport failed to connect to {
  "Addr": "192.168.65.1:80",
  "ServerName": "192.168.65.1:80",
  "Attributes": null,
  "BalancerAttributes": null,
  "Type": 0,
  "Metadata": null
}. Err: connection error: desc = "error reading server preface: http2: frame too large"

3. Caddy version:

v2.6.4

4. How I installed and ran Caddy:

a. System environment:

Running a local Docker container with Docker Desktop on macOS

b. Command:

docker-compose up -d

c. Service/unit/compose file:

version: "3.4"
services:
  # Caddy reverse proxy
  caddy:
    image: caddy
    restart: unless-stopped
    networks: [ netbird ]
    ports:
      - '443:443'
      - '80:80'
      - '8080:8080'
    volumes:
      - netbird_caddy_data:/data
      - ./Caddyfile:/etc/caddy/Caddyfile
volumes:
  netbird_caddy_data:

networks:
  netbird:

d. My complete Caddy config:

So far I’ve tested with auto_https off and without setting it. I’ve also tried http://192.168.65.1:80 and http://192.168.65.1:8080 as site addresses, without luck either.

{
        debug
        auto_https off
        servers :80,:8080 {
                protocols h1 h2c
        }
}

:80, :8080 {
        # Signal
        reverse_proxy /signalexchange.SignalExchange/* h2c://signal:10000
        # Management
        reverse_proxy /api/* management:80
        reverse_proxy /management.ManagementService/* h2c://management:80
        # Zitadel
        reverse_proxy /zitadel.admin.v1.AdminService/* h2c://zitadel:8080
        reverse_proxy /admin/v1/* h2c://zitadel:8080
        reverse_proxy /zitadel.auth.v1.AuthService/* h2c://zitadel:8080
        reverse_proxy /auth/v1/* h2c://zitadel:8080
        reverse_proxy /zitadel.management.v1.ManagementService/* h2c://zitadel:8080
        reverse_proxy /management/v1/* h2c://zitadel:8080
        reverse_proxy /zitadel.system.v1.SystemService/* h2c://zitadel:8080
        reverse_proxy /system/v1/* h2c://zitadel:8080
        reverse_proxy /assets/v1/* h2c://zitadel:8080
        reverse_proxy /ui/* h2c://zitadel:8080
        reverse_proxy /oidc/v1/* h2c://zitadel:8080
        reverse_proxy /saml/v2/* h2c://zitadel:8080
        reverse_proxy /oauth/v2/* h2c://zitadel:8080
        reverse_proxy /.well-known/openid-configuration h2c://zitadel:8080
        reverse_proxy /openapi/* h2c://zitadel:8080
        reverse_proxy /debug/* h2c://zitadel:8080
        # Dashboard
        reverse_proxy /* dashboard:80
}
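One thing I notice in the startup log: both servers report protocols ["h1","h2","h3"], with no h2c, so maybe the comma-separated address in my servers block isn't matching either listener. If the servers option only takes a single listener address, omitting the address entirely might apply the protocols to all servers. An untested sketch of the global options:

```
{
	debug
	auto_https off
	servers {
		protocols h1 h2c
	}
}
```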

5. Links to relevant resources:

Gah. gRPC and h2c are a real mess. The problem is that gRPC does very non-standard things.

This seems to explain what’s going on, but I’m not sure I can help with a solution: A misleading error when using gRPC with Go and nginx - Kenneth Jenkins

Why can’t you use HTTPS? You could get a publicly trusted cert with the DNS challenge, or have Caddy issue certs using its internal CA (and install its root CA cert on whatever machines need to connect to it).
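For the internal CA route, a minimal sketch (assuming the client machines are set up to trust Caddy's root certificate; the IP site address is taken from your logs, and the remaining routes would follow the same pattern as your original Caddyfile):

```
https://192.168.65.1 {
	tls internal
	# With TLS in front, gRPC clients negotiate HTTP/2 via ALPN,
	# sidestepping the h2c preface problem entirely.
	reverse_proxy /signalexchange.SignalExchange/* h2c://signal:10000
	reverse_proxy /* dashboard:80
}
```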
