Streaming gRPC over port 443

Hi Matt,

Can we use gRPC h2c:// with port 443 (TLS)?
Currently I’m testing a setup that needs to send traces from OpenTelemetry → Caddy → Grafana Cloud Tempo (at the URL tempo-us-central1.grafana.net:443).
So when I set the config to

:4300 { 
    reverse_proxy h2c://tempo-us-central1.grafana.net:443 {
        header_up Authorization "Bearer <secret>"
    }
}

we got this error: “run: adapting config using caddyfile: parsing caddyfile tokens for ‘reverse_proxy’: /etc/caddy/Caddyfile:56 - Error during parsing: upstream address has conflicting scheme (h2c://) and port (:443, the HTTPS port)”.
Thank you.

Hi Nguyen, welcome – I split this into its own topic since it’s asking a question for help.

h2c is HTTP/2 over cleartext, not TLS. If you want to send HTTP/2 frames (gRPC) over TLS, then use https:// instead.

Thanks Matt for your quick reply.

I’m still looking for a way to solve my case. I know this might be out of scope for this thread, but if you have any suggestions, that would be great:

The simple topology: OpenTelemetry (otel-collector) → Caddy → Grafana Cloud Tempo.

I can send directly from the otel-collector —> Grafana Cloud Tempo via the endpoint tempo-us-central1.grafana.net:443 (with TLS.enable = true and interpreted as gRPC in my config) and an Authorization header.

I added Caddy in between my otel-collector and Grafana Cloud, and moved my Authorization header from the otel-collector config to Caddy’s header_up. The flow is:
Otel-Collector —tls = false—> Caddy :4300
Caddy config:

:4300 { 
    reverse_proxy https://tempo-us-central1.grafana.net:443 {
        header_up Authorization "Bearer <secret>"
    }
}

The error is:
{"level":"error","ts":1656244287.577565,"logger":"http.handlers.reverse_proxy","msg":"reading from backend","error":"stream error: stream ID 1; PROTOCOL_ERROR; received from peer"} {"level":"error","ts":1656244287.5778072,"logger":"http.handlers.reverse_proxy","msg":"aborting with incomplete response","error":"stream error: stream ID 1; PROTOCOL_ERROR; received from peer"} {"level":"error","ts":1656244287.5780146,"logger":"http.log.access.log2","msg":"handled request","request":{"remote_ip":"172.23.0.3","remote_port":"56556","proto":"HTTP/2.0","method":"PRI","host":"","uri":"*","headers":{}},"user_id":"","duration":0.174381354,"size":0,"status":400,"resp_headers":{"Server":["Caddy"],"Content-Type":["text/html; charset=UTF-8"],"Referrer-Policy":["no-referrer"],"Content-Length":["273"],"Date":["Sun, 26 Jun 2022 11:51:27 GMT"]}}

When I use this config (without a scheme):

:4300 { 
    reverse_proxy tempo-us-central1.grafana.net:443 {
        header_up Authorization "Bearer <secret>"
    }
}

otel-stack-poc-caddy-1 | {"level":"error","ts":1656244426.5726256,"logger":"http.log.error.log2","msg":"net/http: HTTP/1.x transport connection broken: malformed HTTP response \"\\x15\\x03\\x01\\x00\\x02\\x02F\"","request":{"remote_ip":"172.23.0.3","remote_port":"56704","proto":"HTTP/2.0","method":"PRI","host":"","uri":"*","headers":{}},"duration":0.10876762,"status":502,"err_id":"mqw96e6z4","err_trace":"reverseproxy.statusError (reverseproxy.go:1196)"}
otel-stack-poc-caddy-1 | {"level":"error","ts":1656244426.5728211,"logger":"http.log.access.log2","msg":"handled request","request":{"remote_ip":"172.23.0.3","remote_port":"56704","proto":"HTTP/2.0","method":"PRI","host":"","uri":"*","headers":{}},"user_id":"","duration":0.10876762,"size":0,"status":502,"resp_headers":{"Server":["Caddy"]}}

So I want to ask: are those responses coming from Grafana Cloud? And do you have any suggestions for what I could ask the Grafana Labs team?

Best regards.

Weird, I’ve never seen that (first) error before. Grafana must not be responding with HTTP/2 properly. (I’m on mobile so I can’t really investigate right now, unfortunately.)

Thanks Matt,

Looking forward to receiving your suggestions (if you have some spare time, of course).

Regards,
Sang.

Does your grafana server have a TLS certificate? What domain name does it have in the certificate’s SAN field?

You might need to configure header_up Host {upstream_hostport} to tell Caddy to set the Host header to your upstream address’s domain, and you might need to configure the tls_server_name option of the proxy’s http transport. See the docs for details.
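
Something along these lines might work (an untested sketch that simply combines both suggestions with the endpoint and placeholder token from the earlier posts):

:4300 {
    reverse_proxy tempo-us-central1.grafana.net:443 {
        header_up Host {upstream_hostport}
        header_up Authorization "Bearer <secret>"
        transport http {
            tls_server_name tempo-us-central1.grafana.net
        }
    }
}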

Sang, welcome to the forum!

  1. Are you running Caddy in a Docker container, or the binary directly?

  2. @francislavoie I am wondering if this configuration could help Sang? The idea is that we remove the local TLS/HTTPS stage for now and also turn on debug logging:

{
    auto_https off
    debug

    servers {
        timeouts {
            idle 1d
        }
    }
}


http://localhost:4300 { 
    reverse_proxy tempo-us-central1.grafana.net:443 {
        header_up Host {upstream_hostport} 
        header_up Authorization "Bearer {$BEARER}"
        lb_try_duration 30s
        lb_try_interval 1s
        fail_duration 20s
    }
}

Please let me know what you think. I plan to pair with Sang to look at his exact setup, including looking at his code.

Thank you francislavoie for your idea,

I believe it does have a TLS certificate. This is Grafana Cloud, not a Grafana instance we run ourselves; in fact it’s just an endpoint for me to send traces to, tempo-us-central1.grafana.net:443. You can see some examples here.

I tried adding header_up Host {upstream_hostport} but the second error still persists.
It’s gRPC, so do we need to set tls_server_name on the proxy’s http transport, francislavoie?

Regards,
Sang.

Hi Mark,

  1. We use the Docker container, version v2.5.1.
  2. I tried it too, but the error still pops up.
    Anyway, I really appreciate your effort.

Sang.

Ok, in that case, localhost is less useful. I’ll see what @matt or @francislavoie has to say about Streaming gRPC over port 443 - #7 by gcss, and in the meantime, check out your branch.

Thank you @all,

I found the solution. It turns out that we need to add allow_h2c for Caddy to handle the “cleartext HTTP/2” requests from OpenTelemetry. The working solution looks like this:

{
    servers {
        protocol {
            # We need to enable h2c in order to use grpc without TLS.
            # ref: https://caddy.community/t/caddy-grpc-h2c-passthrough/11780
            allow_h2c
        }
    }
}

:4300 { #tempo-grpc
    log {
        output stdout
        level DEBUG
    }
    reverse_proxy tempo-us-central1.grafana.net:443 {
        header_up Authorization "Basic <secret>"
        transport http {
                tls_server_name tempo-us-central1.grafana.net
        }
    }
}

Or, without the tls_server_name option, we can use https instead:

{
    servers {
        protocol {
            # We need to enable h2c in order to use grpc without TLS.
            # ref: https://caddy.community/t/caddy-grpc-h2c-passthrough/11780
            allow_h2c
        }
    }
}

:4300 { #tempo-grpc
    log {
        output stdout
        level DEBUG
    }
    reverse_proxy https://tempo-us-central1.grafana.net:443 {
        header_up Authorization "Basic <secret>"
    }
}

Hat tip to @vbsd for the recommendation.

Thanks for following up!

To clarify for future readers: allow_h2c tells the server to allow HTTP/2 over cleartext from the user-facing client. I don’t believe this has anything to do with the proxied connection to the backend: it applies purely to the connection between the client and the server/proxy, not between the proxy and the backend.
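
Roughly, mapping that onto the working config above (the same settings as in Sang’s solution, just annotated to show which connection each piece affects):

{
    servers {
        protocol {
            # client → Caddy: accept cleartext HTTP/2 (gRPC) from the otel-collector
            allow_h2c
        }
    }
}

:4300 {
    # Caddy → backend: proxy over TLS to Grafana Cloud Tempo
    reverse_proxy https://tempo-us-central1.grafana.net:443 {
        header_up Authorization "Basic <secret>"
    }
}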

This topic was automatically closed after 29 days. New replies are no longer allowed.