Combining the layer4 and http apps (SSL passthrough + http file_server/reverse_proxy)

1. Caddy version (caddy version):

v2.4.1 h1:kAJ0JB5Xk5gPdTH/27S5cyoMGqD5lBAe9yZ8zTjVJa0=

2. How I run Caddy:

I run Caddy as a systemd user service, using the stock service file modified to run under a custom non-root user:

[Service]
Type=notify
ExecStart=/srv/popple/caddy/caddy run --environ --config /srv/popple/caddy/caddy.yaml --adapter yaml
ExecReload=/srv/popple/caddy/caddy reload --config /srv/popple/caddy/caddy.yaml --adapter yaml
TimeoutStopSec=5s
LimitNOFILE=1048576
LimitNPROC=512
PrivateTmp=true
ProtectSystem=full

a. System environment:

Ubuntu 20.04 using basic user (no root or sudo) that I use to run rootless Docker containers. All serving is done by and from this rootless user.

b. Command:

systemctl --user start caddy

d. My complete Caddyfile or JSON config:

My JSON config, expressed as YAML:

apps:
  layer4:
    servers:
      mdath:
        listen:
          - ":443"
        routes:
          - match:
              - tls:
                  sni:
                    - "*.e1cmyhhndp0ep.cdn.network"
            handle:
              - handler: "proxy"
                proxy_protocol: "v2"
                upstreams:
                  - dial:
                    - "localhost:4433"
          - handle:
            - handler: "tls"
          - match:
            - http: []
            handle:
              - handler: proxy
                upstreams:
                  - dial:
                    - localhost:80

  http:
    servers:
      blog.me:
        listen:
          - ":80"
        routes:
          - match:
            - host:
              - "localhost"
              - "127.0.0.1"
            handle:
              - handler: file_server
                root: /srv/popple/blog.me
      my_blog:
        listen: ["localhost:5733"]
        routes:
          - handle:
            - handler: "reverse_proxy"
              upstreams:
                - dial: "localhost:2368"

  
  tls:
    certificates:
      automate:
        - "localhost"
    automation:
      policies:
        - issuers:
          - module: internal
          subjects:
            - "localhost"
        - issuers:
          - module: acme
            email: <email>
            ca: https://acme-staging-v02.api.letsencrypt.org/directory
            
    

logging:
  logs:
    default:
      level: "DEBUG"

3. The problem I’m having:

I want to use the layer4 app to proxy some TLS traffic to a backend without terminating it, while still getting all the normal Caddy automatic HTTPS behavior for a variety of other HTTP servers.

Since I can’t use the Caddyfile and layer4 at the same time, I’ve been studying the JSON config docs and forums to try to get it to work, using only localhost for now.

My current solution is to terminate TLS in layer4 and proxy to the http app, but the proxying isn’t working, and it seems like I won’t get automatic HTTPS features like always-redirect anyway (since the startup output says the server is only listening on port 80).

4. Error messages and/or full log output:

Startup output:

Jun 06 01:15:21 localhost caddy[1238114]: {"level":"info","ts":1622909721.1721902,"msg":"using provided configuration","config_file":"/srv/popple/caddy/caddy.yaml","config_adapter":"yaml"}
Jun 06 01:15:21 localhost caddy[1238114]: {"level":"info","ts":1622909721.173868,"logger":"admin","msg":"admin endpoint started","address":"tcp/localhost:2019","enforce_origin":false,"origins":["localhost:2019","[::1]:2019","127.0.0.1:2019"]}
Jun 06 01:15:21 localhost caddy[1238114]: {"level":"info","ts":1622909721.1747122,"logger":"tls.cache.maintenance","msg":"started background certificate maintenance","cache":"0xc0001c50a0"}
Jun 06 01:15:21 localhost caddy[1238114]: {"level":"info","ts":1622909721.181795,"logger":"http","msg":"server is listening only on the HTTP port, so no automatic HTTPS will be applied to this server","server_name":"blog.me","http_port":80}
Jun 06 01:15:21 localhost caddy[1238114]: {"level":"warn","ts":1622909721.1829004,"logger":"tls","msg":"stapling OCSP","error":"no OCSP stapling for [localhost]: no OCSP server specified in certificate"}
Jun 06 01:15:21 localhost caddy[1238114]: {"level":"debug","ts":1622909721.183083,"logger":"http","msg":"starting server loop","address":"[::]:80","http3":false,"tls":false}
Jun 06 01:15:21 localhost caddy[1238114]: {"level":"info","ts":1622909721.1830945,"logger":"tls","msg":"cleaning storage unit","description":"FileStorage:/srv/popple/.local/share/caddy"}
Jun 06 01:15:21 localhost caddy[1238114]: {"level":"debug","ts":1622909721.1832669,"logger":"http","msg":"starting server loop","address":"127.0.0.1:5733","http3":false,"tls":false}
Jun 06 01:15:21 localhost caddy[1238114]: {"level":"debug","ts":1622909721.183423,"logger":"layer4","msg":"listening","address":"tcp/[::]:443"}
Jun 06 01:15:21 localhost caddy[1238114]: {"level":"info","ts":1622909721.1842442,"logger":"tls","msg":"finished cleaning storage units"}
Jun 06 01:15:21 localhost caddy[1238114]: {"level":"warn","ts":1622909721.207919,"logger":"pki.ca.local","msg":"installing root certificate (you might be prompted for password)","path":"storage:pki/authorities/local/root.crt"}
Jun 06 01:15:21 localhost caddy[1238114]: 2021/06/06 01:15:21 not NSS security databases found
Jun 06 01:15:21 localhost caddy[1238114]: 2021/06/06 01:15:21 define JAVA_HOME environment variable to use the Java trust
Jun 06 01:15:21 localhost sudo[1238122]: pam_unix(sudo:auth): Couldn't open /etc/securetty: No such file or directory
Jun 06 01:15:21 localhost sudo[1238122]: pam_unix(sudo:auth): conversation failed
Jun 06 01:15:21 localhost sudo[1238122]: pam_unix(sudo:auth): auth could not identify password for [popple]
Jun 06 01:15:21 localhost caddy[1238114]: {"level":"error","ts":1622909721.2141056,"logger":"pki.ca.local","msg":"failed to install root certificate","error":"failed to execute sudo: exit status 1","certificate_file":"storage:pki/authorities/local/root.crt"}
Jun 06 01:15:21 localhost caddy[1238114]: {"level":"info","ts":1622909721.2145255,"msg":"autosaved config (load with --resume flag)","file":"/srv/popple/.config/caddy/autosave.json"}
Jun 06 01:15:21 localhost caddy[1238114]: {"level":"info","ts":1622909721.2148037,"msg":"serving initial configuration"}
Jun 06 01:15:21 localhost systemd[871310]: Started Caddy.
-- Subject: A start job for unit UNIT has finished successfully
-- Defined-By: systemd
-- Support: http://www.ubuntu.com/support
-- 
-- A start job for unit UNIT has finished successfully.

This is normal except for the sudo failure when Caddy tries to install the local root certificate. I ran caddy trust from a sudo-capable account and it reports the certificate is trusted, but I guess there’s still a problem, since curl doesn’t work without the -k argument to accept untrusted certs. I don’t think that’s related to my main problem, but I’m mentioning it for completeness.

curl -v http://localhost returns the expected html contents.
curl -kv https://localhost terminates TLS and appears to proxy to localhost:80, but no content is returned and curl exits with an error.

caddy logs:

Jun 06 01:20:31 localhost caddy[1238114]: {"level":"debug","ts":1622910031.5418913,"logger":"layer4.handlers.tls","msg":"terminated TLS","server_name":"localhost"}
Jun 06 01:20:31 localhost caddy[1238114]: {"level":"debug","ts":1622910031.5425372,"logger":"layer4.handlers.proxy","msg":"dial upstream","address":"localhost:80"}

curl output:

curl -kv https://localhost
*   Trying ::1:443...
* TCP_NODELAY set
* Connected to localhost (::1) port 443 (#0)
* ALPN, offering h2
* ALPN, offering http/1.1
* successfully set certificate verify locations:
*   CAfile: /etc/ssl/certs/ca-certificates.crt
  CApath: /etc/ssl/certs
* TLSv1.3 (OUT), TLS handshake, Client hello (1):
* TLSv1.3 (IN), TLS handshake, Server hello (2):
* TLSv1.3 (IN), TLS handshake, Encrypted Extensions (8):
* TLSv1.3 (IN), TLS handshake, Certificate (11):
* TLSv1.3 (IN), TLS handshake, CERT verify (15):
* TLSv1.3 (IN), TLS handshake, Finished (20):
* TLSv1.3 (OUT), TLS change cipher, Change cipher spec (1):
* TLSv1.3 (OUT), TLS handshake, Finished (20):
* SSL connection using TLSv1.3 / TLS_AES_128_GCM_SHA256
* ALPN, server accepted to use h2
* Server certificate:
*  subject: [NONE]
*  start date: Jun  5 08:29:53 2021 GMT
*  expire date: Jun  5 20:29:53 2021 GMT
*  issuer: CN=Caddy Local Authority - ECC Intermediate
*  SSL certificate verify result: unable to get local issuer certificate (20), continuing anyway.
* Using HTTP2, server supports multi-use
* Connection state changed (HTTP/2 confirmed)
* Copying HTTP/2 data in stream buffer to connection buffer after upgrade: len=0
* Using Stream ID: 1 (easy handle 0x55c036351e10)
> GET / HTTP/2
> Host: localhost
> user-agent: curl/7.68.0
> accept: */*
> 
* TLSv1.3 (IN), TLS handshake, Newsession Ticket (4):
* http2 error: Remote peer returned unexpected data while we expected SETTINGS frame.  Perhaps, peer does not support HTTP/2 properly.
* OpenSSL SSL_write: Broken pipe, errno 32
* Failed sending HTTP2 data
* Connection #0 to host localhost left intact
curl: (55) OpenSSL SSL_write: Broken pipe, errno 32

5. What I already tried:

Since layer4 is binding :443, I assume I always need to terminate my HTTPS requests there, because I expected an error if the http app tried to listen on :443 as well.

I’ve tried various combinations of localhost vs 127.0.0.1 and moving logic between matchers and listen blocks, but nothing worked, and it was hard to tell whether I was going in the right direction.

6. Links to relevant resources:

GitHub - mholt/caddy-l4: Layer 4 (TCP/UDP) app for Caddy has various examples, but none using layer4 together with the http app.

Interestingly, while testing one configuration or the other, I noticed that Caddy does not actually complain when both apps listen on :443, but both cannot work at the same time.

What seems to happen is that the http app starts listening first (sending "no certificate available" errors to the passthrough requests), and then after 10-30s the layer4 app takes over and regular HTTPS requests no longer work.

Then it switches back: HTTPS works but layer4 doesn’t. The period of these swaps seems random.

Here is a set of consecutive logs showing the alternating serving behavior:

Jun 06 09:39:58 localhost caddy[1250758]: {"level":"debug","ts":1622939998.718856,"logger":"http.stdlib","msg":"http: TLS handshake error from 209.141.50.61:56498: no certificate available for 'a4ya2dhp6j654.e1cmyhhndp0ep.cdn.network'"}
Jun 06 09:40:18 localhost caddy[1250758]: {"level":"debug","ts":1622940018.7203608,"logger":"layer4.matchers.tls","msg":"matched","server_name":"a4ya2dhp6j654.e1cmyhhndp0ep.cdn.network"}
Jun 06 09:40:18 localhost caddy[1250758]: {"level":"debug","ts":1622940018.7208138,"logger":"layer4.handlers.proxy","msg":"dial upstream","address":"localhost:4433"}
Jun 06 09:40:18 localhost caddy[1250758]: {"level":"debug","ts":1622940018.736446,"logger":"layer4","msg":"connection stats","read":686,"written":16672,"duration":0.016431365}
Jun 06 09:40:38 localhost caddy[1250758]: {"level":"debug","ts":1622940038.7183166,"logger":"layer4.matchers.tls","msg":"matched","server_name":"a4ya2dhp6j654.e1cmyhhndp0ep.cdn.network"}
Jun 06 09:40:38 localhost caddy[1250758]: {"level":"debug","ts":1622940038.7197387,"logger":"layer4.handlers.proxy","msg":"dial upstream","address":"localhost:4433"}
Jun 06 09:40:38 localhost caddy[1250758]: {"level":"debug","ts":1622940038.7321837,"logger":"layer4","msg":"connection stats","read":686,"written":16674,"duration":0.01390892}
Jun 06 09:40:58 localhost caddy[1250758]: {"level":"debug","ts":1622940058.7183127,"logger":"http.stdlib","msg":"http: TLS handshake error from 209.141.50.61:56524: no certificate available for 'a4ya2dhp6j654.e1cmyhhndp0ep.cdn.network'"}

Here are the startup logs showing both apps binding:

Jun 06 09:39:23 localhost caddy[1250758]: {"level":"info","ts":1622939963.56888,"msg":"using provided configuration","config_file":"/srv/popple/caddy/caddy.yaml","config_adapter":"yaml"}
Jun 06 09:39:23 localhost caddy[1250758]: {"level":"info","ts":1622939963.570597,"logger":"admin","msg":"admin endpoint started","address":"tcp/localhost:2019","enforce_origin":false,"origins":["localhost:2019","[::1]:2019","127.0.0.1:2019"]}
Jun 06 09:39:23 localhost caddy[1250758]: {"level":"info","ts":1622939963.5712297,"logger":"tls.cache.maintenance","msg":"started background certificate maintenance","cache":"0xc000353f10"}
Jun 06 09:39:23 localhost caddy[1250758]: {"level":"info","ts":1622939963.5788026,"logger":"http","msg":"server is listening only on the HTTPS port but has no TLS connection policies; adding one to enable TLS","server_name":"blog.me","https_port":443}
Jun 06 09:39:23 localhost caddy[1250758]: {"level":"info","ts":1622939963.5791519,"logger":"http","msg":"enabling automatic HTTP->HTTPS redirects","server_name":"blog.me"}
Jun 06 09:39:23 localhost caddy[1250758]: {"level":"info","ts":1622939963.6045275,"logger":"pki.ca.local","msg":"root certificate is already trusted by system","path":"storage:pki/authorities/local/root.crt"}
Jun 06 09:39:23 localhost caddy[1250758]: {"level":"warn","ts":1622939963.605018,"logger":"tls","msg":"stapling OCSP","error":"no OCSP stapling for [localhost]: no OCSP server specified in certificate"}
Jun 06 09:39:23 localhost caddy[1250758]: {"level":"debug","ts":1622939963.6051888,"logger":"http","msg":"starting server loop","address":"[::]:443","http3":false,"tls":true}
Jun 06 09:39:23 localhost caddy[1250758]: {"level":"debug","ts":1622939963.6053665,"logger":"http","msg":"starting server loop","address":"127.0.0.1:5733","http3":false,"tls":false}
Jun 06 09:39:23 localhost caddy[1250758]: {"level":"debug","ts":1622939963.6053953,"logger":"http","msg":"starting server loop","address":"[::]:80","http3":false,"tls":false}
Jun 06 09:39:23 localhost caddy[1250758]: {"level":"info","ts":1622939963.6054025,"logger":"http","msg":"enabling automatic TLS certificate management","domains":["localhost","127.0.0.1"]}
Jun 06 09:39:23 localhost caddy[1250758]: {"level":"info","ts":1622939963.605437,"logger":"tls.renew","msg":"acquiring lock","identifier":"localhost"}
Jun 06 09:39:23 localhost caddy[1250758]: {"level":"info","ts":1622939963.6053689,"logger":"tls","msg":"cleaning storage unit","description":"FileStorage:/srv/popple/.local/share/caddy"}
Jun 06 09:39:23 localhost caddy[1250758]: {"level":"warn","ts":1622939963.6058073,"logger":"tls","msg":"stapling OCSP","error":"no OCSP stapling for [localhost]: no OCSP server specified in certificate"}
Jun 06 09:39:23 localhost caddy[1250758]: {"level":"info","ts":1622939963.6065838,"logger":"tls","msg":"finished cleaning storage units"}
Jun 06 09:39:23 localhost caddy[1250758]: {"level":"warn","ts":1622939963.6066191,"logger":"tls","msg":"stapling OCSP","error":"no OCSP stapling for [127.0.0.1]: no OCSP server specified in certificate"}
Jun 06 09:39:23 localhost caddy[1250758]: {"level":"debug","ts":1622939963.6068246,"logger":"layer4","msg":"listening","address":"tcp/[::]:443"}
Jun 06 09:39:23 localhost caddy[1250758]: {"level":"info","ts":1622939963.6070173,"msg":"autosaved config (load with --resume flag)","file":"/srv/popple/.config/caddy/autosave.json"}
Jun 06 09:39:23 localhost systemd[871310]: Started Caddy

Of course, I didn’t expect binding both to work, but I thought it was relevant, so I’m including this extra bit of information.

Caddy permits multiplexing multiple servers onto the same bound socket using its Listen() methods, which is what makes graceful config reloads possible. When multiple servers bind to the same socket, it’s nondeterministic which one “gets” a given connection. The “random” swapping might be related to connection lifetimes/timeouts.

Generally, it is user error to bind separate services to the same socket, but as I said, Caddy technically allows it so that a new config can gracefully take the place of an old one. This is why graceful reloads work on Windows, not just on POSIX-compliant systems; a relatively unique feature.
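The alternating behavior above is easy to reproduce in miniature. Caddy itself shares the actual listener file descriptor between old and new servers during reloads, but the "two listeners, one port, the kernel picks" effect can be imitated with SO_REUSEPORT (Linux/macOS). This is purely an illustration of the nondeterminism, not how Caddy is implemented internally:

```python
# Two independent listeners on the exact same address via SO_REUSEPORT.
# The kernel decides per-connection which listener's accept queue is used,
# mimicking the "sometimes http app answers, sometimes layer4" symptom.
import socket
import select

def reuseport_listener(port: int = 0) -> socket.socket:
    s = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    s.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEPORT, 1)
    s.bind(("127.0.0.1", port))
    s.listen(8)
    return s

a = reuseport_listener()
port = a.getsockname()[1]
b = reuseport_listener(port)  # second "server" on the very same port

counts = {a: 0, b: 0}
clients = []
for _ in range(8):
    clients.append(socket.create_connection(("127.0.0.1", port)))
    ready, _, _ = select.select([a, b], [], [], 1.0)
    for listener in ready:
        conn, _ = listener.accept()
        counts[listener] += 1
        conn.close()

print("listener A:", counts[a], "listener B:", counts[b])
for c in clients:
    c.close()
a.close()
b.close()
```

Running it a few times shows the split between the two listeners varies from run to run, the same kind of nondeterminism seen when both apps bound :443.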

Ah, that’s interesting. That’s pretty cool! I think I had seen bind errors in other contexts which is why I was surprised but it looks like it’s nothing to worry about!

Thanks for the context!


Thanks to an old GitHub issue covering this topic, I think I now have a fairly good understanding of what’s going on and of the best way forward for my setup (SSL passthrough proxy + normal Caddy http servers/reverse_proxies)!

A lot of interesting details are in the issue here

First, why was I getting errors with my posted config?
My initial config tried to solve the problem by terminating TLS in layer4 and proxying to a normal http setup in the http app. This wasn’t a complete solution, but strangely it also didn’t work at all!

The root cause was that curl and Caddy use HTTP/2 by default, but that doesn’t work when terminating TLS in layer4 and then proxying plain HTTP upstream. You get an error like:

http2 error: Remote peer returned unexpected data while we expected SETTINGS frame.  Perhaps, peer does not support HTTP/2 properly.

I don’t know the exact mechanics behind why this breaks (since the problem in the GitHub issue is not quite the same) but it can be fixed in two ways:

  1. Configure the TLS handler to only advertise HTTP/1.1 via ALPN:
{
	"handler": "tls",
	"connection_policies": [
		{"alpn": ["http/1.1"]}
	]
}
  2. Or configure curl to only try HTTP/1.1:
    curl -Lv --http1.1 https://localhost

This is fine if your upstream doesn’t support HTTP/2, but if it does, it’s not great, since performance will be degraded IIUC.

It also breaks most of Caddy’s automatic HTTPS functionality. You’ll still be able to automate certificates via the tls app, but you won’t get automatic HTTP → HTTPS redirects, for example.
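To make the "expected SETTINGS frame" error concrete: after ALPN negotiates h2, the client sends a connection preface and then expects a 9-byte HTTP/2 frame header whose type byte is 0x4 (SETTINGS, RFC 7540 §4.1 and §6.5). When layer4 terminates TLS and proxies to a plain HTTP/1.1 port, the bytes coming back are an HTTP/1.1 response, which parses as nonsense. A small illustrative sketch (not Caddy or curl code):

```python
# What an HTTP/2 client sends first, before any frames:
PREFACE = b"PRI * HTTP/2.0\r\n\r\nSM\r\n\r\n"

def parse_frame_header(data: bytes):
    """Parse a 9-byte HTTP/2 frame header: length(3), type(1), flags(1), stream id(4)."""
    length = int.from_bytes(data[:3], "big")
    ftype, flags = data[3], data[4]
    stream_id = int.from_bytes(data[5:9], "big") & 0x7FFFFFFF
    return length, ftype, flags, stream_id

SETTINGS = 0x4

# A plain HTTP/1.1 upstream answers the preface with text, not frames:
reply = b"HTTP/1.1 400 Bad Request\r\nContent-Length: 0\r\n\r\n"
length, ftype, flags, sid = parse_frame_header(reply[:9])
print("frame type:", hex(ftype), "is SETTINGS?", ftype == SETTINGS)
```

The type byte ends up being `ord('P')` instead of 0x4, which is exactly the "peer does not support HTTP/2 properly" failure curl reports.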

Luckily, the GitHub issue had the answer to that as well though it seemed it wasn’t exactly what that poster wanted. I’ll summarize the salient points:

TLS Passthrough proxy using layer4 app while serving HTTPS file_server and reverse_proxy
The main mental blocker I ran into when trying to solve this was thinking "Because layer4 must listen on :443 for TLS passthrough, I must terminate all regular HTTPS traffic there as well."

Reading the workaround on GitHub made me realize that there was no reason the http app needed to listen on :443 at all. In fact, there are many posts/docs about how Caddy should be configured when it can’t bind :443 directly! Just as with any other service sitting in front of Caddy, we just need layer4 to forward :443 to Caddy for all required traffic (our HTTPS services and ACME challenges, mostly).

The key piece missing from the config posted in the GitHub issue was that it didn’t set https_port in the http app config, which is required if we want Caddy to figure out that we expect automatic HTTPS.

I’ll link my current working config below but first I’ll highlight the key steps in case my config isn’t exactly the same as others.

  1. Add a layer4 app with specific routes for handling your layer 4 traffic in special ways. In my case, I use a tls.sni matcher to forward specific HTTPS domains to a Java backend with a baked-in cert.

  2. Make your last route a catch-all with no matcher that simply proxies to your chosen Caddy https_port. I chose 1337. :slight_smile:

  3. Create an http app config and set https_port to that same port (1337 in my case).

  4. Set up your http servers as normal, using your https_port where you would have used :443.

In retrospect, it’s not so unreasonable, but it did take me a bit to understand what was happening end to end. Also note that my config still has the ACME config set to the staging URL for testing, so don’t copy that part if you’re already using the production CA. (My next step is to actually get this working with real URLs!)
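As an aside, the tls.sni matcher in step 1 can route without terminating TLS because the SNI hostname travels in cleartext in the ClientHello. A rough Python illustration of the idea (caddy-l4 does this in Go; the offsets follow the ClientHello layout in RFC 8446 §4.1.2, and the helper names here are my own):

```python
# Peek at a TLS ClientHello and pull out the server_name (SNI) extension,
# the same information an SNI matcher uses to pick an upstream.
import ssl

def client_hello_bytes(hostname: str) -> bytes:
    """Generate a real ClientHello using an in-memory TLS client."""
    ctx = ssl.create_default_context()
    ctx.check_hostname = False
    ctx.verify_mode = ssl.CERT_NONE
    incoming, outgoing = ssl.MemoryBIO(), ssl.MemoryBIO()
    obj = ctx.wrap_bio(incoming, outgoing, server_hostname=hostname)
    try:
        obj.do_handshake()
    except ssl.SSLWantReadError:
        pass  # expected: the client wrote its hello and now awaits a reply
    return outgoing.read()

def extract_sni(hello: bytes):
    """Walk the ClientHello to the server_name extension (type 0)."""
    # 5-byte record header + 4-byte handshake header + version(2) + random(32)
    i = 5 + 4 + 2 + 32
    i += 1 + hello[i]                                  # session_id
    i += 2 + int.from_bytes(hello[i:i + 2], "big")     # cipher_suites
    i += 1 + hello[i]                                  # compression_methods
    end = i + 2 + int.from_bytes(hello[i:i + 2], "big")
    i += 2
    while i + 4 <= end:
        etype = int.from_bytes(hello[i:i + 2], "big")
        elen = int.from_bytes(hello[i + 2:i + 4], "big")
        if etype == 0:  # server_name: list len(2) + name type(1) + name len(2)
            name_len = int.from_bytes(hello[i + 7:i + 9], "big")
            return hello[i + 9:i + 9 + name_len].decode()
        i += 4 + elen
    return None

print(extract_sni(client_hello_bytes("a4ya2dhp6j654.e1cmyhhndp0ep.cdn.network")))
```

Because only this one extension is read, the handshake bytes can then be replayed to the chosen backend untouched, which is what makes unterminated passthrough possible.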

My final config:

apps:
  layer4:
    servers:
      mdath:
        listen:
          - ':443'
        routes:
          - match:
              - tls:
                  sni:
                    - '*.e1cmyhhndp0ep.cdn.network'
            handle:
              - handler: 'proxy'
                proxy_protocol: 'v2'
                upstreams:
                  - dial:
                      - 'localhost:4433'
          - handle:
              - handler: proxy
                upstreams:
                  - dial:
                      - 127.0.0.1:1337

  http:
    https_port: 1337
    servers:
      blog.me:
        listen:
          - ':1337'
        routes:
          - match:
              - host:
                  - 'localhost'
            handle:
              - handler: file_server
                root: /srv/popple/blog.me
      my_blog:
        listen: ['localhost:5733']
        routes:
          - handle:
              - handler: 'reverse_proxy'
                upstreams:
                  - dial: 'localhost:2368'

  tls:
    automation:
      policies:
        - issuers:
            - module: acme
              email: <email>
              ca: https://acme-staging-v02.api.letsencrypt.org/directory

logging:
  logs:
    default:
      level: 'DEBUG'

edit: Using localhost:1337 as the listen target caused connection refused errors on port 80 once I switched to real domain names, so I changed it to :1337. I had originally wanted to bind to localhost so as not to expose the internal secure port, but it’s fine since my firewall already blocks other ports. I’ll debug it later; I assume I just don’t understand listen yet.
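One last note on the proxy_protocol: "v2" option in the layer4 handler above: it prepends a small binary header to each upstream connection so the backend can recover the real client address despite the proxying. A sketch of what that header looks like for TCP over IPv4, per the HAProxy PROXY protocol v2 spec (the helper name is mine, for illustration only):

```python
# Build a PROXY protocol v2 header for a TCP-over-IPv4 connection:
# 12-byte signature, version/command byte, family/protocol byte,
# 2-byte address length, then src/dst IPs and ports.
import socket
import struct

SIG = b"\x0d\x0a\x0d\x0a\x00\x0d\x0a\x51\x55\x49\x54\x0a"

def proxy_v2_header(src_ip, src_port, dst_ip, dst_port):
    addrs = (socket.inet_aton(src_ip) + socket.inet_aton(dst_ip)
             + struct.pack("!HH", src_port, dst_port))
    # 0x21 = version 2 + PROXY command, 0x11 = TCP over IPv4
    return SIG + bytes([0x21, 0x11]) + struct.pack("!H", len(addrs)) + addrs

hdr = proxy_v2_header("203.0.113.7", 56498, "192.0.2.1", 443)
print(len(hdr))  # 16-byte fixed part + 12 bytes of IPv4 addresses
```

The upstream (my Java backend, here) must be configured to expect this header, otherwise the leading binary bytes will corrupt the TLS handshake it receives.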


Thanks for taking the time to document your findings! I’m sure it’ll be useful for someone else in the future!

:smiley:
