OAuth proxy rejecting some requests

I’m trying to use Caddy as an OAuth proxy for an internal service. It mostly works, but some requests are rejected and redirected (303) to reauthenticate, and I can’t tell why it works for some requests but not others.

The setup is based on this article. Basically, an auth.x domain hosts the login plugin, configured to authenticate with GitHub OAuth, and a service.x domain uses the jwt plugin to gate access to the service. Both Caddy and the service run in Docker on the same Docker network. Here’s the Caddyfile:

auth.i.example.com {
        tls email@example.com
        redir 302 {
                if {path} is /
                / /login
        }
        login {
                github client_id=aaaaaaaaaaaaaaaaa,client_secret=bbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbb
                redirect_check_referer false
                redirect_host_file /etc/caddy/i.example.com-redirect_hosts.txt
                cookie_domain i.example.com
        }
}

(i-auth) {
        jwt {
                path /
                redirect https://auth.i.example.com/login?backTo=https%3A%2F%2F{host}{rewrite_uri_escaped}
                allow sub infogulch
        }
}

service.i.example.com {
        import i-auth
        proxy / service:8080 {
                transparent
        }
}

Everything seems to be configured correctly. I can go to the service domain, and it automatically redirects me to login at the auth domain. The auth domain authenticates with GitHub without issue and redirects me back to the service. And after being redirected back to the service it comes up fine and displays content as expected.

But I can’t interact with the service. Looking at the network tab in the browser dev tools, some requests get a 200 OK with a normal response (HTML, JS, etc.) while other requests get a 303 See Other response. The requests are being made to the same domain the page itself loads from, so I don’t know what is blocking them. Some examples:

Request:
GET https://service.i.example.com/
Cookie: jwt_token=cccccc
Host: service.i.example.com

Response:
200 OK
server: Caddy
content-type: text/html; charset=utf-8
content-length: 2804

Request:
POST https://service.i.example.com/api/endpoints
Content-Type: application/json
Cookie: jwt_token=cccccc
Authorization: Bearer dddddddd

Response:
303 See Other
location: https://auth.i.example.com/login?backTo=https%3A%2F%2Fservice.i.example.com%2Fapi%2Fendpoints
server: Caddy

Does the jwt plugin block requests based on method or content type? I don’t see any configuration for such a thing. I also notice that there’s both an Authorization header and a jwt_token cookie. I’ve been testing in new private windows, so cache/old cookies shouldn’t be interfering here, but I don’t know what difference that could make. What am I missing here?

Are these “other requests” made by the service itself (e.g. polling), or are you browsing to them?

If it’s the former, is that service appending the Authorization header for API requests?

I wonder if JWT is getting hung up on that.


Ah yeah, it looks like that’s what’s happening. It’s a web app, so it adds the Auth header when making requests as I interact with the app. The claims in the Auth header are completely different from the cookie set by login. And http.jwt documents that it checks the Auth header before the cookie, to boot.
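Spelling out that precedence as a sketch (pure illustration in Python; the real plugin is written in Go, and these function and parameter names are made up):

```python
def extract_token(headers, cookies):
    """Mimic http.jwt's documented lookup order: the
    Authorization header is checked before the jwt_token cookie."""
    auth = headers.get("Authorization", "")
    if auth.startswith("Bearer "):
        # The app's own Bearer token wins...
        return auth[len("Bearer "):]
    # ...even when a perfectly valid login cookie is present.
    return cookies.get("jwt_token")

# The app's Bearer token shadows the login cookie:
token = extract_token(
    {"Authorization": "Bearer dddddddd"},
    {"jwt_token": "cccccc"},
)
# token == "dddddddd", which jwt can't validate, hence the 303
```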

Well then. Guess I’m not sure what to do.

Can’t even double-handle it to strip a header somewhere, because the web app will need to append the header from the client’s end and it’ll need to survive all the way through Caddy to the service itself. Unfortunate.
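To spell out why (as I understand Caddy v1’s directive ordering, jwt runs before proxy), the naive single-layer strip would look like this and fail on both counts:

```
service.i.example.com {
        import i-auth
        proxy / service:8080 {
                transparent
                # Too late: jwt has already inspected the header and
                # issued the 303 before header_upstream fires; and even
                # if it hadn't, the service would never receive the
                # Authorization header it needs.
                header_upstream -Authorization
        }
}
```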

Your only recourse might be to open an issue or PR on BTBurke/caddy-jwt, or make some edits and compile yourself, to have it ignore the Authorization header - since I’m not seeing any configuration to that end in the docs.

Right.

Ideally the app would support the Token-Claim-X headers that jwt adds when passing the request to the backend, and not even need its own authentication. Though it would then need some way to add extra claims relevant to the app…
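For example, on the backend that could look something like this sketch (header naming per the http.jwt docs; the function name and example values are made up):

```python
def claims_from_headers(headers):
    """Collect the Token-Claim-* headers that caddy-jwt adds
    to the upstream request after validating the token."""
    prefix = "Token-Claim-"
    return {
        name[len(prefix):].lower(): value
        for name, value in headers.items()
        if name.startswith(prefix)
    }

# e.g. matching the `allow sub infogulch` rule above:
claims = claims_from_headers({
    "Token-Claim-Sub": "infogulch",
    "Host": "service.i.example.com",
})
# claims == {"sub": "infogulch"}
```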

I’ll start with a PR to caddy-jwt.


Actually, while I did say you can’t really double-handle it, you could probably triple-handle it. It’s messy, but might just work in a pinch until you get the proper fix in.

Basically, you’d translate the header to something that won’t hang up JWT, then translate it back. The externally available HTTPS site “renames” the header, then passes the request right along to an internal-only listener with JWT on it, which proxies on to the service (renaming the header back to normal as it goes).

service.i.example.com {
  proxy / :8081 {
    transparent

    # Rename and strip the auth header
    # so it won't trip up the JWT plugin
    header_upstream X-Non-JWT-Authorization {>Authorization}
    header_upstream -Authorization
  }
}

:8081 {
  internal
  import i-auth
  proxy / service:8080 {
    # Don't use transparent twice, it would
    # only overwrite the headers with
    # info from the outside proxy layer
    header_upstream Host {host}

    # Rename the auth header back
    # so the service can use it
    header_upstream Authorization {>X-Non-JWT-Authorization}
    header_upstream -X-Non-JWT-Authorization
  }
}

That is beautiful and horrific, I love it!

I’ll try it out tomorrow. :smiley:


Yes, god help you if you need this trick for prod…

I was inspired by some of the port-detouring shenanigans I’ve seen / engaged in when routing HTTP://:80 and HTTPS://:443 through an edge router to a non-standard port on a Docker host and translating it back to the correct port inside the Caddy container for LetsEncrypt… Fingers crossed it works!


Also, I just realised you don’t want to use transparent twice, because the second one will overwrite X-Real-IP and X-Forwarded-For with info from the top Caddy layer! You should just leave it off the second one, because the headers set by the first one will carry through. You’ll need to set Host, though. I’ve edited the example I posted above to reflect this.


I finally got around to trying this and it’s working great, thank you!

One thing I had to change is that the internal site had to be defined as just :8081 { ..., not localhost:8081 { .... Seems Caddy isn’t happy serving from “localhost”, but it can serve “:8081” just fine. I also tried 127.0.0.1 to no avail. Odd, but ultimately not an issue, since I have Caddy firewalled off from serving anything but 80 and 443 externally.

Thanks again!


I almost can’t believe it, it just seems like such a filthy hack :stuck_out_tongue:

I’m simultaneously proud and, well, disgusted is maybe a strong word… I’m sure it’s fine. :+1:

Oh, yeah, that makes absolute sense - we’re trying to use transparent to preserve the Host all the way through, as a matter of prudence. You could instead proxy to :8081 and rename the site label for that site to service.i.example.com:8081, or just use :8081 on both, it’s internal so whatever. Same issue for 127.0.0.1.

Anyway, I’ll update the accepted solution again.

Exactly, and internal should protect it from an external request anyway (it’ll throw 404s), so you can expect it to be pretty secure.
