Page Error: The plain HTTP request was sent to HTTPS port

1. Caddy version (caddy version):


2. How I run Caddy:

a. System environment:

Ubuntu 20.04.4 LTS, Docker

b. Command:

docker-compose up -d 

c. Service/unit/compose file:

    version: "3.7"

    services:
      nginx:
        image: nginx:stable-alpine
        container_name: nginx
        networks:
          - internal
        volumes:
          - /mnt/unionfs:/media
          - /opt/appdata/superplex/streaming-server/nginx.conf:/etc/nginx/conf.d/nginx.conf
          - /opt/appdata/superplex/streaming-server/html:/var/www
          - /opt/appdata/superplex/streaming-server/cert.pem:/etc/ssl/cert.pem
          - /opt/appdata/superplex/streaming-server/key.pem:/etc/ssl/key.pem
        ports:
          - "8443:8443"
        restart: always

d. My complete Caddyfile or JSON config:

    # Global options block. Entirely optional, HTTPS is on by default.
    # Optional email key for Let's Encrypt.
    # Optional staging Let's Encrypt endpoint for testing. Comment out for production.

    * {
      tls {
        dns cloudflare KEY
      }

      @nginx host
      handle @nginx {
        reverse_proxy nginx:8443
      }

      handle {
      }
    }

    server {
      listen 8443 default_server ssl;
      ssl_certificate /etc/ssl/cert.pem;
      ssl_certificate_key /etc/ssl/key.pem;

      root /var/www;
    }

3. The problem I’m having:

I’ve previously set up Caddy to use as a reverse proxy. I’m working on an additional project that I want to run on nginx, so I’m trying to extend that reverse proxy to cover my web server. I have SSL set up, but when I try to test the server I get this message:

The plain HTTP request was sent to HTTPS port

As far as I know, everything I have set up should be using HTTPS, so I’m not sure what to do next to configure this properly.

4. Error messages and/or full log output:

caddy log:

{"level":"info","ts":1655068713.171485,"msg":"using provided configuration","config_file":"/etc/caddy/Caddyfile","config_adapter":"caddyfile"}
{"level":"warn","ts":1655068713.173798,"msg":"input is not formatted with 'caddy fmt'","adapter":"caddyfile","file":"/etc/caddy/Caddyfile","line":2}
{"level":"info","ts":1655068713.1801436,"logger":"admin","msg":"admin endpoint started","address":"tcp/localhost:2019","enforce_origin":false,"origins":["localhost:2019","[::1]:2019",""]}
{"level":"info","ts":1655068713.1803634,"logger":"http","msg":"server is listening only on the HTTPS port but has no TLS connection policies; adding one to enable TLS","server_name":"srv0","https_port":443}
{"level":"info","ts":1655068713.180375,"logger":"http","msg":"enabling automatic HTTP->HTTPS redirects","server_name":"srv0"}
{"level":"info","ts":1655068713.1803904,"logger":"tls.cache.maintenance","msg":"started background certificate maintenance","cache":"0xc000217a40"}
{"level":"info","ts":1655068715.5951037,"logger":"http","msg":"enabling automatic TLS certificate management","domains":["*"]}
{"level":"info","ts":1655068715.6051712,"logger":"tls","msg":"cleaning storage unit","description":"FileStorage:/data/caddy"}
{"level":"info","ts":1655068715.6688135,"msg":"autosaved config (load with --resume flag)","file":"/config/caddy/autosave.json"}
{"level":"info","ts":1655068715.6688333,"msg":"serving initial configuration"}
{"level":"info","ts":1655068715.6691058,"logger":"tls","msg":"finished cleaning storage units"}

nginx log: - - [12/Jun/2022:21:18:39 +0000] "GET / HTTP/1.1" 400 255 "-" "curl/7.68.0" "2a01:4f9:6b:3368::2,"

curl -v

*   Trying 2606:4700:3034::ac43:b4b8:443...
* Connected to (2606:4700:3034::ac43:b4b8) port 443 (#0)
* ALPN, offering h2
* ALPN, offering http/1.1
* successfully set certificate verify locations:
*   CAfile: /etc/ssl/certs/ca-certificates.crt
  CApath: /etc/ssl/certs
* TLSv1.3 (OUT), TLS handshake, Client hello (1):
* TLSv1.3 (IN), TLS handshake, Server hello (2):
* TLSv1.3 (IN), TLS handshake, Encrypted Extensions (8):
* TLSv1.3 (IN), TLS handshake, Certificate (11):
* TLSv1.3 (IN), TLS handshake, CERT verify (15):
* TLSv1.3 (IN), TLS handshake, Finished (20):
* TLSv1.3 (OUT), TLS change cipher, Change cipher spec (1):
* TLSv1.3 (OUT), TLS handshake, Finished (20):
* SSL connection using TLSv1.3 / TLS_AES_256_GCM_SHA384
* ALPN, server accepted to use h2
* Server certificate:
*  subject: C=US; ST=California; L=San Francisco; O=Cloudflare, Inc.;
*  start date: Mar 19 00:00:00 2022 GMT
*  expire date: Mar 18 23:59:59 2023 GMT
*  subjectAltName: host "" matched cert's "*"
*  issuer: C=US; O=Cloudflare, Inc.; CN=Cloudflare Inc ECC CA-3
*  SSL certificate verify ok.
* Using HTTP2, server supports multi-use
* Connection state changed (HTTP/2 confirmed)
* Copying HTTP/2 data in stream buffer to connection buffer after upgrade: len=0
* Using Stream ID: 1 (easy handle 0x55c62ca542f0)
> GET / HTTP/2
> Host:
> user-agent: curl/7.68.0
> accept: */*
* TLSv1.3 (IN), TLS handshake, Newsession Ticket (4):
* TLSv1.3 (IN), TLS handshake, Newsession Ticket (4):
* old SSL session ID is stale, removing
* Connection state changed (MAX_CONCURRENT_STREAMS == 256)!
< HTTP/2 400
< date: Sun, 12 Jun 2022 21:38:25 GMT
< content-type: text/html
< cf-cache-status: DYNAMIC
< expect-ct: max-age=604800, report-uri=""
< report-to: {"endpoints":[{"url":"https:\/\/\/report\/v3?s=ggCwKQylsunwfvq%2Bm3M%2BlIlJno1VR0mZI1qNJ0xWpflEeeiiY12onI7x%2BfrQQRARkihsOLDFUxI%2B0CQEoPUk7GkBKMoMt65qNl%2BtNCWZO8UqcpD7gbNez0DbUtImn2u5Zo41Z8GzNKigVUJmGzdY"}],"group":"cf-nel","max_age":604800}
< nel: {"success_fraction":0,"report_to":"cf-nel","max_age":604800}
< server: cloudflare
< cf-ray: 71a5bbbd7dfa9a05-FRA
< alt-svc: h3=":443"; ma=86400, h3-29=":443"; ma=86400
<head><title>400 The plain HTTP request was sent to HTTPS port</title></head>
<center><h1>400 Bad Request</h1></center>
<center>The plain HTTP request was sent to HTTPS port</center>
* Connection #0 to host left intact

I think I may have figured it out. Let me preface this by saying when it comes to SSL certificates and HTTPS and all, I’m extremely dumb. I have muddled my way through this far.

I think my issue is that I’ve configured nginx to open a secure port when Caddy is already doing that. Here is how I imagine it:

Outside Request to Caddy, “Hey I have a secure request, please process.”

Caddy, “Yes this is secure, I will unlock it with my key and see where it should go. Ok, I see the message and it should go down my friend NGINX.”

Caddy to Nginx, “Here you are, I’ve already opened this message for you. What is your response?”

Nginx to Caddy, “Here you go, this is what I need sent back.”

Caddy, “Thanks! I’ll lock this up with my key before I do.”

Caddy to Outside Requestor, “Here you are, your private message from Nginx.”

Is that anything close to what’s going on?


This also sums up what I’m trying to say.


Yeah, essentially. The connection from your client to Caddy is encrypted with TLS, but the connection from Caddy to nginx doesn’t need to be, because nginx is running inside your Docker stack, so the only things that could get in between and mount a man-in-the-middle attack are processes running on that machine (probably as root).

The important thing to think about is “through what machines/devices is the data flowing”; if the data is coming into or out of your private network, then you want to encrypt it so that anyone between your server and your client can’t mess with the data and inject bad stuff, etc.

So yeah, you can proxy Caddy to nginx’s HTTP port. The reverse_proxy directive’s default behaviour is to proxy over HTTP, but it can be configured to proxy over HTTPS (HTTP over TLS) if you need it (like if the upstream server is hosted elsewhere in another datacenter and you can’t necessarily trust all the machines in between).
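As a sketch, here are both forms side by side (the hostname in the second line is a hypothetical placeholder, not from this thread):

```caddyfile
# Default: proxy over plain HTTP to a container on the same Docker network
reverse_proxy nginx:80

# Proxy over HTTPS (HTTP over TLS) when the upstream is on an untrusted network
reverse_proxy https://upstream.example.com
```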

You don’t need that `- "8443:8443"` line, by the way. It binds the port to the host, but Caddy communicates with your container through the Docker network, which uses the port internal to the network. Unless some other application running on the host needs to connect directly to your nginx container without first going through Caddy, binding the port to the host doesn’t help; it just potentially introduces a vulnerability if the firewall wasn’t properly configured to reject outside connections on that port.


Very good point! Thank you!

For anyone who finds this in the future, the answer was removing that port binding. My docker-compose file is now:

    services:
      nginx:
        image: nginx:stable-alpine
        container_name: nginx
        networks:
          - internal
        volumes:
          - /mnt/unionfs:/media
          - /opt/appdata/superplex/streaming-server/nginx.conf:/etc/nginx/conf.d/nginx.conf
          - /opt/appdata/superplex/streaming-server/html:/var/www
          - /opt/appdata/superplex/streaming-server/cert.pem:/etc/ssl/cert.pem
          - /opt/appdata/superplex/streaming-server/key.pem:/etc/ssl/key.pem
        restart: always

and changed my Caddyfile to:

      @nginx host
      handle @nginx {
        reverse_proxy nginx:80
      }
I have a hunch that port 80 isn’t required here, but I’ve left it in anyway.
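On that hunch: a quick sketch, assuming Caddy’s documented default of plain HTTP on port 80 when an upstream address has no scheme or port:

```caddyfile
# Assuming the default (no scheme/port => HTTP on port 80),
# these two lines should be equivalent:
reverse_proxy nginx:80
reverse_proxy nginx
```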

nginx.conf is:

    server {
      listen 80 default_server;

      root /var/www;
    }
Very minimal but I’m able to serve up a test page.

I hope this helps someone in the future. I found lots of references to people having this issue, but no good solutions. It wasn’t until I found that one reddit post that I wondered: why am I trying to open a secure port internally?

Thanks again!


Since it seems like you’re just serving static content with nginx… why not just use Caddy to do that? Caddy’s file_server can do that for you quite well. It’ll be more efficient than proxying through to another server.
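A minimal sketch of that suggestion, assuming the html directory from the compose file above is mounted into the Caddy container at /var/www (the matcher is unchanged from the Caddyfile above):

```caddyfile
@nginx host
handle @nginx {
    root * /var/www
    file_server
}
```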


This topic was automatically closed after 30 days. New replies are no longer allowed.