Is it possible to implement an SSH reverse proxy in Caddy (ngrok/Kadeessh)?

1. The problem I’m having:

For security reasons, I want to open only ports 80/443 and have Caddy reverse proxy SSH. For example, proxy1.example.com points to server1, and proxy2.example.com points to server2. I saw the ngrok and Kadeessh modules introduced on the website, which gave me hope. After several days of trying I have not achieved the goal, so I really want to know whether my idea is even possible. If it cannot be achieved, there is no need to keep exploring this. My config is not in JSON format; it is already very complex, and the cost of converting and maintaining it would be very high.

2. Error messages and/or full log output:


3. Caddy version:

v2.7.5 h1:HoysvZkLcN2xJExEepaFHK92Qgs7xAiCFydN5x5Hs6Q=

4. How I installed and ran Caddy:

export version=$(curl -s "https://api.github.com/repos/caddyserver/caddy/releases/latest" | jq -r .tag_name)


/usr/local/go/bin/xcaddy build ${version} \
  --output ./caddy_${version} \
  --with github.com/caddy-dns/cloudflare \
  --with github.com/caddy-dns/dnspod \
  --with github.com/caddy-dns/alidns \
  --with github.com/mholt/caddy-dynamicdns \
  --with github.com/mholt/caddy-events-exec \
  --with github.com/WeidiDeng/caddy-cloudflare-ip \
  --with github.com/caddy-dns/gandi \
  --with github.com/hairyhenderson/caddy-teapot-module \
  --with github.com/kirsch33/realip \
  --with github.com/porech/caddy-maxmind-geolocation \
  --with github.com/caddyserver/transform-encoder \
  --with github.com/caddyserver/forwardproxy@caddy2 \
  --with github.com/kadeessh/kadeessh \
  --with github.com/mohammed90/caddy-ngrok-listener

a. System environment:

Ubuntu 22.04.3 LTS

b. Command:


c. Service/unit/compose file:


d. My complete Caddy config:

ssh {
  "grace_period": 0,
  "servers": {
   "admin02.ssh.example.com" {
      "address": "192.168.225.209:22"
      "localforward": {
        "forward": "allow"
      }
      "configs" : { 
            "loader" : "provided"
            "singer" : {
                 "module" : "fallback"
             }
             "authentication": {
               "public_key": {
                  "providers": "os"
                }
             }
      }
    }
  }
} 

5. Links to relevant resources:

My understanding is that Caddy handles the HTTP protocol, which is based on TCP. The ideal setup would be through the Caddyfile, because it lets me modularize the configuration; that is what I do now, and it is very similar to an nginx configuration. The maintenance cost of JSON is much higher. Although this is not the original intention of Caddy, it is very useful for me. I searched with keywords like ssh and tunnel.

This reads like you’re trying to mix Caddyfile and JSON. You can’t do that.

Kadeessh doesn’t support Caddyfile, so you’ll need to only use JSON. You can convert your Caddyfile to JSON with caddy adapt -p and adjust it from there.
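
For example, something like this produces pretty-printed JSON to start from (the Caddyfile path here is an assumption; point it at your own):

# Adapt a Caddyfile to pretty-printed JSON and save it for editing
caddy adapt --config /etc/caddy/Caddyfile --pretty > caddy.json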

That said, I can’t really help with Kadeessh, I don’t use it myself so I’ll have to defer to @Mohammed90 when he has time to look into this.

Thank you very much for your reply. Actually, I didn't mix JSON and Caddyfile; below is the composition of my Caddyfile directory:

admin.caddy
Caddyfile
hosts
server1.example.com
server2.example.com

In the Caddyfile, I use import, like this:

#import /etc/caddy/hosts/*.caddy
import /home/richardson/.config/caddy/*.caddy
import /home/richardson/.config/caddy/widecardcert/*.caddy
import /home/richardson/.config/caddy/hosts/*.caddy

If I use the JSON format, this advantage of modularity is gone.

In fact, this was one of my many failed attempts

I understand, but it’s the only option for using some of these plugins for the time being.

like this file

https://www.hokang.info/install-caddy-as-forward-proxy-server/

host:port {                     // (1)
    root /home/www              // (2)
    ipfilter / {                // (3)
        rule allow
       
    }
    forwardproxy {
        hide_ip                 // (5)
    }
}

If the keyword forwardproxy were ssh, it would be perfect.

Thank you, you reminded me: maybe I can consider opening other ports, such as 8080 and 8443, and starting another Caddy process. Maybe that is an option.

That’s from Caddy v1, and no longer relevant.

forwardproxy is this plugin.

Yes, that’s certainly an option.

{
  "apps": {
    "ssh": {
      "grace_period": "2s",
      "servers": {
        "srv0": {
          "address": "tcp/0.0.0.0:2000-2012",
          "configs": [
            {
              "config": {
                "loader": "provided",
                "signer": {
                  "module": "fallback"
                },
                "authentication": {
                  "public_key": {
                    "providers": {
                      "os": {}
                    }
                  }
                }
              }
            }
          ],
          "localforward": {
            "forward": "allow"
          }
        }
      }
    }
  }
}

This is the sample from the GitHub repo; in my case, I want to use it for a jump server. I really don't understand: if I use the PuTTY client to access different intranet servers through this Caddy over SSH, how should I configure it? Does srv0 in the example represent the target server, or is it just used to distinguish configurations?

{
  "admin":{
          "listen": "localhost:2023"
  },
  "apps": {
      "ssh": {
         "grace_period": "2s",
         "servers": {
             "srv0": {
                "address": "tcp/0.0.0.0:2000-2012",
                "configs": [
                  {
                   "config": {
                     "loader": "provided",
                     "signer": {
                         "module": "fallback"
                     },
                     "authentication": {
                         "public_key": {
                            "providers": {
                              "os": {}
                             }
                          }
                      }
                    }
                  }
                ],
               "localforward": {
                   "forward": "allow"
               }
            }
      }
   }
 }
}


this is my config file

The ngrok plugin is currently part of the HTTP listener wrapper, so routing connections outside of the Caddy http app is nearly impossible. I'd stick with Kadeessh.

The configuration is close to what you want, but requires a bit of modification. I’ll explain this config, then I’ll show you the final modified version so you understand what you have to validate and change according to your needs.

Starting with srv0: this is just an identifier for the server within this key. It does not represent the target server; it is just there to distinguish this particular server configuration set. The name can be anything, e.g. tunnel_config or john_doe. For example, if you need to listen on port 22 for shell access and on port 2020 for tunneling, you'd have 2 keys inside servers: one listening only on port 22 for shell access, and the other listening only on port 2020 for forwarding. In that case, the Kadeessh configuration would be:

{
  "apps": {
    "ssh": {
      "grace_period": "2s",
      "servers": {
        "shell_server": {
          "address": "tcp/0.0.0.0:22",
          "pty": {
            "pty": "allow"
          },
          "configs": [
            {
              "config": {
                "loader": "provided",
                "no_client_auth": false,
                "authentication": {
                "public_key": {
                  "providers": {
                    "os": {}
                  }
                }
              }
              }
            }
          ],
          "actors": [
            {
              "act": {
                "action": "shell"
              }
            }
          ]
        },
        "tunnel_server": {
          "address": "tcp/0.0.0.0:2020",
          "configs": [
            {
              "config": {
                "loader": "provided",
                "signer": {
                  "module": "fallback"
                },
                "authentication": {
                  "public_key": {
                    "providers": {
                      "os": {}
                    }
                  }
                }
              }
            }
          ],
          "localforward": {
            "forward": "allow"
          }
        }
      }
    }
  }
}

See, it doesn’t have to say srv0 or follow any particular format; it’s a name set by the user for their own reference. The address field is a network address for which Kadeessh will establish the server’s listeners.
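
Note that address uses Caddy's network-address syntax, which is also how the earlier sample listened on a whole range of ports. Two illustrative forms:

tcp/0.0.0.0:22          a single port
tcp/0.0.0.0:2000-2012   a port range: one listener per port in the range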

Then comes configs. It’s plural because the configuration can vary per incoming connection; for instance, you might want to skip authentication for connections from the local network (LAN) but enforce it for external ones (WAN). Objects inside the configs array have 2 fields: match and config. The match part is what allows you to change behavior based on aspects of the connection (currently only IP addresses, plus a not matcher for negation), as sketched below.
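
For illustration, a sketch of a configs array that relaxes auth for the LAN but requires keys from everyone else. The exact shape of match and the ranges field name are assumptions modeled on Caddy's HTTP remote_ip matcher and the ssh.config_matchers.remote_ip module listed later in this thread, so verify against the Kadeessh docs:

"configs": [
  {
    "match": {
      "remote_ip": {
        "ranges": ["192.168.0.0/16"]
      }
    },
    "config": {
      "loader": "provided",
      "no_client_auth": true
    }
  },
  {
    "config": {
      "loader": "provided",
      "authentication": {
        "public_key": {
          "providers": {
            "os": {}
          }
        }
      }
    }
  }
]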

Inside config, we start with loader to tell Kadeessh where to get the configuration from. This is future-proofing to modularize the source of the configuration. Don’t worry much about it, but know that provided means the configuration is given within the same file itself. As for this part:

"signer": {
    "module": "fallback"
}

This tells Kadeessh to look for the server’s private/public keys in storage (which don’t exist on a fresh install), load them, and generate new keys if absent. RSA and ed25519 keys are loaded, and generated if absent; ecdsa keys are only loaded, never generated. DSA keys are ignored.

Now we come to authentication

"authentication": {
  "public_key": {
    "providers": {
      "os": {}
    }
  }
}

This tells Kadeessh to validate users based on their public/private key pair, using the OS (operating system) as the source of data. In the operating system, the keys of authorized users are placed under the user’s home directory, specifically in ~/.ssh/authorized_keys. So Kadeessh will take the user name, check the file under that user’s home directory, and see whether a matching key exists inside the file.
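
As a concrete illustration of that lookup (standard OpenSSH layout; the caddyssh user name is just this thread's example):

# On the Kadeessh host: authorize a client's public key for user "caddyssh"
mkdir -p /home/caddyssh/.ssh
cat client_id_rsa.pub >> /home/caddyssh/.ssh/authorized_keys
chmod 700 /home/caddyssh/.ssh
chmod 600 /home/caddyssh/.ssh/authorized_keys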

Lastly, there’s the localforward part

"localforward": {
    "forward": "allow"
}

By default, Kadeessh will not allow tunneling of any sort. This part tells Kadeessh to allow tunneling requests if they come through.
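
For reference, the kind of client request this permits is a plain OpenSSH local forward through the Kadeessh listener (host and addresses are placeholders):

# Bind localhost:8022 and tunnel it through the server to 192.168.1.10:22
ssh -p 2020 -N -L 8022:192.168.1.10:22 caddyssh@your-server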

The complete structure of the Kadeessh configuration format is available in the Caddy documentation.

Now we come to the config you need. I understand you can only listen on ports 80 and 443. If you’ll only accept SSH connections on them, then you can do something like this (subject to your validation and scrutiny), where we tell Kadeessh to listen on port 443 and only process tunneling requests after successful authentication against the users’ keys as they exist in the OS:

{
  "apps": {
    "ssh": {
      "grace_period": "2s",
      "servers": {
        "tunnel_server": {
          "address": "tcp/0.0.0.0:443",
          "configs": [
            {
              "config": {
                "loader": "provided",
                "signer": {
                  "module": "fallback"
                },
                "authentication": {
                  "public_key": {
                    "providers": {
                      "os": {}
                    }
                  }
                }
              }
            }
          ],
          "localforward": {
            "forward": "allow"
          }
        }
      }
    }
  }
}
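
A client would then tunnel through port 443 like so (placeholder host and target):

ssh -p 443 -N -L 8022:192.168.1.10:22 caddyssh@proxy1.example.com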

If you want to serve both HTTP and SSH on the same ports, then this requires an additional Caddy module, i.e. layer4. You’ll need to combine the Kadeessh, layer4, and http apps to come up with this configuration (disclaimer: I have not vetted this config thoroughly, but I assure you the capability exists within the Caddy ecosystem):

{
  "apps": {
    "http": {
      "http_port": 8080,
      "https_port": 8443,
      "servers": {
        "srv0": {
          "listen": [
            "tcp/127.0.0.1:8443"
          ],
          "routes": [
            {
              "match": [
                {
                  "host": [
                    "proxy1.example.com"
                  ]
                }
              ],
              "handle": [
                {
                  "handler": "subroute",
                  "routes": [
                    {
                      "handle": [
                        {
                          "body": "OK!",
                          "handler": "static_response"
                        }
                      ]
                    }
                  ]
                }
              ],
              "terminal": true
            }
          ]
        }
      }
    },
    "layer4": {
      "multiplexer": {
        "listen": ["tcp/0.0.0.0:443", "tcp/0.0.0.0:80"],
        "routes": [
          {
            "match": [{
              "ssh": {}
            }],
            "handle": [
              {
                "handler": "proxy",
                "upstreams": [
                  {"dial": ["127.0.0.1:2020"]}
                ]
              }
            ]
          },
          {
            "match": [
              {"http": []}
            ],
            "handle": [
              {
                "handler": "subroute",
                "routes":[
                  {
                    "match": [
                      {
                        "tls": {}
                      }
                    ],
                    "handler": [
                      {
                        "handler": "proxy",
                        "upstreams": [
                          {"dial": ["127.0.0.1:8443"]}
                        ]
                      }
                    ]
                  },
                  {
                    "handler": [
                      {
                        "handler": "proxy",
                        "upstreams": [
                          {"dial": ["127.0.0.1:8080"]}
                        ]
                      }
                    ]
                  }
                ]
              }
            ]
          }
        ]
      }
    },
    "ssh": {
      "grace_period": "2s",
      "servers": {
        "tunnel_server": {
          "address": "tcp/127.0.0.1:2020",
          "configs": [
            {
              "config": {
                "loader": "provided",
                "signer": {
                  "module": "fallback"
                },
                "authentication": {
                  "public_key": {
                    "providers": {
                      "os": {}
                    }
                  }
                }
              }
            }
          ],
          "localforward": {
            "forward": "allow"
          }
        }
      }
    }
  }
}

The above config follows this logic in processing incoming connections:

  • If the connection is an SSH connection, proxy it to the local SSH server.
  • Otherwise, if it’s an HTTP connection, follow the below logic:
    – If the connection is TLS connection, proxy it to the https/TLS port of the HTTP server (which is Caddy, just next door).
    – Otherwise proxy the request to the plain HTTP port of the HTTP server.

This way, you can serve both SSH and HTTP traffic on the same ports, and SSH users are authenticated against the public/private keys that already exist on the operating system before being proxied to the other internal server.
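
Two quick smoke tests for that routing logic once the config loads (hypothetical host name):

# Matched by the tls branch and proxied to the HTTP server on 127.0.0.1:8443
curl -I https://proxy1.example.com

# Matched by the ssh branch and proxied to Kadeessh on 127.0.0.1:2020
ssh -p 443 caddyssh@proxy1.example.com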

Finally, regarding PuTTY, I can’t help with its configuration because I don’t use it. However, I found the following links showing how to use PuTTY for SSH tunnels:


Thank you very much for your patient reply, it’s very nice. But there is a small problem: when using your JSON file, an error occurs. Do I need to compile something else? The error message is as follows:

Error: loading initial config: loading new config: loading layer4 app module: decoding module config: layer4: json: unknown field "multiplexer"
Error: caddy process exited with error: exit status 1

Below are my caddy compilation options. Do I need to add anything else?

export version=$(curl -s "https://api.github.com/repos/caddyserver/caddy/releases/latest" | jq -r .tag_name)

/usr/local/go/bin/xcaddy build ${version} --output ./caddy_${version} \
--with github.com/caddy-dns/cloudflare \
--with github.com/caddy-dns/dnspod \
--with github.com/caddy-dns/alidns \
--with github.com/mholt/caddy-dynamicdns \
--with github.com/mholt/caddy-events-exec \
--with github.com/WeidiDeng/caddy-cloudflare-ip \
--with github.com/caddy-dns/gandi \
--with github.com/hairyhenderson/caddy-teapot-module \
--with github.com/kirsch33/realip \
--with github.com/porech/caddy-maxmind-geolocation \
--with github.com/caddyserver/transform-encoder \
--with github.com/caddyserver/forwardproxy@caddy2=github.com/klzgrad/forwardproxy@naive \
--with github.com/caddyserver/forwardproxy@caddy2 \
--with github.com/kadeessh/kadeessh \
--with github.com/mohammed90/caddy-ngrok-listener \
--with github.com/mholt/caddy-l4/layer4 

Your reply gave me an idea: maybe I can start 2 Caddy processes. One process uses the Caddyfile, listens on ports such as 8443 and 8080, and handles my current business; the other uses the current JSON format, listens on ports 80 and 443, and handles both HTTP and SSH. If it proves feasible in the end, it could be regarded as a solution.

# caddy list-modules | grep mu
http.reverse_proxy.upstreams.multi

I tried modifying the JSON file like this, but without success.

{
  "admin":{
	  "listen": "localhost:2023"
  },
  "apps": {
    "http": {
      "http_port": 6080,
      "https_port": 6443,
      "servers": {
        "srv0": {
          "listen": [
            "tcp/127.0.0.1:6443"
          ],
          "routes": [
            {
              "match": [
                {
                  "host": [
                    "proxy.caddy.example.com"
                  ]
                }
              ],
              "handle": [
                {
                  "handler": "subroute",
                  "routes": [
                    {
                      "handle": [
                        {
                          "body": "OK!",
                          "handler": "static_response"
                        }
                      ]
                    }
                  ]
                }
              ],
              "terminal": true
            }
          ]
        }
      }
    },
    "layer4": {
      "servers": {
        "example": {
          "listen": ["tcp/0.0.0.0:8443", "tcp/0.0.0.0:8080"],
          "routes": [
            {
              "match": [{
                "ssh": {}
              }],
              "handle": [
                {
                  "handler": "proxy",
                  "upstreams": [
                    {"dial": ["127.0.0.1:2020"]}
                  ]
                }
              ]
            },
            {
              "match": [
                {"http": []}
              ],
              "handle": [
                {
                  "handler": "subroute",
                  "routes":[
                    {
                      "match": [
                        {
                          "tls": {}
                        }
                      ],
                      "handler": [
                        {
                          "handler": "proxy",
                          "upstreams": [
                            {"dial": ["127.0.0.1:6443"]}
                          ]
                        }
                      ]
                    },
                    {
                      "handler": [
                        {
                          "handler": "proxy",
                          "upstreams": [
                            {"dial": ["127.0.0.1:6080"]}
                          ]
                        }
                      ]
                    }
                  ]
                }
              ]
            }
          ]
        }
      }
    },
    "ssh": {
      "grace_period": "2s",
      "servers": {
        "tunnel_server": {
          "address": "tcp/127.0.0.1:2020",
          "configs": [
            {
              "config": {
                "loader": "provided",
                "signer": {
                  "module": "fallback"
                },
                "authentication": {
                  "public_key": {
                    "providers": {
                      "os": {}
                    }
                  }
                }
              }
            }
          ],
          "localforward": {
            "forward": "allow"
          }
        }
      }
    }
  }
}


this is the error

Error: loading initial config: loading new config: loading layer4 app module: provision layer4: server 'example': route 0: loading matcher modules: module name 'ssh': unknown module: layer4.matchers.ssh
Error: caddy process exited with error: exit status 1

Below are my modules containing ssh:

caddy list-modules | grep ssh
ssh
ssh.actor_matchers.critical_option
ssh.actor_matchers.extension
ssh.actor_matchers.group
ssh.actor_matchers.not
ssh.actor_matchers.remote_ip
ssh.actor_matchers.user
ssh.actors.shell
ssh.actors.static_response
ssh.ask.localforward.allow
ssh.ask.localforward.deny
ssh.ask.pty.allow
ssh.ask.pty.deny
ssh.ask.reverseforward.allow
ssh.ask.reverseforward.deny
ssh.authentication.flows.interactive
ssh.authentication.flows.password_auth
ssh.authentication.flows.public_key
ssh.authentication.providers.password.static
ssh.authentication.providers.public_key.os
ssh.authentication.providers.public_key.static
ssh.config.loaders.provided
ssh.config_matchers.local_ip
ssh.config_matchers.not
ssh.config_matchers.remote_ip
ssh.session.authorizers.chained
ssh.session.authorizers.max_session
ssh.session.authorizers.public
ssh.session.authorizers.reject
ssh.signers.fallback
ssh.signers.file
ssh.subsystem.inmem_sftp

--with github.com/mholt/caddy-l4/layer4 \
--with github.com/mholt/caddy-l4/modules/l4ssh \
--with github.com/mholt/caddy-l4/modules/l4proxyprotocol \
--with github.com/mholt/caddy-l4/modules/l4proxy \
--with github.com/mholt/caddy-l4/modules/l4http \
--with github.com/mholt/caddy-l4/modules/l4tls \

I added these modules; let’s try :grinning:
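
A quick way to confirm the matchers made it into the new binary (the earlier error named layer4.matchers.ssh as the missing one):

caddy list-modules | grep layer4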

Ah, my JSON had a structural issue. Here’s the fixed version:

{
  "apps": {
    "http": {
      "http_port": 8080,
      "https_port": 8443,
      "servers": {
        "srv0": {
          "listen": [
            "tcp/127.0.0.1:8443"
          ],
          "routes": [
            {
              "match": [
                {
                  "host": [
                    "proxy1.example.com"
                  ]
                }
              ],
              "handle": [
                {
                  "handler": "subroute",
                  "routes": [
                    {
                      "handle": [
                        {
                          "body": "OK!",
                          "handler": "static_response"
                        }
                      ]
                    }
                  ]
                }
              ],
              "terminal": true
            }
          ]
        }
      }
    },
    "layer4": {
      "servers": {
        "multiplexer": {
          "listen": [
            "tcp/0.0.0.0:443",
            "tcp/0.0.0.0:80"
          ],
          "routes": [
            {
              "match": [
                {
                  "ssh": {}
                }
              ],
              "handle": [
                {
                  "handler": "proxy",
                  "upstreams": [
                    {
                      "dial": [
                        "127.0.0.1:2020"
                      ]
                    }
                  ]
                }
              ]
            },
            {
              "handle": [
                {
                  "handler": "subroute",
                  "routes": [
                    {
                      "match": [
                        {
                          "tls": {}
                        }
                      ],
                      "handler": [
                        {
                          "handler": "proxy",
                          "upstreams": [
                            {
                              "dial": [
                                "127.0.0.1:8443"
                              ]
                            }
                          ]
                        }
                      ]
                    },
                    {
                      "handler": [
                        {
                          "handler": "proxy",
                          "upstreams": [
                            {
                              "dial": [
                                "127.0.0.1:8080"
                              ]
                            }
                          ]
                        }
                      ]
                    }
                  ]
                }
              ]
            }
          ]
        }
      }
    },
    "ssh": {
      "grace_period": "2s",
      "servers": {
        "tunnel_server": {
          "address": "tcp/127.0.0.1:2020",
          "configs": [
            {
              "config": {
                "loader": "provided",
                "signer": {
                  "module": "fallback"
                },
                "authentication": {
                  "public_key": {
                    "providers": {
                      "os": {}
                    }
                  }
                }
              }
            }
          ],
          "localforward": {
            "forward": "allow"
          }
        }
      }
    }
  }
}

Also, the way you’re compiling in the layer4 module isn’t correct. Instead of listing all of those submodule paths:

Just do:

--with github.com/mholt/caddy-l4
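
So a rebuild along these lines should cover the pieces used in this thread (your other plugins omitted here for brevity):

/usr/local/go/bin/xcaddy build ${version} \
  --with github.com/kadeessh/kadeessh \
  --with github.com/mholt/caddy-l4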

Thank you very much for your reply. It seems to have started, and the port looks open. What needs to be done next: if I want it to act as a jump host, is .ssh/config where that is configured?


2023/11/27 13:09:47.129 INFO    ssh.authentication.flows.public_key     authentication start    {"providers_count": 1, "remote_address": "127.0.0.1:60708", "username": "caddyssh", "key_type": "ssh-rsa"}
2023/11/27 13:09:47.129 INFO    ssh.authentication.flows.public_key     authentication successful       {"provider": "os", "user_id": "1001", "username": "caddyssh", "key_type": "ssh-rsa"}
2023/11/27 13:09:47.222 INFO    ssh.ask.pty.deny        asking for permission   {"session_id": "bab164ff1efc936704919228edd961bbb4db5bcbc0c4f88947a601ae8273685a", "local_address": "127.0.0.1:2020", "client_version": "SSH-2.0-J", "user": "caddyssh", "terminal": "xterm"}
2023/11/27 13:09:47.222 INFO    ssh.tunnel_server       session ended   {"user": "caddyssh", "remote_ip": "127.0.0.1:60708", "session_id": "bab164ff1efc936704919228edd961bbb4db5bcbc0c4f88947a601ae8273685a"}
2023/11/27 13:09:48.097 INFO    ssh.authentication.flows.public_key     authentication start    {"providers_count": 1, "remote_address": "127.0.0.1:41294", "username": "caddyssh", "key_type": "ssh-rsa"}
2023/11/27 13:09:48.097 INFO    ssh.authentication.flows.public_key     authentication successful       {"provider": "os", "user_id": "1001", "username": "caddyssh", "key_type": "ssh-rsa"}
2023/11/27 13:09:48.195 INFO    ssh.ask.pty.deny        asking for permission   {"session_id": "d197a1b1b41cab582353f14241fd5a537e402982d9f5c6cfefe70aa50ce876b9", "local_address": "127.0.0.1:2020", "client_version": "SSH-2.0-J", "user": "caddyssh", "terminal": "xterm"}
2023/11/27 13:09:48.196 INFO    ssh.tunnel_server       session ended   {"user": "caddyssh", "remote_ip": "127.0.0.1:41294", "session_id": "d197a1b1b41cab582353f14241fd5a537e402982d9f5c6cfefe70aa50ce876b9"}
2023/11/27 13:10:15.951 INFO    http.acme_client    

destsrv01:

.ssh/authorized_keys
contains client01’s id_rsa.pub

jumpsrv01 (Caddy server):

does anything need to go into .ssh/authorized_keys?

In fact, it already has client01’s id_rsa.pub.

client_mac:

ssh -J -i (client's id_rsa) caddyssh@jumpsrv01:8443 richardson@destsrv01:22

but

2023/11/27 14:14:26.422 INFO    ssh.authentication.flows.public_key     authentication start    {"providers_count": 1, "remote_address": "127.0.0.1:54200", "username": "caddyssh", "key_type": "ssh-rsa"}
2023/11/27 14:14:26.422 INFO    ssh.authentication.flows.public_key     authentication failed   {"provider": "os", "username": "caddyssh", "key_type": "ssh-rsa"}
2023/11/27 14:14:26.422 WARN    ssh.authentication.flows.public_key     invalid credentials     {"username": "caddyssh", "key_type": "ssh-rsa"}

I don’t think this command is 100% correct

try:

ssh -i (client's id_rsa) -J caddyssh@jumpsrv01:8443 richardson@destsrv01:22

It works for me. Just double-checked.
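
For convenience, the same jump can be put in ~/.ssh/config so that a plain `ssh destsrv01` works (host names and key path are this thread's placeholders):

Host jumpsrv01
    Port 8443
    User caddyssh
    IdentityFile ~/.ssh/id_rsa

Host destsrv01
    User richardson
    ProxyJump jumpsrv01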