Issue in running Web API reverse proxy with Caddy

Hi there,

So I’m new to Caddy. I’m testing it out today on a project, setting up a simple static website and a web API server.

I managed to set up the static site fine, but I’m having issues setting up the API.

domain.xyz {
  root /to/static/files
  index index.html
}

api.domain.xyz {
  # API load balancer
  proxy / localhost:3001 {
    transparent
  }
}

When I started Caddy with this Caddyfile, it printed the following:

Activating privacy features... done.

Serving HTTPS on port 443
https://domain.xyz
https://api.domain.xyz


Serving HTTP on port 80
http://domain.xyz
http://api.domain.xyz

WARNING: File descriptor limit 1024 is too low for production servers. At least 8192 is recommended. Fix with `ulimit -n 8192`.

So with this, I tested the API and I can see the welcome message on my API endpoint. However, when I exit from the console, the API appears to be shut down or something, which is weird because my API is running under pm2, so it’s always running regardless.

Any ideas as to how I can solve the API issue that I am facing?

Thank you.

Hi @jei, welcome to the Caddy community!

Your Caddyfile and the Caddy startup output both look good to me.

When this happens, what result do you get from a request to the api subdomain? And what result do you get from a request directly to localhost:3001? (And what is pm2?)

Hi @Whitestrake, thank you for the warm welcome :slight_smile:

So in terms of results, I can’t seem to get anything from the subdomain. Only the “This site can’t be reached” message that you get when your connection isn’t working.

For localhost:3001, I was getting a response when I used my internal server IP to start the server. I managed to get a ping back from the server, which is fine.

pm2 is a production process manager for Node.js applications. It allows you to run Node.js apps in the background.

I have managed to set this up fine with nginx, but I wanted to give Caddy a go and learn its architecture, features, etc.

I tried making changes to the config with this:

domain.xyz {
  root /root/to/dist
  index index.html
}

api.domain.xyz {
  # API load balancer
  proxy / http://<ipaddress>:3001 {
    transparent
    header_upstream Host {host}
    header_upstream X-Real-IP {remote}
    header_upstream X-Forwarded-For {remote}
    header_upstream X-Forwarded-Port {server_port}
    header_upstream X-Forwarded-Proto {scheme}
  }
}

I simply want to set up a proxy for the API so it runs on the subdomain mentioned above. I’m not sure if I’m missing anything in the config.

Also, if I make changes to the Caddyfile, can I simply run the caddy command again to apply the new changes?

Thank you.

There should be a message below this that explains how it failed (it couldn’t resolve DNS, the site refused the connection, the server timed out, etc). It’s important to know what actually isn’t working.

Your first config was good. In fact, the five lines that you added are redundant with transparent - you’re essentially setting each header twice, which won’t cause any issues, but you can keep your Caddyfile nice and simple by just sticking with transparent. http:// is also implied and not strictly necessary.
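In other words, the trimmed-down site block is just your original one. (Per the Caddy v1 proxy docs, the transparent preset covers the common forwarding headers for you.)

```
api.domain.xyz {
  proxy / localhost:3001 {
    # transparent already sets Host, X-Real-IP, X-Forwarded-For
    # and X-Forwarded-Proto on the upstream request
    transparent
  }
}
```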

No, this will start a new Caddy server instance. You can instead signal the existing instance with the USR1 signal to have it gracefully reload the new configuration with zero downtime (rolling back to the last good config if the new one has errors).

Ah, you mean this: ERR_CONNECTION_REFUSED?

Noted on this. I will revert those changes. Does it make a difference whether I use localhost or the internal IP address in the server URL? In most cases I see people simply use 0.0.0.0 along with the port.

How do I go about doing this on Ubuntu 18.04? Also, I am running Caddy using the one-step installer bash script from the website.

when I exit from the console, the API appears to be shutdown

Is this because the parent process (the console) shuts down the web server when you exit? (That’s usually the case on most computers, I think on both Windows and Linux.)

I am not sure how Caddy’s architecture works, but yes, when I exit from the process, it seems to shut down the web server. However, my other web server, the one running the static files, is up and running? So I’m slightly confused, @matt

Sounds like Caddy is what’s shutting down when you exit the console, not the API.

Caddy needs to be running to serve requests. ERR_CONNECTION_REFUSED makes sense if Caddy is stopped: there is nothing listening on the HTTP(S) ports, so the connection is refused.

No difference at all if Caddy will be running on the same host as the server.

The easiest way is pkill -SIGUSR1 caddy.

What is your other web server, and what ports is it listening on? (Or do you mean your other site?)

Ah, I see. Makes sense. So how do I make sure it keeps running in the background regardless?

This would be domain.xyz, which renders simple static files from my Vue.js app.

You can use a process manager like systemd. There’s a great example unit file and guide available here: https://github.com/caddyserver/caddy/tree/master/dist/init/linux-systemd
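Abridged, the unit file from that guide boils down to something like this (a sketch; the paths and user follow the linked example and may differ on your install):

```
[Unit]
Description=Caddy HTTP/2 web server
After=network-online.target

[Service]
User=www-data
; -conf points at the central Caddyfile; -agree accepts the Let's Encrypt SA
ExecStart=/usr/local/bin/caddy -log stdout -agree=true -conf=/etc/caddy/Caddyfile
; reload sends the graceful USR1 signal
ExecReload=/bin/kill -USR1 $MAINPID
Restart=on-failure
; also fixes the "file descriptor limit too low" warning from your startup output
LimitNOFILE=8192

[Install]
WantedBy=multi-user.target
```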

You can also run the good old, quick and dirty nohup caddy &. The nohup stops hangup signals from being sent to the process (so it doesn’t stop running if you log off) and & sends it to the background. This, obviously, lacks the benefits of logging, service management, etc.

I would double and triple-check that Caddy is in fact serving these.

Under pretty much no circumstances does Caddy stop serving one site while still serving another (unless, for example, you reload its config). Even if it did, the standard response is to accept the connection and return 404 site not served on this interface.

Also, Caddy can’t tell which site you want before you connect. So, technologically speaking, it’s not really possible for it to not serve the api subdomain by refusing connections while still allowing connections for other domains.

When it happens, you can check easily with pgrep caddy. If you get a process ID number in response, you know Caddy is running. If you get no output from that command, Caddy is stopped.

Ah, right. So I’m essentially using the one-step installer bash script that I downloaded onto my Linux instance, and wrapping Caddy with the systemd process manager?

I will test this out using the link you provided.

Will this allow me to restart caddy when new changes are made to the configuration? Also, does this require me to have a single Caddyfile for all of my domains that need serving?

I have double- and triple-checked that Caddy is in fact serving those files.

I tested using pgrep caddy: no process ID. However, when I run caddy, it does show a process ID. Weirdly enough :astonished:

Yes, you’ll be able to run sudo service caddy reload, and systemd will handle signalling Caddy to load the new configuration.

The service file provided by way of example does require a single central Caddyfile, yes. But you can either modify the service file, or you can use import statements in the central Caddyfile to grab multiple other files.

https://caddyserver.com/docs/import
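For instance (hypothetical paths), the central Caddyfile could be nothing but imports, with one file per site alongside it:

```
# /etc/caddy/Caddyfile -- the single file the service loads
import /etc/caddy/sites/*.conf

# /etc/caddy/sites/api.conf -- one site block per imported file
api.domain.xyz {
  proxy / localhost:3001 {
    transparent
  }
}
```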

This isn’t possible. If the process isn’t running, it can’t receive requests or return responses. I would be looking at browser cache or maybe even DNS (is it actually your Caddy host you’re talking to?).

If the DNS for api and the bare domain resolve to the same IP address, and you connect on the same default HTTP(S) ports, and you can curl one site and find a Server: Caddy header, but curling the other results in a refused connection, I will show you a magic box that defies logic.
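The header check above is just a HEAD request. As a sketch (using a throwaway local Python server as a stand-in, since the real hostnames are specific to this thread):

```shell
# Throwaway local server standing in for the real host; in practice you
# would point curl at https://domain.xyz and https://api.domain.xyz.
python3 -m http.server 8123 --bind 127.0.0.1 >/dev/null 2>&1 &
srv=$!
sleep 1

# -s = quiet, -I = HEAD request (headers only); against a Caddy host
# you would look for "Server: Caddy" in this output.
hdr=$(curl -sI http://127.0.0.1:8123/ | grep -i '^server:')
echo "$hdr"

kill "$srv"
```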

Awesome. I will set this up in the morning. Need some sleep.

Awesome, imports will do just fine.

I am not sure.

The API and website reside on the same IP. I thought you’d need a separate port for the API in addition to the proxy, i.e. api.domain.xyz:8082 {...}, but then again Caddy might handle that differently.

Caddy listens on whatever port you specify. If you specify no port, Caddy uses defaults.

In your Caddy startup output, we can see that the bare domain and the api subdomain are running side by side on the standard HTTP(S) ports.

But does this mean that you need to specify the port along with the URL, i.e. https://api.domain.xyz:8082, for you to access the API?

This shouldn’t cause a conflict, right? As you’re simply stating that it’s running on an SSL port.

No. https://api.domain.xyz will suffice. The default HTTPS port is 443, so this effectively becomes https://api.domain.xyz:443. Caddy then proxies the request internally to the port specified in the upstream (i.e. localhost:3001). I’m not sure what importance port 8082 is meant to have.

Yes, no conflict. After a connection is established, Caddy determines which site to serve based on the client’s requested Host (this is virtual hosting, just like nginx and Apache do). The point is that they’re all on port 443, rather than an arbitrary port like 8082.
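In Caddyfile terms (a sketch reusing the site blocks from earlier in the thread):

```
# Both sites share ports 80/443; Caddy picks the site
# from the hostname the client requested.
domain.xyz {
  root /to/static/files
  index index.html
}

api.domain.xyz {
  proxy / localhost:3001 {
    transparent
  }
}

# Only if you wanted a non-standard port would you write it explicitly,
# and clients would then have to include it in the URL:
# api.domain.xyz:8082 { ... }
```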

Solved it with the link you provided, wrapping caddy.service around the process manager. It would be cool if the devs could bundle this configuration into a package so that others could install with a simple apt install caddy. Not that the documentation was hard to follow; I’m just saying simplicity is key, since Caddy follows that philosophy of automating parts of setting up a web server.

Regardless, I’m going to try out Caddy for the rest of my projects.

Thank you so much for the help @Whitestrake and thank you for being patient with me. I tend to ask a lot of questions :grin::sweat_smile:

No worries.

As for packaging - there are a number of packages available, I believe, but none maintained by the developers of Caddy.

The question has a long history - there’s a long thread with much discussion on the topic here: Packaging Caddy

In short: It’s complicated.