Local Domain and Reverse Proxy Slow

1. The problem I’m having:

My company has several different web services for different web apps and APIs. I’m trying to essentially recreate our deployed environments locally under a local domain. I have 9 different services running locally on different ports; I’ve added my local domain and some subdomains to /etc/hosts and configured Caddy to route traffic for the domain and subdomains to the correct services.

Everything is working, I suppose, but this setup is significantly slower than hitting the services directly at localhost:port. I tried turning off automatic HTTPS to see if that would speed things up, but it didn’t seem to make a difference. I’ve also read some things about adding DNS settings to my router. I can’t figure out exactly where the bottleneck is.

As an example, I have a web app running on port 3000 that talks to an API running on port 8000. The web app is configured to be available at outdoorsy.local and the API at api.outdoorsy.local. Watching the traffic in my browser’s developer tools, when I go to http://outdoorsy.local it can take 40 seconds to load the initial document. Once that has loaded, the page makes some XHR requests to the API, and those are not particularly slow.

One observation that may be useful involves a Next.js app. The home page doesn’t request any data in getInitialProps (server side), and that page doesn’t feel particularly slow. But another page that uses getInitialProps to make requests to some of the other services can take 40+ seconds to load. That seems to point to a Node process requesting data from another service being much slower than the browser making the same request…
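
For reference, my /etc/hosts entries are one loopback line per (sub)domain in the Caddy config below:

127.0.0.1 outdoorsy.local
127.0.0.1 portal.outdoorsy.local
127.0.0.1 api.outdoorsy.local
127.0.0.1 search.outdoorsy.local
127.0.0.1 campgrounds.outdoorsy.local
127.0.0.1 pricing.outdoorsy.local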

One other question, if it’s OK to ask a couple in a single post: I initially had everything running with automatic HTTPS enabled, but some of the services call other services on the server side, and those calls were failing. Using that Next.js example again: if getInitialProps makes a request to the API for data, the request fails because Node doesn’t like HTTPS on the local domain. I’m not sure whether Node uses the system certificate store; it seems like it doesn’t. Is there any quick way to get that working, or will I have to get my Caddy certificates into that environment somehow?
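
(To clarify what I mean by getting the certificates into that environment: I’m imagining something like exporting Caddy’s internal root CA from the container and pointing Node at it with NODE_EXTRA_CA_CERTS, though the container path and service name here are assumptions on my part:)

# Copy Caddy's internal root CA out of the container; "caddy" is assumed to be
# the compose service name, and /data is Caddy's default data directory.
docker compose cp caddy:/data/caddy/pki/authorities/local/root.crt ./caddy-root.crt

# Start the Node app with the extra CA trusted (NODE_EXTRA_CA_CERTS is read
# once at process startup).
NODE_EXTRA_CA_CERTS="$PWD/caddy-root.crt" npm run dev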

Thanks! Let me know if I can provide any more information. Appreciate any help you can offer!

2. Error messages and/or full log output:

I’m not sure logs are relevant without any errors, but I can provide them if needed.

3. Caddy version:

v2.9.1 h1:OEYiZ7DbCzAWVb6TNEkjRcSCRGHVoZsJinoDR/n9oaY=

4. How I installed and ran Caddy:

I run Caddy from a docker-compose file using the caddy:2.9-alpine image.

a. System environment:

macOS 15.3
Docker version 27.4.0, build bde2b89

b. Command:

Just starting the docker container with docker compose up

c. Service/unit/compose file:
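
The relevant Caddy service is essentially this (reconstructed from memory, so treat the volume paths as approximate):

services:
  caddy:
    image: caddy:2.9-alpine
    ports:
      - "80:80"
      - "443:443"
    volumes:
      # Caddyfile mounted from the repo; /data persists certificates and state
      - ./Caddyfile:/etc/caddy/Caddyfile
      - caddy_data:/data

volumes:
  caddy_data: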

d. My complete Caddy config:

{
  auto_https off
}

# Main local domain for frontend applications
http://outdoorsy.local {
  # Matcher for all routes that should be routed to host tools
  @outdoorsy-nht {
    path /dashboard*
  }

  # Regex matcher for static assets
  @static-assets path_regexp \.(?:css|js|json|jpg|jpeg|gif|png|woff2|woff|ttf|ico|cur|gz|svg|svgz|mp4|ogg|ogv|webm|htc|txt|html|xml|webmanifest)$

  # Expression to determine presence of `new-host-tooling` cookie
  @nht-cookie expression {http.request.cookie.new-host-tooling} == "enabled"

  # Handle static assets (frontend apps use an asset prefix so we can route to the correct upstream)
  handle @static-assets {
    # `outdoorsy-host` (NHT) assets
    handle_path /outdoorsy-host/* {
      reverse_proxy host.docker.internal:4510
    }

    # `outdoorsy-dashboard` (ember host) assets
    handle_path /outdoorsy-host-classic/* {
      reverse_proxy host.docker.internal:4210
    }
  }

  # Handle host tools routes
  handle @outdoorsy-nht {
    # Route to NHT when the `new-host-tooling` cookie is set to `enabled`
    handle @nht-cookie {
      reverse_proxy host.docker.internal:4510
    }

    # Route to legacy host tooling when the `new-host-tooling` cookie is not set or disabled
    reverse_proxy host.docker.internal:4210
  }

  # Default: route all remaining traffic to `outdoorsy-renter`
  reverse_proxy host.docker.internal:3000
}

# Local domain for `admin-portal` application
http://portal.outdoorsy.local {
  # Matcher for all routes that should be routed to the React admin portal
  @admin-portal-react {
    path /campground-organizations*
  }

  # Regex matcher for static assets
  @static-assets path_regexp \.(?:css|js|json|jpg|jpeg|gif|png|woff2|woff|ttf|ico|cur|gz|svg|svgz|mp4|ogg|ogv|webm|htc|txt|html|xml|webmanifest)$

  # Handle static assets (frontend apps use an asset prefix so we can route to the correct upstream)
  handle @static-assets {
    # `admin-portal-react` assets
    handle_path /admin-portal-react/* {
      reverse_proxy host.docker.internal:4511
    }
  }

  # Handle routes that should be routed to the React admin portal
  handle @admin-portal-react {
    reverse_proxy host.docker.internal:4511
  }

  # Default: route all remaining traffic to the Ember admin portal
  reverse_proxy host.docker.internal:4205
}

# Local domain for `api` application
http://api.outdoorsy.local {
  reverse_proxy host.docker.internal:8000
}

# Local domain for `odc-search` application
http://search.outdoorsy.local {
  reverse_proxy host.docker.internal:9000
}

# Local domain for `campgrounds-svc` application
http://campgrounds.outdoorsy.local {
  reverse_proxy host.docker.internal:8081
}

# Local domain for `java-api` application
http://pricing.outdoorsy.local {
  reverse_proxy host.docker.internal:8080
}

5. Links to relevant resources:

n/a

Update: it definitely feels like the bottleneck is cross-service communication on the server side. I just changed my setup so that all cross-service communication happens directly over http://localhost:port instead of going through the domain, and that has helped considerably. I don’t know why I even need to use the domain for those requests; if anything, direct service-to-service calls are closer to how our deployed environments work anyway! Still curious whether there’s something I’m missing, though…
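
Concretely, the change looks something like this in the Next.js apps (a simplified sketch; the endpoint and function names are just illustrative):

// Server-side code (getInitialProps running in Node) talks to the API
// directly over localhost; the browser keeps going through Caddy.
const API_BASE =
  typeof window === "undefined"
    ? "http://localhost:8000" // Node: skip the proxy and the .local lookup
    : "http://api.outdoorsy.local"; // browser: go through Caddy

// Hypothetical data fetch used by a page's getInitialProps.
async function fetchListings(): Promise<unknown> {
  const res = await fetch(`${API_BASE}/listings`);
  return res.json();
}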
