Scaling and managing a Caddyfile in AWS for multiple sub-domains from different domains

1. Output of caddy version:

v2.6.2 h1:wKoFIxpmOJLGl3QXoo6PNbYvGW4xLEgo32GPBEjWL8o=

2. How I run Caddy:

systemctl start caddy

a. System environment:

Ubuntu 22.04.1 LTS (AWS EC2)

b. Command:

systemctl reload caddy

c. Service/unit/compose file:

d. My complete Caddy config:

{
	root * /var/www/html

	route /io/* {
		uri strip_prefix /io
		reverse_proxy * {
			header_down -Content-Type
			header_down Content-Type text/css
		}
	}

	header {
		Content-Type text/css
		Access-Control-Allow-Origin *
		Access-Control-Allow-Credentials true
		Access-Control-Allow-Methods *
		Access-Control-Allow-Headers *
	}
}

3. The problem I’m having:

Above is the configuration I have made for a single domain, just for testing the Caddy server with reverse_proxy.

Some clarification, and a few questions about what's possible:

I have a client list of 1000+ members and plan to grow it to 10,000+ if things go well.

So here are my questions. I'm honestly a noob at server configuration, so please bear with me.

1. I will be using an S3 bucket to load static HTML or JS files for every user, which is not a problem in Caddy. But the issue is that every user will have a separate sub-domain pointing to my IP address.

Example: everyone will point a sub-domain of their main domain to my IP, like x1x.example.com, x1x.example1.com, x1x.example2.com, and so on, with no fixed limit, depending on my users.

The thing is, I don't want to update the Caddyfile by hand on the server every time.

Is there any way I can load Caddy's config file from the S3 bucket, so that I can update it directly with a script?

2. What about scaling the server? I may receive more requests per second. Is that horizontal or vertical scaling? I need to know how this works if one domain gets 1M views per month and another gets 10k views per month.

3. What's the maximum number of requests Caddy can handle if I host it on AWS EC2? Is it scalable?

4. If the Caddyfile can't be served from an S3 bucket, how can I update the file or config directly from my application? Is there a module that makes this easier?

First, instead of

route /io/* {
	uri strip_prefix /io
	reverse_proxy * {
		header_down -Content-Type
		header_down Content-Type text/css
	}
}

consider using

handle_path /io/* {
	reverse_proxy {
		header_down Content-Type text/css
	}
}

The route directive has a very distinct use case: it does not reorder its contents according to the default directive order.
That default directive order provides sane defaults, so in almost all cases you should use handle and handle_path instead.
The latter already does the uri strip_prefix /io for you 🙂

Also, I just noticed that the route directive's documentation shows essentially your very use case (stripping a prefix from the request path just before proxying the requests to a reverse_proxy upstream).
You don't need route for that; as outlined above, the usual handle or handle_path accomplishes the same.
I'll try to fix that in the docs soon.

And header_down Content-Type text/css already overwrites any existing values, so no need for header_down -Content-Type.
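To put the suggestion in context, the whole site block could look roughly like this (example.com is a placeholder site address, localhost:8080 stands in for your actual upstream, and the CORS headers are copied from your config):

	example.com {
		root * /var/www/html
		file_server

		handle_path /io/* {
			reverse_proxy localhost:8080 {
				header_down Content-Type text/css
			}
		}

		header {
			Access-Control-Allow-Origin *
			Access-Control-Allow-Credentials true
			Access-Control-Allow-Methods *
			Access-Control-Allow-Headers *
		}
	}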

Now to your questions 😇

The JSON config provides a way to regularly pull a config from an HTTP endpoint and apply it automatically.
See JSON Config Structure - Caddy Documentation, together with load_delay.
Every Caddyfile converts to JSON. So if you decide to take that route, I would recommend continuing to write your config as a Caddyfile as usual and converting it (before uploading) via caddy adapt.
The Caddyfile currently does not expose admin.config.load and admin.config.load_delay, so you will have to extend the JSON (after converting) to inject those parts.

Note: admin.config.load_delay doesn't seem to show up in the auto-generated JSON docs right now. I'll check why, but just so you know, it does exist: caddy/admin.go at 762b02789ac0ef79e92de7c58dac19d24a104587 · caddyserver/caddy · GitHub
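A small sketch of that injection step, assuming the adapted config comes from caddy adapt and the S3 URL is a placeholder for wherever you host the config:

```python
import json

# Stand-in for the output of: caddy adapt --config Caddyfile
# (normally you would json.load() the adapted file here)
adapted = {"apps": {"http": {"servers": {}}}}

# Inject the dynamic config loader: Caddy will re-pull its config
# from this URL every load_delay interval.
adapted.setdefault("admin", {})["config"] = {
    "load": {
        "module": "http",  # the built-in HTTP config loader
        "url": "https://your-bucket.s3.amazonaws.com/caddy.json",  # placeholder
    },
    "load_delay": "5m",  # how often to re-pull
}

print(json.dumps(adapted, indent=2))
```

You would then upload the resulting JSON to the bucket instead of the raw Caddyfile.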

But I highly recommend looking into Caddy's On-Demand TLS instead.
With that option, you don't need to change your Caddyfile/config each time a new domain/subdomain/hostname is added.

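A minimal On-Demand TLS sketch could look like this (the ask endpoint is a hypothetical service you run that returns 200 for allowed domains, and localhost:8080 is a placeholder upstream):

	{
		on_demand_tls {
			ask http://localhost:5555/check
		}
	}

	https:// {
		tls {
			on_demand
		}
		reverse_proxy localhost:8080
	}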
You shouldn't have to worry about scaling at that number of requests per second.
But since you asked: Caddy does scale horizontally. Just put the data directory (read: your certificates) on a shared network volume, or use one of the many storage modules, and you're good.
Start another Caddy instance with the same config and everything will "just work".
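For example, pointing all instances at the same certificate storage via the global storage option (the path is a placeholder; third-party storage modules for Redis, DynamoDB, etc. work the same way):

	{
		storage file_system /mnt/shared/caddy
	}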

You didn't provide any details about how your application works, so there might be other options, but you can always just use the admin API.
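A sketch of that, assuming the admin endpoint on its default port 2019 and a config file named caddy.json:

	curl "http://localhost:2019/load" \
		-H "Content-Type: application/json" \
		-d @caddy.json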


That's a really excellent support reply from the Caddy community.

So, since I don't need to restart the Caddy server to add multiple domains, that solves one of my issues.

But is there any way I could add a CDN for all the domains I've added through Caddy, or any other way to do that? I can't add a CDN for every domain manually, and I'm ready to use any CDN provider that works with this setup.

If it's off topic, can anyone point me to resources that work well with Caddy?

I will be adding a load balancer above this Caddy server to make scaling work well, since, as you said, it's horizontal scaling.

For these on-demand domains, how do I add a CDN? If I allow 1000+ domains, how do I serve them via a CDN, since CloudFront supports only a few domains? Is there any way I can route all allowed domains via some CDN?

This is served directly from EC2 via the Caddy server; I need a layer above this server for all the domains, like etc1.com, etc2.com, etc3.com, and 1000+ more.

Either way, I hope to get a response.

I’m afraid I don’t really understand what you mean.
Do you want to put Caddy behind a CDN and are looking for a CDN recommendation?

Can you elaborate on that as well, please? 😇


This topic was automatically closed after 30 days. New replies are no longer allowed.