Add user input in URL in between URL content?

1. Caddy version (caddy version):

v2.4.6 h1:HGkGICFGvyrodcqOOclHKfvJC0qTU7vny/7FhYp9hNw=

2. How I run Caddy:

caddy.exe with config file

a. System environment:

Windows

b. Command:

caddy.exe run

d. My complete Caddyfile or JSON config:

{
	order request_id before header
}

https://domain {
	request_id

	encode gzip

	log {
		output file D:\caddy\logs\domain_access.log {
			roll true # Rotate logs, enabled by default
			roll_size_mb 5 # Set max size 5 MB
			roll_gzip true # Whether to compress rolled files
			roll_local_time true # Use local time
			roll_keep 2 # Keep at most 2 log files
			roll_keep_days 7 # Keep log files for 7 days
		}
	}

	reverse_proxy 192.168.1.2:1234/p/?RandomID={http.request_id}

	basicauth {
		username phash
	}
}

3. The problem I’m having:

What I get with my configuration when user input is added to the URL, as in the example below.

The current form, which will give an invalid URL:

192.168.1.2:1234/p/?RandomID={http.request_id}w100 ← where w100 is the user input after the URL

What I would like, and have been trying to figure out but never understood:

192.168.1.2:1234/p/{USER INPUT}?RandomID={http.request_id}

192.168.1.2:1234/p/w100?RandomID={http.request_id}

5. What I already tried:

I tried messing with rewrite and uri, but I think I'm in deep water or simply don't understand it properly.

If it makes anything easier, it would only be the number in w100 that would be changed by the user input in the URL, so 100 could be 935 or anything else, just pure numbers.

I'm also using an extra plugin to generate the UUID, which I selected below and added via the build/download page:

github.com/lolPants/caddy-requestid

:electric_plug: http.handlers.request_id implements an HTTP handler that writes a unique request ID to response headers.

Is it necessary that you send the request ID upstream via a URL query? It would definitely be easier to do it with a header. You can use the header_up option of reverse_proxy to do it:

reverse_proxy 192.168.1.2:1234 {
	header_up Request-ID {http.request_id}
}

But if you must do it via query, then you can do it like this:

rewrite * {path}?RandomID={http.request_id}&{query}

This will preserve any query you already had by adding it at the end.
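
For illustration, with hypothetical values: a request to /p/w100?foo=bar would come out of that rewrite as

/p/w100?RandomID=<generated-id>&foo=bar

where <generated-id> is whatever the request_id handler produced.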


Hi, thanks for helping out. I will look into your two suggestions.

In the meantime I figured something out myself, but whether it's the right way to do it, I have no idea.

{
	order request_id before header
}

https://domain {
	request_id

	encode gzip

	log {
		output file D:\caddy\logs\dvr_access.log {
			roll true # Rotate logs, enabled by default
			roll_size_mb 5 # Set max size 5 MB
			roll_gzip true # Whether to compress rolled files
			roll_local_time true # Use local time
			roll_keep 2 # Keep at most 2 log files
			roll_keep_days 7 # Keep log files for 7 days
		}
	}

	rewrite * /p{uri}?RandomID={http.request_id}
	reverse_proxy 192.168.1.2:1234

	basicauth {
		username phash
	}
}

What threw me off is that I had to add rewrite before reverse_proxy; otherwise it would also give me a "missing port in address" error?

What you tried won’t work if the request already had a query, because you’d end up with ? twice which isn’t valid. You need to use {path} and {query} separately.

The reverse_proxy module in Caddy doesn’t perform rewrites. That’s the rewrite directive’s job.
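
Applied to your config, a query-safe variant would look something like this (a sketch along the lines of the earlier suggestion, using {path} and {query} separately instead of {uri}):

rewrite * /p{path}?RandomID={http.request_id}&{query}
reverse_proxy 192.168.1.2:1234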

I see, that makes sense!

I do not have much time atm, but I will come back and ask for some more help when I'm sitting with the Caddy configuration again, if that's okay :slight_smile:

Okay, I got some more time for fuzzing around. Is it not possible to actually set the remote client IP as the internal IP reaching the devices?

I can only make my live logging show the server running Caddy; I have no way to make the hardware units listen to anything other than the headers they would normally listen to by default.

If I opened up ports in my router instead, the logs would show the incoming WAN IP by default.

I tried with these

header_up X-Real-IP {remote}
header_up X-Forwarded-For {remote}
header_up Host {remote}
header_up X-Forwarded-Host {remote}

I guess Host cannot be changed?

Next question: is it not possible to remove Caddy's Server header and just let it use whatever was set at the origin point?

I tried removing it and it just completely wiped the header, including the one from the origin.

Last thing:

Not sure if this is possible, or whether it should be a feature request: for media files when using load balancing, could the actual data be shared, since it's getting broadcast to the Caddy server unit and spit out to clients anyway?

So as an example:

Client1 logs on and starts getting streamed live content.
Client2 logs on and wants to see the same content as Client1; since it's live, it gets passed on what Client1 is being streamed.

If Client1 suddenly stops, check for other clients still getting the same content, and if so just pass the fork on to the next client. Would that be doable with the load balancing stuff?

That way you would save bandwidth?

Here's the config so far.

I would like to know if anything can be done more properly.

(theheaders) {
	header Strict-Transport-Security "max-age=31536000; includeSubDomains; preload"
	header X-Xss-Protection "1; mode=block"
	header X-Content-Type-Options "nosniff"
	header Cache-Control "no-store, no-cache, must-revalidate, max-age=0"
	header Pragma "no-cache"
	header X-Frame-Options "SAMEORIGIN"
	header Permissions-Policy "accelerometer=(), ambient-light-sensor=(), autoplay=(self), camera=(), encrypted-media=(), fullscreen=(self), geolocation=(), gyroscope=(), magnetometer=(), microphone=(), midi=(), payment=(), picture-in-picture=(*), speaker=(), sync-xhr=(), usb=(), vr=()"
	header Access-Control-Allow-Credentials true {
		defer
	}
}



domain {
	import theheaders
	encode gzip

	log {
		format single_field common_log # if getting errors this is removed later on, from https://caddyserver.com/docs/caddyfile/directives/log > https://github.com/caddyserver/format-encoder

		output file C:\stuff\caddy\logs\dvr_access.log {
			roll true # Rotate logs, enabled by default
			roll_size_mb 5 # Set max size 5 MB
			roll_gzip true # Whether to compress rolled files
			roll_local_time true # Use local time
			roll_keep 2 # Keep at most 2 log files
			roll_keep_days 7 # Keep log files for 7 days
		}
	}

	rate_limit {
		distributed
		zone dynamic_example {
			key    {remote_host}
			events 2
			window 6s
		}
	}

	#handle_path /1/* {
	rewrite * /a{path}

	reverse_proxy ip:port ip:port ip:port ip:port {
		header_up X-Real-IP {remote}
		header_up X-Forwarded-For {remote}
		header_up Host {remote}
		header_up X-Forwarded-Host {remote}

		# removes info about server header
		header_down -server

		# streaming
		flush_interval -1
		buffer_requests

		# load balancing
		lb_policy least_conn
		lb_try_duration 5s
		lb_try_interval 2s

		# passive health checking
		max_fails 2

		transport http {
			dial_timeout 5s

			keepalive_idle_conns_per_host 4
			#keepalive_idle_conns 4
			#max_conns_per_host 2
		}
	}

	basicauth {
		user phash
	}
}

Caddy’s reverse_proxy automatically sets the X-Forwarded-For header to the client IP address.

The upstream app behind Caddy will always see the remote address on the TCP packets as coming from Caddy. There’s no way to change this, it’s how TCP works.

The upstream app needs to instead look at the X-Forwarded-For header to get the original client IP address.

The Host header is not the same thing as the remote address. It’s the domain that was in the request. Caddy will pass that through as-is from the original request.

Read more about Caddy's handling of headers in the proxy on this page in the docs (note that these have been updated according to the changes landing in Caddy v2.5.0, whose first beta was released a few days ago).

That’s not possible with Caddy’s reverse_proxy. Caddy is not a CDN.

This is removed in v2.5.0.

Remove all of these. They’re not helpful. Caddy will set the headers automatically, appropriately.
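
For example (a sketch, with ip:port standing in for your upstreams as above), the proxy needs no header_up lines at all:

reverse_proxy ip:port ip:port ip:port ip:port {
	# X-Forwarded-For and Host are handled automatically
	lb_policy least_conn
}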

There’s no benefit to doing this.

Are you sure you need these? Only set them if you're absolutely sure they're necessary for your use case. Buffering requests is quite inefficient.

With this, you’re not likely to ever have a retry happen. lb_try_duration is the total amount of time from when the proxy tries to connect to an upstream before giving up if it can’t connect. If you set dial_timeout to 5s, then lb_try_duration will never have a chance to retry. FYI, Caddy v2.5.0 lowers the default dial timeout to 3s.

Please run caddy fmt on your config, the syntax is very messy. It’s hard to read when the indentation is not consistent.


Alright

I corrected the header removal:

header /* {
	-Server
}

I also changed following

lb_try_duration 500ms
lb_try_interval 250ms
dial_timeout 3s

I will still do some testing around streaming with those two options:

flush_interval -1
buffer_requests

For the IP forwarding thing, I will try to contact the hardware vendors and see if they would agree to add this as a listener.

I will also try running that command and see what happens.

caddy fmt

Thanks for getting back to me.

That's even worse – this means Caddy will only try for up to 500ms after it starts trying to proxy before giving up on retries, but the dial timeout is 3s. 3s is well past 500ms, so no retries will ever happen.

See the docs:


Right, so what would be good "general" values, without knowing stability?

Would this be something that could possibly work then?

lb_try_duration 10s
lb_try_interval 1s
dial_timeout 3s

Thanks, will try the fmt command out later.


Yeah, something like that should work. You can lower the try interval to 250ms (or omit it, since that's the default). That would make the difference between 2 tries or 3, because lb_try_interval is the amount of time between each try.
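
For reference, a retry setup along those lines might look like this (a sketch using the values discussed above; ip:port stands in for your upstream):

reverse_proxy ip:port {
	lb_try_duration 10s
	lb_try_interval 250ms # the default, so this line could be omitted

	transport http {
		dial_timeout 3s
	}
}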


Okay, got that set up, still toying :smiley:

Is it possible to detect whether a client is on the same local network as Caddy, and skip precautions like basicauth for internal clients only?

Yes, with the remote_ip matcher:

You can pair it with the not matcher to invert the result of the remote IP matcher, to say “when not private IPs, do basicauth”
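
For example, something like this should work (a sketch; @external is just an illustrative matcher name, and phash stands for your hashed password as elsewhere in this thread):

@external not remote_ip private_ranges
basicauth @external {
	username phash
}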


Wow, you are so fast to respond. This works great, thanks. I'm not done asking yet.

I'm still puzzled over a way to figure out how my DVB-C tuners could share the same upstream data and spit it out to each downstream. You said Caddy is not a CDN, but would this even count as a CDN, when it's only sharing upstream data internally out to each downstream?

Can you explain what you mean? I’m not sure I understand.

So whenever a client (let's say client1) gets on channel1, Caddy grabs the tuner info from the HDHomeRun device and passes it to client1.

So when client2 gets on the same channel1, Caddy grabs another instance of the very same data, making a duplicate of the already ongoing stream.

The way it works right now, 1 stream takes 10mbit IN and OUT, so 20mbit of total bandwidth for 1 client.

Tuner unit with 4 slots, how Caddy behaves right now:
Caddy > Tuner1 > Channel1 > 10mbit IN > Caddy > 10mbit OUT > client1
Caddy > Tuner2 > Channel1 > 10mbit IN > Caddy > 10mbit OUT > client2
Caddy > Tuner3 > Channel1 > 10mbit IN > Caddy > 10mbit OUT > client3
Caddy > Tuner4 > Channel1 > 10mbit IN > Caddy > 10mbit OUT > client4

What if we could do it like this (again 4 clients), but with all sharing the same data from channel1:
Caddy > Tuner1 > Channel1 > 10mbit IN > Caddy > 10mbit OUT > client1 →
→ 10mbit OUT > Share downstream data > client2
→ 10mbit OUT > Share downstream data > client3
→ 10mbit OUT > Share downstream data > client4

Tuner2 > Free
Tuner3 > Free
Tuner4 > Free

So in total only 1 tuner is used, 10mbit IN total and 40mbit OUT total, which is 30mbit of bandwidth saved, plus RAM/CPU/power usage.

The data they get will always be statically linked and contain pure MPEG-TS streams.

It is possible to do with the DVR engine:
https://info.hdhomerun.com/info/dvr_api:live_tv

But in order for that to work, since it records, you have to be able to pass a ClientID and SessionID in the various scenarios described there.

Which I also tried to do via this method:

{ # GitHub - lolPants/caddy-requestid: Caddy v2 Module that sets a unique request ID placeholder.
	order request_id before header
}

# Use DVR
rewrite * /auto{path}?ClientID={http.request_id}&SessionID=1{http.request_id}&{query}
reverse_proxy ip:port {
	flush_interval -1
}

It also seemed to work most of the time, but very rarely you would get brought back to the initial start time while watching, because another client either jumped onto the same channel or left it. It might have been something to do with me not setting up the ClientID/SessionID properly.

Does this even make sense or am I just rambling :slight_smile:


This in particular makes it sound like you want Caddy to cache and rebroadcast each channel to each subsequent client, instead of re-proxying once for each client.

Having Caddy cache and manage rebroadcasting is a bit of a daunting prospect. I hesitate to say impossible, purely based on how extensible Caddy actually is if you’re willing to write middleware.

This part, however, makes it seem like you don’t need Caddy to cache the stream for rebroadcasting but simply to ensure that clients are plugging into the correct URI, which in turn is simply to ensure that the actual upstream device (HDHomeRun) can manage tuner allocations more efficiently (one tuner to many clients, rather than one tuner to one client).

However, a description of the parameters from that link is as follows:

If the app supports multiple running instances on the same computer each instance must have a unique ClientID. The ClientID does not need to be persistent across app launches. A valid implementation is to choose a ClientID when an instance of the app is launched.

If the app supports picture-in-picture then each “picture” must have a different ClientID so that changing channel on one “picture” does not cancel and free the tuner being used by the other “picture”.

The SessionID must be unique for each new request of a channel. This allows the record engine to differentiate between a new request vs a seek within the existing session.

—dvr_api:live_tv [HDHomeRun]

Trying to implement management of these two parameters in the proxy layer is the incorrect place for this logic. These need to be managed by the client application which is connecting through the proxy (Caddy) to your upstream. Caddy doesn't (and can't) know which requests are from new launches of an app, and doesn't (can't) know which requests are NEW requests for a channel vs. requests to SEEK within the channel. This is the source of the woes you described above.

It’s perhaps crazy for me to say this but it would be logically easier for you to write a completely custom stream caching and rebroadcasting middleware for Caddy than it would be for you to try to figure out how Caddy could possibly manage client and session IDs for your clients if they aren’t already supplying that information in an accessible manner.

