I wonder why Caddy is fast


(void) #1

I ran a benchmark.

The benchmark environment is as follows:
Server: wired LAN connection to router A, with port forwarding set up on the router
Client: WiFi connection to router B
The benchmark was run with JMeter installed on the client.

Client specifications: Intel® Pentium® CPU P6200 @ 2.13GHz
RAM: 2.00GB, 64-bit system
Server specifications: Intel® Core™ i7-6700HQ CPU @ 2.60GHz
RAM: 16.00GB, 64-bit system

The specifications of the virtual server are as follows:
VMware specifications

Precondition

The bandwidth of the server should be 10Mbps.


I set it to 10Mbps and confirmed it.
And the client bandwidth was 100Mbps.

Experimental method

CentOS 7
PHP 7
HTTP 1

Client -> Server
The client sends data to a PHP file (id: yyyyyyyyyyyyyy@gmail.com, pw: 1q2w3e4r!!); the PHP does no processing.

Server -> Client
The server unconditionally answers with the number “1”.

Listener uses “Summary Report”, “View Results in Table”, “Transactions Per Second”

Elapsed time (경과시간)

Unit ~> x axis: number of concurrent users, y axis: sec (seconds)

Average response time for HTTP requests (http요청에 대한 평균 응답시간)

Unit ~> x axis: number of concurrent users, y axis: ms (milliseconds)

Latency (지연시간)

Unit ~> x axis: number of concurrent users, y axis: ms (milliseconds)

Error rate (오류율)

Unit ~> x axis: number of concurrent users, y axis: %

Throughput (처리량)

Unit ~> x axis: number of concurrent users, y axis: %

I wonder how Caddy can be as fast as nginx.

I am a Korean college student.
I think caddy is really attractive.
So I keep wanting to study the caddy server.
I would be grateful if you could tell me.
I really want to know why.

I would like to know how Caddy's architecture differs from other web servers, and what it has in common with them.
I am so curious.

The data is in my blog.
http://tristan91.tistory.com/237
Thank you.


(Matt Holt) #2

Thanks for your question! Caddy uses Go’s network stack. You can study its code here: https://golang.org/src/net/http

And you can also do searches for blog posts, talks, and articles about how Go implements network functions and HTTP. There are many out there!


(void) #3

Thank you for the reply.
And thanks for the badge. I will study hard.
I hope to learn more about the Caddy server.


(void) #4

I did not know the Go language, so I studied hard for several days.

I thought about why it is as fast as nginx.

Language features are different.

nginx is a web server built in the C language.
Caddy is a web server built in the Go language.

The C language is fast because it is a procedural language.
However, it has many header files, and it is slow because the entire modified code is recompiled.
The Go language is object oriented, but thanks to its “goroutine” feature it is fast even among object-oriented languages.
I think it's fast because there are few header files, the source code is packaged, and only the changed parts are compiled.

For this reason, I want to know why the web server is faster.
Is the architecture not open to the public?
If not, I still want to know why it's fast.
I am curious too.
I really am curious.
I want to study caddy and become a contributor to caddy server.
and I want to introduce this good server to Korea.
Oh, of course, Republic of Korea. haha


(Matt Holt) #5

Go doesn’t have objects. But its goroutines are more lightweight than threads, and Go’s scheduler is arguably superior to the operating system’s scheduler.
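To illustrate the lightweight part (a minimal sketch, not Caddy code): starting a hundred thousand goroutines is routine, while a hundred thousand OS threads would exhaust most machines.

```go
package main

import (
	"fmt"
	"sync"
	"sync/atomic"
)

// spawn starts n goroutines that each do a tiny piece of work
// and waits for all of them to finish. Each goroutine needs only
// a few KB of stack, so n can be very large.
func spawn(n int) int64 {
	var wg sync.WaitGroup
	var total int64
	for i := 0; i < n; i++ {
		wg.Add(1)
		go func() {
			defer wg.Done()
			atomic.AddInt64(&total, 1)
		}()
	}
	wg.Wait()
	return total
}

func main() {
	// 100,000 goroutines start and finish quickly on ordinary hardware.
	fmt.Println(spawn(100000))
}
```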

I do not understand what this has to do with a performance comparison.

I think you’re barking up the wrong tree. To validate the results you got, you should probably investigate the network stacks of each server and conduct experiments that vary request and traffic density parameters, and see what changes. It’s probably less about the language’s compilation properties and more about its standard library and interface with the OS. It also depends highly on what the contents of the request is. Use profiling from the Go toolchain to understand its characteristics more.

We would love to have Caddy used more in Korea. :slight_smile: Best of luck!


(Matthew Fay) #6

I’m not an expert by any means, but my understanding is that Go and C are tough to compare in speed, or even use case.

One thing that people praise Go for excelling at - in part because of its goroutines being much more efficient than threads - is network servers.

Another thing frequently brought up is the fact that Go has garbage collection, which means it will always be inherently slightly slower than C, which I imagine bites into that goroutine advantage.

Caddy benefits quite a lot from the built-in net/http library, which Matt linked to above. Most of Caddy’s genius, really, is just cleverly implementing things like the Caddyfile, automatic certificate management, and a slew of efficient middleware on top of this - taking net/http and producing a fully featured, easily configurable web server.

Pretty much every step of the way is open source. You can inspect every line of Caddy’s code at github.com/mholt/caddy. Also, a browsable copy of the Golang source code can be found at github.com/golang/go.


(void) #7

thank you.
Thank you for your kindness.
I will study more and let Korean developers know about Caddy’s excellence.
Thank you.


(void) #8

thank you.
I think I’m still lacking a lot.
I will study more.
thank you for telling me!


(void) #9

Today I am studying caddy.
I’m sorry to bother you. I am so~ curious :weary:
But the only way to know about caddy is to use this community.
I am listening to you and studying architecture.
Then I saw an article saying “Apache is fast at processing dynamic files, and nginx is fast at processing static files.”
The reason given is whether or not the CGI is in the core.
Does Caddy have CGI in the core, or is it outside?

https://caddyserver.com/docs/fastcgi

Even looking here, I could not find that information.
Sorry.


(Matthew Fay) #10

The reason Apache is considered fast for PHP is because each Apache process with mod_php has its own PHP interpreter instead of using CGI or FastCGI at all.

As far as I know, there’s no reason FastCGI mode should be any different between Apache, nginx, and Caddy in terms of speed.

For reference, func ServeHTTP in fastcgi.go[1] is where the magic happens; Caddy once again leans on Go’s built-in network stack[2], in this case to connect to a listening FastCGI process such as PHP-FPM. DialContext here handles the call to a Golang net.Dialer[3]. Conceptually it’s not very different from a reverse proxy, I suppose.

[1] https://github.com/mholt/caddy/blob/master/caddyhttp/fastcgi/fastcgi.go
[2] https://github.com/mholt/caddy/blob/1125a236eabb61bbccb5b6a1af1a48e39da59a20/caddyhttp/fastcgi/fastcgi.go#L124
[3] https://golang.org/pkg/net/#Dialer.DialContext


(system) #11

This topic was automatically closed 90 days after the last reply. New replies are no longer allowed.