I ran a benchmark on the Caddy server.
The benchmark environment is as follows:
Server: wired (LAN) connection to router A, with port forwarding configured on the router
Client: Wi-Fi connection to router B
The benchmark was run with JMeter installed on the client.
Client specifications: Intel(R) Pentium(R) CPU P6200 @ 2.13GHz
RAM: 2.00GB, 64-bit system
Server specifications: Intel(R) Core(TM) i7-6700HQ CPU @ 2.60GHz
RAM: 16.00GB, 64-bit system
The above are the physical specifications of the machine hosting the virtual server.
Preconditions
The server's bandwidth should be 10Mbps; I set it to 10Mbps and confirmed it.
The client's bandwidth was 100Mbps.
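For anyone who wants to reproduce the 10Mbps cap, one way to do it on a Linux server is with tc's token bucket filter. This is only a sketch (the interface name eth0 and the burst/latency parameters are example values, not necessarily the exact commands I used):

```shell
# Cap outbound bandwidth on the server to 10Mbit/s using a token bucket filter.
# "eth0" is an example interface name -- substitute your own.
tc qdisc add dev eth0 root tbf rate 10mbit burst 32kbit latency 400ms

# Show the queueing discipline now attached to the interface, to confirm the cap.
tc qdisc show dev eth0
```

(Requires root; remove the cap afterwards with `tc qdisc del dev eth0 root`.)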
Experimental method
CentOS 7
PHP 7
HTTP/1
Client → Server
Send form data to a PHP file (id: yyyyyyyyyyyyyy @ gmail.com, pw: 1q2w3e4r !!); the PHP does no processing.
Server → Client
Unconditionally answer with the number "1".
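The server-side script is literally this small. Here is a sketch of how it was deployed (the path /tmp/webroot and the filename login.php are placeholders, not my actual paths):

```shell
# Create the minimal PHP endpoint described above.
# /tmp/webroot/login.php is a placeholder path and filename.
mkdir -p /tmp/webroot
cat > /tmp/webroot/login.php <<'EOF'
<?php
// Ignore the posted id/pw entirely -- no processing.
// Unconditionally answer with the number "1".
echo "1";
EOF
```

JMeter then POSTs the id/pw form fields at this file and measures the response.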
Listeners used: "Summary Report", "View Results in Table", "Transactions Per Second"
"경과시간" == Elapsed time
Units: x-axis: number of concurrent users; y-axis: seconds (s)
"http요청에 대한 평균 응답시간" == Average response time for HTTP requests
Units: x-axis: number of concurrent users; y-axis: milliseconds (ms)
"지연시간" == Latency
Units: x-axis: number of concurrent users; y-axis: milliseconds (ms)
"오류율" == Error rate
Units: x-axis: number of concurrent users; y-axis: %
"처리량" == Throughput
Units: x-axis: number of concurrent users; y-axis: %
I wonder how it can be as fast as nginx.
I am a Korean college student, and I find Caddy really attractive, so I keep wanting to study it.
I really want to know why it performs this way.
I would also like to know how its architecture differs from, and resembles, other web servers.
I would be grateful if you could tell me.
The benchmark data is on my blog:
http://tristan91.tistory.com/237
Thank you.