
g-wan – reproducing the performance claims

On Ubuntu 12.04 LTS, unpacking and running gwan from gwan_linux64-bit.tar.bz2,
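For reference, the setup amounts to something like the following; the extracted directory name and the listener's www path are assumptions that depend on the G-WAN version:

tar -xjf gwan_linux64-bit.tar.bz2          # unpack the 64-bit Linux archive
cd gwan_linux64-bit
# create a zero-byte null.html in the listener's www directory
# (the exact path depends on the G-WAN version's folder layout)
: > "0.0.0.0_8080/#0.0.0.0/www/null.html"
./gwan                                     # start the server; it listens on :8080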

then pointing wrk at it (using an empty file, null.html):

wrk --timeout 10 -t 2 -c 100 -d20s http://127.0.0.1:8080/null.html
Running 20s test @ http://127.0.0.1:8080/null.html
  2 threads and 100 connections
  Thread Stats   Avg      Stdev     Max   +/- Stdev
    Latency    11.65s     5.10s   13.89s   83.91%
    Req/Sec     3.33k     3.65k   12.33k   75.19%
  125067 requests in 20.01s, 32.08MB read
  Socket errors: connect 0, read 37, write 0, timeout 49
Requests/sec:   6251.46
Transfer/sec:      1.60MB

The performance is very poor; in fact there seems to be some kind of huge latency issue. During the test, gwan was 200% busy and wrk was 67% busy.
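For reference, per-process CPU usage like this can be sampled during the run with a standard tool such as pidstat (process names assumed to match the binaries above):

# sample per-process CPU usage once per second while the benchmark runs
pidstat -u -p "$(pidof gwan)" 1
pidstat -u -p "$(pidof wrk)" 1
# (or watch the %CPU column in top)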

Pointing at Nginx instead, wrk is 200% busy and Nginx is 45% busy:


wrk --timeout 10 -t 2 -c 100 -d20s http://127.0.0.1/null.html
  Thread Stats   Avg      Stdev     Max   +/- Stdev
    Latency   371.81us  134.05us   24.04ms   91.26%
    Req/Sec    72.75k     7.38k   109.22k    68.21%
  2740883 requests in 20.00s, 540.95MB read
Requests/sec: 137046.70
Transfer/sec:     27.05MB

Pointing weighttp at Nginx gives even faster results:

/usr/local/bin/weighttp -k -n 2000000 -c 500 -t 3 http://127.0.0.1/null.html
weighttp - a lightweight and simple webserver benchmarking tool

starting benchmark...
spawning thread #1: 167 concurrent requests, 666667 total requests
spawning thread #2: 167 concurrent requests, 666667 total requests
spawning thread #3: 166 concurrent requests, 666666 total requests
progress:   9% done
progress:  19% done
progress:  29% done
progress:  39% done
progress:  49% done
progress:  59% done
progress:  69% done
progress:  79% done
progress:  89% done
progress:  99% done

finished in 7 sec, 13 millisec and 293 microsec, 285172 req/s, 57633 kbyte/s
requests: 2000000 total, 2000000 started, 2000000 done, 2000000 succeeded, 0 failed, 0 errored
status codes: 2000000 2xx, 0 3xx, 0 4xx, 0 5xx
traffic: 413901205 bytes total, 413901205 bytes http, 0 bytes data

The server is a virtualized 8-core dedicated server (bare metal) under KVM.

Where do I start looking for the problem gwan has on this platform?

I have tested lighttpd, Nginx and node.js on the same OS, and the results are all as expected. The server has been tuned in the usual way: expanded ephemeral ports, increased ulimits, adjusted time-wait recycling, and so on.
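For reference, the tuning mentioned above amounts to sysctl and ulimit changes along these lines (the values shown are illustrative, not the exact ones used):

# widen the ephemeral port range and recycle TIME_WAIT sockets faster
sudo sysctl -w net.ipv4.ip_local_port_range="1024 65535"
sudo sysctl -w net.ipv4.tcp_tw_reuse=1
sudo sysctl -w net.ipv4.tcp_fin_timeout=15
# allow longer accept queues and more open files
sudo sysctl -w net.core.somaxconn=65535
ulimit -n 500000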


UPDATE (November 7): we fixed the empty-file issue in G-WAN v4.11.7, and G-WAN is now twice as fast as Nginx on this test (with its www cache disabled).

Recent G-WAN releases are faster than Nginx for both small and large files, and G-WAN's caching is disabled by default to make it easier for people to compare G-WAN with other servers such as Nginx.

Nginx has a few caching features (an fd cache that skips stat() calls, and a memcached-based module), but both are necessarily much slower than G-WAN's local cache.

Disabling caching is also desirable for some applications, such as CDNs. Other applications, such as AJAX applications, benefit greatly from G-WAN's caching features, so caching can be re-enabled at any time, even on a per-request basis.

Hope this clarifies the question.

"Reproducing the performance claims"

First, the title is misleading since the poorly documented test above uses neither the same tools nor the HTTP resources fetched by the G-WAN tests.

Where is your nginx.conf file? What are the HTTP response headers of the two servers? And what is your "bare metal" 8-core CPU?
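For what it is worth, the response headers in question can be captured with curl against each server (the ports follow the tests above):

# dump each server's HTTP response headers for the tested resource
curl -sI http://127.0.0.1:8080/null.html   # G-WAN (port 8080)
curl -sI http://127.0.0.1/null.html        # Nginx (port 80, as in the question's test)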

The G-WAN tests are based on ab.c, a wrapper written by the G-WAN team for weighttp (a test tool made by the Lighttpd server team), because the information disclosed by ab.c is far more detailed.

Second, the tested file "null.html" is... an empty file.

We won't waste time discussing the irrelevance of such a test (how many empty HTML files does your web site serve?), but it is likely to be the reason for the observed "poor performance".

G-WAN was not created to serve empty files (and we have never tried, nor been asked, to do so). But we will certainly add this feature to avoid the confusion created by such tests.

As you want to "check the claims", I encourage you to test with a 100-byte file of an incompressible MIME type (no Gzip will be involved here), using weighttp (the fastest HTTP load tool in your test).
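A minimal sketch of such a test, assuming a randomly filled 100-byte file served from the document root (the file name is only an example):

# create a 100-byte file with incompressible content in the server's document root
head -c 100 /dev/urandom > 100.html
# fetch it with the fastest load tool from the tests above
weighttp -kn300000 -c300 -t6 "http://127.0.0.1:8080/100.html"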

For non-empty files, Nginx is much slower than G-WAN, even in independent tests.

We did not know about wrk until now, but it appears to be a tool made by the Nginx team:

"wrk was written specifically to try to push Nginx to its limits, and in its first round of testing it was pushed to 0.5 Mr/s."

UPDATE (a day later)

Since you do not intend to publish any more data, we did it ourselves:

                        wrk                          weighttp
            --------------------------   --------------------------
Web server   0.html RPS   100.html RPS    0.html RPS   100.html RPS
----------  -----------  -------------   -----------  -------------
G-WAN         80,783.03     649,367.11       175,515        717,813
Nginx        198,800.93     179,939.40       184,046        199,075

Just like in your test, we can see that weighttp is faster than wrk.

We can also see that G-WAN is faster than Nginx with both HTTP load tools.

Here are the detailed results:

G-WAN

./wrk -c300 -d3 -t6 "http://127.0.0.1:8080/0.html"
Running 3s test @ http://127.0.0.1:8080/0.html
  6 threads and 300 connections
  Thread Stats   Avg      Stdev     Max   +/- Stdev
    Latency     3.87ms    5.30ms   80.97ms   99.53%
    Req/Sec    14.73k     1.60k    16.33k    94.67%
  248455 requests in 3.08s, 55.68MB read
  Socket errors: connect 0, read 248448, timeout 0
Requests/sec:  80783.03
Transfer/sec:     18.10MB

./wrk -c300 -d3 -t6 "http://127.0.0.1:8080/100.html"
Running 3s test @ http://127.0.0.1:8080/100.html
  6 threads and 300 connections
  Thread Stats   Avg      Stdev     Max   +/- Stdev
    Latency   263.15us  381.82us   16.50ms   99.60%
    Req/Sec   115.55k    14.38k   154.55k    82.70%
  1946700 requests in 3.00s, 655.35MB read
Requests/sec: 649367.11
Transfer/sec:    218.61MB

weighttp -kn300000 -c300 -t6 "http://127.0.0.1:8080/0.html"
progress: 100% done
finished in 1 sec, 709 millisec and 252 microsec, 175515 req/s, 20159 kbyte/s
requests: 300000 total, 300000 started, 300000 done, 150147 succeeded, 149853 failed, 0 errored
status codes: 150147 2xx, 0 5xx
traffic: 35284545 bytes total, 35284545 bytes http, 0 bytes data

weighttp -kn300000 -c300 -t6 "http://127.0.0.1:8080/100.html"
progress: 100% done
finished in 0 sec, 417 millisec and 935 microsec, 717813 req/s, 247449 kbyte/s
requests: 300000 total, 300000 succeeded, 0 errored
status codes: 300000 2xx, 0 5xx
traffic: 105900000 bytes total, 75900000 bytes http, 30000000 bytes data

Nginx

./wrk -c300 -d3 -t6 "http://127.0.0.1:8080/100.html"
Running 3s test @ http://127.0.0.1:8080/100.html
  6 threads and 300 connections
  Thread Stats   Avg      Stdev     Max   +/- Stdev
    Latency     1.54ms    1.16ms   11.67ms   72.91%
    Req/Sec    34.47k     6.02k    56.31k    70.65%
  539743 requests in 3.00s, 180.42MB read
Requests/sec: 179939.40
Transfer/sec:     60.15MB

./wrk -c300 -d3 -t6 "http://127.0.0.1:8080/0.html"
Running 3s test @ http://127.0.0.1:8080/0.html
  6 threads and 300 connections
  Thread Stats   Avg      Stdev     Max   +/- Stdev
    Latency     1.44ms    1.15ms    9.37ms   75.93%
    Req/Sec    38.16k     8.57k    62.20k    69.98%
  596070 requests in 3.00s, 140.69MB read
Requests/sec: 198800.93
Transfer/sec:     46.92MB

weighttp -kn300000 -c300 -t6 "http://127.0.0.1:8080/0.html"
progress: 100% done
finished in 1 sec, 630 millisec and 19 microsec, 184046 req/s, 44484 kbyte/s
requests: 300000 total, 0 5xx
traffic: 74250375 bytes total, 74250375 bytes http, 0 bytes data

weighttp -kn300000 -c300 -t6 "http://127.0.0.1:8080/100.html"
progress: 100% done
finished in 1 sec, 506 millisec and 968 microsec, 199075 req/s, 68140 kbyte/s
requests: 300000 total, 0 5xx
traffic: 105150400 bytes total, 75150400 bytes http, 30000000 bytes data

The Nginx configuration file, which tries to match G-WAN's behavior:

# ./configure --without-http_charset_module --without-http_ssi_module
#             --without-http_userid_module --without-http_rewrite_module
#             --without-http_limit_zone_module --without-http_limit_req_module

user www-data;
worker_processes 6;
worker_rlimit_nofile 500000;
pid /var/run/nginx.pid;

events {
    # tried other values up to 100000 without better results
    worker_connections 4096;
    # multi_accept on; seems to be slower
    multi_accept off;
    use epoll;
}

http {
    charset utf-8;           # HTTP "Content-Type:" header
    sendfile on;
    tcp_nopush on;
    tcp_nodelay on;
    keepalive_timeout 10;
    keepalive_requests 10;   # 1000+ slows-down Nginx enormously...

    types_hash_max_size 2048;
    include /usr/local/nginx/conf/mime.types;
    default_type application/octet-stream;

    gzip off;                # adjust for your tests
    gzip_min_length 500;
    gzip_vary on;            # HTTP "Vary: Accept-Encoding" header
    gzip_types text/plain text/css application/json application/x-javascript
               text/xml application/xml application/xml+rss text/javascript;

    # cache metadata (file time, size, existence, etc.) to prevent syscalls
    # this does not cache file contents. It should help in benchmarks where
    # a limited number of files is accessed more often than others (this is
    # our case as we serve one single file fetched repeatedly)
    # THIS IS ACTUALLY SLOWING-DOWN THE TEST...
    #
    # open_file_cache max=1000 inactive=20s;
    # open_file_cache_errors on;
    # open_file_cache_min_uses 2;
    # open_file_cache_valid 300s;

    server {
        listen 127.0.0.1:8080;
        access_log off;

        # only log critical errors
        #error_log /usr/local/nginx/logs/error.log crit;
        error_log /dev/null crit;

        location / {
            root /usr/local/nginx/html;
            index index.html;
        }

        location = /nop.gif {
            empty_gif;
        }

        location /imgs {
            autoindex on;
        }
    }
}
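For completeness, here is how this configuration might be exercised (the binary and configuration paths are assumptions matching the prefix used above):

# start nginx with the benchmark configuration (paths assumed)
sudo /usr/local/nginx/sbin/nginx -c /usr/local/nginx/conf/nginx.conf
# sanity-check the response headers, then run the load tools
curl -sI http://127.0.0.1:8080/100.html
weighttp -kn300000 -c300 -t6 "http://127.0.0.1:8080/100.html"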

Comments are welcome, especially from Nginx experts, to discuss things on the basis of this fully documented test.
