Web stress test tools: how to use http_load, webbench, ab, and Siege


Foreword: If you spend enough time floating around the net, sooner or later you take a knife. Run a website and you never know which day you will offend someone who, without a word of warning, DDoSes you, hacks you, or simply tries to take you down. Against that kind of rogue there is really nothing you can do, so regularly stress-testing your own site to see how much load the server can bear is a very necessary thing.

1. http_load

The program is very small, less than 100K after decompression

http_load runs in parallel, reusing connections, to test a web server's throughput and load. Unlike most stress-testing tools, it runs as a single process, so it generally will not overwhelm the client machine. It can also test HTTPS requests.

Download address: http://soft.vpser.net/test/http_load/http_load-12mar2006.tar.gz
Installation is simple
#tar zxvf http_load-12mar2006.tar.gz
#cd http_load-12mar2006
#make && make install
Command format: http_load -p <number of concurrent processes> -s <test duration in seconds> <URL file>

The long and short forms of the parameters can be mixed freely; for example, http_load -parallel 5 -seconds 300 urls.txt works just as well. Note that -parallel and -rate are alternative ways of driving the load, and -fetches and -seconds are alternative stop conditions: you pick one from each pair. A brief description of each parameter:
-parallel (-p): the number of concurrent user processes.
-fetches (-f): the total number of requests to make.
-rate (-r): the number of requests per second.
-seconds (-s): the total duration of the test.
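
For instance, to drive a fixed total number of requests instead of a fixed duration, you could combine -parallel with -fetches. A minimal sketch, reusing the urllist.txt file prepared below:

http_load -p 10 -f 1000 urllist.txt

This keeps 10 concurrent processes busy until 1000 requests have completed, then prints the summary statistics.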

Prepare the URL file, e.g. urllist.txt. The format is one URL per line; the test works best when the list contains a good number of URLs, say 50-100 or more.

As follows:
http://hackerfans.com/xxx.html
http://hackerfans.com/xx.html
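
If you do not have a ready-made list, a small shell loop can generate one. A sketch only; the page-naming pattern here is hypothetical:

for i in $(seq 1 50); do echo "http://hackerfans.com/page$i.html"; done > urllist.txt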
Such as:

http_load -p 30 -s 60 urllist.txt
With the parameters understood, let's run a command and look at what it returns.
Command: % ./http_load -rate 5 -seconds 10 urls
This runs a test for 10 seconds at a rate of 5 requests per second.

49 fetches, 2 max parallel, 289884 bytes, in 10.0148 seconds
5916 mean bytes/connection
4.89274 fetches/sec, 28945.5 bytes/sec
msecs/connect: 28.8932 mean, 44.243 max, 24.488 min
msecs/first-response: 63.5362 mean, 81.624 max, 57.803 min
HTTP response codes: code 200 -- 49

Result analysis:
1. 49 fetches, 2 max parallel, 289884 bytes, in 10.0148 seconds
Means 49 requests completed, the maximum concurrency was 2, a total of 289884 bytes was transferred, and the run took 10.0148 seconds.
2. 5916 mean bytes/connection
Means each connection transferred 5916 bytes on average (289884 / 49 = 5916).
3. 4.89274 fetches/sec, 28945.5 bytes/sec
Means the server handled 4.89274 requests per second and transferred 28945.5 bytes per second.
4. msecs/connect: 28.8932 mean, 44.243 max, 24.488 min
Means the average time to establish a connection was 28.8932 msecs, the maximum 44.243 msecs, and the minimum 24.488 msecs.
5. msecs/first-response: 63.5362 mean, 81.624 max, 57.803 min
Means the average time to the first byte of the response was 63.5362 msecs, the maximum 81.624 msecs, and the minimum 57.803 msecs.
6. HTTP response codes: code 200 -- 49
Breaks down the HTTP status codes of the responses. If many responses are errors such as 403, the system may have hit a bottleneck and deserves attention.
Special note:
The main metrics in the results are fetches/sec and msecs/connect, i.e. the number of queries the server can answer per second and the time spent per connection; performance is measured by these figures, which seem somewhat more accurate and more convincing than those reported by Apache's ab.
The test essentially yields these two values: QPS (responses per second) and response time (the time per connection). Of course, these two metrics alone are not enough for a full performance analysis; you also need to examine the server's CPU and memory usage before drawing a conclusion.
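
While the test runs, it is worth watching the server's CPU and memory from a second terminal. A minimal sketch using the standard vmstat tool, printing one report per second:

vmstat 1

Watch the us/sy columns for CPU usage and the free/si/so columns for memory pressure while the load is applied.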

2. webbench

webbench is a website stress-testing tool for Linux that can simulate up to 30,000 concurrent connections to test a site's load capacity. You can find the download address on Google; here is one mirror.
Download address: http://soft.vpser.net/test/webbench/webbench-1.5.tar.gz
This program is even smaller: less than 50 KB after unzipping.
Installation is very simple:
#tar zxvf webbench-1.5.tar.gz
#cd webbench-1.5
#make && make install
The webbench executable is generated in the current directory and is ready to use.

Usage:

webbench -c <concurrency> -t <test duration in seconds> <URL>
Such as:
webbench -c 5000 -t 120 http://hackerfans.com
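
webbench 1.5 has a few other options as well (run webbench with no arguments to list them); for example, -r sends reload requests (Pragma: no-cache) so that caches between you and the server are bypassed. A sketch against the same site:

webbench -c 500 -t 30 -r http://hackerfans.com/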

3. ab
ab is a powerful testing tool that ships with Apache: install Apache and you get ab along with it.
Its usage message explains the options:
See the instructions for usage

$ ./ab
./ab: wrong number of arguments
Usage: ./ab [options] [http://]hostname[:port]/path
Options are:
-n requests Number of requests to perform
-c concurrency Number of multiple requests to make
-t timelimit Seconds to max. wait for responses
-p postfile File containing data to POST
-T content-type Content-type header for POSTing
-v verbosity How much troubleshooting info to print
-w Print out results in HTML tables
-i Use HEAD instead of GET
-x attributes String to insert as table attributes
-y attributes String to insert as tr attributes
-z attributes String to insert as td or th attributes
-C attribute Add cookie, eg. 'Apache=1234'. (repeatable)
-H attribute Add Arbitrary header line, eg. 'Accept-Encoding: gzip'
Inserted after all normal header lines. (repeatable)
-A attribute Add Basic WWW Authentication, the attributes
are a colon separated username and password.
-P attribute Add Basic Proxy Authentication, the attributes
are a colon separated username and password.
-X proxy:port Proxyserver and port number to use
-V Print version number and exit
-k Use HTTP KeepAlive feature
-d Do not show percentiles served table.
-S Do not show confidence estimators and warnings.
-g filename Output collected data to gnuplot format file.
-e filename Output CSV file with percentages served
-h Display usage information (this message)
There are many parameters. In general, we use -n and -c
Such as:
./ab -c 100 -n 1000 http://hackerfans.com/index.php

This sends a total of 1000 requests to index.php, 100 at a time. Note that -n is the total number of requests and -c the concurrency, and the concurrency cannot exceed the total.
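
The -p and -T options from the list above let you stress-test a POST endpoint as well. A minimal sketch; the form fields and the login.php URL are made up for illustration:

echo 'user=test&id=1' > post.txt
./ab -n 1000 -c 50 -p post.txt -T 'application/x-www-form-urlencoded' http://hackerfans.com/login.php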
4. Siege
An open-source stress-testing tool. According to its configuration, it simulates multiple concurrent users visiting one web site, records the response time of every request each user makes, and repeats the run under a fixed level of concurrency.
Official site: http://www.joedog.org/
Siege download: http://soft.vpser.net/test/siege/siege-2.67.tar.gz
Extract:
# tar -zxf siege-2.67.tar.gz
Enter the extracted directory:
# cd siege-2.67/
Installation:
#./configure ; make
#make install
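
Siege reads its defaults from a resource file in your home directory. If it complains that the file is missing, the siege.config utility installed alongside siege will generate a default one (a sketch; it writes .siegerc under $HOME):

# siege.config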

Usage:
siege -c 200 -r 10 -f example.url
-c sets the number of concurrent users and -r the number of repetitions. The URL file given with -f is plain text with one URL per line; siege picks URLs from it at random.
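
Instead of a repetition count, you can also run siege for a fixed length of time with -t, where the suffix S, M, or H means seconds, minutes, or hours. A sketch against a single URL:

siege -c 100 -t 60S http://hackerfans.com/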

example.url contents:

http://hackerfans.com
http://soft.hackerfans.com

The result looks like this:
Lifting the server siege... done.
Transactions: 3419263 hits // 3419263 transactions completed
Availability: 100.00 % // 100.00% success rate
Elapsed time: 5999.69 secs // total test duration
Data transferred: 84273.91 MB // total data transferred: 84273.91 MB
Response time: 0.37 secs // average response time, reflecting connection speed
Transaction rate: 569.91 trans/sec // transactions completed per second
Throughput: 14.05 MB/sec // average data transferred per second
Concurrency: 213.42 // average number of simultaneous connections
Successful transactions: 2564081 // successfully handled transactions
Failed transactions: 11 // failed transactions
Longest transaction: 29.04 // longest time taken by a single transaction (secs)
Shortest transaction: 0.00 // shortest time taken by a single transaction (secs)
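
As a quick sanity check, the summary figures are internally consistent: Transaction rate = Transactions / Elapsed time = 3419263 / 5999.69 ≈ 569.91 trans/sec, and Throughput = Data transferred / Elapsed time = 84273.91 MB / 5999.69 s ≈ 14.05 MB/sec.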

