HTTP stress testing with Python's Treq on Twisted


Maintaining stability and robustness at peak times is one of the challenges of running an API, which is why we do a lot of stress testing at Mailgun.

Over time we have tried many approaches, from simple ApacheBench to more complex custom test suites, but this post is about a "quick and dirty" yet flexible stress test written in Python.
When writing HTTP clients in Python, we all like to use the Requests library; it is what we recommend to our API users. Requests is powerful, but it has one drawback: it is a blocking, one-call-per-thread library, which makes it difficult or impossible to generate thousands of requests quickly.
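
For a sense of why that matters, a thread-per-request load generator built on Requests might look like the sketch below (the URL and worker count are placeholders, not from the original post); every outstanding request ties up an entire thread, so the request rate is capped by how many threads one machine can run.

 # Illustrative only (not from the original post): a naive thread-per-request
 # load generator built on Requests. The URL and worker count are placeholders.
 from concurrent.futures import ThreadPoolExecutor
 import requests

 URL = "http://www.github.com"

 def fire(_):
   # each in-flight request blocks an entire worker thread
   return requests.get(URL, timeout=5).status_code

 if __name__ == '__main__':
   with ThreadPoolExecutor(max_workers=200) as executor:
     codes = list(executor.map(fire, range(1000)))
   print("{} of {} requests returned 200".format(codes.count(200), len(codes)))
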
Introduction to Treq on Twisted

To solve this problem we turned to Treq (available on GitHub). Treq is an HTTP client library inspired by Requests, but it runs on Twisted and has Twisted's typical strengths: it is asynchronous and highly concurrent when handling network I/O.

Treq isn't limited to stress testing: it's also a great tool for writing high-concurrency HTTP clients, such as web scrapers. Treq is elegant, easy to use, and powerful. Here's an example:


 >>> from twisted.internet import reactor
 >>> from treq import get

 >>> def done(response):
 ...   print(response.code)
 ...   reactor.stop()

 >>> get("http://www.github.com").addCallback(done)

 >>> reactor.run()
 200

A simple test script
Here is a simple script that uses Treq to bombard a single URL with the maximum possible number of requests.


 #!/usr/bin/env python
 # Use the epoll reactor (Linux); it must be installed before the default
 # reactor is imported.
 from twisted.internet import epollreactor
 epollreactor.install()

 from twisted.internet import reactor, task
 from twisted.web.client import HTTPConnectionPool
 import treq
 from datetime import datetime

 req_generated = 0
 req_made = 0
 req_done = 0

 cooperator = task.Cooperator()

 pool = HTTPConnectionPool(reactor)

 def counter():
   '''This function gets called once a second and prints the progress at one
   second intervals.
   '''
   global req_generated, req_made, req_done
   print("{:%Y-%m-%d %H:%M} Requests: {} generated; {} made; {} done".format(
       datetime.now(), req_generated, req_made, req_done))
   # reset the counters and reschedule ourselves
   req_generated = req_made = req_done = 0
   reactor.callLater(1, counter)

 def body_received(body):
   global req_done
   req_done += 1

 def request_done(response):
   global req_made
   deferred = treq.json_content(response)
   req_made += 1
   deferred.addCallback(body_received)
   deferred.addErrback(lambda x: None)  # ignore errors
   return deferred

 def request():
   deferred = treq.post('http://api.host/v2/loadtest/messages',
                        auth=('api', 'api-key'),
                        data={'from': 'Loadtest <test@example.com>',
                              'to': 'to@example.org',
                              'subject': "test"},
                        pool=pool)
   deferred.addCallback(request_done)
   return deferred

 def requests_generator():
   global req_generated
   while True:
     deferred = request()
     req_generated += 1
     # yield None instead of the deferred so the cooperator keeps spawning
     # requests without waiting for each response
     yield None

 if __name__ == '__main__':
   # make the cooperator work on spawning requests
   cooperator.cooperate(requests_generator())

   # run the counter that reports the sending speed once a second
   reactor.callLater(1, counter)

   # run the reactor
   reactor.run()

Sample output:


 2013-04-25 09:30 Requests: 327 generated; 153 made; 153 done
 2013-04-25 09:30 Requests: 306 generated; 156 made; 156 done
 2013-04-25 09:30 Requests: 318 generated; 184 made; 154 done

The "generated" count represents requests that have been prepared by the Twisted reactor but not yet sent. The script omits all error handling for the sake of brevity; adding timeout reporting is left as an exercise for the reader.
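
One way to start on that exercise (a sketch, not code from the original post): treq accepts a timeout argument that cancels the request's deferred after the given number of seconds, so a timed-out request shows up in an errback as a CancelledError. The sketch below reuses the names from the script above and introduces a new req_timed_out counter.

 # Sketch: count timed-out requests. `req_timed_out` is a new counter,
 # not part of the original script; request_done, pool and treq come from above.
 from twisted.internet import defer

 req_timed_out = 0

 def request_failed(failure):
   global req_timed_out
   if failure.check(defer.CancelledError):
     # treq cancels the deferred when the timeout expires
     req_timed_out += 1
   # returning None swallows the failure so the load generator keeps running

 def request():
   deferred = treq.post('http://api.host/v2/loadtest/messages',
                        auth=('api', 'api-key'),
                        data={'from': 'Loadtest <test@example.com>',
                              'to': 'to@example.org',
                              'subject': "test"},
                        pool=pool,
                        timeout=5)  # give up after five seconds
   deferred.addCallback(request_done)
   deferred.addErrback(request_failed)
   return deferred
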

This script can be used as a starting point and extended with your application-specific logic. When improving it, consider using collections.Counter instead of the ugly global variables (see the sketch below). The script runs in a single thread; if you want to squeeze the maximum number of requests out of one machine, you can combine it with a technique such as multiprocessing.
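
For instance, the three global counters could be folded into a single collections.Counter. A minimal sketch of how the bookkeeping functions from the script above might look (the key names are arbitrary):

 # Sketch: Counter-based bookkeeping replacing the three global integers.
 # request() / requests_generator() stay the same, except that request()
 # would do `stats['generated'] += 1`.
 from collections import Counter
 from datetime import datetime
 from twisted.internet import reactor
 import treq

 stats = Counter()

 def counter():
   print("{:%Y-%m-%d %H:%M} Requests: {} generated; {} made; {} done".format(
       datetime.now(), stats['generated'], stats['made'], stats['done']))
   stats.clear()  # reset the counters and reschedule ourselves
   reactor.callLater(1, counter)

 def body_received(body):
   stats['done'] += 1  # no `global` statements needed

 def request_done(response):
   stats['made'] += 1
   deferred = treq.json_content(response)
   deferred.addCallback(body_received)
   deferred.addErrback(lambda _: None)  # ignore errors
   return deferred
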

Happy stress testing!

