
Performance Testing with Locust [Part 1]

Performance testing is not in as much demand and, therefore, not as popular as other types of software testing. There are not many tools for such testing, and very few of them are simple and convenient.

When performance testing comes up, everybody thinks of JMeter first: it undoubtedly remains the best-known tool, with the largest number of plugins. Personally, I have never liked JMeter because of its unfriendly interface and the steep learning curve you face every time you need to test something more complicated than a "Hello World" application.

And now, inspired by successful testing within two different projects, I have decided to share some information about a relatively simple and convenient tool — Locust.

What is Locust?

Locust is an open-source load testing tool that lets you describe load scenarios in plain Python code, supports distributed load generation and, according to its authors, was used to load test Battlelog for the Battlefield game series (which immediately wins you over).

Advantages:

  • Simple documentation, including a copy-paste example. It is possible to begin testing with just basic programming skills.
  • It uses the requests library ("HTTP for Humans"); its documentation doubles as a detailed reference when debugging tests.
  • Python support — I just like this language.
  • The previous point means tests can be launched from virtually any platform.
  • A dedicated web server, built on Flask, to present test results.

Disadvantages:

  • No Capture & Replay — all is done manually.
  • Consequently, you need to think. As in the case of using Postman, it is necessary to understand the mechanics of HTTP.
  • Minimal programming skills are required.
  • The linear load model, which immediately disappoints those who like to generate load “by Gauss”.

Testing process

Any testing is a complex task that requires planning, preparation, monitoring, and analysis of the results. With performance testing, it is necessary, where possible, to collect all the data that can influence the result:

  • Server hardware (CPU, RAM, disk);
  • Server software (OS, web server version, Java, .NET and other runtimes, the database and the amount of data, server and application logs);
  • Network bandwidth;
  • The presence of proxy servers, load balancers, and DDoS protection;
  • Performance testing data (number of users, average response time, number of requests per second).

The examples described below can be classified as black-box functional performance testing: we can measure performance even without any information about the application under test and without access to its logs.

Before starting

To try the performance tests out in practice, I have locally deployed a test json-server; almost all of the following examples will be run against it. I took sample data for the server from a publicly available example. Node.js is required to launch it.

An obvious spoiler: it is better to experiment with performance testing locally, without loading online services, to avoid being banned.

To start, Python is necessary: in all examples I will use version 3.6, along with Locust itself (version 0.9.0 at the time of writing). Locust can be installed using the following command:

python -m pip install locustio

Installation details are described in the official documentation.
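
A quick way to verify the installation is to check that the command-line tool responds:

locust --help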

Example analysis

Next, we need a test file. I have taken the example from the documentation, because it is very simple and clear:

from locust import HttpLocust, TaskSet

def login(l):
    l.client.post("/login", {"username":"ellen_key", "password":"education"})

def logout(l):
    l.client.post("/logout", {"username":"ellen_key", "password":"education"})

def index(l):
    l.client.get("/")

def profile(l):
    l.client.get("/profile")

class UserBehavior(TaskSet):
    tasks = {index: 2, profile: 1}

    def on_start(self):
        login(self)

    def on_stop(self):
        logout(self)

class WebsiteUser(HttpLocust):
    task_set = UserBehavior
    min_wait = 5000
    max_wait = 9000

That is it! That is enough to start the test! Let us analyze the example above before getting down to testing itself.

Skipping the imports at the very beginning, we see two almost identical one-line functions, login and logout. l.client is the HTTP session object that we will use to create the load. We use a POST method that is almost identical to the one in the requests library. I say "almost identical" because here the first argument is not a full URL, but only its path, i.e. a specific endpoint.

The payload is passed as the second argument and, I must admit, using Python dictionaries here is very convenient: requests encodes them into the request body automatically (as form data by default; a separate json argument exists for JSON bodies).
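
For illustration, this is how the two encodings differ in plain requests (the URL here is just a local example):

import requests

payload = {"username": "ellen_key", "password": "education"}

# data= sends the dictionary form-encoded: username=ellen_key&password=education
requests.post("http://localhost:3000/login", data=payload)

# json= serializes the dictionary into a JSON body: {"username": "ellen_key", ...}
requests.post("http://localhost:3000/login", json=payload)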

It is also worth pointing out that we do not process the result of the request in any way: if it succeeds, the results (cookies, for instance) are saved in the session; if an error occurs, it is recorded and added to the load statistics.

If we want to check whether our request is written correctly, we can verify it in the following way:

import requests as r

response = r.post(base_url + "/login", {"username": "ellen_key", "password": "education"})
print(response.status_code)

I have added only the base_url variable, which must contain the full address of the tested resource.

The next several functions are the requests that will create the load. Once again, we do not need to process the server response: the results will appear in the statistics immediately.

Next comes the UserBehavior class (the class may have any name). As the name suggests, it describes the behavior of an idealized user of the application under test. The tasks property takes a dictionary of the methods a user will call, together with their relative call frequencies. Although we do not know which functions each user will call or in what order (they are chosen at random), we are guaranteed that, on average, the index function will be called twice as often as the profile function.
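
As a side note, a dictionary of weights is effectively equivalent to repeating a task in a list. A minimal sketch, reusing the functions from the example above (the class names here are arbitrary):

from locust import TaskSet

def index(l):
    l.client.get("/")

def profile(l):
    l.client.get("/profile")

class WeightedBehavior(TaskSet):
    # {task: weight}: index is picked about twice as often as profile...
    tasks = {index: 2, profile: 1}

class ListBehavior(TaskSet):
    # ...which is equivalent to listing the task multiple times
    tasks = [index, index, profile]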

Apart from describing the behavior, the TaskSet parent class allows assigning four hook functions that are executed before and after the tests (see the sketch after the list below). The order of calls is the following:

  1. setup is called once when the TaskSet starts — it is not shown in the example.
  2. on_start is called once by each new simulated user at the beginning of its work.
  3. tasks is the execution of the tasks themselves.
  4. on_stop is called once by each user when the test has finished.
  5. teardown is called once when the TaskSet has finished — it is also not shown in the example.
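
A minimal sketch of where these hooks could live (the print calls are purely illustrative):

from locust import HttpLocust, TaskSet, task

class UserBehavior(TaskSet):
    def setup(self):
        print("setup: runs once, before the first user starts")

    def teardown(self):
        print("teardown: runs once, after the test has finished")

    def on_start(self):
        print("on_start: runs once for each new simulated user")

    def on_stop(self):
        print("on_stop: runs once per user at the end of the test")

    @task
    def index(self):
        self.client.get("/")

class WebsiteUser(HttpLocust):
    task_set = UserBehavior
    min_wait = 5000
    max_wait = 9000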

It is worth mentioning that there are two ways to define user behavior. The first, shown in the previous example, is to specify the functions in advance; the second is to define the methods inside the UserBehavior class:

from locust import HttpLocust, TaskSet, task

class UserBehavior(TaskSet):
    def on_start(self):
        self.client.post("/login", {"username":"ellen_key", "password":"education"})

    def on_stop(self):
        self.client.post("/logout", {"username":"ellen_key", "password":"education"})

    @task(2)
    def index(self):
        self.client.get("/")

    @task(1)
    def profile(self):
        self.client.get("/profile")

class WebsiteUser(HttpLocust):
    task_set = UserBehavior
    min_wait = 5000
    max_wait = 9000

In this example, the user's functions and their call frequencies are set with the @task decorator. Functionally, nothing has changed.

The last class in the example is WebsiteUser (again, the class can have any name). In this class, we set the user behavior model (UserBehavior), as well as the minimum and maximum wait time between calls of each user's individual tasks. To make it clearer, this can be visualized in the following way:

[Figure: a timeline of each user's tasks, with a random wait between min_wait and max_wait before each task]
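
In rough pseudo-Python, each simulated user behaves approximately like this (a simplified model, not Locust's actual implementation):

import random
import time

MIN_WAIT, MAX_WAIT = 5000, 9000  # milliseconds, as in the example above

def index():
    print("GET /")

def profile():
    print("GET /profile")

tasks = [index, index, profile]  # index is twice as likely to be picked

for _ in range(5):  # a real user keeps looping until the test is stopped
    random.choice(tasks)()  # perform a randomly chosen task
    time.sleep(random.randint(MIN_WAIT, MAX_WAIT) / 1000.0)  # wait 5-9 seconds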

Starting testing

It remains to launch the server under test:

json-server --watch sample_server/db.json
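
The exact contents of sample_server/db.json are not important; any dataset with posts and comments collections will do, for example (hypothetical contents):

{
  "posts": [
    { "id": 1, "title": "first post", "author": "ellen" }
  ],
  "comments": [
    { "id": 1, "postId": 1, "body": "some comment" }
  ]
}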

Also, let us modify the example file so that it corresponds to the service we are testing. Let us remove login and logout and define the user's behavior as follows:

  1. When starting, open the main page once.
  2. Receive the list of all posts (×2).
  3. Comment on the first post (×1).

from locust import HttpLocust, TaskSet, task

class UserBehavior(TaskSet):
    def on_start(self):
        self.client.get("/")

    @task(2)
    def posts(self):
        self.client.get("/posts")

    @task(1)
    def comment(self):
        data = {
            "postId": 1,
            "name": "my comment",
            "email": "test@user.habr",
            "body": "Author is cool. Some text. Hello world!"
        }
        self.client.post("/comments", data)

class WebsiteUser(HttpLocust):
    task_set = UserBehavior
    min_wait = 1000
    max_wait = 2000

To launch the test, run the following command in the command line:

locust -f my_locust_file.py --host=http://localhost:3000

where --host is the address of the resource under test; the endpoint paths specified in the test will be appended to it (for example, self.client.get("/posts") will request http://localhost:3000/posts).

If there are no errors in the test, the load generation server will start and will be accessible at http://localhost:8089/

[Screenshot: the Locust web UI start page, showing the host under test]

As you can see, the server under test is indicated, and the paths from the test file will be appended to this exact URL.

Here we can also set the number of users to create the load, as well as the rate at which new users are spawned per second.

Start the test by clicking on the “Start swarming” button.

[Screenshot: the Locust statistics table during a test run]

Results

After some time, let us stop the test and look at the first results:

  1. As expected, each of the 10 simulated users opened the main page at the very beginning.
  2. On average, the posts list was opened twice as often as comments were written.
  3. For each operation there is an average and a median response time, as well as the number of requests per second: already useful information that can be compared against the expected result.

The second tab shows the load graphs in real time. If the server falls over under a certain load, or its behavior changes, the graphs will show it immediately.

[Screenshot: real-time load graphs]

The third tab contains errors. In my case, they are client errors. But if the server returns 4XX or 5XX errors, their text will be recorded here.

If an error occurs in your test code, it is moved to the Exceptions tab. So far, my most frequent error has been connected with the print() command in the code: it isn't the best logging technique :)

The last tab allows downloading all the test results in CSV format.

Are these results relevant? Let us think about it a little. Most often, performance requirements (if specified at all) sound like this: the average page load time (server response) must be less than N seconds under a load of M users, without specifying what the users are supposed to do. And this is what I like Locust for: it simulates the activity of a specified number of users who, in random order, perform the activities expected of real users.

If you need to run a benchmark test, i.e. to measure the system's behavior under various loads, you can create several behavior classes and conduct several tests under different loads.
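
A minimal sketch of how that could look (the class names are made up; with several HttpLocust classes in one file, you can choose which one to run by passing its name on the command line, e.g. locust -f my_locust_file.py ReaderUser):

from locust import HttpLocust, TaskSet, task

class ReaderBehavior(TaskSet):
    # a read-only load profile
    @task
    def posts(self):
        self.client.get("/posts")

class CommenterBehavior(TaskSet):
    # a write-heavy load profile
    @task
    def comment(self):
        self.client.post("/comments", {"postId": 1, "body": "load test"})

class ReaderUser(HttpLocust):
    task_set = ReaderBehavior
    min_wait = 1000
    max_wait = 2000

class CommenterUser(HttpLocust):
    task_set = CommenterBehavior
    min_wait = 1000
    max_wait = 2000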

That is enough for now. If you liked this article, in the near future I will share another post about:

  1. complex testing scenarios, where the results of one step are used in subsequent steps;
  2. processing of server responses, because a response can be incorrect even with HTTP 200 OK;
  3. non-obvious complications you may run into, and how to overcome them;
  4. testing without a UI;
  5. distributed performance testing.

Stay tuned!