Today we are going to talk a little about the world we move in, surrounded by technology and devices connected to the Internet. Every day more people use these services, and they are becoming more common in our daily lives. Actions such as buying clothes, books, and gifts, and even doing the grocery shopping, can be done online and reach our homes in less than two hours thanks to companies like Amazon. This is why it is increasingly important that web resources load quickly and are able to withstand millions of simultaneous users.

To verify that a website is able to support such a workload, we are going to demonstrate how a stress test is done on a website. This time we will use Apache JMeter, an open-source tool from the Apache community that simulates real users browsing a website. In addition to visiting all the URLs of a website in an automated way, it lets us build more complex tests, such as logging in to simulate real users or other actions that allow us to thoroughly exercise our web applications.

These tests can be configured in many ways, but in our case we will stick to generating a large number of users simultaneously visiting one of the websites that we migrated to our cloud platform. The advantage of having this website in the cloud is that it gives us much more flexibility when it comes to supporting this type of workload. Among other things, if we receive many visits, as we will simulate below, the infrastructure behind the website can grow to adapt to them.

Apache JMeter is a Java program that allows us to perform these tests from our own laptop, generating a considerable number of users. However, in our case we want to generate about 1.5 million requests, so it will be necessary to distribute this load among several machines. For the test we will use our laptop together with three additional machines in AWS.
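As a back-of-the-envelope sizing sketch (the figures are simply the ones above, split evenly across the four machines):

```shell
# Rough sizing: split the target load evenly across the test nodes
TOTAL_REQUESTS=1500000
NODES=4                               # our laptop plus three AWS machines
echo $(( TOTAL_REQUESTS / NODES ))    # requests each node must generate → 375000
```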

For this, we will rely on an architecture like the following:

This architecture is made up of our machine, from which we will configure the tests and orchestrate the slaves; exactly three slaves will send the requests to the website we want to test. Also, for the tests to work, the slaves must be in the same private network as the master, so we will configure an IPsec VPN tunnel from our network to the slave network.

Let’s start with the network configuration:

  1. The first thing is to define a VPC and a subnet in AWS where we can house our slaves.
  2. Once we have created our network, we need to connect our master to it. This time we will use the AWS VPN service. We have two options: use a router on which a VPN can be configured, or configure it locally on our machine.
  3. If we use Linux on our computer, this is very simple: we install Openswan (or another tool that lets us build IPsec tunnels) and apply the configuration that we downloaded from AWS when creating the VPN.
  4. Once all this is configured, we can start with JMeter. In our setup we will use Ubuntu on all nodes, both master and slaves, and the installation is very simple.
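As an illustration, the local Openswan side of the tunnel might look roughly like the fragment below. All addresses and subnets here are placeholders; the real values come from the configuration file that AWS generates when the VPN connection is created:

```
conn aws-vpn
    type=tunnel
    authby=secret
    auto=start
    left=%defaultroute            # our side of the tunnel
    leftsubnet=192.168.0.0/24     # placeholder: our local network
    right=203.0.113.1             # placeholder: AWS VPN gateway public IP
    rightsubnet=10.0.0.0/16       # placeholder: the VPC CIDR
```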

Let’s start with the slave. Once connected to the machine, we will update the packages:

  • $ sudo apt-get update

Next we will install Java and JMeter (the package names below are those of the Ubuntu repositories):

  • $ sudo apt install default-jre
  • $ sudo apt install jmeter

Once everything is installed, we can configure the distributed test. Since this is a distributed test, the master must know which slaves it will control, and this is defined in the JMeter properties file, which we edit on the master:

  • $ vim /usr/share/jmeter/bin/jmeter.properties

There, we replace the following line:

  • “remote_hosts=127.0.0.1” with the IPs of our slaves: “remote_hosts=192.168.0.10,192.168.0.11,192.168.0.12,192.168.0.13,192.168.0.14”; in our case the slaves will be the machines with the IPs from 10 to 14.
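The same change can be made non-interactively with sed. The sketch below edits a local stand-in file so it is self-contained; on a real node the path would be /usr/share/jmeter/bin/jmeter.properties, and the IPs are the ones from our example:

```shell
# Sketch: rewrite remote_hosts in a stand-in copy of jmeter.properties
PROPS=jmeter.properties
echo 'remote_hosts=127.0.0.1' > "$PROPS"   # stand-in for the shipped default
sed -i 's/^remote_hosts=.*/remote_hosts=192.168.0.10,192.168.0.11,192.168.0.12,192.168.0.13,192.168.0.14/' "$PROPS"
grep '^remote_hosts' "$PROPS"              # show the resulting line
```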

Once we have this configured, we start JMeter in the background by executing the following command on each of the slaves:

  • $ nohup /usr/share/jmeter/bin/jmeter-server > /dev/null 2>&1 &

The next step is to install JMeter on our master and configure it. Since we also use Ubuntu on the master, we will install it the same way as on the slaves.

With this, our test infrastructure is set up and we can configure and launch the test. On the master we have to create a .jmx file, which is where our tests are defined, but doing it through the graphical interface is much simpler.

  • First we have to configure the site we want to test and define the type of test, so we create a thread group.
  • Next we define the number of threads (simulated users) that will attack the website. We will define 100 threads, looped 10 times, with a ramp-up period of 100 seconds, although we could also set the test to run until we stop it. We can see this in the following screenshot.
  • Next we add the HTTP Request Defaults element, indicating the name or IP of the website we want to attack and the port, in our case 443.
  • Additionally we will add more routes of our website to attack; once the defaults are created, we only have to add HTTP Request samplers.
  • We create a couple of tests and move on to the results part; for this we add a listener via Add → Listener → Graph Results.
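For reference, the thread group saved inside the .jmx file ends up looking roughly like the XML fragment below. The property names are JMeter's own; the values match the 100 threads × 10 loops × 100-second ramp-up described above, and the test name is just an example:

```xml
<ThreadGroup guiclass="ThreadGroupGui" testclass="ThreadGroup" testname="Stress Users">
  <stringProp name="ThreadGroup.num_threads">100</stringProp>
  <stringProp name="ThreadGroup.ramp_time">100</stringProp>
  <elementProp name="ThreadGroup.main_controller" elementType="LoopController">
    <stringProp name="LoopController.loops">10</stringProp>
  </elementProp>
</ThreadGroup>
```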

At the bottom of the image we can see the statistics represented in colors:

  • Black: Total number of requests sent.
  • Blue: Average response time of the requests sent.
  • Red: Current standard deviation.
  • Green: Throughput rate, represents the number of requests per minute that the server can handle.

Once this is done, we can analyze the chart to see where we might have a bottleneck and improve the performance of our website.

Nowadays the speed of a web page is very important for attracting visitors: if our website takes several seconds to load on a mobile device, users will get tired and end up abandoning it, in addition to the SEO penalties we will incur.

About Miguel Camba

Cloud Engineer. Miguel is studying a degree in Information Systems at UNED University, and he has completed the AWS Certification Exam Readiness Workshop: AWS Certified Solutions Architect – Associate, Exam 533: Implementing Microsoft Azure Infrastructure Solutions, and Exam 535: Architecting Microsoft Azure Solutions. He started his professional career as Head of Information Technology at Complutense University of Madrid, and in 2017 he joined Enimbos as a Cloud Engineer.