A Journey through a Performance Test – Part II

We’ve already talked about the basic process of the Performance Test in “A Journey through a Performance Test” and now let’s dive deep into the following 3 phases:
- Performance Test Prerequisites
- Performance Test Planning
- Performance Test Execution
Performance Test Prerequisites
Testers should have a clear idea about the project domain as well as the performance needed for the specific project before starting to plan the Performance Test. We can divide the Performance Test Prerequisites phase into four subcategories:
- Domain KT
- NFR Gathering
- Study on API Documentation /Collection
- Test Estimation Creation
- Domain KT
Testers must understand the systems, interfaces, modules, and other parts of the software. Because there are many system components and many different ways to use the software, a test scope should be set to narrow the focus to the most sensitive or important parts. Consider interface testing as an example: one round of tests might focus on stress testing an API to determine the software’s ability to handle requests and responses at busy times.
- NFR Gathering
Nonfunctional Requirements (NFRs) define system attributes such as security, reliability, performance, maintainability, scalability, and usability. NFRs describe quality attributes of the product and are often difficult to capture precisely. NFR documentation describes how well the product should work and concentrates on the user’s expectations.
NFR gathering for a Performance Test means collecting all the performance requirements and environmental requirements needed to carry out the test. The objective of gathering the NFRs is to verify the performance of the software against them. Performance Testers should gather the performance requirements and then work with the DevOps or Dev teams to fill in the environmental requirements. Following this, the Tester should obtain confirmation from the Dev and DevOps teams that the data in the sheet is correct and good to proceed.
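As an illustration only, the sketch below shows one way such a requirements-and-environment sheet could be captured as structured data in Python; all field names and target values are hypothetical and would come from the actual NFR gathering and the Dev/DevOps teams.

```python
# Hypothetical example of a performance NFR sheet captured as data.
# Field names and target values are illustrative only, not project figures.
nfr_sheet = {
    "performance_requirements": {
        "peak_concurrent_users": 500,       # assumed target
        "avg_response_time_ms": 800,        # assumed target
        "p95_response_time_ms": 2000,       # assumed target
        "error_rate_percent": 1.0,          # assumed maximum
        "throughput_tps": 50,               # assumed transactions per second
    },
    "environment_requirements": {
        "app_servers": 2,                   # to be confirmed by DevOps
        "db_instance_type": "TBD",          # filled in by the Dev/DevOps teams
        "load_injectors": 1,                # machines generating the load
        "test_window": "off-peak hours",    # agreed with the project team
    },
    "sign_off": {"dev": False, "devops": False},  # confirmations before proceeding
}
```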
- Study on API Documentation / Collection
The Performance Tester should study the API documentation/collection of the system to get an idea of what needs to be covered in the Performance Test. The APIs should be functionally correct, as well as available, fast, secure, and reliable, before moving on.
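As a small illustration, the hedged sketch below checks that a single hypothetical endpoint is reachable and responds within a sensible time before any scripting effort is invested; the URL is a placeholder, not part of any real collection.

```python
import time
import urllib.request

# Hypothetical endpoint; replace with an API from the project's collection.
URL = "https://example.com/api/health"

start = time.perf_counter()
with urllib.request.urlopen(URL, timeout=10) as resp:
    body = resp.read()
elapsed_ms = (time.perf_counter() - start) * 1000

# A functionally correct, available, and reasonably fast response is a
# precondition for building performance scripts on top of this API.
print(f"status={resp.status} bytes={len(body)} time={elapsed_ms:.0f} ms")
```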
- Test Estimation Creation
After the Tester has a thorough understanding of the API documentation, they can move into the estimation phase. A properly planned testing process is necessary for ensuring the required level of software quality without exceeding a project’s time and budget. Misestimating can delay product delivery or decrease the product’s quality and competitiveness. Estimating software testing is a rather complicated and voluminous process, but it is critical for creating a successful project. To make testing time estimates more accurate and realistic, test tasks should be broken down into several parts and the time for each part estimated separately. This is a formalized method, yet it requires relatively little effort to apply.
Then we move on to the next phase:
Performance Test Planning
In this phase, we focus on the creation of the Performance Test Plan and, mainly, on scripting. It is divided into five subcategories:
- Test Plan Creation
- Test Data Creation
- Environment Preparation
- Script Creation / Update
- Script Review and Trial Execution
Let’s consider the above one by one!
- Test Plan Creation
The Performance Test Plan concerns this particular type of testing and the conditions under which it will be run. Like the general Test Plan, the Performance Test Plan should always reflect the real state of the project. It should cover the following areas:
- Entry and Exit Criteria.
- Environment Requirements, along with Dependencies and Constraints, Load Injectors, and Test Tools used in the process of testing.
- The Performance Testing Approach (including target volumes, the selected number of users and data to be loaded with, assertions, and load profiles).
- Performance Testing activities (including test environment build state, use-case scripting, test scenario building, test execution and analysis, and reporting).
After creating the Test Plan, sign-off from the relevant parties should be obtained before moving on to the next step.
- Test Data Creation
Testers not only collect and maintain data from existing sources, but also generate huge volumes of test data to ensure a quality contribution to the delivery of the product for real-world use. Test data creation plays a major role in Performance Testing, so we as Testers must continuously explore, learn, and apply the most efficient approaches for data collection, generation, maintenance, automation, and comprehensive data management for any type of functional and non-functional testing.
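As a hedged example of bulk data generation, the sketch below writes a CSV file of made-up user records that virtual users could later consume; the column names, row count, and values are assumptions, not project data.

```python
import csv
import random
import string

# Minimal sketch of bulk test-data generation for a load test.
ROWS = 10_000  # assumed volume

def random_username(length=8):
    return "".join(random.choices(string.ascii_lowercase, k=length))

with open("users.csv", "w", newline="") as f:
    writer = csv.writer(f)
    writer.writerow(["username", "email", "account_type"])
    for _ in range(ROWS):
        name = random_username()
        writer.writerow([name, f"{name}@example.com",
                         random.choice(["basic", "premium"])])
```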
- Environment Preparation
Performance Testing is conducted to evaluate a system’s responsiveness and stability under a workload, focusing on three major attributes: scalability, reliability, and resource usage. Several techniques can be used to check the performance of software or hardware in a Performance Testing environment. A well-prepared test environment setup improves the accuracy of the results, and it can be achieved in the following ways:
- Detailed Knowledge of Production and Test Environment
- Isolating the Test Environment
- Network Isolation
- Test Data Generators
- Removing Proxy Servers from the Network Path
It is the responsibility of a Performance Testing engineer to have complete knowledge of the production environment, including server machines, load balancing, the production network latency to be recreated, and a number of other system components. Once these details are known, they should be properly documented and well understood before the initial stage of Performance Testing begins. It is also important for an engineer to stay informed about the complete details of the architecture and to ensure that the same architecture is used in the test environment, because any difference between the two can lead to wasted time, cost, and effort.
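A minimal sketch of that comparison, assuming a handful of hypothetical environment attributes, might look like this:

```python
# Hedged sketch: documenting production and test environment details side by
# side and flagging differences before testing starts. All values are
# hypothetical placeholders, not real environment specifications.
production = {"app_servers": 4, "cpu_per_server": 8,
              "load_balancer": True, "proxy_in_path": False}
test_env   = {"app_servers": 2, "cpu_per_server": 8,
              "load_balancer": True, "proxy_in_path": True}

for key in production:
    if production[key] != test_env[key]:
        print(f"Mismatch in {key}: production={production[key]}, test={test_env[key]}")
```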
- Script Creation / Update
After creating the test environment, the Tester can start scripting. A Performance Test Script is programming code specific to Performance Testing that automates real-world user behavior. This code contains the user actions performed by a real user on an application, and such scripts are developed with the help of Performance Testing tools such as LoadRunner, JMeter, and NeoLoad. The main purpose of a Performance Test script is to simulate the behavior of a real-world user. The virtual users, or threads, use a Performance Test script to generate the desired load on the server. Some scripts, such as Java MQ scripts, trigger messages to test middleware applications.
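For illustration only, and not using any of the tools named above, the plain-Python sketch below shows the core idea behind a performance test script: virtual users (threads) repeatedly performing a user action against a placeholder endpoint while timings and errors are recorded.

```python
import threading
import time
import urllib.request

URL = "https://example.com/api/products"  # hypothetical endpoint
VIRTUAL_USERS = 10                          # assumed number of virtual users
ITERATIONS = 20                             # requests per virtual user

results = []
lock = threading.Lock()

def virtual_user(user_id):
    # Each virtual user repeats the same user action and records the outcome.
    for _ in range(ITERATIONS):
        start = time.perf_counter()
        try:
            with urllib.request.urlopen(URL, timeout=10) as resp:
                ok = resp.status == 200
        except Exception:
            ok = False
        elapsed = time.perf_counter() - start
        with lock:
            results.append((user_id, elapsed, ok))

threads = [threading.Thread(target=virtual_user, args=(i,)) for i in range(VIRTUAL_USERS)]
for t in threads:
    t.start()
for t in threads:
    t.join()

errors = sum(1 for _, _, ok in results if not ok)
print(f"{len(results)} requests sent, {errors} errors")
```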
- Script Review and Trial Execution
After creating the scripts, the Testers can move on to the script review. They can ask someone with a thorough knowledge of Performance Testing, or a senior developer from the project team, to review the scripts, and should then start the trial executions. At this point the Tester can locate errors and bottlenecks in the scripts, correct them, and fine-tune the scripts during the trial runs. After conducting the trial executions we have runnable, reusable test scripts, and we then move into the final phase of the Performance Test: the Performance Test Execution phase. This refers to running the tests specific to Performance Testing, such as load tests, soak tests, stress tests, and spike tests, using Performance Testing tools. The Performance Test Plan contains detailed information on all of these tests, which need to be executed during the Performance Testing window.
Performance Test Execution
This phase has three sub-phases:
- Test Execution: To run the planned Performance Tests
- Test Execution – Backup: Time reserved as a buffer for any sudden issues or emergencies
- Result Analysis: To analyze the test result and prepare an interim test report
Let’s take a look at these, one by one:
- Test Execution
Once the test starts, the graphs and stats should be checked on the live monitors of the testing tool. A Performance Tester needs to pay attention to some basic metrics such as active users, transactions per second, hits per second, throughput, error count, and error type. In addition, the Tester needs to check the behavior of the users against the defined workload. Finally, the test should be stopped properly and the results should be collated correctly at the given location. Once a Performance Test is completed, the Performance Tester collects the results and starts the result analysis as a post-execution activity, following the approach described under Test Results Analysis below.
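As a rough illustration of watching basic live metrics, the sketch below tracks a rolling transactions-per-second figure and an error count; the samples here are simulated with random numbers, whereas a real run would read them from the testing tool’s live monitors.

```python
import collections
import random
import time

# Hedged sketch of live monitoring: one simulated sample per "second",
# where each sample is (completed transactions, errors).
window = collections.deque(maxlen=100)  # most recent samples only

for second in range(10):
    sample = (random.randint(40, 60), random.randint(0, 2))  # simulated values
    window.append(sample)
    tps = sum(s[0] for s in window) / len(window)
    window_errors = sum(s[1] for s in window)
    print(f"t={second}s  avg TPS={tps:.1f}  errors in window={window_errors}")
    time.sleep(0.1)  # shortened; a live monitor would poll every few seconds
```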
- Test Execution – Backup
This time is allocated as a backup to cater for any sudden issues or emergencies.
- Test Results Analysis
Performance Test Result Analysis is an important and highly technical part of Performance Testing. It requires expertise to determine bottlenecks and remediation options at the appropriate level of the software system: business, middleware, application, infrastructure, network, etc.
When conducting the Performance Test Result Analysis, a Performance Tester should follow these important points:
- Start the analysis with some basic metrics such as:
Number of Users, Response Time, Transactions per second / Iterations per hour, Throughput, Errors.
- Analyze the Graphs
Set the proper granularity for all the graphs, read each graph carefully and note down the key points, and check the spikes and lows.
- Analyze other reports, such as Java heap usage during the test, deadlocked or stuck threads, and Garbage Collector logs.
A Performance Tester gathers all these results, i.e. the client-side and server-side stats, and starts analyzing them. The Tester then verifies the results against the defined NFRs. After each test, the Performance Tester prepares an interim test report, which is analyzed by a Performance Test Lead or Manager. Finally, the Result Report is presented to the client.
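To make the metric checks concrete, here is a hedged sketch that computes a few basic metrics from hypothetical response-time samples and verifies them against assumed NFR targets; both the samples and the thresholds are made up for illustration.

```python
import statistics

# Hypothetical response times (ms) and error count collected from a test run.
response_times_ms = [320, 410, 385, 900, 450, 380, 1200, 410, 395, 430]
errors = 1
total_requests = len(response_times_ms) + errors
test_duration_s = 60  # assumed test duration

metrics = {
    "avg_response_time_ms": statistics.mean(response_times_ms),
    "p95_response_time_ms": statistics.quantiles(response_times_ms, n=20)[18],
    "error_rate_percent": 100 * errors / total_requests,
}
throughput_tps = total_requests / test_duration_s

nfr_targets = {  # assumed targets from the NFR sheet
    "avg_response_time_ms": 800,
    "p95_response_time_ms": 2000,
    "error_rate_percent": 1.0,
}

print(f"throughput: {throughput_tps:.2f} tps")
for name, target in nfr_targets.items():
    status = "PASS" if metrics[name] <= target else "FAIL"
    print(f"{name}: {metrics[name]:.1f} (target <= {target}) -> {status}")
```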
Further Reading
Performance Test: https://www.blazemeter.com/blog/performance-testing-vs-load-testing-vs-stress-testing
How to Conduct Test Estimations: https://www.apriorit.com/
Performance Test Execution: https://www.perfmatrix.com/performance-test-execution/
Performance Test Results Analysis: https://www.perfmatrix.com/performance-test-result-analysis/

Gihani Naligama
Senior Software Quality Engineer