A Journey Through Performance Testing

What is Performance Testing?

Performance Testing, in general, is the name given to tests that check how a system behaves and performs. It examines the responsiveness, scalability, stability, speed, reliability, and resource usage of your software and infrastructure. Different types of Performance Tests provide you with different data.

Before conducting a Performance Test, it is important to determine your system’s business goals, so that you can judge whether the system behaves satisfactorily in accordance with the needs of your customers.

After running Performance Tests, you can analyze different KPIs, such as the number of virtual users, hits per second, errors per second, response time, latency, and bytes per second (throughput), as well as the correlations between them. Through the reports, you can identify bottlenecks, bugs, and errors, then decide what needs to be done.
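Two of the KPIs above are simple ratios over the raw samples, and can be sketched in a few lines of Java. The class and method names below are illustrative, not taken from any particular tool:

```java
import java.util.List;

public class PerfMetrics {

    // Fraction of samples that returned an HTTP error status (>= 400)
    public static double errorRate(List<Integer> statusCodes) {
        long errors = statusCodes.stream().filter(s -> s >= 400).count();
        return (double) errors / statusCodes.size();
    }

    // Throughput in bytes per second over a test window
    public static double throughputBytesPerSec(long totalBytes, double windowSeconds) {
        return totalBytes / windowSeconds;
    }

    public static void main(String[] args) {
        List<Integer> codes = List.of(200, 200, 500, 200);
        System.out.println(errorRate(codes));                        // 0.25
        System.out.println(throughputBytesPerSec(1_000_000, 10.0));  // 100000.0
    }
}
```

Real load-testing tools report exactly these ratios, plus percentiles of response time, so correlating them (e.g. rising error rate as virtual users increase) is where bottlenecks show up.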

When should you use Performance Testing?

Performance Tests should be run whenever you want to check the performance of your website or app, which may extend to testing servers, databases, networks, etc. If you follow the waterfall methodology, it is important to run a test with each version released. Similarly, if you are shifting left and going agile, you should test continuously.

It should also be noted that a detailed plan should be prepared before starting Performance Testing activities, to help clarify how Performance Testing will be carried out from both technical and business perspectives.

Here are some planning steps, with explanations for each, which will come in handy the next time you want to test the waters!

What is the process of doing a Performance Test?

Doing an end-to-end Performance Test can be divided into three main parts:

● Performance Test Prerequisites

● Performance Test Planning

● Performance Test Execution

In each of these phases, specific areas need to be covered to perform a thorough Performance Test that adheres to best practices.

Performance Test Prerequisites

Testers should have a clear idea about the project domain as well as the performance needed for the specific project before starting to plan out a Performance Test.

Testers should understand performance criteria such as design, normal and peak loads, system resources, dependencies, workload characteristics, and timing; these prerequisites should be part of any Performance Testing strategy.

We can divide the Performance Test Prerequisites phase into four subcategories:

● Domain KT

● NFR Gathering

● Study on API Documentation/Collection

● Test Estimate Creation

Performance Test Planning

In this phase, we focus on the creation of a Performance Test plan, with particular attention to scripting. We can divide this phase into five subcategories as follows:

● Test Plan Creation

● Test Data Creation

● Environment Preparation

● Script Creation / Update

● Script Review and Trial Execution

Performance Test Execution

Performance Test Execution refers to running the tests specific to Performance Testing, such as the load test, soak test, stress test, spike test, etc., using Performance Testing tools. The Performance Test Plan contains detailed information about all the tests which need to be executed within the performance-testing window.
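At its core, a load test is a loop that fires a number of concurrent requests and records each one's response time; tools such as JMeter automate this mechanic at much larger scale. The toy Java sketch below illustrates the idea against a simulated request; every name in it is illustrative, not from any real tool:

```java
import java.util.ArrayList;
import java.util.List;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.Future;

public class MiniLoadTest {

    // Run `users` concurrent invocations of `request` and return each one's elapsed millis
    public static List<Long> run(int users, Runnable request) throws Exception {
        ExecutorService pool = Executors.newFixedThreadPool(users);
        List<Future<Long>> futures = new ArrayList<>();
        for (int i = 0; i < users; i++) {
            futures.add(pool.submit(() -> {
                long start = System.nanoTime();
                request.run();                                   // the "request" under load
                return (System.nanoTime() - start) / 1_000_000;  // elapsed milliseconds
            }));
        }
        List<Long> timings = new ArrayList<>();
        for (Future<Long> f : futures) timings.add(f.get());
        pool.shutdown();
        return timings;
    }

    public static void main(String[] args) throws Exception {
        // Simulate a request that takes roughly 50 ms
        List<Long> timings = run(10, () -> {
            try { Thread.sleep(50); } catch (InterruptedException e) { }
        });
        System.out.println("samples collected: " + timings.size());
    }
}
```

A soak test runs the same loop for hours, a stress test raises `users` until the system degrades, and a spike test jumps `users` suddenly; the measurement loop itself stays the same.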

The Performance Test Execution Phase has the following activities to be performed:

● Execute the Documented/ Agreed Performance Test

● Analyze the Performance Test Result

● Verify the Result against defined NFRs

● Prepare an Interim Performance Test Report

● Decide to Complete or Repeat the Test Cycle based on the Interim Test Result

This phase has three sub-phases:

● Test Execution: To run the planned Performance Tests

● Execution Backup: To back up the Performance Test Execution results and artifacts

● Result Analysis: To analyze the Test Results and Prepare an Interim Test Report

Let’s dive deeper into the area of Performance Testing processes in the next chapter.

To be continued….


Gihani Naligama

Software QA engineer

Pair Testing: How To Get The Maximum Out Of It.

Among the many test techniques out there in the Software QA discipline, Pair Testing has been a long-standing technique used by Quality Engineers to track and verify issues. But due to how it’s practised, and to time constraints, pair testing tends to be left out. Knowing when to use Pair Testing will enable you to identify issues that need to be fixed, and it will save time and resources.

As stated on the profesionalqa.com website, Pair Testing refers to a technique in which two members use a single workstation to test different aspects of a software application. Who’s going to be in the pair? Well, that depends on the testing requirements. It can be two Quality Engineers, a BA and a Quality Engineer, a Developer and a Quality Engineer, etc. The bottom line is that a pair testing exercise should produce more effective test execution and defect finding than normal test execution. Hence the question arises: “How am I going to get the maximum out of it?”

To do this, here is a list of criteria that can be used to assess whether pair testing is a good fit.

Is it worth the effort to start things off?

Pair testing is effective when used to verify a complex or broad set of functionality in the application. If you’re going to test a feature with very straightforward functionality, there won’t be a great need for pair testing. For example, user login for a goods-delivery application is better tested by one person. But a feature to track delivered goods has real potential for pair testing, due to the different selection criteria a user has to enter to get the desired result.

Is the time with you OR against you?

Though there may be content or features to pair test, it’s important to look at the time available before you decide to get started. Depending on the time constraints of the project, it may not be feasible to allocate time for a pair testing session, since you have to give priority to individual test execution. You are utilizing another person’s time, so it should be a responsible decision aimed at making the session productive. The ideal approach is to talk to your project manager and come to a decision on going ahead with pair testing.

Choose the right person to Pair Test with

It takes two to tango, but the performance depends on how good your partner is. If you’re doing pair testing, you need to make sure you’re doing it with the right person, because the goal is to make sure the feature works according to the requirement. Depending on the situation, the person you pair with should change. If you’re having a pair test session at the end of a sprint, doing it with one of the main developers will help in finding issues. If you’re testing a part with more integrated components, doing it with your BA or product manager may give a better result in identifying issues.

Identify your observation criteria

Pair testing is not there only to detect bugs. You can use it to identify issues that may obstruct the intended user requirement. Identifying UX improvements can be done effectively through pair testing. It also helps the Quality Engineer gain a better understanding of aspects that he or she may not have considered among all the possible viewpoints of an application feature.

Defect verification? Pair Testing to the rescue!

When you have a list of defects to be verified, doing it as pair testing will save you time and effort in communicating. This is most effective when the defect list is assigned to one or two developers, because pairing up with them for the testing allows you to verify your test inputs and possible edge cases for each defect. If a defect still exists, the developer is aware then and there. This saves time in reporting a reopened issue, and the defect may even be resolved on the spot, saving another defect verification cycle.

Pair testing can help project teams use their time and resources collaboratively to build up the quality of an application. But it is the Quality Engineer’s responsibility to determine when and how to use it, with the support of the other project team members. In return, this will ensure a significant improvement in the overall quality of the application.

Thaveesha Gamage

QA Engineer

Common Design Patterns every Test Automation Engineer should know

In software design, design patterns are created to solve common problems. They don’t seem to be widely discussed in test automation, because the topic sounds complicated. There are sophisticated design patterns used to solve complex issues in software development, but there are also easy-to-understand, easily adoptable design patterns that can significantly improve the readability and maintainability of our test automation code. In this article, you will see the details of the Page Object Model (POM) design pattern, with Page Factory, in Selenium, with practical examples.

Page Object Model

POM is a design pattern which has become popular in Selenium test automation. It is widely used in Selenium for enhancing test maintenance and reducing code duplication. To get started with POM, I would like to take you through the following basic example.

What’s your first name?

Last name?

Page Object

Tell us more about your family

Everyone in my family is a Page Object.

Each of us knows everything about a page.

The page that I specialize in is the login page.

My brother, Result Page Object, knows about the results page.

And our sister, Detail Page Object, is the expert on the details page.

What makes you an expert in login pages?

I know a lot about the login page: the info that makes the page unique, its title and URL.

I also know about the elements of the login page, their values and attributes. And my knowledge does not end with login page information.

I can tell this information to anyone interested in it. Just the other day, someone from the unit tests family asked about these details.

Awesome. What else do you know about the login page?

I know what can be done on a page.

Like clicking links, typing in a textbox, selecting values in a listbox. And to see what an expert I am, I can even do these things on behalf of others.

The same unit test that asked the other day about the page title and url wanted more things from me.

Such as…

First let me tell you his name. It is testThatLoginWorks. Weird name, isn’t it?

So, he wanted me to type some text in the username textbox.

Then, to type other text in the Password textbox. And finally, to click the login button. I did so many things for him.
That was a busy day for me.

So you know many things about the login page. Do you know about other pages as well?

The login page is the only one I know.

About other pages, you should talk to the other members of the Page Object family.

My brother, my sister, and everybody else.

Each of them is an expert in one page as well.

Excellent. Can you summarize for our audience what you do best?

I am the number one expert in login pages.

I know all the information about the login page and its elements. I also know everything that can happen on a login page.

And I can even do these things when asked nicely. I am a Page Object.

Thank you very much!

Consider the example below of a simple Selenium script which applies the POM concept.

public class GmailLoginPage {

    WebDriver driver;

    By userName = By.name("uid");
    By password = By.name("password");
    By login = By.name("btnLogin");

    public GmailLoginPage(WebDriver driver){
        this.driver = driver;
    }

    //Set user name in textbox
    public void setUserName(String strUserName){
        driver.findElement(userName).sendKeys(strUserName);
    }

    //Set password in password textbox
    public void setPassword(String strPassword){
        driver.findElement(password).sendKeys(strPassword);
    }

    //Click on login button
    public void clickLogin(){
        driver.findElement(login).click();
    }

    /**
     * This POM method will be exposed in the test case to log in to the application
     * @param strUserName
     * @param strPassword
     */
    public void loginToGmail(String strUserName, String strPassword){
        //Fill user name
        setUserName(strUserName);
        //Fill password
        setPassword(strPassword);
        //Click Login button
        clickLogin();
    }
}

Why should POM be used?

Here are the main advantages of using the Page Object Pattern:

  • Easy to maintain
  • Easy readability of scripts
  • Reduced or eliminated code duplication
  • Reusability of code
  • Reliability

Page Factory Model

Page Factory is an inbuilt, optimized Page Object Model concept for Selenium WebDriver. Using Page Factory, the page objects declared in a page class can be accessed from other Java files. The @FindBy annotation is used to locate WebElements, and the initElements method initializes them. Consider the example below of a simple Selenium script which applies the Page Factory concept.

public class GmailLoginPage {

    WebDriver driver;

    //All WebElements are identified by @FindBy annotation
    @FindBy(name = "uid")
    WebElement userName;

    @FindBy(name = "password")
    WebElement password;

    @FindBy(name = "btnLogin")
    WebElement login;

    public GmailLoginPage(WebDriver driver){
        this.driver = driver;
        //This initElements method will create all WebElements
        PageFactory.initElements(driver, this);
    }

    //Set user name in textbox
    public void setUserName(String strUserName){
        userName.sendKeys(strUserName);
    }

    //Set password in password textbox
    public void setPassword(String strPassword){
        password.sendKeys(strPassword);
    }

    //Click on login button
    public void clickLogin(){
        login.click();
    }

    /**
     * This POM method will be exposed in the test case to log in to the application
     * @param strUserName
     * @param strPassword
     */
    public void loginToGmail(String strUserName, String strPassword){
        //Fill user name
        setUserName(strUserName);
        //Fill password
        setPassword(strPassword);
        //Click Login button
        clickLogin();
    }
}


Dinuka Abeysinghe

Senior QA Engineer

Importance of AI for test automation

Software Testing Evolution

Software testing plays a major role in the software development life cycle. Even though developers build according to the requirements, it is only during testing that we make sure everything meets expectations. Looking at past decades, it is quite evident that software testing has gradually evolved. At first, testing started only after software development was completed. Currently, most companies have moved to agile processes, which enables software testing to start in parallel with development. The reason for choosing agile processes is to find bugs as early as possible and fix them early, and that is where test automation came to play a role alongside the manual testing process. In this era, Artificial Intelligence (AI) is stepping into software testing, so the application of AI to test automation is worth talking about. In this article, we will take a look at AI test automation and provide guidance on how to apply it.

What is AI?

AI is also known as machine intelligence. Simply put, machines performing tasks that normally require human intelligence is what’s known as ‘AI’, or Artificial Intelligence. Image recognition, speech recognition, chatbots, natural language generation, etc. came to the world as applications with AI support.
When looking at current AI systems, most of them belong to the “limited memory” category (the system/machine reacts based on past experiences). The same idea can also be used in test automation to maintain tests, and this article will provide directions for applying AI to your project.

Why should you apply AI on Test Automation?

According to a survey conducted by testcraft among 200+ testers, the following are the major concerns identified in testing:

  • Test Maintenance
  • Not enough manpower
  • Lack of integrations
  • Want to increase coverage
  • Hard to find good test engineers
  • Not able to keep up with an agile schedule

Of all the participants, 50% of testers mentioned that test maintenance is the biggest bottleneck in testing, so AI may be the best solution to overcome this issue. Beyond test maintenance, AI supports saving time, stabilizing tests, finding bugs faster, and fixing them much faster.

AI enhances software testing efficiency

“Self-healing” is a mechanism used in AI to overcome the drawbacks mentioned above. What self-healing does is identify damage and errors by itself, and it has the ability to repair that damage and fix those errors automatically, without human involvement.

The problem in test automation is that once we have automated things, if they fail we have to spend time validating them to identify the root cause, which is time-consuming. By building this self-healing behavior into our test automation, we mainly support error handling, and also information flow management. When errors occur, AI is able to adjust to them by observing the system response, and is patterned to self-heal those errors. With this mechanism, the following advantages can be gained for testing.

  1. Able to maintain test stability
  2. Able to create more reliable automated tests
  3. Able to identify bugs earlier and resolve them faster
  4. Able to save time and reduce the cost of failures
  5. Reduced maintenance
  6. Able to learn continuously from data and make correct decisions to overcome failures
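To make the self-healing idea concrete, here is a toy Java sketch of the principle: the helper tries a list of candidate locators in order and "heals" the lookup by falling back to the next candidate when one no longer matches. The page is simulated with a map, and all names are illustrative rather than taken from any real AI tool:

```java
import java.util.List;
import java.util.Map;
import java.util.Optional;

public class SelfHealingLocator {

    // Try each candidate locator against the simulated "page" until one resolves to an element
    public static Optional<String> find(Map<String, String> page, List<String> candidates) {
        for (String locator : candidates) {
            if (page.containsKey(locator)) {
                // Healed: the first candidate that still works is used
                return Optional.of(page.get(locator));
            }
        }
        // Nothing matched; a real tool would flag this for human review
        return Optional.empty();
    }

    public static void main(String[] args) {
        // The page was redesigned: the old id "btn-login" is gone, but a fallback still matches
        Map<String, String> page = Map.of("css:.login-button", "LoginButton");
        List<String> candidates = List.of("id:btn-login", "css:.login-button", "xpath://button[1]");
        System.out.println(find(page, candidates).orElse("not found"));  // LoginButton
    }
}
```

Real self-healing tools go further, ranking candidates by attribute similarity and learning from past runs, but the fallback-and-record loop above is the core of how a broken locator stops being a failed test.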

AI test automation Tools

The following are currently the most popular AI test automation tools:

  • Applitools
  • SauceLabs
  • Testim
  • Sealights
  • Test.AI
  • Mabl
  • ReTest
  • ReportPortal

Stay informed about future AI test automation trends

Applying AI to test automation will be a great opportunity for software testing in the future, since it addresses current test maintenance issues without human involvement. As a result, testers will not have to go into the code manually to identify and validate issues. To get the greatest benefit, it is good to keep in touch with future AI test automation trends; for that, you can follow blogs and research articles related to AI.

AI in test automation is not an obstacle, but an opportunity

Vindya Gunarathna

QA Engineer

IoT Security Testing – Identifying the Scope

IoT (Internet of Things): where does it stand now?

A couple of years back, only the handful of people involved in the subject knew what IoT was all about. By now you should know about IoT, the Internet of Things. It has revolutionized the way we interact with day-to-day devices, the technology is now very common, and IoT will be the next big thing in the coming years. If you want to confirm that, simply google the IoT predictions. I am not going to talk here about what IoT is all about.

There are thousands, if not millions, of articles on the subject on the Internet. So if you have been in the IT industry for several years, I expect you have crossed paths with IoT at some point in your career. And if you are a newbie who has just entered the arena, my advice is that the subject is worth paying attention to.

Security of an IoT device

Let me dig into the subject now. Day by day, second by second, every application and every device connected to the internet is becoming more vulnerable to hackers. The reason is a constant battle in which hackers try to steal valuable information while the people who developed the systems release patches to close the holes. Why? Ultimately, it’s all about money and business. There will come a time when cybersecurity knowledge is mandatory for a local police officer. When it comes to IoT devices, this is far more critical, as they deal with your private data or provide services for your own needs. This forces developers and testers to think seriously about the security (physical, firmware, and software) of the device.

There are considerable differences between ensuring the security of web/mobile applications and of IoT devices. From here on, I’ll talk about the areas you should concentrate on when identifying the scope of an IoT security testing initiative. If you are engaged in an IoT project, whether as a developer, QA, or just an enthusiast about the subject, the content below will be a valuable piece for your arsenal in the cybersecurity domain.

  • Process matters

    The basics should be in place: you will start with scope identification. Then you should move on to threat modeling and map the attack surface. With this, you will be able to see the bigger picture, and you will be able to easily identify the rest of the areas described below.
  • Hardware (Physical) security

    This will be something new for you if you have only dealt with application security so far. When ensuring the security of the hardware, there are several aspects you should concentrate on.
    1. Does the device have interfaces that are exposed to human interaction?

      For example, if a display (front end) exists and applications are usable on it, then we are mostly talking about Android (mobile) application security testing. If there is no real estate to display something, or the display is for informational purposes only, then you are completely free of that headache.
    2. Does the device contain any physical ports?

      Here you need to identify whether the device has physical ports and what they are used for, and make sure that unnecessary physical ports do not exist. If there are USB ports, you need to check what access to the device they allow. USB debugging should be disabled; otherwise a hacker will be able to get root access and do whatever he wishes.
    3. How does the device get access to the internet? Is that through Ethernet or WiFi?

      You need to find out whether the device can connect to the internet via both Ethernet and WiFi, or only via a single medium. Your testing methods will change based on that.
    4. What are the other connectivity methods?

      And finally, if the device connects over Bluetooth, Zigbee, or another wireless communication medium, you need to make sure the required security measures are addressed in their implementation. Also, evaluate whether the device validates data before trusting and accepting it. When it comes to physical security, we have to think about how the device is intended to be accessed from outside, and then close all access points other than the intended channel. Also, if the device stores any passwords or other sensitive data, you must make sure the hardware will not expose that data to an outsider (tamper mechanisms).

      If you are interested in this particular subject, it’s better to get more familiar with secure microcontrollers, secure key storage, encryption for physical data channels (pin pad cables, inter-IC communication links), and tamper switches. And lastly, hardware security standards: it’s worth getting to know the kinds of standards your hardware follows (e.g. MISRA C). Once you have covered the above aspects of hardware security, you are pretty much safe.

  • Firmware security

    This is the most important piece of your IoT device. The firmware controls everything that matters, from the sensors to the operating system. So having a look at the firmware installed on your device is mandatory; if you miss it, you are probably missing the big fish of your IoT security testing initiative. There are three major areas in firmware security:
    1. Invest more time on debugging interfaces (USB/Serial/JTAG/SWI)
    2. Protect your bootloader
    3. Implement continuous monitoring on both the device and firmware sides.

    In addition to the above, please be aware of firmware-level attacks. Below are the possible areas you should consider:
  • Vulnerabilities in third-party components and libraries.

    When developing firmware, developers use many third-party libraries and components. So not only scanning via an automated tool, but also getting a list of all of them and validating them manually, is critical.
  • Injection attacks where a hacker can alter the firmware logic.

    Injection attacks are a broad subject. If the IoT device interacts directly with a user interface, and the user can input data via a provided application or even when interacting with the operating system, you need to make sure all fields are properly validated so that the user cannot perform injection attacks. The method of injection attack differs based on the technology: if you are dealing with an application, it can be SQL or NoSQL injection.

    If you are dealing with the OS, it can be command injection, where the attacker alters the firmware logic to disrupt the normal functions of the device. So it is very important to make sure your IoT device has a good defense (on-boot/periodic firmware integrity checks) against these types of attacks.
  • Sensitive information at rest and transit

    Whether sensitive data or PII (Personally Identifiable Information) exists in your IoT device depends on its intended purpose: it could even be inside your body monitoring your health, operating at home, or operating publicly. What you need to worry about is what kind of data passes through and is stored in your device. Can it be classified as sensitive information? If yes, you need to verify two things: how it transits within the device or to the outside, and how it is stored.

    Data in transit, especially between the backend server and the IoT device, or when passing data to other third-party peripherals, should be secured. Make sure the channel is secured, and check not only that TLS is used but also which version: anything below TLS 1.2 is now considered not recommended by the industry. For data at rest, you should verify whether PII is stored in plaintext or ciphertext (the result of encryption performed on the plaintext).
  • DoS attacks

    Another important aspect to look at is DoS (Denial of Service) attacks targeting the firmware, with which a hacker can crash the system by exhausting all the available memory. Please make sure proper mechanisms are enforced to protect the firmware against this.
  • Key management on client-side

    Another important point in firmware security is the key management of your IoT device. When a device service or an application communicates with its backend server, it uses a secret key to establish the connection; in this case, it could be a service running on the firmware. So please make sure you know where the key is stored on the device and how it is stored. Is the same key used all the time, or is a key rotation mechanism implemented? This is very important, since a hacker who steals the secret key can do whatever he wants after that.
  • Open ports and services (To the network)

    Finally, you should be aware of which ports and services are open to the network. Any unnecessary ports should be closed. For example, if the device exposes port 23, someone can get into the device via Telnet and take control of it if proper security mechanisms are not enforced.
  • Software security

    If your IoT device runs applications on top of the firmware, then this section matters. Some devices may not have any software running directly on the firmware, but some may have software that interacts with the firmware and the device (this depends on the service your IoT device provides). If it uses any software, we are mostly talking about Android applications. You should primarily look at the following:

    1. Vulnerabilities exist in the APK
    2. Data in transit and at rest
    3. Injection attacks via input fields
    4. Authentication and authorization mechanisms.

    I will not describe each of these in detail here, as most were covered in the previous sections; here they apply to the software applications, so you should test each area separately.


In conclusion, before you start, as I mentioned in the beginning, planning matters: perform a deep dive into the overall architecture and then into the threat model. By doing that, you will identify where your device stands in terms of security and how strongly you should enforce the corresponding security mechanisms. If you consider the areas highlighted above when identifying the scope of your IoT security testing, you have a good start toward a secure IoT device.

If you have time for preparation, it’s better to study common IoT infrastructures and components first, to get some understanding of the individual components. That will also help you design and study testing procedures relevant to them.

So fasten your seatbelt and start securing your IoT device, if you haven’t already. You will save lots of money for your business, and maybe you will be the one who ultimately saves the business. And if you aim to become a security test professional and succeed in your IoT security testing work, be aware that you have started your security testing journey in an area that the future represents…

Chandima Athapattu

Chandima Athapattu is a Lead QA Engineer at Zone24x7.

Investing on prevention vs investing on cure – Testing common vulnerabilities in the World Wide Web.

Do you remember the last time you lost something valuable, something of sentimental value? Believe me, it is far worse for the sole breadwinner of a family to lose their monthly income because the company they worked for fell victim to the most critical crime of our times, CYBERCRIME, which costs businesses more than $6 trillion each year. If you don’t wish your company to be included in this figure, it is time to increase investment in cybersecurity testing.

Today’s world is becoming increasingly interconnected, and every business or entrepreneurship has one or more web applications, which is why the scope of potential exploits is growing at a mind-boggling rate.

Would you like to see the company you work for in the news? Yes, if it’s for something like free publicity, increased brand awareness, or an enhanced brand identity and popularity; certainly not if it’s in a negative context that diminishes the brand identity and causes a huge financial loss. This is why it is crucial to ensure that your applications are tested and secured.

Chances of Vulnerabilities are high and so is the cost

According to statistics on legal and compliance guides “the average consolidated total cost of a data breach is $3.8 million.”

There are various types of costs associated with security breaches and vulnerabilities on the WWW.

  • Reduction and loss of revenue: occurring due to stolen corporate data or a consequent decrease in sales volume
  • Cost of investigation: an investigation process eats up your time, energy, and, most importantly, money
  • Cost of downtime: time spent on fixing breaches vs. time spent on innovation

Moving on to what a vulnerability is and how it can be prevented through testing

Vulnerability
The inability to withstand the effects of a hostile environment

Vulnerability in the WWW
A weakness in a web application which allows a malicious user to defeat the web application’s security objective(s)

Exploit in the WWW
Taking advantage of a web application’s flaws to carry out unauthorized activities related to the system


  • Active Attacks – Attempts to modify a system’s state by altering its resources and operations.
    • Denial of Service (DoS)
    • Spoofing
  • Passive Attacks – Attempts to learn or gather information about a system without altering its resources or operations.

Advanced Persistent Threat (APT)

A malicious user or party gains unauthorized access to a system and carries out unauthorized activities for an extended period of time without being detected.

  • Politically or Commercially motivated
  • Stay undetected
  • Longer period
  • Most often data theft

Warning Signs

  • Abnormal internet bandwidth usage
  • Abnormal patterns in network traffic
  • Detection of Trojans and other malware
  • Detection of aggregated data bundles

SQL Injection

  • A malicious user passes (injects) a SQL script, or part of one, through a web application’s input field
  • This alters the developer-intended behavior of the SQL query
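The classic defense is to use parameterized queries instead of string concatenation. Below is a minimal sketch using an in-memory SQLite database and a made-up `users` table, contrasting a vulnerable query with a safe one:

```python
import sqlite3

# In-memory database with a hypothetical users table for illustration.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (name TEXT, is_admin INTEGER)")
conn.execute("INSERT INTO users VALUES ('alice', 0)")

malicious_input = "nobody' OR '1'='1"

# Vulnerable: the input is concatenated into the query, so the injected
# OR clause makes the query return every row.
vulnerable = conn.execute(
    "SELECT * FROM users WHERE name = '" + malicious_input + "'"
).fetchall()

# Safe: a parameterized query treats the input as a single literal value.
safe = conn.execute(
    "SELECT * FROM users WHERE name = ?", (malicious_input,)
).fetchall()

print(len(vulnerable))  # 1: the injection leaked a row
print(len(safe))        # 0: no user is literally named that
```

The same principle applies to any driver or ORM: user input should reach the database only as bound parameters, never as query text.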

Importance of Security Testing in order to minimize the risk

Software security testing helps identify implementation errors that were not discovered during code reviews. Risk analysis, especially at the design level, can help us identify potential security problems and their impact.

It is a well-known fact that the earlier a defect is detected, the smaller its impact on the project. Therefore, giving high attention to security testing to reduce the risks the project will face is immensely important.



Verushka Thilakarathne

Senior QA Engineer

Towards SOC Compliance

Continuous business value creation while handling customers’ data in a secure way is a must for SaaS (Software as a Service) providers to survive in today’s rapidly changing, highly coupled, and fiercely competitive business environment. Even though organizations follow their own techniques and processes, customers expect some kind of mutually recognized guarantee that their data is handled in a secure manner. Achieving Service Organization Control (SOC) compliance is one such guarantee. Hence, SaaS providers should at least achieve SOC 2 compliance to assure their customers that their services meet data security requirements.

As a technology solutions company that wants to build close relationships with its clients (SaaS providers), it is important to implement certain reports and adhere to certain procedures to make sure our clients achieve SOC 2 compliance certification. However, the reports that are needed (e.g., User Snapshot Report, Change Client Report, Failed Login Report, …) and the processes to follow (e.g., encryption of selected sensitive fields, employee deletion and obfuscation under GDPR, scrubbing data, decommissioning canceled clients, maintaining audit logs) will be decided by the service providers, based on their specific business practices, to comply with the trust service principles they select out of the five.

Pic 1: Five Trust Service Principles

(Note: The SOC-related trust service principles are security, availability, processing integrity, confidentiality, and privacy. For SOC 2 compliance, it is not mandatory to have all of them in place.)

Readiness Testing

Even though the SOC 2 audit is done by an independent CPA (Certified Public Accountant) or accountancy organization, we (QA engineers) can also do basic readiness testing by verifying the newly created reports, scripts, and logs, sensitive data encryption, etc. These small actions will give SaaS providers significant confidence to move forward with the SOC 2 audit, as they reduce the risk of exceptions and failures. Furthermore, the QA team needs to make sure the newly added changes do not affect the functionality and performance of the application.

Validating Sensitive Fields Encryption

Encrypting all the identified sensitive fields is an easy way to address two trust principles: privacy and confidentiality. Data can be encrypted as required after properly identifying all the tables (in both the systemDB and clientDB) that include the identified sensitive fields. Those tables/fields can then be validated by querying them to check whether ciphertext is shown in the required fields instead of the actual data. If you also want to check whether the correct data is stored as ciphertext, that can easily be done using SSMS 2017. A certificate is created on the database server when the identified fields are encrypted using ‘Always Encrypted’. By exporting that certificate and importing it to your machine, you can view either the actual data or the ciphertext, depending on the mode selected.
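As a rough illustration of this readiness check, the sketch below uses an in-memory SQLite database as a stand-in for the real clientDB (in practice you would query SQL Server, e.g. via a driver such as pyodbc); the table, column, and sample values are all hypothetical:

```python
import sqlite3

# Stand-in database; table and column names are hypothetical.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE employee (id INTEGER, ssn TEXT)")
# Simulated ciphertext, as the column would appear to a client
# that does not hold the decryption certificate.
conn.execute("INSERT INTO employee VALUES (1, '0x01A3F9C2')")

# Sample sensitive values seeded into the test data beforehand.
known_plaintexts = ["123-45-6789"]

for (value,) in conn.execute("SELECT ssn FROM employee"):
    # The readiness check: none of the seeded plaintext values
    # should be readable in the encrypted column.
    assert value not in known_plaintexts, f"plaintext leaked: {value}"
print("all sensitive fields show ciphertext")
```

Seeding known plaintext values and asserting they never appear in query results gives a repeatable check that encryption has actually been applied to every identified field.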

Validating Intrusion Detection and Prevention

Intrusion detection and prevention can be done via network and application firewalls, custom rules, and two-factor authentication. Two-factor authentication requires users to provide two forms of verification when logging into an account, such as a password and a one-time passcode (OTP) sent to a mobile device. It strengthens intrusion prevention by adding an extra layer of protection to the application’s sensitive data. Verification can be done using an application that supports time-based one-time password generation (a TOTP token generator). The permission structure, UI validations, the reset password flow, and enabling and disabling 2FA are a few areas that need to be taken into consideration when testing 2FA.
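For reference, a TOTP code of the kind such token generators produce can be computed with the Python standard library alone; the sketch below implements the RFC 6238 algorithm (HMAC-SHA1, 30-second step) and checks it against a test vector published in the RFC:

```python
import base64
import hashlib
import hmac
import struct
import time

def totp(secret_b32, when=None, digits=6, step=30):
    """RFC 6238 time-based one-time password (HMAC-SHA1, default 30 s step)."""
    key = base64.b32decode(secret_b32, casefold=True)
    counter = int((time.time() if when is None else when) // step)
    digest = hmac.new(key, struct.pack(">Q", counter), hashlib.sha1).digest()
    offset = digest[-1] & 0x0F  # dynamic truncation (RFC 4226)
    code = struct.unpack(">I", digest[offset:offset + 4])[0] & 0x7FFFFFFF
    return str(code % 10 ** digits).zfill(digits)

# RFC 6238 Appendix B test vector: ASCII secret "12345678901234567890",
# time = 59 s, 8 digits -> "94287082"
secret = base64.b32encode(b"12345678901234567890").decode()
assert totp(secret, when=59, digits=8) == "94287082"
print("TOTP implementation matches the RFC 6238 test vector")
```

When testing 2FA, such an independent generator lets QA verify that the application accepts the code for the current time window and rejects expired ones.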

Maintaining a Failed Login Report will also help to keep track of all the failed login attempts. This information can be used to investigate and make quick and informed decisions about detected intrusions.

As QA engineers, we need to make sure that the failed login report includes accurate and relevant information about all failed login attempts, for all types of user accounts, regardless of active/inactive status. The related data can be stored in the Eventlog table, and the related XML can be viewed by querying it as needed.

Pic 2: Sample XML related to the failed logon report
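A basic readiness check on such a payload can be automated; the sketch below parses a hypothetical failed-logon XML entry (the element names are illustrative, not the real Eventlog schema) and asserts that the required fields are present:

```python
import xml.etree.ElementTree as ET

# A hypothetical failed-logon event payload as it might be stored in
# the Eventlog table; element names are illustrative only.
payload = """
<FailedLogon>
  <UserName>jdoe</UserName>
  <AccountStatus>Inactive</AccountStatus>
  <Timestamp>2020-01-15T09:32:11Z</Timestamp>
  <SourceIP>10.0.0.42</SourceIP>
</FailedLogon>
"""

root = ET.fromstring(payload)
required = ["UserName", "AccountStatus", "Timestamp", "SourceIP"]
missing = [
    tag for tag in required
    if root.find(tag) is None or not (root.find(tag).text or "").strip()
]
assert not missing, f"failed login report is missing: {missing}"
print("failed logon entry contains all required fields")
```

Running such a check against entries for both active and inactive accounts helps confirm that no account type is silently excluded from the report.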

Testing Audit Logs

To achieve SOC compliance, monitoring authorized and unauthorized system configuration changes and user access level changes is essential. Adding/editing permissions of user access levels and changing already defined permission structures are a few activities that need attention. The QA team can validate the related XML and its content by performing such activities.

Pic 3: Sample XML related to changing report permissions

Validating Decommissioning Cancelled Client Process

Decommissioning inactive/canceled clients from the application and removing all their historical data is essential to satisfy security requirements around data retention policies. The development team can come up with a script to remove the data from the system database. This script can be given to SaaS providers so that they can execute it whenever they need to decommission canceled or inactivated clients.

The following steps can be used to validate the decommissioning of a cancelled client:

  1. Inactivate/cancel the client (via the application or using their license)
  2. Execute the script to remove the historical data
  3. Verify that no record exists in the systemDB after decommissioning (you can simply check that the deleted clientID no longer exists in the systemDB)
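The three steps above can be sketched as a small automated check; the snippet below uses an in-memory SQLite database as a stand-in for the systemDB, with hypothetical table and column names:

```python
import sqlite3

# Stand-in for the systemDB; table/column names are hypothetical.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE client_data (client_id INTEGER, payload TEXT)")
conn.executemany(
    "INSERT INTO client_data VALUES (?, ?)",
    [(101, "active client"), (202, "cancelled client")],
)

decommissioned_id = 202  # step 1: this client has been cancelled

# Step 2: the (simplified) decommissioning script removes historical data.
conn.execute("DELETE FROM client_data WHERE client_id = ?", (decommissioned_id,))

# Step 3: verify no record for the cancelled client remains.
remaining = conn.execute(
    "SELECT COUNT(*) FROM client_data WHERE client_id = ?", (decommissioned_id,)
).fetchone()[0]
assert remaining == 0, "decommissioned client data still present"
print("client", decommissioned_id, "fully decommissioned")
```

In a real audit-readiness test, the same count check would be repeated against every table that can hold client history, not just one.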

Apart from just validating reports and XML, as proactive QA engineers we need to follow up on modifications to the scrubbing scripts, sensitive data encryption, event logs, and audit logs (and their XML) whenever the schema changes. These actions will help increase our value significantly.

Moreover, if we want our clients to achieve SOC 2 compliance, we need to adhere to the agreed process, because in a SOC audit even the process is validated. A few examples of such agreed rules/processes in terms of quality assurance are mentioned below.

  • All production candidates should be QA-verified before a production release.
  • All the changes should be approved by an authorized party.
  • QA sign-off should be given as a document, not via email, and that document should be attached to the production release ticket.

End-to-end SOC compliance testing cannot be performed by QA engineers alone, especially an offshore QA team. However, they can test the newly implemented two-factor authentication, sensitive field encryption, audit logs, scrubbing scripts, decommissioning scripts for canceled clients, and a few other reports related to intrusion detection, disaster recovery, and security incident handling.

These small actions will provide great value for customers by increasing their confidence to proceed with the SOC 2 audit. So why not proceed with these few steps?



Nuwani Navodya

Associate Lead QA Engineer

Execute Testing on Autonomous Robot Platform

Robotics is an interdisciplinary branch of engineering and science that includes mechanical engineering, electronic engineering, information engineering, computer science, and much more. Robotics deals with the design, construction, operation, and use of robots, as well as computer systems for their control, sensory feedback, and information processing.

A robot is a machine, programmable by a computer, that is capable of carrying out a complex series of actions automatically. It can be autonomous or semi-autonomous; an external control device can guide the robot, or the control may be embedded within it.

When we consider robot test engineering, nowadays mobile autonomous robots are progressively entering the mass market. Manufacturers therefore have to perform quality assurance tests on a series of robots, so tests should be repeatable and as automated as possible. Testing mobile robotic systems is a challenge because new characteristics, such as non-deterministic inputs, communication among components, and time constraints, must be considered. Simulations have been used to support the development and validation of these systems. Coverage testing criteria can contribute to this scenario by adding mechanisms for measuring quality during system development.

When we move from software test engineering to robot test engineering, we mainly focus on the following five areas.

  1. Stress Testing
  2. Performance Testing
  3. Longevity Testing
  4. Security Testing
  5. Functional Testing

Let’s take a look at each, one by one:

Stress Testing

Stress testing is a type of software testing that verifies the stability and reliability of a system. We use stress tests in robot test engineering to determine the robustness and error handling of the robot, as well as its backend, under extremely heavy load conditions. Under stress testing, the following areas are addressed.

  • Highly Accelerated Life Test (HALT)

    HALT is a technique designed to discover the weak links of a product. It can trigger failure modes faster than traditional testing approaches. Rather than applying low stress levels over long durations, HALT applies high stress levels for short durations, well beyond the expected field environment. Hardware units like the electronics, drive systems, and sensors used in the robot are tested under HALT by driving the robot at different speed limits. Software components, for example the robot’s control system (the brain of the robot), communication between the robot and the backend, and the robot’s monitoring and management tools, are validated under extreme workload or stress.

    When we come to the control node stress test, we stress the different states of the robot and test how it behaves under stress. Failures triggered by the above-mentioned tests do not have pass/fail results; they require root cause analysis and corrective action to achieve optimum value from testing. This creates the ability to learn more about the design and material limitations, as well as the bottlenecks, and provides opportunities to continually improve the design before bringing the robot to market. As most items are only as good as their weakest link, the testing should be repeated to systematically improve the overall robustness of the product.

  • Environmental conditions

    Environmental conditions testing measures the performance of equipment under specified environmental conditions such as temperature, static charge, obstacles, and climate. When we run the robot in different environmental conditions, we test how environmental factors affect the robot’s smooth functioning and how much stress they create on the robot’s hardware and software.

  • Benchmarking values

    Considering the Mean Time Between Failures (MTBF) and the performance statistics of each component, a benchmark is derived for the robot.

    A couple of examples are;

    • The battery charge retention time
    • The motor gear wear
    • RF antenna interference
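For illustration, an MTBF benchmark is simply the total observed operating time divided by the number of failures; the component names and figures below are hypothetical:

```python
def mtbf(total_operating_hours, failure_count):
    """Mean Time Between Failures: operating time / number of failures."""
    if failure_count == 0:
        return float("inf")  # no failures observed yet
    return total_operating_hours / failure_count

# Hypothetical benchmark data for two robot components.
components = {
    "drive motor":  {"hours": 1200.0, "failures": 3},
    "RFID antenna": {"hours": 1500.0, "failures": 2},
}

for name, data in components.items():
    print(f"{name}: MTBF = {mtbf(data['hours'], data['failures']):.1f} h")
# drive motor: MTBF = 400.0 h
# RFID antenna: MTBF = 750.0 h
```

The resulting figures can then serve as baseline values against which longevity and maintenance tests compare later measurements.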

Performance Testing

Performance testing is the process of determining the speed, responsiveness, and stability of a computer, network, software program or device under a workload.

In robot test engineering, we do performance tests to ensure the performance of software, network, and hardware.

Under software, QA should ensure the performance of software resources such as the MQTT broker, and the responsiveness of monitoring and troubleshooting software.

To ensure network performance, QA should validate the teleoperation control, which is used to control the robot externally from a remote location. This involves accessing and controlling the robot in real time (in both autonomous and manual modes) as part of the network performance test.

Under hardware performance testing, QA should ensure the performance of the robot’s sensor system, control system, and drive system.

Longevity Testing

A longevity test is an operational testing scheme that uses a baseline work efficiency specification to evaluate large enterprise application and hardware performance. Longevity testing is applied for error checking after a period of live operation or heavy usage, and its extent is contingent on complexity and size. In robot test engineering, QA should carry out longevity tests to ensure:

  • Hardware (sensors & electronics)

    How the hardware performs in the long run and benchmarks the value for maintenance.
  • Drive system (Performance in continuous operation)

    How the robot’s control node and navigation system performs in the long run and benchmarks the value for maintenance.
  • Environment (Parameters change with runtime)

    How these environmental factors (temperature, static charge, obstacles, climate) affect the robot in the long run, and benchmark values for maintenance.
  • Software (Performance over time)

    How backend, managing and monitoring systems behave in the long run and identify the bottlenecks to optimize the software.

Security Testing

Security testing is a process intended to reveal flaws in the security mechanisms of an information system that protect data and maintain functionality as intended. When it comes to robot test engineering, security is a very sensitive and important component to verify.

First, the reasons for an attack need to be identified. Then, the possible methods of attack need to be thought through for each reason. Finding out whether precautions have already been taken against those methods would be the next step. However, a system cannot be 100% secure, for unavoidable reasons. One such instance would be an attacker trying to remove the nuts and bolts of a robot; this is identified as a threat that cannot be avoided or guarded against. A scope is laid out in the initial stages, and only the threats within it are looked into; threats that are out of scope are left unattended.

Functional Testing

When we consider functional testing, there are four main areas:

  • Software
  • Navigation/Autonomous functions
  • Use cases
  • Simulation

Under software testing, we test the robot’s control node, the brain of the robot, by testing its every state with all possible scenarios. We also test the robot’s utilities, like RDL (Report Definition Language), which is used to communicate between the robots and the backend. In addition, QA should verify the software backend as well as the robot’s management and monitoring units, like the Troubleshooter application.

The robot navigation stack plays a major role. The ROS (Robot Operating System) navigation stack enables mobile robots to move from place to place reliably. The job of the navigation stack is to produce a safe path for the robot to execute by processing data from odometry, sensors, and an environment map.

QA validates the navigation stack and the autonomous functions of the robot to ensure that the robot navigates safely and correctly while avoiding obstacles. A robot is designed for specific use cases; in the case of AZIRO, the robot’s primary use case was to scan RFID inventory while navigating the store floor. QA should thoroughly validate this use case, with special emphasis on the algorithm used by the robot for navigation, and test the edge cases.

When doing robot functional testing, QA uses a testing environment. In the test field, it is first required to verify the interaction of hardware and software by running the robot. Upon fine-tuning the robot’s performance, the next step is to create a more advanced testing environment that closely resembles the actual store environment.

Finally, the robot is deployed in the real-world environment, and a final testing round is carried out to ensure the robot works as expected.

Let’s go to the deeper level of robot testing in the next chapter.

To be continued….



A Methodology for Testing Mobile Autonomous Robots

Gihani Naligama

Gihani Naligama is a QA Engineer at Zone24x7

Test Design Technique – Equivalence Partitioning

“Your test equipment is lying to you and it is your job to figure out how.”

Charles Rush

What is a software test design technique?
Why do we need to use a technique for testing?

A software test design technique is a method a quality assurance engineer can use to derive test cases and test scenarios according to the specification or structure of the application. But why do we need a technique for this? We could simply go through the specification document or a structural document and derive the test cases. However, as a fundamental software testing principle states, “exhaustive testing is impossible,” and sometimes we waste time attending to unnecessary test scenarios.

To overcome these conflicts, we can use test design techniques, so that the test execution time yields maximum coverage.

As shown in the diagram below, we can divide testing into two categories: static testing and dynamic testing. In this blog, I will discuss the “Equivalence Partitioning” technique, which falls under the black-box testing method.

Let’s discuss what equivalence partitioning is. Equivalence partitioning means testing various groups of inputs that we expect the application to handle in the same way, so that each group exhibits similar behavior. This method identifies equivalence classes / equivalence partitions; they are equivalent because the application should handle them in the same way. We can divide these equivalence partitions into two categories.

  1. Valid equivalence partitions – describe the valid scenarios that the application should handle
  2. Invalid equivalence partitions – describe the invalid scenarios that the application should handle gracefully or reject

Note: We can use multiple valid classes at a time. However, keep in mind that we cannot combine multiple invalid classes in one scenario, because one invalid class value might mask the incorrect handling of the other invalid class.

How to derive equivalence classes:

Assume we have to validate an input field that accepts numbers and characters, while the application should reject special characters. Then,

Set – Keyboard input
Subset A – Numbers (Valid)
Subset B – Characters (Valid)
Subset C – Special Characters (Invalid)

This is the basic visualization of equivalence partitioning. But we need to remember that we can apply equivalence partitioning iteratively. In the above example, subset A accepts numbers. We can divide the numbers class into sub-partitions such as integer values, decimal values, negative values, etc., as shown below.
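To make the partitions concrete, the sketch below classifies a raw input string into one of the subsets above, including the iterative refinement of the numbers class; the partition labels are just names chosen for this example:

```python
def partition(value):
    """Classify a keyboard input into its equivalence partition."""
    if value.isalpha():
        return "letters (valid)"
    try:
        number = float(value)
    except ValueError:
        return "special characters (invalid)"
    # Refine the numbers class into sub-partitions.
    if value.lstrip("-").isdigit():
        return "negative integer (valid)" if number < 0 else "integer (valid)"
    return "decimal (valid)"

# One representative value per partition is enough for a test case:
for sample in ["abc", "42", "-7", "3.14", "@!#"]:
    print(sample, "->", partition(sample))
```

Picking one representative per partition is exactly the saving the technique promises: the application should treat every other member of that partition the same way.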

How to compose test cases:
Assume we have two sets, X and Y. X1 and X2 are subsets of X, and Y1, Y2, and Y3 are subsets of Y. Assuming all these subsets are independent and valid, we can cover both an X subset and a Y subset in the same test case. This helps reduce the number of test cases while still covering the same scenarios.
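The idea can be sketched in a few lines; the subset names below are the hypothetical ones from the example:

```python
from itertools import zip_longest

X = ["X1", "X2"]          # valid subsets of field X (hypothetical)
Y = ["Y1", "Y2", "Y3"]    # valid subsets of field Y (hypothetical)

# Testing each subset in isolation would need len(X) + len(Y) = 5 cases.
# Because valid classes can be combined, one test case can cover an X
# subset and a Y subset at once; reuse the last X value for leftovers.
combined = [(x if x else X[-1], y) for x, y in zip_longest(X, Y)]
print(combined)       # [('X1', 'Y1'), ('X2', 'Y2'), ('X2', 'Y3')]
print(len(combined))  # 3 cases cover all 5 subsets
```

Note that this pairing only works for valid classes; following the note above, invalid classes must each get their own test case so one failure cannot mask another.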

I have only shared the fundamentals of equivalence partitioning. Try Google and seek more 😉

See you again with another test design technique!

Praveena Karunarathna

Senior QA Engineer

Reduce Cost and Improve Quality with Defect Prevention

What do clients expect from a Quality Assurance Engineer? A lengthy list of the defects you have found or a high-quality software product within a low production cost and time?

Defects add no value to the software development process; on the contrary, their occurrence is a main reason behind increases in cost and time. Therefore, the ultimate goal of a Quality Assurance Engineer is to deliver a product of the best possible quality while minimizing production cost and time. So what strategies can we introduce to our process to achieve this goal? Defect prevention is one such activity that is important but often neglected in the software delivery process, as most QA teams focus on defect detection.

What is Defect Prevention?

Defect Prevention is a strategy applied in the early stages of the Software Development Life Cycle to identify and remove defects before actual testing starts. Down the line, it helps identify the root causes of defects to prevent them from recurring.

Different organizations adhere to different Defect Prevention strategies. What makes one strategy differ from another is the set of activities that take place during the process, each with a responsible party. Importantly, not only QA Engineers are responsible for Defect Prevention; all the members involved in the development process have to take part in it.

Figure 1: Defect Prevention Stage

Why is Defect Prevention important?

  • Once QA engineers invest time in Defect Prevention in the early stages of the development process, they do not have to put as much time and effort into detecting and tracking defects in the testing stage. This directly saves time and leads to on-time delivery of the product.
  • It is very cost-effective and time-saving to identify defects and fix them in the early stages of the development process. As the code base grows, it becomes more difficult to fix a defect without a negative impact that leads to rework.

“The Systems Sciences Institute at IBM has reported that the cost to fix an error found after product release was four to five times as much as one uncovered during design, and up to 100 times more than one identified in the maintenance phase”

Figure 2: Relative Costs to Fix Software Defects (Source: IBM Systems Sciences Institute)
  • Rework has a considerable impact on production cost and has been a main reason for delays in the development process. Defect Prevention reduces the amount of rework and ensures lower production cost and faster delivery.
  • Defect Prevention activities such as design reviews lead to a better design by identifying bottlenecks, roadblocks, and possible performance and security failures early in the development process.
  • Introducing a Defect Prevention strategy into a development process makes it more reliable and manageable. Apart from that, it leads to a cultural change that focuses on quality rather than quantity.

In conclusion, Defect Prevention has a direct impact on controlling the cost of the project and the quality of its deliverables. Therefore, introducing a Defect Prevention strategy into your process will be a good investment, as it leads to a satisfied client. In my next article, I will talk about actions QA Engineers can take to prevent defects that are effective and easy to implement in your software development process.



Pavithra Dissanayake

Pavithra Dissanayake is an Associate Lead QA Engineer at Zone24x7