Common Design Patterns every Test Automation Engineer should know

In software design, design patterns are created to solve common problems. They are rarely discussed in test automation, perhaps because the topic sounds complicated. While some sophisticated design patterns solve complex issues in software development, there are also easy-to-understand, easily adoptable patterns that can significantly improve the readability and maintainability of our test automation code. In this article, you will see the complete details of the Page Object Model (POM) design pattern, together with Page Factory, in Selenium, with practical examples.

Page Object Model

POM is a design pattern that has become popular in Selenium test automation because it enhances test maintenance and reduces code duplication. To get started with POM, I would like to take you through the following basic example: an interview with a Page Object.


What’s your first name?
Login.

Last name?
Page Object.

Tell us more about your family

Everyone in my family is a Page Object.

Each of us knows everything about a page.

The page that I specialize in is the login page.

My brother, Result Page Object, knows about the results page.

And our sister, Detail Page Object, is the expert on the details page.


What makes you an expert in login pages?

I know a lot about the login page. I know the information that makes the page unique, such as its title and URL.

I also know about the elements of the login page, their values and attributes. And my knowledge does not end at login page information.

I can tell this information to anyone interested in it. Just the other day, someone from the unit test family asked about these details.


Awesome. What else do you know about the login page?

I know what can be done on a page.

Like clicking links, typing in a textbox, selecting values in a listbox. And to show what an expert I am, I can even do these things on behalf of others.

The same unit test that asked the other day about the page title and URL wanted more things from me.


Such as…

First let me tell you his name. It is testThatLoginWorks. Weird name, isn’t it?

So, he wanted me to type some text in the username textbox.

Then, to type some other text in the password textbox. And finally, to click the login button. I did so many things for him.
That was a busy day for me.


So you know many things about the login page. Do you know about other pages as well?

The login page is the only page that I know of.

About other pages, you should talk to the other members of the Page Object family.

My brother, my sister and everybody else.

Each of them is an expert in one page as well.



Excellent. Can you summarize for our audience what you do best?

I am the number one expert on the login page.

I know all the information about the login page and its elements. I also know everything that can happen on a login page.

And I can even do these things when asked nicely. I am a Page Object.

Thank you very much!

Consider the example below of a simple Selenium page class that applies the POM concept.

import org.openqa.selenium.By;
import org.openqa.selenium.WebDriver;

public class GmailLoginPage {

    WebDriver driver;

    By userName = By.name("uid");
    By password = By.name("password");
    By login = By.name("btnLogin");

    public GmailLoginPage(WebDriver driver) {
        this.driver = driver;
    }

    // Set user name in textbox
    public void setUserName(String strUserName) {
        driver.findElement(userName).sendKeys(strUserName);
    }

    // Set password in password textbox
    public void setPassword(String strPassword) {
        driver.findElement(password).sendKeys(strPassword);
    }

    // Click on login button
    public void clickLogin() {
        driver.findElement(login).click();
    }

    /**
     * This POM method will be exposed in the test case to log in to the application
     * @param strUserName
     * @param strPassword
     */
    public void loginToGmail(String strUserName, String strPassword) {
        // Fill user name
        this.setUserName(strUserName);
        // Fill password
        this.setPassword(strPassword);
        // Click Login button
        this.clickLogin();
    }
}
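To see how a test consumes this page object, here is a minimal sketch, assuming TestNG and ChromeDriver; the test name (borrowed from the interview above), the URL and the credentials are illustrative only:

import org.openqa.selenium.WebDriver;
import org.openqa.selenium.chrome.ChromeDriver;
import org.testng.annotations.Test;

public class LoginTest {

    @Test
    public void testThatLoginWorks() {
        WebDriver driver = new ChromeDriver();
        try {
            driver.get("https://mail.example.com/login"); // illustrative URL
            // The test talks to the page object; it knows nothing about locators
            GmailLoginPage loginPage = new GmailLoginPage(driver);
            loginPage.loginToGmail("testUser", "testPassword"); // illustrative credentials
            // Assertions against the resulting page would go here
        } finally {
            driver.quit();
        }
    }
}

Notice that if a locator changes, only GmailLoginPage needs to be updated; the test itself stays untouched, which is exactly the maintainability benefit listed below.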

Why should POM be used?

Here are the main advantages of using the Page Object pattern:

  • Easy maintenance
  • Easy readability of scripts
  • Reduced or eliminated code duplication
  • Reusability of code
  • Reliability

Page Factory Model

Page Factory is Selenium WebDriver's built-in, optimized support for the Page Object Model. Web elements are declared with the @FindBy annotation, and the PageFactory.initElements method initializes them when the page object is constructed; the elements are created as lazy proxies, so each one is located only when it is actually used. The page object can then be used from test classes in other Java files. Consider the example below of a simple Selenium page class that applies the Page Factory concept.


import org.openqa.selenium.WebDriver;
import org.openqa.selenium.WebElement;
import org.openqa.selenium.support.FindBy;
import org.openqa.selenium.support.PageFactory;

public class GmailLoginPage {

    WebDriver driver;

    // All WebElements are identified by the @FindBy annotation
    @FindBy(name = "uid")
    WebElement userName;

    @FindBy(name = "password")
    WebElement password;

    @FindBy(name = "btnLogin")
    WebElement login;

    public GmailLoginPage(WebDriver driver) {
        this.driver = driver;
        // This initElements method will create all WebElements
        PageFactory.initElements(driver, this);
    }

    // Set user name in textbox
    public void setUserName(String strUserName) {
        userName.sendKeys(strUserName);
    }

    // Set password in password textbox
    public void setPassword(String strPassword) {
        password.sendKeys(strPassword);
    }

    // Click on login button
    public void clickLogin() {
        login.click();
    }

    /**
     * This POM method will be exposed in the test case to log in to the application
     * @param strUserName
     * @param strPassword
     */
    public void loginToGmail(String strUserName, String strPassword) {
        // Fill user name
        this.setUserName(strUserName);
        // Fill password
        this.setPassword(strPassword);
        // Click Login button
        this.clickLogin();
    }
}

References

What is the page object in Selenium?

Design Patterns In Test Automation World

Page Object Model (POM) & Page Factory: Selenium WebDriver Tutorial

Dinuka Abeysinghe

Senior QA Engineer


Robotic Process Automation

Today, the world is moving forward with a variety of technologies. As a result, the future workplace will be a blend of humans and software bots, and this relationship will create many exciting opportunities for the world.

Are you ready for that challenge?

If not, now would be an excellent time to consider these exciting technologies, and RPA is one such technology that you can take up.

The process of automating business operations with the help of robots to reduce human intervention is called RPA. It is designed to automate the high volume of repeatable tasks that take up a large percentage of workers' time.

This is a technology that allows anyone to configure computer software to emulate and integrate the actions of a human interacting with digital systems to execute a business process.

RPA allows organizations to automate at a fraction of the cost and time previously required, delivering direct profitability while improving accuracy across organizations and industries.


Getting started with RPA

What is RPA?

The process of automating business operations with the help of robots to reduce human intervention is known as Robotic Process Automation.

Why RPA?

Suppose you want to publish an article on social media every day at a specific time. You would have to do it manually or appoint an employee to do that task; both methods consume time and money. Now imagine appointing a software robot to do that repetitive task on your behalf. That is why RPA comes into the picture.


What are the things that RPA robots can do?

RPA robots are capable of mimicking human actions: they can log into applications, move files and folders, copy and paste data, fill in forms, extract data from documents, and scrape data from websites.


Difference between test automation and RPA

RPA Implementation methodology

  • Planning – Identify the processes you want to automate
  • Development – Start developing the automation workflow as per the agreed plan
  • Testing – Run testing cycles to identify and correct defects
  • Maintenance – Provide continuous support after going to production


Best Practices of RPA Implementation

  • Consider business impact before opting for the RPA process
  • Define and focus on the desired ROI
  • Focus on targeting larger groups and processes
  • Don’t forget the impact on people


Advantages of RPA

  • RPA can automate complex processes easily
  • RPA takes care of repetitive tasks and saves time and resources
  • Real-time visibility into defects
  • Software robots do not get tired

Disadvantages of RPA

  • The bot is limited by the speed of the application
  • When the automated application changes, the scripts and the bot need to be reconfigured


RPA Tools


We can select an RPA tool by considering the four parameters below.

  • Data
  • Interoperability
  • Type of tasks mainly performed
  • Artificial Intelligence

There are a lot of tools used for robotic process automation; below are some popular tools used in today's industry.

  • UiPath
  • Blue Prism
  • Automation Anywhere

Wrapping Up

Finally, this is a marvelous technology that will make our lives much more exciting. Here I have only shared the basics of RPA; there is much more to discover and learn. Keep learning more about RPA!

Sajeeka Mayomi

QA Engineer


Investing in prevention vs investing in cure – Testing common vulnerabilities in the World Wide Web

Do you remember the last time you lost something valuable, something of sentimental value? Believe me, it is nothing compared to a family's only breadwinner losing their monthly income because the company they worked for fell victim to the most critical crime of our times: CYBERCRIME, which costs companies worldwide more than $6 trillion each year. If you don't wish your company to be included in this figure, it is time to increase investment in cyber-security testing.

Today’s world is becoming increasingly networked, and every business has one or more web applications, which is why the scope of potential exploits is growing at a mind-boggling rate.

Would you like to see the company you work for in the news? Yes, if it is for something like free publicity, increasing brand awareness and enhancing brand identity and popularity; certainly not if it is in a negative context that diminishes the brand identity and causes a huge financial loss. This is why it is crucial to ensure that your applications are tested and secured.

Chances of Vulnerabilities are high and so is the cost

According to statistics on legal and compliance guides “the average consolidated total cost of a data breach is $3.8 million.”

There are various types of costs associated with security breaches and vulnerabilities on the World Wide Web.

  • Reduction and loss of revenue: Occurring due to stolen corporate data or a consequent decrease in sales volume
  • Cost incurred on investigation: An investigation process eats up your time, energy and, most importantly, money
  • Cost of downtime: Time spent on fixing breaches vs time spent on innovation

Moving on to what a vulnerability is and how it can be prevented through testing.

Vulnerability
Inability to withstand the effect of a hostile environment


Vulnerability in WWW
A weakness in a web application that allows a malicious user to disturb the application’s security objective(s)


Exploit in WWW.
Taking advantage of a web application’s flaws and carrying out unauthorized activities related to the system


Attacks

  • Active Attacks – Attempts to modify a system’s state by altering its resources and operations, e.g. Denial of Service (DoS) and spoofing.
  • Passive Attacks – Attempts to learn or gather information about a system without altering its resources or operations.

Advanced Persistent Threat (APT)

A malicious user or party gains unauthorized access to a system and carries out unauthorized activities for an extended period of time without being detected.

  • Politically or Commercially motivated
  • Stay undetected
  • Longer period
  • Most often data theft


Warning Signs

  • Abnormal internet bandwidth usage
  • Abnormal patterns in network traffic
  • Detection of Trojans and other malware
  • Detection of aggregated data bundles


SQL Injection

  • A malicious user passes (injects) SQL, or a fragment of it, through a web application’s input field
  • This alters the behavior of the SQL query the developer intended, as sketched below
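To make this concrete, here is a minimal JDBC sketch, assuming a hypothetical users table; the queries and column names are illustrative only:

import java.sql.Connection;
import java.sql.PreparedStatement;
import java.sql.ResultSet;
import java.sql.SQLException;
import java.sql.Statement;

public class LoginDao {

    // VULNERABLE: user input is concatenated straight into the query.
    // Entering  ' OR '1'='1  as the password turns the WHERE clause
    // into a condition that is always true.
    public boolean loginUnsafe(Connection con, String user, String pass) throws SQLException {
        String sql = "SELECT 1 FROM users WHERE name = '" + user + "' AND password = '" + pass + "'";
        try (Statement st = con.createStatement(); ResultSet rs = st.executeQuery(sql)) {
            return rs.next();
        }
    }

    // SAFER: a parameterized query keeps the input as data, not as SQL.
    public boolean loginSafe(Connection con, String user, String pass) throws SQLException {
        String sql = "SELECT 1 FROM users WHERE name = ? AND password = ?";
        try (PreparedStatement ps = con.prepareStatement(sql)) {
            ps.setString(1, user);
            ps.setString(2, pass);
            try (ResultSet rs = ps.executeQuery()) {
                return rs.next();
            }
        }
    }
}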

Importance of Security Testing in order to minimize the risk

Software security testing helps identify implementation errors that were not discovered during code reviews, while risk analysis, especially at the design level, can help us identify potential security problems and their impact.

It is a well-known fact that the earlier a defect is detected, the less impact it has on the project. Therefore, giving high attention to security testing and thereby reducing the risk the project faces is immensely important.

References

Different types of security tests

Importance of Security Testing

SQL Injections

Verushka Thilakarathne

Senior QA Engineer


Test Design Technique – Equivalence Partitioning

“Your test equipment is lying to you and it is your job to figure out how.”

Charles Rush

What is a software test design technique?
Why do we need to use a technique for testing?

A software test design technique is a method that a quality assurance engineer can use to derive test cases and test scenarios from the specification or structure of the application. But why do we need a technique for this? Couldn’t we simply go through the specification document or a structural document and derive the test cases? As a fundamental software testing principle tells us, “Exhaustive testing is impossible”, and sometimes we waste our time attending to unnecessary test scenarios.

To overcome this, we can use test design techniques so that the test execution time yields maximum coverage.

We can divide testing into two categories: static testing and dynamic testing. In this blog I will discuss the “Equivalence Partitioning” technique, which falls under the black-box testing method.

Let’s discuss what equivalence partitioning is. Equivalence partitioning tests groups of inputs that we expect the application to handle in the same way. The method identifies equivalence classes (equivalence partitions), which are equivalent because the application should handle every value in a partition the same way. We can divide these partitions into two categories:

  1. Valid equivalence partitions – describe the valid scenarios the application should handle
  2. Invalid equivalence partitions – describe the invalid scenarios the application should handle or reject

Note: We can use multiple valid classes at a time. However, keep in mind that we cannot combine multiple invalid classes in one scenario, because one invalid class value might mask the incorrect handling of the other.

How to derive equivalence classes:

Assume we have to validate an input field that accepts numbers and characters, while the application should reject special characters. Then:

Set – Keyboard input
Subset A – Numbers (Valid)
Subset B – Characters (Valid)
Subset C – Special characters (Invalid)

This is the basic visualization of equivalence partitioning. But remember that we can apply equivalence partitioning iteratively. In the above example, subset A accepts numbers; we can divide the numbers class further into sub-partitions such as integer values, decimal values and negative values.

How to compose test cases:
Assume we have two sets, X and Y. X1 and X2 are subsets of X, and Y1, Y2 and Y3 are subsets of Y. Since these subsets are independent and valid, we can combine one subset of X with one subset of Y in a single test case. This reduces the number of test cases while still covering the same scenarios, as sketched below.
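Here is a minimal Java sketch of the idea, assuming a hypothetical validator for the input field described earlier; one representative value is drawn from each partition, so a handful of cases stand in for the whole input space:

public class EquivalencePartitioningExample {

    // Hypothetical system under test: accepts numbers and characters,
    // rejects special characters.
    static boolean isValidInput(String input) {
        return input.matches("[a-zA-Z0-9]+");
    }

    static void check(String value, boolean expectedValid) {
        if (isValidInput(value) != expectedValid) {
            throw new AssertionError("Partition representative failed: " + value);
        }
    }

    public static void main(String[] args) {
        check("123", true);   // Subset A: numbers (valid)
        check("abc", true);   // Subset B: characters (valid)
        check("12a", true);   // Two valid classes combined in one case
        check("a$b", false);  // Subset C: special characters (invalid, tested alone)
        System.out.println("All partition representatives behaved as expected.");
    }
}

Note that the two valid classes are combined in one test case, while the invalid class is exercised on its own, following the rule above.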

I have only shared the fundamentals of equivalence partitioning. Try Google and seek more 😉

See you again with another test design technique!

Praveena Karunarathna

Senior QA Engineer


Reduce Cost and Improve Quality with Defect Prevention

What do clients expect from a Quality Assurance Engineer: a lengthy list of the defects you have found, or a high-quality software product delivered at low production cost and on time?

Defects add no value to the software development process; on the contrary, their occurrence is a main reason behind increases in cost and time. Therefore, the ultimate goal of a Quality Assurance Engineer is to deliver a product of the best possible quality while minimizing production cost and time. So what strategies can we introduce to our process to achieve this goal? Defect prevention is one such activity: important, but often neglected in the software delivery process, as most QA teams focus on defect detection.

What is Defect Prevention?

Defect Prevention is a strategy applied in the early stages of the Software Development Life Cycle to identify and remove defects before actual testing starts. Down the line, it helps to identify the root causes of defects and prevent them from recurring.

Different organizations adhere to different Defect Prevention strategies. What makes one strategy differ from another is the set of activities that take place during the process, and there is a responsible party for each activity. The important fact is that not only QA Engineers are responsible for Defect Prevention; all the members involved in the development process have to take part in it.

Figure 1: Defect Prevention Stage

Why Defect Prevention is important?

  • Once QA engineers invest time in Defect Prevention in the early stages of the development process, they do not have to put as much time and effort into detecting and tracking defects in the testing stage. This directly saves time and leads to on-time delivery of the product.
  • It is very cost-effective and time-saving to identify defects and fix them in the early stages of the development process. As the code base grows, it becomes more difficult to fix a defect without a negative impact that leads to rework.


“The Systems Sciences Institute at IBM has reported that the cost to fix an error found after product release was four to five times as much as one uncovered during design, and up to 100 times more than one identified in the maintenance phase”

Figure 2: Relative Costs to Fix Software Defects (Source: IBM Systems Sciences Institute)
  • Rework has a considerable impact on production cost and has been a main reason for delays in the development process. Defect Prevention reduces the amount of rework, ensuring lower production cost and faster delivery.
  • Defect Prevention activities such as design reviews lead to a better design by identifying bottlenecks, roadblocks, and possible performance and security failures early in the development process.
  • Introducing a Defect Prevention strategy makes the development process more reliable and manageable. Apart from that, it leads to a cultural change that focuses on quality rather than quantity.

In conclusion, Defect Prevention has a direct impact on controlling the cost of the project and the quality of deliverables. Therefore, introducing a Defect Prevention strategy into your process will be a good investment, as it leads to a satisfied client. In my next article, I will talk about the actions QA Engineers can take to prevent defects that are effective and easy to implement in your software development process.

References

https://www.isixsigma.com/tools-templates/software/defect-prevention-reducing-costs-and-enhancing-quality/

Pavithra Dissanayake

Pavithra Dissanayake is an Associate Lead QA Engineer at Zone24x7


Take Maximum Out of Quality Metrics

Have you ever felt that calculating and presenting quality metrics is a useless, time-consuming, cumbersome task? If your answer is yes, then it seems you are not using the correct quality metrics to measure the quality of your application or release build, or you are not aware of how quality metrics can be utilized to improve quality as well as to take decisions.

This article will provide you with some insights into the importance of carefully selected and well-presented quality metrics. Reading it should motivate you to use quality metrics smartly.


Why ‘Quality Metrics’ Matter?

A quantitative approach to measuring quality, with proper guidelines, is essential for comparing the quality of two builds or two projects, because quality is subjective and differs from person to person. Test execution in isolation cannot measure or improve the quality of software. Carefully selected quality metrics need to be used to evaluate quality aspects by identifying the initial level of quality, the current quality status, deviations and quality achievements.

Apart from that, nowadays software companies tend to push their software to the market as fast as they can to gain a competitive advantage. This behavior increases the risk of releasing a highly defective build to the production environment. Quality metrics can significantly help control this, as quality cannot be compromised at any cost.


The following are some of the uses of quality metrics.

  • Influence the decision-making
  • Set quality goals
  • Evaluate the testing effort
  • Report the progress of the project
  • Communicate an issue
  • Foresee the risks using the deviations from the quality goals and take preventive actions


How to choose the right quality metrics?

Selecting the right quality metrics is crucial to getting the maximum out of them. Context, purpose/usage, traceability and audience are four key factors when selecting a metric. Apart from that, a common interpretation, actionability, accessibility and transparency are a few other qualities of a good metric.

Link: https://www.juiceanalytics.com/writing/choosing-right-metric

Metrics that are too vague do not provide any actionable insight, whereas metrics that are too specific are not comprehensive enough to represent the actual difference made to overall product quality. To derive the most benefit from metrics, it is important to keep them simple and relevant.


How quality metrics can be used to improve the quality?

Evaluating the quality of the software under development is the primary objective of using quality metrics. The quality of the system under test can be improved by identifying issues, analyzing progress and trends, and taking corrective actions efficiently and effectively before it is too late.

Apart from that, evaluating the current testing process and the testing strategy is one of the key benefits of using quality metrics. Metrics related to testing effort, test effectiveness, test coverage, test execution rate, defect finding rate, etc. can provide valuable information about test progress and the effectiveness of current testing methods. This information can help the quality assurance team make relevant changes to the current testing process.

Furthermore, quality metrics can also be used to challenge the development team by setting high quality targets (e.g. the Defect Severity Index should be less than 1.5 to pass the build), which can ultimately result in high process and product quality.
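As a sketch of how such a gate can be computed, here is one common formulation of the Defect Severity Index as a weighted average over defects; the severity weights and counts below are illustrative, not a standard:

import java.util.Map;

public class DefectSeverityIndex {

    // One common formulation: a weighted average of defects, where a
    // higher weight means a more severe defect.
    static double severityIndex(Map<String, Integer> defectCounts, Map<String, Integer> weights) {
        int weightedSum = 0;
        int totalDefects = 0;
        for (Map.Entry<String, Integer> e : defectCounts.entrySet()) {
            weightedSum += weights.get(e.getKey()) * e.getValue();
            totalDefects += e.getValue();
        }
        return totalDefects == 0 ? 0 : (double) weightedSum / totalDefects;
    }

    public static void main(String[] args) {
        Map<String, Integer> weights = Map.of("Critical", 4, "High", 3, "Medium", 2, "Low", 1);
        Map<String, Integer> counts = Map.of("Critical", 0, "High", 2, "Medium", 5, "Low", 10);
        // (0*4 + 2*3 + 5*2 + 10*1) / 17 = 26 / 17 ≈ 1.53 -> just above a 1.5 gate
        System.out.printf("Defect Severity Index: %.2f%n", severityIndex(counts, weights));
    }
}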


How can Quality Metrics be utilized to take managerial-level decisions?

Using proper quality metrics increases confidence when taking managerial-level decisions. Given below are some decisions that can be taken by observing quality metrics.

  1. Development process improvement decisions, as quality metrics provide complete visibility into the effectiveness of software development efforts.
  2. QA process and strategy decisions, by observing how QA efforts add value to delivering high-quality software.
  3. Resource utilization decisions.
  4. Technology change decisions, by observing defect-related metrics such as defect age, defect removal efficiency, etc.

Accurate and timely details, presented in an effective manner, are vital for taking management decisions. A central dashboard with up-to-date quality metrics provides great value to the decision-making process, as quality metrics viewed together offer more insight than any single metric used individually.

A Build Quality Index is one such dashboard, where quality-related metrics such as Defect Density, Defect Severity Index, Defect Status Distribution (DSD) and Test Pass Ratio (TPR), along with their trend charts, are presented for each build.

In conclusion, software quality metrics provide immense value in increasing the quality of an application or release build and in the overall decision-making process. Hence, let’s pay more attention to calculating and presenting software quality metrics so that we take the maximum benefit from them.

Nuwani Navodya

Associate Lead QA Engineer


To build or to buy a Test Automation Tool

Should I build or should I buy?

As continuous integration and development shift the momentum of software delivery into a higher gear, testing teams need to upscale their test strategies. We talk about faster, more frequent deliveries with higher quality, and we stress tools that lubricate efficient and effective software delivery. The quality of these releases therefore plays a major role in meeting delivery expectations. It is for this reason that testing has evolved into an automated execution process.

The history of automated software testing has trailed the evolution of software development. The introduction of GUI (Graphical User Interface) applications paved the way for automation tools with record-and-playback functions. Over the years, we as quality assurance engineers have invested our time and resources in learning sophisticated enterprise test automation tools built by well-known solution providers such as HP, Microsoft, Atlassian and many more.

However, there are many instances where these tools have not provided a blanket solution for our automated testing needs, and integrating and adjusting an off-the-shelf automated testing tool can be even more complicated and time-consuming. This is when testers may be driven to build their own custom automation tool. This article will help you decide whether to write your own tool for automation and, if you choose to do so, which key parameters to consider during the implementation.

Analyze the pros and cons

As an initial step in deciding whether to build your own tool, it is advisable to weigh the pros and cons. Below are some of the advantages of writing your own test tool:

  • Can acquire more control over the test design and architecture.
  • Independence in development and maintenance.
  • Flexibility of adding or removing features as per requirement.
  • Easy to apply coding, testing process and delivery standards in the framework.
  • Creates a platform to market the framework as an enterprise tool.
  • Improve and utilize the knowledge and capacity of the QA Engineers within a team.

Although investing in a framework of your own seems appealing given the above advantages, there are negative impacts to this approach as well. The disadvantages can be listed as follows:

  • Consumes time, development resources and other costs.
  • Requires a thorough study and evaluation to assure if such a tool is not already in the market. No need to reinvent the wheel!
  • Requires highly skilled automation engineers.
  • Uncertainty of the return on investment.
  • Requires a strong proof of concept to impress investors.

Incorporate standards and best practices

After weighing the pros and cons of building your own test automation tool, if you finally decide to go ahead with writing your own, there are a few concepts and parameters to consider. It is important to note that a test automation framework can be described as a collection of standards and processes that facilitate component interactions and integrations, on top of which test scripts can be executed. In a pro-agile development background, writing a framework that incorporates best practices is a challenge. The following key concepts and standards facilitate building a powerful test automation framework:

  1. Separate the tests from the framework
    To promote reusability of tests and easier maintenance, it is advisable to separate the tests from the framework: your test scripts live in one package while the framework-related code lives in another.
  2. Separate the tests from the test data
    This separation ensures that no modification to the tests is required when the input data changes between test runs (a minimal sketch follows this list).
  3. Look for mechanisms other than the UI to verify tests
    The assertions you use in your tests should not depend only on the UI. For example, you can verify that a test has passed through a web service call. This largely cuts down UI control waiting times and eventually speeds up test execution.
  4. Use libraries
    For maintainability, it is advisable to separate reusable classes, external connections, common components and generic functions into a library. Test script writers can then invoke these libraries in their code.
  5. Follow coding standards
    The automation framework should follow coding standards to maintain consistency, enforce security measures and help coordination with the development team. This also improves the knowledge of the QA engineers involved in the framework development.
  6. Maintain versioning
    Versioning helps monitor the modifications made to the framework over time. It will also play a major role during licensing and deployments if the tool is eventually marketed as a commercial product.

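As an illustration of point 2, here is a minimal sketch, assuming a hypothetical login.properties file on the test classpath; the file name and keys are illustrative only:

import java.io.IOException;
import java.io.InputStream;
import java.util.Properties;

public class TestData {

    // Test data lives outside the tests, e.g. src/test/resources/login.properties:
    //   username=testUser
    //   password=testPassword
    // Changing the data never requires touching the test code.
    private final Properties props = new Properties();

    public TestData(String resourceName) {
        try (InputStream in = TestData.class.getResourceAsStream(resourceName)) {
            if (in == null) {
                throw new IllegalStateException("Test data not found: " + resourceName);
            }
            props.load(in);
        } catch (IOException e) {
            throw new IllegalStateException("Could not load test data: " + resourceName, e);
        }
    }

    public String get(String key) {
        return props.getProperty(key);
    }
}

A test then reads its inputs instead of hard-coding them, for example new TestData("/login.properties").get("username").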
If you have very specific test automation requirements and you cannot find a suitable automated testing tool in the market to facilitate your needs, the answer is yes: weigh the pros and cons and build your own tool. The value you add to your organization and the benefits you earn for your personal development will be immeasurable!

Menaka de Silva 

Lead QA Engineer


Exploring BDD with Specflow

Introduction on BDD

One of the biggest challenges faced by development teams today is understanding and implementing the exact requirements of the product owner. Dan North’s software development method BDD (Behaviour Driven Development) reduces the requirements knowledge gap between product owners and developers, bridges the domain knowledge gap between developers and testers, and reduces the cost of translation and rework. Through this development process, the business gains a more accurate product rather than one filled with messy code and bugs.

BDD provides the following benefits:

  • The product owner obtains a product matching the exact requirements.
  • The product owner provides the requirements with examples, which makes them easy to develop and test.
  • It minimizes the domain and technical knowledge gap between testers and developers.
  • Tests/scenarios are written in a human-readable format, so they are easy to read and write.
  • Tests/scenarios are written according to the behavior of the application and divided into features.
  • Tests verify that the exact business requirements were implemented.
  • Test automation is easy to implement.

The test automation industry gains a lot of advantages from BDD. Several testing tools were introduced along with BDD; some of the famous ones are JBehave and Cucumber. SpecFlow is another, and it is currently gaining the attention of many automation testers. In all of these frameworks, Gherkin syntax is used to write the feature files.

Getting started with Specflow

What is Specflow?

SpecFlow is often called “Cucumber for .NET”. Like Cucumber, it allows users to write feature files/user stories in human-readable Gherkin syntax. It is an open-source tool, so all technology seekers are welcome to give it a try. Visual Studio is recommended for use along with it, since the tool is based on .NET, and the Visual Studio debugger is an added advantage.

Set up in 5 mins with Visual Studio

Pre-Conditions for setup

  1. Install Visual Studio (Community Edition will be enough/ Latest Version)
  2. .NET framework should be installed (Latest Version)

Setup steps

  1. Navigate to Manage Extensions in order to install Specflow
  2. Install Specflow (Configure according to the Visual Studio version installed)

Tips and Tricks to write handy feature files

Feature files are considered the most important artifact in BDD. Scenarios should be simple and should not contain any complex steps. Regardless of domain expertise, any individual, from a product owner to a developer, should be able to understand any scenario. Below are some tips and tricks that I follow when writing feature files:

  • Use clear, simple sentences when writing feature files
  • Include repetitive testing steps in the Background block
  • Use parameters rather than hard-coding values inside step files
  • Use data tables to give multiple values to the parameters
  • Use Scenario Outline to run the same test repeatedly with different parameters
  • Use tags to group test scenarios according to the environment or the test suite
  • Don’t include all the scenarios in one feature file. Always divide them and keep a set of related scenarios in each feature file.

Note: Feature files can be written in many natural languages other than English.

The structure of a feature file is as follows: the main structure is divided into Feature, Background, and Scenarios. There are several keywords that carry different meanings, and they are explained below.

  1. Feature – Describes the specific functionality/feature of the application
  2. Background – Contains a collection of pre-conditions to be run before each scenario in the file
  3. Given – Describes pre-condition steps
  4. Scenario – Explains the identified test scenario
  5. And – Chains additional conditions onto the previous step
  6. When – Describes actions
  7. Then – Describes expected results

Example of Scenario Outline
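Here is a minimal, hypothetical feature in Gherkin illustrating a Scenario Outline; the steps and values are illustrative only. The outline runs once per row of the Examples table:

Feature: Login
  Background:
    Given the user is on the login page

  Scenario Outline: Log in with different credentials
    When the user enters "<username>" and "<password>"
    And clicks the login button
    Then the "<message>" message should be displayed

    Examples:
      | username  | password  | message          |
      | validUser | validPass | Welcome          |
      | validUser | wrongPass | Invalid password |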

Project Structure

There is no single recommended project structure; however, the project should be manageable and easy to rework. Below is a sample project structure.

The project can be divided into page objects, specifications containing the feature files, and a folder containing the step definition files. The App.config file defines the Chrome driver and the unit test provider for the project. RunSettings files can be used to store URLs, usernames and passwords for reuse when writing the tests.

Writing your first test

Follow the steps in the video to write your first test.

Finally, this is a wonderful technology that will make our daily work much more exciting. Keep learning more about SpecFlow and BDD; there is more to discover and learn. As Dan North says, “BDD is not about conversations, it’s about shipping working, tested software that matters, using conversations.”

Sajitha Dissanayake

Sajitha Dissanayake is a Trainee Associate QA Engineer at Zone24x7


How would you test a microservices architecture?


First of all, have you ever thought about this topic as someone who ensures the quality of the product and worries about releasing it almost bug-free to UAT? Yes, I am talking about testers, who turn everything upside down when it comes to test planning in a new testing engagement.

“What is the architecture I am looking at?”
“It’s a microservices architecture… “
“So what does it mean to me as a tester? Does that matter?”


If you have come across the answer above, you should read this; if not, you might come across it soon.

Identifying a microservices architecture is easy, but testing it is more complex than you think. Testing a microservices architecture is still new to many testers in the industry even today. Why? Because even now, new development work mostly starts in a monolithic architecture. I am not an architect but a professional tester, so I will explain this in a way all testers will understand.

Why are architects selecting microservices architecture?

Monolithic architecture is well known and widespread in the industry. It is simply the whole codebase in one piece: the whole system is developed as a single unit and deployed to production. It is self-contained and internally interconnected. This is good, sometimes excellent, for small systems and start-ups. But this era is all about web applications, and they evolve rapidly. Many of us are testing, or have tested, systems with monolithic architectures. Companies like Amazon, eBay and Google have all shifted to microservices architectures. When a system expands, an old-fashioned monolith cannot withstand the load; it cannot plug in new technologies; and it cannot exploit CI/CD programs, because even a little change to the system requires deploying the whole system to production, which, as you know, is hectic to do continuously.

What is microservices architecture?

The solution is indeed microservices. What are they? The system is broken down into different services instead of keeping everything together at the code level. For example, consider an insurance system where claims, policy coverage, alerts and notifications, and new business and renewals are split into different services. A single service, internally, is developed the way a monolith is; the difference is the connections to other microservices, which designers need to plan carefully. With this approach the microservices do not need to stick to one technology or database: developers have room to design each service separately, even in different technologies. But don’t forget that this is still one system. I will not explain further, as my objective is to showcase the testing side of all this.

What are the testing challenges?

So what are the challenges? Why does testing a system with a microservices architecture become more complex when designing and executing test cases? There are two reasons:

  • Interconnections between microservices.
  • Different technologies each microservice may use.

That’s all. Connections between microservices are something we always have to deal with in a microservices architecture. The use of different technologies in each microservice, on the other hand, may not always occur: a system can be built on a microservices architecture yet use the same technologies across all its functionality. So testers need to understand the architecture clearly.

Web services and microservices: two different things

Now let’s focus on the topic: how would you test a system with such an architecture? Some readers may be confusing microservices with web services, so let me clarify the latter. Web services define how communication between two devices or applications is held over the World Wide Web. There are two architectural styles: Simple Object Access Protocol (SOAP) and Representational State Transfer (REST). A REST API takes advantage of URL exposure, as in @Path("WeatherService"), while a SOAP API uses service interfaces, as in @WebService; both styles are sketched below.
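Those annotations come from Java's JAX-RS and JAX-WS APIs respectively; here is a minimal sketch of the two styles, with the service and operation names purely illustrative:

import javax.jws.WebMethod;
import javax.jws.WebService;
import javax.ws.rs.GET;
import javax.ws.rs.Path;
import javax.ws.rs.PathParam;

// REST style (JAX-RS): the resource is exposed through its URL.
@Path("WeatherService")
class WeatherResource {

    @GET
    @Path("temperature/{city}")
    public String temperature(@PathParam("city") String city) {
        return "25"; // illustrative value
    }
}

// SOAP style (JAX-WS): the contract is exposed through a service interface.
@WebService
class WeatherSoapService {

    @WebMethod
    public String temperature(String city) {
        return "25"; // illustrative value
    }
}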

So now you may understand that microservices architecture is one thing, and web services and API testing are another. The connections I am talking about are those built inside the architecture: the communication among the microservices.

How to overcome the challenges in your strategy?

To address the first challenge mentioned earlier (the connections that exist between microservices), integration testing is vital. I should highlight that it plays a big role in your overall test suite: you should find a way to test the connections, whether through a manual (front-end) test or an API test. Understanding the dependencies is crucial, and for integration testing I would say API testing is more efficient. Here, too, you should distinguish two things: microservices and their APIs (endpoints).

A particular microservice may have one or more endpoints, based on the design, or it may have no endpoints at all; such a microservice depends on another microservice, and the latter should have one or more endpoints. Sometimes there is a whole dependency chain. When planning your API testing you should know each microservice’s tasks and its dependencies (if any). This is something you may not face in a monolithic architecture, where such a design does not exist. When we say ‘integration testing’, various people use various terms; what matters is covering the integration among microservices. We can call this system integration testing as well. Whatever the name, make sure to achieve the coverage. A sketch of such an API-level check follows.
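Here is a minimal sketch of an API-level integration check in Java (11+), assuming a hypothetical claims service whose endpoint depends on a policy service; the URL and the response field are illustrative only:

import java.net.URI;
import java.net.http.HttpClient;
import java.net.http.HttpRequest;
import java.net.http.HttpResponse;

public class ClaimsIntegrationCheck {

    public static void main(String[] args) throws Exception {
        HttpClient client = HttpClient.newHttpClient();

        // The claims service must call the policy service internally to
        // resolve coverage, so a good response here also exercises the
        // connection between the two microservices.
        HttpRequest request = HttpRequest.newBuilder()
                .uri(URI.create("http://localhost:8080/claims/12345")) // illustrative endpoint
                .GET()
                .build();

        HttpResponse<String> response = client.send(request, HttpResponse.BodyHandlers.ofString());

        if (response.statusCode() != 200) {
            throw new AssertionError("Expected 200 but got " + response.statusCode());
        }
        if (!response.body().contains("\"policyCoverage\"")) { // illustrative field
            throw new AssertionError("Claim response is missing data from the dependent policy service");
        }
        System.out.println("Claims endpoint integrated correctly with the policy service.");
    }
}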

The second challenge I mentioned, different technologies in each microservice, may not always arise, but you should be prepared. You need to understand the complete microservices architecture, including database connections and external communications with other microservices; knowing one microservice does not mean you understand the architecture. Your API testing and test data preparation may vary if two microservices in the same system use two different technologies, for example one using a commercial SQL database and another an open-source database technology. Figuring out how to design and execute these test cases is vital and will save a lot of time during test execution.

After all that, end-to-end (E2E) testing is the most critical testing type for a system with a microservices architecture. This is where you test the end-to-end journey: the business scenarios. In your test suite, identify the E2E test cases carefully and cover the majority, if not all, of the microservices in the system. Why is E2E this important? Because of the two challenges I highlighted: system integration testing covers the connections, but end-to-end testing covers the business flows, and while covering business flows you also cover the different technologies involved in each microservice and the external connections (dependent microservices and data stores).

The solution in brief:

The conclusion is that the classic thinking that integration testing is not part of QA does not apply here. You have to consider integration testing while doing system testing; we call it system integration testing. Then E2E testing is critical, mainly because, as highlighted before, each microservice has external connections. You also need to think about the different technologies involved in the architecture; there is a strong possibility you will deal with multiple technologies.

The test design phase is vital: plan your test suite carefully, and involve QA in architecture discussions during the design phase. Please also make sure to prepare a test environment that mirrors production, which is very important here as well.

The trend is towards microservices architecture, especially for web applications. So I hope you have taken in all that I have mentioned; keep this architecture in mind because, I repeat, you will come across it soon if you have not already.

Chandima Athapattu

Lead QA Engineer


Splitting Estimation

In agile testing, as we are all aware, we need to estimate the time for the items that we take into the current sprint.

Even though we usually give one figure, either in hours or in days, there are frankly many subtasks hidden in our main test task. Not only should requirement analysis, finding impacted areas, mind map creation and test case preparation be considered prior to execution, we must also think of the additional test round that might occur if any bug is reported during test execution.

The above practice will help you see what went wrong in your estimation for the current task, as well as how to estimate correctly if you get a similar task in a future sprint. Another advantage is that you can keep an eye on your estimated time before starting each piece of work, which enables you to stick to your plan to the letter.

However, during sprint planning it is difficult to give an exact total time for a task at once. As a corrective action, it is recommended to split your task into the smallest chunks. This helps you estimate much closer to the actual time spent, without underestimating or overestimating. I would like to recommend a template that you can use during your sprint planning. During the sprint, you can find out on which task you gave an incorrect estimate if it has gone beyond the estimation.

This is just a sample template, and you can change it as per your requirements and tasks. It can be customized and used by any team member who would like to estimate accurately without struggling. As a bonus, I will give you a sample estimation sheet that can be shared with your dev team.


Happy Estimations ….. 🙂

References:

Image Courtesy: unsplash.com/@bradneathery

Malkanthi Hewage

Senior QA Engineer