Common Design Patterns every Test Automation Engineer should know

In software design, design patterns provide reusable solutions to common problems. They are rarely discussed in test automation, perhaps because the topic sounds complicated. While some design patterns are sophisticated and address complex development issues, others are easy to understand and adopt, and they can significantly improve the readability and maintainability of our test automation code. In this article, you will see the Page Object Model (POM) design pattern, together with Page Factory in Selenium, explained with practical examples.

Page Object Model

POM is a design pattern that has become popular in Selenium test automation. It is widely used because it improves test maintenance and reduces code duplication. To get started with POM, let me take you through the following basic example.


What’s your first name?
Login

Last name?

Page Object

Tell us more about your family

Everyone in my family is a Page Object.

Each of us knows everything about one page.

The page I specialize in is the login page.

My brother, Result Page Object, knows about the results page.

And our sister, Detail Page Object, is the expert on the details page.


What makes you an expert in login pages?

I know a lot about the login page: the information that makes it unique, such as its title and URL.

I also know about the elements of the login page, their values and attributes. And my knowledge does not end with page information.

I can tell this information to anyone who is interested. Just the other day, someone from the unit test family asked about these details.


Awesome. What else do you know about the login page?

I know what can be done on the page.

Things like clicking links, typing in a textbox, or selecting values in a listbox. And to show what an expert I am, I can even do these things on behalf of others.

The same unit test that asked the other day about the page title and URL wanted more things from me.


Such as…

First, let me tell you his name: testThatLoginWorks. Weird name, isn’t it?

So, he wanted me to type some text in the username textbox.

Then, to type some other text in the password textbox. And finally, to click the login button. I did so many things for him.
That was a busy day for me.


So you know many things about the login page. Do you know about other pages as well?

The login page is the only page I know.

About other pages, you should talk to the other members of the Page Object family.

My brother, my sister and everybody else.

Each of them is an expert in one page as well.



Excellent. Can you summarize for our audience what you do best?

I am the number one expert on the login page.

I know all the information about the login page and its elements. I also know everything that can happen on the login page.

And I can even do these things when asked nicely. I am a Page Object.

Thank you very much!

Consider the example below of a simple Selenium page class that applies the POM concept.

import org.openqa.selenium.By;
import org.openqa.selenium.WebDriver;

public class GmailLoginPage {

    WebDriver driver;

    By userName = By.name("uid");
    By password = By.name("password");
    By login = By.name("btnLogin");

    public GmailLoginPage(WebDriver driver) {
        this.driver = driver;
    }

    // Set user name in textbox
    public void setUserName(String strUserName) {
        driver.findElement(userName).sendKeys(strUserName);
    }

    // Set password in password textbox
    public void setPassword(String strPassword) {
        driver.findElement(password).sendKeys(strPassword);
    }

    // Click on login button
    public void clickLogin() {
        driver.findElement(login).click();
    }

    /**
     * This POM method will be exposed in the test case to log in to the application
     * @param strUserName
     * @param strPassword
     */
    public void loginToGmail(String strUserName, String strPassword) {
        // Fill user name
        this.setUserName(strUserName);
        // Fill password
        this.setPassword(strPassword);
        // Click Login button
        this.clickLogin();
    }
}
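To connect this back to the interview, a test such as testThatLoginWorks only talks to the page object and never touches locators directly. Below is a minimal sketch of such a test; TestNG, ChromeDriver, the URL and the final assertion are assumptions for illustration (and the test class is assumed to live in the same package as GmailLoginPage), not part of the original example.

import org.openqa.selenium.WebDriver;
import org.openqa.selenium.chrome.ChromeDriver;
import org.testng.Assert;
import org.testng.annotations.AfterClass;
import org.testng.annotations.BeforeClass;
import org.testng.annotations.Test;

public class GmailLoginTest {

    WebDriver driver;

    @BeforeClass
    public void setUp() {
        // Assumes chromedriver is available on the system PATH
        driver = new ChromeDriver();
        driver.get("https://www.example.com/login"); // placeholder URL
    }

    @Test
    public void testThatLoginWorks() {
        // The test delegates all page knowledge and actions to the page object
        GmailLoginPage loginPage = new GmailLoginPage(driver);
        loginPage.loginToGmail("myUser", "myPassword");

        // Illustrative check only; a real test would assert on a post-login element
        Assert.assertTrue(driver.getTitle().contains("Inbox"));
    }

    @AfterClass
    public void tearDown() {
        driver.quit();
    }
}

If a locator changes, only GmailLoginPage needs to be updated; the test itself stays untouched, which is exactly the maintenance benefit listed below.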

Why should POM be used?

Here are the main advantages of using the Page Object pattern:

  • Easier maintenance
  • Better readability of scripts
  • Reduced or eliminated code duplication
  • Reusability of code
  • Reliability

Page Factory Model

Page Factory is an inbuilt, optimized way of implementing the Page Object Model in Selenium WebDriver. Page objects written with Page Factory can be used from your test classes in other Java files; the @FindBy annotation is used to locate WebElements and the initElements method to initialize them. Consider the example below of a simple Selenium page class that applies the Page Factory concept.


import org.openqa.selenium.WebDriver;
import org.openqa.selenium.WebElement;
import org.openqa.selenium.support.FindBy;
import org.openqa.selenium.support.PageFactory;

public class GmailLoginPage {

    WebDriver driver;

    // All WebElements are identified by the @FindBy annotation
    @FindBy(name = "uid")
    WebElement userName;

    @FindBy(name = "password")
    WebElement password;

    @FindBy(name = "btnLogin")
    WebElement login;

    public GmailLoginPage(WebDriver driver) {
        this.driver = driver;
        // This initElements method will create all WebElements
        PageFactory.initElements(driver, this);
    }

    // Set user name in textbox
    public void setUserName(String strUserName) {
        userName.sendKeys(strUserName);
    }

    // Set password in password textbox
    public void setPassword(String strPassword) {
        password.sendKeys(strPassword);
    }

    // Click on login button
    public void clickLogin() {
        login.click();
    }

    /**
     * This POM method will be exposed in the test case to log in to the application
     * @param strUserName
     * @param strPassword
     */
    public void loginToGmail(String strUserName, String strPassword) {
        // Fill user name
        this.setUserName(strUserName);
        // Fill password
        this.setPassword(strPassword);
        // Click Login button
        this.clickLogin();
    }
}

References

What is the page object in Selenium?

Design Patterns In Test Automation World

Page Object Model (POM) & Page Factory: Selenium WebDriver Tutorial

Dinuka Abeysinghe

Senior QA Engineer

Robotic Process Automation

Today, the world is moving forward with a variety of technologies. As a result, the future workplace will be a blend of humans and software bots, and this relationship will create many exciting opportunities for the world.

Are you ready for that challenge?

If not, now would be an excellent time to consider these exciting technologies.

RPA is one such technology worth taking on.

Robotic Process Automation (RPA) is the process of automating business operations with the help of software robots to reduce human intervention. It is designed to automate high volumes of repeatable tasks that take up a large percentage of a worker’s time.

It is a technology that allows anyone to configure computer software to emulate and integrate the actions a human performs within digital systems in order to execute a business process.

RPA allows organizations to automate at a fraction of the cost and time previously required, delivering direct profitability while improving accuracy across organizations and industries.


Getting started with RPA

What is RPA?

The process of automating business operations with the help of robots to reduce human intervention is known as Robotic Process Automation.

Why RPA?

Suppose you want to publish an article on social media every day at a specific time. You would have to do it manually or appoint an employee to do it, and both options consume time and money. Now imagine you could appoint a software robot to do that repetitive task on your behalf. That is exactly why RPA comes into the picture.


What are the things that RPA robots can do?

RPA robots are capable of mimicking human actions: they can log into applications, move files and folders, copy and paste data, fill in forms, extract data from documents, and scrape data from websites.


Difference between test automation and RPA

RPA Implementation methodology

  • Planning – Identify the processes you want to automate
  • Development – Start developing the automation workflows as per the agreed plan
  • Testing – Run testing cycles to identify and correct defects
  • Maintenance – Provide continuous support after going to production


Best Practices of RPA Implementation

  • Consider business impact before opting for the RPA process
  • Define and focus on the desired ROI
  • Focus on targeting larger groups and processes
  • Don’t forget the impact on people


Advantages of RPA

  • RPA can automate complex processes easily
  • RPA takes care of repetitive tasks and saves time and resources
  • Real-time visibility into defects
  • Software robots do not get tired

Disadvantages of RPA

  • The bot is limited by the speed of the application
  • After changing the automation scripts, the bot needs to be reconfigured


RPA Tools


We can select an RPA tool by considering the four parameters below.

  • Data
  • Interoperability
  • Type of tasks mainly performed
  • Artificial Intelligence

There are many tools used for robotic process automation; below are some of the most popular ones in today’s industry.

  • UiPath
  • Blue Prism
  • Automation Anywhere

Wrapping Up

Finally, this is a marvelous technology that will make our lives much more exciting. I have only shared the basics of RPA here; there is much more to discover and learn. Keep learning more about RPA!

Sajeeka Mayomi

QA Engineer

Investing in prevention vs investing in cure – Testing common vulnerabilities on the World Wide Web

Do you remember the last time you lost something valuable, something of sentimental worth? Believe me, that is nothing next to the sole breadwinner of a family losing their monthly income because their employer fell victim to the most critical crime of our times: CYBERCRIME, estimated to cost companies worldwide more than $6 trillion each year. If you don’t want your company to be part of that figure, it is time to increase your investment in cyber-security testing.

Today’s world is becoming increasingly networked, and almost every business has one or more web applications, which is why the scope of potential exploits is growing at a mind-boggling rate.

Would you like to see the company you work for in the news? Yes, if it is for something like free publicity that increases brand awareness and enhances brand identity and popularity; certainly not if it is in a negative context that diminishes the brand identity and causes a huge financial loss. This is why it is crucial to ensure that your applications are tested and secured.

Chances of Vulnerabilities are high and so is the cost

According to statistics on legal and compliance guides “the average consolidated total cost of a data breach is $3.8 million.”

There are various types of costs with regards to security breaches and vulnerabilities on WWW.

  • Reduction and loss of revenue: occurs due to stolen corporate data or a consequent decrease in sales volume
  • Cost of investigation: an investigation process eats up your time, energy and, most importantly, money
  • Cost of downtime: time spent fixing breaches instead of time spent on innovation

Moving on to what a vulnerability is and how it can be prevented through testing.

Vulnerability
Inability to withstand the effect of a hostile environment


Vulnerability in WWW
A weakness in a web application that allows a malicious user to undermine the application’s security objective(s)


Exploit in WWW.
Taking advantage of a web application’s flaws and carrying out unauthorized activities related to the system


Attacks

  • Active Attacks – Attempts to modify a system’s state by altering its resources and operations, for example Denial of Service (DoS) and Spoofing.
  • Passive Attacks – Attempts to learn or gather information about a system without altering its resources or operations.

Advanced Persistent Threat (APT)

A malicious user/ party gains unauthorized access to a system and unauthorized activities are carried out
for an extended period of time without being detected.

  • Politically or Commercially motivated
  • Stay undetected
  • Longer period
  • Most often data theft


Warning Signs

  • Abnormal internet bandwidth usage
  • Abnormal patterns in network traffic
  • Detection of Trojans and other malware
  • Detection of aggregated data bundles


SQL Injection

  • A malicious user passes (injects) an SQL script, or part of one, through a web application’s input field
  • This alters the developer-intended behavior of the SQL query
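To make the idea concrete, here is a minimal sketch in Java/JDBC. The table and column names are hypothetical; the point is the contrast between string concatenation and a parameterized query, which is the standard defence testers should look for.

import java.sql.Connection;
import java.sql.PreparedStatement;
import java.sql.ResultSet;
import java.sql.SQLException;
import java.sql.Statement;

public class LoginDao {

    // VULNERABLE: user input is concatenated straight into the SQL string.
    // Input such as  ' OR '1'='1  changes the query the developer intended.
    public boolean loginUnsafe(Connection conn, String user, String pass) throws SQLException {
        String sql = "SELECT id FROM users WHERE username = '" + user
                + "' AND password = '" + pass + "'";
        try (Statement stmt = conn.createStatement();
             ResultSet rs = stmt.executeQuery(sql)) {
            return rs.next();
        }
    }

    // SAFER: a parameterized query keeps the input as data, not as executable SQL.
    public boolean loginSafe(Connection conn, String user, String pass) throws SQLException {
        String sql = "SELECT id FROM users WHERE username = ? AND password = ?";
        try (PreparedStatement ps = conn.prepareStatement(sql)) {
            ps.setString(1, user);
            ps.setString(2, pass);
            try (ResultSet rs = ps.executeQuery()) {
                return rs.next();
            }
        }
    }
}

A quick manual security check is to feed inputs such as ' OR '1'='1 into every input field and confirm that the application rejects them or treats them as plain data.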

Importance of Security Testing in order to minimize the risk

Software security testing helps identify implementation errors that were not discovered during code reviews, while risk analysis, especially at the design level, can help us identify potential security problems and their impact.

It is a well-known fact that the earlier a defect is detected, the smaller its impact on the project. Therefore, paying close attention to security testing, and thereby reducing the risk the project faces, is immensely important.

References

Different types of security tests

Importance of Security Testing

SQL Injections

Verushka Thilakarathne

Senior QA Engineer

Take Maximum Out of Quality Metrics

Have you ever felt that calculating and presenting quality metrics is a useless, time-consuming, cumbersome task? If your answer is yes, then you are probably not using the correct quality metrics to measure the quality of your application or release build, or you are not aware of how quality metrics can be used to improve quality and support decision-making.

This article will provide some insights into the importance of carefully selected and well-presented quality metrics, and reading it should motivate you to use quality metrics smartly.


Why ‘Quality Metrics’ Matter?

A quantitative approach to measuring quality, with proper guidelines, is essential for comparing the quality of two builds or two projects, because quality is subjective and differs from person to person. Test execution in isolation cannot measure or improve the quality of software. Carefully selected quality metrics need to be used to evaluate quality by identifying the initial level of quality, the current quality status, deviations and quality achievements.

Apart from that, software companies nowadays tend to push their software to market as fast as they can to gain a competitive advantage. This behavior increases the risk of putting a highly defective release into the production environment. Quality metrics can significantly help control this, as quality cannot be compromised at any cost.


Following are some of the usages of quality metrics.

  • Influence the decision-making
  • Set quality goals
  • Evaluate the testing effort
  • Report the progress of the project
  • Communicate an issue
  • Foresee the risks using the deviations from the quality goals and take preventive actions


How to choose the right quality metrics?

Selecting the right quality metrics is crucial to getting the maximum out of them. Context, purpose/usage, traceability and audience are four key factors when selecting a metric. Beyond that, a common interpretation, actionability, accessibility and transparency are a few other qualities of a good metric.

Link : https://www.juiceanalytics.com/writing/choosing-right-metric

Metrics that are too vague do not provide any actionable insight, whereas metrics that are too specific are not comprehensive enough to represent the actual difference made to overall product quality. To derive the most benefit from metrics, it is important to keep them simple and relevant.


How quality metrics can be used to improve the quality?

Evaluating the quality of the software under development is the primary objective of using quality metrics. The quality of the system under test can be improved by identifying issues, analyzing progress and trends, and taking corrective actions efficiently and effectively before it is too late.

Apart from that, evaluating the current testing process and strategy is one of the key benefits of using quality metrics. Metrics related to testing effort, test effectiveness, test coverage, test execution rate, defect finding rate, etc. can provide valuable information about test progress and the effectiveness of current testing methods. This information helps the quality assurance team make relevant changes to the current testing process.

Furthermore, quality metrics can also be used to challenge the development team by setting high quality targets (e.g. a Defect Severity Index below 1.5 to pass the build), which can ultimately result in higher process and product quality.
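As an illustration of such a target, here is a minimal sketch of one common formulation of the Defect Severity Index; the severity weights used here (Critical = 4 down to Low = 1) are an assumption and should be replaced by your team's own scale.

import java.util.Map;

public class DefectSeverityIndex {

    // DSI = sum(severity weight x defect count) / total defects (one common formulation)
    public static double calculate(Map<String, Integer> defectsBySeverity) {
        Map<String, Integer> weights = Map.of("Critical", 4, "High", 3, "Medium", 2, "Low", 1);

        int weightedSum = 0;
        int totalDefects = 0;
        for (Map.Entry<String, Integer> entry : defectsBySeverity.entrySet()) {
            weightedSum += weights.getOrDefault(entry.getKey(), 0) * entry.getValue();
            totalDefects += entry.getValue();
        }
        return totalDefects == 0 ? 0.0 : (double) weightedSum / totalDefects;
    }

    public static void main(String[] args) {
        // 1 Critical, 2 High, 5 Medium, 8 Low -> (4 + 6 + 10 + 8) / 16 = 1.75
        double dsi = calculate(Map.of("Critical", 1, "High", 2, "Medium", 5, "Low", 8));
        System.out.printf("Defect Severity Index = %.2f%n", dsi); // 1.75 > 1.5, so this build misses the target
    }
}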


How can Quality Metrics be utilized to make managerial-level decisions?

Using proper quality metrics increases confidence when making managerial-level decisions.
Given below are some decisions that can be made by observing quality metrics.

  1. Development process improvements related decisions as quality metrics provide complete visibility of the effectiveness of software development efforts.
  2. QA process and strategy related decisions by observing how QA efforts add value to deliver high-quality software.
  3. Resource utilization related decisions
  4. Technology change related decisions by observing defect related metrics such as defect age, defect removal efficiency etc.

Accurate and timely details, presented effectively, are vital for management decisions. A central dashboard with up-to-date quality metrics adds great value to the decision-making process, because metrics viewed together provide more valuable insights than any single metric used on its own.

A Build Quality Index is one such dashboard, on which quality-related metrics such as Defect Density, Defect Severity Index, Defect Status Distribution (DSD) and Test Pass Ratio (TPR) are presented as trend charts for each build.

In conclusion, software quality metrics provide immense value in increasing the quality of an application or release build and in the overall decision-making process. So let’s pay more attention to calculating and presenting software quality metrics to get the maximum benefit out of them.

Nuwani Navodya

Senior QA Engineer

To build or to buy a Test Automation Tool

    Should I build or should I buy?

    As continuous integration and delivery shift the momentum of software delivery into a higher gear, testing teams need to scale up their test strategies. We talk about faster, more frequent deliveries with higher quality, and we stress tools that enable efficient and effective software delivery. The quality of these releases therefore plays a major role in meeting delivery expectations, which is why testing has evolved into an automated execution process.

    The history of automated software testing has followed the evolution of software development. The introduction of GUI (Graphical User Interface) applications paved the way for automation tools with record-and-playback functions. Over the years, we as quality assurance engineers have invested our time and resources in learning sophisticated enterprise test automation tools built by well-known solution providers such as HP, Microsoft, Atlassian and many more.

    However, there are many instances where these tools have not provided a blanket solution for our automated testing needs, and integrating and adjusting an off-the-shelf automated testing tool can be even more complicated and time consuming. This is when testers should be driven to build their own custom automation tool. This article will help you decide whether to write your own tool for automation and, if you choose to do so, which key parameters to consider during implementation.

    Analyze the pros and cons

    As an initial step in deciding whether to build your own tool, it is advisable to weigh the pros and
    cons. Below are some of the advantages of writing your own test tool:

    • Can acquire more control over the test design and architecture.
    • Independence in development and maintenance.
    • Flexibility of adding or removing features as per requirement.
    • Easy to apply coding, testing process and delivery standards in the framework.
    • Creates a platform to market the framework as an enterprise tool.
    • Improve and utilize the knowledge and capacity of the QA Engineers within a team.

    Although investing in a framework of your own seems appealing given the above advantages,
    this approach has negative impacts as well. The disadvantages can be listed as
    follows:

    • Consumes time, development resources and other costs.
    • Requires a thorough study and evaluation to confirm that such a tool is not already on the market. No need to reinvent the wheel!
    • Requires highly skilled automation engineers.
    • Uncertainty of the return on investment.
    • Requires a strong proof of concept to impress investors.

    Incorporate standards and best practices

    After weighing the pros and cons of building your own test automation tool, if you decide to go ahead and write your own, there are a few concepts and parameters to consider. It is important to note here that a test automation framework can be described as a collection of standards and processes that facilitate component interactions and integrations, on top of which test scripts are executed. In an agile development environment, writing a framework that incorporates best practices is a challenge. The following key concepts and standards help in building a powerful test automation framework:

    1. Separate the tests from the framework
      To promote reusability of tests and easier maintainability, it is advised to separate the tests from the framework: your test scripts live in one package while your framework-related code lives in another.
    2. Separate the tests from test data
      This separation ensures that no modifications to the tests are required when the data for the input
      parameters needs to change between test runs (a minimal sketch of this appears after the list).
    3. Look for other mechanisms other than UI to verify tests
      This means that the assertions you use in your tests should not only depend on the UI. For
      example, you can verify if a test is passed by a web service call. This will largely cut
      down the UI control waiting times and will eventually speed up test execution.
    4. Use libraries
      For maintainability purposes, it is advised to separate out reusable classes, external connections,
      common components and generic functions into a library. Test script writers can then invoke these libraries in their code.
    5. Follow coding standards
      The automation framework should follow coding standards to maintain consistency, enforce security measures and help coordination with the development team. This will also greatly improve the knowledge of the QA engineers involved in developing the test framework.
    6. Maintain versioning
      This will largely help monitor the modifications done to the framework over time. Versioning
      will also play a major role during licensing and deployments, if the tool is eventually
      marketed as a commercial product.
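    As a minimal sketch of point 2 above, test data can live in a simple properties file on the test classpath while the test reads it through a small helper; the file name and keys here are hypothetical.

    import java.io.IOException;
    import java.io.InputStream;
    import java.util.Properties;

    public class TestData {

        private final Properties props = new Properties();

        // Loads e.g. login.properties (username=..., password=...) from the test classpath
        public TestData(String resourceName) throws IOException {
            try (InputStream in = getClass().getClassLoader().getResourceAsStream(resourceName)) {
                if (in == null) {
                    throw new IOException("Test data file not found: " + resourceName);
                }
                props.load(in);
            }
        }

        public String get(String key) {
            return props.getProperty(key);
        }
    }

    A test would then call new TestData("login.properties").get("username") instead of hard-coding the value, so changing the input data never requires touching the test itself.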

    If you have very specific test automation requirements and you cannot find a suitable automated testing tool in the market to facilitate your needs, the answer is Yes! Consider the pros and cons and build your own tool. The value you are adding to your organization and the benefits you are earning for your personal development will be immeasurable!

    Menaka de Silva 

    Lead QA Engineer

    Exploring BDD with Specflow

    Introduction on BDD

    One of the biggest challenges faced by development teams today is understanding and implementing the exact requirements of the product owner. Dan North’s amazing software development method BDD (Behaviour Driven Development) reduces the requirements knowledge gap between product owners and developers, bridges the domain knowledge gap between developers and testers, and reduces the cost of translation and rework. Through this development process, the business gains a more accurate product rather than one filled with messy code and bugs.

    BDD provides the following benefits:

    • The product owner can obtain a product matching the exact requirements.
    • The product owner will provide the requirement with examples that make it easy to develop and test.
    • Minimizes the domain and technical knowledge gap between testers and developers.
    • Tests/Scenarios are written in a human-readable format. Therefore, it is easy to read and write tests.
    • Tests/Scenarios are written according to the behavior of the application and divided into features.
    • Tests will verify the exact business requirements implemented.
    • Easy to implement test automation.

    The test automation industry gains a lot of advantages through BDD. Several testing tools were introduced along with it; some of the famous ones are JBehave and Cucumber. SpecFlow is another, and it is currently gaining the attention of many automation testers. In all of these frameworks, Gherkin syntax is used to write the feature files.

    Getting started with Specflow

    What is Specflow?

    SpecFlow is often called Cucumber for .NET. Like Cucumber, it allows users to write feature files/user stories in human-readable Gherkin syntax. It is an open-source tool, so all technology seekers are welcome to give it a try. Visual Studio is recommended for use along with it since the tool is based on .NET, and the VS debugger is an added advantage.

    Set up in 5 mins with Visual Studio

    Pre-Conditions for setup

    1. Install Visual Studio (Community Edition will be enough/ Latest Version)
    2. .NET framework should be installed (Latest Version)

    Installation steps

    1. Navigate to Manage Extensions in order to install Specflow
    2. Install Specflow (Configure according to the Visual Studio version installed)

    Tips and Tricks to write handy feature files

    Feature files are considered the most important artifact in BDD. Scenarios should be simple and should not contain any complex steps. Regardless of domain expertise, any individual, from a product owner to a developer, should be able to understand any scenario. Below are some tips and tricks which I follow when writing feature files:

    • Use clear simple sentences when writing feature files
    • Include repetitive testing steps in the Background block
    • Use parameters rather than hard-coding values inside step files
    • Use data tables to give multiple values to the parameters
    • Use Scenario Outline to run the same test repeatedly with different parameters
    • Use tags to group test scenarios according to the environment or the test suite
    • Don’t include all the scenarios in one feature file. Always divide it and include a set of related scenarios in one feature file.

    Note: Feature files can be written in many natural languages other than English.

    The structure of a feature file is as follows.

    The main structure is divided into Feature, Background, and Scenarios. There are several keywords with different meanings, explained below:

    1. Feature – Describes the specific functionality/feature of the application
    2. Background – This will contain a collection of pre-conditions to be run before each scenario in the file
    3. Given – Describes pre-condition steps
    4. Scenario – Explains the identified test scenario
    5. And – Describes conditions
    6. When – Describes actions
    7. Then – Describes expected results

    Example of Scenario Outline
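    The feature, steps and data below are purely illustrative, but they show how a Scenario Outline with an Examples table runs the same scenario once per data row:

    Feature: Login
      As a registered user I want to log in so that I can access my account

      Background:
        Given the login page is open

      Scenario Outline: Login attempts with different credentials
        When the user enters "<username>" and "<password>"
        And the user clicks the login button
        Then the message "<message>" is displayed

        Examples:
          | username  | password  | message             |
          | validUser | validPass | Welcome back        |
          | validUser | wrongPass | Invalid credentials |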

    Project Structure

    There is no single recommended project structure; however, the project should be manageable and easy to rework. Below is a sample project structure.

    The project can be divided into Page Objects, a Specifications folder containing the feature files, and another folder containing the step definition files. The App.config file defines the Chrome driver and the unit test provider for the project. RunSettings files can be used to store different URLs, usernames and passwords and reuse them when writing tests.

    Writing your first test

    Follow the steps in the video to write your first test.

    Finally, this is a wonderful technology which will make our daily work much more exciting. Keep learning more about Specflow and BDD. There is more to discover and learn. As Dan North says “BDD is not about conversations, it’s about shipping working, tested software that matters, using conversations.”

    Sajitha Dissanayake

    Sajitha Dissanayake is a Trainee Associate QA Engineer  at Zone24x7

    How would you test a microservices architecture?


    First of all, have you ever thought about this topic as someone who ensures the quality of the product and worries about releasing it almost bug-free to UAT? Yes, I am talking about testers, who turn everything upside down when it comes to test planning in a new testing engagement.

    “What is the architecture I am looking at?”
    “It’s a microservices architecture… “
    “So what it makes sense to me as a tester? Does that matter?”


    If you have come across the conversation above, you should read this; otherwise just don’t. But you might come across it soon.

    Identifying a microservices architecture is easy, but testing it is more complex than you think. Testing a microservices architecture is still new for many testers in the industry even today. Why? Because even now, new development work mostly starts in a monolithic architecture. I am not an architect but a professional tester, so I’ll explain this in a way all testers will understand.

    Why are architects selecting microservices architecture?

    Monolithic architecture is well known and widespread in the industry. It is simply the whole codebase in one piece: the whole system is developed as a single unit and deployed to production. It is self-contained and internally interconnected. This is good, sometimes excellent, for small systems and for startups. But this era is all about web applications, and they evolve rapidly, so many of us are testing, or have tested, systems with monolithic architectures. Companies like Amazon, eBay and Google have all shifted to a microservices architecture. When your system expands, an old-fashioned monolithic architecture cannot withstand the load. You cannot plug in new technologies, and you cannot make full use of it in your CI/CD programs, because even a small change or update means deploying the whole system to production, which, as you know, is hectic to do continuously.

    What is microservices architecture?

    The solution is indeed microservices. What are they? Your system is broken down into different services instead of keeping everything together at the code level. For example, consider an insurance system where claims, policy coverage, alerts and notifications, and new business and renewals are split into different services. Within a single service, development still works the way it would in a monolithic architecture; the difference is that there are connections to other microservices, so designers need to plan those carefully. With this approach, the microservices do not all need to stick to one technology or database, and developers have room to design their services separately, even in different technologies. But don’t forget, this is still one system. I am not going to explain further, as my objective is to showcase the testing side of all this.

    What are the testing challenges?

    So what are the challenges? Why does testing a system with a microservices architecture become more complex during test case design and execution? There are two reasons:

    • Interconnections between microservices.
    • Different technologies each microservice may use.

    That’s all. Connections between microservices are something we always have to deal with in a microservices architecture. The second point, the different technologies each microservice uses, does not always apply; there can be a system built on a microservices architecture that uses the same technologies across all of its functionality. So testers need to understand the architecture clearly.

    Web services and microservices. Two different things

    Now let’s focus on the topic: how would you test a system with such an architecture? Some of you reading this may be confusing microservices with web services, so let me clarify the latter. Web services describe how communication between two devices or applications is carried out over the World Wide Web (WWW). There are two architectural styles: Simple Object Access Protocol (SOAP) and Representational State Transfer (REST). A REST API takes advantage of URL exposure, for example @Path("WeatherService"), while a SOAP API uses service interfaces such as @WebService.

    So now you can see that a microservices architecture is one thing, and web services and API testing are another. The connections I am talking about are the ones built inside the architecture: the communication among the microservices.

    How to overcome the challenges in your strategy?

    To address the first challenge mentioned earlier (the connections that exist between microservices), integration testing is vital. I should highlight that it is very important and plays a big role in your overall test suite: you should find a way to test the connections, either with a manual (front-end) test or an API test. Understanding the dependencies is crucial, and for integration testing I would say API testing is more efficient. Here too you should distinguish between two things: microservices and APIs (endpoints).

    A particular microservice may have one or more endpoints, based on the design, or it may have no endpoints at all; such a microservice will depend on another microservice, which in turn should expose one or more endpoints. Sometimes there can even be a dependency chain. When planning your API testing you should know each microservice’s tasks and its dependencies (if any). This is something you may not face in a monolithic architecture, where such a design may not exist. When we say “integration testing”, different people use different terms, but what you need to do is cover the integration among the microservices. We can call this system integration testing as well. Whatever the name, make sure you achieve the coverage.
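    As a minimal sketch of such an API-level integration check, the test below calls an endpoint of a dependent microservice directly and verifies the contract the consuming service relies on. The insurance-style service name, URL and JSON field are assumptions for illustration; plain Java 11 HttpClient is used, though any HTTP or API-testing library would work.

    import java.net.URI;
    import java.net.http.HttpClient;
    import java.net.http.HttpRequest;
    import java.net.http.HttpResponse;

    public class PolicyServiceContractCheck {

        public static void main(String[] args) throws Exception {
            HttpClient client = HttpClient.newHttpClient();

            // Hypothetical endpoint of the policy microservice that the claims
            // microservice depends on for coverage information
            HttpRequest request = HttpRequest.newBuilder()
                    .uri(URI.create("http://localhost:8080/policy-service/policies/123/coverage"))
                    .GET()
                    .build();

            HttpResponse<String> response = client.send(request, HttpResponse.BodyHandlers.ofString());

            // Verify the parts of the contract the consumer relies on:
            // the status code and a field expected in the payload
            if (response.statusCode() != 200 || !response.body().contains("\"policyId\"")) {
                throw new AssertionError("Policy service contract broken, status: " + response.statusCode());
            }
            System.out.println("Policy service coverage endpoint honours the expected contract");
        }
    }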

    The second challenge I mentioned, the different technologies each microservice uses, may not always arise, but you should be prepared for it. You need to understand the complete microservices architecture, the database connections and the external communications with other microservices; knowing one microservice does not mean you understand the architecture. Your API testing and test data preparation may vary if two microservices in the same system use two different technologies, for example if one microservice uses SQL and another uses an open-source database technology. Figuring out in advance how to design and execute these test cases is vital and will save a lot of time during test execution.

    After all that, end-to-end (E2E) testing is the most critical testing type when you are testing a system with a microservices architecture. This is where you test the end-to-end journey: the business scenarios. In your test suite, you need to identify the E2E test cases carefully and cover the majority, if not all, of the microservices in the system. Why is E2E this important? Because of the two challenges I highlighted: you do system integration testing to cover the connections, but end-to-end testing is where you cover the business flows, and while covering business flows you also exercise the different technologies involved in each microservice and the external connections (dependent microservices and data stores).

    The solution in brief,

    The conclusion is that the classic thinking that integration testing is not part of QA does not really apply here. You have to consider integration testing while doing system testing; we call it system integration testing. Then E2E testing is critical. As highlighted before, the main reason is that microservices have external connections, and you need to think about the different technologies involved in the architecture as well; there is a strong possibility that you will deal with multiple technologies.

    The test design phase is vital: plan your test suite carefully, and involve QA in architecture discussions during the design phase. Also make sure to prepare a test environment that mirrors production, which is very important here as well.

    The trend is towards microservices architecture, especially when we are talking about web applications. So I hope you have taken in all that I have mentioned, and keep this architecture in mind because, as I said, you will come across it soon if you have not already…

    Chandima Athapattu

    Lead QA Engineer

    Splitting Estimation

    In agile testing, as all are aware, we need to estimate the time for the items that we take to the current sprint.

    Even though we usually give one figure, either in hours or in days, there are frankly
    many sub-tasks hidden in our main test task. So not only should requirement analysis, finding impacted areas, mind map creation and test case preparation be considered prior to execution, we must also think of the extra test round that might be needed if any bug is reported during test execution.

    The above practice will help you identify what went wrong in your estimation for the current task, as well as how you can estimate correctly if you get a similar task in a future sprint. Another advantage is that you can focus more on your estimated time before you start each piece of work, which enables you to stick to your plan to the letter.

    However, during sprint planning it is difficult to give an exact total time for a task at once. As a corrective action, it is recommended to split your task into the smallest possible chunks. This helps you estimate closer to the actual time spent, without underestimating or overestimating. I would like to recommend a template that you can use during your sprint planning. During the sprint, you can then find out for which task you have given an incorrect estimation if it goes beyond the estimate.

    This is just a sample template and you can change it as per your requirement and tasks. This template can be customized and used by any team member who likes to estimate accurately without struggling. As a bonus point, I will give you a sample estimation sheet which can be shared with your Dev Team.


    Happy Estimations ….. 🙂

    References:

    Image Courtesy: unsplash.com/@bradneathery

    Malkanthi Hewage

    Senior QA Engineer 

    Data Science Quality Assurance Kickoff

    What is Data Science? In simple terms, you could say it is the “study of data”. Nowadays Data Science has become a prominent subject, since it plays a major role in providing industries with predictions and analysis that come in handy when making business decisions.

    Some actual use cases of Data Science include internet search, recommendation systems, targeted advertising, image recognition, speech recognition, gaming, airline route planning, and fraud and risk detection, and there are many more applications that essentially run on top of the concept of “Data Science”.

    Now let’s jump to the point where quality assurance comes into play. Assuring the quality of Data Science solutions is not going to be an easy task, since the testing strategy can vary based on the solution provided. Unfortunately, we do not have many resources to refer to when it comes to Quality Assurance of Data Science. With the information available, we can split the process of Data Science application testing into three main components, as below:

    Let’s take them one by one and clarify each component further.

    Data Validation:

    This component validates that the data is accurate and not corrupted. To achieve data validation, below are a few techniques you could follow:

    • Validating the schema of the record
    • Validating the Data Source
    • Validating the Data format
    • Validate duplicate records
    • Verify Non-existence of empty records
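    The checks above can be sketched in a few lines of code; the three-column schema, the id in the first column and the exceptions thrown here are illustrative assumptions, not a real pipeline.

    import java.util.HashSet;
    import java.util.List;
    import java.util.Set;

    public class DataValidation {

        // Minimal record checks: schema (column count), no empty ids, no duplicate ids
        public static void validate(List<String[]> records) {
            Set<String> seenIds = new HashSet<>();
            for (String[] record : records) {
                if (record.length != 3) {
                    throw new IllegalStateException("Schema mismatch: expected 3 columns");
                }
                String id = record[0];
                if (id == null || id.isBlank()) {
                    throw new IllegalStateException("Empty record id found");
                }
                if (!seenIds.add(id)) {
                    throw new IllegalStateException("Duplicate record id: " + id);
                }
            }
        }
    }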

    Process Validation:

    This component can simply be called business logic validation, where you verify the business
    logic within the system node by node and then verify it across different nodes.

    Output Validation:

    In this component, you validate the end result, which is the processed data, against the expected result. Multiple techniques can be used for this; the Jaccard index, Jaccard distance, precision and recall, and area under the curve are a few of the basic ones.
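    As a small worked example, the Jaccard index is the size of the intersection of the expected and actual result sets divided by the size of their union (and the Jaccard distance is simply one minus that value):

    import java.util.HashSet;
    import java.util.Set;

    public class OutputValidation {

        // Jaccard index = |expected ∩ actual| / |expected ∪ actual|
        public static double jaccardIndex(Set<String> expected, Set<String> actual) {
            Set<String> intersection = new HashSet<>(expected);
            intersection.retainAll(actual);
            Set<String> union = new HashSet<>(expected);
            union.addAll(actual);
            return union.isEmpty() ? 1.0 : (double) intersection.size() / union.size();
        }

        public static void main(String[] args) {
            Set<String> expected = Set.of("A", "B", "C", "D");
            Set<String> actual = Set.of("A", "B", "C", "E");
            // intersection {A, B, C} = 3, union {A, B, C, D, E} = 5, so the index is 0.6
            System.out.println("Jaccard index: " + jaccardIndex(expected, actual));
        }
    }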

    As a Quality Assurance engineer and a beginner in Data Science field what is our commitment?

    • Keep in mind that the testing life cycle and the QA phases can be applied as they are to Data Science test applications. QA practices and standards stay the same no matter what domain, field or application it is.
    • Get the requirement and the proposed solution from the responsible party and analyze every point of the design where you could get involved in testing. These testing points could fall under any of the components: Data Validation, Process Validation or Output Validation.
    • Come up with your own testing approach for those identified points. Decide whether your approach should be manual or automated, and train yourself with the required knowledge materials accordingly. For example, if your testing must be automated and involves scripting, gain expertise in the relevant scripting language.

    It is true that we lack resources related to Data Science Quality Assurance, and Data Science testing is going to be a complicated task since your testing strategy will vary with each requirement. But once you get started, you can always begin publishing your own resources for Data Science QA, and adapting to the changing nature of Data Science will become a snap.

    Happy Data Science Testing!!!

    References:

    1. https://www.analyticsvidhya.com/blog/2015/09/applications-data-science/
    2. https://www.cabotsolutions.com/2017/12/big-data-testing-how-to-overcome-quality-challenges

    Image Courtesy: freepik.com/@starline

    Elaine Nanayakkara

    Senior QA Engineer 

    Go mouse-less for an hour – Insight into Accessibility Testing

    The computer mouse has become a device that plays a significant role in the IT industry, and it truly does wonders to make our work easier. Yet I’d like to challenge you all to use only the keyboard for navigation for an hour and surf your web application or system, to get a completely new level of experience.

    Now it’s time to think about an answer for the following question.

    Did you come across any obstacles or difficulties when using your application without a mouse?

    This is one instance where Accessibility Testing comes into the picture.

    Image 01: Considering the user’s perspective of Accessibility

    Why is Accessibility Testing Important?

    It is important to ensure that our product innovations are delivered to everybody who consumes them, including people with special needs. According to the World Health Organization, over a billion people (about 15% of the world’s population) have some form of disability. Did you ever assume the percentage was this high? These people might be users of your application as well.

    When technology empowers individuals with special needs, it has a great ability to improve their lives in various ways. By performing Accessibility Testing, we can ensure that everyone, regardless of age or ability, can use an application, so that no one is left out of this digital era. This is why Accessibility Testing is needed!

    Further, Accessibility Testing does not only validate usability aspects; it also ensures that an application can be accessed by people with a range of disabilities, including visual, auditory, speech, physical, cognitive, learning, language and neurological disabilities. Besides, being in the IT industry, it is our responsibility to implement software products that go beyond the traditional standard rules, so that they can be accessed by anyone and everyone!

    Image 02 : Accessibility Testing

    How to Perform Accessibility Testing?

    Accessibility Testing can be done using both manual and automated testing methods. However, an effective accessibility test involves in-depth manual inspection of individual pages and all of their functionality. There are various online tools that can be used for Accessibility Testing, and assistive technologies include speech recognition software, screen readers, screen magnification software, braille embossers, voice recorders, special keyboards and many more. By using assistive technology, we can help people with disabilities perform activities they were previously unable to do or had difficulty accomplishing.

    Test Strategy and Approach:

    At the beginning of Accessibility Testing, it is normal to find it a struggle, because the team members are required to use only the keyboard. This can be tricky to start with, as we are so used to relying on the mouse. However, when you start to work with screen readers, that experience itself feels like a whole other world.

    Accessibility Testing can include:

    • Reviewing the application structure
      Example – Reviewing header navigation & header order
    • Testing keyboard compatibility
      Example – Verifying tab order index
    • Testing media
      Example – Audio/video and captions
    • Testing assistive technology devices
      Example – Using screen readers
    • Real user experience monitoring
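    For the keyboard compatibility item above, even a rough Selenium sketch like the one below (the URL and the number of steps are placeholders) lets a reviewer record which element receives focus on each TAB press and compare that order against the expected reading order:

    import org.openqa.selenium.By;
    import org.openqa.selenium.Keys;
    import org.openqa.selenium.WebDriver;
    import org.openqa.selenium.WebElement;
    import org.openqa.selenium.chrome.ChromeDriver;

    public class TabOrderWalk {

        public static void main(String[] args) {
            WebDriver driver = new ChromeDriver(); // assumes chromedriver is on the PATH
            driver.get("https://www.example.com");  // placeholder URL

            // Press TAB repeatedly and log which element receives focus at each step
            WebElement focused = driver.findElement(By.tagName("body"));
            for (int step = 1; step <= 10; step++) {
                focused.sendKeys(Keys.TAB);
                focused = driver.switchTo().activeElement();
                System.out.println(step + ": <" + focused.getTagName() + "> id=" + focused.getAttribute("id"));
            }
            driver.quit();
        }
    }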

    Legal Compliance

    An interesting fact is that governments around the world have introduced legislation related to accessibility, requiring software products to be accessible to people with special needs. Therefore, Accessibility Testing is important to ensure legal compliance and avoid potential lawsuits. At the moment you might feel that making your product accessible isn’t a necessity, yet it surely will be in the future, and violation fines can really damage your company financially.

    Web Content Accessibility Guidelines (WCAG)

    WCAG contains a series of guidelines to improve web accessibility. Whether the testing is automated or manual, compliance should be checked against accessibility guidelines; this is an essential part of working with accessibility. There are several accessibility standards, such as:

    1. W3C’s WCAG 1.0
    2. W3C’s WCAG 2.0
    3. Section 508 etc.

    Of all these guidelines, WCAG 2.0 is the one accepted worldwide. These standards provide information on how to make a web application or system accessible. Design, development and testing should ideally comply with these accessibility guidelines.

    WCAG is based on four principles

    1. Principle 01 : Perceivable
      Information and user interface elements should be delivered to users in a way that they can process and understand.

      Example: By providing text alternatives for non-text content, it can be easily transformed into other forms of communication such as large print, braille, speech, symbols or simpler language.

    2. Principle 02 : Operable
      Users should be able to successfully use buttons, controls, navigation & other user interface components.

      Example: Keyboard focus should be implemented in order for a keyboard user to follow the links to social media sites of an application.


    3. Principle 03 : Understandable
      Information given in the user interface and how to operate the system should be easily understandable.

      Example: Required fields in a registration form could be presented with an asterisk mark. If there is no such indication, user will not understand the reason why the form cannot be submitted.


    4. Principle 04 : Robust
      Applications should be designed to operate across a wide variety of technologies. Customers should have the flexibility to choose the technology they prefer to interact with the application and its content.

      Example: An application requires a specific browser version to make use of its functions. If a user does not have that particular browser version installed, there is no way for that user to experience the features of the application.


    Myths

    There are several myths around accessibility. Establishing an accessibility culture in project teams might be challenging as it requires debunking these myths. Let’s have a look at some of those misconceptions.

    MYTH 01: A small percentage of your audience will need an accessible web application.

    REALITY: Not even close! Accessible applications can be useful and can benefit users more than you may realize. A user does not necessarily need to have a disability to benefit from accessibility.

    Example of the benefited individuals:

    • Elderly population
    • Users with low vision or hearing
    • Users with cognitive limitations
    • Users with neurological disabilities
    • Users with situational or temporary disabilities
    • Users with physical disabilities

    MYTH 02: Web accessibility is time-consuming, expensive and extra effort is needed.

    REALITY: Not really! It is important to focus on accessibility implementation and testing from the very beginning of the development life cycle. It should be initiated during the Design & Development stages. That way you can save time, effort, and cost in the long run. However, if you wait to test accessibility at the end, major rewrites could be needed from the design itself to make the site accessible. So being a little proactive about accessibility from the beginning would benefit the whole project.

    MYTH 03: Accessibility will be guaranteed just by using automated testing tools

    REALITY: Apparently not! Accessibility testing tools can be very useful and helpful, yet these automated tools cannot replace manual testing. Automated tools are effective only when coupled with manual testing, as many WCAG checklist points cannot be tested by tools alone and require human judgment. Hence we cannot rely purely on tools; manual testing, in the right balance, is also required to make your application truly accessible.

    Example: A tool has the capability to test if an image has an ALT text description. Yet, the tool is not able to verify whether the given description is meaningful.

    MYTH 04: Accessibility testing cannot be automated.

    REALITY: Truth is that running a browser extension or integrating an accessibility tool in an automated test is a simple step. Frameworks such as Appium & Selenium can be easily used for accessibility test automation with easy maintenance as they are platform independent. Even though manual testing is essential for accessibility, a certain portion of the test scenarios could be automated. At the same time there could be certain accessibility test scenarios which cannot be automated at all. Therefore, identifying the test automation scope correctly is important.

    MYTH 05: Only developers are responsible for ensuring Accessibility.

    REALITY: Ensuring accessibility is the responsibility of every single person in your team, not just the developers or QA. It should be a collaborative activity involving Project Managers, Business Analysts, UI/UX Engineers, Developers and QA. It will be more efficient and compliant if accessibility is built into the team culture.

    Wrapping Up

    Now is the best time to debunk all the accessibility myths you have held all these years and start applying accessibility to your applications today. It will create a wonderful experience for your team and your customers.

    As Google says, “Everyone should be able to access and enjoy the web. We’re committed to making that a reality.” Point well taken! Good accessibility is directly proportional to a delightful customer experience. It is our responsibility to make it a reality!


    References

    Disability and health :
    https://www.who.int/news-room/fact-sheets/detail/disability-and-health
    Web Content Accessibility Guidelines :
    https://en.m.wikipedia.org/wiki/Web_Content_Accessibility_Guidelines
    Accessibility Principles:
    https://www.w3.org/WAI/fundamentals/accessibility-principles/
    Google Accessibility :
    https://www.google.com/accessibility/
    Image 01 Source:
    https://dynomapper.com/blog/27-accessibility-testing/275accessibility-testing-considering-the-user-s-perspective-of-accessibility
    Image 02 Source:
    https://dynomapper.com/blog/27-accessibility-testing/275-accessibility-testing-considering-the-user-s-perspective-of-accessibility

    Lalendri Gamalathge

    Lalendri Gamalathge is a Senior QA Engineer  at Zone24x7