Codeless Test Automation with TestProject

Looking at present technological demand, the market for newly developed software is flourishing, and most software development companies strive to keep up. As a result, development is carried out using agile software development methodologies and other strategies that speed up the process. To assure the quality of a software product at this pace, automated testing came into play. Yet even though automated testing eliminates many manual testing scenarios, it is not always practical to automate large, complex test suites: the time and effort required to automate them fails to keep up with market demand. This has led testers to transition to codeless automation testing.

What is TestProject and TestProject Agent? 

If you are in the market for a cloud-based test automation framework that lets you create automated tests for web, mobile, and API with little to no coding skill, TestProject is a strong choice. TestProject is a community-powered, end-to-end test automation platform that offers robust test recording capabilities. It fully supports Selenium and Appium, so you can continue using your existing scripts, and it also provides cross-language support for Python, C#, and Java.

The TestProject Agent is a cross-platform local component that sits on your machine. It takes care of the entire execution on-premises and communicates directly with TestProject's cloud testing repository. This removes the complexity of installing and managing the drivers required for Selenium and Appium, and it can also be used to trigger tests remotely in a virtual environment.

Benefits of using TestProject:

  1. Automatic Step Recorder
  2. AI-Powered Self-Healing  
  3. Adaptive Wait Capability 
  4. Automation Assistant

Automatic Step Recorder

Even though codeless test automation tools are a great alternative, opinions on recorded tests are divided: some testing professionals do not find them as beneficial as coded tests. With TestProject's automatic step recorder, you can mix recorded tests with coded features and get the best of both as needed.

TestProject's automatic step recorder records every move and interaction as the tester exercises the application under test. The recorder sits directly on top of the application and comes with a built-in, intuitive element explorer that can determine multiple locator strategies as you hover over any element. TestProject stores each web element's locators in a collaborative POM repository, so testers can update a changed locator in one place without making any other changes to the test. Recorded steps can then be exported into Selenium code, extending the ability to customize tests further.
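The collaborative POM repository described above is essentially the page-object pattern: every locator lives in one place, so a changed locator is updated once and all tests pick it up. A minimal sketch in Python (the page, element names, and locator values are hypothetical illustrations, not TestProject's API; `driver` is any object exposing `find_element(strategy, value)`):

```python
class LoginPage:
    """Page object: every locator for the login page lives here, nowhere else."""
    # Locator strategies stored as (strategy, value) pairs.
    USERNAME = ("id", "username")
    PASSWORD = ("id", "password")
    SUBMIT = ("css selector", "button[type='submit']")

    def __init__(self, driver):
        self.driver = driver

    def log_in(self, user, password):
        # If the SUBMIT locator changes, only the class attribute above changes.
        self.driver.find_element(*self.USERNAME).send_keys(user)
        self.driver.find_element(*self.PASSWORD).send_keys(password)
        self.driver.find_element(*self.SUBMIT).click()
```

Any test that drives the login flow goes through `LoginPage`, which is exactly why a broken locator needs only one fix.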

AI-Powered Self-Healing

TestProject's AI-powered self-healing technology monitors your tests and selects the most robust selector strategies. As mentioned earlier, when recording a test TestProject automatically stores multiple locator strategies in the POM repository and notifies the user of a better, more stable way to run the test. This helps testers handle dynamic elements, that is, elements whose locator changes after a fresh page load, which makes tests likely to fail on execution. Whenever TestProject encounters a broken locator, the self-healing capability automatically prioritizes a different locator type, repairing the test on the fly and producing better results. Whenever the self-healing mechanism is used to recover from an error, it is indicated in the test report.

TestProject’s AI Self-Healing feature is designed to be non-intrusive. You can always override suggested locators by adding your own and making them primary, or you can even remove the automatically generated ones.
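The core self-healing idea, trying the primary locator first and falling back to stored alternatives while reporting which one actually worked, can be sketched generically. This is a conceptual illustration only, not TestProject's internal implementation; `find` stands for any element-lookup callable that raises on failure:

```python
def find_with_healing(find, locators):
    """Try each stored locator in priority order; return (element, locator_used).

    `find` is any callable that returns an element or raises LookupError.
    A non-primary match means the test was 'healed' and the primary
    locator should be reviewed, mirroring the report indication above.
    """
    errors = []
    for locator in locators:
        try:
            return find(locator), locator
        except LookupError as exc:
            errors.append((locator, exc))
    raise LookupError(f"All locators failed: {errors}")
```

Because the caller learns which locator succeeded, it can flag healed steps in its own report, just as the overridable primary/secondary locators above suggest.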

Adaptive Wait Capability

Timing is one of the biggest automation obstacles, and it is affected by many factors, such as internet speed, device performance, and various runtime events. Traditionally, static waits are used to overcome these obstacles, but they are believed to cause up to 45% of test flakiness and failures. A flaky test is one with non-deterministic output: run repeatedly against the same code, it can either pass or fail.

TestProject's adaptive wait capability is smart enough to eliminate those errors by making tests wait for actions and validations before proceeding. Adaptive wait can be used within any UI test step or validation. TestProject adapts to the application's actual loading pace and executes the next action once the relevant conditions are met.
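Conceptually, an adaptive wait polls a readiness condition until it holds or a timeout elapses, rather than sleeping for a fixed interval. A minimal generic sketch of that polling loop (an assumption-level illustration, not TestProject's API):

```python
import time

def wait_until(condition, timeout=10.0, poll=0.1):
    """Poll `condition` until it returns a truthy value or `timeout` seconds pass.

    Returns the condition's value as soon as it is truthy, so the test
    proceeds the moment the application is actually ready, whether that
    takes 50 ms or 8 seconds.
    """
    deadline = time.monotonic() + timeout
    while True:
        value = condition()
        if value:
            return value
        if time.monotonic() >= deadline:
            raise TimeoutError(f"Condition not met within {timeout}s")
        time.sleep(poll)
```

A slow page simply consumes more polls; a fast page wastes no time, which is the advantage over a hard-coded sleep.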

Automation Assistant

In certain cases, automated tests can report false-positive actions, especially when testing mobile applications: actions reported as passed that did not actually perform their task. To overcome these obstacles, TestProject comes with an Automation Assistant that provides suggestions to make automated tests more efficient and stable. The Automation Assistant analyzes each step, detects actions that did not reach their target, and attempts to fix them automatically. This is mainly beneficial for pop-ups and other events that break the flow of a running test. The Automation Assistant even provides tips to help with performing tests on the TestProject platform.


The usage of codeless test automation tools will continue to rise in the coming years as codeless testing becomes a fundamental practice for QA teams. Among the codeless automation tools currently on the market, TestProject is one of the few that fully supports codeless testing while also offering advanced scripting capabilities. All these features come at no cost, making TestProject the only free, cloud-based codeless test automation tool.

Timothy Samuel

Trainee Associate QA Engineer

Importance of AI for test automation

Software Testing Evolution

Software testing holds high importance in the software development life cycle. Even though developers build according to requirements, it is only during testing that we make sure everything meets expectations. Looking back over past decades, it is quite evident that software testing has gradually evolved. At first, testing started only after software development was complete. Today, most companies have moved to agile processes, which lets testing start in parallel with development; the reason for choosing agile processes is to find and fix bugs as early as possible, and automation testing came to play a role alongside the manual testing process for exactly that purpose. In this era, Artificial Intelligence (AI) is stepping into software testing, and its application to test automation is worth discussing. In this article we will take a look at AI test automation and provide guidance on how to apply it.

What is AI?

AI is also known as machine intelligence. Simply put, machines performing tasks that normally require human intelligence is what is known as 'AI', or Artificial Intelligence. Applications such as image recognition, speech recognition, chatbots, and natural language generation came to the world because of AI.
Most current AI systems belong to the "limited memory" category: the system or machine reacts based on past experiences. That same idea can be applied in test automation to maintain tests, and the sections below give directions for applying AI to your project.

Why should you apply AI to Test Automation?

According to a survey of 200+ testers conducted by TestCraft, the following are the major concerns identified in testing:

  • Test Maintenance
  • Not enough man power
  • Lack of integrations
  • Want to increase coverage
  • Hard to find good test engineers
  • Not able to keep up with an agile schedule

Out of all participants, 50% of testers mentioned that test maintenance is the biggest bottleneck for testing, so AI is a promising solution to this issue. Beyond test maintenance, AI supports saving time, stabilizing tests, and finding and fixing bugs much faster.

How AI enhances software testing efficiency

"Self-healing" is the mechanism AI uses to overcome the drawbacks mentioned above. Self-healing identifies damage and errors by itself and has the ability to repair that damage and fix those errors automatically, without human involvement.

The problem in test automation is that once things are automated, any failure costs time to investigate and identify the root cause. By building this self-healing behavior into our test automation, we mainly support error handling, as well as information-flow management. When errors occur, AI can adjust for them by observing the system's response and learning the pattern needed to self-heal. This mechanism gives testing the following advantages:

  1. Maintain test stability
  2. Create more reliable automated tests
  3. Identify bugs earlier and resolve them faster
  4. Save time and reduce the cost of failures
  5. Reduce maintenance
  6. Learn continuously from data and make the right decisions to overcome failures

AI test automation Tools

The following are some of the most popular AI test automation tools currently available:

  • Applitools
  • SauceLabs
  • Testim
  • Sealights
  • Test.AI
  • Mabl
  • ReTest
  • ReportPortal

Stay informed about future AI test automation trends

Applying AI to test automation will be a great opportunity for software testing in the future, since it addresses current test maintenance issues without human involvement. As a result, testers will not have to go into the code manually to identify issues and validate them. To get the greatest benefit, it is good to keep in touch with future AI test automation trends; for that, you can follow blogs and research articles related to AI.

AI in test automation is not an obstacle, but an opportunity

Vindya Gunarathna

QA Engineer

Test Design Technique – Equivalence Partitioning

“Your test equipment is lying to you and it is your job to figure out how.”

Charles Rush

What is a software test design technique?
Why do we need to use a technique for testing?

A software test design technique is a method a quality assurance engineer can use to derive test cases and test scenarios according to the specification or structure of the application. But why do we need a technique for this? Couldn't we simply go through the specification document or a structural document and derive the test cases? We know the fundamental software testing principle that "exhaustive testing is impossible", and sometimes we waste our time attending to unnecessary test scenarios.

To overcome these conflicts, we can use test design techniques to get maximum coverage out of the test execution time.

As shown in the diagram below, testing can be divided into two categories: static testing and dynamic testing. In this blog I will discuss the "equivalence partitioning" technique, which falls under the black-box testing method.

Let's discuss what equivalence partitioning is. Equivalence partitioning tests various groups of inputs that we expect the application to handle in the same way. The method identifies equivalence classes (equivalence partitions), which are "equivalent" because the application should handle every member of a class in the same way. These partitions fall into two categories:

  1. Valid equivalence partitions, describing the valid scenarios the application should handle
  2. Invalid equivalence partitions, describing the invalid scenarios the application should handle gracefully or reject

Note: We can combine multiple valid classes in a single scenario. However, keep in mind that we cannot combine multiple invalid classes in one scenario, because one invalid class value might mask the incorrect handling of the other.

How to derive equivalence classes:

Assume we have to validate an input field that accepts numbers and characters, while the application should reject special characters. Then:

Set – Keyboard input
Subset A – Numbers (Valid)
Subset B – Characters (Valid)
Subset C – Special Characters (Invalid)

This is the basic visualization of equivalence partitioning. Remember, though, that equivalence partitioning can be applied iteratively: in the example above, subset A accepts numbers, and the numbers class can be divided into sub-partitions such as integer values, decimal values, and negative values, as shown below.
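The keyboard-input example above can be expressed as a small classifier that assigns each input to its partition; one representative value per partition is then enough to cover the whole class. A sketch (the partition rules and representative values are illustrative assumptions):

```python
def partition(text):
    """Assign an input string to its equivalence partition."""
    if text.isdigit():
        return "numbers"           # subset A, valid
    if text.isalpha():
        return "characters"        # subset B, valid
    return "special characters"    # subset C, invalid

# One representative value per partition covers the entire class.
representatives = {
    "42": "numbers",
    "abc": "characters",
    "@#!": "special characters",
}
```

Testing the three representatives exercises all three partitions without enumerating every possible keyboard input, which is the whole point of the technique.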

How to compose test cases:
Assume we have two sets, X and Y. X1 and X2 are subsets of X, and Y1, Y2, and Y3 are subsets of Y. Assuming all these subsets are independent and valid, we can compose each test case from one X subset and one Y subset. This reduces the number of test cases while still covering the same scenarios.
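Pairing independent valid subsets this way needs only max(|X|, |Y|) composed cases rather than |X| × |Y| pairs. A sketch using `itertools.zip_longest`, reusing the last subset of the shorter set as filler (the subset names are illustrative):

```python
from itertools import zip_longest

def compose_cases(x_subsets, y_subsets):
    """Pair valid subsets from X and Y so each subset appears at least once."""
    # Reuse the last subset of the shorter list to pad the pairing.
    filler = x_subsets[-1] if len(x_subsets) < len(y_subsets) else y_subsets[-1]
    return [
        (x if x is not None else filler, y if y is not None else filler)
        for x, y in zip_longest(x_subsets, y_subsets)
    ]

cases = compose_cases(["X1", "X2"], ["Y1", "Y2", "Y3"])
# Three composed cases cover all five subsets, instead of 2 * 3 = 6 pairs.
```

With invalid subsets the pairing rule changes: as noted earlier, each invalid class must appear in its own case so one failure cannot mask another.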

I have only shared the fundamentals of equivalence partitioning. Try Google and seek out more 😉

See you again with another test design technique!

Praveena Karunarathna

Former Senior QA Engineer

Data Science Quality Assurance Kickoff

What is Data Science? In simple terms, you could say it is "the study of data". Nowadays data science has become a prominent subject, since it plays a major role in helping industries with the predictions and analysis that come in handy when making business decisions.

Some actual use cases of data science include internet search, recommendation systems, targeted advertising, image recognition, speech recognition, gaming, airline route planning, and fraud and risk detection. Many more applications basically run on top of the concept of data science.

Now let's jump to the point where quality assurance comes into play. Assuring the quality of data science solutions is not an easy task, since the testing strategy can vary based on the solution provided. Unfortunately, there are not many resources available to refer to when it comes to quality assurance for data science. With the information we do have, we can split the process of testing a data science application into three main components:

Let's take each component in turn and clarify it further.

Data Validation:

This component validates that the data is accurate and not corrupted. To ensure data validation is achieved, here are a few techniques you could follow:

  • Validating the schema of the record
  • Validating the Data Source
  • Validating the Data format
  • Validate duplicate records
  • Verify Non-existence of empty records
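The checks listed above can be sketched as plain record validation. The schema and field names below are hypothetical, purely for illustration:

```python
SCHEMA = {"id": int, "name": str, "score": float}  # hypothetical record schema

def validate(records):
    """Return a list of (index, problem) findings for a batch of records."""
    problems = []
    seen = set()
    for i, rec in enumerate(records):
        # Schema: the record must have exactly the expected fields.
        if set(rec) != set(SCHEMA):
            problems.append((i, "schema mismatch"))
            continue
        # Format: each field must have the expected type.
        if not all(isinstance(rec[k], t) for k, t in SCHEMA.items()):
            problems.append((i, "bad type"))
        # Empty values.
        if any(v is None or v == "" for v in rec.values()):
            problems.append((i, "empty value"))
        # Duplicates, keyed by the full record content.
        key = tuple(sorted(rec.items()))
        if key in seen:
            problems.append((i, "duplicate record"))
        seen.add(key)
    return problems
```

Source validation (confirming the records really came from the expected system) is organizational rather than per-record, so it is omitted from this sketch.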

Process Validation:

This component can simply be called business logic validation: you verify the business logic within the system node by node, and then verify it across different nodes.

Output Validation:

In this component, you validate the end result, the processed data, against the expected result. Multiple techniques can be used for this; the Jaccard index, Jaccard distance, precision and recall, and area under the curve are a few of the basic ones.
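For two sets of items, actual output versus expected output, most of these metrics reduce to a few set operations. A minimal sketch:

```python
def jaccard_index(a, b):
    """|A intersect B| / |A union B|: similarity of two sets (1.0 means identical)."""
    a, b = set(a), set(b)
    return len(a & b) / len(a | b) if a | b else 1.0

def precision_recall(predicted, expected):
    """Precision: share of predicted items that are correct.
    Recall: share of expected items that were predicted."""
    predicted, expected = set(predicted), set(expected)
    tp = len(predicted & expected)
    precision = tp / len(predicted) if predicted else 0.0
    recall = tp / len(expected) if expected else 0.0
    return precision, recall

# Jaccard distance is simply 1 - jaccard_index(a, b).
```

Area under the curve needs ranked scores rather than plain sets, so it is left out of this sketch.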

As a quality assurance engineer and a beginner in the data science field, what is our commitment?

  • Keep in mind that the testing life cycle and the QA phases apply as-is to data science applications. QA practices and standards remain the same no matter the domain, field, or application.
  • Get the requirement and the proposed solution from the responsible party, and analyze every point of the design where testing could be involved. These testing points could fall under any of the components: data validation, process validation, or output validation.
  • Come up with your own testing approach for the identified points. Decide whether your approach should be manual or automated, and train yourself with the required knowledge materials accordingly. For example, if your testing must be automated and involves scripting, build expertise in the relevant scripting language.

It is true that we lack resources related to data science quality assurance, and data science testing is a complicated task, since your testing strategy will vary with each requirement. But once you take on the task, you can always start publishing your own resources for data science QA, and adapting to the changing nature of data science will become a snap.

Happy Data Science Testing!!!




Elaine Nanayakkara

Senior QA Engineer 

How Many Test Cases Do You Need? How Many Do You Have?

A recent discussion with a top-brass individual in the software QA domain brought up this very question, one I hadn't really bothered about during my past seven years as a QA professional. After the discussion was over, I found some time to reflect on it: does it matter how many test cases there are to test a particular application?

First, a bit of theoretical background…

Well, this is something we all know. According to the ISTQB Foundation Level syllabus (which, by the way, can be considered a comprehensive handbook for a test engineer) and the IEEE 829 Test Documentation Standard, test cases stem from test design and documentation procedures.

Given a software system to test, the test engineers first analyze the system specification to find test conditions, which in simple terms are things that could be tested. Once the engineers are satisfied that they have found a good set of conditions covering the system's functional and non-functional aspects (a collection sometimes referred to as the 'test oracle'), they select which of these will actually be used for testing and prioritize them based on the importance and risk levels assigned to each.

Then come the test cases: the detailed specifications of how one or more test conditions are actually tested. A pre-determined set of preconditions, inputs, steps to follow, and expected outputs is documented as a test case, which resides in a formal test specification.

On top of these, test procedures are documented to specify how the test cases should be executed. In case test automation comes into the picture, the procedures may extend to automation scripts and so on.

But the most important thing to remember is that no matter how these standards and 'conventional ways of doing things' are set, during actual execution it all depends on several factors.

Practically, you can’t always do it by the book…

Unless you are developing and testing your own product with no hurry to take it to market, which is highly unlikely, there is a plethora of constraints to consider.

First and foremost is time. If you are working on a fixed-cost project, you have to devote the maximum amount of available time to testing, which leaves documentation a smaller portion. If the project is budgeted on a time-and-materials basis, the customer will want to see the latest iteration in working condition after each build or sprint, again leaving documentation and design a smaller chunk of the available time.

But anyway, you cannot test without doing any of these…

Yes, you have to analyze the system and identify the test conditions; otherwise there is little or no way of knowing what kind of beast you are dealing with. And yes, you have to prepare test cases; otherwise there is no way to communicate to the project's other stakeholders what you test and how you do it. Even if you do all the testing yourself, keeping track of what to do and what has been done is always best practice.

You may use different means to keep track of these test cases, ranging from spreadsheet-based checklists to well-structured and mapped test cases in a test management tool. The means of managing test cases can be selected to suit the constraints of the project.

Which method should you choose?

The common rule of thumb is: the more detail you have, the better the test case. For example, 'Input numbers 2 and 3, calculate the total; the correct output should be 5' is more clear-cut than just saying 'test the sum of two numbers'.
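The clear-cut version of that test case maps directly to an executable check, with concrete inputs and an exact expected output. A minimal sketch (the `add` function is a hypothetical stand-in for the feature under test):

```python
def add(a, b):
    """Hypothetical feature under test: sum of two numbers."""
    return a + b

def test_sum_of_two_numbers():
    # Concrete inputs and expected output, as the detailed test case specifies.
    assert add(2, 3) == 5

test_sum_of_two_numbers()
```

The vague version ('test the sum of two numbers') gives an automation engineer, or even another manual tester, nothing concrete to assert against.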

But still, it makes no sense to spend three days composing test cases if you only have five days to test, fix bugs, and re-test a particular application. How the test cases should be prepared has to be decided on a case-by-case basis.

The best way is to be ‘Efficient & Effective’!

As our management gurus put it, 'to be efficient is to do the thing right, and to be effective is to do the right thing'. Putting this into context, your test cases can safely be considered effective if they adequately cover the functional and non-functional aspects of the software under test.

Efficiency, however, is in a league of its own. Depending on the person, one tester might cover a certain function with just one test case where another requires several. If all the functional and non-functional aspects can be covered with, say, 10 test cases, no purpose is served by having more than 10, and fewer would leave gaps.

Turns out, it is a numbers game…

There is a specific case I want to cite as an example. For one particular web-based application, my test team had prepared about 90 test cases, excluding certain edge scenarios. We ensured all the test conditions were covered, and the test execution time was typically 32 man-hours.

Once the application finally passed through my team and was sent to the client, he wanted it tested again by some third-party testers. After about two weeks, this test team had reported 80 new defects! Neither my test team nor the development team was convinced by this result, so we called a triage meeting to review the defects. After thoroughly analyzing all 80, the team determined that only two of the reported incidents were actually valid defects.

When we took this back to the third-party test team, it turned out they had prepared over 300 test cases, and executing them all had taken the team (we were told there were two engineers) about two whole weeks. Amazed by this, we cordially requested that they share the test cases, just to get an idea of what they had been up to. Needless to say, even after months and several reminders, it was apparent they did not want to share them. The conclusion was that they had exaggerated the work they did.

Ultimately it comes back to the numbers game I was talking about. Showing your management or the client a large number of test cases and logging a large number of defects is one method some test teams resort to, just to 'show' they did good work. Since management or a client would rarely dig deep enough to examine validity in such situations, it was a safe bet to make.

At the end of the day, ‘How many test cases should be there?’

The first thing to do is identify the test conditions. Time spent here is never wasted, since it also helps you understand the system better. Then you have to understand the nature of the project, including how the management and the client perceive testing.

From there, use test design techniques you prefer to carefully craft the test cases. Remember, the set of test cases that comes out as a result of this should be both efficient and effective.

The challenge is to prepare neither more nor fewer but the right number of test cases to provide complete coverage and achieve the greatest effectiveness.

Reviewing this work in collaboration with the development and business teams is always good practice; it helps bridge any gaps or misunderstandings. Beyond that, as long as your set of test cases covers all functional and non-functional aspects of the application under test, the number of test cases does not really matter.


Deshal Weerasinghe

Deshal Weerasinghe was an Associate QA Lead at Zone24x7.