Applications for RFID in the new normal

Due to the ongoing global pandemic, health has become a global concern and a priority. People are paying close attention to social distancing, limiting the time spent within communities, and contactless access to daily activities. Against this backdrop, RFID becomes a key enabler for many types of solutions. RFID is not new to the world; however, it can be a novel experience for general society. It is therefore important to identify how businesses and communities could adopt RFID to maintain their ordinary lifestyles with an emphasis on safety and social distancing. Here are some areas where RFID could help.

Implementing product tracking systems (Logistics and supply chain)

Using RFID for product tracking in logistics and supply chain management limits the use of human resources. Additionally, it helps ensure social distancing.

RFID also gives us the ability to take inventory without touching the goods. As RFID enables contactless control of assets, it helps reduce the transmissibility of pathogenic diseases. RFID can therefore be used in workplaces and organizations as an effective precaution.

Contactless payment

Utilizing RFID for contactless payments decreases interaction with banknotes and cards. Moreover, RFID reduces the time a person spends making a payment, and thereby reduces exposure.

Attendance tracking

Attendance tracking is another area where RFID solutions can be applied. Even though most employees in many companies are working from home these days, some offices/companies are kept open, practicing social distancing within the premises. A quick swipe of an RFID-based badge/ID can be much safer than fingerprint-based attendance tracking.

Asset Tracking

RFID presents opportunities to track assets used and shared by multiple people. This kind of continuous tracking can be used to manage the risk of spreading the disease and helps with rapid identification and decontamination of equipment and tools that have come into contact with an infected person. Simultaneously, it helps to analyse the spread of the virus within a communicable area.

Interactive marketing

When RFID is combined with interactive marketing, it becomes smarter and safer. Business owners can use remote scanners to read RFID tags placed on different products, enabling them to record a variety of information including quantities of various stock items and their exact locations. These tags contain unique product numbers. If consumers pay for the goods with a loyalty card, businesses can link the purchases to the recorded RFID data and use that information for marketing purposes by mapping out consumers’ buying patterns. Retail stores can use this data to make improvements.

Since 2011, Zone24x7 has designed, implemented, and integrated custom RFID solutions with core business operations for the efficient management of merchandise and business assets. We also partner with clients to maximize returns from their existing RFID investments.

We focus on:

  • Inventory & Item Level Visibility
  • Asset Tracking
  • Evaluation And Selection Of RFID Tags, Labels and Hardware Infrastructure
  • RFID Middleware/Software
  • RFID-based Localization

You can get in touch with us about our RFID products and solutions here.

As we are all in the fight against COVID-19, we can use technology to overcome our struggles. Applications based on RFID play a major role in easing our day-to-day tasks.

Lehindu Atapattu

Trainee Associate Digital Marketing

How a Business Analyst can add value in the SSDLC



The role of the Business Analyst (BA) is generally understood as that of the stakeholder responsible for managing software requirements. While that briefly captures who a business analyst is, the real role goes beyond it: the BA is involved from the brainstorming phase of a project through to its decomposition. In each of these phases, the BA plays a vital and diversified role in bringing an idea to life. However, the way in which the BA views the security aspects of software is the focus of this article.

Understanding about Data Protection Standards and Regulations

To start with, the BA’s understanding of software security standards and their applicability plays a major role. There are different data protection standards and regulations in the world. GDPR (General Data Protection Regulation), CCPA (California Consumer Privacy Act), HIPAA (Health Insurance Portability and Accountability Act), PCI (Payment Card Industry), and ISO 27001 (the international standard for information security management) are some of them. These standards and regulations significantly impact the SDLC (Software Development Life Cycle) and the corresponding IT development processes of organizations that plan to roll out information systems projects. They also increase the complexity of the functional, non-functional, and technical designs associated with the various business and technical layers outlined in systems. Following basic data protection principles is a software engineering responsibility. However, as a BA, it is better to understand at least a few of these standards and regulations, because data protection requirements need to be addressed in the planning stage of the SDLC and documented, to avoid significant cost overruns and rework later in the process.

Accordingly, depending on the client, the type of project, and the region the software is being developed for, the standards to adhere to may differ. Whether one, more, or none of the above standards is selected, the specific security-related aspects will differ as well. Hence, this article focuses on how a BA can be involved in a generic Secure Software Development Life Cycle (SSDLC) without being specific to the aforementioned security standards. Yet it must be highlighted that, if a project is undertaken under such a specific security standard, the respective security concerns must be addressed by the BA and the rest of the team members.


The SDLC is a framework that describes the process used by organizations to develop an application from its initiation to its closure.

In general, the SDLC includes the following phases:

  1. Planning and requirements gathering
  2. Architecture and designing
  3. Implementation/Coding
  4. Testing 
  5. Release and maintenance

Simply put, the SSDLC is derived by applying software security practices to each stage of the SDLC. The Business Analyst’s involvement in the SSDLC is detailed in the subsequent sections.

Generic Security Touch-points

As discussed in the overview, once software security practices are applied to each phase of the SDLC, it is considered an SSDLC. Therefore, security touch-points are introduced at each of the SDLC phases.

The security touch-points can be illustrated by Gary McGraw’s influential touch-point model.

Among these security touch-points, the focus here is on the involvement of the BA.

BA’s Involvement in The SSDLC

Security-Enforced Planning and Modeling Techniques

Following the selected SDLC methodology, planning and modeling should be carried out by breaking the scope down into multiple modules and sub-modules, which allows them to be maintained and managed easily. The more the scope is separated into modules and sub-modules, the more the security of each module can be enforced individually (rather than following an all-in-one approach), giving a better combined security outcome once all the modules are functioning together. Accordingly, the business functional architecture of the system should be designed to support this.

A sample section of the business functional architecture is attached below for further illustration.

Security Requirements Specification

Security Requirements in SRS

All the requirements elicited for the project are documented in the Software Requirements Specification (SRS) format. The functional and non-functional requirements captured are elaborated as spec points under the SRS maintained for each module.

A sample section of the SRS document is attached below for further illustration.

Accordingly, as part of the functional and non-functional requirements gathering process, security requirements should also be captured and documented in the SRS.

A sample depiction of a security requirement is attached below for further illustration.

Security Requirements in User Stories

Apart from the SRS, user story documents are also maintained for each sprint. Therefore, security requirements should also be covered, either as user stories or as scenarios under the user stories, to ensure focus on security.

A sample depiction of a security scenario of a user story is attached below for further illustration.

Abuse-Case identification

As an initial requirements gathering step, use-case diagrams are used to identify interactions among actors in a system. In the same manner, abuse cases (also called misuse cases) should be identified by modeling adversarial actions against the system.

A sample depiction of an abuse case is attached below for further illustration.

Evil User Stories

Evil user stories are structured like standard user stories, yet they express what an evil-minded (abusive) user would do in the system instead of what a standard user would do. These stories highlight what the system does not expect a user to do. By understanding what a hostile user would attempt in the product, acceptance criteria and scenarios can be developed to better defend against such behavior.

A sample depiction of an evil user story is attached below for further illustration.

Security Enforced KB Articles

Once development is complete and the software is ready to be deployed, security in the release and maintenance phases is enforced through procedures and documentation such as deployment guides, user guides, and support team guides.

Thamal De Silva

Senior Business Analyst

Kalindu Jayathilaka

Consultant – Business Solutions

A New Take on Caching

Any application out there in the world, especially a production-level application, possesses a huge amount of data. This data is obviously stored in a secure location, but every time the application needs to use it, it has to access that storage. This becomes inconvenient when the data is required quickly or frequently. Also, when reading data becomes an expensive operation, it is hard for the application to function robustly. If you are looking to overcome this inconvenience, caching is your solution.

Caching enables you to access your stored data much faster. The data stored in a cache can come directly from the data source or be generated by processing the data in a request. Hence, in subsequent requests the application does not have to access the data source or reprocess the data; it can simply access the cache and serve the request smoothly and much faster.

Using caching as-is has a small problem: the cache is only available while the application is up and running. The moment the application restarts, the cache is empty and you lose the data that was previously in it. The solution is a cache that can persist.

A persistent cache stores the data in the file system or the system memory. If the application stops or crashes, the data in the cache is not lost; it remains in the file system or system memory, waiting to be loaded back into the new cache created after the application restarts. This ensures that the application picks up from the same state it was in before crashing/stopping.
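The approach discussed in the linked white paper is Java-based; purely to illustrate the idea of a cache that survives restarts, here is a minimal Python sketch (the class name and on-disk format are illustrative assumptions, not the library's actual design):

```python
import os
import pickle


class PersistentCache:
    """A minimal persistent cache: an in-memory dict backed by a file.

    On construction, any entries persisted by a previous run are loaded
    back, so the cache survives application restarts or crashes.
    """

    def __init__(self, path):
        self.path = path
        self.data = {}
        if os.path.exists(path):
            # Reload the state written by a previous run.
            with open(path, "rb") as f:
                self.data = pickle.load(f)

    def put(self, key, value):
        self.data[key] = value
        self._flush()

    def get(self, key, default=None):
        return self.data.get(key, default)

    def _flush(self):
        # Naively rewrite the whole dict on every update; a real
        # implementation would batch or append writes instead.
        with open(self.path, "wb") as f:
            pickle.dump(self.data, f)
```

Creating a second instance against the same file simulates an application restart: the new cache starts pre-populated instead of empty.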

Check out the linked white paper to get an understanding of a few of the existing solutions for persistent caches and some of their limitations.

Furthermore, the linked white paper introduces an approach that can be taken to solve the limitations that arise from existing solutions. It is noteworthy to mention that the introduced approach is a customization done due to certain stipulations. Check out the white paper to get to know what those stipulations are. If you have similar stipulations, the introduced approach is the perfect fit for you to implement a persistent cache.

The following are the different types of caches built using this customization.

  • Basic Persistent Cache
  • Persistent Cache with TTL (Time-to-Live)
  • Persistent Cache with Per Row TTL
  • Persistent Loading Cache

The white paper extensively discusses each of these types, along with the related documentation needed to get you started. Before that, let’s have a look at the features of this customization.

  • Any object type can be used as the key or value without depending on the library specific wrapper objects.
  • Does not require external configuration files.
  • Can specify a common TTL or a per-row TTL.
  • Loading cache features with TTL.
  • TTL can be specified in any time unit using Java 8’s ChronoUnit.
  • Features like batch storing of data and saving keys/values if they are absent.
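The library itself is Java-based (hence ChronoUnit); as a language-neutral illustration of the per-row TTL idea, here is a small Python sketch with illustrative names:

```python
import time


class PerRowTTLCache:
    """Illustrative cache where every row can carry its own time-to-live."""

    def __init__(self, default_ttl=60.0):
        self.default_ttl = default_ttl  # common TTL, in seconds
        self.store = {}                 # key -> (value, expires_at)

    def put(self, key, value, ttl=None):
        ttl = self.default_ttl if ttl is None else ttl
        self.store[key] = (value, time.monotonic() + ttl)

    def get(self, key, default=None):
        entry = self.store.get(key)
        if entry is None:
            return default
        value, expires_at = entry
        if time.monotonic() >= expires_at:
            # Lazily evict expired rows when they are read.
            del self.store[key]
            return default
        return value
```

The per-row expiry timestamp is what distinguishes this from a cache with one common TTL: each `put` can override the default lifetime for that entry alone.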

To wrap up, we have introduced a customization for a feature-rich persistent cache that can be used whenever a caching mechanism with persistence is needed. To get a comprehensive understanding of this approach, do check out the white paper linked below.

Read More >

Let us know what you think of this approach.
Happy coding!

Mariam Zaheer

Software Engineer

Choosing the Right Algorithm at the Right Time – The Science of Impactful Product Recommendations

With the evolution of technology, online retail shopping has grown to play a major role in the modern world. A personalized recommendation system aims to identify the products that are most relevant to a user, based on their past interactions.

This encourages a user to browse more products and makes them more likely to buy, effectively increasing business revenue and improving user experience. Hence, it is vitally important that the evaluation of recommendations in such a context selects criteria in a way that maximizes business revenue and user experience. The chosen optimal criteria may vary with user preferences, seasons, and many other factors. Therefore, selecting the optimal criteria has to be done very thoroughly, for which an effective and efficient evaluation technique is essential.

Where Do You Stand?

In this fast-moving modern world, people tend to buy online due to their busy schedules and the convenience it offers, and any outdated organization that does not support this will be left behind. In a post-COVID-19 world, online retailing and e-commerce will without a doubt increase immensely, forcing almost every organization to adopt online retailing for survival. Recommendation systems play a very important role here, boosting revenue and user experience, and all the leading retailers worldwide use modern recommendation systems. Online retailers that use primitive recommendation systems will simply not be competitive enough to survive among those that already use standard ones.

Multi Armed Bandit

Evaluation of recommendations falls into two categories: offline evaluation and online evaluation. An example of offline evaluation is the multivariate testing method, which explores the most optimal criteria within a specific period of time, but afterward serves recommendations using only the winning criteria. It thus provides a single cycle of exploration to exploitation and does not allow automated further exploration cycles, leading to a need for manual intervention once the criteria passes its peak performance. These limitations bring out the necessity of online evaluation that supports automated, repeated exploration cycles, which leads us to the Multi Armed Bandit. The Multi Armed Bandit problem is a setting in which a fixed, limited set of resources must be allocated among competing choices in a manner that maximizes the expected gain.

Multi Armed Bandit In A Retail Context

The endless expansion of e-commerce has led retailers to advertise their products through on-site displays, driven by recommendations that consider various factors. Recommendation systems are growing rapidly in online retail due to their capability to offer personalized experiences to individual users. They make it easier for users to access the content they are interested in, which results in a competitive advantage for the retailer. Hence it is necessary to have smart recommendation systems. Recommendation systems using Multi Armed Bandit are capable of continuous learning: continuously exploring for winning criteria and exploiting them without manual intervention.

What We At Zone24x7 Do

We excel in offering smart recommendation systems. We are well experienced in building recommendation systems that give the user different results each day by processing massive loads of data in an intelligent back-end. We have studied every possible way to do this and selected three effective algorithms for the MAB problem, which in summary are:

  • Epsilon Greedy Algorithms
  • Upper Confidence Bound Algorithms (UCB)
  • Thompson Sampling

We chose Thompson Sampling for the retail recommendation system, and it has been one of the highest-performing solutions due to its lower cumulative regret. It is also the most cost-effective solution when it comes to implementation.
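As a generic sketch of how Thompson Sampling works (not Zone24x7's implementation), assume binary rewards such as click/no-click: each arm keeps a Beta posterior over its success rate, and on every request the arm with the highest sampled value is served.

```python
import random


class ThompsonSampler:
    """Thompson Sampling for Bernoulli rewards (e.g. click / no click)."""

    def __init__(self, n_arms):
        self.successes = [0] * n_arms
        self.failures = [0] * n_arms

    def select_arm(self):
        # Draw one sample per arm from its Beta(s+1, f+1) posterior;
        # uncertain arms occasionally win, which is the exploration.
        samples = [
            random.betavariate(s + 1, f + 1)
            for s, f in zip(self.successes, self.failures)
        ]
        return samples.index(max(samples))

    def update(self, arm, reward):
        # reward: 1 if the recommendation was acted on, else 0.
        if reward:
            self.successes[arm] += 1
        else:
            self.failures[arm] += 1
```

As evidence accumulates, the posterior of the better arm concentrates and that arm is exploited almost exclusively, without any manual switch-over, which is where the low cumulative regret comes from.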

Multi Armed Bandit is the core idea behind the online evaluation system, and only a brief explanation of it is given here.

To read more on this:

Read More >

Key Services, Platforms & Products:

Big Data Analytics | Data Science | Analytics Center

Umesh Perera

Software Engineer

IoT Security Testing – Identifying the Scope

IoT (Internet of Things); Where it stands now?

A couple of years back, only a handful of people involved in the subject knew what IoT was all about. By now you should know IoT, or the Internet of Things: it has revolutionized the way we interact with day-to-day devices, the technology is very common, and IoT will be the next big thing in the coming years. If you want to confirm that, simply google IoT predictions. I am not going to talk here about what IoT is all about.

There are thousands, if not millions, of articles on the subject on the Internet. So if you have been in the IT industry for several years by now, I hope you have crossed paths with IoT at some point in your career. And if you are a newbie who has just entered the arena, my advice is that the subject is worth paying some attention to.

Security of an IoT device

Let me dig into the subject now. Day by day, second by second, every application and every device connected to the internet becomes more vulnerable to hackers. The reason is a constant battle: hackers try to steal valuable information, and the people who developed these systems release patches to close the holes. Why? Ultimately it is all about money and business. There will come a time when cybersecurity knowledge is mandatory even for a local police officer. For IoT devices, this is far more critical, as they deal with your private data or deliver services for your personal needs. This obliges developers/testers to think seriously about the security (physical, firmware, and software) of the device.

There are considerable differences between ensuring the security of web/mobile applications and ensuring that of IoT devices. From here onwards, I will talk about the areas you should concentrate on when identifying the scope of an IoT security testing initiative. If you are engaged in an IoT project as a developer or QA engineer, or are just an enthusiast about the subject, the content below will be a valuable piece for your arsenal in the cybersecurity domain.

  • Process matters

    The basics should be in place: you start with scope identification, then move on to threat modeling and mapping the attack surface. With this you will be able to see the bigger picture and easily identify the rest of what is described below.
  • Hardware (Physical) security

    This will be something new for you if you have only dealt with application security so far. When ensuring the security of the hardware, there are several aspects to concentrate on.
    1. Does the device have places that are exposed to human interaction?

      For example, if a display (front end) exists and applications are usable on it, we are mostly talking about Android (mobile) application security testing. If there is no real estate to display anything, or the display is purely informational, then you are completely free of that headache.
    2. Does the device contain any physical ports?

      If the device has physical ports, you need to establish what they are used for and ensure that unnecessary physical ports do not exist. If there are USB ports, check what access to the device they provide. USB debugging should be disabled; otherwise a hacker will be able to gain root access and do whatever they wish.
    3. How does the device get access to the internet? Is that through Ethernet or WiFi?

      You need to find out whether the device can connect to the internet via both Ethernet and WiFi or only through a single medium. Your testing methods will change based on that.
    4. What are the other connectivity methods?

      Finally, if the device connects via Bluetooth, Zigbee, or another wireless communication medium, you need to make sure the required security measures are addressed when implementing them. Also evaluate whether the device validates data before accepting it. When it comes to physical security, we have to think about how the device is intended to be accessed from outside, and then close all access points other than the intended channel. Also, if the device stores passwords or other sensitive data, make sure the hardware does not expose that data to an outsider (tamper mechanisms).

      If you are interested in this particular subject, it is better to get more familiar with secure microcontrollers, secure key storage, encryption for physical data channels (pin pad cables, inter-IC communication links), and tamper switches. Lastly, the security standards: it is worth knowing the kinds of standards your hardware and its firmware follow (e.g., MISRA C). Once you have covered the above aspects of hardware security, you are pretty much safe.

  • Firmware security

    This is the most important piece of your IoT device: the firmware controls everything that matters, from the sensors to the operating system. So having a look at the firmware installed on your device is mandatory, and if you miss it you are probably missing the big fish of your IoT security testing initiative. There are three major areas in firmware security.
    1. Invest more time on debugging interfaces (USB/Serial/JTAG/SWI)
    2. Protect your bootloader
    3. Implement continuous monitoring on both the device and firmware sides.

    In addition to the above, please be aware of firmware-level attacks. Below are the possible areas you should consider:
  • Vulnerabilities in third-party components and libraries.

    When developing firmware, developers use many third-party libraries and components. It is critical not only to scan via an automated tool, but also to get a list of all of them and validate them manually.
  • Injection attacks where a hacker can alter the firmware logic.

    Injection attacks are a broader subject. If the IoT device lets the user interact with a user interface and input data via a provided application, or even interact with the operating system, you need to make sure all fields are properly validated so that the user cannot perform injection attacks. The method of injection varies with the technology: if you are dealing with an application, it can be SQL or NoSQL injection.

    If you are dealing with the OS, it can be command injection, where an attacker alters the firmware logic to disrupt the normal functions of the device. So it is very important to make sure your IoT device has a very good defense against these types of attacks (e.g., an on-boot/periodic firmware integrity check).
  • Sensitive information at rest and transit

    Whether sensitive data or PII (Personally Identifiable Information) exists in your IoT device depends on the purpose it is intended for: the device could be inside your body monitoring your health condition, operating at home, or operating publicly. What you need to worry about is what kind of data passes through and is stored in your device. Can it be classified as sensitive information? If yes, you need to verify two things: how it transits within the device or to the outside, and how it is stored.

    Data in transit, especially between the backend server and the IoT device, or when passed to other third-party peripherals, should be secured. Make sure the channel is secured, and note that using TLS alone is not enough; the version matters as well. Anything below TLS 1.2 is no longer recommended by the industry. When storing data, you should verify whether PII is stored in plaintext or ciphertext (the result of encryption performed on the plaintext).
  • DoS attacks

    Another important aspect to look at is DoS (Denial of Service) attacks targeting the firmware. With these, a hacker can crash the system by exhausting all the available memory. Make sure proper mechanisms are enforced to protect the firmware against this.
  • Key management on client-side

    Another important point in firmware security is key management on your IoT device. When a device service or application communicates with its backend server, it uses a secret key to establish the connection; in this case it could be a service running on the firmware. Make sure you know where the key is stored on the device and how it is stored. Is the same key used all the time, or is a key rotation mechanism implemented? This is very important, since a hacker who steals the secret key can do whatever they want after that.
  • Open ports and services (To the network)

    Finally, you should be aware of which ports and services are open to the network. Any unnecessary ports should be closed. For example, if the device exposes port 23, someone can get into it via Telnet and take control of it unless proper security mechanisms are enforced.
  • Software security

    If your IoT device runs applications on top of the firmware, then this section matters. Some devices may not have any software running directly on the firmware, but others may have software that interacts with the firmware and the device (this depends on the service your IoT device provides). If it uses any software, that mostly means we are talking about Android applications. You should primarily look at the following:

    1. Vulnerabilities exist in the APK
    2. Data in transit and at rest
    3. Injection attacks via input fields
    4. Authentication and authorization mechanisms.

    I will not describe each of these in detail, as most were covered in the previous sections; here, however, they apply to the software applications, so you should test each area separately.
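The on-boot/periodic firmware integrity check mentioned above can be as simple as comparing a cryptographic digest of the firmware image against a known-good value. A minimal Python sketch follows (real devices would anchor the trusted digest in secure storage or a signed manifest, which this sketch omits):

```python
import hashlib


def firmware_digest(path, chunk_size=8192):
    """Compute the SHA-256 digest of a firmware image, streaming in chunks."""
    h = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(chunk_size), b""):
            h.update(chunk)
    return h.hexdigest()


def integrity_ok(path, trusted_digest):
    """True if the image on disk still matches the trusted digest."""
    return firmware_digest(path) == trusted_digest
```

Any modification to the image, even a single appended byte, changes the digest and fails the check.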


In conclusion, as I mentioned at the beginning, planning matters: start with a deep dive into the overall architecture and then into the threat model. By doing so you will identify where your device stands in terms of security and how strongly you should enforce the corresponding security mechanisms. If you consider the areas highlighted above when identifying the scope of your IoT security testing, you have a good start toward a secure IoT device.
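To make the open-ports point above concrete, a basic TCP connect check is enough to verify that a port such as Telnet's 23 is really closed. This is a sketch only; a real audit would use a proper scanner such as nmap, and the port list here is an illustrative assumption:

```python
import socket


def is_port_open(host, port, timeout=1.0):
    """Return True if a TCP connection to host:port succeeds."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False


def audit_ports(host, ports=(22, 23, 80, 443)):
    """Map each port of interest to whether it is reachable on the device."""
    return {port: is_port_open(host, port) for port in ports}
```

Any port reported open that is not part of the intended access channel is a finding to investigate.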

If you have time to prepare, it is better to study common IoT infrastructures and components first to gain some understanding of the individual components. This will also help you design and study the testing procedures relevant to them.

So fasten your seatbelt and start securing your IoT device if you have not already. You will save a lot of money for your business, and maybe you will be the one who ultimately saves the business. And if you are set on becoming a security testing professional and succeed in your IoT security testing work, be aware that you have begun your security testing journey in an area that the future represents…

Chandima Athapattu

Chandima Athapattu is a Lead QA Engineer at Zone24x7.

Woes of a Fleet Support Engineer During a Pandemic


Pandemics such as COVID-19 have led many organizations to ask their employees around the world to adapt to working from home (WFH), while attempting to run operations in unaffected regions as efficiently as possible. This is especially true for organizations that have large global transport and logistics operations. Technology such as wearables and handheld smart devices is a crucial part of any such operation today. The upkeep of this technology is heavily dependent on tech support engineers, who are expected to investigate and remediate user queries and potential breakdowns from all over the world. This operation is commonly known as Remote Monitoring and Troubleshooting (RMT).

This article discusses the challenges faced by such support engineers during WFH periods and how the right tool could have solved these problems.

Off-the-Shelf EMM Tools (a.k.a. MDM) Are Failures as RMT Solutions

  • EMM – Enterprise Mobility Management
  • MDM – Mobile Device Management
  • RMT – Remote Monitoring and Troubleshooting

Due to misinformed decisions by organizations, many support engineers are stuck with off-the-shelf EMM tools for supporting Remote Monitoring and Troubleshooting (RMT) requests. Unified Endpoint Management (UEM) tools serve tasks such as OS and application updates, security compliance, and lockdowns, but they fall miserably short when it comes to remote monitoring and troubleshooting.
Why? Because every organization’s uniqueness is reflected in its devices and software, as well as in how they are used. Troubleshooting such devices and applications requires bespoke solutions. Such bespoke solutions are not the focus of UEMs, which do not offer all the details a support engineer needs to perform RCA (Root Cause Analysis). As a result, engineers are forced to resort to very cumbersome methods of obtaining the information they need, such as Remote Desktop, or a mix of their own scripts and dashboards, which are highly insecure and unstable.

Remote Desktop is a Cumbersome Bandwidth Bandit

Due to the lack of information and remote control for troubleshooting, whenever a ticket is opened, support engineers have become used to resorting to remote desktop access to the end device to get the data and control they need to solve a problem. Remote desktop is heavily dependent on a stable and consistently fast network connection, which is not something you can expect from someone working from home.

Domestic Data Rates Are Exorbitantly Higher

Organizations generally opt for unmetered leased lines for their internet connectivity. When support engineers work from home, they are forced to use domestic connections, which are metered, sending operational costs through the roof.

Can’t Look Away for a Second, but You Have To

One of the biggest perks of working from home is getting to be around your loved ones. But support engineers who need to keep their eyes on a terminal or display are under a lot of tension whenever they step away from it, even for a second, and stepping away for short periods cannot be fully avoided. This is a lose-lose situation for the organization as well as the employee: the support engineer faces the anxiety of failing to meet SLAs because an incident was not seen on time, and the organization suffers the consequences.

MATRIX24x7: The Solution





A Tailor Made Solution Vs. Ready Made Software

  • Free yourself from the fixed functionality and rigidity of off the shelf, ready-made applications.
  • Mold the platform to your requirements vs. adapting to it.

A Partner Vs. A Vendor

  • Empathetically driven towards gaining a full understanding of the business and operation prior to solutioning.
  • Access to our best tech and domain experts vs. business sales intermediaries.

This is exactly what MATRIX24x7 is: an RMT platform that delivers tailor-made solutions, and a partner that operates as the CTO of your support operation.

Heshan Perera

Associate Architect / Manager – IoT Platform

Benefits of the Right Remote Monitoring & Troubleshooting (RMT) Solution to IT Support Teams


We are in an age where the COVID-19 pandemic has pushed organizations to adapt to remote working. L1, L2 and L3 support teams are among the most affected by this change. We discuss why this situation impacts these support teams and how they need to adapt to run like a well-oiled machine while working remotely.

MATRIX24x7 is an RMT platform that:

  • Helps solve more problems at L1 level.
  • Reduces back and forth communication for problem solving.
  • Enables rapid remediation through custom one-click commands.
  • Reduces time spent on recurring issues through automated troubleshooting.

A Background of L1, L2 and L3

Organizations break down their support team into three layers (L1, L2 and L3) in order to maintain a balance between the cost of operation and service levels. A major driver of this structure is that most support requests can be addressed by resources with very low technical expertise. As a result, the L1, L2 and L3 support teams demonstrate the following characteristics respectively.

Remote Working Aggravates Common Problems at Each Support Layer

  1. Misleading Tickets
    • When L1 encounters a problem outside their script, they communicate the problem to L2 with a ticket.
    • The content of these tickets is often misleading to the L2 support team due to lack of information.
    • A lot of correspondence with the end user as well as the L1 support engineer is required to resolve these.
    • How Remote Working Aggravates the Problem
    • Remote workers do not have access to the comms facilities available in their office premises.
    • Even when they are available, accessing such solutions remotely is cumbersome.
    • This has a toll on keeping up with SLAs.
    • Better Ticket Accuracy with MATRIX24x7
    • MATRIX24x7 integrates directly with your ticketing system.
    • Automatically adds all the information about a device or software to a ticket, with a snapshot of the device status at the time the issue occurred.
    • L2 and L3 support teams can jump start troubleshooting while greatly reducing the need for back and forth communication.
    • Generate tickets to the correct support team, based on the type of issue; thus allowing the quick resolution of issues and increased customer satisfaction.
    • Automated L1 ticket generation based on proactive monitoring of the system.
  2. Lack of Information / Data
    • After tickets are raised, L2 and L3 support teams need to have information at hand that enable them to perform RCA and troubleshoot efficiently.
    • However, most such support teams in the world only have off-the-shelf RMT tools at hand, which provide fixed information and functionality.
    • Certain teams are stuck with UEM (Unified Endpoint Management) software as their RMT solution. Please refer to our article on The Woes of a Fleet Support Engineer During a Pandemic for more details on why UEMs don’t cut it.
    • How Remote Working Aggravates the Problem
    • When teams work from the same location, they have the luxury of easily leveraging the experience and expertise of their colleagues; it is just a case of walking over to a desk and talking it out.
    • On the other hand, remote working is far more challenging, where teams struggle to communicate efficiently without visual aids and gestures.
    • Therefore, it is essential to limit back and forth communication to obtain required information to identify the root cause of the problem, so that troubleshooting can commence.
    • In depth / Relevant Data through MATRIX24x7
    • The MATRIX24x7 platform gives organizations access to the scenario-specific parameters critical to the efficiency of their RMT operation.
    • Our RMT experts will analyze your information needs and make them all available through its single portal.
  3. Lack of Control
    • Once a root cause is identified, the next step is to troubleshoot.
    • Off-the-shelf RMT platforms are very limited in terms of the actions that can be taken to remedy a problem.
    • Available methods include sledgehammer techniques such as remote desktop and device restarts that disrupt the operation.
    • As these approaches are disruptive to operations, support engineers are forced to trouble non-tech savvy end users or even worse, deploy tech support engineers to physically visit unattended devices to correct problems.
    • How Remote Working Aggravates the Problem
    • During a pandemic, deploying tech support engineers is an incredibly impractical task with all the lockdowns in motion.
    • Apart from this, maintaining a conversation over domestic network and phone connections becomes difficult due to delays and interference.
    • Granular, Purposeful Control through MATRIX24x7
    • The MATRIX24x7 platform makes adding custom configurations and commands a breeze.
    • Our RMT expert team will work with your teams to understand their wishlist in terms of remote control and configurability and work towards enabling such functionality through the platform.
  4. Inability to Remain Fixated on a Monitoring Screen
    • Support engineers need to keep their eyes on a terminal or a display in order to stay on top of support tickets.
    • This is possible while at your workplace.
    • However, this changes during a remote work situation.
    • How Remote Working Aggravates the Problem
    • Support engineers are under a lot of tension whenever they are away from the screen, even for a second.
    • At the same time, being away for a short time is not something that can be fully avoided.
    • This is a lose-lose situation for the organization as well as the employee.
    • The support engineer faces anxiety of failing to meet SLAs because he/she didn’t see the incident on time and the organization suffers the consequences.
    • Multi-channel Alerting through MATRIX24x7
    • MATRIX24x7 eliminates the need to spend every waking hour in front of the display.
    • Have autonomous health rules to detect problems and notify the support team via your favourite medium.
  5. Lack Of Enthusiasm In Dealing With Repetitive Tasks
    • Repeating the same set of tasks over and over again will kill the enthusiasm of the support engineer.
    • Negatively impacts the SLAs.
    • High turnover of support personnel, which is a strain on the support training team.
    • How Remote Working Aggravates the Problem
    • Losing attention to detail and carrying out the wrong rectification steps on an issue becomes more likely, which could be detrimental.
    • Further negatively impacts the SLAs.
    • Health Rules and Self-Healing through MATRIX24x7
    • Automated self-healing rules address known and recurring issues, eliminating time wasted on such issues.
    • Self-healing rules for recurring issues can be fine-tuned based on previous experience.
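The health-rule and self-healing idea described above can be sketched in a few lines. The rule names, snapshot fields and thresholds below are illustrative assumptions, not MATRIX24x7’s actual API:

```python
from dataclasses import dataclass
from typing import Callable, Optional

@dataclass
class HealthRule:
    name: str
    unhealthy: Callable[[dict], bool]                     # True when the snapshot looks bad
    remediation: Optional[Callable[[dict], str]] = None   # optional self-healing action

def evaluate(rules, snapshot):
    """Return a (rule name, action) pair for every rule the snapshot violates."""
    alerts = []
    for rule in rules:
        if rule.unhealthy(snapshot):
            # Self-heal when a remediation is defined; otherwise just notify.
            action = rule.remediation(snapshot) if rule.remediation else "notify"
            alerts.append((rule.name, action))
    return alerts

# Example rules: self-heal a full disk, but only notify on high CPU.
rules = [
    HealthRule("disk-full", lambda s: s.get("disk_pct", 0) > 90,
               lambda s: "purge-temp-files"),
    HealthRule("cpu-high", lambda s: s.get("cpu_pct", 0) > 95),
]
```

A periodic task would collect a device snapshot, run `evaluate`, dispatch the `"notify"` results to the team’s preferred channel and execute the remediation actions automatically.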
Isuru Samarasinghe

Associate Tech Lead/RMM & IoT Team

Towards SOC Compliance

Continuously creating business value while handling customers’ data securely is a must for SaaS (Software as a Service) providers to survive in today’s rapidly changing, highly connected and fiercely competitive business environment. Even though organizations follow their own techniques and processes, customers expect some kind of mutually recognized guarantee that their data is handled securely. Achieving Service Organization Control (SOC) compliance is one such guarantee. Hence, SaaS providers should at least achieve SOC 2 compliance to assure their customers that their services meet data security requirements.

As a technology solutions company that wants to build close relationships with its clients (SaaS providers), it is important to implement certain reports and adhere to certain procedures to help our clients achieve SOC 2 compliance certification. However, the reports that are needed (e.g. User Snapshot Report, Change Client Report, Failed Login Report) and the processes to follow (e.g. encrypting selected sensitive fields, employee deletion and obfuscation under GDPR, scrubbing data, decommissioning canceled clients, maintaining audit logs) are decided by the service providers based on their specific business practices, to comply with the trust service principles they select out of the five.

Pic 1: Five Trust Service Principles

(Note: the SOC trust service principles are security, availability, processing integrity, confidentiality and privacy. For SOC 2 compliance it is not mandatory to have all of them in place.)

Readiness Testing

Even though the SOC 2 audit is done by an independent CPA (Certified Public Accountant) or accountancy organization, we (QA engineers) can also do basic readiness testing by verifying the newly created reports/scripts/logs, sensitive data encryption and so on. These small actions give SaaS providers significant confidence to move forward with the SOC 2 audit, as they reduce the risk of exceptions and failures. Furthermore, the QA team needs to make sure the newly added changes do not affect the functionality and performance of the application.

Validating Sensitive Fields Encryption

Encrypting all identified sensitive fields is a straightforward way to address two trust service principles: privacy and confidentiality. Data can be encrypted as required after properly identifying all the tables (in both the systemDB and the clientDB) that include the identified sensitive fields. Those tables/fields can then be validated by querying them to check whether ciphertext, instead of the actual data, is shown in the required fields. If you also want to check whether the correct data is stored as ciphertext, that can be done easily using SSMS 2017: a certificate is created on the database server when fields are encrypted using ‘Always Encrypted’, and exporting that certificate and importing it to your machine lets you view either the actual data or the ciphertext, depending on the mode selected.
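One quick readiness check a QA engineer can script is scanning queried field values for anything that still looks like recognizable plaintext. The patterns below are illustrative assumptions (email and US SSN shapes), not an exhaustive catalogue of sensitive data:

```python
import re

# Hypothetical plaintext shapes that should never appear in an encrypted column.
PLAINTEXT_PATTERNS = {
    "email": re.compile(r"[^@\s]+@[^@\s]+\.[a-zA-Z]{2,}"),
    "ssn": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
}

def looks_like_plaintext(value: str):
    """Return the names of every pattern the value matches (empty list = looks encrypted)."""
    return [name for name, pat in PLAINTEXT_PATTERNS.items() if pat.search(value)]
```

Running such a check over every row of a supposedly encrypted column flags any value that slipped through the encryption step; ciphertext (e.g. a hex or base64 blob) matches none of the patterns.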

Validating Intrusion Detection and Prevention

Intrusion detection and prevention can be done via network and application firewalls, custom rules and two-factor authentication, to achieve the security principle. Two-factor authentication requires users to provide two forms of verification when logging into an account, such as a password and a one-time passcode (OTP) sent to a mobile device. It strengthens intrusion prevention by adding an extra layer of protection to the application’s sensitive data. Verification can be done using an application that supports time-based one-time password token generation (a TOTP token generator). Permission structure, UI validations, the reset password flow, and enabling and disabling 2FA are a few areas that need to be taken into consideration when testing 2FA.
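For testing the OTP side of 2FA, a TOTP verifier can be exercised deterministically by fixing the clock. This sketch follows the standard RFC 6238 algorithm (HMAC-SHA1 variant); the secret and drift window are illustrative:

```python
import hashlib
import hmac
import struct
import time

def totp(secret: bytes, for_time: int, step: int = 30, digits: int = 6) -> str:
    """RFC 6238 time-based one-time password."""
    counter = int(for_time) // step
    digest = hmac.new(secret, struct.pack(">Q", counter), hashlib.sha1).digest()
    offset = digest[-1] & 0x0F                                      # dynamic truncation
    code = struct.unpack(">I", digest[offset:offset + 4])[0] & 0x7FFFFFFF
    return str(code % (10 ** digits)).zfill(digits)

def verify_totp(secret: bytes, candidate: str, now=None,
                step: int = 30, window: int = 1) -> bool:
    """Accept codes within +/- `window` steps of the current time, for clock drift."""
    now = int(time.time()) if now is None else int(now)
    return any(hmac.compare_digest(totp(secret, now + i * step, step), candidate)
               for i in range(-window, window + 1))
```

In practice a library such as pyotp covers this; the point for QA is that passing a fixed `now` makes 2FA test cases repeatable instead of depending on the wall clock.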

Maintaining a Failed Login Report will also help to keep track of all the failed login attempts. This information can be used to investigate and make quick and informed decisions about detected intrusions.

As QA engineers, we need to make sure that the failed login report includes accurate and relevant information on all failed login attempts for all types of user accounts, regardless of active/inactive status. The related data can be stored in the Eventlog table, and the related XML can be viewed by querying it as needed.

Pic 2: Sample XML related to the failed login report
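Checks on the failed login XML can themselves be automated with a small parser. The element names in this sample are hypothetical, the actual Eventlog schema will differ:

```python
import xml.etree.ElementTree as ET

# Hypothetical failed-login event, standing in for a row from the Eventlog table.
SAMPLE_EVENT = """<event type="FailedLogin">
  <user>jdoe</user>
  <timestamp>2021-03-15T08:42:11Z</timestamp>
  <reason>InvalidPassword</reason>
  <sourceIp>10.0.0.17</sourceIp>
</event>"""

def parse_failed_login(xml_text: str) -> dict:
    """Extract the fields a QA assertion would check from one event's XML."""
    root = ET.fromstring(xml_text)
    return {
        "user": root.findtext("user"),
        "timestamp": root.findtext("timestamp"),
        "reason": root.findtext("reason"),
        "source_ip": root.findtext("sourceIp"),
    }
```

With a parser like this, a test can query the table, parse each event and assert that every required field is present and accurate, instead of eyeballing the XML.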

Testing Audit Logs

To achieve SOC compliance, monitoring authorized and unauthorized system configuration changes and user access level changes is essential. Adding/editing permissions of user access levels and changing already defined permission structures are a few activities that need attention. The QA team can validate the related XML and its content by performing such activities.

Pic 3: Sample XML related to changing report permissions

Validating Decommissioning Cancelled Client Process

Decommissioning inactive/canceled clients from the application and removing all their historical data is essential to satisfy security requirements around data retention policies. The development team can come up with a script to remove the data from the system database. This script can be given to the SaaS providers so that they can execute it whenever they need to decommission canceled or inactivated clients.

The following steps can be used to validate the decommissioning of canceled clients:

  1. Inactivate/cancel the client (via the application or using their license)
  2. Execute the script to remove the historical data
  3. Verify that no record remains in the systemDB after decommissioning (basically, check that the deleted clientID no longer exists in the systemDB)
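Step 3 can be scripted against the database. The sketch below uses an in-memory SQLite table as a stand-in for the systemDB; the real table and column names depend on the application schema:

```python
import sqlite3

def make_system_db():
    """Build an in-memory stand-in for the systemDB (hypothetical schema)."""
    conn = sqlite3.connect(":memory:")
    conn.execute("CREATE TABLE clients (clientID INTEGER, name TEXT)")
    conn.executemany("INSERT INTO clients VALUES (?, ?)",
                     [(1, "ActiveCo"), (2, "CancelledCo")])
    return conn

def decommission(conn, client_id):
    """Stand-in for the development team's cleanup script."""
    conn.execute("DELETE FROM clients WHERE clientID = ?", (client_id,))

def leftover_rows(conn, client_id):
    """QA check: the decommissioned clientID should not exist anywhere."""
    cur = conn.execute("SELECT COUNT(*) FROM clients WHERE clientID = ?", (client_id,))
    return cur.fetchone()[0]
```

In the real validation, the `leftover_rows` query would be repeated for every table that keys on the clientID, asserting zero rows for the decommissioned client and untouched rows for active ones.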

Apart from just validating reports/XMLs, as proactive QA engineers we need to follow up on modifications to the scrubbing scripts, sensitive data encryption, event logs and audit logs (with their XML) whenever the schema changes. These actions help to increase our value significantly.

Moreover, if we want our clients to achieve SOC 2 compliance, we need to adhere to the agreed process, because a SOC audit validates the process itself. Below are a few examples of such agreed rules/processes in terms of quality assurance.

  • All the production candidates should be QA verified to do a production release.
  • All the changes should be approved by an authorized party.
  • QA sign-off should be given as a document, not via email, and that document should be attached to the production release ticket.

End-to-end SOC compliance testing cannot be done by QA engineers alone, especially an offshore QA team. However, they can test the newly implemented two-factor authentication, sensitive field encryption, audit logs, scrubbing scripts, decommissioning scripts for canceled clients, and a few other reports related to intrusion detection, disaster recovery and security incident handling.

These small actions add great value for the customers by significantly increasing their confidence to proceed with the SOC 2 audit. So why not proceed with these few steps?


Nuwani Navodya

Associate Lead QA Engineer

Execute Testing on Autonomous Robot Platform

Robotics is an interdisciplinary branch of engineering and science that includes mechanical engineering, electronic engineering, information engineering, computer science, and much more. Robotics deals with the design, construction, operation, and use of robots, as well as computer systems for their control, sensory feedback, and information processing.

A robot is a machine programmable by a computer capable of carrying out a complex series of actions automatically. It can be autonomous or semi-autonomous. An external control device can guide robots or the control may be embedded within.

When we consider robot test engineering, mobile autonomous robots are nowadays progressively entering the mass market. Thus, manufacturers have to perform quality assurance tests on entire series of robots, so tests should be repeatable and as automated as possible. Testing mobile robotic systems is a challenge because new features, such as non-determinism of inputs, communication among components and time constraints, must be considered. Simulations have been used to support the development and validation of these systems. Coverage testing criteria can contribute to this scenario by adding mechanisms for measuring quality during the development of the systems.

When we move from software test engineering to robot test engineering we mainly focus on the following five areas.

  1. Stress Testing
  2. Performance Testing
  3. Longevity Testing
  4. Security Testing
  5. Functional Testing

Let’s take a look at each, one by one:

Stress Testing

Stress testing is a type of software testing that verifies the stability and reliability of a system. We use stress tests in robot test engineering to determine the robustness and error handling of the robot, as well as its backend, under extremely heavy load conditions. The following areas are addressed under stress testing.

  • Highly Accelerated Life Test (HALT)

    HALT is a technique designed to discover the weak links of a product. It can trigger failure modes faster than traditional testing approaches. Rather than applying low stress levels over long durations, HALT applies high stress levels for short durations, well beyond the expected field environment. Hardware units used in the robot, like electronics, drive systems and sensors, are tested under HALT by driving the robot at different speed limits. Software components, for example the robot’s control system (the brain of the robot), communication between the robot and the backend, and the robot’s monitoring and managing tools, are validated under extreme workload or stress.

    When we come to the control node stress test, we stress the different states of the robot and test how it behaves under stress. Failures triggered by the above-mentioned tests do not have pass/fail results; they require root cause analysis and corrective action to achieve optimum value from testing. This creates the ability to learn more about design and material limitations as well as bottlenecks, and provides opportunities to continually improve the design before bringing the robot to market. As most items are only as good as their weakest link, the testing should be repeated to systematically improve the overall robustness of the product.

  • Environmental conditions

    Environmental conditions testing measures the performance of equipment under specified environmental conditions such as temperature, static charge, obstacles and climate. By running the robot under different environmental conditions, we test how environmental factors affect the robot’s smooth functioning and how much stress they create on the robot’s hardware as well as its software.

  • Benchmarking values

    Considering the Mean Time Between Failures (MTBF) and the performance statistics of each component, a benchmark is derived for the robot.

    A couple of examples are;

    • The battery charge retention time
    • Motor gear wear
    • RF antenna interference
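As a simple illustration of how such a benchmark is derived, MTBF is just the total operating time divided by the number of failures observed in that period; the figures below are placeholders:

```python
def mtbf(operating_hours: float, failures: int) -> float:
    """Mean Time Between Failures = total operating time / number of failures."""
    if failures == 0:
        raise ValueError("MTBF is undefined with zero observed failures")
    return operating_hours / failures

def failure_rate(operating_hours: float, failures: int) -> float:
    """Failures per hour: the reciprocal of MTBF."""
    return failures / operating_hours
```

For example, a drive motor that logged 1,000 operating hours with 4 failures benchmarks at an MTBF of 250 hours, which then feeds the maintenance schedule for that component.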

Performance Testing

Performance testing is the process of determining the speed, responsiveness, and stability of a computer, network, software program or device under a workload.

In robot test engineering, we do performance tests to ensure the performance of the software, the network and the hardware.

Under software, QA should ensure the performance of the software resources used, such as the MQTT broker, and the responsiveness of the monitoring and troubleshooting software.

To ensure network performance, QA should validate teleoperation control, which is used to control the robot externally from a remote location. This involves accessing and controlling the robot in real time (in both autonomous and manual modes) as part of the network performance test.

Under hardware performance, QA should ensure the performance of the robot’s sensor system, control system and drive system.
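A minimal harness for the responsiveness side of these tests records per-request latencies and reports a tail percentile such as p95. The operation under test and the run count here are placeholders:

```python
import time

def measure_latencies(operation, runs: int = 50):
    """Time `operation` repeatedly; return the list of elapsed seconds."""
    samples = []
    for _ in range(runs):
        start = time.perf_counter()
        operation()
        samples.append(time.perf_counter() - start)
    return samples

def p95(samples):
    """95th-percentile latency (nearest-rank method)."""
    ordered = sorted(samples)
    idx = max(0, int(round(0.95 * len(ordered))) - 1)
    return ordered[idx]
```

Against a real system, `operation` would be an MQTT publish/acknowledge round trip or a teleoperation command, and the p95 value would be compared against the agreed responsiveness benchmark.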

Longevity Testing

A longevity test is an operational testing scheme that uses a baseline work efficiency specification to evaluate large enterprise applications and hardware performance. It is applied for error checking after a live operational period of heavy usage, and its extent is contingent on complexity and size. In robot test engineering, QA should carry out longevity tests to ensure the following.

  • Hardware (sensors & electronics)

    How the hardware performs in the long run, benchmarking values for maintenance.
  • Drive system (performance in continuous operation)

    How the robot’s control node and navigation system perform in the long run, benchmarking values for maintenance.
  • Environment (parameters that change with runtime)

    How environmental factors (temperature, static charge, obstacles, climate) affect the robot in the long run, benchmarking values for maintenance.
  • Software (performance over time)

    How the backend and the managing and monitoring systems behave in the long run, identifying bottlenecks to optimize the software.

Security Testing

Security testing is a process intended to reveal flaws in the security mechanisms of an information system that protect data and maintain functionality as intended. When we come to robot test engineering, security is a very sensitive and important aspect to verify.

First, the reasons for an attack need to be identified. Then, for each reason, the possible methods of attack need to be thought through. The next step is finding out whether precautions have already been taken against those methods. However, a system cannot be 100% secure, for unavoidable reasons. One such instance would be an attacker trying to remove the nuts and bolts of a robot; this is identified as a threat that cannot be avoided or guarded against. A scope is laid out in the initial stages and only the in-scope threats are investigated; threats that are out of scope are left unattended.

Functional Testing

When we consider functional testing, there are four main areas:

  • Software
  • Navigation/Autonomous functions
  • Use cases
  • Simulation

Under software testing, we test the robot’s control node, the brain of the robot, by testing its every state with all possible scenarios. We also test the robot’s utilities, like the RDL (Report Definition Language) used to communicate between the robots and the backend. In addition, QA should validate the software backend as well as the robot’s managing and monitoring units, like the Troubleshooter application.

The robot’s navigation stack plays a major role. The ROS (Robot Operating System) navigation stack helps mobile robots move from place to place reliably. The job of the navigation stack is to produce a safe path for the robot to execute, by processing data from odometry, sensors and an environment map.
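The core idea of producing a safe path over an environment map can be illustrated with a breadth-first search on an occupancy grid. This is a teaching sketch, not the actual ROS navigation stack (which layers costmaps with global and local planners):

```python
from collections import deque

def plan_path(grid, start, goal):
    """Shortest 4-connected path on an occupancy grid (0 = free, 1 = blocked).

    Returns a list of (row, col) cells from start to goal, or None if the
    goal is unreachable.
    """
    rows, cols = len(grid), len(grid[0])
    queue = deque([start])
    came_from = {start: None}           # also serves as the visited set
    while queue:
        cell = queue.popleft()
        if cell == goal:
            path = []                   # walk the parent links back to start
            while cell is not None:
                path.append(cell)
                cell = came_from[cell]
            return path[::-1]
        r, c = cell
        for nr, nc in ((r + 1, c), (r - 1, c), (r, c + 1), (r, c - 1)):
            if (0 <= nr < rows and 0 <= nc < cols
                    and grid[nr][nc] == 0 and (nr, nc) not in came_from):
                came_from[(nr, nc)] = cell
                queue.append((nr, nc))
    return None
```

For QA, the useful property is that every cell of the returned path can be asserted to be obstacle-free, which is exactly the kind of invariant validated when testing the navigation stack against a known store map.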

QA validates the navigation stack and the autonomous functions of the robot to ensure that the robot navigates safely and correctly while avoiding blockers. A robot is designed for specific use cases; in the case of AZIRO, the robot’s primary use case was to scan RFID inventory while navigating the store floor. QA should be able to thoroughly validate this use case, with special emphasis on the algorithm the robot uses for navigation and on testing the edge cases.

When we do robot functional testing, QA uses a testing environment. In the test field, it is first required to assure the interoperability of hardware and software by running the robot. Upon fine-tuning the robot’s performance, the next step is to create a more advanced testing environment, very similar to the actual store environment.

Finally, the robot is deployed in the real-world environment, and a final testing round is carried out to ensure the robot works as expected.

Let’s go to the deeper level of robot testing in the next chapter.

To be continued….



A Methodology for Testing Mobile Autonomous Robots

Gihani Naligama

Gihani Naligama is a QA Engineer at Zone24x7

Five Things to Consider When Selecting your RFID Solutions Partner

The selection of an appropriate RFID solutions partner is a key consideration for any organization planning to deploy RFID. While there are a number of leading solution providers, reputation is not the only factor that should be considered when making this choice; a lot depends on the project approach adopted by the solutions provider. The following are some key factors to consider.

  1. If the business hasn’t already carried out a comprehensive requirements study to validate the choice of RFID, this should be the first task of the consultant or the RFID solutions provider. A previous post highlighted some of the factors to be considered when deciding whether RFID is for you. A comprehensive study should establish the current as well as the future needs of the business which need to be fulfilled with RFID.
    • Identify the data types and frequency at which such data needs to be captured
    • How and where would this data be stored?
    • Who are the consumers of this data and how will it be shared?
  2. If the requirements study phase establishes a clear need for the business to deploy RFID, it is advisable that the solutions provider initiate a Proof of Concept (PoC) study. The objective of this study is to test key technology decisions and assumptions to ensure the achievement of all the critical success criteria prior to a full-scale roll out.
    • Environmental factors tend to significantly impact the design decisions of RFID systems. The presence of metal surfaces, aqueous liquids and other factors which contribute to a ‘noisy environment’ may adversely impact the readability of RFID tags. These need to be extensively investigated in the solution architecture & design by the solutions team.
    • Different combinations of readers, antennas and tags are extensively tested by the solutions team to identify the optimal combinations and configurations suited for the specific business needs. A well planned and executed proof of concept study is invaluable to make the right technology choices and ensure the right solution.
    • RFID solution experts with the right skills and relevant past experience in implementing real world solutions will consider all of the above and any other factors which might impact the ultimate success of the project.
    • Finally, all the key findings from PoC study should be collected and presented to key stakeholders to create alignment prior to full-scale deployment.
  3. Upon securing the approval to move forward, many of the purchase decisions need to be made. An RFID solutions partner with the right global partnerships can source the equipment and components and also expedite the installation of the RFID infrastructure minimizing impact on ongoing business activities.
  4. Developing a comprehensive solution is not only the job of RFID experts. The RFID solutions provider should also be prepared to make available multiple other skills including Business Analysts, Test Engineers, Project and Program Managers and Software Engineers to craft the total solution.
  5. From product tagging to operating the RFID equipment, business users must be methodically trained by the solutions team on both the equipment and the overall solution, ensuring proper change management and a smooth transition into RFID.

Zone24x7 counts over a decade of experience in deploying RFID solutions across multiple industries. We have custom designed hardware, software, robotics and fully integrated solutions to cater to the unique and challenging business needs of our clients. This blog post outlines an approach which we have successfully adopted during our many client engagements to maximize the success of our projects.

Nuwan Weerasinghe

Head of Marketing