Drive Faster Quality Analysis Through Tags and Customized Test Code
Quality visibility is all about logic. It should be fast, easy, and effective: if the dashboard or report you’re using gives you the right hint about your next move, then you’re all set. When it comes to gaining fast and efficient quality analysis, these are the pains we usually hear:
- Test executions that are long, as well as test cases that lack context
- Planning test executions based on trends and insights is a challenge – e.g., which tests are finding more bugs than others?
- Flaky tests are an ongoing pain in the digital space due to inconsistent pass/fail results across platforms
- On-demand quality dashboards that reflect app quality per CI job, app build, tested functional area, etc.
But if the call to action is missing from your test execution data, there are a few steps you can take to build the right context to support any decision-making process. Optimal visibility is the ability to differentiate between an insight and “actionable data”.
- An insight, when available, usually describes a high-level finding: “we know we have a problem somewhere; we need to check where it is”. It’s clear that the way to resolution requires a few steps:
- High level investigation
- Triaging the problem(s)
- Root cause analysis (RCA)
- “Actionable data” means that teams can name the problem, and from there they will know (or can work out) what needs to be done
“Easy to say, hard to convey” is what we hear from some professionals when we discuss what it takes to build the ultimate quality visibility. They usually refer to the need to digest a large number of executions, or the burden of analyzing a long execution report full of commands and artifacts.
Recommended Approach – Reporting Test Driven Development (RTDD)
In today’s digital space, organizations adopt open-source tools like Selenium and Appium, leverage execution frameworks like TestNG, and then drive these executions through Jenkins as part of their CI workflows.
To get back to the problem statement above: if such executions pile up into thousands of scripts that can’t be distinguished by platform, functional area, or customized property, this translates into long post-execution analysis that impacts every persona involved in the software release cycle.
Shortening the feedback loop means that teams can optimize the release process and their response time to issues, or even get fast feedback on new code changes, and with that truly drive Agile practices within their organization.
When customers develop their products with quality analysis in mind, they can increase release velocity by removing bottlenecks around triaging, quality insights, and more.
Let’s understand what the term RTDD means: this methodology plugs into existing SDLC processes, with one simple change – adding relevant tags, logical steps, and other customized properties and filters to the test suites.
When test developers author their code and apply the right tags (see Table 1 below for a recommended list) as part of their test scenarios, they can:
- Filter executions by platform, functional area, build, or any other tag
- Generate on-demand dashboards to get quality insights that drive decisions
- Speed up triaging of failures per platform or functional area
- Plan and optimize test execution cycles that cover only what matters for that specific execution
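In TestNG, tags like these map naturally onto groups (e.g. `@Test(groups = {"ios", "regression"})`). The plain-Java sketch below (all test names and tags are hypothetical, and no framework is assumed) shows the core idea: once tests carry tag metadata, planning a filtered execution cycle is a simple lookup.

```java
import java.util.List;
import java.util.Map;
import java.util.Set;
import java.util.stream.Collectors;

// Minimal sketch: a registry of tests and their tags (names are hypothetical),
// plus a filter that selects tests for a planned execution by tag.
public class TagFilter {
    static final Map<String, Set<String>> TESTS = Map.of(
            "loginTest",    Set.of("ios", "sanity"),
            "checkoutTest", Set.of("android", "regression"),
            "searchTest",   Set.of("ios", "regression"));

    // Return all test names carrying the given tag, sorted for stable output
    public static List<String> withTag(String tag) {
        return TESTS.entrySet().stream()
                .filter(e -> e.getValue().contains(tag))
                .map(Map.Entry::getKey)
                .sorted()
                .collect(Collectors.toList());
    }

    public static void main(String[] args) {
        // Plan an iOS-only execution cycle
        System.out.println(withTag("ios")); // prints [loginTest, searchTest]
    }
}
```

The same lookup, run per platform or per functional area, is what powers the filtered dashboards and execution plans described above.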
Perfecto has built a thorough best practices document that combines accumulated experience from enterprise customers, software experts, and others. It guides practitioners on how to take existing test automation code – whether written in Selenium or Appium in the supported dev languages, or in commercial tools like HPE UFT – and convert it into RTDD-compliant scripts. The end result of such adoption can be seen in the screenshots below, showing a summary dashboard, a filtered execution flow, a single test report based on platform and tags, and more.
Logical steps documentation and tagging:
Logical steps with tags shown in a single test report for faster drill down:
Quality Dashboard with grouping by Tags and Devices:
Filtering on the below dashboard by failed platforms (e.g. All iPhone 5S):
The above RTDD approach should allow managers, test automation engineers, and developers of UI/unit and other CI-related tests to extend a legacy test report, a TestNG report, or another format into a more customizable test report that, as demonstrated above, can help them achieve the following outcomes:
- Better structured test scenarios and test suites
- Use tagging from early test authoring as a method for faster triaging and prioritizing fixes
- Shift tag-based tests into planned test activities (CI, Regression, Specific functional area testing, etc.)
- Easily filter big test data and drill down into specific failures per test, platform, test result, or group
- Eliminate flaky tests through high-quality visibility into failures
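With TestNG groups as the tagging mechanism, shifting tag-based tests into planned activities (a CI sanity run, a nightly regression, a platform-specific cycle) becomes a suite-file filter. A sketch of such a suite file follows; the suite, group, and class names are illustrative:

```xml
<!DOCTYPE suite SYSTEM "https://testng.org/testng-1.0.dtd">
<suite name="nightly-regression">
  <test name="ios-regression">
    <groups>
      <run>
        <!-- Run only tests tagged for this planned activity -->
        <include name="regression"/>
        <include name="ios"/>
        <!-- Keep known-flaky tests out of the planned cycle -->
        <exclude name="flaky"/>
      </run>
    </groups>
    <classes>
      <class name="com.example.tests.CheckoutTests"/>
      <class name="com.example.tests.SearchTests"/>
    </classes>
  </test>
</suite>
```

Each planned activity gets its own suite file, so the same tagged test code serves CI, regression, and functional-area runs without duplication.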
The result of all the above is a methodological RTDD workflow that is much easier to maintain than before.
Get the best practices guide HERE