The DevOps era is here, and with it comes the need for testing on many platforms: mobile, desktop web, IoT, chatbots, and more. With the current pace of innovation, it's difficult for automated testing to keep up, whether for mobile, cross-browser, or desktop apps. Two or three years ago, organizations were releasing applications a few times a year; today, they release a few times a week, or even a few times a day. With such a relentless release schedule, teams need to be selective about what they include in their CI test cycles.
The Current State of Automated Testing for Responsive Web Design
For mobile, we have the leading open-source frameworks such as Appium, Espresso, and XCUITest. For web, open-source frameworks such as Selenium, Protractor, Nightwatch.JS, and WebDriver.IO are by far the leading tools for developers and testers to cover their web and responsive-web scenarios. So, the tools for mobile and web are out there; what's the problem, then?
There Are Five Problems Associated with Automated Testing in 2017:
- Lack of structured testing and guidelines
- Unstable test labs
- Quantity of tests versus quality
- Test framework limitations
- Market dynamics and platform coverage
Let’s take a deeper look at each one of these challenges:
1. Structure & Guidelines
Teams that use the aforementioned test frameworks often do not implement reusable tests that leverage the page object model (POM). In addition, critical object locators often aren't robust enough. Teams also develop test cases, whether through BDD or traditional coding, without considering the consequences: no tags or anchors are embedded in the test cases to let developers or test engineers quickly drill down and resolve issues. Test cases are code and should be treated as code, with everything that implies (source control, maintenance, etc.). Finally, when testing digital platforms, teams need to plan for the unplanned and ensure there are no surprises within the CI automated build cycle.
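The page object model mentioned above can be sketched in a few lines. This is a minimal illustration, not the article's own code: the `LoginPage` class, its `data-test-id` selectors, and the stub driver are all hypothetical. The point is that locators live in one class, so a UI change is fixed in one place rather than in every test.

```python
# Hypothetical page object; selectors are examples using stable, dedicated
# test attributes rather than layout-dependent XPath.
class LoginPage:
    USERNAME = ("css selector", "[data-test-id='username']")
    PASSWORD = ("css selector", "[data-test-id='password']")
    SUBMIT   = ("css selector", "[data-test-id='login-submit']")

    def __init__(self, driver):
        self.driver = driver

    def login(self, user, password):
        # Tests call this one method; locator details stay hidden here.
        self.driver.find_element(*self.USERNAME).send_keys(user)
        self.driver.find_element(*self.PASSWORD).send_keys(password)
        self.driver.find_element(*self.SUBMIT).click()


# Stub driver so the sketch runs without a browser; a real test would pass
# a Selenium or Appium WebDriver instance instead.
class _StubElement:
    def __init__(self):
        self.typed, self.clicked = [], False
    def send_keys(self, value):
        self.typed.append(value)
    def click(self):
        self.clicked = True

class _StubDriver:
    def __init__(self):
        self.elements = {}
    def find_element(self, by, selector):
        return self.elements.setdefault((by, selector), _StubElement())

driver = _StubDriver()
LoginPage(driver).login("alice", "s3cret")
```

A Selenium or Appium driver exposes the same `find_element` call, so the page object works unchanged against a real browser or device.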
2. Unstable Test Labs
Developers and feature teams invest a lot in their code and their tests; however, if at the critical moment of CI execution lab tests fail due to a disconnected or improperly initialized device, the entire workflow breaks.
Teams need to realize the importance of a robust test lab, whether local or in the cloud; they also need to know the initial state of each platform in the lab prior to executing tests. Planning a post-commit test execution through CI while the target devices or desktop VMs are in an unknown state, disconnected, or scheduled by a different team for another task is bad practice and a risk to the overall workflow.
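A pre-flight check of lab state can be sketched as below. The inventory, device names, and state strings are made-up examples, not a real lab API; the idea is simply to schedule tests only on devices reported to be in a known-good state, and surface the rest instead of silently dispatching to them.

```python
# Illustrative known-good states; a real lab would define its own.
KNOWN_GOOD = {"connected", "idle"}

def ready_devices(lab_inventory):
    """Return only the devices whose reported state is safe to schedule on."""
    return [d["id"] for d in lab_inventory if d["state"] in KNOWN_GOOD]

# Hypothetical inventory snapshot taken just before CI dispatch.
inventory = [
    {"id": "pixel-7",    "state": "connected"},
    {"id": "iphone-14",  "state": "offline"},    # disconnected: skip it
    {"id": "win11-vm",   "state": "reserved"},   # taken by another team
    {"id": "galaxy-s23", "state": "idle"},
]

targets = ready_devices(inventory)
```

A CI job would fail fast (or re-queue) when `targets` falls below the coverage it needs, rather than start a run that is doomed by lab state.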
3. Quantity of Tests Versus Quality
As stated above, teams need fast feedback, so a long, inefficient test suite misses the target. When tests are not well structured and there is a lack of reusability, teams often find themselves executing too many tests, sometimes duplicates that don't cover the proper use cases and don't bring much value to either developers or testers.
The point is not to ask teams to reduce their testing efforts but rather to focus the tests on the correct user scenarios and the correct target platforms, and to learn and improve from previous test runs. Good time-saving practices include using a centralized repository of tests together with predefined macros, and shrinking the setup of your environments. There is no need to run a long set of prerequisites when the only thing you're trying to validate is a simple piece of functionality.
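Shrinking environment setup often comes down to paying for expensive bring-up once and sharing it across tests. This is a minimal sketch using the standard library's `functools.lru_cache`; `shared_env` and its counter are placeholders for real setup code (test-framework fixtures, such as pytest's session-scoped fixtures, achieve the same effect).

```python
import functools

SETUP_RUNS = {"count": 0}  # just to demonstrate setup happens once

@functools.lru_cache(maxsize=1)
def shared_env():
    """Expensive environment bring-up, executed once and then reused."""
    SETUP_RUNS["count"] += 1
    return {"base_url": "https://staging.example.test", "ready": True}

def test_login_smoke():
    env = shared_env()   # first caller pays for setup
    assert env["ready"]

def test_search_smoke():
    env = shared_env()   # subsequent callers reuse the cached environment
    assert env["ready"]

test_login_smoke()
test_search_smoke()
```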
In addition, deciding what to automate should be a risk-managed decision. Not every test needs to be automated, and certainly not every test needs to be part of the CI commit cycle. A test scenario that is either hard to automate or not designed, from a development standpoint, to be easily testable may not be worth the effort, since it can turn out to be flaky and expensive to maintain going forward. There are some great practices for overcoming complex automated scenarios, such as the following example from Amir Rozenberg for audio-based testing.
4. Test Framework Limitations
The current testing landscape is falling behind when it comes to compliance with the latest technologies. Automating visual testing or reliable fingerprint authentication on mobile and desktop web, as well as chatbots and other voice interfaces, is still a challenge for existing test frameworks.
As can be seen in the above comparison table, which changes constantly, current test frameworks complement each other in many cases. The recommendation for practitioners is to leverage more than one framework throughout the SDLC, based on requirements and skill sets, in order to achieve maximum test coverage. Developers might find that, as part of the mobile SDLC, frameworks such as XCUITest and Espresso are more appealing due to their performance, the fact that they are embedded in their IDEs and workflows, and their ability to automate more scenarios than tools such as Appium. On the other hand, Appium can complement functional testing with more system-level testing, additional interoperability, and broader automation.
5. Market Dynamics and Platform Coverage
Automating everything isn't the right goal. Automating what's important, on the other hand, will provide the best return on investment for both developers and testers. The correct mix of tests versus platforms depends on various factors:
- Existing market analytics – which platforms, which user patterns, and which geos are the most relevant and used most by your customers
- Test data analytics – which test cases aren't flaky, find the most issues, and cover the key features of your app or web site
- Market coverage – ensure that you are testing against the optimal set of devices, desktop browsers, IoT devices, and other platforms used by your end users, and fine-tune and calibrate your lab as the market changes.
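The test-data-analytics factor above can be sketched as a simple ranking: keep the stable tests and order them by how many real issues they have caught. The history records, field names, and the 5% flakiness threshold are all made-up illustrations, not a prescribed policy.

```python
def ci_worthy(history, max_flaky_rate=0.05):
    """Keep stable tests, ordered by how many real issues each has caught."""
    stable = [t for t in history if t["flaky_rate"] <= max_flaky_rate]
    return [t["name"] for t in sorted(stable, key=lambda t: -t["bugs_found"])]

# Hypothetical per-test history mined from previous runs.
history = [
    {"name": "checkout_flow", "bugs_found": 9, "flaky_rate": 0.01},
    {"name": "legacy_report", "bugs_found": 1, "flaky_rate": 0.30},  # too flaky
    {"name": "login_smoke",   "bugs_found": 4, "flaky_rate": 0.00},
]

selected = ci_worthy(history)
```

A real pipeline would feed this from its test-results database and revisit the threshold as the suite evolves.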
To ensure a successful automation strategy, and that your teams can shift to a daily release cadence, or even faster, you need to take these five issues into consideration. Your test suite should include only the most relevant test cases, running continuously per commit, automatically, in parallel, on the key target platforms.
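Per-commit, parallel execution can be sketched with the standard library alone. The two test callables here are placeholders (one passes, one simulates a failure), not a real framework; a CI job would fail the build when the returned failure list is non-empty.

```python
from concurrent.futures import ThreadPoolExecutor

def run_suite(tests):
    """Run the given tests concurrently; return the names of any that failed."""
    with ThreadPoolExecutor(max_workers=4) as pool:
        futures = {name: pool.submit(fn) for name, fn in tests.items()}
    # The with-block waits for completion, so exception() does not block.
    return [name for name, fut in futures.items() if fut.exception() is not None]

# Placeholder tests standing in for real per-commit smoke checks.
tests = {
    "login_smoke":   lambda: None,     # passes
    "checkout_flow": lambda: 1 / 0,    # simulated failure
}

failed = run_suite(tests)
```

In practice the workers would dispatch to separate devices or browsers in the lab (for example, via per-worker WebDriver sessions) rather than run in-process.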
The open-source community should play an active role in helping practitioners build more robust and sustainable automation. Closing functionality gaps in supporting new OS features, as well as helping with cross-platform testing, is something that can be improved. In addition, the community can be more proactive in advising on the right tool stacks and combinations for the use cases teams have, such as pairing Jasmine with other frameworks.