3 Steps To Improving Build Quality Through Automated Testing


As you develop your app to meet new user expectations, your codebase grows larger and increasingly complex. Automated testing in CI helps keep defect rates down, but how do you know that everything you’ve built works for all your users?

Escaped defects are the bane of development and product teams. UI testing can help you ensure that your new work doesn’t introduce more re-work later, but comes with its own set of challenges. Despite constant changes in the web and mobile app ecosystem, developers need to move fast and not break stuff. This requires us to ask, “how can we improve velocity and quality at the same time?”

Step #1: Just Say No to Flaky Tests

Flaky automated tests throw a monkey wrench into the software delivery process. They represent a tangible barrier to downstream development and delivery activities. A flaky testing process can have several causes; the most common are:

  • Test execution environment is inconsistently managed or unreliable
  • Code changes are not reflected in test design
  • Rigid UI element locator strategies cause tests to break on various platform versions
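One way to soften a rigid locator strategy is to try several candidate locators in order, from most to least stable, instead of pinning a test to a single brittle XPath. Below is a minimal, framework-free sketch of that idea; the page model, locator strings, and function names are hypothetical stand-ins for a real driver’s find-element call, not any particular tool’s API.

```python
def find_with_fallbacks(lookup, locators):
    """Try each candidate locator in order and return the first match.

    `lookup` is any callable that maps a locator string to an element
    (raising KeyError or returning None when nothing matches). Ordering
    candidates from most to least stable keeps the same test working
    across platform versions whose markup differs.
    """
    for locator in locators:
        try:
            element = lookup(locator)
        except KeyError:
            continue
        if element is not None:
            return element
    raise LookupError(f"No element matched any of: {locators}")


# Hypothetical "page" standing in for a real driver's element lookup.
fake_page = {"accessibility-id:submit": "<submit button>"}

button = find_with_fallbacks(
    fake_page.__getitem__,
    [
        "accessibility-id:submit",      # most stable across versions
        "id:btn_submit",                # platform-specific id
        "xpath://button[text()='Go']",  # brittle last resort
    ],
)
print(button)
```

In a real suite, the `lookup` callable would wrap your UI driver, and the fallback order would encode which attributes your team treats as contractual (accessibility IDs) versus incidental (layout-derived XPaths).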

We dive into more detail about how to solve these challenges in part one of our downloadable series on improving coverage in build verification.

Step #2: Fit the Right Testing into Every Cycle

Defects get in the way of forward progress. They force us to focus on fixing, so we spend less time on innovative work. Development teams measure their velocity in terms of story points. The fewer of these points they deliver, the slower they become. Fixing defects, a common type of re-work, is often the result of a lack of quality from a previous iteration.

As these defects are carried forward into future work, there is less time for work that benefits users. This is what I call “the downward spiral of defects”, an anti-pattern from which no one benefits.

Build verification testing is a key enabler to going faster. Introducing UI testing as part of the continuous integration process provides a more complete picture of the user experience than traditional late-cycle UI testing approaches. Better, earlier feedback ultimately drives higher confidence in the release readiness of every build.

Once stable tests and correct data are prepared, execution of tests requires resources to run on, typically in a lab or simulated environment. Running tests across various devices, platforms, and endpoints also requires an orchestration layer to maintain these processes and results.
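At its core, that orchestration layer expands a set of test suites across a device/platform matrix and tracks one job per combination. A minimal sketch of the expansion step, using hypothetical suite and device names:

```python
from itertools import product

def build_test_matrix(suites, targets):
    """Expand each suite across every (platform, device) target,
    yielding one executable job per combination."""
    return [
        {"suite": suite, "platform": platform, "device": device}
        for suite, (platform, device) in product(suites, targets)
    ]

# Hypothetical suites and device targets.
suites = ["smoke", "checkout"]
targets = [("iOS 17", "iPhone 15"), ("Android 14", "Pixel 8")]

jobs = build_test_matrix(suites, targets)
for job in jobs:
    print(f"run {job['suite']} on {job['device']} ({job['platform']})")
```

A real orchestrator adds scheduling, retries, and result aggregation on top of this expansion, but the matrix itself is what makes the platform-coverage cost of each new suite explicit.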

We go into detail on how to maximize platform coverage and speed during CI builds in part two of our series, available for download here.

Step #3: Stick to an Effective Test Maintenance Strategy

Test maintenance is more than just cleanup work. It’s what you need to do to make sure your development cycles are completed efficiently. With an effective test maintenance strategy, you can maximize the impact of feedback from testing on early development cycles and fight technical debt at the same time.

One approach to reducing time and risk in regression testing is to perform less testing more often. In other words, break your monolithic test suites apart into groups of tests that can be run continuously. This doesn’t remove the need to run full regression testing before releases, but it minimizes the likelihood that defects will be caught late in release cycles, reducing last-minute thrash and, ultimately, production incidents. This frees valuable time for app teams to work on things that improve the user experience.
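Splitting a monolithic suite can be as simple as tagging each test and selecting a subset per trigger. A minimal sketch, with hypothetical test names and tags (most runners, such as pytest with markers, offer this selection natively):

```python
# Hypothetical registry mapping test names to their tags.
TESTS = {
    "test_login": {"smoke"},
    "test_checkout": {"smoke", "regression"},
    "test_profile_edit": {"regression"},
    "test_receipt_email": {"regression"},
}

def select(tag):
    """Pick the tests to run for a given trigger: every commit might run
    only 'smoke', while the pre-release pipeline still runs 'regression'."""
    return sorted(name for name, tags in TESTS.items() if tag in tags)

print(select("smoke"))       # fast subset, run continuously on every build
print(select("regression"))  # full sweep, run before releases
```

The point is that continuous and pre-release coverage come from one tagged suite, not two diverging copies.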

Every time you replace a feature or screen in your app, look for tests that fail afterwards before moving them into regression suites. Either fix them or flag them for ‘technical debt’ work in the next sprint. Also, don’t write arbitrary code in your tests! Sleeps, waits, and timers that have no connection to the actual state of the UI will cause more trouble in the long run.
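Instead of an arbitrary sleep, poll for the actual condition with a bounded timeout — roughly what explicit waits in UI test frameworks do under the hood. A minimal, framework-free sketch (the state dictionary stands in for a real “element is visible” check):

```python
import time

def wait_until(condition, timeout=10.0, interval=0.25):
    """Poll `condition` until it returns a truthy value or `timeout` elapses.

    Unlike a fixed sleep, this returns as soon as the UI reaches the
    expected state, and fails loudly (instead of silently racing ahead)
    when it never does.
    """
    deadline = time.monotonic() + timeout
    while time.monotonic() < deadline:
        result = condition()
        if result:
            return result
        time.sleep(interval)
    raise TimeoutError(f"Condition not met within {timeout}s")


# Stand-in for a "spinner disappeared" / "element is visible" check.
state = {"visible": False}
state["visible"] = True  # in a real test, the app under test does this

assert wait_until(lambda: state["visible"], timeout=2.0)
```

A fixed `time.sleep(5)` is wrong in both directions: too long on fast builds, and too short the day the backend slows down.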

For more recommendations on how to keep testing sprawl in check, download the third part in our series here.

Paul Bruce is a Developer Evangelist at Perfecto, focusing on the value of quality and velocity throughout the software lifecycle. He previously worked as an advocate for API development and testing practices and as a full stack developer before that. He now writes, speaks, listens, and teaches about software delivery patterns in modern enterprises and key industries around the world.
