User Journey, A Way to Prioritize Your Test Strategy

In an age of highly complex digital applications and a competitive market of end users with high expectations, maintaining quality can seem like a daunting task. On top of that, agile cycles keep shrinking to meet competitive deadlines.

A typical product owner needs to balance three sprint investments (frankly, on a daily basis): innovation, tech debt and bugs, and testing. All of them need to be optimized. In the case of testing, that means considering efficient parallel executions, ongoing availability of devices and browsers, reliable scripts and executions (avoiding false negatives), and an efficient reporting suite, to name a few.

One topic to consider is prioritization of test executions. The traditional approach is to look at atomic test cases per platform (device or browser) and execute all related tests on that platform. One customer I spoke with recently described four main platforms on which 90% of their tests are executed, plus seven more platforms that share the remaining executions.
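
To make the contrast concrete, here is a minimal sketch of that traditional approach, where every atomic test case is crossed with every platform. The platform and test names are hypothetical, not taken from any real suite.

```python
# A minimal sketch of the traditional, per-platform approach:
# every atomic test case is executed on every platform in the pool.
# Platform and test names below are hypothetical placeholders.
PLATFORMS = ["iPhone-latest", "Pixel-latest", "Chrome-desktop", "Safari-desktop"]
TEST_CASES = ["search_product", "add_to_cart", "checkout", "track_shipment", "return_item"]

def full_matrix(platforms, test_cases):
    """Cross every test with every platform; the execution count grows
    multiplicatively and says nothing about which runs matter most."""
    return [(test, platform) for platform in platforms for test in test_cases]

print(len(full_matrix(PLATFORMS, TEST_CASES)))  # 5 tests x 4 platforms = 20 executions
```

The problem is not the matrix itself, but that it treats every (test, platform) pair as equally important.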

A slightly different approach to prioritization and reporting is to take the perspective of user journeys rather than singular user flows. For example, as an Amazon consumer (assume for a minute that I'm researching a slightly expensive item) I would:

  • Probably hear from a friend about this product and likely check it out on the mobile app, maybe add to cart
  • Come home, take another look, compare it to other products and read reviews, and maybe then buy it. That’s likely to happen on a desktop browser
  • Track the shipment via email
  • Let’s say I got the product but I’m not happy with it and would like to return it. I’d probably initiate the return and print the shipping label on my desktop browser
  • Again, I’d track the return progress via the app and email
Customer journeys across digital channels

To summarize, you could think of two or three journeys here: search for a product, buy it, and return it. These journeys could happen on the mobile app, the desktop browser, and perhaps a tablet. But not every journey happens on every device: unless it’s a cheap product, I will probably not buy it on the app, and I’m less likely to fire up a desktop browser to look up a product based on a friend’s recommendation if I’m out and about.
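
Here is a hedged sketch of what modelling journeys instead of atomic flows could look like; the journey, step, and channel names are illustrative assumptions, loosely based on the Amazon example above.

```python
# A sketch of a journey-oriented model: each journey lists the steps it
# contains and only the channels where real users actually perform it.
# Journey, step, and channel names are illustrative placeholders.
JOURNEYS = {
    "research_product": {
        "steps": ["search_product", "view_reviews", "add_to_cart"],
        "channels": ["mobile_app"],                     # on-the-go research
    },
    "purchase_product": {
        "steps": ["compare_products", "view_reviews", "checkout"],
        "channels": ["desktop_browser", "tablet"],      # considered purchases
    },
    "return_product": {
        "steps": ["initiate_return", "print_label", "track_return"],
        "channels": ["desktop_browser", "mobile_app"],  # start on desktop, track on app
    },
}

def journey_matrix(journeys):
    """Expand journeys into (step, channel) executions, but only for the
    channels each journey is realistically performed on."""
    return [
        (step, channel)
        for journey in journeys.values()
        for channel in journey["channels"]
        for step in journey["steps"]
    ]

print(len(journey_matrix(JOURNEYS)))  # far fewer runs than a full cross-product
```

The payoff is that the resulting matrix only contains combinations a real user would actually exercise, rather than the full cross-product.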

Take the example below from the insurance world. Here again, customers go through different phases across different channels and screens.

Insurance customer journey across digital channels (Source: Remarkgroup.com)

User journeys also represent a measure of marketing and business success: how many users were able to buy the product? How many users were able to contact customer support and get the help they needed? The business doesn’t care which platform you tested on, as long as it matches what most users are actually using.

To summarize, we recommend using analytics to define these user journeys on the relevant devices and browsers, and prioritizing your testing and reporting to align with them. Trying to test every user flow on every device is unrealistic and will bury the insight you are looking for.
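
As one possible illustration of letting analytics drive that prioritization, here is a small sketch. The journey/channel traffic shares are invented for the example; in practice they would come from your analytics tooling.

```python
# A minimal sketch of analytics-driven prioritization: journey/channel pairs
# are weighted by the share of real traffic they see, and the plan keeps the
# highest-weighted pairs until a coverage target is met. The traffic shares
# below are invented for illustration only.
ANALYTICS = {
    ("research_product", "mobile_app"):      0.45,
    ("purchase_product", "desktop_browser"): 0.30,
    ("return_product", "desktop_browser"):   0.15,
    ("purchase_product", "tablet"):          0.07,
    ("return_product", "mobile_app"):        0.03,
}

def prioritized_plan(analytics, coverage_target=0.90):
    """Order journey/channel pairs by observed traffic and keep adding them
    until the plan covers the target share of users."""
    plan, covered = [], 0.0
    for (journey, channel), share in sorted(analytics.items(), key=lambda kv: -kv[1]):
        if covered >= coverage_target:
            break
        plan.append((journey, channel))
        covered += share
    return plan, covered

plan, covered = prioritized_plan(ANALYTICS)
print(f"{len(plan)} journey/channel pairs cover ~{covered:.0%} of users")
```

A coverage target like 90% keeps the suite small while still reflecting what most users actually do, which is the point of the journey perspective.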

Make sure to check out this post on different test approaches (BDD, TDD/ATDD) and this post on RTDD, both very relevant to this discussion.
