Performance Testing in the Age of Agile and Digital Transformation
March 27, 2017


Performance Testing Has Evolved

Performance testing no longer resembles what it was, say, five years ago. The tools of that era and their capabilities tell the story: the main objective was careful examination and tuning of the scalability, potential breaking points, and efficiency of backend services. What changed? Here are a few of the drivers:

  • Digital transformation: Mobile apps and websites now play a pivotal role in the core business of companies undergoing digital transformation, which raises the bar on app performance. Users no longer tolerate staring at spinning wheels or splash screens for long, and will abandon a service provider whose app is not responsive enough.
  • Client-side power: Rich capabilities on client devices, demonstrated most clearly by native mobile apps, contribute significantly to the end-user experience. On the web side, HTML5 closes the gap by leveraging local sensors and enabling a continuous experience, online and offline.
  • Agile: With these modern technologies, the market has become highly competitive, and teams need to deliver features to market much faster. Yet traditional performance testing has typically been done out of cycle or, at best, right before releasing to production.

As performance testing becomes more complex, it helps to break it into two practices: single user performance testing and multi-user load testing.


Single User Performance Testing

Single user performance testing needs to account for every factor that can affect the end-user experience across all tiers, including last-mile networks and client devices: network changes and signal degradation, changes in location, use of onboard resources and sensors (which consume CPU, memory, etc.), background applications competing for device resources, and so on.

Figure 1: Elements outside your app can impact the user experience

All of these variables should be considered in the test scenarios, and because the time available for testing is very limited, they should all be automated. SLAs for acceptable user experience should be set (e.g., time to log in, time to accomplish a task) and measured in every scenario. Ideally, many of the user conditions (background apps, sensor usage, location, network conditions, etc.) can be grouped into a “persona” that represents a certain type of user as defined by the marketing team or the line of business. The persona can then be passed to the script as a parameter, overlaying its user conditions on top of the functional test flows, as sketched below.

Figure 2: A persona (“Georgia” in this case) can be incorporated into test scripts as a parameter
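
To make this concrete, here is a minimal sketch of persona-driven scripting, assuming the 2017-era Appium Python client and a Perfecto-style cloud endpoint. The capability name, grid URL, and persona are illustrative assumptions, not a definitive API reference:

    # A minimal sketch: the persona is a parameter, and the cloud overlays its
    # user conditions (network, location, background apps) on the functional flow.
    from appium import webdriver

    PERSONA = "Georgia"  # persona defined by marketing or the line of business

    capabilities = {
        "platformName": "Android",
        "deviceName": "any available",   # illustrative device selector
        "windTunnelPersona": PERSONA,    # assumed capability name
    }

    # Assumed cloud endpoint; replace with your own grid URL.
    driver = webdriver.Remote(
        "https://example.perfectomobile.com/nexperience/perfectomobile/wd/hub",
        capabilities,
    )
    try:
        # The functional test flow itself is unchanged; only the persona varies.
        driver.launch_app()
    finally:
        driver.quit()

The same functional script can then be run once per persona, turning a single test flow into a matrix of user-condition scenarios.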

The outcome of the script should be a detailed report that is easy to understand and to compare across browsers and devices, including the steps, measured KPIs (response times, resource usage), screenshots, and a video recording of the execution. In particular, measuring the responsiveness of the application is important and can be done using visual analysis:

 

Figure 3: Measuring the time to launch the Starbucks application
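
One simple way to implement such visual analysis is to poll screenshots until the screen matches a reference image of the fully loaded app. This is a simplified sketch, assuming an Appium-style driver that can return PNG screenshots and a pre-captured reference image; the similarity threshold is an illustrative assumption:

    import io
    import time

    from PIL import Image, ImageChops  # Pillow

    def images_match(img_a, img_b, tolerance=10):
        # Treat two equally sized screenshots as matching if no pixel
        # differs by more than `tolerance` gray levels.
        diff = ImageChops.difference(img_a.convert("L"), img_b.convert("L"))
        return diff.getextrema()[1] <= tolerance  # getextrema() -> (min, max)

    def measure_launch_time(driver, reference_path, timeout=30.0, interval=0.2):
        # Return seconds from now until the UI visibly matches the reference.
        reference = Image.open(reference_path)
        start = time.monotonic()
        while time.monotonic() - start < timeout:
            png = driver.get_screenshot_as_png()
            screenshot = Image.open(io.BytesIO(png)).resize(reference.size)
            if images_match(screenshot, reference):
                return time.monotonic() - start
            time.sleep(interval)
        raise TimeoutError("App never reached the reference screen")

Measuring from the user's point of view (pixels on the screen) rather than from server response times is what makes this a responsiveness metric rather than a backend one.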

Device vitals are also worth recording, since they provide visibility into network, CPU, and memory consumption during execution of the test script. Furthermore, the report should include a detailed network traffic log: a PCAP (packet capture) file for reviewing packet retransmissions, and a HAR file for optimizing the app's network utilization.

Figure 4: A waterfall chart based on the recorded HAR file
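
Because a HAR file is plain JSON, mining it for optimization candidates takes only a few lines. A minimal sketch, assuming a HAR file named session.har exported from the test run:

    import json

    with open("session.har") as f:
        entries = json.load(f)["log"]["entries"]

    # Rank requests by wall-clock time (ms) and by response body size (bytes).
    # Note: bodySize is -1 in HAR when the size is unknown.
    slowest = sorted(entries, key=lambda e: e["time"], reverse=True)[:5]
    heaviest = sorted(entries,
                      key=lambda e: max(e["response"]["bodySize"], 0),
                      reverse=True)[:5]

    print("Slowest requests:")
    for e in slowest:
        print(f'  {e["time"]:8.0f} ms  {e["request"]["url"]}')

    print("Heaviest responses:")
    for e in heaviest:
        print(f'  {e["response"]["bodySize"]:8d} B   {e["request"]["url"]}')

The PCAP file complements this view at the packet level, where retransmissions and connection churn show up.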


Load Testing

While it is easy to integrate load into your script (e.g., with Taurus), load testing requires a setup that can sustain and support high load on the environment. A typical dev/test environment is not powerful enough to simulate production, so it is hard to gain the sought-after insight from it. In this scenario, it is recommended to apply high load to the service APIs, backend server architecture, etc., using virtual users that make direct calls to the backend web or application servers. At the same time, real devices on real networks should run in parallel to measure the true end-user experience while the backend is being stressed.

Figure 5: Load and user experience testing from an architecture perspective
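
The Taurus mention above refers to an open source tool that wraps load runners such as JMeter behind a simple config, but the core idea can be shown in plain Python. A minimal sketch, assuming a generic REST endpoint, in which virtual users call the backend APIs directly while (in a real setup) the persona script from Figure 2 runs in parallel on real devices:

    import time
    from concurrent.futures import ThreadPoolExecutor

    import requests

    API_ENDPOINT = "https://api.example.com/orders"  # assumed backend API
    VIRTUAL_USERS = 50
    REQUESTS_PER_USER = 100

    def virtual_user(user_id):
        # One virtual user making direct backend calls, bypassing the UI.
        session = requests.Session()
        timings = []
        for _ in range(REQUESTS_PER_USER):
            start = time.monotonic()
            session.get(API_ENDPOINT, timeout=10)
            timings.append(time.monotonic() - start)
        return timings

    with ThreadPoolExecutor(max_workers=VIRTUAL_USERS) as pool:
        results = list(pool.map(virtual_user, range(VIRTUAL_USERS)))

    samples = sorted(t for user in results for t in user)
    p95 = samples[int(len(samples) * 0.95)]
    print(f"Backend p95 under load: {p95 * 1000:.0f} ms")

Dedicated tools add ramp-up, pacing, and reporting on top of this core loop, which is why production-scale tests normally run through them rather than hand-rolled scripts.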

 


Putting It All Together

To fit performance testing into the Agile delivery cycle, we increasingly see organizations conducting user experience testing on a weekly or even nightly basis. Naturally, the number of executions grows and there is more data to analyze. Performance expertise is being introduced into the application/feature teams, both to develop more efficient code and to streamline and accelerate testing. To validate the many variations in application responsiveness, tests need to run in parallel on a robust execution grid. The tool we typically see used for user experience testing is the “Wind Tunnel”: a cloud-based, persona-driven, single user performance testing environment.

Load testing, on the other hand, is still typically performed less frequently, and sometimes by a different team, since it requires a production-like staging environment to run on, as well as additional expertise in building load test scenarios that combine virtual users with real devices and a mix of scripts representing production activity by different personas.

Figure 6: Single user performance testing shifts into the delivery cycle

Interested in giving Perfecto a try? Enjoy the power of the industry's best testing platform for free.
