5 Cross-browser Testing Mistakes You Might be Making Right Now

In our business, we have a unique opportunity to see the strategic problems that, sadly, lead some teams to fall short of their web quality goals. The direct outcome of these failures: poor customer experiences and damage to businesses’ bottom lines. And retooling your quality strategy costs time, money, and sometimes even employees.

So, you’d like the recipe for cross-browser testing success? Well, in principle, the ingredients are pretty simple:

  1. Make a realistic assessment of your quality needs
  2. Set coherent requirements
  3. Select the right approach

In practice, however, the best plans can be derailed by simple misunderstandings. Let’s look at a handful of mistakes we commonly see in browser testing, along with solutions for each.

Mistake: Thinking mobile is someone else’s problem

Reality: Mobile and desktop have become one web; they’re no longer separate things

Your test plans should include all relevant browsers across desktop and mobile platforms.

The mobile web is now the most important way for customers to connect with your brand, and most mobile searches lead to your website. Mobile can no longer be viewed as a separate offering: users move between desktop and mobile interchangeably. They will accept that some functionality might be missing on mobile, but in general they expect a consistent experience across the two. Engineering should no longer separate coding and testing between desktop and mobile. You can’t afford to have the desktop team “fix” something on their side that breaks the mobile side, sometimes without anyone even knowing what changed.
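
To make the one-web point concrete, here is a minimal sketch, assuming a Java/Selenium stack, of a single smoke test driven across desktop and mobile configurations from one shared matrix. The grid URL, platform names, and target URL are placeholder assumptions, not a prescription:

```java
import org.openqa.selenium.MutableCapabilities;
import org.openqa.selenium.WebDriver;
import org.openqa.selenium.remote.RemoteWebDriver;

import java.net.URL;
import java.util.List;

// One test suite, one matrix: desktop and mobile web run the same checks,
// so a "fix" on one side can't silently break the other.
public class CrossPlatformSmokeTest {

    public static void main(String[] args) throws Exception {
        URL grid = new URL("http://your-grid-host:4444/wd/hub"); // hypothetical hub

        List<MutableCapabilities> matrix = List.of(
                caps("chrome", "Windows 10"),
                caps("firefox", "Windows 10"),
                caps("safari", "macOS"),
                caps("chrome", "Android"),  // mobile web, same assertions
                caps("safari", "iOS"));

        for (MutableCapabilities c : matrix) {
            WebDriver driver = new RemoteWebDriver(grid, c);
            try {
                driver.get("https://example.com/login"); // placeholder URL
                // ... shared desktop-and-mobile assertions go here ...
            } finally {
                driver.quit();
            }
        }
    }

    private static MutableCapabilities caps(String browser, String platform) {
        MutableCapabilities c = new MutableCapabilities();
        c.setCapability("browserName", browser);
        c.setCapability("platformName", platform);
        return c;
    }
}
```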

Mistake: Maintaining multiple platforms is no big deal

Reality: When it comes to multiple platforms, maintenance is a major challenge

With the relentless evolution of browsers and OSs, developers and testers don’t have spare time for maintenance. Say you know the platforms you need (device, OS, browser, and version) and the capacity (the number of VMs/devices required to complete the test suite in a given time). You might think, “Let’s set it up internally or on AWS (for example), no big deal… a couple of VMs configured, a Selenium grid, Jenkins reporting.”

A week later, a new Chrome version comes out. You need to update 6 of the 10 VMs. Also, there are new Chrome beta and developer versions. Two weeks later, it’s Firefox. Then, a Windows security patch. And that’s just VMs: what about setting up and maintaining Mac machines or mobile devices?

Figure 1: Chrome and Firefox release calendar

The truth? Developers and testers simply don’t have time for maintenance. It’s the same for IT. Within a few weeks, 50% of the VMs you originally set up aren’t operational. Within a few months, capacity is down to almost nothing, and you’re back to testing on developers’ machines.

Between the cost of the VMs (AWS or your own), maintenance labor, the cost of fixing broken automation, and so on, the estimated cost per configuration exceeds $50K USD per year. Then multiply that by the number of configurations, capacity needs, different teams, and so on.

In our view, the minimal set of desktop configurations alone is 33; at over $50K per configuration, that alone works out to more than $1.65M USD per year:

Figure 2: Perfecto recommended desktop web coverage


Mistake: I can do it all with open source

Reality: Open source will give you most of what you need in a testing solution, but not all

Some solutions require more than open source alone can provide.

Disclaimer: We’re all for open source; we use it, and you should too! This is about the extra mile that makes the difference. Consider several areas:

  • Community readiness for change: your lab, whether from a vendor or your own, needs to offer immediate access to new devices, browsers, and OSs. Ensure the lab’s SLA matches your readiness expectations.
  • Test automation: sometimes you need a little more than the core framework provides, for example, entering credentials into a browser’s login popup (see the sketch after this list).
  • Enterprise-grade lab: security compliance, data retention for audits, secure connections to backend servers, lab slicing by team, and so on. There’s a lot to think about.
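
As an example of that extra mile, here is a minimal sketch of one well-known workaround for an HTTP Basic Auth login popup, which vanilla WebDriver cannot type into: embedding the credentials in the URL. The host and credentials are hypothetical, and some browsers restrict this scheme, so treat it as one option rather than the definitive fix:

```java
import org.openqa.selenium.WebDriver;
import org.openqa.selenium.chrome.ChromeDriver;

// Workaround sketch: answer the Basic Auth challenge up front by putting
// user:password into the URL, so the native popup never appears.
public class BasicAuthWorkaround {
    public static void main(String[] args) {
        WebDriver driver = new ChromeDriver();
        try {
            // Hypothetical credentials and host; replace with your own.
            driver.get("https://user:secret@protected.example.com/dashboard");
        } finally {
            driver.quit();
        }
    }
}
```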


Mistake: If the lab can manage 10 scripts, it can manage 1,000

Reality: Labs don’t scale if they weren’t designed to scale

Large parallelization requires orchestration, elasticity and weak-link-oriented thinking.

Success means lots of test automation, parallel threads, and so on. Selenium Grid will get it done, right?

Well, first, ensure your lab can scale to your needs; if the number of VMs is capped, that’s a problem. Second, ensure your lab stays stable as parallelization scales; it doesn’t help if false failures multiply as the test suite grows.
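
One concrete way to keep large runs stable is to bound the parallelism explicitly rather than firing every session at once. Below is a minimal sketch, again assuming a Java/Selenium stack; the grid URL is a placeholder and MAX_SESSIONS stands in for whatever concurrency your lab can actually sustain:

```java
import org.openqa.selenium.MutableCapabilities;
import org.openqa.selenium.WebDriver;
import org.openqa.selenium.remote.RemoteWebDriver;

import java.net.URL;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;

// Bounded parallelization: a fixed-size pool caps concurrent sessions at
// the grid's measured capacity instead of overloading it.
public class BoundedParallelRunner {

    private static final int MAX_SESSIONS = 10; // assumption: size this to your lab

    public static void main(String[] args) throws Exception {
        URL grid = new URL("http://your-grid-host:4444/wd/hub"); // hypothetical hub
        ExecutorService pool = Executors.newFixedThreadPool(MAX_SESSIONS);

        for (int i = 0; i < 1_000; i++) {
            final int testId = i;
            pool.submit(() -> {
                MutableCapabilities caps = new MutableCapabilities();
                caps.setCapability("browserName", "chrome");
                WebDriver driver = new RemoteWebDriver(grid, caps);
                try {
                    driver.get("https://example.com/?case=" + testId); // placeholder
                    // ... test body ...
                } finally {
                    driver.quit(); // always release the session back to the grid
                }
            });
        }
        pool.shutdown();
    }
}
```

The point is the fixed-size pool: a lab that silently queues or drops sessions beyond its capacity is a common source of the false failures mentioned above.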


Mistake: Scanning 10 reports isn’t bad; 1,000 shouldn’t be much worse

Reality: Scanning an ever-growing number of execution reports by hand is a laborious task that invites human error and consumes enormous amounts of time

Again, let’s imagine you’re running lots of executions overnight. You’re probably using Jenkins or some other tool to orchestrate them and produce reporting, and that’s good enough, right?

Well, here are a few thoughts on how to make this work:

  • Accelerate root cause detection: allow grouping of items from hundreds or thousands of executions by test name, platform, step, application/build version, custom tags, and so on. Grouping eliminates plowing through thousands of executions to find, for example, that login fails on Chrome 61 beta, an inefficient, frustrating task that could otherwise take days (a minimal sketch of this grouping follows these examples).
  • Identify trending risks at a feature level: if the team is working on a strategic feature, you want to know whether the quality of that feature is acceptable (or at least trending in that direction), or whether you should invest more resources in it.


Figure 3: CI Dashboard offers feature-level quality trending over time

  • Flexible interface to drive quality: quality is an organizational effort, and awareness is a big step toward that goal. A tool’s ability to communicate real-time status by persona is key to achieving alignment and mitigating risk.

Figure 4: Management Dashboard offers risk perspective by platform
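
Returning to the grouping idea above, here is a minimal sketch of collapsing many execution results into failure buckets by test and platform. The ExecutionResult record is hypothetical, standing in for whatever your reporting tool actually exports:

```java
import java.util.List;
import java.util.Map;
import java.util.stream.Collectors;

// Group thousands of results by (test, platform) so a cluster like
// "login fails on Chrome 61 beta" surfaces immediately.
public class FailureGrouping {

    record ExecutionResult(String testName, String platform, boolean passed) {}

    public static void main(String[] args) {
        List<ExecutionResult> results = List.of(
                new ExecutionResult("login", "Chrome 61 beta", false),
                new ExecutionResult("login", "Chrome 61 beta", false),
                new ExecutionResult("login", "Chrome 60", true),
                new ExecutionResult("checkout", "Firefox 55", true));

        // Count failures per test-and-platform bucket.
        Map<String, Long> failuresByBucket = results.stream()
                .filter(r -> !r.passed())
                .collect(Collectors.groupingBy(
                        r -> r.testName() + " on " + r.platform(),
                        Collectors.counting()));

        failuresByBucket.forEach((bucket, count) ->
                System.out.println(bucket + ": " + count + " failure(s)"));
        // Prints: login on Chrome 61 beta: 2 failure(s)
    }
}
```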


Have you fallen into any of these traps?

Hopefully, these examples illustrate that browser testing remains a challenge that can’t be taken lightly. The digital transformation now under way affects every app and every screen, so scalable, reliable testing must happen continuously. As you plan your testing needs and solution requirements, we hope these guidelines are of assistance.
