Continuous Testing is a phrase used a lot these days, but what does it mean? On the surface, one definition could be “test all the time” – but that doesn’t quite cover it.
If you were to ask a developer, a QA engineer, or a CIO, you might get somewhat different definitions based on their particular perspective.
The gurus at Gartner describe it as:
“Systems [providing] automation of the software build and validation process driven in a continuous way by running a configured sequence of operations every time a software change is checked into the source code management repository.” [link]
It Starts with the Developer
Continuous Testing (CT) begins on the developer’s desktop where unit tests can be run as part of every local build. Once the code is checked in, integration and other system level tests are run automatically. If those tests pass, automated end-to-end tests can run in order to ensure that the system still works as expected. Other testing – stress tests, performance tests, or other large tests – may also run before releasing software to customers. After release, monitoring and alerts are (for some devs anyway) another flavor of testing that may happen during production.
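As a concrete sketch of that first stage, here is what a unit test running in every local build might look like. The function and test names are hypothetical illustrations, not from any particular project; the style assumes a pytest-like runner that discovers plain `test_` functions.

```python
# Hypothetical example: a small utility function and the unit tests
# that would run on the developer's desktop before every check-in.

def normalize_email(address: str) -> str:
    """Lowercase an email address and strip surrounding whitespace."""
    return address.strip().lower()

def test_normalize_email_strips_and_lowercases():
    assert normalize_email("  Alice@Example.COM ") == "alice@example.com"

def test_normalize_email_leaves_clean_input_unchanged():
    assert normalize_email("bob@example.com") == "bob@example.com"
```

Tests like these are cheap enough to run on every build, which is what makes them the natural first gate in a CT pipeline.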
Continuous Testing in a Nutshell
In short, CT is lots of tests, each running at the most appropriate point in the development cycle.
We want to create tests that will find issues with the customer experience. We also want feedback from tests as quickly as possible and, to that end, we target our tests to find issues at the earliest possible time. Bugs that can be found by unit tests should be found by unit tests. The same is true for acceptance and integration tests, and it matters most at the end-to-end level: while a lot of automation efforts focus on end-to-end tests, those tests should exist only for bugs that can be found no other way.
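To make the “earliest possible time” idea concrete, here is a hypothetical pricing rule with the kind of boundary bugs that are often only noticed in an end-to-end checkout run, but that a unit test can pin down directly and cheaply. All names and values here are invented for illustration.

```python
# Hypothetical example: boundary cases in a discount calculation.
# Catching these at the unit level is far cheaper than driving a
# whole checkout flow end to end to discover the same bugs.

def apply_discount(total_cents: int, percent: int) -> int:
    """Return the total after applying an integer-percent discount."""
    if not 0 <= percent <= 100:
        raise ValueError("percent must be between 0 and 100")
    return total_cents - (total_cents * percent) // 100

def test_full_discount_is_free():
    assert apply_discount(1000, 100) == 0

def test_zero_discount_changes_nothing():
    assert apply_discount(1000, 0) == 1000

def test_discount_rounds_in_customers_favor():
    # 10% of 999 is 99.9; integer division keeps the discount at 99.
    assert apply_discount(999, 10) == 900
```

An end-to-end test could exercise the same math, but it would take seconds instead of milliseconds and fail with far less precise diagnostics.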
Software products with good CT systems enable teams to deliver changes and updates to customers frequently and safely, using testing at every integration point to determine whether the product is heading in the right direction.
Tips and Traps
Just the Right Amount of Automation
CT is not just about having a lot of automation. It’s about having the right automation running at the best possible time to find issues. For some web services, automated tests may be all you need in order to ship. The amount and types of testing you do, your desired shipping frequency, your customers, and other risks all combine to help you make this business decision.
Not too much…
It’s also easy to fall into the trap of writing too much automation. Don’t automate everything you can automate; instead, automate everything that should be automated. Yes, that’s a tautology, but without enough thought put into designing your tests, it’s easy to both under-automate and over-automate.
If you’re the tester on a team doing CT, that doesn’t mean you write all of the automation! In fact, the team may be better served if you assist and coach their automation efforts rather than trying to write too much of the automation yourself. Besides, if everyone is writing automation, team members will learn automation tips from each other, and you will likely end up with a robust and reliable set of tests. Along these lines, I’ve personally seen a lot of success pairing testers and developers on testing and test automation tasks.
Finally, even with a great suite of automated tests, don’t neglect monitoring as a means of discovering errors in your software. Even if your robust and complex army of tests shows no issues, chances are that customers will hit errors you never anticipated. Good monitoring (and alerting) will give you huge insight into what your customers are seeing, and it will often give you new testing ideas as well.
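A minimal sketch of that monitoring idea, with the alerting decision separated out so it can itself be unit tested. The data shape, field names, and the 500 ms latency threshold are all assumptions for illustration, not from any real monitoring system.

```python
# Hypothetical monitoring sketch: classify a health-check observation
# and decide whether it should trigger an alert.

from dataclasses import dataclass

@dataclass
class Observation:
    """One sampled health-check result for a service endpoint."""
    status_code: int
    latency_ms: float

def needs_alert(obs: Observation, max_latency_ms: float = 500.0) -> bool:
    """Alert on server errors or on unusually slow responses."""
    return obs.status_code >= 500 or obs.latency_ms > max_latency_ms
```

Keeping the decision logic in a plain function like this means the alerting rules get the same fast, targeted unit tests as the rest of the code, even though the data they act on comes from production.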
Eran Kinsbruner and I are going to talk through and expand on many of these thoughts about Continuous Testing – and a lot more – in the upcoming webinar (see below). I hope to interact with you there.