A while ago I gave an automated testing presentation that was initially geared towards developers. Following the first round of feedback, I realized I needed to provide wider context explaining the immediate business benefits that automated testing brings to the table. This post is a long-delayed accompanying note for the first part of my original presentation.
There is a lot of published literature, along with many blog posts and tutorials, on automated software testing in general. That content is mainly targeted at developers, which has made the topic well known and well understood within that audience.
I have also encountered situations where developers met resistance from management when they tried to introduce automated testing. To overcome this resistance, I think automated testing should be presented slightly differently to a non-developer audience: with a stronger emphasis on business costs and benefits rather than on purely technical merits.
A distinction is usually made between functional, integration and unit testing, but I prefer to refer to all of these as automated testing. This name helps me capture two facts:
- they are coded tests;
- they can run unattended and on a schedule.
I think this name also makes the concept easier for non-developers to grasp.
I have seen teams of developers who wanted to implement automated testing but faced a couple of difficult technical challenges:
- the third party frameworks used were not testable;
- their own code was not testable.
These challenges meant that, in order to introduce unit or integration testing, they first had to undertake a refactoring exercise and fit it into their planning schedule. Even when the effort was spread across many development cycles, there were a couple of bigger refactorings that had to be tackled one at a time. Needless to say, these "extra tasks" caused heated discussions between developers, testers and managers, leading to long delays in the introduction of automated testing for a particular software product.
In order to analyze the effort required to introduce automated testing, a distinction needs to be made between:
- automated tests that target the product endpoints: (web) service methods, web pages and the user interface in general;
- automated tests that target the product internals: specific classes, integration points.
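The two levels above can be sketched side by side. This is a minimal Python illustration (the post's own stack is .NET, and the pricing domain and all names here are invented for the example): the endpoint test drives the product the way a client would, while the internal test targets one function in isolation.

```python
# Hypothetical order-pricing module; names are illustrative only.

def apply_discount(total, customer_tier):
    """Product internal: a pricing rule used by the endpoint."""
    return total * 0.9 if customer_tier == "gold" else total

def quote_endpoint(request):
    """Product endpoint: what a (web) service method would expose."""
    total = sum(item["price"] for item in request["items"])
    return {"total": apply_discount(total, request["customer_tier"])}

# Endpoint test: exercises the product through its public surface.
def test_quote_endpoint_applies_gold_discount():
    response = quote_endpoint(
        {"items": [{"price": 100.0}], "customer_tier": "gold"}
    )
    assert response == {"total": 90.0}

# Internal test: targets a specific function/class in isolation.
def test_apply_discount_for_regular_customers():
    assert apply_discount(50.0, "regular") == 50.0
```

Note that the endpoint test would keep passing even if `apply_discount` were refactored away, which is what makes this level of testing a safer starting point for untestable codebases.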
I think the biggest business value comes from implementing the endpoint tests first. This approach also follows the principles of outside-in testing and produces test results that non-developers can understand.
A key asset is the set of quality assurance test plans, which in theory should already describe the manual steps required to test all product endpoints. Having this asset allows you to present automated testing as a return-on-investment proposition, and the following points become relevant:
- Implementing automated testing allows the team to lower the costs associated with tedious manual testing over time.
- By planning the effort to automate the manual test plans, you can easily contrast its cost with the total cost of manual testing over the lifetime of the product (or a product version).
- Reducing the cost of manual testing does not mean you should also reduce the size of the test team: instead, you free up time for exploratory testing, which is more engaging and adds more value to the software product. Testers become more creative in their explorations, overall job satisfaction increases, and testers work more closely with developers to enhance existing tests or discuss new scenarios.
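The cost comparison in the second point can be made concrete with a back-of-the-envelope calculation. All figures below are assumptions invented for illustration, not data from any real project:

```python
# Assumed figures, for illustration only.
manual_hours_per_cycle = 40      # time to run the manual test plan once
cycles_per_year = 12             # regression passes per year
hourly_rate = 50.0               # blended tester cost per hour

automation_hours = 300           # one-off effort to automate the plan
maintenance_hours_per_year = 60  # keeping the automated suite green

manual_cost_per_year = manual_hours_per_cycle * cycles_per_year * hourly_rate
automation_cost_year_one = (automation_hours + maintenance_hours_per_year) * hourly_rate

# Yearly saving once the suite exists (only maintenance remains):
yearly_saving = manual_cost_per_year - maintenance_hours_per_year * hourly_rate

print(f"manual testing per year:   ${manual_cost_per_year:,.0f}")
print(f"automation, first year:    ${automation_cost_year_one:,.0f}")
print(f"saving in following years: ${yearly_saving:,.0f}")
```

With these assumed numbers the automation effort pays for itself within the first year, and the comparison only improves the longer the product version lives.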
A successful delivery of this initial effort paves the way for expanding automated testing to the product internals. The key asset here is an analysis of the current velocity for enhancing specific parts of the product. Having this asset allows you to predict, and later measure, the velocity gained after introducing automated tests for a specific part of the product.
Associating a cost saving with each refactoring exercise makes the process measurable and easy for all product stakeholders to understand.
NOTES
- My initial presentation slides can be found here: Automated testing overview;
- The code used in the presentation demos can be found here: Unit testing overview;
- To implement outside-in testing I have successfully used a BDD framework based on xUnit: xBehave.net, and I strongly recommend it.
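The given/when/then structure that BDD frameworks such as xBehave.net encourage can be sketched as follows. xBehave.net itself is C#/.NET; this is a plain Python stand-in, and the shopping-cart domain is an invented example:

```python
# BDD-style test expressed as given/when/then comments; the cart
# domain and test name are hypothetical, for illustration only.

def test_removing_last_item_empties_the_cart():
    # Given a cart holding a single item
    cart = ["book"]
    # When the customer removes that item
    cart.remove("book")
    # Then the cart reports itself as empty
    assert cart == []
```

Phrasing each test as a scenario like this is what makes the results readable by non-developers, which ties back to the outside-in argument above.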