Navigating the software test automation landscape, one issue that often arises is the business case for a test automation engagement. (This does not apply to typical unit tests, which should be part of the development effort; the focus here is functional testing and GUI testing.)
The first thing I should say, as a disclaimer, is that test automation should not replace manual testing. The main reason is that the scope of each is different.
Equivalent Manual Test Effort (EMTE) is ‘the’ most widely used measure to evaluate automation benefit. It is easily understood:
EMTE = Amount of effort required to run the test case manually
Example: Test ‘A’ takes 2h to run manually; if we run it twice in a test cycle, the EMTE = 2 × 2h = 4h.
Taking this measure into account, it’s tempting to say that an automated test cycle can reduce the need for manual testing by an EMTE amount of effort.
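As a minimal sketch of the measure (the helper name and the per-test inputs are my own illustration, not part of any standard), EMTE for a cycle is just the sum of manual duration times run count over the automated tests:

```python
def emte(tests):
    """Compute EMTE in hours.

    tests: list of (manual_hours, runs_per_cycle) tuples,
    one entry per automated test in the cycle.
    """
    return sum(hours * runs for hours, runs in tests)

# Test 'A' takes 2h manually and runs twice in the cycle -> EMTE = 4h
print(emte([(2, 2)]))  # 4
```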
However, there are some concerns to take into account:
#1 Automated Tests are very different from Manual Tests
Common sense can lead us to think that automated tests mimic manual tests, but that is far from the truth.
When poorly developed, yes: an automated test can amount to little more than record/playback, adding little value and creating a maintenance problem.
The true value of automation lies in: (1) Depth: the validations it can achieve, using the application DB or API to verify that the test execution produced the desired state in the application DB, logging, interfaces... (2) Coverage: it can easily be expanded, e.g. by testing all possible form field combinations, a tedious task often neglected in manual testing. (3) Reuse: the instant orchestration of new test stories from existing test building blocks.
We can’t expect manual tests to give us the same coverage, depth, and reuse: the effort would be huge, and the ‘reuse’ is simply impossible.
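The coverage point can be made concrete with a sketch. Assuming some hypothetical form fields (the field names and values below are invented for illustration), automation can enumerate every combination mechanically:

```python
import itertools

# Hypothetical form fields; real names and values would come
# from the application under test.
client_type = ["business", "soho", "residential"]
tv = [True, False]
billing_same_as_client = [True, False]

# Every combination -- tedious by hand, trivial for automation.
combinations = list(itertools.product(client_type, tv, billing_same_as_client))
print(len(combinations))  # 12 configurations to drive through the form
```

Each tuple in `combinations` would then feed one data-driven test run, which is exactly the kind of exhaustiveness a manual cycle rarely attempts.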
#2 EMTE can be heavily inflated with no return value
Once the test set is automated, it can be run multiple times with little effort. But running the same tests again and again only brings value if the environment has changed, either through a new software deployment or boundary changes in integrated applications.
If the EMTE is 500h per test cycle, does running it twice improve the measure by 100%? If there were no changes in the environment, you are inflating EMTE erroneously.
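One way to keep the measure honest (a sketch under the assumption that you can tell which runs followed an environment change) is to count only those runs toward EMTE; the function name is my own:

```python
def effective_emte(manual_hours, runs):
    """manual_hours: manual effort of one full cycle.

    runs: list of booleans, one per automated run, True if the
    environment changed (deployment, integration change) before it.
    """
    return manual_hours * sum(1 for changed in runs if changed)

# A 500h cycle run twice, but only the first run followed a deployment:
print(effective_emte(500, [True, False]))  # 500, not 1000
```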
#3 Some of the core benefits of test automation are achieved in the test development phase
Most defects are detected during the test development phase, when you run the test for the first time. After the first successful run, most of the test’s purpose has been fulfilled, and it becomes a regression test from then on.
This is easily explained by the fact that manual testing does not have the coverage of automation.
E.g., in a test automation engagement I worked on, there were 80 tests for the ‘create client‘ functionality, each creating a different client configuration (business, SOHO, residential, with TV, with POTS, with Ethernet, with a billing address different from the client address…). Before that, the manual regression cycle only covered a couple of the possible configurations. Since automation detected most of the defects during test development, there was no need to include the 80 tests (which took 12h to run) in the regression set. Only when a major release was deployed were the 80 tests run.
So, use EMTE, but use it wisely!