Saturday, October 9, 2010

Software testing principles checklist

Use the questions below as a guide to ensure that your product goes through proper testing. This is by no means an exhaustive checklist (as experienced testers will attest) but a ready reckoner to remind us, just in case we get overconfident!

1. Do you keep the software static and freeze it from modifications during testing?
2. Are you documenting test cases and test results?
3. Have you specified the expected test results?
4. Have you defined the completion criteria and a reasonable stopping point for testing, given that it is impossible to test every possible set of conditions within a finite period of time?
5. Are unit tests performed on all individual programs, modules or components by the developers prior to integrating those components into a system to be tested as a whole?
6. Is component integration testing done by developers prior to or in parallel with system testing as a whole (individual components are combined with other components to make sure that necessary communications, links and data sharing occur properly)?
7. Have users seen prototypes, been involved with the design, and understood the evolution of the system so that user acceptance testing is an ongoing activity and not a one-time activity that occurs at the end of the project (which will likely yield negative reactions)?
8. Is testing (beyond the unit and component integration levels) done by an independent tester rather than by the individual or team who developed the software (developer)?
9. Have you assigned detail-oriented, responsible personnel to the testing task? Are you confident that they will test every case with objectivity and patience, no matter how tedious the procedure?
10. Did you plan the testing under the assumption that errors will be found and not from a biased perspective that the product is free of flaws?
11. Validation testing: Do you test to see if the software yields an appropriate response for invalid or unexpected input conditions as well as valid conditions? Do the error messages displayed to users reflect the validity of the input?
12. Is stress testing (a.k.a. load testing, volume testing or performance testing) conducted as part of system testing to determine the failure point of the system under extreme levels of usage? Is response time tested?
13. Are there mechanisms in the stress testing to confirm that the system is not only processing heavy loads at high speed but is at the same time producing the correct transaction information?
14. Are regression tests conducted, regardless of testing phase, whenever a change is made to the system to confirm that implementation of the changes has not adversely affected other functions?
15. Validation testing: Do you avoid testing all possible valid values within a range (which is time-wasting, redundant testing) and instead test the boundary conditions of that range (those data values that cause the logic to take a different path)?
16. Destructive testing: Do you test to confirm that the software does not do things it should not do?
17. Is the ratio of destructive testing to validation testing approximately 4:1 in a mature test suite?
18. Will your testing environment remain stable and not compromise the integrity of your tests, such that results can be reproduced during multiple passes of the same test?
19. Do you prioritize test cases according to risk (i.e., test the highest-impact, highest-probability cases first) to reduce the infinite amount of possible testing to a finite amount?
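The boundary-condition principle above can be sketched in a few lines. This is a minimal illustration, not a prescribed implementation; `is_valid_age` and its 0–120 range are hypothetical stand-ins for whatever range your logic validates.

```python
# Hypothetical validator under test: accepts ages in the inclusive range 0..120.
def is_valid_age(age: int) -> bool:
    return 0 <= age <= 120

# Rather than testing all 121 valid values (redundant, time-wasting),
# test only the boundaries, where the logic takes a different path.
def test_boundaries():
    assert not is_valid_age(-1)   # just below the lower boundary: rejected
    assert is_valid_age(0)        # lower boundary: accepted
    assert is_valid_age(120)      # upper boundary: accepted
    assert not is_valid_age(121)  # just above the upper boundary: rejected

test_boundaries()
```

Four well-chosen values exercise both sides of each boundary, which is where off-by-one defects cluster.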
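Destructive (negative) testing, as asked about above, means deliberately feeding the software input it should reject and confirming that it does reject it, with a meaningful error. A small sketch, assuming a hypothetical `parse_quantity` function invented for illustration:

```python
# Hypothetical function under test: parses a positive integer quantity.
def parse_quantity(text: str) -> int:
    qty = int(text)  # raises ValueError for non-numeric input
    if qty <= 0:
        raise ValueError(f"quantity must be positive, got {qty}")
    return qty

# Destructive test: every invalid input must raise, never silently succeed.
def test_invalid_input_is_rejected():
    for bad in ("abc", "", "-5", "0"):
        try:
            parse_quantity(bad)
        except ValueError:
            pass  # expected: the software refused to do what it should not do
        else:
            raise AssertionError(f"invalid input {bad!r} was wrongly accepted")

test_invalid_input_is_rejected()
```

Note that the test asserts the *absence* of wrong behavior, which is why mature suites lean so heavily toward destructive cases.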
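Risk-based prioritization, the last question in the checklist, can be made concrete by scoring each case as impact times probability and running the highest-scoring cases first. The case names and numbers below are invented purely for illustration:

```python
# Hypothetical test cases with an impact rating (1-5) and an estimated
# probability of failure; neither comes from a real project.
test_cases = [
    {"name": "checkout_payment", "impact": 5, "probability": 0.4},
    {"name": "profile_avatar",   "impact": 1, "probability": 0.2},
    {"name": "login",            "impact": 4, "probability": 0.5},
]

# Order by risk score (impact x probability), highest risk first.
prioritized = sorted(
    test_cases,
    key=lambda tc: tc["impact"] * tc["probability"],
    reverse=True,
)
```

With a fixed testing budget, truncating this ordered list gives a defensible finite subset of the infinite space of possible tests.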


©2010 Software Testing powered by Free Blogger Templates | Author : Anand Satish