In practice, a team that fails to plan is a team heading towards failure. Planning ahead can save you time, effort and a lot of embarrassment. Furthermore, it allows you to foresee upcoming challenges and risks, and aids in planning contingencies and mitigations. This could not be more true when it comes to QA testing in software development.
Thanuja Wijewardena, Software Quality Assurance lead from Mitra Innovation, shares her thoughts on this subject.
The importance of planned testing
The importance of planning ahead for testing is highlighted in the book ‘The Mythical Man-Month’, by Frederick P Brooks (1995). Brooks states: “Failure to allow enough time for system testing … is peculiarly disastrous. Since the delay comes at the end of the schedule, no one is aware of schedule trouble until almost the delivery date [and] delay at this point has unusually severe … financial repercussions. The project is fully staffed, and cost-per-day is maximum [as are the associated opportunity costs]. It is therefore very important to allow enough system test time in the original schedule.”
When it comes to software development, ‘planned tests’ are the best way to plan ahead and overcome the problems stated by Brooks, as they serve as the means of communicating with fellow team members, testers, peers, managers and other stakeholders. Planned tests also give team members an opportunity to provide input to, and influence, the overall test plan; especially in areas of organisation-wide testing policies and motivations, test scopes and objectives.
In addition to the above, planned tests allow software teams to be more adaptive towards change, which is important throughout any software project life cycle, where features and functionalities change continually. When changes occur, planned tests will ensure that the changes are coordinated with the whole project.
No testing plan? The common issues that will arise
If a software development project has no testing plan, a number of common issues will arise. These include:
Incomplete functional coverage — Testing teams will struggle to exercise all of the software’s functions comprehensively.
Vulnerabilities in risk management — Project owners will face difficulties in accurately measuring overall risk issues with regard to code coverage and quality metrics.
Deviating from the script — Testers will tend to focus on the ideal paths instead of the real paths. With little or no time to prepare, ideal paths are defined by best-guessing or through developer feedback instead of careful consideration of how users will understand the system, or how users understand real-world analogues to the application tasks.
Challenges in isolating errors — When testers make up tests as they go along, reproducing the specific errors that have previously been found can be difficult, and reproducing the previous tests performed will be even harder. This will almost certainly lead to discrepancies when trying to measure quality over successive code cycles.
Inefficiencies in the long term — Effective quality assurance programs expand their base of documentation of the product and of the testing process over time. This increases the coverage and granularity of tests. Great testing requires comprehensive test setup and preparation. A continued pattern of quick-and-dirty testing is a sign that the product or application will be expensive and unsustainable in the long run.
Developing your QA testing plan for overall project effectiveness
As experts in software development and quality assurance, we have outlined below how to develop an effective QA testing plan.
1.) Review the data schema – In any software development project, review the data schema to find out all of the required and non-required fields for each area of functionality that your testing will affect.
2.) Discussions with relevant groups – Talk with the groups or departments who are responsible for maintaining and/or creating those functionalities. For example, a business analyst would be a good person to contact in a department where test data is created, because they would have a wide breadth of knowledge as to what exactly is being tested.
3.) Identify which tests – From your set of requirements, find out which tests are being run and which ones are not. Sometimes, aspects of the software that were not originally on the test plan may still need to be tested so that a particular transaction can be completed.
4.) Develop a unified testbed structure – Create a unified testbed structure that can be reused and built upon. Since testing will only increase as time goes on, it makes sense to create a data structure that can be used throughout the company. A testbed such as this can then be refashioned and customised according to the needs of individual departments.
5.) Consistent communication – Changes that are made to the system’s structure, both within the database and the system’s interface, need to be consistently communicated to the person or group that is responsible for maintaining the testbed. This is because all changes must align with the software’s functionality requirements, and there is nothing worse than receiving a system that has changes to its interface and architecture without knowing about it. When this occurs, more time needs to be taken to find out what should be changed in the testbed, and how the change will affect other parts of the system.
6.) Run multiple tests – Run multiple types of data transactions or functional tests within the testbed to assure the quality of all permutations in the test plan. After the testing is completed, more data can be entered for each of the test plans to test against the system.
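To make step 1 concrete, here is a minimal sketch of how a tester might programmatically list the required and non-required fields of a table. It uses an illustrative SQLite schema created in memory; a real project would point the same inspection at its own database, and the `orders` table and its columns are assumptions for the example.

```python
import sqlite3

# Hypothetical example schema; a real project would connect to its own database.
conn = sqlite3.connect(":memory:")
conn.execute(
    """
    CREATE TABLE orders (
        id INTEGER PRIMARY KEY,
        customer_email TEXT NOT NULL,
        discount_code TEXT,            -- optional field
        total_cents INTEGER NOT NULL
    )
    """
)

def field_requirements(conn, table):
    """Return (required, optional) column names based on NOT NULL / primary key constraints."""
    rows = conn.execute(f"PRAGMA table_info({table})").fetchall()
    # Each row is (cid, name, type, notnull, default_value, pk)
    required = [r[1] for r in rows if r[3] or r[5]]
    optional = [r[1] for r in rows if not (r[3] or r[5])]
    return required, optional

required, optional = field_requirements(conn, "orders")
print(required)  # ['id', 'customer_email', 'total_cents']
print(optional)  # ['discount_code']
```

Having this list up front tells the test planner which fields every test record must populate and which fields need separate present/absent permutations.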
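Steps 4 and 6 can be sketched together: a shared base record acts as the unified testbed, individual tests copy and override only the fields they care about, and multiple permutations are run against the same system rule. The `validate_order` function and all field names here are illustrative assumptions, not part of any real system.

```python
# A minimal sketch of a reusable testbed: a shared base record that individual
# test cases copy and customise. All names are illustrative assumptions.

BASE_ORDER = {"customer_email": "qa@example.com", "total_cents": 1000, "discount_code": None}

def make_order(**overrides):
    """Build a test order from the shared base, overriding only what a test needs."""
    order = dict(BASE_ORDER)
    order.update(overrides)
    return order

def validate_order(order):
    """Hypothetical system-under-test rule: required fields present, total positive."""
    if not order.get("customer_email"):
        return False
    return order.get("total_cents", 0) > 0

# Run multiple permutations against the same testbed (step 6).
cases = [
    (make_order(), True),                        # happy path
    (make_order(discount_code="SAVE10"), True),  # optional field supplied
    (make_order(total_cents=0), False),          # boundary: zero total rejected
    (make_order(customer_email=""), False),      # missing required field
]
results = [validate_order(order) for order, _ in cases]
print(results)  # [True, True, False, False]
```

Because every case derives from one base record, a schema change only needs to be applied to `BASE_ORDER`, which is exactly the kind of single point of maintenance that makes the communication in step 5 manageable.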
We hope you’ve found this advice useful. Please get in touch with us at Mitra Innovation if you need help with software development projects, or you need further advice.