Figure 16. An example system test schedule.

Source publication
Article
Software metrics play an important role in measuring attributes that are critical to the success of a software project. Measurement of these attributes helps to clarify their characteristics and the relationships between them, which in turn supports informed decision making. The field of software engineering is affected by infrequent, inco...

Context in source publication

Context 1
... If a master test plan exists, then the system test plan adds content, specific to system testing, to the sections of the master test plan. While describing the test items section in [5], IEEE recommends that the following documentation be referenced. All important test planning issues are also important project planning issues [6]; therefore, a tentative (if not complete) project plan should be made available for test planning. A project plan provides a useful perspective for planning the achievement of software testing milestones, the features to be tested and not to be tested, environmental needs, responsibilities, risks, and contingencies. Figure 14 can be modified to fit the input requirements for system testing, as shown in Figure 15. Since test planning is a live (agile) activity, there is no stringent criterion for its ending. However, the system test schedule sets timelines for when each milestone in system testing will be achieved (in accordance with the overall time allocated to system testing in the project schedule), so the test team needs to shift effort to test design according to the set schedule. Figure 16 depicts a system test schedule for an example ...
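To make the idea of such a schedule concrete, here is a minimal Python sketch of system-test milestones with dates; the milestone names, dates, and structure are illustrative assumptions, not the contents of Figure 16.

```python
from dataclasses import dataclass
from datetime import date

# Hypothetical milestones; a real schedule would come from the project plan.
@dataclass
class Milestone:
    name: str
    start: date
    end: date

schedule = [
    Milestone("Test planning complete", date(2007, 1, 8), date(2007, 1, 19)),
    Milestone("Test design complete", date(2007, 1, 22), date(2007, 2, 9)),
    Milestone("Test execution complete", date(2007, 2, 12), date(2007, 3, 9)),
    Milestone("Test report delivered", date(2007, 3, 12), date(2007, 3, 16)),
]

for m in schedule:
    print(f"{m.name}: {m.start} -> {m.end} ({(m.end - m.start).days} days)")
```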

Similar publications

Article
The paper presents an analysis of 83 versions of industrial, open-source and academic projects. We have empirically evaluated whether those project types constitute separate classes of projects with regard to defect prediction. Statistical tests proved that there exist significant differences between the models trained on the aforementioned project...
Article
Assuring quality IT solutions demands assuring quality in all aspects of the software development process. For this we need well-formed methods with suitable metrics, with which we can verify whether a process, software product, software component, software artifact, or other part of a possible solution meets defined quality characteristics. XML Schemas and cor...

Citations

... (d) Prioritization and Organization: Test cases should be ordered for execution based on criticality. According to Afzal (2007), not all test cases are equally important; therefore, the test cases need to be prioritized. Test case prioritization is needed because full regression testing requires considerable time and system resources; therefore, there is a need to identify which test cases should be run first (Berberyan & Ali, 2019). ...
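As a minimal sketch of criticality-based prioritization (the scoring scheme and test names below are assumptions for illustration, not the method of the cited works), one might order a regression suite like this:

```python
from dataclasses import dataclass

@dataclass
class TestCase:
    name: str
    criticality: int       # hypothetical scale: 1 = low .. 5 = high
    runtime_minutes: float

# Hypothetical regression suite.
suite = [
    TestCase("login_flow", 5, 3.0),
    TestCase("report_export", 2, 8.0),
    TestCase("payment_checkout", 5, 6.0),
    TestCase("profile_update", 3, 2.0),
]

# Run the most critical tests first; break ties by shorter runtime,
# so high-value feedback arrives as early as possible.
prioritized = sorted(suite, key=lambda t: (-t.criticality, t.runtime_minutes))

for t in prioritized:
    print(f"{t.name} (criticality={t.criticality}, {t.runtime_minutes} min)")
```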
Article
A test case is a cornerstone of the testing process; hence, it is essential to ensure the quality of the test cases. However, test case design in the Agile testing process has limitations that affect its quality standards, leading to the failure of many software projects. Previous studies are limited in providing a clear guideline for assessing test case quality, yet evaluating the quality of test cases will help software testing practitioners understand some critical issues in designing test cases from various perspectives. Therefore, this paper aims to present the factors and criteria of test case quality verified by software testing experts in an Agile environment. The proposed factors and criteria were verified through expert review techniques, including an online survey of domain and knowledge experts. Twenty-three industry practitioners, including developers, testers, and systems analysts, were selected as the domain experts, while the knowledge experts were drawn from 13 academic areas of software engineering and testing. The results showed that the quality of test cases is paramount in Agile projects; the experts accepted seven factors and 32 criteria with a few modifications and suggestions. The findings may assist researchers in improving the proposed factors and criteria for validation purposes, and may serve software testing practitioners as a guideline for constructing, designing, and assessing effective test cases. Most significantly, the verified factors and criteria may contribute to the body of knowledge in the software testing domain and industry.
Article
Tests on software are performed to ensure it has no defects. Software testing companies and organizations have reported that testing problems increased their testing time and budget by over 50%. As a result, the product's release is delayed, which attracts customer complaints. Test cases are a vital part of the testing process; therefore, quality test cases that are likely to identify defects and meet users' requirements must be chosen. This paper aims to identify and verify the characteristics, sub-characteristics, and metrics that construct the proposed model for producing high-quality black-box test cases in an Agile Software Development environment. An expert review approach was used to verify the proposed model. Ten academic and six industry experts contributed to this verification. The results showed that six characteristics, 22 sub-characteristics, and 56 metrics were accepted for inclusion in the proposed model. Furthermore, the findings revealed that the proposed model is comprehensive, consistent, relevant, and well-organized for Agile projects.
Article
Empirical validation of software metrics used to predict software quality attributes is important to ensure their practical relevance in software organizations. The aim of this work is to find the relation of object-oriented (OO) metrics to fault proneness at different severity levels of faults. For this purpose, different prediction models have been developed using regression and machine learning methods. We evaluate and compare the performance of these methods to find which performs better at different severity levels of faults, and we empirically validate the OO metrics given by Chidamber and Kemerer. The results of the empirical study are based on a public-domain NASA data set. The performance of the predicted models was evaluated using Receiver Operating Characteristic (ROC) analysis. The results show that the area under the curve (measured from the ROC analysis) of models predicted using high severity faults is lower than that of models predicted with respect to medium and low severity faults. However, the number of faults in the classes correctly classified by models predicted with respect to high severity faults is not low. This study also shows that the performance of the machine learning methods is better than that of the logistic regression method with respect to all severities of faults. Based on the results, it is reasonable to claim that models targeted at different severity levels of faults could help in planning and executing testing by focusing resources on fault-prone parts of the design and code that are likely to cause serious failures.
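As a minimal sketch of the kind of ROC evaluation this abstract describes (the data below is synthetic and the model choice is an assumption, not the study's actual setup), one could compute the area under the curve with scikit-learn:

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)

# Synthetic stand-in for CK object-oriented metrics (e.g. WMC, CBO, RFC)
# and a fault-proneness label at one severity level.
X = rng.normal(size=(500, 3))
y = (X[:, 0] + 0.5 * X[:, 1] + rng.normal(scale=1.0, size=500) > 0).astype(int)

X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

model = LogisticRegression().fit(X_train, y_train)
scores = model.predict_proba(X_test)[:, 1]

# Area under the ROC curve: 1.0 is a perfect ranking, 0.5 is chance.
print(f"AUC = {roc_auc_score(y_test, scores):.3f}")
```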
Article
Projects following iterative software development methodologies must still be managed so as to maximize quality and minimize costs. However, there are indications that predicting test effort in iterative development is challenging, and there currently seem to be no models for test effort prediction. This paper introduces and validates a dynamic Bayesian network for predicting test effort in iterative software development. The proposed model is validated using data from two industrial projects. The accuracy of the results has been verified through different prediction accuracy measurements and statistical tests. The results from the validation confirm that the model can predict test effort in iterative projects accurately.
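As a small illustration of one common prediction accuracy measurement for effort models (MMRE; the effort figures below are made up, not data from the paper's validation), a sketch might look like:

```python
# Mean Magnitude of Relative Error (MMRE), a common accuracy measure
# for effort prediction models. The actual/predicted values below are
# hypothetical, not data from the cited study.

def mmre(actual: list[float], predicted: list[float]) -> float:
    """Average of |actual - predicted| / actual over all observations."""
    return sum(abs(a - p) / a for a, p in zip(actual, predicted)) / len(actual)

actual_effort = [120.0, 80.0, 200.0, 150.0]      # person-hours, hypothetical
predicted_effort = [110.0, 95.0, 180.0, 160.0]   # model output, hypothetical

print(f"MMRE = {mmre(actual_effort, predicted_effort):.3f}")
```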