Fig 1
The OR-relation in the SPL domain, its semantics, and its similarity to the relation between a fault and the test cases revealing it.

Source publication
Article
Test-suite minimization is a key technique for optimizing the software testing process. Due to the need to balance multiple factors, multi-criteria test-suite minimization (MCTSM) has become a popular research topic in the recent decade. The MCTSM problem is typically modeled as an integer linear programming (ILP) problem and solved with weighted-sum s...
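As a minimal sketch of the ILP framing mentioned above (notation ours, not necessarily the paper's), a bi-criteria weighted-sum MCTSM over test cases t_1, ..., t_m and requirements r_1, ..., r_k could read:

$$
\min \sum_{j=1}^{m} \left( w_1\,\mathrm{cost}_j - w_2\,\mathrm{value}_j \right) x_j
\quad \text{s.t.} \quad \sum_{j\,:\,t_j\ \text{covers}\ r_i} x_j \ge 1 \;\;\forall i,
\qquad x_j \in \{0,1\},
$$

where x_j = 1 keeps test case t_j, every requirement must stay covered, and the weights w_1, w_2 trade execution cost against, e.g., fault-detection value.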

Contexts in source publication

Context 1
... In Figure 1(a), we show a general case of an OR-relation among features in an SPL system: feature f is the parent feature and features f_1 to f_n are the OR-relation subfeatures. In Figure 1(b), we use propositional logic to model the semantics behind this OR-relation, according to References [6, 98]. ...
Context 2
... In Figure 1(a), we show a general case of an OR-relation among features in an SPL system: feature f is the parent feature and features f_1 to f_n are the OR-relation subfeatures. In Figure 1(b), we use propositional logic to model the semantics behind this OR-relation, according to References [6, 98]. In Figure 1(c), we also use propositional logic to model the semantics behind a fault as well as the test cases that can reveal this fault. ...
Context 3
... In Figure 1(b), we use propositional logic to model the semantics behind this OR-relation, according to References [6, 98]. In Figure 1(c), we also use propositional logic to model the semantics behind a fault as well as the test cases that can reveal this fault. By analogy, the fault is semantically like the parent feature in SPL, and the test cases are semantically like the OR-relation subfeatures. ...
Context 4
... true according to the semantics in Figure 1(c). Here, ...
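For readers without access to the figure, the semantics these contexts describe can be sketched in propositional logic (symbols ours, following the usual SPL feature-model encoding [6, 98]):

$$
f \leftrightarrow (f_1 \lor f_2 \lor \cdots \lor f_n)
\qquad\text{and}\qquad
\mathit{fault} \leftrightarrow (t_1 \lor t_2 \lor \cdots \lor t_m),
$$

i.e., the parent feature f is present exactly when at least one of its OR-relation subfeatures is selected, and, analogously, the fault is revealed exactly when at least one of the test cases t_1 to t_m that can detect it is executed.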

Similar publications

Article
One crucial advantage of additive manufacturing for the optimization of lattice structures is the reduction in manufacturing constraints compared to classical manufacturing methods. To make full use of this advantage and to exploit the resulting potential, it is necessary that lattice structures be designed using optimization....

Citations

... Therefore, in our FSE work, these bounds are updated by using the maximum and minimum values discovered so far during the tuning to approximate the true scales. Note that using the true scales of the objectives (if known) or their close approximations for normalization is a widely used method in SBSE [98], [80], [89], [32], [4]. ...
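A minimal sketch of the bound-updating normalization this excerpt describes (class and method names are ours, hypothetical, not the cited tool's code):

```python
class DynamicNormalizer:
    """Min-max normalization whose bounds are the extreme objective
    values observed so far during tuning, approximating the true scales."""

    def __init__(self, n_objectives):
        self.lo = [float("inf")] * n_objectives   # running minima
        self.hi = [float("-inf")] * n_objectives  # running maxima

    def update(self, objectives):
        # Widen the bounds with each newly measured configuration.
        for k, v in enumerate(objectives):
            self.lo[k] = min(self.lo[k], v)
            self.hi[k] = max(self.hi[k], v)

    def normalize(self, objectives):
        # Map each objective into [0, 1] using the observed scales.
        return [
            (v - self.lo[k]) / (self.hi[k] - self.lo[k])
            if self.hi[k] > self.lo[k] else 0.0
            for k, v in enumerate(objectives)
        ]
```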
Article
Software configuration tuning is essential for optimizing a given performance objective (e.g., minimizing latency). Yet, due to the software's intrinsically complex configuration landscape and expensive measurement, success has been rather mild, particularly in preventing the search from being trapped in local optima. To address this issue, in this paper we take a different perspective. Instead of focusing on improving the optimizer, we work on the level of the optimization model and propose a meta multi-objectivization (MMO) model that considers an auxiliary performance objective (e.g., throughput in addition to latency). What makes this model distinct is that we do not optimize the auxiliary performance objective, but rather use it to make similarly-performing but different configurations less comparable (i.e., Pareto nondominated to each other), thus preventing the search from being trapped in local optima. Importantly, by designing a new normalization method, we show how to effectively use the MMO model without worrying about its weight, the only, yet highly sensitive, parameter that can affect its effectiveness. Experiments on 22 cases from 11 real-world software systems/environments confirm that our MMO model with the new normalization performs better than its state-of-the-art single-objective counterparts on 82% of the cases while achieving up to 2.09× speedup. For 68% of the cases, the new normalization also enables the MMO model to outperform the instance when using it with the normalization from our prior FSE work under pre-tuned best weights, saving a great amount of resources that would otherwise be necessary to find a good weight. We also demonstrate that the MMO model with the new normalization can consolidate recent model-based tuning tools on 68% of the cases with up to 1.22× speedup in general.
... Notable practical settings where MOIP problems find application include supply chain design, scheduling, financial planning, and logistics planning [18, 20]. MOIP problems are hard in practice, as the majority of them, even in their single-objective versions, fall into the category of computationally intractable problems [15, 21]. ...
... Here, FMOP is solved with the crisp single-objective weighting model proposed in [20-27] for problems with K objectives. In the following equations (27)-(33), μ_k denotes the level of achievement of the k-th objective: ...
... According to [21], in many situations, case study strategies are used to develop models with the help of relevant personal, group, social, or organizational information. Thus, they are similar to experimental studies and can be validated by similar methods. ...
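As a generic, hedged sketch of the kind of crisp weighting model the second excerpt above refers to (our notation; the paper's equations (27)-(33) may differ in detail), with achievement levels μ_k for K minimized objectives z_k:

$$
\max \sum_{k=1}^{K} w_k\,\mu_k
\quad\text{s.t.}\quad
\mu_k \le \frac{z_k^{\max} - z_k(x)}{z_k^{\max} - z_k^{\min}},\qquad
\mu_k \in [0,1],\;\; k = 1,\dots,K,
$$

so each μ_k grows towards 1 as the k-th objective approaches its best attainable value.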
Article
Adequate and desirable connections between suppliers and customers necessitate an appropriate flow of information. Therefore, promising and proper data collaboration in the supply chain is of tremendous significance. Thus, the study's main objective is to provide multiple objective programming models under uncertain conditions to assess the performance of suppliers. To meet that aim, a case study for the reliability assessment of the presented model is carried out. That section is associated with supply chain visibility (SCV). Likewise, the likelihood of unpredicted and undesirable incidents involving supply chain risk (SCR) is taken into consideration. The intimate relation between the visibility and the risk of the supply chain is deemed efficient for the performance of the supply chain. Incoherence in the maximization and minimization of SCR and SCV and other factors, including costs, capacity, or demand, necessitates multiple objective programming models to assess suppliers' performance to accomplish the aforementioned aims. The study's results indicate the high reliability of the proposed model. Besides, the numerical results reveal that decision-makers, in selecting suppliers, mainly decrease SCR and then attempt to enhance SCV. In conclusion, the model provided in the study can be a desirable model for analyzing and estimating supplier performance with SCR and SCV simultaneously.
... validation activities. These activities cover several optimization aspects, including automated test case generation [1, 2, 4, 20, 40, 52, 62, 66-68, 74, 87], test case selection/minimization [8, 75, 94, 97-99, 101] and test prioritization [14, 36, 38, 53]. ...
... The multi-objective test case selection problem can be formulated in two ways [97]: by using a weighted fitness function, where the multi-objective problem is converted into a single-objective one [94], or by adopting multiple objectives [98]. In this paper we opted for the second approach, although the proposed seeding strategies can be used with any kind of population-based search algorithm that is applied to solve the test case selection problem, including single-objective search algorithms. ...
... Engström et al. identified 28 techniques for regression test selection [35]. Besides evolutionary search-based approaches, other techniques have been proposed for test case selection, including multi-objective linear programming techniques [97], greedy-based algorithms [33], and reinforcement learning [91]. Our approach is intended to support population-based search-based test case selection techniques. ...
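The two formulations contrasted in these excerpts can be sketched as follows (function names ours; all objectives assumed minimized):

```python
def weighted_fitness(objectives, weights):
    """Weighted formulation: collapse all objectives into one
    scalar fitness via a weighted sum."""
    return sum(w * v for w, v in zip(weights, objectives))


def dominates(a, b):
    """Pareto formulation: a dominates b if it is no worse on every
    objective and strictly better on at least one."""
    return (all(x <= y for x, y in zip(a, b))
            and any(x < y for x, y in zip(a, b)))
```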
Article
The time it takes to test software systems is usually long. Search-based test selection has been a widely investigated technique to optimize the testing process. In this paper, we propose a set of seeding strategies for the test case selection problem that generate the initial population of Pareto-based multi-objective algorithms, with the goals of (1) helping to find an overall better set of solutions and (2) enhancing the convergence of the algorithms. The seeding strategies were integrated with four state-of-the-art multi-objective search algorithms and applied in two contexts where regression testing is paramount: (1) simulation-based testing of Cyber-Physical Systems and (2) Continuous Integration. For the first context, we evaluated our approach by using six fitness function combinations and six independent case studies, whereas in the second context we derived a total of six fitness function combinations and employed four case studies. Our evaluation suggests that some of the proposed seeding strategies are indeed helpful for solving the multi-objective test case selection problem. Specifically, the proposed seeding strategies provided a higher convergence of the algorithms towards optimal solutions in 96% of the studied scenarios and an overall cost-effectiveness with a standard search budget in 85% of the studied scenarios.
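For intuition, one illustrative way to seed an initial population (our sketch, not one of the paper's specific strategies) is to mix a domain-informed individual, e.g. from greedy coverage, with random ones:

```python
import random

def greedy_seed(coverage):
    """Greedily pick test cases until all coverable requirements are hit.
    `coverage` maps each test-case index to the set of requirements it covers."""
    uncovered = set().union(*coverage.values())
    selected = set()
    while uncovered:
        best = max(coverage, key=lambda t: len(coverage[t] & uncovered))
        if not coverage[best] & uncovered:
            break  # remaining requirements cannot be covered
        selected.add(best)
        uncovered -= coverage[best]
    return selected

def initial_population(coverage, size):
    """Seed one greedy individual; fill the rest with random bit strings.
    Assumes test cases are indexed 0..n-1."""
    n = len(coverage)
    seed = [1 if t in greedy_seed(coverage) else 0 for t in range(n)]
    return [seed] + [[random.randint(0, 1) for _ in range(n)]
                     for _ in range(size - 1)]
```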
... The former searches for a good approximation of the Pareto front, from which the stakeholders make their choice [62,63,83]. The latter directly searches for a single solution that maximizes the aggregated scalar fitness of the objectives (e.g., by weighted sum [7,41,102]), on the basis of a set of weights (also called a weight vector) that reflects relative importance between the objectives. ...
... In SBSE, there exist some studies that have touched on the comparison between weighted search and Pareto search. For weighted search, those studies use the given weight vector to simplify the problem and guide the search, but when it comes to comparing the results returned by weighted search with those by Pareto search, they either considered generic quality indicators (e.g., hypervolume [109]) which are designed for Pareto search (such as [80,98,102]), or the value on every objective of the SBSE problem, e.g., [105]. Such comparisons apparently disadvantage weighted search since the stakeholders' preferences (weights) are only used in the search but not in the evaluation. ...
... That is, they evaluate weighted search under the assumption that the preferences are unavailable. This inevitably leads to the conclusion that Pareto search is always better than weighted search [80,98,102,105]. In this work, we aim to make a fairer and more comprehensive comparison between weighted search and Pareto search under clear preferences in multi-objective SBSE. ...
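One way to avoid the evaluation bias these excerpts point out is to score the output of both searches with the same preference-aware measure; a hedged sketch (ours, not the paper's protocol):

```python
def best_weighted_utility(solutions, weights):
    """Score a returned solution set under the stakeholders' weights:
    the set is as good as its best member w.r.t. the weighted sum
    (objectives assumed minimized and pre-normalized)."""
    return min(sum(w * v for w, v in zip(weights, s)) for s in solutions)
```

Applying this both to the single solution of weighted search and to the front returned by Pareto search uses the preferences in the evaluation as well as in the search.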
Preprint
In the presence of multiple objectives to be optimized in Search-Based Software Engineering (SBSE), Pareto search has been commonly adopted. It searches for a good approximation of the problem's Pareto optimal solutions, from which the stakeholders choose the most preferred solution according to their preferences. However, when clear preferences of the stakeholders (e.g., a set of weights that reflect the relative importance between objectives) are available prior to the search, weighted search is believed to be the first choice, since it simplifies the search via converting the original multi-objective problem into a single-objective one and enables the search to focus only on what the stakeholders are interested in. This paper questions such a "weighted search first" belief. We show that the weights can, in fact, be harmful to the search process even in the presence of clear preferences. Specifically, we conduct a large-scale empirical study that consists of 38 systems/projects from three representative SBSE problems, together with two types of search budget and nine sets of weights, leading to 604 cases of comparison. Our key finding is that weighted search reaches a certain level of solution quality by consuming relatively few resources at the early stage of the search; however, Pareto search is, the majority of the time (up to 77% of the cases), significantly better than its weighted counterpart, as long as we allow a sufficient, but not unrealistic, search budget. This, together with other findings and actionable suggestions in the paper, allows us to codify pragmatic and comprehensive guidance on choosing between weighted and Pareto search for SBSE under the circumstance that clear preferences are available. All code and data can be accessed at: https://github.com/ideas-labo/pareto-vs-weight-for-sbse.
... Therefore, in our FSE work, these bounds are updated by using the maximum and minimum values discovered so far during the tuning to approximate the true scales. Note that using the true scales of the objectives (if known) or their close approximations for normalization is a widely used method in SBSE [86], [69], [78], [29], [3]. ...
Preprint
Software configuration tuning is essential for optimizing a given performance objective (e.g., minimizing latency). Yet, due to the software's intrinsically complex configuration landscape and expensive measurement, success has been rather mild, particularly in preventing the search from being trapped in local optima. To address this issue, in this paper we take a different perspective. Instead of focusing on improving the optimizer, we work on the level of the optimization model and propose a meta multi-objectivization (MMO) model that considers an auxiliary performance objective (e.g., throughput in addition to latency). What makes this model unique is that we do not optimize the auxiliary performance objective, but rather use it to make similarly-performing but different configurations less comparable (i.e., Pareto nondominated to each other), thus preventing the search from being trapped in local optima. Importantly, we show how to effectively use the MMO model without worrying about its weight, the only yet highly sensitive parameter that can affect its effectiveness. Experiments on 22 cases from 11 real-world software systems/environments confirm that our MMO model with the new normalization performs better than its state-of-the-art single-objective counterparts on 82% of the cases while achieving up to 2.09x speedup. For 67% of the cases, the new normalization also enables the MMO model to outperform the instance when using it with the normalization used in our prior FSE work under pre-tuned best weights, saving a great amount of resources that would otherwise be necessary to find a good weight. We also demonstrate that the MMO model with the new normalization can consolidate Flash, a recent model-based tuning tool, on 68% of the cases with 1.22x speedup in general.
... Orsan Ozener and Hasan Sozer (Orsan Ozener & Sozer, 2020) proposed a formulation of the test-suite minimization problem that addresses the issues of heuristic techniques and of integer linear programming approaches focusing on a single criterion or bi-criteria. Also in 2020, Yinxing Xue and Yan Li (Xue & Li, 2020) showed that integer linear programming can model multi-criteria test-suite minimization, and then proposed a multi-objective integer programming approach to solve it. ...
Article
The test-suite minimization problem is an essential problem in software engineering, as its application helps to improve software quality. This paper proposes a quantum algorithm that solves the test-suite minimization problem with high probability in $O(\sqrt{2^n})$, where $n$ is the number of test cases. It generates an incomplete superposition to find the best solution. It also handles the case of a non-uniform amplitude distribution for systems with multiple solutions. The proposed algorithm uses amplitude amplification techniques to search for the minimum number of test cases required to test all the requirements. It employs two quantum search algorithms: Younes et al.'s algorithm for quantum searching via entanglement and partial diffusion, used to prepare incomplete superpositions that represent different search spaces such that the number of test cases is incremented in each search space, and an updated version of Arima's algorithm to handle the multiple-solutions case. The updated Arima's algorithm searches for a quantum state that satisfies an oracle representing the instance of the test-suite minimization problem.
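For context on the quoted $O(\sqrt{2^n})$ bound (standard amplitude-amplification arithmetic, not a claim about the specifics of this algorithm): with M marked assignments among N = 2^n candidate suites, roughly

$$
\left\lfloor \frac{\pi}{4}\sqrt{\frac{N}{M}} \right\rfloor = O\!\left(\sqrt{2^n}\right)
$$

iterations suffice to measure a solution with high probability.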
Article
The multi-objective testing resource allocation problem (MOTRAP) is concerned with how to reasonably plan the testing time of software testers so as to save cost and improve reliability as much as possible. The feasible solution space of a MOTRAP is determined by its variables (i.e., the time invested in each component) and constraints (e.g., the pre-specified reliability, cost, or time). Although a variety of state-of-the-art constrained multi-objective optimisers can be used to find individual solutions in this space, their search remains inefficient and expensive because this space is very tiny compared to the overall search space. The decision maker may often suffer a prolonged but unsuccessful search that fails to return a feasible solution. In this work, we first formulate a heavily constrained MOTRAP on the basis of an architecture-based model, in which reliability, cost, and time are optimised under pre-specified multiple constraints on reliability, cost, and time. Then, to estimate the feasible solution space of this specific MOTRAP, we develop theoretical and algorithmic approaches that deduce new, tighter lower and upper bounds on the variables from the constraints. Importantly, our approach can help the decision maker identify whether their constraint settings are practicable; meanwhile, the derived bounds can closely enclose the tiny feasible solution space and help off-the-shelf constrained multi-objective optimisers keep the search within the feasible solution space as much as possible. Additionally, to make further use of these bounds, we propose a generalised bound constraint handling method that can be readily employed by constrained multi-objective optimisers to pull infeasible solutions back into the estimated space with a theoretical guarantee. Finally, we evaluate our approach on application and empirical cases. Experimental results reveal that our approach significantly enhances the efficiency, effectiveness, and robustness of off-the-shelf constrained multi-objective optimisers and state-of-the-art bound constraint handling methods at finding high-quality solutions for the decision maker. These improvements may help the decision maker take the stress out of setting constraints and selecting constrained multi-objective optimisers, and facilitate test planning more efficiently and effectively.
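A minimal sketch of a bound constraint handling rule of the kind described (simple clamping; the paper's generalised method may differ):

```python
def repair_into_bounds(x, lower, upper):
    """Pull an infeasible solution back into the derived per-variable
    bounds by clamping each testing-time variable to [lower_i, upper_i]."""
    return [min(max(v, lo), hi) for v, lo, hi in zip(x, lower, upper)]
```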
Article
In the presence of multiple objectives to be optimized in Search-Based Software Engineering (SBSE), Pareto search has been commonly adopted. It searches for a good approximation of the problem's Pareto optimal solutions, from which the stakeholders choose the most preferred solution according to their preferences. However, when clear preferences of the stakeholders (e.g., a set of weights that reflect the relative importance between objectives) are available prior to the search, weighted search is believed to be the first choice, since it simplifies the search via converting the original multi-objective problem into a single-objective one and enables the search to focus only on what the stakeholders are interested in. This paper questions such a "weighted search first" belief. We show that the weights can, in fact, be harmful to the search process even in the presence of clear preferences. Specifically, we conduct a large-scale empirical study that consists of 38 systems/projects from three representative SBSE problems, together with two types of search budget and nine sets of weights, leading to 604 cases of comparison. Our key finding is that weighted search reaches a certain level of solution quality by consuming relatively few resources at the early stage of the search; however, Pareto search is, the majority of the time (up to 77% of the cases), significantly better than its weighted counterpart, as long as we allow a sufficient, but not unrealistic, search budget. This is a beneficial result, as it uncovers a potentially new "rule of thumb" for the SBSE community: even when clear preferences are available, it is recommended to always consider Pareto search by default for multi-objective SBSE problems, provided that solution quality is more important. Weighted search, in contrast, should only be preferred when the resource/search budget is limited, especially for expensive SBSE problems. This, together with other findings and actionable suggestions in the paper, allows us to codify pragmatic and comprehensive guidance on choosing between weighted and Pareto search for SBSE under the circumstance that clear preferences are available. All code and data can be accessed at: https://github.com/ideas-labo/pareto-vs-weight-for-sbse.