Article

Combining Taguchi and Response Surface Philosophies: A Dual Response Approach


Abstract

G. Taguchi and his school have made significant advances in the use of experimental design and analysis in industry. Of particular significance is their promotion of the use of statistical methods to achieve certain objectives in a mean response while simultaneously minimizing the variance. Unfortunately, many statisticians criticize the Taguchi techniques of analysis, particularly those based on the signal-to-noise ratio. This paper shows how the dual response approach of optimizing a “primary response” function while satisfying conditions on a “secondary response” function can be used to achieve the goals of the Taguchi philosophy within a more rigorous statistical methodology. An example illustrates this point.
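For orientation, the dual response idea described in this abstract can be sketched as a constrained optimization: the "primary response" is optimized while the "secondary response" is held to a condition. The Python sketch below (SciPy; the quadratic surfaces are made-up placeholders, not coefficients from any published example) minimizes an estimated standard deviation surface subject to an estimated mean surface hitting a target.

    # Minimal sketch of a dual response optimization: minimize the fitted
    # standard deviation surface while constraining the fitted mean to a target.
    # The surfaces below are illustrative placeholders, not published models.
    import numpy as np
    from scipy.optimize import minimize

    def mu_hat(x):
        # placeholder second-order surface for the mean response
        return 500.0 + 30.0 * x[0] + 20.0 * x[1] - 5.0 * x[0] ** 2 - 3.0 * x[1] ** 2

    def sigma_hat(x):
        # placeholder second-order surface for the standard deviation response
        return 40.0 + 10.0 * x[0] + 8.0 * x[1] + 2.0 * x[0] ** 2 + 2.0 * x[1] ** 2

    target = 500.0              # "target is best" condition on the mean (secondary response)
    bounds = [(-1.0, 1.0)] * 2  # cuboidal experimental region in coded units

    result = minimize(
        sigma_hat,
        x0=np.zeros(2),
        method="SLSQP",
        bounds=bounds,
        constraints=[{"type": "eq", "fun": lambda x: mu_hat(x) - target}],
    )
    print(result.x, mu_hat(result.x), sigma_hat(result.x))

Swapping which surface plays the primary role (for example, maximizing the mean subject to a ceiling on the standard deviation) follows the same pattern.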


... In this example, the proposed method is illustrated using the printing ink problem taken from Box and Draper [23]. Many other authors have also discussed the printing ink problem [18,20,21,24–28] as an ideal example for presenting the properties of their approaches. ...
... Assuming that second-order models were adequate, the fitted responses for the mean and standard deviation of the characteristic of interest were those given by Vining and Myers [24]. In both fitted models, the group means and the group standard deviations were used as the responses (rather than all of the individual experimental observations). ...
... Using the default expected loss criterion and the given data (T_μ = 500, K = 1, Δ = 100), the results of the proposed method, based on a cuboidal experimental region, are summarized and compared with those of other authors in Table 6. As can be seen in Table 6, the optimal results obtained by the different methods are similar and comparable, with the exception of the results presented by Vining and Myers [24]. Figure 13 displays the overlaid contour plot of the PLF estimated mean and standard deviation responses when the value of x_2 (pressure) is kept at its optimal level. ...
Article
Full-text available
Taguchi first developed the quality loss function to better estimate the economic losses incurred by manufacturers and customers caused by quality characteristics being off-target. The quality loss function measures the quality loss caused by a deviation of a quality characteristic from its defined target value. Several researchers have proposed different revised loss functions for overcoming some flaws of the Taguchi loss function. This paper recommends a new family of quality loss functions, which is very flexible, simple, and easy to implement. Three real case studies demonstrated the usability and capabilities of the proposed new loss function for quantifying and predicting quality losses.
... For example, a panel of practitioners and researchers edited by Nair et al. (1992) discussed the implementation of the Taguchi method. Other researchers, including Kackar (1985), Vining and Myers (1990), and Myers and Montgomery (1995), highlighted a number of shortcomings of the Taguchi method for robust design. In view of the weaknesses of the Taguchi method, Box and Draper (1987) developed response surface methodology (RSM) to find the optimal settings of the input variables that either maximize or minimize the given response functions. ...
... In this situation, both the mean and the standard deviation of the response should be considered when determining the optimum conditions for the input variables. Continued research in this area led to the development of dual response surface optimization (DRSO), proposed by Vining and Myers (1990), which attempts to optimize both the mean and the standard deviation of the response. Lin and Tu (1995) pointed out that the Vining and Myers (1990) approach (VM) does not guarantee a global optimum because the constraint is restricted to a specific value. ...
... Continued research in this area led to the development of dual response surface optimization (DRSO), proposed by Vining and Myers (1990), which attempts to optimize both the mean and the standard deviation of the response. Lin and Tu (1995) pointed out that the Vining and Myers (1990) approach (VM) does not guarantee a global optimum because the constraint is restricted to a specific value. In this respect, Lin and Tu (1995) proposed the LT optimization scheme, which is based on a mean squared error (MSE) objective function that allows a small bias. ...
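For reference, the LT scheme referred to in these excerpts is usually presented as the unconstrained minimization of a mean squared error criterion (generic notation, not the cited papers' exact symbols):

    \mathrm{MSE}(\mathbf{x}) = \left[\hat{y}_{\mu}(\mathbf{x}) - T\right]^{2} + \hat{y}_{\sigma}^{2}(\mathbf{x})

where ŷ_μ and ŷ_σ are the fitted mean and standard deviation surfaces and T is the target; the first (bias) term is allowed to be slightly nonzero if that buys a larger reduction in the second (variance) term, which is exactly the relaxation of the VM equality constraint described above.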
Article
Full-text available
The Lin and Tu (LT) optimization scheme, which is based on a mean squared error (MSE) objective function, is the commonly used optimization scheme for estimating the optimal mean response in robust dual response surface optimization. The ordinary least squares (OLS) method is often used to estimate the parameters of the process location and process scale models of the responses. However, the OLS is not efficient for unbalanced design data, since such data make the model errors heteroscedastic, which produces large standard errors of the estimates. To remedy this problem, a weighted least squares (WLS) method is put forward. Since the LT optimization scheme produces a large difference between the estimate of the mean response and the experimenter's actual target value, we propose a new optimization scheme. The OLS and the WLS are integrated in the proposed scheme to determine the optimal solution of the estimated responses. The results of the simulation study and a real example indicate that the WLS is superior to the OLS method irrespective of the optimization scheme used. Moreover, the combination of the WLS and the proposed optimization scheme (PFO) gives more efficient results than the WLS combined with the LT optimization scheme.
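As a rough illustration of the OLS-versus-WLS point made in this abstract (not the authors' actual implementation), the sketch below fits the same quadratic mean model by OLS and by WLS with weights inversely proportional to an assumed variance, using statsmodels and synthetic heteroscedastic data.

    import numpy as np
    import statsmodels.api as sm

    rng = np.random.default_rng(0)

    # Synthetic, heteroscedastic data: the error spread grows with |x| (illustrative only).
    x = rng.uniform(-1, 1, size=60)
    sigma = 0.5 + 1.5 * np.abs(x)
    y = 10 + 3 * x - 2 * x ** 2 + rng.normal(scale=sigma)

    X = sm.add_constant(np.column_stack([x, x ** 2]))

    ols_fit = sm.OLS(y, X).fit()
    # WLS with weights proportional to 1 / variance; here the true sigma is "known",
    # whereas in practice the weights would come from a fitted dispersion model.
    wls_fit = sm.WLS(y, X, weights=1.0 / sigma ** 2).fit()

    print("OLS standard errors:", ols_fit.bse)
    print("WLS standard errors:", wls_fit.bse)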
... Various optimization methods of dual response surface optimization (DRSO), which attempt to simultaneously optimize both the mean and the standard deviation of the responses, were proposed in the literature [4][5][6][7]. Vining and Myers [8] were the first to introduce dual response surface optimization (DRSO) using response surface methodology (RSM). We denote this method as the VM method. ...
... Nonetheless, this method concentrates more on the bias and forces the estimated mean response to be close to the target value, which is not adequately efficient. Moreover, no specific value of the penalty constant ξ was given for Equation (8). Hence, practitioners find these approaches difficult, since they must rely on subjective trial-and-error judgment. ...
... To clarify the idea, the process mean, the process standard deviation, and the criterion values (CV) are presented in Table 2 for the catapult data. In this example, the design points 1, 3, 6, 8, 9, 10, 11, 12, 13, and 14 were removed from the analysis since their corresponding CV exceeded the median CV = 27.72; consequently, only the design points 2, 4, 5, 7, 15, 16, 17, 18, 19, and 20 remained. ...
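For orientation, a penalty-function objective of the kind discussed in these excerpts can be written generically as (an illustration only; the exact form of the cited Equation (8) is not reproduced here):

    \min_{\mathbf{x}} \; \hat{y}_{\sigma}^{2}(\mathbf{x}) + \xi \left[\hat{y}_{\mu}(\mathbf{x}) - T\right]^{2}, \qquad \xi > 0,

where ξ is the penalty constant; the lack of a rule for choosing ξ is the gap that the penalty-constant determination scheme in the following abstract is intended to fill.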
Article
Full-text available
The dual response surface methodology is a widely used technique in industrial engineering for simultaneously optimizing both the process mean and process standard deviation functions of the response variables. Many optimization techniques have been proposed to optimize the two fitted response surface functions, including the penalty function method (PM). The PM method has been shown to be more efficient than some existing methods. However, the drawback of the PM method is that it does not have a specific rule for determining the penalty constant; thus, in practice, practitioners will find this method difficult since it depends on subjective judgments. Moreover, in most dual response optimization methods, the sample mean and sample standard deviation of the response are computed with estimators that are not outlier resistant. The ordinary least squares (OLS) method is also usually used to estimate the parameters of the process mean and process standard deviation functions. Nevertheless, not many statistics practitioners are aware that the OLS procedure and the classical sample mean and sample standard deviation are easily influenced by the presence of outliers. Alternatively, instead of using those classical methods, we propose using high-breakdown and highly efficient robust MM-mean, robust MM-standard deviation, and robust MM regression estimators to overcome these shortcomings. We also propose a new optimization technique that incorporates a systematic method to determine the penalty constant. We call this method the penalty function method based on the decision maker's (DM) preference structure in obtaining the penalty constant, denoted as PMDM. The performance of our proposed method is investigated by a Monte Carlo simulation study and real examples that employ symmetrical factorial designs of experiments (DOE). The results signify that our proposed PMDM method is the most efficient method compared to the other commonly used methods in this study.
... Unfortunately, the orthogonal arrays (OAs), statistical analysis, and signal-to-noise ratios associated with this technique were criticized by Box et al. [2], Leon et al. [3], Box [4], and Nair [5]. Therefore, Vining and Myers [6] proposed the dual response (DR) approach based on response surface methodology (RSM), in which the process mean and variance are estimated separately as functions of control factors. The result is an RD optimization model in which the process mean is prioritized by setting it as a constraint and the process variability is set as an objective function. ...
... The printing data example used by Vining and Myers [6] and Lin and Tu [9] was selected to demonstrate the application of the proposed methods. The printing experiment investigates the effects of speed (x 1 ), pressure (x 2 ), and distance (x 3 ) on a printing machine's ability to add colored ink to a package (y). ...
... In this case study, the target is τ_2 = 500. The experimental data are given in Vining and Myers [6]. The estimated mean and standard deviation functions are those given by the LSM-based RSM. The information used for the RD modeling methods after training, including the associated transfer functions, training functions, NN architectures (number of inputs, number of hidden neurons, number of outputs), and number of epochs in the case study, is summarized in Tables 10 and 11. ...
Article
Full-text available
In robust design (RD) modeling, the response surface methodology (RSM) based on the least-squares method (LSM) is a useful statistical tool for estimating functional relationships between input factors and their associated output responses. Neural network (NN)-based models provide an alternative means of executing input-output functions without the assumptions necessary with LSM-based RSM. However, current NN-based estimation methods do not always provide suitable response functions. Thus, there is room for improvement in the realm of RD modeling. In this study, first, a new NN-based RD modeling procedure is proposed to obtain the process mean and standard deviation response functions. Second, RD modeling methods based on the feed-forward back-propagation neural network (FFNN), cascade-forward back-propagation neural network (CFNN), and radial basis function network (RBFN) are proposed. Third, two simulation studies are conducted using a given true function to verify the proposed three methods. Fourth, a case study is examined to illustrate the potential of the proposed approach. In conclusion, a comparative analysis of the three feed-forward NN structure-based modeling methods proposed in this study and the conventional LSM-based RSM showed that the proposed methods gave significantly lower values of the expected quality loss (EQL) and various variability indicators.
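As a rough sketch of the NN-based dual response modeling idea in this abstract (this is not the authors' FFNN/CFNN/RBFN code), one can fit one small network to the replicate means and another to the replicate standard deviations and then query both at candidate factor settings; the example below uses scikit-learn and entirely synthetic data.

    import numpy as np
    from sklearn.neural_network import MLPRegressor

    rng = np.random.default_rng(1)

    # Hypothetical designed experiment: 20 runs x 3 coded factors, 3 replicates per run.
    X = rng.uniform(-1, 1, size=(20, 3))
    replicates = (
        50 + 30 * X[:, 0:1] - 20 * X[:, 1:2]
        + rng.normal(scale=5 + 5 * np.abs(X[:, 2:3]), size=(20, 3))
    )

    y_mean = replicates.mean(axis=1)        # process mean response
    y_std = replicates.std(axis=1, ddof=1)  # process standard deviation response

    mean_net = MLPRegressor(hidden_layer_sizes=(8,), max_iter=5000, random_state=0).fit(X, y_mean)
    std_net = MLPRegressor(hidden_layer_sizes=(8,), max_iter=5000, random_state=0).fit(X, y_std)

    x_new = np.array([[0.2, -0.5, 0.1]])
    print(mean_net.predict(x_new), std_net.predict(x_new))

The two fitted networks play the same role as the mean and standard deviation surfaces in LSM-based RSM and can be plugged into any of the dual response optimization schemes discussed elsewhere on this page.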
... Multiple response problems [6,8] are quite prevalent across various application areas, such as optimal design [9,10] and clinical trials [11,12]. Their goal is to find optimal solutions for a given multiple response problem. ...
... The goal is to develop experimental strategies to achieve some target condition for a characteristic of interest while simultaneously minimizing its dispersion. The optimization problem usually involves analyzing two models in a constrained optimization setting [10,14]. Thus, the problem is often solved with more than one objective function, for example via multi-objective programming (MOP) [15,16]. ...
... Following the notation of Vining and Myers (VM) [10], the primary and secondary responses can each be written as a main-effects model in the treatment factors. ...
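As a generic illustration of the main-effects form mentioned in this excerpt (the cited paper's exact notation is not reproduced here), the primary and secondary response models could be written as

    \hat{y}_{p}(\mathbf{x}) = \beta_{0} + \sum_{i} \beta_{i} x_{i}, \qquad
    \hat{y}_{s}(\mathbf{x}) = \gamma_{0} + \sum_{i} \gamma_{i} x_{i},

with one model optimized and the other constrained, as in the dual response approach.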
Article
Full-text available
For data analysis, learning treatment rules in stratified medicine requires the optimization of multiple responses. A common approach is to use a multi-objective function to find the optimal setting of the controllable factors. For patients, the optimal setting is a treatment regimen that yields the optimal value of the potential responses. However, subclasses of patients are often stratified by their covariates. Thus, this paper proposes a new model called constrained optimization for stratified treatment rules (COSTAR) with multiple responses. This model incorporates covariates to build separate models for the optimal responses and stratifies the patients with the balancing score from the covariates. The optimal solution enables us to choose the optimal treatment for each subclass of patients. Theoretical results guarantee the identifiability of the solutions with conditional optimal values of multiple responses from survival probabilities. Examples of experiments with factorial designs and survival data validate the efficacy of the proposed method. The results suggest that this method improves the significance of the parameters and the adjusted R² in fitting the primary response, while the unsupervised clustering method (i.e., k-means) does not. This method, with the fitting model, is more interpretable than the conventional method and provides optimal treatment rules for stratified patients.
... Nevertheless, a further challenge is the consideration of the multiple response case, starting from the seminal papers of Derringer and Suich (1980) and Khuri and Conlon (1981). In this direction, many authors have contributed towards improving this relevant topic in order to solve the problem of finding a unique solution that could be an optimal compromise with respect to process issues, noise, weighting, and computational aspects (Vining and Myers 1990; Lin and Tu 1995; Tang and Xu 2002; Del Castillo 2007). Recently, further studies have attempted to solve this problem by considering both data analysis and external or subjective decisions (Lee, Kim, and Köksalan 2012), or by defining a Bayesian model-averaging approach based on the building of a loss function (Ng 2010). ...
... As also described in the introductory section, one of the innovative contributions of this study is the joint optimization (in a multiple response case) using only one joint split-plot estimated model. When speaking about robust process optimization, we should consider the recent developments in this field, starting from Taguchi's two-step procedure (Nair 1992) and the seminal paper on the dual response approach by Vining and Myers (1990), where the expected value of the response (process variable) is optimized with respect to the objective value, also called the target (τ), while simultaneously minimizing the process variability. Together with the dual response approach, the robust design concept was introduced by Taguchi in order to make a product (or a process) insensitive to environmental noise and/or other internal or external sources of variability; see Nair (1992) for a detailed discussion of the Taguchi method. ...
... All the random effects, estimated through the joint modeling (formula (7)), are also included, although only through the estimated confidence limits. Moreover, through formula (12) we can perform the dual response approach (Vining and Myers 1990), in which the adjustment to the target value and the minimization of the process variability are performed simultaneously. ...
Article
Full-text available
This paper deals with a proposal for joint modeling and process optimization for split-plot designs analyzed through mixed response surface models. It addresses the following main issues: i) the building of a joint mixed response surface model for a multiple response situation, by defining only one response through which specific coefficients are included for studying the association among the responses; ii) the consideration of fixed as well as random effects within a joint modeling and optimization context; iii) the achievement of an optimal solution by involving specific as well as common coefficients for the responses. We illustrate our contribution through a case study related to a split-plot design on electronic components of printed circuit boards (PCBs); we obtain satisfactory results confirming the validity of this contribution, where the qualitative factor PCB is also studied and optimized.
... The most significant alternative to Taguchi's approach is the dual-response model approach based on the response surface methodology (RSM) [8]. In this approach, the process mean and variance (or standard deviations) are approximated as two separate functions of input factors based on the LSM. ...
... In general, the estimated second-order response surface functions are used to analyze RD problems. The estimated process mean and standard deviation functions proposed by Vining and Myers [8] can be defined as second-order polynomials in the control factors. The remainder of the study is organized as follows: Section 2 introduces the conventional LSM-based RSM. Section 3 describes the functional-link-NN-based dual-response estimation model. ...
... In general, the estimated second-order response surface functions are used to analyze RD problems. The estimated process mean and standard deviation functions proposed by Vining and Myers [8] can be defined as second-order models in the vector of control factors x = (x_1, ..., x_i, ..., x_k). ...
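The second-order forms referred to in these excerpts are conventionally written as (generic coefficient symbols, not necessarily the cited paper's notation):

    \hat{y}_{\mu}(\mathbf{x}) = \hat{\beta}_{0} + \sum_{i=1}^{k} \hat{\beta}_{i} x_{i} + \sum_{i=1}^{k} \hat{\beta}_{ii} x_{i}^{2} + \sum_{i<j} \hat{\beta}_{ij} x_{i} x_{j},

    \hat{y}_{\sigma}(\mathbf{x}) = \hat{\gamma}_{0} + \sum_{i=1}^{k} \hat{\gamma}_{i} x_{i} + \sum_{i=1}^{k} \hat{\gamma}_{ii} x_{i}^{2} + \sum_{i<j} \hat{\gamma}_{ij} x_{i} x_{j},

one surface for the process mean and one for the process standard deviation, both in the k control factors x_1, ..., x_k.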
Article
Full-text available
In the field of robust design, most estimation methods for output responses of input factors are based on the response surface methodology (RSM), which makes several assumptions regarding the input data. However, these assumptions may not consistently hold in real-world industrial problems. Recent studies using artificial neural networks (ANNs) indicate that input–output relationships can be effectively estimated without the assumptions mentioned above. The primary objective of this research is to generate a new, robust design dual-response estimation method based on ANNs. First, a second-order functional-link-NN-based robust design estimation approach has been proposed for the process mean and standard deviation (i.e., the dual-response model). Second, the optimal structure of the proposed network is defined based on the Bayesian information criterion. Finally, the estimated response functions of the proposed functional-link-NN-based estimation method are applied and compared with that obtained using the conventional least squares method (LSM)-based RSM. The numerical example results imply that the proposed functional-link-NN-based dual-response robust design estimation model can provide more effective optimal solutions than the LSM-based RSM, according to the expected quality loss criteria.
... Box [4] studied the confounding of the mean and variance in the signal-to-noise analysis (the signal-to-noise ratio being a summary statistic used to evaluate the performance of a system relative to the variation caused by noise), which weakened Taguchi's approach. Vining and Myers [5] further incorporated the idea of separate response surfaces for the process mean and the process variance to minimize variation, optimize performance, and improve quality. ...
... It must therefore be taken into account to avoid poor estimation and prediction [5]. ...
Article
Robust parameter design is a principle in quality improvement methodologies that is directed towards reducing the effects of errors posed by either the noise factors or the control factors. Response surface methodology is an effective approach to robust parameter design. Previous studies discussed robust parameter design based on the response surface model by considering measurement errors in the control variables for a single response variable. However, in process design, determining optimal levels of the control variables is an important issue in problems with several outputs. This study therefore investigates the impacts of measurement errors in the levels of control variables on processes with multiple quality characteristics (responses). Different error variances were tested on the levels of the control variables, and response surface modeling and optimization were performed. The results showed that as measurement errors in the levels of the control variables increase, the coefficients of determination for the multi-response and the expected quality loss deviate from what is obtained in the initial state. It can therefore be concluded that measurement errors in the levels of the control variables exert an impact on robust parameter design for multiple responses.
... A response surface model based on a Bayesian multiple regression method has been proposed to deal with the correlation between multiple responses, the conflicts of multi-objective optimization, and the uncertainty of model parameters in multi-response robust parameter design [14]. A dual response surface method has been proposed by combining Taguchi's robust parameter design idea with response surface design [15]. The dual response surface model is intended to optimize the mean and variance of the quality characteristics at the same time, but it essentially reduces to an ε-constraint approach, which selects only one objective response as the optimization objective while the remaining (k−1) objectives are treated as constraints [15-17]. ...
... A dual response surface method has been proposed by combining Taguchi's robust parameter design idea with response surface design [15]. The dual response surface model is intended to optimize the mean and variance of the quality characteristics at the same time, but it essentially reduces to an ε-constraint approach, which selects only one objective response as the optimization objective while the remaining (k−1) objectives are treated as constraints [15-17]. Therefore, in order to conduct robust optimization, minimizing S and keeping y close to the target should be treated with individual models at the same time, so that the essence of robust design can be reflected rationally. ...
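In the constraint-style formulation alluded to above, one response is taken as the objective and the others are held as constraints; for the dual response case this is commonly written as (generic notation):

    \min_{\mathbf{x} \in R} \; \hat{y}_{\sigma}(\mathbf{x}) \quad \text{subject to} \quad \hat{y}_{\mu}(\mathbf{x}) = \tau,

where R is the experimental region and τ is the target for the mean.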
Article
Full-text available
This paper aims to develop a probabilistic approach for robust design with orthogonal experimental methodology for the target-is-best case, on the basis of probabilistic multi-objective optimization. In the treatment, the difference between the target value and the arithmetic mean value of the performance indicators of the alternatives is taken as one objective to be minimized, and the square root of the mean squared error of the actual values of the performance indicators from the target value of the alternatives is taken as the second objective to be minimized; each objective contributes its partial preferable probability to the alternative individually. As an application example, the probabilistic method for multi-objective optimization is combined with orthogonal experimental methodology to conduct the optimum design, range analysis is then applied to the total preferable probability, and the ranking sequence of the total preferable probabilities of the alternatives is used to complete the optimization.
... With the advances in high-performance computing, numerical analysis, and optimization, a new field called robust design optimization (RDO) has emerged. In RDO, mean-variance optimization (MV) is one of the most direct and fundamental mathematical optimization formulations, following Taguchi's idea of not only optimizing the mean performance but also simultaneously reducing the variance [43]. ...
... Letting it be the maximizer of the inner problem at x, we can write the gradient of the DRO objective (Equation (43)) in terms of that maximizer. ...
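The mean-variance (MV) formulation mentioned in these excerpts is commonly expressed as a weighted combination of the expected performance and its variability under the uncertain parameters (a generic form, not the cited work's exact statement):

    \min_{\mathbf{x}} \; w_{1}\, \mathbb{E}_{\boldsymbol{\xi}}\!\left[ f(\mathbf{x}, \boldsymbol{\xi}) \right] + w_{2}\, \sqrt{\operatorname{Var}_{\boldsymbol{\xi}}\!\left[ f(\mathbf{x}, \boldsymbol{\xi}) \right]}, \qquad w_{1}, w_{2} \ge 0,

which follows Taguchi's idea of trading off mean performance against its variation.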
... Subsequently, the fitted model, known as a response surface, is applied to achieve robust performance through different optimization approaches. In this context, Vining and Myers [72] adopted the dual response surface (DRS) approach introduced by Myers and Carter [73] to solve RPD problems. In the DRS approach, two empirical models are established through RSM, one for the process mean ŷ(x) and the other for the process variance σ̂²(x), which are considered together to achieve optimal solutions. ...
... Since then, several optimization approaches have been proposed over the years based on the DRS to achieve robust performance. For example, Vining and Myers [72] and Lin and Tu [74] proposed the minimization of the mean squared error (MSE) as an optimization strategy, where the mean, variance, and target are combined into a single objective function, such as MSE = [ŷ(x) − τ]² + σ̂²(x). ...
Article
Full-text available
This study proposes a robust optimization strategy to analyze the trade-off between the conflicting objectives of minimizing the cutting time and the distance of the actual values of the mean and total roughness from their respective targets, along with the variation caused by flank wear. The motivation for this work is that the effect of tool wear is often neglected by traditional roughness optimization approaches in hard turning. The wear evolution causes an unwanted variation in roughness, compromising the process's ability to meet specifications and leading to the underutilization of cutting tools. The methodologies used were as follows: robust parameter design, response surface methodology, combined array, multivariate mean square error, and normal boundary intersection. Cutting speed, feed rate, and depth of cut were the process factors considered. An experimental study was performed to demonstrate the effectiveness of the strategy for the dry finishing turning of AISI 52100 hardened steel. The results showed that the effect of flank wear was statistically significant for both roughness measures. The lowest roughness values with maximum robustness are achieved at the expense of higher cutting times. As the process must be flexible to meet different specifications, the proposed strategy allows exploring other solutions that ensure that the machined parts produced throughout the tool life meet a roughness specification in the shortest possible cutting time. This decision-making support is the novelty and advantage of the proposed strategy. These solutions prevent the production of non-conforming parts and the underutilization of cutting tools and improve process productivity.
... In order to deal with these situations, we now describe a two-step modeling procedure that Myers, Khuri, and Vining (1992) and Vining and Myers (1990) referred to as a dual-response approach. The basic idea is to express an observation as a function of the design and noise factor effects as well as the two-factor interactions between the design and noise factors. ...
... In general, these two objectives cannot be achieved independently of each other and hence require engineering as well as statistical considerations. For examples, see Myers, Khuri, and Vining (1992) and Vining and Myers (1990), who also point out how this may involve sequential experimentation. ...
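A commonly used form of the model sketched in these excerpts, with design factors x and noise factors z (generic notation, not necessarily the exact equations of the cited book), is

    y(\mathbf{x}, \mathbf{z}) = \beta_{0} + \mathbf{x}^{\top}\boldsymbol{\beta} + \mathbf{x}^{\top}\mathbf{B}\,\mathbf{x} + \mathbf{z}^{\top}\boldsymbol{\gamma} + \mathbf{x}^{\top}\boldsymbol{\Delta}\,\mathbf{z} + \varepsilon,

so that, taking the expectation and variance over z (with E[z] = 0 and Var[z] = Σ_z),

    \mathrm{E}_{\mathbf{z}}[y] = \beta_{0} + \mathbf{x}^{\top}\boldsymbol{\beta} + \mathbf{x}^{\top}\mathbf{B}\,\mathbf{x}, \qquad
    \operatorname{Var}_{\mathbf{z}}[y] = (\boldsymbol{\gamma} + \boldsymbol{\Delta}^{\top}\mathbf{x})^{\top} \boldsymbol{\Sigma}_{\mathbf{z}} (\boldsymbol{\gamma} + \boldsymbol{\Delta}^{\top}\mathbf{x}) + \sigma^{2},

which yields the two surfaces (for the mean and for the variance) that the dual-response approach then treats as primary and secondary responses.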
Book
Full-text available
The project of revising Kempthorne’s 1952 book Design and Analysis of Experiments started many years ago. Our desire was to not only make minor changes to what had become a very successful book but to update it and incorporate new developments in the field of experimental design. Our involvement in teaching this topic to graduate students led us soon to the decision to separate the book into two volumes, one for instruction at the MS level and one for instruction and reference at the more advanced level. Volume 1 (Hinkelmann and Kempthorne, 1994) appeared as an Introduction to Experimental Design. It lays the philosophical foundation and discusses the principles of experimental design, going back to the ground-breaking work of the founders of this field, R. A. Fisher and Frank Yates. At the basis of this development lies the randomization theory as advocated by Fisher and the further development of these ideas by Kempthorne in the form of derived linear models. All the basic error control designs, such as completely randomized designs, block designs, Latin square type designs, split-plot designs, and their associated analyses are discussed in this context. In doing so we draw a clear distinction among the three components of an experimental design: the error control design, the treatment design, and the sampling design. Volume 2 builds upon these foundations and provides more details about certain aspects of error control and treatment designs and the connections between them. Much of the effort is concentrated on the construction of incomplete block designs for various types of treatment structures, including “ordinary” treatments, control and test treatments, and factorial treatments. This involves, by necessity, a certain amount of combinatorics and leads, almost automatically, to the notions of balancedness, partial balancedness, orthogonality, and uniformity. These, of course, are also generally desirable properties of experimental designs and aspects of their analysis.
... The latest may be caused by safety, regulatory, physical, operational, legal, ethical, economical, or other reasons. There are many methodologies supporting design of experiments, such as full factorial, fractional factorial (Box et al, 1978;Montgomery, 2012), screening (Plackett and Burman, 1946), Taguchi L-designs (Ross, 1996), response surface methodology (Box and Wilson, 1951), dual response (Vining and Myers, 1990;Castillo and Montgomery, 1993;Lin and Tu, 1995), and Nelder-Mead technique (Nelder and Mead, 1965). ...
... Considering the stochastic process of CNN training, three replications have been completed for all trials and dual response methodology has been applied (Vining and Myers, 1990). Based on estimations of the mean and the standard deviation for each trial, the lower confidence limits (LCL) have been calculated at a 90% confidence level as a conservative measure of the achieved accuracy (Glushkovsky, 2018). ...
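A minimal sketch of the kind of lower-confidence-limit computation described above, assuming a one-sided, t-based 90% limit on the mean of the three replications (the exact formula used in the cited work may differ):

    import numpy as np
    from scipy import stats

    # hypothetical accuracies from three replications of one trial
    accuracies = np.array([0.912, 0.905, 0.918])

    n = accuracies.size
    mean = accuracies.mean()
    sd = accuracies.std(ddof=1)

    # one-sided 90% lower confidence limit for the mean accuracy
    t_crit = stats.t.ppf(0.90, df=n - 1)
    lcl = mean - t_crit * sd / np.sqrt(n)
    print(round(lcl, 4))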
Preprint
Design of experiments (DOE) is playing an essential role in learning and improving a variety of objects and processes. The article discusses the application of unsupervised machine learning to support the pragmatic designs of complex experiments. Complex experiments are characterized by having a large number of factors, mixed-level designs, and may be subject to constraints that eliminate some unfeasible trials for various reasons. Having such attributes, it is very challenging to design pragmatic experiments that are economically, operationally, and timely sound. It means a significant decrease in the number of required trials from a full factorial design, while still attempting to achieve the defined objectives. A beta variational autoencoder (beta-VAE) has been applied to represent trials of the initial full factorial design after filtering out unfeasible trials on the low dimensional latent space. Regarding visualization and interpretability, the paper is limited to 2D representations. Beta-VAE supports (1) orthogonality of the latent space dimensions, (2) isotropic multivariate standard normal distribution of the representation on the latent space, (3) disentanglement of the latent space representation by levels of factors, (4) propagation of the applied constraints of the initial design into the latent space, and (5) generation of trials by decoding latent space points. Having an initial design representation on the latent space with such properties, it allows for the generation of pragmatic design of experiments (G-DOE) by specifying the number of trials and their pattern on the latent space, such as square or polar grids. Clustering and aggregated gradient metrics have been shown to guide grid specification.
... To achieve this, various techniques have been utilized to approximate implicit functions of design variables using metamodels, also known as response surfaces. These surfaces are constructed by fitting suitable approximating functions to a set of experimental data points (Li et al. [26], Tang, Xu [27], Vining, Myers [28], Yeniay et al. [29]). ...
... where T_0 is the target response value. We can also use the dual-response optimization model proposed by Vining and Myers (1990). ... Extreme value theory (EVT) is a branch of statistics that studies the stochastic behavior of a process at unusually large or small values [1]. In particular, EVT provides procedures for tail estimation that are scientifically and statistically rational. ...
... The most popular DOE is the Taguchi method, which employs many orthogonal arrays to compare and analyze various strata of each control characteristic 31 . Researchers have confirmed that it is the best way to get standardized items at effective costs [41][42][43] . More research is needed on the Taguchi approach 44 for managing a variety of performance parameters. ...
Article
Full-text available
The comparative study of natural hydroxyapatite (NHAp) from bovine (B) and catfish (C) bones using different fabrication parameters has been extensively researched through traditional investigation. However, the quantitative effect optimization of a novel mix proportion of hydroxyapatite from these bones, and fabrication parameters have not been examined. Hence, this study presents the effect of the powder mixture, compaction pressure, and sintering temperature (as production parameters) on the experimental mechanical properties of naturally derived HAp. The bovine bone and catfish bone biowastes were used in mixed proportions to produce hydroxyapatite via the sol–gel synthesis protocol. The powders were calcined separately at 900 °C to convert the deproteinized biowaste. Next, the powders were combined chemically (sol–gel) in the appropriate ratios (i.e. 45 g of B: 15 g of C (B75/C25), 30 g of B: 30 g of C (B50/C50), and 15 g of B; 45 g of C (B25/C75)). Taguchi design supported by grey relational analysis was employed with an L9 orthogonal array. The Minitab 16 software was employed to analyze the Taguchi design. The result revealed an inconsistency in the powder mixture as the optimum state for individual mechanical properties, but the grey relational analysis (GRA) showed better mechanical properties with a powder mix of B50/C50, 500 Pa compaction pressure, and 900 °C sintering temperature. The obtained result further showed that the novel mix of these powders is a good and promising material for high-strength biomedical applications, having a contribution of 97.79% on hardness and 94.39% on compressive strength of HAp. The obtained experimental grey relational grade of 0.7958 is within the 95% confidence interval, according to confirmation analysis (CA). The optimum powder parameter was examined using X-ray diffraction (XRD), and its structure, size, and elemental makeup were examined using scanning electron microscopy (SEM) and energy dispersive spectroscopy (EDS) analysis. The sample had a higher degree of crystallinity and mean crystallite size of 80.42% and 27.3 nm, respectively. The SEM images showed big, gritty grains that are not tightly packed.
... Although Taguchi's method is practical in the engineering field, it has also been criticized for various statistical problems. 13,14 Vining and Myers 15 proposed the dual-response surface methodology (DRSM), which has received significant attention for RPD. They indicated that the quality characteristic (the location effect) and the process variance (the dispersion effect) could form a dual-response system. ...
Article
To achieve better product quality, the dual‐response surface has been widely used to simultaneously achieve targets for the mean and reduced variance. The general assumption is that the experimental data are normally distributed without censoring. However, reliability data are usually non‐normally distributed and censored. Consequently, the classical dual‐response surface method may not be suitable for handling reliability data. This study proposed a dual‐response surface method based on robust parameter design (RPD) for reliability data. First, a percentile lifetime and an interquartile range (IQR) were used for the location and dispersion effects, respectively. A two‐stage method was then developed to identify the effects of significant factors. During the first stage, the Boruta algorithm was employed for factor effect selection, and during the second stage, the least absolute shrinkage and selection operator (LASSO) dimension reduction method was used to determine the final significant factor effects. Finally, a modified standard particle swarm optimization (SPSO) algorithm was adopted to determine the optimal solution. The proposed method was demonstrated using an industrial thermostat experiment. Compared with other methods, the proposed method performed better over various warranty periods, particularly for the 10th percentile.
... Techniques for approximating implicit functions of the design variables using metamodels, i.e., response surface designs, were used for this purpose. Response surfaces are developed by fitting the approximating functions to the set of experimental points [38][39][40][41][42][43]. In order to find the global maximum, random search methods and evolutionary or heuristic algorithms are very often used [42][43][44]. ...
Article
Full-text available
The main subject of this paper is an optimization of steel roof framing used as a load-bearing structure in commercial pavilions. The authors wanted to draw attention to the necessity to take into account the uncertainty in the description of design parameters during optimization. In the first step, using geometrically nonlinear relationships, a static-strength analysis is performed. The decisive form of loss of stability in this steel roof framing is the jump of the node (snap-through), and not the buckling of the most stressed structure bars. Therefore, when creating the limit function, it was decided to make a condition limiting the permissible displacement. Values of the implicit limit function were calculated with Abaqus software based on the finite element method. Reliability analysis, and robust and deterministic optimization were performed using Numpress Explore software. Numpress Explore software communicates with the Abaqus software to perform analysis. The task ended with the generation of information that contained the failure probability, reliability index and the values of optimized areas of the bars’ cross-sections. The end result of the optimization is not a cost analysis, but an assessment of the safety of the structure.
... Although this method has been widely used, it has been questioned by Miller [32], Montgomery [33], Tsui [34], and other scholars because it does not consider the interactions between factors, the experiments lack a sequential character, and the SNR index is unreasonable. As an improvement, Vining and Myers [35,36] expressed the RPD problem as a constrained optimization problem and proposed the dual-response surface methodology (DRSM) to solve it. Its basic idea is to fit two response surfaces, for the process mean and the variance respectively, and then to minimize the variance with the mean target value as a constraint, so as to improve the robustness of the process. ...
Article
Full-text available
Realizing predictive control of coal preparation quality, and ensuring that the quality of the washed products stays as close as possible to the minimum coal quality requirement for coal blending, is one of the important means of maximizing production while maintaining the interests of customers and enterprises. Therefore, the feasibility of introducing the double response surface method and a multi-objective genetic algorithm to solve this problem is discussed. By selecting the controllable factors and noise factors affecting the output and determining their respective value levels, the product table method is used to design the robust parameter design experiment and to obtain the experimental results. According to the experimental data, second-order polynomial models of the mean and standard deviation of each response characteristic are established, and the validity of the models is analyzed. Then, the double-response optimization function of each response characteristic is established according to the type of response characteristic. Finally, the corresponding parameter values are solved by the multi-objective genetic algorithm. The internal and external surface method is used to design and run 60 tests. Through optimization analysis, the robust parameter settings are 150.68 kPa, 0.18143.73 kPa, and 30%, and the optimal output is an ash content of 8.499% with a yield of 69.54%, meeting the requirements of stakeholders. Moreover, compared with the traditional optimization design method, the superiority of the proposed method is verified, which shows that this method is conducive to the transformation of the coal preparation plant from fire-fighting quality management to preventive quality management and provides support for the accurate control and systematic management of the production process of the coal preparation plant.
... The loss function approach (Taguchi, 1986;Pignatiello, 1993) can be applied to solve such a problem since both the location and dispersion effects can be measured by a loss function. Moreover, the dual-response methods (Vining & Myers, 1990;Kim & Lin, 1998;Zeybek et al., 2020) were proposed for robust response optimization. These methods model the location and dispersion effects as two separate "responses" to be optimized. ...
Article
Full-text available
This paper proposes a robust method for multi-response optimization (MRO) considering the location effect, dispersion effect, and model uncertainty simultaneously. We propose a multi-objective optimization model for MRO that simultaneously maximizes the satisfaction degrees of the local and dispersion effects. Specifically, a robust desirability function is used to model the overall satisfaction degree of multiple responses with respect to the two effects. This desirability function evaluates a solution’s performance considering the confidence intervals predicted by the regression models of location and dispersion effects, and thus, can address the problem of model uncertainty. The proposed multi-objective model yields a set of non-dominated solutions that approximate the Pareto front instead of one single solution, which provides more flexibility for decision makers to select the final best compromise solution based on their preferences. To solve the model, a hybrid multi-objective optimization algorithm called NSGAII-DMS that combines non-dominated sorting genetic algorithm II (NSGA-II) with direct multi-search (DMS) is proposed. NSGAII-DMS uses the search mechanism of NSGA-II during the early evolutionary phase to quickly find a set of non-dominated solutions and then uses the search mechanism of DMS to further tune the found non-dominated solutions. Two test examples have shown that the proposed multi-objective MRO model can produce a set of robust solutions by considering the location effect, dispersion effect, and model uncertainty. Further analyses illustrate that NSGAII-DMS shows significantly better search performance than several well-known benchmark multi-objective optimization algorithms, which include NSGA-II, SPEA2, MOEA/D, and DMS.
... The aim of the graphical methods is to optimize both mean and variance of a response variable simultaneously. In addition to this graphical method, there have been several methods for optimizing mean and variance of a single response, called dual response surface optimization (DRSO) [20][21][22][23][24]. They build separate response surface models for mean and standard deviation of a response variable and optimize these two functions simultaneously. ...
Article
Full-text available
Wire + arc additive manufacturing is an arc welding process that uses non-consumable tungsten electrodes to produce the weld. The material used in this study is a titanium, carbon, zirconium, and molybdenum alloy that is physically and chemically stable and has good performance for use as a welding and high-temperature heating element. In this study, welding experiments are designed based on a central composite design, and single-layer wire + arc additive manufacturing is performed using the titanium, carbon, zirconium, and molybdenum alloy. Consequently, 17 beads are obtained and the height, width, left and right toe angles, which represent the geometry of the beads are measured. Based on the measured geometry, response surface models for mean and standard deviation of the four geometries are fitted. Mean absolute percentage error of the four response surface models is 16.6% on average which implies that the models are reasonably well fitted. Based on the response surface models, the optimal settings for the Wire + arc additive manufacturing parameters are obtained by using a desirability function method. At the optimal setting, the desirability function value shows 0.85 on average which is close to ideal value of 1.00. This result indicates that valid optimal settings for the process parameters can be obtained via the proposed method.
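As a rough sketch of desirability-based optimization of fitted mean and standard deviation models in the spirit of this abstract (the surfaces, limits, and targets below are made up and are not the bead-geometry models of the cited study):

    import numpy as np

    def mean_model(x):      # placeholder fitted surface for one geometry's mean
        return 5.0 + 1.2 * x[0] - 0.8 * x[1] + 0.5 * x[0] * x[1]

    def std_model(x):       # placeholder fitted surface for its standard deviation
        return 0.6 + 0.3 * x[0] ** 2 + 0.2 * x[1] ** 2

    def desirability_target(y, low, target, high):
        # Derringer-Suich style "target is best" desirability
        if y < low or y > high:
            return 0.0
        return (y - low) / (target - low) if y <= target else (high - y) / (high - target)

    def desirability_smaller(y, target, high):
        # "smaller is better" desirability
        if y <= target:
            return 1.0
        return 0.0 if y >= high else (high - y) / (high - target)

    def overall_desirability(x):
        d_mean = desirability_target(mean_model(x), low=4.0, target=5.5, high=7.0)
        d_std = desirability_smaller(std_model(x), target=0.5, high=1.5)
        return np.sqrt(d_mean * d_std)   # geometric mean of individual desirabilities

    # simple grid search over the coded design region
    grid = np.linspace(-1, 1, 41)
    best = max(
        ((x1, x2) for x1 in grid for x2 in grid),
        key=lambda p: overall_desirability(np.array(p)),
    )
    print(best, overall_desirability(np.array(best)))

With more responses (here, all four bead geometries) one simply adds a desirability term per fitted mean and standard deviation model and takes the geometric mean over all of them.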
... Variable selection is a key issue in classification and regression models. Commonly used methods include the t-test (Vining & Myers, 1990), the Akaike information criterion (AIC) (Lee, 2010), backward elimination (Chan et al., 2011), and stepwise regression (Lin & Tu, 1995). However, all of the above methods have disadvantages such as severe restrictions, unstable results, and over-fitting (Breiman, 1995). ...
Article
Full-text available
It is essential to explore the optimal environmental regulation policy combination for high-quality economic development due to the mutual constraints and confrontation between environmental regulation policies. Based on provincial panel data of China from 2000 to 2019, this study uses the spatial Durbin model to examine the synergy effect of different environmental regulation policy combinations on high-quality economic development from the dual perspective of local effects and spillover effect of environmental regulation, respectively. Furthermore, it explores the threshold effect of environmental regulation policy synergy on high-quality economic development through technological innovation at provincial level as well as in three major regions of China using the threshold regression model. The results indicate that environmental regulation policy synergy has a spatial spillover effect on high-quality economic development, and there is a positive U-shaped relationship with them no matter which policy combination. However, there are significant differences in the local effects and spillover effects of environmental regulation policy synergy under different policy combinations. Moreover, there is a double threshold for environmental regulation policy synergy effect on high-quality economic development except in central China, which policy combinations of 13 and 23 have a single threshold effect, while the other policy combinations have no threshold effect. Finally, the optimal environmental regulation policy combination for high-quality economic development is given through comparative analysis. In response to the above conclusions, this paper puts forward the targeted policy implications.
... This method can be used to identify controllable parameter (design variable) values that satisfy a set of performance requirements despite variations in uncontrollable parameters (noise factors). Type I robust design was first proposed by Genichi Taguchi (Taguchi 1986, Taguchi and Clausing 1990, Taguchi 1993) and has been carried forward by many researchers (Vining and Myers 1990, Welch, Yu et al. 1990, Shoemaker, Tsui et al. 1991, Chen, Allen et al. 1996). ...
Chapter
This chapter discusses the fundamental questions of the essence of robust design and the fundamental requirements for conducting robust design.
Chapter
This chapter aims to develop a probabilistic approach to robust design with one objective, based on probabilistic multi-objective optimization. In the treatment, the arithmetic mean value of the performance utility indicator of an alternative's objective is taken as one sub-objective, and the dispersion of the performance utility indicator is taken as the other sub-objective; the two sub-objectives contribute their partial preferable probabilities to the alternative individually. It thus becomes a simultaneous optimization problem with dual sub-objectives. Furthermore, three cases, i.e., the larger the better, the smaller the better, and target the best, are formulated respectively. Moreover, the ranking sequence of the total preferable probabilities of the alternatives is used to complete the optimization. Subsequently, some application examples are given.
Conference Paper
div class="section abstract"> For the design optimization of the electric bus body frame orienting frontal crash, considering the uncertainties that may affect the crashworthiness performance, a robust optimization scheme considering tolerance design is proposed, which maps the acceptable variations in objectives and feasibility into the parameter space, allowing for the analysis of robustness. Two contribution analysis methods, namely the entropy weight and TOPSIS method, along with the grey correlation calculations method, are adopted to screen all the design variables. Fifteen shape design variables with a relatively high impact are chosen for design optimization. A symmetric tolerance and interval model is used to depict the uncertainty associated with the 15 shape design variables of key components in the bus body frame to form an uncertainty optimization problem in the form of an interval, and a triple-objective robust optimization model is developed to optimize the shape design variables and tolerances simultaneously. The optimization problem is handled by the double-loop nested optimization algorithm, which combines the NSGA-III and the sequential quadratic programming (SQP) methods. The four sets of optimized crashworthiness midpoint values obtained from the optimization that meet the requirements of robustness and fluctuation range, as well as the tolerance evaluation coefficients, are analyzed, discussed, and compared with the measurements before optimization. The results demonstrate that the proposed robust optimization scheme, considering tolerances, effectively enhances the crashworthiness of the bus body frame under practical manufacturing conditions while ensuring the robustness of the optimization results. </div
Chapter
Full-text available
Robust Optimization is a mathematical-statistical tool that seeks to strengthen the critical characteristics of a product and/or system when it operates under adverse conditions that could affect its performance. It is carried out by studying the variation present in the most influential variables (factors, e.g., temperature, humidity, voltage, dust, etc.) and how they impact the result, in order to then identify the best operating levels with the minimum variation under adverse (stress) conditions. In addition, Predictive Engineering is a mathematical-probabilistic tool that has been used to model the behavior of a certain characteristic of a product and/or system using a probability function and to predict an occurrence or percentage of performance under certain operating conditions of the system. These kinds of tools are used by engineers, scientists, and researchers to propose substantial solutions to problems related mostly to the design and manufacture of products, although their use is not ruled out for the design of systems such as software, networks, logistics, and others, as well as services. Without a doubt, professionals in the engineering field should be familiar with optimization and prediction techniques such as these, which is why they need to be examined in detail. The objective of this work is to review the general aspects of both tools, Robust Optimization and Predictive Engineering, together with a brief application case. Keywords: Robust, Response Surface, Dual Response, Probabilistic Model, Capability Index (Cpk), Yield.
Article
Mathematical model development for multivariate decision-making problems (i.e., multiple parameter and response optimization problems) has received significant attention and has been investigated by many researchers and practitioners. Although many different optimization models and methods, including priority and weight allocation problems, have been proposed in the literature, there is significant room for improvement. The majority of existing optimization methods may have difficulties in determining reliable priorities and weights for practical industrial situations. In addition, because they often focus on tradeoffs only among multiple output responses, these methods may not incorporate the multiple priority and/or weight effects directly from input factors to output responses (i.e., quality characteristics). To address these problems, the primary objective of this research is to propose a new multi-objective optimization approach by integrating a game theory principle (i.e., the Stackelberg leadership game) into a robust parameter design model to determine the optimal factor settings. First, a multi-response robust design optimization (RDO) problem was formulated using a mean squared error model and response surface methodology. A Stackelberg leadership game is then integrated into this RDO problem, where multiple responses play the role of game participants. Second, the integrated RDO model using the Stackelberg game-based multi-response (SGMR) formulation approach is analyzed by decomposition into various sequential optimization models in terms of different leader–follower relationships. Finally, non-dominated solutions are obtained from the proposed model by evaluating the overall quality loss (OQL). A pharmaceutical numerical example was performed to demonstrate the proposed integrated model and to examine whether this model can determine the most efficient solutions when the relative importance among multiple responses is unidentified. In addition, a comparative study associated with the conventional multi-objective optimization methods and the newly proposed RDO model is presented, and the proposed RDO model provided significantly better solutions than the conventional methods in the relevant comparative study.
Article
Full-text available
This study proposes a novel hybrid approach, called the Adaptive/Elevator Kinematics Optimization algorithm based on the dual response algorithm (A/EKO-DRA), to enhance robust parameter estimation and design of the plaster milling process. The A/EKO-DRA method reduces variability while maintaining the desired output target, thereby minimizing the impact of variance on the expected stucco combined water. The performance of the A/EKO-DRA is compared with conventional approaches through numerical examples and simulations. The results show that the A/EKO-DRA method has the lowest mean absolute errors among the compared methods in terms of parameter estimation, and it achieves a response mean of 5.927 percent, which meets the target value of 5.9 percent for industrial enclosures, with a substantial reduction in the response variance. Overall, the A/EKO-DRA method is a promising approach for optimizing the plaster milling process parameters.
Chapter
The first section of this chapter introduces statistical process control (SPC) and robust design (RD), two important statistical methodologies for quality and productivity improvement. Section 11.1 describes in depth SPC theory and tools for monitoring independent and autocorrelated data with a single quality characteristic. The relationship between SPC methods and automatic process control methods is discussed, and differences in their philosophies, techniques, efficiencies, and design are contrasted. SPC methods for monitoring multivariate quality characteristics are also briefly reviewed. Section 11.2 considers univariate RD, with emphasis on experimental design, performance measures, and modeling of the latter. Combined and product arrays are featured, and the performance measures examined include signal-to-noise ratios (SNR), PerMIAs, the process response, the process variance, and desirability functions. Of central importance is the decomposition of the expected value of squared-error loss into variance and off-target components, which sometimes allows the dimensionality of the optimization problem to be reduced. This section also deals with multivariate RD, demonstrates that the objective function for the multiple-characteristic case is typically formed by additive or multiplicative combination of the univariate objective functions, and lists RD case studies originating from applications in manufacturing, reliability, and tolerance design. Section 11.3 discusses the mainstream methods used in the prognostics and health management (PHM) framework, including updated research from the literature of both statistical science and engineering. Additionally, this section provides an overview of the systems health monitoring and management (SHMM) framework, discusses its basic structure, and lists several applications of SHMM to complex systems and to critical components within the context of a big data environment. Keywords: Control chart; Minimum mean square error; Robust design; Desirability function; Statistical process control; Health monitoring and management
Article
Full-text available
The normal distribution approach is often used in regression analysis at the Response Surface Methodology (RSM) modeling stage. Several studies have shown that the normal distribution approach has drawbacks compared to the more robust t-distribution approach, which controls size much more successfully in small samples in the presence of moderately heavy tails. In many RSM applications, there is more than one response (multiresponse), and these responses are usually correlated with each other (multivariate). Moreover, the actual response surface typically exhibits curvature near the optimum, so a second-order model is used. This paper develops a second-order multiresponse surface model using a multivariate t-distribution approach and provides the parameter estimation procedure and hypothesis tests for parameter significance. First, parameter estimation is performed using Maximum Likelihood Estimation (MLE), with the Expectation–Maximization algorithm as an iterative method for finding a (local) maximum of the likelihood. Next, the Likelihood Ratio Test (LRT) is used to test the parameters simultaneously. The resulting model is used to determine the input-variable conditions that optimize the physical quality characteristics of Paracetamol tablets.
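For reference, the second-order model mentioned here has, for k input variables, the standard form

    y = \beta_0 + \sum_{i=1}^{k} \beta_i x_i + \sum_{i=1}^{k} \beta_{ii} x_i^2 + \sum_{i<j} \beta_{ij} x_i x_j + \varepsilon,

with the cited work replacing the usual normal assumption on the (correlated, multiresponse) errors by a multivariate t-distribution.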
Article
In engineering system design, minimizing the variation of the quality measurements while guaranteeing their overall quality up to certain levels, namely robust parameter design (RPD), is crucial. Recent works have dealt with the design of systems whose response-control variable relationship is a deterministic function with a complex shape whose evaluation is expensive. In this work, we propose a Bayesian optimization method for the RPD of stochastic functions. Dual stochastic response models are carefully designed for stochastic functions, and the heterogeneous variance of the sample mean is addressed through the predictive mean of a log-variance surrogate model in a two-step approach. We establish an acquisition function that favors exploration across the feasible and optimality-improvable regions to effectively and efficiently solve the stochastic constrained optimization problem. The performance of the proposed method is demonstrated by extensive numerical and case studies. Note to Practitioners: Many manufacturing processes involve undesirable variations, which create variations in the final products. For example, many emerging manufacturing processes, such as nanomanufacturing, involve complex physical and chemical dynamics and transformations, creating variations in the manufacturing output. In such processes, it is crucial to design the manufacturing processes or products so that they have minimum variation in their quality while maintaining overall quality. Furthermore, acquiring data from many advanced manufacturing processes is often very costly, especially at the design stage. In this work, we propose a data-driven method that automatically finds the best setting of manufacturing processes or products with minimum variation in quality while satisfying a given constraint on average quality. The proposed method is applied before each experiment: it analyzes the historical data from previous experiments and provides the setting to be used in the next experiment. Because it utilizes the historical data efficiently, it finds the best robust setting with only a small number of experiments.
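A minimal two-step sketch in the spirit of the dual surrogates described above follows, assuming replicated observations at each design point; the data, kernel choice, and replication count are hypothetical, and this is not the authors' implementation. One Gaussian process models the log sample variance, and its prediction supplies heteroscedastic noise levels for the sample-mean surrogate.

    # Minimal sketch of dual surrogates for replicated, noisy responses.
    # X: design settings, Y: replicate matrix (n_points x n_reps); all hypothetical.
    import numpy as np
    from sklearn.gaussian_process import GaussianProcessRegressor
    from sklearn.gaussian_process.kernels import RBF, ConstantKernel

    rng = np.random.default_rng(0)
    X = rng.uniform(-1, 1, size=(20, 2))
    true_mean = 5 + X[:, 0] - 0.5 * X[:, 1] ** 2
    true_sd = 0.2 + 0.3 * np.abs(X[:, 0])
    Y = true_mean[:, None] + true_sd[:, None] * rng.standard_normal((20, 8))

    ybar, s2 = Y.mean(axis=1), Y.var(axis=1, ddof=1)
    n_rep = Y.shape[1]
    kernel = ConstantKernel() * RBF(length_scale=[1.0, 1.0])

    # Step 1: surrogate for the log sample variance (log scale stabilizes the fit).
    gp_logvar = GaussianProcessRegressor(kernel=kernel, alpha=1e-3).fit(X, np.log(s2))

    # Step 2: surrogate for the sample mean with point-specific noise levels,
    # var(ybar_i) approximated by exp(predicted log variance) / n_rep.
    noise = np.exp(gp_logvar.predict(X)) / n_rep
    gp_mean = GaussianProcessRegressor(kernel=kernel, alpha=noise).fit(X, ybar)

    x_new = np.array([[0.2, -0.4]])
    m, m_sd = gp_mean.predict(x_new, return_std=True)
    pred_sd = np.exp(0.5 * gp_logvar.predict(x_new))[0]
    print('predicted mean %.3f (+/- %.3f), predicted response sd %.3f'
          % (m[0], m_sd[0], pred_sd))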
Article
Additive Manufacturing (colloquially, 3D printing) is ready to emerge from its niche status and become a feasible mainstream technology due to its increasing number of industrial applications. However, the restricted size of the prints, resulting from the printer's limited bed size, is an aspect hindering the acceptance of this technology. Splitting the CAD model and joining the parts with a viable technique can be an effective solution to this challenge. This article investigates the welding of MEX-AM parts made from frequently chosen thermoplastics, Acrylonitrile Butadiene Styrene/Polylactic Acid (ABS/PLA), with the assistance of electromagnetic microwave radiation, employing inexpensive Fe2O3 as the absorbent material. The foremost objective of the investigation was to counterbalance the dimensional limits of MEX 3D printers by microwave welding and simultaneously optimize the parameters involved in the process. Critical parameters, including microwave power level, absorbent quantity, welding pressure, infill percentage, and material combination, were optimized via Design of Experiments/Analysis of Variance (DOE/ANOVA) statistical tools. The outcome was a joint efficiency of up to 99.53%. ABS, being amorphous, responded well to the microwaves, displaying better shear strength and ductility than PLA. Further, it was observed that a temperature of 120 to 165 °C with lead times of 5 to 8 min was approximately required for successful welds. The predicted and validated results revealed that for maximum joint efficiency the parameters should be set to 1000 W, 0.2 g absorbent, 400 g dead weight, and 35% infill percentage with ABS/ABS as the printing material. When applied to repair a drone body and to weld a UAV (Unmanned Aerial Vehicle) wingspan, helmet sections, and pump flanges, the results displayed good structural strength and integrity. The proposed technique showed considerable potential, as it can easily be reproduced even on a household microwave oven, enabling complex 3D structures and raising the utility of smaller-bed 3D printers.
Article
In response surface based RPD, the optimal setting of the controllable factors is highly dependent on the accuracy of the response surface. Classically, improving the accuracy of the response surface requires adding more samples: the larger the number of samples, the higher the accuracy. Traditional RPD usually uses a one-shot modeling method to construct the response surface, so whenever the number of samples increases, all samples must be relearned from the beginning to rebuild it. This one-shot approach significantly increases the time and complexity of model training. We present an incremental strategy for building response models based on the Huber support vector regression machine. In this article, an incremental Huber-SVR model is proposed to construct the response surface in robust parameter design. The proposed algorithm can continuously integrate new sample information into the already built model. In incremental HSVR-RPD, the previous optimal settings of the controllable factors, the currently observed noise factors, and the corresponding response can be used to improve the accuracy of the response surface and thereby obtain more reliable recommended settings in the next stage.
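The incremental idea can be illustrated with a standard library. The sketch below is a stand-in, not the authors' Huber-SVR: it uses scikit-learn's SGDRegressor with a Huber loss and partial_fit so that newly observed runs update the response model without refitting from scratch. The simulated data and quadratic feature expansion are hypothetical.

    # Sketch: incremental updating of a Huber-loss response model (stand-in for
    # the incremental Huber-SVR; data and features are hypothetical).
    import numpy as np
    from sklearn.linear_model import SGDRegressor
    from sklearn.preprocessing import PolynomialFeatures

    rng = np.random.default_rng(1)
    poly = PolynomialFeatures(degree=2, include_bias=False)

    def simulate_batch(n):
        X = rng.uniform(-1, 1, size=(n, 2))                  # controllable + noise factor
        y = 10 + 2*X[:, 0] - 3*X[:, 1] + X[:, 0]*X[:, 1] + rng.normal(0, 0.5, n)
        return poly.fit_transform(X), y

    model = SGDRegressor(loss='huber', epsilon=1.0, penalty='l2', alpha=1e-4,
                         random_state=0)

    # Initial fit on the first batch, then incremental updates as new runs arrive.
    X0, y0 = simulate_batch(30)
    model.partial_fit(X0, y0)
    for _ in range(5):
        Xb, yb = simulate_batch(5)                           # newly observed runs
        model.partial_fit(Xb, yb)                            # update without relearning all data

    print('fitted coefficients:', model.coef_.round(2))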
Article
Robust design (RD) is a powerful technique for reducing variability while keeping the mean at the specified target value. A large number of RD studies focus on controllable design factors for the nominal-the-best (N-type) quality characteristic (QC). However, in a number of situations the customer's perspective needs to be integrated when considering larger-the-better (L-type) and smaller-the-better (S-type) QCs. This paper therefore makes three contributions. First, a four-step methodology is developed for L- and S-type QCs in which design factors are classified as controllable and noise factors, and fitted functions for the mean, standard deviation, and variance are estimated using the p-value concept. Second, customer-focused robust design models for L- and S-type QCs are proposed to obtain the optimum settings of the design factors, and comparison studies are conducted between the proposed customer-focused and traditional models. Finally, numerical examples based on L- and S-type QCs illustrate the performance of the proposed four-step methodology.
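For context, commonly used Taguchi-style expected loss expressions for the two characteristic types discussed here are, with cost coefficient k and response mean and standard deviation \mu and \sigma,

    \text{S-type: } E[L] = k(\mu^2 + \sigma^2), \qquad
    \text{L-type: } E[L] \approx \frac{k}{\mu^2}\left(1 + \frac{3\sigma^2}{\mu^2}\right),

the L-type approximation following from a second-order Taylor expansion of 1/Y^2 about \mu; the cited models build customer-focused variants of criteria of this kind.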
Chapter
Drilling is a basic and important operation required in nearly all machining and manufacturing processes. During drilling, the formation of burrs is a very common phenomenon. Burrs are unwanted because they decrease both the quality and the overall strength of the finished item. This chapter presents an experimental investigation of the major process parameters that affect burr formation in Mild Steel ASTM A36, a material with numerous applications in fields such as automobile parts, building structures, and machine tools. The major process parameters investigated are drilling speed, cutting speed, drill bit diameter, feed rate, and cutting-fluid mixture ratio. The analysis considers the type of burr produced under each process parameter separately, relating the different burr types (crown, haphazard, rolled-over, uniform, etc.) to the process parameters that cause them. The properties of the burrs, examined through microscopic images taken with a photo profilometer and a digital microscope, are also discussed in the later sections of the chapter. Keywords: Drilling, Mild steel, Drilling speed, Drill bit diameter, Feed rate, Haphazard and rolled-over burr
Article
Many process parameters may significantly affect product reliability, so manufacturers need to identify the key processes and optimize their parameter levels. Traditional methods improve product lifetime only by maximizing the location parameter and ignore the robustness of the product. In this article, we improve product lifetime and reduce variance simultaneously by changing the levels of the experimental factors. Considering warranty and customer satisfaction, we introduce a new loss function and derive the average loss cost for the manufacturer and the customer. The optimal levels of the experimental factors are obtained by minimizing the total loss cost. We illustrate the proposed method using a real industrial experiment. The results show that the optimal solution changes with the warranty period and design lifetime. Furthermore, we explain the changes in the experimental factor levels and discuss related issues in practical application scenarios.
Article
Traditional robust parameter design (RPD) is a useful tool for offline quality improvement and is mainly applied to physical experiments. If the optimal setting for the controllable factors turns out to be undesirable, offline RPD cannot utilize newly obtained samples to update that setting. In this paper, offline RPD is extended to online RPD, and the setting for RPD is enlarged from physical experiments to computer experiments, where a Kriging model is typically utilized to create a single response surface. In building the incremental Kriging (IKriging) model, an undesirable optimal setting can be treated as a newly arrived sample, together with online measurements of the noise factors, to update the Kriging model. When constructing the response surface with IKriging, the model is updated using only the newly arrived sample rather than all (n+1) samples, substantially enhancing the efficiency of constructing the response surface. In the proposed IKriging-based online RPD strategy, an algorithm is designed to numerically obtain the next optimal setting for the controllable factors, and a criterion is established to determine the next site of the noise factors at which to observe the response. By employing the IKriging model, the response surface can be constructed online efficiently, and the optimal setting for the controllable factors can be obtained immediately, meeting the requirement of online quality adjustment. The proposed methodology is confirmed using five examples, which illustrate that the underlying optimal settings can be found with the proposed IKriging-based online RPD strategy for computer experiments, and that the IKriging-based RPD method is preferable to existing online robust design methods.
Article
Quality improvement, which aims to minimize process and product variation, is among the most effective activities in the process and product development cycle. For this purpose, robust design models have been proposed to reduce process and product variance. However, the majority of robust design models in the literature deal with conditions of certainty. This article has four objectives. First, an experimental design matrix is generated for both qualitative and quantitative input variables under uncertainty. Second, triangular fuzzy numbers are used to measure the values of the response variable within an α-level cut strategy, and fitted fuzzy mean, standard deviation, and variance response functions are obtained. Third, a fuzzy mixed-integer robust design optimization model is proposed to obtain the optimum operating conditions of the input variables under uncertainty. Finally, a numerical example demonstrates the effectiveness of the proposed fuzzy-based methodology in an uncertain environment.
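As a reminder of the machinery involved, the α-level cut of a triangular fuzzy number with support endpoints a and c and mode b is the interval

    \tilde{A}_\alpha = [\, a + \alpha (b - a), \; c - \alpha (c - b) \,], \qquad \alpha \in [0, 1],

so fuzzy response observations reduce to interval-valued data at each chosen α before the mean, standard deviation, and variance response functions are fitted.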
Article
Response surface methodology (RSM) is an appropriate tool for modeling and analyzing existing or new products. In the literature, considerable attention has been paid to develop RSM models while using controllable design factors. However, there are some situations where uncontrollable design factors are required to conduct an experiment. Therefore, this paper is four-fold. One, an A-optimal design is selected as the appropriate design to generate the design matrix. Two, an exchange algorithm is proposed to construct A-optimal design points while dealing with uncontrollable design factors. In addition, fitted mean and standard deviation response functions are obtained with both controllable and uncontrollable design factors. Three, a multiple response-based mixed-integer nonlinear model and its solution procedure are proposed to find optimum operating conditions of both controllable and uncontrollable design factors. Next, a case study is presented to show the effectiveness of the proposed methodology. It is also reported from the case study that the proposed optimization model may achieve a variance reduction of up to 73% compared to the traditional counterpart. Finally, comparison and validation studies are conducted to verify the optimum operating conditions.
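A minimal sketch of the A-optimality idea follows; it is not the cited exchange algorithm, only a single greedy exchange pass over a hypothetical candidate list, with the criterion trace((X'X)^{-1}) evaluated on a simple quadratic model matrix.

    # Minimal sketch of A-optimal point exchange (not the cited algorithm):
    # swap design points with candidates whenever trace((X'X)^{-1}) decreases.
    import numpy as np

    def a_criterion(X):
        return np.trace(np.linalg.inv(X.T @ X))

    def model_row(x):
        return np.hstack([1.0, x, x ** 2])                  # intercept, linear, pure quadratic

    rng = np.random.default_rng(2)
    candidates = rng.uniform(-1, 1, size=(200, 3))          # hypothetical candidate settings
    F_cand = np.array([model_row(x) for x in candidates])

    idx = list(rng.choice(len(candidates), size=12, replace=False))   # starting 12-run design
    best = a_criterion(F_cand[idx])

    for i in range(len(idx)):                               # one exchange pass
        for j in range(len(candidates)):
            trial = idx.copy()
            trial[i] = j
            val = a_criterion(F_cand[trial])
            if val < best:
                idx, best = trial, val

    print('A-criterion after one pass: %.4f' % best)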
Article
There is extensive research in the monitoring of a process whose characteristics are represented as profiles. However, most current techniques require all observations from each profile to determine the process state. We study the use of a Shewhart chart based on a Gaussian process model with heteroscedasticity for the online monitoring of profiles, while these are being developed, where the central line is the predictive mean and the control limits are based on the prediction band. The advantage is that we do not have to wait until a profile ends to make process corrections. Our results indicate that our method is effective.
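A simplified sketch of the idea follows: fit a Gaussian process to the portion of the profile observed so far and flag the next observation if it falls outside a band built from the predictive mean and standard deviation. The homoscedastic kernel and simulated shift below are placeholders; the cited work uses a heteroscedastic model.

    # Simplified sketch of online profile monitoring with a GP prediction band
    # (homoscedastic GP; the cited work uses a heteroscedastic model).
    import numpy as np
    from sklearn.gaussian_process import GaussianProcessRegressor
    from sklearn.gaussian_process.kernels import RBF, WhiteKernel

    rng = np.random.default_rng(3)
    t = np.linspace(0, 1, 60)[:, None]
    profile = np.sin(2 * np.pi * t.ravel()) + rng.normal(0, 0.05, 60)
    profile[45:] += 0.4                                      # simulated mid-profile shift

    gp = GaussianProcessRegressor(kernel=RBF(0.1) + WhiteKernel(0.01), normalize_y=True)

    for k in range(20, 60):                                  # monitor as the profile develops
        gp.fit(t[:k], profile[:k])                           # model of the profile so far
        mean, sd = gp.predict(t[k:k + 1], return_std=True)   # predictive band at the next point
        if abs(profile[k] - mean[0]) > 3 * sd[0]:
            print('signal at point %d (obs %.2f, centre %.2f, half-width %.2f)'
                  % (k, profile[k], mean[0], 3 * sd[0]))
            break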
Article
Off-line quality control methods are quality and cost control activities conducted at the product and process design stages to improve product manufacturability and reliability, and to reduce product development and lifetime costs. Parameter design is an off-line quality control method. At the product design stage the goal of parameter design is to identify settings of product design characteristics that make the product's performance less sensitive to the effects of environmental variables, deterioration, and manufacturing variations. Because parameter design reduces performance variation by reducing the influence of the sources of variation rather than by controlling them, it is a very cost-effective technique for improving product quality. This paper introduces the concepts of off-line quality control and parameter design and then discusses the Taguchi Method for conducting parameter design experiments.
Article
The purpose of this paper is to present the theory and develop an algorithm associated with the exploration of a dual response surface system. The approach is to find conditions on a set of independent or “design” variables which maximize (or minimize) a “primary response” function subject to the condition that a “constraint response” function takes on some specified or desirable value. A method is outlined whereby a user can generate simple two dimensional plots to determine the conditions of constrained maximum primary response regardless of the number of independent variables in the system. He thus is able to reduce to simple plotting the complex task of exploring the dual response system. The procedure that is used to generate the plots depends on the nature of the individual univariate response functions. In certain situations it becomes necessary to apply the additional constraint that the located operating conditions are a certain “distance” from the origin of the independent variables (or the center of the experimental design). The methods derived and discussed in the paper are applicable only to quadratic response functions.
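In current numerical terms, the constrained exploration described here can be reproduced directly: optimize the fitted primary surface subject to the secondary surface being held at a specified value, with an optional radius bound on the design variables. The quadratic surfaces, target, and radius below are hypothetical placeholders, not those of the paper.

    # Sketch of dual response optimization: maximize a fitted primary response
    # subject to the secondary response held at a target value and a radius bound.
    import numpy as np
    from scipy.optimize import minimize

    def primary(x):      # fitted mean surface (to be maximized); hypothetical coefficients
        return 80 + 5*x[0] + 8*x[1] - 3*x[0]**2 - 4*x[1]**2 + 2*x[0]*x[1]

    def secondary(x):    # fitted standard-deviation surface (held at a specified value)
        return 10 + 2*x[0]**2 + 3*x[1]**2 - x[0]*x[1]

    target_sd, radius = 12.0, 1.5

    cons = [{'type': 'eq',   'fun': lambda x: secondary(x) - target_sd},   # sigma-hat = target
            {'type': 'ineq', 'fun': lambda x: radius**2 - np.dot(x, x)}]   # stay near the centre

    res = minimize(lambda x: -primary(x), x0=np.array([0.5, 0.5]),
                   method='SLSQP', constraints=cons)
    print('constrained optimum:', res.x.round(3),
          ' primary: %.2f  secondary: %.2f' % (primary(res.x), secondary(res.x)))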
Article
In a 1959 paper, A. E. Hoerl discussed a method for examining a second order response surface. This paper provides a mathematically simpler derivation of the technique and proofs of some stated properties.
Article
A multiplicative model is proposed for analyzing multifactor experiments which are conducted to study the effect of changes in the levels of the factors on the variance of a chance variable. The model is a direct analogue of the additive model for means. Univariate and multivariate applications to factorial experiments, experimental designs, and multiplicative response surfaces for variances are discussed.
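Stated generically (not as the paper's exact parameterization), a multiplicative model for variances indexed by the levels of two factors is

    \sigma_{ij}^2 = \sigma^2 \, \gamma_i \, \delta_j
    \quad\Longleftrightarrow\quad
    \log \sigma_{ij}^2 = \mu + \alpha_i + \beta_j,

so that taking logarithms converts the multiplicative structure into the familiar additive analysis used for means.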
Article
A distinguishing feature of Japanese quality improvement techniques is an emphasis on the designing of quality into the product and into the process that makes the product. In particular, experimental design is used to discover conditions that minimize variance and appropriately control the mean level. The direct estimation of variance by replication at each of the design points, however, can be excessively expensive in experimental runs. In this article we show how it is sometimes possible to use unreplicated fractional designs to identify factors that affect variance in addition to those that affect the mean.
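A common way to summarize this idea (written here generically, not as the article's exact statistic) is to fit the location model first and then compare, for each factor i, the spread of the residuals r at its two levels:

    D_i = \ln\!\left( \frac{s_{i+}^2(r)}{s_{i-}^2(r)} \right),

where a large |D_i| flags a potential dispersion effect of factor i.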
Article
In this note, we discuss k-factor, second-order designs with the minimum number of points, ½(k + 1)(k + 2), in particular those which are extensions of designs that give minimum generalized variance for k = 2 and 3. The experimental region is the unit cuboid. Minimum-point designs of this type are unknown for k ≥ 4, and these designs are the best found to date except for k = 4, where a better design is known. Kiefer has shown, via an existence result, that these designs cannot be the best for k ≥ 7, but even here specific better designs are not known and appear difficult to obtain. We also discuss some difficulties of using, in practice, designs that are D-optimal (that is, give minimum generalized variance when the number of points is not restricted).
Article
Hybrid designs were created to achieve the same degree of orthogonality as central composite or regular polyhedral designs, to be near-minimum-point, and to be near-rotatable. They resemble central composite designs which have been augmented with an extra variable column. Eight designs are presented covering 3, 4, and 6 variables. All of these are at or within one point of the minimum. Characteristics relevant to the choice of design are discussed. Efficiencies are compared to central composite or polyhedral designs on n-spheres. A 46-point, 7-variable design is also presented which, although it is not near-minimum, is an economical alternative to a 79-point central composite design.
Article
Parameter design is a method, popularized by Japanese quality expert G. Taguchi, for designing products and manufacturing processes that are robust to uncontrollable variations. In parameter design, Taguchi's stated objective is to find the settings of product or process design parameters that minimize average quadratic loss—that is, the average squared deviation of the response from its target value. Yet, in practice, to choose the settings of design parameters he maximizes a set of measures called signal-to-noise ratios. In general, he gives no connection between these two optimization problems. In this article, we show that for certain underlying models for the product or process response maximization of the signal-to-noise ratio leads to minimization of average quadratic loss. The signal-to-noise ratios take advantage of the existence of special design parameters called adjustment parameters. When these parameters exist, use of the signal-to-noise ratio allows the parameter design optimization procedure to be conveniently decomposed into two smaller optimization steps, the first being maximization of the signal-to-noise ratio. We show, however, that under different models (or loss functions) other performance measures give convenient two-step procedures, but the signal-to-noise ratios do not.
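For the nominal-the-best case, the signal-to-noise ratio referred to here is typically

    SN_T = 10 \log_{10}\!\left( \frac{\bar{y}^{\,2}}{s^2} \right),

and the two-step procedure is: (i) choose the non-adjustment design parameters to maximize SN_T, then (ii) use an adjustment parameter to bring the mean onto target, which under a multiplicative (constant coefficient of variation) model leaves the maximized ratio essentially unchanged.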
Article
A Japanese ceramic tile manufacturer knew in 1953 that it is more costly to control causes of manufacturing variations than to make a process insensitive to these variations. The Ina Tile Company knew that an uneven temperature distribution in the kiln caused variation in the size of the tiles. Since uneven temperature distribution was an assignable cause of variation, a process quality control approach would have increased manufacturing cost. The company wanted to reduce the size variation without increasing cost. Therefore, instead of controlling temperature distribution they tried to find a tile formulation that reduced the effect of uneven temperature distribution on the uniformity of tiles. Through a designed experiment, the Ina Tile Company found a cost-effective method for reducing tile size variation caused by uneven temperature distribution in the kiln. The company found that increasing the content of lime in the tile formulation from 1% to 5% reduced the tile size variation by a factor of ten. This discovery was a breakthrough for the ceramic tile industry.
Article
Recent developments in quality engineering methods have led to considerable interest in the analysis of dispersion effects from designed experiments. A commonly used method for identifying important dispersion effects from replicated experiments is based on least squares analysis of the logarithm of the within-replication variance (Bartlett and Kendall 1946). Box and Meyer (1986) introduced a pooling technique for unreplicated two-level experiments. We extend this to replicated two-level experiments and compare its performance with the least squares analysis. We show that both of these methods can be obtained as special cases of maximum likelihood estimation under normal theory. The pooling technique is generally biased and is not recommended for model identification. The least squares analysis performs well as a model identification tool, but the estimators can be inefficient. In such cases we recommend that the parameters of the identified submodel be estimated by maximum likelihood. We derive some properties of the maximum likelihood estimator in balanced designs. An experiment for the robust design of leaf springs for trucks is used to illustrate the results.
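Stated generically, the least squares analysis referred to here regresses the log of each within-replication sample variance on the factor settings,

    \ln s_i^2 = \mathbf{x}_i^{\top} \boldsymbol{\delta} + \epsilon_i,
    \qquad \operatorname{Var}(\ln s_i^2) \approx \frac{2}{n_i - 1},

the approximate variance following from the chi-square distribution of s_i^2 under normality, which is why the log transform gives roughly constant variance under balanced replication.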
Article
A class of incomplete three-level factorial designs useful for estimating the coefficients in a second-degree graduating polynomial is described. The designs either meet, or approximately meet, the criterion of rotatability and for the most part can be orthogonally blocked. A fully worked example is included.
Quality Engineering Through Design Optimization
  • G Taguchi
  • M S Phadke
  • Box G. E. P.