Chapter

An Enhanced Genetic Algorithm Integrated with Orthogonal Design


Abstract

Chapter 9 introduced an innovative computational intelligence method based on simulated annealing to perform optimization of new products. In this chapter, we introduce another computational intelligence method, known as evolutionary algorithms, to perform optimization of new products.


References
Book
Fuzzy Set Theory - And Its Applications, Third Edition is a textbook for courses in fuzzy set theory. It can also be used as an introduction to the subject. The character of a textbook is balanced with the dynamic nature of the research in the field by including many useful references to develop a deeper understanding among interested readers. The book updates the research agenda (which has witnessed profound and startling advances since its inception some 30 years ago) with chapters on possibility theory, fuzzy logic and approximate reasoning, expert systems, fuzzy control, fuzzy data analysis, decision making and fuzzy set models in operations research. All chapters have been updated. Exercises are included.
Article
This research outlines the Taguchi optimization methodology, applied to optimize cutting parameters in drilling of glass fiber reinforced composite (GFRC) material. Analysis of variance (ANOVA) is used to study the effect of process parameters on the machining process. The procedure eliminates the need for repeated experiments, saves time, and conserves material compared with the conventional approach. The drilling and specimen parameters evaluated are speed, feed rate, drill size, and specimen thickness. A series of experiments is conducted on a TRIAC VMC CNC machining center to relate the cutting and material parameters to the cutting thrust and torque. The measured results were collected and analyzed with the commercial software package MINITAB14. An orthogonal array and signal-to-noise ratio are employed to analyze the influence of these parameters on cutting force and torque during drilling. The method can be used to predict thrust and torque as functions of the cutting and specimen parameters. The main objective is to identify the important factors, and combinations of factors, that influence the machining process so as to achieve low cutting thrust and torque. The Taguchi analysis indicates that, among the significant parameters, speed and drill size have a more significant influence on cutting thrust than specimen thickness and feed rate. The response table indicates that specimen thickness and drill size are the significant parameters for torque. Among the interactions between process parameters, thickness together with drill size is a more dominant factor than any other combination for the torque characteristic.
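The analysis in this entry hinges on per-run signal-to-noise (S/N) ratios and main effects computed over an orthogonal array. The sketch below is a minimal illustration of that calculation, not the study's actual setup: it assumes a smaller-the-better response (appropriate for thrust or torque), an L4(2^3) array with three hypothetical two-level factors, and invented placeholder measurements.

```python
import numpy as np

def sn_smaller_the_better(y):
    """Taguchi S/N ratio for a smaller-the-better response:
    SN = -10 * log10(mean(y^2))."""
    y = np.asarray(y, dtype=float)
    return -10.0 * np.log10(np.mean(y ** 2))

# L4(2^3) orthogonal array (levels coded 0/1); columns = three factors.
L4 = np.array([[0, 0, 0],
               [0, 1, 1],
               [1, 0, 1],
               [1, 1, 0]])

# Hypothetical thrust measurements (two replicates per run) -- placeholders.
thrust = [[55.0, 57.0], [48.0, 50.0], [62.0, 60.0], [52.0, 54.0]]
sn = np.array([sn_smaller_the_better(run) for run in thrust])

# Main effect of each factor = mean S/N at level 1 minus mean S/N at level 0.
for j in range(L4.shape[1]):
    effect = sn[L4[:, j] == 1].mean() - sn[L4[:, j] == 0].mean()
    print(f"factor {j}: main effect on S/N = {effect:.2f} dB")
```

In a Taguchi analysis, the factors with the largest absolute main effects on the S/N ratio are judged the most influential, which is how conclusions such as "speed and drill size dominate thrust" are reached.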
Article
An evolutionary approach to designing accurate classifiers with a compact fuzzy-rule base using a scatter partition of feature space is proposed, in which all the elements of the fuzzy classifier design problem are formulated as parameters of a single complex optimization problem. An intelligent genetic algorithm (IGA) is used to effectively solve the design problem of fuzzy classifiers with many tuning parameters. The merits of the proposed method are threefold: 1) it has high search ability and efficiently finds fuzzy rule-based systems with high fitness values; 2) the obtained fuzzy rules have high interpretability; and 3) the obtained compact classifiers have high classification accuracy on unseen test patterns. The sensitivity of the control parameters of the proposed method is empirically analyzed to show the robustness of the IGA-based method. Performance comparisons and statistical analysis of experimental results using ten-fold cross-validation show that the IGA-based method, without heuristics, is efficient in designing accurate and compact fuzzy classifiers on 11 well-known data sets with numerical attribute values.
Conference Paper
In practical applications to instances of optimization problems, it would be of great benefit if we could decide a priori which algorithm is suited to the problem, or at least which of several candidates might be better than others. In some cases it is obvious: optimizing a linear function of variables subject to linear constraints, for instance. However, such examples are rare, and this is certainly the case for potential applications of evolutionary algorithms (EAs). For this reason, various predictive measures have been suggested. These all sample the universe of potential solutions in order to compute statistics that are thought to aid the decision. This paper describes some of these ideas and then discusses and evaluates them using an adversary argument.
Conference Paper
Together with MATLAB and SIMULINK, the genetic algorithm (GA) Toolbox described here presents a familiar and unified environment for the control engineer to experiment with and apply GAs to tasks in control systems engineering. While the GA Toolbox was developed with an emphasis on control engineering applications, it should prove equally useful in the general field of GAs, particularly given the range of domain-specific toolboxes available for the MATLAB package.
Article
We discuss implicit and explicit knowledge representation mechanisms for evolutionary algorithms (EAs). We also describe offline and online metaheuristics as examples of explicit methods to leverage this knowledge. We illustrate the benefits of this approach with four real-world applications. The first application is automated insurance underwriting, a discrete classification problem that requires a careful tradeoff between the percentage of insurance applications handled by the classifier and its classification accuracy. The second application is flexible design and manufacturing, a combinatorial assignment problem in which we optimize design and manufacturing assignments with respect to the time and cost of design and manufacturing for a given product. Both problems use metaheuristics as a way to encode domain knowledge. In the first application, the EA is used at the metalevel, while in the second application, the EA is the object-level problem solver. In both cases, the EAs use a single-valued fitness function that represents the required tradeoffs. The third application is a lamp spectrum optimization that is formulated as a multiobjective optimization problem. Using domain-customized mutation operators, we obtain a well-sampled Pareto front showing all the nondominated solutions. The fourth application describes a scheduling problem for the maintenance tasks of a constellation of 25 low earth orbit satellites. The domain knowledge in this application is embedded in the design of a structured chromosome, a collection of time-value transformations to reflect static constraints, and a time-dependent penalty function to prevent schedule collisions.
Article
This work proposes two intelligent evolutionary algorithms, IEA and IMOEA, which use a novel intelligent gene collector (IGC) to solve single-objective and multiobjective large parameter optimization problems, respectively. IGC is the main phase in an intelligent recombination operator of IEA and IMOEA. Based on orthogonal experimental design, IGC uses a divide-and-conquer approach, which consists of adaptively dividing two parent individuals into N pairs of gene segments, economically identifying the potentially better of the two gene segments in each pair, and systematically obtaining a potentially good approximation to the best of all combinations using at most 2N fitness evaluations. IMOEA utilizes a novel generalized Pareto-based scale-independent fitness function for efficiently finding a set of Pareto-optimal solutions to a multiobjective optimization problem. The advantages of IEA and IMOEA are their simplicity, efficiency, and flexibility. It is shown empirically that IEA and IMOEA perform well in solving benchmark functions comprising many parameters, as compared with some existing EAs.
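The divide-and-conquer recombination described here can be sketched in a few lines: split both parents into segments, treat each segment's source parent as a two-level factor of an orthogonal array, evaluate the array's rows, and assemble a child from the per-segment winners of a main-effect analysis. The code below is a simplified illustration of that idea under the assumption of a minimization objective; two_level_oa, igc_crossover, and the sphere-function example are hypothetical names and data, not the authors' implementation, and no attempt is made to match the paper's bound of at most 2N evaluations exactly.

```python
import numpy as np

def two_level_oa(num_factors):
    """Construct a two-level orthogonal array OA(2^m, 2^m - 1, 2, 2):
    rows are all m-bit vectors, columns are the nonzero m-bit labels,
    entries are GF(2) dot products (levels coded 0/1)."""
    m = 1
    while 2 ** m - 1 < num_factors:
        m += 1
    rows = 2 ** m
    cols = []
    for c in range(1, 2 ** m):                       # nonzero column labels
        cols.append([bin(r & c).count("1") % 2 for r in range(rows)])
    return np.array(cols).T[:, :num_factors]          # rows x num_factors

def igc_crossover(p1, p2, segments, fitness):
    """Intelligent-gene-collector-style recombination (a sketch of the idea,
    not the authors' code): each segment is a two-level factor whose level
    says which parent it comes from; keep, per segment, the level with the
    better mean fitness over the orthogonal-array trials (minimization)."""
    oa = two_level_oa(len(segments))
    trial_fitness = []
    for row in oa:
        child = p1.copy()
        for level, (lo, hi) in zip(row, segments):
            if level == 1:
                child[lo:hi] = p2[lo:hi]
        trial_fitness.append(fitness(child))
    trial_fitness = np.array(trial_fitness)
    best = p1.copy()
    for j, (lo, hi) in enumerate(segments):
        if trial_fitness[oa[:, j] == 1].mean() < trial_fitness[oa[:, j] == 0].mean():
            best[lo:hi] = p2[lo:hi]
    return best

# Usage: combine two random parents split into 4 segments on the sphere function.
rng = np.random.default_rng(0)
p1, p2 = rng.uniform(-5, 5, 8), rng.uniform(-5, 5, 8)
segments = [(0, 2), (2, 4), (4, 6), (6, 8)]
child = igc_crossover(p1, p2, segments, lambda x: float(np.sum(x ** 2)))
```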
Article
In this paper, a hybrid Taguchi-genetic algorithm (HTGA) is proposed to solve global numerical optimization problems with continuous variables. The HTGA combines the traditional genetic algorithm (TGA), which has a powerful global exploration capability, with the Taguchi method, which can exploit the optimum offspring. The Taguchi method is inserted between crossover and mutation operations of a TGA. Then, the systematic reasoning ability of the Taguchi method is incorporated in the crossover operations to select the better genes to achieve crossover, and consequently, enhance the genetic algorithm. Therefore, the HTGA can be more robust, statistically sound, and quickly convergent. The proposed HTGA is effectively applied to solve 15 benchmark problems of global optimization with 30 or 100 dimensions and very large numbers of local minima. The computational experiments show that the proposed HTGA not only can find optimal or close-to-optimal solutions but also can obtain both better and more robust results than the existing algorithm reported recently in the literature.
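Structurally, the key point is where the Taguchi step sits in the generation loop: after crossover and before mutation. The sketch below is a much-simplified, hypothetical rendering of that structure (truncation-style selection, one-point crossover, a gene-wise Taguchi pick over a literal L8(2^7) array, Gaussian mutation, minimization assumed); it is not the HTGA as published, only an illustration of where the orthogonal-array reasoning plugs in.

```python
import numpy as np

# Standard L8(2^7) two-level orthogonal array, levels coded 0/1.
L8 = np.array([
    [0, 0, 0, 0, 0, 0, 0],
    [0, 0, 0, 1, 1, 1, 1],
    [0, 1, 1, 0, 0, 1, 1],
    [0, 1, 1, 1, 1, 0, 0],
    [1, 0, 1, 0, 1, 0, 1],
    [1, 0, 1, 1, 0, 1, 0],
    [1, 1, 0, 0, 1, 1, 0],
    [1, 1, 0, 1, 0, 0, 1],
])

def taguchi_step(x1, x2, f):
    """Pick, gene by gene, the better source parent using main effects of the
    L8 array on the fitness (minimization). Assumes len(x1) <= 7."""
    d = len(x1)
    trials = np.array([np.where(L8[k, :d] == 0, x1, x2) for k in range(8)])
    y = np.array([f(t) for t in trials])
    keep_x1 = [y[L8[:, j] == 0].mean() <= y[L8[:, j] == 1].mean() for j in range(d)]
    return np.where(keep_x1, x1, x2)

def htga_generation(pop, f, rng, pc=0.8, pm=0.1, sigma=0.3):
    """One generation of an HTGA-style loop: selection, crossover,
    Taguchi operation, then mutation (simplified sketch)."""
    fit = np.array([f(x) for x in pop])
    pop = pop[np.argsort(fit)]                     # keep better half on top
    children = []
    while len(children) < len(pop):
        i, j = rng.integers(0, len(pop) // 2, size=2)
        c = pop[i].copy()
        if rng.random() < pc:
            cut = rng.integers(1, pop.shape[1])    # one-point crossover
            c[cut:] = pop[j][cut:]
            c = taguchi_step(c, pop[j], f)         # Taguchi step after crossover
        mask = rng.random(pop.shape[1]) < pm       # Gaussian mutation
        c[mask] += rng.normal(0.0, sigma, size=mask.sum())
        children.append(c)
    return np.array(children)

# Usage: minimize the 5-D sphere function for a few generations.
rng = np.random.default_rng(1)
pop = rng.uniform(-5, 5, size=(20, 5))
for _ in range(30):
    pop = htga_generation(pop, lambda x: float(np.sum(x ** 2)), rng)
```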
Article
The use of intelligent techniques in the manufacturing field has been growing over the last decades because most manufacturing optimization problems are combinatorial and NP-hard. This paper examines recent developments in the field of evolutionary computation for manufacturing optimization. Significant papers in various areas are highlighted, and comparisons of results are given wherever data are available. A wide range of problems is covered, from job shop and flow shop scheduling to process planning and assembly line balancing.
Article
Many multimedia communication applications require a source to send multimedia information to multiple destinations through a communication network. To support these applications, it is necessary to determine a multicast tree of minimal cost to connect the source node to the destination nodes subject to delay constraints on multimedia communication. This problem is known as multimedia multicast routing and has been proved to be NP-complete. The paper proposes an orthogonal genetic algorithm for multimedia multicast routing. Its salient feature is to incorporate an experimental design method called orthogonal design into the crossover operation. As a result, it can search the solution space in a statistically sound manner and it is well suited for parallel implementation and execution. We execute the orthogonal genetic algorithm to solve two sets of benchmark test problems. The results indicate that for practical problem sizes, the orthogonal genetic algorithm can find near optimal solutions within moderate numbers of generations
Article
This brief proposes an efficient method for designing accurate structure-specified mixed H2/H∞ optimal controllers for systems with uncertainties and disturbance using an intelligent genetic algorithm (IGA). The newly developed IGA with intelligent crossover based on orthogonal experimental design (OED) is efficient for solving intractable engineering problems with many design parameters. The IGA-based method, without using prior domain knowledge, can efficiently solve design problems of multi-input multi-output (MIMO) optimal control systems, which makes it very suitable for practical engineering design. The high performance and validity of the proposed method are evaluated on two test problems, a MIMO distillation column model and a MIMO super-maneuverable F18/HARV fighter aircraft system. It is shown empirically that the IGA-based method achieves good tracking performance, robust stability, and disturbance attenuation for both controllers, compared with existing methods.
Article
In this paper the optimal population size N is empirically computed for the ONEMAX function and truncation selection. N depends on the size of the problem n, the probability p0 of the advantageous allele in the initial population, and the selection intensity I. The dependency of N on I is very complex. By numerically fitting the data, the following formula could be obtained: N = 1 + f(I) · √n · ln(n) · (1/√p0 - 1). Furthermore, it is shown that the minimal number of function evaluations FE needed to find the optimum is fairly constant in the range 1.0 ≤ I ≤ 1.4. In [MSV93] the breeder genetic algorithm was introduced. For a simplified model a predictive theory could be developed ([MSV94]). This model assumes additive gene effects and uniform crossover. The most simple example of additive gene effects is the fitness function ONEMAX of size n, which gives just the number of 1's in the string. The model needs five parameters to descr...
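Taking the fitted expression at face value, it can be evaluated directly once f(I) is known; the snippet below is a worked illustration only, with f(I) replaced by a hypothetical constant because the paper's fitted form for f(I) is not reproduced in this abstract.

```python
import math

def popsize_onemax(n, p0, f_of_I=2.0):
    """Fitted population size for ONEMAX with truncation selection:
    N = 1 + f(I) * sqrt(n) * ln(n) * (1/sqrt(p0) - 1).
    f_of_I is a placeholder constant; the paper fits f(I) numerically."""
    return 1 + f_of_I * math.sqrt(n) * math.log(n) * (1.0 / math.sqrt(p0) - 1.0)

# Illustrative only: problem size 100, initial allele probability 0.5.
print(round(popsize_onemax(n=100, p0=0.5)))
```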
Article
This paper reports work done over the past three years using rank-based allocation of reproductive trials. New evidence and arguments are presented which suggest that allocating reproductive trials according to rank is superior to fitness proportionate reproduction. Ranking can not only be used to slow search speed, but also to increase search speed when appropriate. Furthermore, the use of ranking provides a degree of control over selective pressure that is not possible with fitness proportionate reproduction. The use of rank-based allocation of reproductive trials is discussed in the context of 1) Holland's schema theorem, 2) DeJong's standard test suite, and 3) a set of neural net optimization problems that are larger than the problems in the standard test suite. The GENITOR algorithm is also discussed; this algorithm is specifically designed to allocate reproductive trials according to rank.
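Rank-based allocation replaces fitness-proportionate probabilities with probabilities that depend only on an individual's rank and a selection-pressure parameter. The sketch below shows a generic linear-ranking scheme as one concrete instance of the idea; it is not the GENITOR algorithm itself, and the pressure parameter and example fitness values are illustrative.

```python
import numpy as np

def linear_ranking_probs(fitness, pressure=1.5):
    """Linear ranking selection probabilities for a minimization problem.
    pressure in (1, 2]: larger values bias selection more toward the best.
    A generic sketch of rank-based allocation, not the GENITOR code."""
    fitness = np.asarray(fitness, dtype=float)
    n = len(fitness)
    order = np.argsort(fitness)                  # indices from best to worst
    rank = np.empty(n)
    rank[order] = np.arange(n - 1, -1, -1)       # best individual gets rank n-1
    eta_max, eta_min = pressure, 2.0 - pressure
    return (eta_min + (eta_max - eta_min) * rank / (n - 1)) / n

# Usage: draw reproductive trials according to rank only.
probs = linear_ranking_probs([3.2, 0.5, 1.7, 9.0])
parents = np.random.default_rng(0).choice(4, size=8, p=probs)
```

Because the probabilities depend only on rank, rescaling or translating the raw fitness values leaves the selective pressure unchanged, which is the control property the entry argues for.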
Article
We introduce basic guidelines for developing test suites for evolutionary algorithms and examine common test functions in terms of these guidelines. Two methods of designing test functions are introduced which address specific issues relevant to comparative studies of evolutionary algorithms. The first method produces representation invariant functions.
Article
The investigated mesh optimization problem C(N, n) for surface approximation, which is NP-hard, is to minimize the global error between a digital surface and its approximating mesh surface by efficiently locating a limited number n of grid points which are a subset of the original N sample points. This paper proposes an efficient coarse-to-fine evolutionary algorithm (CTFEA) with a novel orthogonal array crossover (OAX) for solving the mesh optimization problem. OAX adaptively divides the meshes of parents into a number of parts using a tuning parameter for applying a coarse-to-fine technique. Meshes of children are formed from an intelligent combination of the good parts from their parents rather than the conventional random combination. The better of the two parts in the two parents is chosen by evaluating the contribution of the individual parts to the fitness function based on orthogonal experimental design. The coarse-to-fine technique of CTFEA can advantageously solve large mesh optimization problems. Furthermore, CTFEA using an additional inheritance technique can further efficiently locate the grid points in the mesh surface. It is shown empirically that CTFEA outperforms the existing evolutionary algorithm in terms of both approximation quality and convergence speed, especially in solving large mesh optimization problems.
Article
Lapping is a very complicated and random process, resulting from the variation of abrasive grains in size and shape and from the numerous variables that affect process quality. It therefore needs to be analyzed by experiment rather than by theory to obtain the relative effects of the variables quantitatively. In this study, a cylindrical lapping experiment designed with Taguchi's L. orthogonal array was performed and analyzed with Yates' ANOVA table. As a result, effective variables and interaction effects were identified and discussed. The optimal variable combination giving the largest percentage improvement in surface roughness was also selected, and confirmatory experiments were performed.
Article
Recent research shows that orthogonal array based crossovers outperform standard and existing crossovers in evolutionary algorithms when solving parametrical problems with high dimensions and multiple optima. However, the crossovers employed so far ignore interactions between genes. In this paper, we propose a method to improve the existing orthogonal array based crossovers by integrating information about interactions between genes. It is empirically shown that the proposed orthogonal array based crossover significantly outperforms both the existing orthogonal array based crossovers and standard crossovers on parametrical benchmark functions in which interactions exist between variables. To further compare the proposed orthogonal array based crossover with existing crossovers in evolutionary algorithms, a validation test based on car door design is used, in which the effectiveness of the proposed orthogonal array based crossover is studied.
Article
Owing to the wide variability in the engineering properties of recycled concrete, a large number of experiments is usually required to decide on a suitable mixture that meets the desired requirements for concrete made with recycled concrete coarse/fine aggregate. This article adopts Taguchi's approach with an L16(2^15) orthogonal array and two-level factors to reduce the number of experiments. Five control factors and four responses (slump and compressive strengths at 7, 14, and 28 days) were used. Using analysis of variance (ANOVA) and a significance test with the F statistic to check for interactions and levels of significance, together with the computed total contribution rates, an optimal concrete mixture achieving the desired engineering properties with recycled concrete aggregates can easily be selected from among the experiments under consideration.
Article
Developments in computational models of evolutionary processes have led to the realization of powerful, robust, and general optimization and adaptive systems collectively called evolutionary algorithms. In this paper, we consider one member of this class of algorithms, the genetic algorithm, and describe the features and characteristics that are particularly appropriate for applications in control systems engineering. The versatility and robust qualities of the algorithm are considered and a number of application areas described. Some prospective future directions are also identified.
Article
In Quality Function Deployment, determining the target values for engineering requirements is the most difficult process. A fuzzy optimization model is presented for determining the target values of engineering requirements in Quality Function Deployment. An inexact genetic algorithm approach, which takes mutation along the weighted gradient direction as a genetic operator, is introduced to solve the model. Instead of obtaining one set of exact optimal target values, the approach can generate a family of inexact optimal target-value settings within an acceptable satisfaction degree. Through an interactive approach, a design team can determine a combination of preferred solution sets, from which a set of preferred target values of engineering requirements based on a specific design scenario can be obtained. An example of car door design is used to illustrate the approach.
Article
An optimization problem for polygonal approximation of 2-D shapes is investigated in this paper. The optimization problem for a digital contour of N points with the approximating polygon of K vertices has a search space of C(N, K) instances, i.e., the number of ways of choosing K vertices out of N points. A genetic-algorithm-based method has been proposed for determining the optimal polygons of digital curves, and its performance is better than that of several existing methods for the polygonal approximation problems. This paper proposes an efficient evolutionary algorithm (EEA) with a novel orthogonal array crossover for obtaining the optimal solution to the polygonal approximation problem. It is shown empirically that the proposed EEA outperforms the existing genetic-algorithm-based method under the same cost conditions in terms of the quality of the best solution, average solution, variance of solutions, and the convergence speed, especially in solving large polygonal approximation problems.
Article
Fluid dispensing is a popular process in the semiconductor manufacturing industry, commonly being used in die-bonding as well as microchip encapsulation of electronic packaging. Modelling the fluid dispensing process is important to understanding the process behaviour as well as determining the optimum operating conditions of the process for a high-yield, low-cost and robust operation. In this paper, an approach to integrating neural networks with a modified genetic algorithm is presented to model the fluid dispensing process for electronic packaging. The modified genetic algorithm is proposed by incorporating the crossover operator with an orthogonal array. We compare the modified genetic algorithm with the standard genetic algorithm. The results indicate that a better quality encapsulation can be obtained based on the modified genetic algorithm.
Conference Paper
There is general consensus that the coding of a problem domain holds an important key to a successful application. However, there is disagreement as to what aspects of a representation and a problem domain make the application 'hard' for a genetic algorithm (GA). This paper suggests a simple statistic, a regression analysis predicting the function value from the bits, as a means of measuring the amount of nonlinearity in a representation, and an interesting perspective on GA-hardness. This statistic is termed epistasis variance, for its analogy to the use of epistasis in genetics, and presents a perspective on GA-hardness different from those presented in recent works on deceptive problems. Two new findings result from the epistasis analysis. First, a step towards defining and understanding the role of epistasis in GAs, and in the search for understanding GA-hardness. Second, that three elements contribute to GA-hardness: the structure of the solution space, the representation of the solution space, and the sampling error resulting from finite and often small population sizes. These three elements are not necessarily linked, and furthermore, the effect of each of them on GA-hardness is not fixed.
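The proposed statistic boils down to asking how much of the fitness variance a purely linear (bit-by-bit) model can explain; what it cannot explain is attributed to epistasis. The sketch below is a simplified rendering of that idea using ordinary least squares on a random sample of bit strings; the normalization and the ONEMAX/parity examples are illustrative choices, not the paper's exact formulation.

```python
import numpy as np

def epistasis_variance(bits, fitness):
    """Fit a linear model fitness ~ b0 + sum_i b_i * bit_i by least squares
    and return the residual variance normalized by the fitness variance.
    Near 0: almost linear (low epistasis); near 1: highly epistatic."""
    X = np.column_stack([np.ones(len(bits)), np.asarray(bits, dtype=float)])
    y = np.asarray(fitness, dtype=float)
    coef, *_ = np.linalg.lstsq(X, y, rcond=None)
    residual = y - X @ coef
    return residual.var() / y.var()

# ONEMAX is exactly linear in the bits, so its epistasis variance is ~0.
rng = np.random.default_rng(0)
sample = rng.integers(0, 2, size=(64, 10))
print(epistasis_variance(sample, sample.sum(axis=1)))        # ~0.0
# A parity-like function is maximally nonlinear in the bits.
print(epistasis_variance(sample, sample.sum(axis=1) % 2))    # far from 0
```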
Article
In this paper, we formulate a special type of multiobjective optimization problem, named the biobjective 0/1 combinatorial optimization problem (BOCOP), and propose an inheritable genetic algorithm (IGA) with orthogonal array crossover (OAX) to efficiently find a complete set of nondominated solutions to BOCOP. BOCOP with n binary variables has two incommensurable and often competing objectives: minimizing the sum r of the values of all binary variables and optimizing the system performance. BOCOP is NP-hard, having a finite number C(n, r) of feasible solutions for a limited number r. The merits of IGA are threefold: 1) OAX, with its systematic reasoning ability based on orthogonal experimental design, can efficiently explore the search space of C(n, r); 2) IGA can efficiently search the space of C(n, r±1) by inheriting a good solution in the space of C(n, r); and 3) the single-objective IGA can economically obtain a complete set of high-quality nondominated solutions in a single run. Two applications of BOCOP are used to illustrate the effectiveness of the proposed algorithm: the polygonal approximation problem (PAP) and the problem of editing a minimum reference set for nearest neighbor classification (MRSP). It is shown empirically that IGA is efficient in finding complete sets of nondominated solutions to PAP and MRSP, compared with some existing methods.
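The inheritance mechanism can be illustrated on its own: a good solution with r ones seeds the search in the neighbouring spaces C(n, r-1) and C(n, r+1) via single-bit edits. The sketch below shows that step only (crossover and the Pareto bookkeeping are omitted), assumes a minimization objective, and uses a made-up toy objective.

```python
import numpy as np

def inherit(solution, direction, objective):
    """From a binary solution with r ones, generate all solutions with
    r+1 ones (direction=+1) or r-1 ones (direction=-1) that differ in one
    bit, and return the best one under the given objective (minimized)."""
    solution = np.asarray(solution, dtype=int)
    flip_to = 1 if direction > 0 else 0
    candidates = []
    for i in np.flatnonzero(solution != flip_to):
        c = solution.copy()
        c[i] = flip_to
        candidates.append(c)
    return min(candidates, key=objective)

# Example: start from r = 2 selected items and inherit into the r = 3 space.
x = np.array([1, 0, 1, 0, 0])
best = inherit(x, +1, lambda s: float(abs(s.sum() * 2.5 - 7)))  # toy objective
```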
Article
An integrated formulation and solution approach to Quality Function Deployment (QFD) is presented. Various models are developed by defining the major model components (namely, system parameters, objectives, and constraints) in a crisp or fuzzy way using multiattribute value theory combined with fuzzy regression and fuzzy optimization theory. The proposed approach would allow a design team to reconcile tradeoffs among the various performance characteristics representing customer satisfaction as well as the inherent fuzziness in the system. In addition, the modeling approach presented makes it possible to assess separately the effects of possibility and flexibility inherent or permitted in the design process on the overall design. Knowledge of the impact of the possibility and flexibility on customer satisfaction can also serve as a guideline for acquiring additional information to reduce fuzziness in the system parameters as well as determine how much flexibility is warranted or possible to improve a design. The proposed modeling approach would be applicable to a wide spectrum of design problems where multiple design criteria and functional design relationships are interacting and/or conflicting in an uncertain, qualitative, and fuzzy way.
Conference Paper
Based on our observation, some major steps in the genetic algorithm, such as the crossover operator, can be considered as experiments. The aim is to apply experimental design techniques to improve the crossover operator, so that the resulting operator can be more robust and statistically sound. The Taguchi method is a systematic and time-efficient approach that can aid in experimental design. Here we apply the Taguchi method to tailor a new crossover operator so that the operator can estimate the best point in the search space determined by the parents. Experimental results show that the proposed operator outperforms the classical GA crossover strategy on some parametrical problems.
Conference Paper
This paper proposes a novel genetic algorithm-based systematic reasoning approach using an orthogonal array crossover (OAX) for solving the traveling salesman problem (TSP). OAX makes use of the systematic reasoning ability of orthogonal arrays that can effectively preserve superior sub-paths from parents and guide the solution towards better quality. OAX combines the advantages of two traditional approaches: canonical approach and heuristic approach. It is shown empirically that OAX outperforms various superior crossovers in both accuracy and speed. An improved OAX with a well-known heuristic method is also presented.
Article
The notion of Pareto-optimality is one of the major approaches to multiobjective programming. While it is desirable to find more Pareto-optimal solutions, it is also desirable to find the ones scattered uniformly over the Pareto frontier in order to provide a variety of compromise solutions to the decision maker. We design a genetic algorithm for this purpose. We compose multiple fitness functions to guide the search, where each fitness function is equal to a weighted sum of the normalized objective functions and we apply an experimental design method called uniform design to select the weights. As a result, the search directions guided by these fitness functions are scattered uniformly toward the Pareto frontier in the objective space. With multiple fitness functions, we design a selection scheme to maintain a good and diverse population. In addition, we apply the uniform design to generate a good initial population and design a new crossover operator for searching the Pareto-optimal solutions. The numerical results demonstrate that the proposed algorithm can find the Pareto-optimal solutions scattered uniformly over the Pareto frontier.
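For two objectives, the construction reduces to a set of weight vectors whose induced search directions fan out evenly toward the Pareto front. The sketch below is a simplified stand-in for the paper's uniform-design tables: evenly spaced biobjective weights combined with population-normalized objectives; the function names and example data are illustrative.

```python
import numpy as np

def weight_vectors(k):
    """k evenly spaced weight vectors (w, 1-w) for a biobjective problem.
    A simple stand-in for the uniform-design tables used in the paper."""
    w = np.linspace(0.0, 1.0, k)
    return np.column_stack([w, 1.0 - w])

def weighted_fitness(objs, weights):
    """Normalize each objective to [0, 1] over the population, then combine
    with the given weight vector (both objectives to be minimized)."""
    objs = np.asarray(objs, dtype=float)
    lo, hi = objs.min(axis=0), objs.max(axis=0)
    norm = (objs - lo) / np.where(hi > lo, hi - lo, 1.0)
    return norm @ weights

# Usage: score a small population under three different search directions.
population_objs = [[1.0, 9.0], [3.0, 4.0], [8.0, 1.5]]
for w in weight_vectors(3):
    print(w, weighted_fitness(population_objs, w))
```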
Article
We design a genetic algorithm called the orthogonal genetic algorithm with quantization for global numerical optimization with continuous variables. Our objective is to apply methods of experimental design to enhance the genetic algorithm, so that the resulting algorithm can be more robust and statistically sound. A quantization technique is proposed to complement an experimental design method called orthogonal design. We apply the resulting methodology to generate an initial population of points that are scattered uniformly over the feasible solution space, so that the algorithm can evenly scan the feasible solution space once to locate good points for further exploration in subsequent iterations. In addition, we apply the quantization technique and orthogonal design to tailor a new crossover operator, such that this crossover operator can generate a small, but representative sample of points as the potential offspring. We execute the proposed algorithm to solve 15 benchmark problems with 30 or 100 dimensions and very large numbers of local minima. The results show that the proposed algorithm can find optimal or close-to-optimal solutions
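The quantization step can be shown compactly: each variable's interval is discretized into Q levels, and the rows of an orthogonal array over those levels become the initial population, so the feasible region is scanned evenly. The sketch below is a simplified illustration with a literal L9(3^4) array (four variables, Q = 3) rather than the paper's general construction; the bounds are placeholders.

```python
import numpy as np

# Standard L9(3^4) orthogonal array, levels coded 0/1/2.
L9 = np.array([
    [0, 0, 0, 0], [0, 1, 1, 1], [0, 2, 2, 2],
    [1, 0, 1, 2], [1, 1, 2, 0], [1, 2, 0, 1],
    [2, 0, 2, 1], [2, 1, 0, 2], [2, 2, 1, 0],
])

def quantized_initial_population(lower, upper, oa):
    """Quantize each variable's range into Q evenly spaced levels and map
    every orthogonal-array row to a real-valued point, so the initial
    population scans the feasible region evenly."""
    lower, upper = np.asarray(lower, float), np.asarray(upper, float)
    q = oa.max() + 1
    return lower + (upper - lower) * oa / (q - 1)

# Usage: nine initial points for a 4-variable problem on [-5, 5]^4.
pop = quantized_initial_population([-5, -5, -5, -5], [5, 5, 5, 5], L9)
```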
Article
Evolutionary programming (EP) has been applied with success to many numerical and combinatorial optimization problems in recent years. EP has rather slow convergence rates, however, on some function optimization problems. In the paper, a “fast EP” (FEP) is proposed which uses a Cauchy instead of Gaussian mutation as the primary search operator. The relationship between FEP and classical EP (CEP) is similar to that between fast simulated annealing and the classical version. Both analytical and empirical studies have been carried out to evaluate the performance of FEP and CEP for different function optimization problems. The paper shows that FEP is very good at search in a large neighborhood while CEP is better at search in a small local neighborhood. For a suite of 23 benchmark problems, FEP performs much better than CEP for multimodal functions with many local minima while being comparable to CEP in performance for unimodal and multimodal functions with only a few local minima. The paper also shows the relationship between the search step size and the probability of finding a global optimum and thus explains why FEP performs better than CEP on some functions but not on others. In addition, the importance of the neighborhood size and its relationship to the probability of finding a near-optimum is investigated. Based on these analyses, an improved FEP (IFEP) is proposed and tested empirically. This technique mixes different search operators (mutations). The experimental results show that IFEP performs better than or as well as the better of FEP and CEP for most benchmark problems tested
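The single change highlighted here, Cauchy instead of Gaussian mutation, is easy to state in code. The sketch below contrasts the two perturbations for a fixed per-component step size; it omits the self-adaptation of step sizes used in EP and is only an illustration of why the heavy-tailed Cauchy distribution produces occasional long jumps.

```python
import numpy as np

rng = np.random.default_rng(0)

def gaussian_mutation(x, eta):
    """Classical EP style: perturb each component with N(0, 1) scaled by eta."""
    return x + eta * rng.standard_normal(len(x))

def cauchy_mutation(x, eta):
    """Fast EP style: Cauchy perturbations have heavy tails, so occasional
    long jumps help escape the basin of a local minimum."""
    return x + eta * rng.standard_cauchy(len(x))

parent = np.zeros(5)
eta = np.full(5, 0.5)            # per-component step sizes (self-adapted in EP)
print(gaussian_mutation(parent, eta))
print(cauchy_mutation(parent, eta))
```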
Article
The authors discuss the Taguchi method as an approach to design optimization for quality. The method is briefly explained, and its application is illustrated for a propulsion system design optimization study for an advanced space transportation vehicle. The results suggest that the Taguchi method is a systematic and efficient approach that can aid in designing for performance, quality, and cost. Principal benefits include significant time and resource savings and the determination of parametric sensitivities and interactions
Intelligent genetic algorithm with a new intelligent crossover using orthogonal arrays
  • S. Y. Ho
  • L. S. Shu
  • H. M. Chen
Benchmark problems for control system design
  • E. J. Davison