Chapter

How to Solve It: Modern Heuristics

Authors: Zbigniew Michalewicz, David B. Fogel

Abstract

We’ve discussed a few traditional problem-solving strategies. Some of them guarantee finding the global solution, others don’t, but they all share a common pattern: either they guarantee discovering the global solution but are too expensive (i.e., too time consuming) for solving typical real-world problems, or else they tend to get “stuck” in local optima. Since there is almost no chance of speeding up algorithms that guarantee finding the global solution, i.e., almost no chance of finding polynomial-time algorithms for most real problems (as they tend to be NP-hard), the remaining option is to design algorithms that are capable of escaping local optima.
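The escape-from-local-optima idea can be illustrated with the simplest such remedy, a greedy local search wrapped in random restarts. The following is an illustrative sketch, not taken from the book; the toy objective and all parameter values are assumptions:

```python
import math
import random

def hill_climb(f, x, step=0.1, iters=200):
    """Greedy local search: accept a neighbour only if it improves f."""
    best_x, best_f = x, f(x)
    for _ in range(iters):
        cand = best_x + random.uniform(-step, step)
        if f(cand) < best_f:  # minimisation
            best_x, best_f = cand, f(cand)
    return best_x, best_f

def restart_hill_climb(f, bounds, restarts=30):
    """Escape local optima by restarting the greedy search from random points."""
    best = None
    for _ in range(restarts):
        x0 = random.uniform(*bounds)
        cand = hill_climb(f, x0)
        if best is None or cand[1] < best[1]:
            best = cand
    return best

# Multimodal toy objective: many local minima, global minimum f(0) = -1.
f = lambda x: x * x - math.cos(5 * x)

random.seed(0)
best_x, best_f = restart_hill_climb(f, bounds=(-3.0, 3.0))
```

A single hill climb from an unlucky start point stops at the nearest local minimum; the restarts are what give the search a chance to land in the basin of the global one.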


... Tackling real-world problems can be very challenging compared to standard benchmarking integer programming models [15, 27–29]. Multiple aspects contribute to creating this gap, such as bad modelling practices (e.g., oversimplification, linearisation), intractability (high computational complexity of solution methods), hard and soft constraints (heavy constraints...), external factors (stochastic environment, uncertainty about data), and composition of interdependent (mutually dependent) sub-problems [4,22]. ...
... This is followed by the Maximal Preservative Crossover (MPX) operator, which returns a new solution by combining the parents (line 19). The hill climbing function (Algorithm 2) is then applied to the new solution with a probability R_LS (lines 22–24). If the new solution does not already exist in the offspring population Q, it is included. ...
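The steps this snippet describes (crossover, probabilistic hill climbing, duplicate check) can be sketched as one offspring-creation step; `mpx`, `hill_climb` and `r_ls` are hypothetical stand-ins for the paper's MPX operator, its Algorithm 2, and the probability R_LS:

```python
import random

def memetic_step(parent_a, parent_b, population, mpx, hill_climb, r_ls=0.3):
    """One offspring-creation step: recombine, locally improve with
    probability r_ls, then insert into the offspring population if new."""
    child = mpx(parent_a, parent_b)   # combine the two parents
    if random.random() < r_ls:        # hill climbing with probability R_LS
        child = hill_climb(child)
    if child not in population:       # skip duplicate offspring
        population.append(child)
    return population

# Toy stand-ins: one-point crossover at position 1, identity "local search".
mpx = lambda a, b: a[:1] + b[1:]
offspring = memetic_step([1, 2], [3, 4], [], mpx, lambda c: c, r_ls=0.0)
```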
Preprint
Full-text available
Problems with multiple interdependent components offer a better representation of real-world situations, where globally optimal solutions are preferred over optimal solutions for the individual components. One such model is the Travelling Thief Problem (TTP); while it appears popular and may offer a better benchmarking alternative to the standard benchmarks, only one form of inter-component dependency has been investigated. The goal of this paper is to study the impact of different models of dependency on the fitness landscape using performance prediction models (regression analysis). To conduct the analysis, we consider a generalised model of the TTP, where the dependencies between the two components of the problem are tunable via the problem features. The regression model was able to predict the expected runtime of the algorithm based on the problem features. Furthermore, the results show that the contribution of the item value drop dependency is significantly higher than that of the velocity change dependency.
... It involves creating a "guess" and checking whether that guess fits the answer to the situation in question. If necessary, the guess can be revised to guarantee a better and more accurate solution [4]. This technique is better applied to some HPs. ...
... The Work Backward heuristic requires considering the problem in reverse order. "Working backwards" allows the heuristic to solve a problem by assuming that the problem's solution is already known, and what remains is to reconstruct how that solution could have been obtained [4]. Not all problems can be solved by working backwards, as larger problems take longer to solve this way; the following question is an example of a problem that can be solved backwards. ...
Preprint
This paper demonstrates a new approach to tuning the Ant Colony Optimization (ACO) algorithm's parameters using a unique automatic sequential Monte Carlo evaluation method, which utilises genetic algorithm techniques. We firmly believe that the available approaches do not exploit an in-depth understanding of the effect of individual parameters on the behaviour of ACO algorithms. Therefore, this study was conducted to better understand the impact of these variables on finding the optimum solution. Experiments were conducted on different types of Travelling Salesman Problems, chosen from the TSPLIB categorisation. We were able to show a slight improvement in the quality of the solution. We also found that every individual case has a specific combination that works best. Still, it is possible to use mutation to create a list of parameter combinations that improve the solution's quality over a large selection of optimisation problems.
... Kernel-based learning methods, such as support vector machines (SVMs), are inspired by statistical learning theory and the Vapnik–Chervonenkis (VC) dimension [63]. These are supervised learning methods. ...
... SVMs [63] have been integrated with sampling methods to deal with the classification problem in imbalanced datasets. Among these methods, we have the following. ...
Chapter
Full-text available
Classification is a data mining task. It aims to extract knowledge from large datasets. There are two kinds of classification. The first one is known as complete classification, and it is applied to balanced datasets. When classification is applied to imbalanced datasets, however, it is called partial classification, or the problem of classification in imbalanced datasets, which is a fundamental problem in machine learning and has received much attention. Considering the importance of this issue, a large number of techniques have been proposed to address this problem. These proposals can be divided into three levels: the algorithm level, the data level, and the hybrid level. In this chapter, we present the classification problem in imbalanced datasets, its domains of application, its appropriate performance measures, and its approaches and techniques.
... Optimization is a broad area, and we refer to the books [61], [135], [138] for further details. ...
Article
Full-text available
We present an interdisciplinary survey of the history of loosely coupled systems. We apply the presented concepts in communication networks and suggest hybrid self-organizing networks (SONs) as a universal model for future networks. Self-organizing networks can fulfill the tight requirements of future networks but are challenging to use due to their complexity and immaturity. Moreover, the lack of an externally defined goal and centralized control has resulted in many distributed self-organizing systems failing. This is because the nonlinear relationships between the system parts result in emergence, i.e., we cannot predict the behavior of the whole from the behavior of the parts. Furthermore, a set of local optima does not produce a global optimum. Hybrid SONs tackle these challenges with loose or weak coupling of interacting agents that combine centralized control for global optimization with distributed control for local optimization. In the loose centralized control of almost autonomous agents, decisions are made mostly locally with small delays. This architecture has beneficial properties such as stability, obtained by decoupling the feedback loops: vertically with time-scale separation and horizontally with interference avoidance. Applications of loose coupling include modular electronics and computer design, structured software design, and service-oriented architectures, especially for microservices. Cross-layer design for network optimization is a new reason to use loose coupling in networks to improve stability. We also summarize some recent trends and present a roadmap to the future. We expect that loose coupling will be widely used in self-organizing networks of future wireless systems.
... We then employ simulated annealing [33] to search for the parameter combination that minimizes the score T. Each term is as follows: ...
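A generic simulated annealing loop of the kind referred to here might look as follows. This is a sketch with a toy quadratic score, not the authors' score T; all names and settings are illustrative:

```python
import math
import random

def simulated_annealing(score, x0, neighbour, t0=1.0, cooling=0.995, iters=2000):
    """Worse moves are accepted with probability exp(-delta / T), which
    lets the search escape local optima; T decays geometrically."""
    x, s = x0, score(x0)
    best_x, best_s = x, s
    t = t0
    for _ in range(iters):
        cand = neighbour(x)
        delta = score(cand) - s
        if delta <= 0 or random.random() < math.exp(-delta / t):
            x, s = cand, s + delta   # accept the candidate
        if s < best_s:
            best_x, best_s = x, s    # track the best solution seen
        t *= cooling
    return best_x, best_s

# Toy score with optimum at (1, -2); small gaussian neighbourhood moves.
score = lambda v: (v[0] - 1.0) ** 2 + (v[1] + 2.0) ** 2
neighbour = lambda v: (v[0] + random.gauss(0, 0.1), v[1] + random.gauss(0, 0.1))

random.seed(0)
best_v, best_s = simulated_annealing(score, (5.0, 5.0), neighbour)
```

Early on, the high temperature makes the walk nearly random; as T shrinks, the loop degenerates into ordinary hill climbing around the best region found.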
Article
We designed a new algorithm for reconstructing three-dimensional (3D) geometric diagrams. By extracting the orthogonality of the coordinate axes, the contents of labels, the positional relationships between labels and other elements, and direct and indirect connection relationships between elements, we define a score function and derive the optimal parameters that minimize it. This allows the elements to be reconstructed in 3D space and allows the system to create diagrams with different viewpoints. In our system, the reconstructed elements are redrawn with explicit orthogonal coordinate axes. The user can directly move the axes as an interface for changing viewpoints. The system allows the user to edit and attach elements to the diagram interactively.
... Optimization can use either deterministic or stochastic methods to reach the optimum solution. Deterministic methods follow at most one path towards the target solution, while stochastic methods proceed randomly and are not subject to deterministic limitations [3,4]. In general, stochastic techniques are grouped with heuristics and meta-heuristics. ...
Article
Full-text available
The Whale Optimization Algorithm (WOA) is one of the recent meta-heuristic algorithms. WOA has advantages such as an exploration mechanism that leads towards the global optimum, a suitable balance between exploration and exploitation that avoids local optima, and a very good exploitation capability. In this study, five new hybrid algorithms are proposed to build on these advantages. Two of them are developed by combining the WOA and Particle Swarm Optimization (PSO) algorithms, and three of them are developed by adding the Lévy flight algorithm to this combination in different ways. The proposed algorithms have been tested on 23 mathematical optimization problems, and to make the comparison more accurate, the average optimization results and corresponding standard deviations were calculated by running each algorithm 30 times per problem. The proposed algorithms' performances were evaluated among themselves, and the WOALFVWPSO algorithm performed best among them. This algorithm was first compared with WOA and PSO, and then with other algorithms in the literature. Compared with WOA and PSO, the proposed algorithm performs better on 19 of the 23 mathematical optimization problems, and compared with the other algorithms from the literature, it performs better on 15 of the 23 problems. The proposed algorithm has also been applied to the pressure vessel design engineering problem and achieved the best result compared to other algorithms in the literature. The results show that the WOALFVWPSO algorithm provides competitive solutions for most optimization problems when compared to the meta-heuristic algorithms in the literature.
... Case studies prove that production can be optimized by means of transformation production procedures (Modrak, 2009). Moreover, many problems can be solved by means of adequate multi-criteria decision-making using modern heuristics (Michalewicz & Fogel, 2004). ...
... ILS is a metaheuristic that enhances the performance of local search algorithms for combinatorial optimization. Local search algorithms perform local changes to an existing solution to attempt to find a better one (Michalewicz and Fogel 2004). A weakness of these algorithms is that they tend to get stuck in a locally optimal solution instead of the global optimum, since they do not examine all possible changes to the existing solution. ...
Article
Full-text available
Two-level screening designs are widely applied in manufacturing industry to identify influential factors of a system. These designs have each factor at two levels and are traditionally constructed using standard algorithms, which rely on a pre-specified linear model. Since the assumed model may depart from the truth, two-level $Q_B$-optimal designs have been developed to provide efficient parameter estimates for several potential models. These designs also have an overarching goal that models that are more likely to be the best for explaining the data are estimated more efficiently than the rest. However, there is no effective algorithm for constructing them. This article proposes two methods: a mixed-integer programming algorithm that guarantees convergence to the two-level $Q_B$-optimal designs; and, a heuristic algorithm that employs a novel formula to find good designs in short computing times. Using numerical experiments, we show that our mixed-integer programming algorithm is attractive to find small optimal designs, and our heuristic algorithm is the most computationally-effective approach to construct both small and large designs, when compared to benchmark heuristic algorithms.
... Partially due to the population-based property and the self-adaptivity [2], evolutionary algorithms (EAs) have been widely recognized as an effective tool for DOPs. Most EA developments implicitly assume that the objective function evaluations (FEs) are computationally cheap or trivial. ...
Preprint
Many real-world problems are usually computationally costly and the objective functions evolve over time. Data-driven, a.k.a. surrogate-assisted, evolutionary optimization has been recognized as an effective approach for tackling expensive black-box optimization problems in a static environment, whereas it has rarely been studied under dynamic environments. This paper proposes a simple but effective transfer learning framework to empower data-driven evolutionary optimization to solve dynamic optimization problems. Specifically, it applies a hierarchical multi-output Gaussian process to capture the correlation between data collected from different time steps with a linearly increased number of hyperparameters. Furthermore, an adaptive source task selection along with a bespoke warm starting initialization mechanism is proposed to better leverage the knowledge extracted from previous optimization exercises. By doing so, the data-driven evolutionary optimization can jump start the optimization in the new environment with a strictly limited computational budget. Experiments on synthetic benchmark test problems and a real-world case study demonstrate the effectiveness of our proposed algorithm against nine state-of-the-art peer algorithms.
... In classical GA, the population is a matrix of constant size p × g and is initially generated randomly, where p is the number of individuals and g is the number of genes in each individual. Each gene receives the value of one of the problem's variables, each individual of the population is a possible solution to the proposed problem, and its quality is measured through the evaluation function [67]. After being evaluated, the population is submitted to the genetic manipulators: elitism and selection, and then the crossover and mutation operators. ...
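The classical GA loop described in this snippet (random p-by-g population, evaluation, elitism, selection, crossover, mutation) can be sketched as follows; the OneMax fitness and all parameter values are illustrative assumptions, not from the cited work:

```python
import random

def genetic_algorithm(fitness, p=30, g=20, generations=100,
                      cx_rate=0.9, mut_rate=0.02, elite=2):
    """Minimal binary GA: p individuals of g genes each, with elitism,
    tournament selection, one-point crossover and bit-flip mutation."""
    pop = [[random.randint(0, 1) for _ in range(g)] for _ in range(p)]

    def tournament(k=3):
        # Pick the fittest of k randomly sampled individuals.
        return max(random.sample(pop, k), key=fitness)

    for _ in range(generations):
        # Elitism: carry the best individuals over unchanged.
        nxt = sorted(pop, key=fitness, reverse=True)[:elite]
        while len(nxt) < p:
            a, b = tournament(), tournament()
            if random.random() < cx_rate:          # one-point crossover
                cut = random.randrange(1, g)
                a = a[:cut] + b[cut:]
            # Bit-flip mutation, gene by gene.
            child = [1 - bit if random.random() < mut_rate else bit for bit in a]
            nxt.append(child)
        pop = nxt
    return max(pop, key=fitness)

# OneMax toy problem: the fitness of a bit string is its number of 1-bits.
random.seed(0)
best = genetic_algorithm(fitness=sum)
```

With this setup the population converges quickly to all-ones strings; swapping in a different `fitness` callable adapts the same skeleton to other binary-encoded problems.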
Article
Full-text available
This work proposes the design of an optimization method for high-power LED luminaires with the introduction of new evaluation metrics. A luminaire geometry computational method is deployed to conduct thermal and optical analysis. This work is novel in designing a tool that enables the analysis of uniformity for an individual luminaire over the target plane in accordance with international regulatory standards. Additionally, adequate thermal management is conducted to guarantee the nominal operating values specified by LED vendors. The results of this optimization method present luminaire models with different geometries that allow the stabilization of the temperature within the safety and uniform illuminance distribution thresholds. The resulting solution proposes the design of a 2×2 HP-LED rectangular luminaire. During simulations, the temperature of the LED reaches a maximum value of 73.9 °C in a steady state with a uniformity index of 0.228 for the individual luminaire. The overall uniformity index identified for two separate and adjacent luminaire points in a pedestrian walk is 0.5413, with a minimum illuminance of 36.95 lx, maximum illuminance of 93.65 lx and average illuminance of 68.27 lx. Overall, we conclude that the currently adopted metric, which takes into consideration only the ratio between the minimum and the average illuminance, is not efficient and cannot distinguish different luminaire geometry standards according to their uniform illuminance distribution. The metric proposed and designed in this work is capable of evaluating illuminance and thermal threshold criteria, as well as classifying different sorts of luminaires.
... At this point it is interesting to recall that the ε-DE, as well as any metaheuristic and nature-inspired methods such as Genetic Algorithm (Back, 1996), does not guarantee that a global optimum is reached (Michalewicz and Fogel, 2013). Nevertheless, as we have mentioned in the Introduction, such methods have been widely and successfully employed for several types of industrial applications due to their field-proven ability to deal with complex search spaces where the classical deterministic optimization methods cannot be applied (Schmidt and Thierauf, 2005). ...
Article
This work presents a new approach to the optimal design of mooring systems for oil & gas floating production units (FPU). The proposed optimization procedure integrates an efficient optimization algorithm, the ε-Constrained Differential Evolution, with Artificial Neural Networks (ANNs) as metamodels that evaluate the candidate solutions along the optimization loop, effectively replacing the computationally expensive Finite Element nonlinear time-domain simulations. The ANNs are trained with data from time-domain simulations of representative sample configurations. This integration of a metamodel and an optimization algorithm into a unified framework for the optimal design of FPU spread mooring systems comprises an innovative tool for such applications. Other innovative aspects include enhancements in the modeling of the optimization problem by defining design variables, objective function and constraints that are more simple, direct, and appropriate to actual taut leg and semi-taut leg configurations. A case study considering a real VLCC-sized spread-moored FPSO operating in a deep-water field is presented to illustrate the practical application of the optimization tool. The results indicate that, with virtually no human supervision, the tool can provide feasible solutions with noticeable overall cost reductions (in terms of the CAPEX of the system), and with drastic reductions of total computational requirement.
... Stochastic approach techniques are introduced to solve irregular problems. Irregular problems are problems that are high-dimensional and/or discontinuous and/or multimodal and/or NP-complete, which makes enumerative and deterministic approaches incapable of finding acceptable results in an acceptable time [30]. Stochastic algorithms usually have a fitness function which assigns fitness values to possible solutions. ...
Article
Full-text available
Solving many-objective optimization problems has become one of the most popular research areas in recent years due to its ever-increasing applications in industries and other fields. In this paper, a novel two-phase hybrid feeder (TPHF) is proposed to provide elite solutions for selection mechanism of many-objective optimization algorithms (MaOAs) to improve their performance. The proposed TPHF framework generates solutions using a novel particle swarm optimization (PSO) operator along with a genetic algorithm operator during a two-phase calculated search based on the average velocity of the PSO particles. TPHF focuses on the worst solutions of the population to find a better place for them. Therefore, it frequently resets the PSO particles to the worst solutions. The new PSO operator uses the novel idea of dynamic inertia and learning factors and a novel velocity update equation. The classic global bests set of the classic PSO operator is replaced by a PSO feeder which uses the novel idea of using groups of the best/worst solutions to feed the new PSO operator based on the phase of the search. The proposed TPHF is applied to some of the most famous and state-of-the-art MaOAs with their corresponding default parameters. The result of the comparison between these MaOAs with their corresponding TPHF versions on MaF test suite shows a significant improvement in the performance of the TPHF versions in 62.2% of the cases.
... However, one method to calibrate a mathematical model uses a nonlinear multivariate optimization function; therefore, these optimization problems can have locally optimal solutions, and such problems are called multimodal [44]. In recent years, global optimization methods have been increasingly used to solve these types of problems [45] due to their advantage of obtaining globally optimal solutions. There are various parameter adjustment techniques for mathematical models. ...
Article
Full-text available
Photosynthesis is a vital process for the planet. Its estimation involves the measurement of different variables and its processing through a mathematical model. This article presents a black-box mathematical model to estimate the net photosynthesis and its digital implementation. The model uses variables such as: leaf temperature, relative leaf humidity, and incident radiation. The model was elaborated with obtained data from Capsicum annuum L. plants and calibrated using genetic algorithms. The model was validated with Capsicum annuum L. and Capsicum chinense Jacq. plants, achieving average errors of 3% in Capsicum annuum L. and 18.4% in Capsicum chinense Jacq. The error in Capsicum chinense Jacq. was due to the different experimental conditions. According to evaluation, all correlation coefficients (Rho) are greater than 0.98, resulting from the comparison with the LI-COR Li-6800 equipment. The digital implementation consists of an FPGA for data acquisition and processing, as well as a Raspberry Pi for IoT and in situ interfaces; thus, generating a useful net photosynthesis device with non-invasive sensors. This proposal presents an innovative, portable, and low-scale way to estimate the photosynthetic process in vivo, in situ, and in vitro, using non-invasive techniques.
... The crossover operator permits chromosomes to inherit some promising traits from two (possibly more) selected parents, while mutation seeks to introduce some new traits that can enhance the solution obtained from the crossover operation (Davis, 1987; Goldberg, 1989; Sastry et al., 2005; Davis, 1991; Beasley et al., 1993; Reeves, 1995; Mitchell, 1996; Michalewicz & Fogel, 2000). An overview of GA as implemented in this work is given in Figure 4.2. ...
Thesis
Full-text available
This research work focused on the performance of heuristics and metaheuristics for the recently defined Hostel Space Allocation Problem (HSAP), a new instance of the space allocation problem (SAP) in higher institutions of learning (HIL). SAP is a combinatorial optimisation problem that involves the distribution of the available spaces (rooms, bed spaces, office spaces, etc.) amongst a set of deserving entities so that the available spaces are optimally utilized while complying with the given set of constraints. HSAP deals with the allocation of bed space in available but limited halls of residence to competing groups of students such that the given requirements and constraints are satisfied as much as possible. The problem was recently introduced in the literature, and a preliminary baseline solution using a Genetic Algorithm (GA) was provided to show the viability of heuristics in solving the problem rather than recourse to the usual manual processing. Since the administration of hostel space allocation varies across institutions, countries and continents, the available instance is defined as obtained from a top institution in Nigeria. This instance is the focus of this research study. The main aim of this thesis is to study the strength and performance of some Local Search (LS) heuristics in solving this problem. In the process, however, some hybrid techniques that combine both population-based and LS heuristics are derived. This enables a comprehensive comparative study aimed at determining which heuristics and/or combinations perform best for the given problem. HSAP is a multi-objective and multi-stage problem. Each stage of the allocation has different requirements and constraints. An attempt is made to formulate these problems as an optimisation problem, and various inter-related heuristics and meta-heuristics are then provided to solve it at the different levels of the allocation process.
Specifically, Hill Climbing (HC), Simulated Annealing (SA), Tabu Search (TS), Late Acceptance Hill Climbing (LAHC) and GA were applied to distribute the students at all three levels of allocation. At each level, a comparison of the algorithms is presented. In addition, variants of the algorithms were developed from a multi-objective perspective, yielding promising and better solutions compared to the results obtained from the manual method used by the administrators in the institutions. Comparisons and analyses of the results obtained from the above methods were carried out. Obtaining datasets for HSAP is a very difficult task, as most institutions either do not keep proper records of past allocations or are not willing to make such records available for research purposes. The only dataset available, which is also used for simulation in this study, is the one recently reported in the literature. However, to test the robustness of the algorithms, two new datasets that follow the pattern of the known dataset from the literature were randomly generated. Results obtained with these datasets further demonstrate the viability of applying tested operations research techniques to efficiently solve this new instance of SAP.
... Since this involves many terms, the main attempt is minimizing the self-weight of the structure, indirectly connected to material cost, i.e., material usage and natural resources consumption [51]. Several strategies have been developed over the years to handle constraints [52][53][54]. In the present work, the penalty function-based approach was implemented due to its simplicity, allowing the problem with objective function f(x) to be converted into a new unconstrained version φ(x): ...
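A minimal version of the penalty-function conversion mentioned here can be sketched as follows, assuming inequality constraints written as g(x) <= 0 and a fixed penalty coefficient r (a common textbook form; the paper's exact penalty formulation may differ):

```python
def penalised(f, g_list, r=1e3):
    """Exterior quadratic penalty: phi(x) = f(x) + r * sum(max(0, g(x))**2)
    over the inequality constraints g(x) <= 0. Feasible points pay no
    penalty; infeasible points pay proportionally to squared violation."""
    def phi(x):
        violation = sum(max(0.0, g(x)) ** 2 for g in g_list)
        return f(x) + r * violation
    return phi

# Toy example: minimise f(x) = x**2 subject to x >= 1, i.e. 1 - x <= 0.
phi = penalised(lambda x: x * x, [lambda x: 1.0 - x])
feasible = phi(2.0)     # -> 4.0, penalty term is zero
infeasible = phi(0.0)   # -> 1000.0, dominated by the penalty
```

The unconstrained φ(x) can then be handed to any of the metaheuristics discussed in this record; in practice r is often increased over the run so that the final solution is driven into the feasible region.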
Article
Full-text available
This paper discusses the size and shape optimization of a guyed radio mast for radiocommunications. The considered structure represents a widely industrial solution due to the recent spread of 5G and 6G mobile networks. The guyed radio mast was modeled with the finite element software SAP2000 and optimized through a genetic optimization algorithm (GA). The optimization exploits the open application programming interfaces (OAPI) SAP2000-Matlab. Static and dynamic analyses were carried out to provide realistic design scenarios of the mast structure. The authors considered the action of wind, ice, and seismic loads as variable loads. A parametric study on the most critical design variables includes several optimization scenarios to minimize the structure's total self-weight by varying the most relevant parameters selected by a preliminary sensitivity analysis. In conclusion, final design considerations are discussed by highlighting the best optimization scenario in terms of the objective function and the number of parameters involved in the analysis.
... Global optimization methods are widely used in engineering calculations when designing complex technical systems, in problems of information processing, decision making and optimal control, machine learning, etc. [1–7]. As a rule, the application of the necessary optimality conditions for finding the optimal open-loop control of discrete deterministic systems is associated with the solution of a boundary value problem for a system of difference equations, whose computational complexity greatly increases with the dimension of the state vector. ...
Article
Full-text available
A new hybrid metaheuristic method for optimizing the objective function on a parallelepiped set of admissible solutions is proposed. It mimics the behavior of a school of river perch when looking for food. The algorithm uses the ideas of several methods: a frog-leaping method, migration algorithms, a cuckoo algorithm and a path-relinking procedure. As an application, a wide class of problems of finding the optimal control of deterministic discrete dynamical systems with a nonseparable performance criterion is chosen. For this class of optimization problems, it is difficult to apply the discrete maximum principle and its generalizations as a necessary optimality condition and the Bellman equation as a sufficient optimality condition. The desire to extend the class of problems to be solved to control problems of trajectory bundles and stochastic problems leads to the need to use not only classical adaptive random search procedures but also new approaches combining the ideas of migration algorithms and swarm intelligence methods. The efficiency of this method is demonstrated and analysed by solving several optimal deterministic discrete control problems: two nonseparable problems (Luus–Tassone and Li–Haimes) and five classic linear systems control problems with known exact solutions.
... There are around 1000 distinct species of bats [7]. Their sizes vary widely, from the tiny bumblebee bat to much larger giant species. Echolocation is not used by every species: microbats use echolocation extensively, while megabats generally do not. ...
Article
Full-text available
There is a significant necessity to compress medical images for the purposes of communication and storage. Most currently available compression techniques produce an extremely high compression ratio with a high quality loss. In medical applications, the diagnostically significant regions (regions of interest) should have high image quality. Therefore, it is preferable to compress the regions of interest using lossless compression techniques, whilst the diagnostically less significant regions (non-interest regions) can be compressed using lossy compression techniques. In this paper, a hybrid technique combining Set Partitioning in Hierarchical Trees (SPIHT) and Bat-inspired algorithms is utilized for lossless compression of the region of interest, and the non-interest region is lossily compressed with the Discrete Cosine Transform (DCT) technique. The experimental results show that the proposed hybrid technique enhances the compression performance and ratio. Also, the utilization of DCT increases compression performance with low computational complexity.
... GA is a powerful search algorithm that has been used in many fields for optimization. The principles of GA have been described in many texts and references (23–33). Some references also provide an overview of GA (34,35,38,39). ...
Conference Paper
Full-text available
In recent years artificial intelligence (AI)-based heuristics, such as Simulated Annealing, Tabu Search, Genetic, and Ant Algorithms, have been extensively applied to solve many transportation location optimization problems. Despite their widespread applicability, there are no guidelines available to the user community regarding the strengths and weaknesses of these heuristics when applied in different problem contexts. Generally, the Traveling Salesman Problem (TSP) is treated as the benchmark problem upon which different heuristics are tested to assess their relative effectiveness in terms of precision and efficiency. But it is well acknowledged that every optimization problem is different and offers a variety of challenges preventing the seamless, across-the-board applicability of such heuristics. In this paper we assess the effectiveness of two AI-based heuristics, genetic and ant algorithms, in two distinct and relatively complex transportation location optimization problems: (1) highway alignment optimization, and (2) rail transit station location optimization. Three separate algorithms, GSLO, AHAO, and ASLO, are developed for this purpose. We find that while Genetic Algorithms are particularly effective for exhaustive searching in a continuous space and converging to a global optimum, Ant Algorithms may be particularly suited for local search in discrete spaces. Further, the selection of an appropriate algorithm should largely depend on the complexity of the optimization problem at hand rather than taking a "plug and play" approach when selecting these algorithms. The computation time and the quality of the results have to be traded off, if and when needed. It has been observed that for the two (simplified) transportation problems mentioned above, the Ant Algorithm gives almost the same results as the Genetic Algorithm, but with lower computational complexity.
... Even though the best solutions provided by such search mechanisms may be considered feasible, no proof of optimality can be drawn. On the other hand, approximated mathematical models that can be solved to optimality do not ensure optimality in the original model or, worse, do not even ensure feasibility in the original problem; in fact, they are often infeasible [3]. ...
Article
Full-text available
In recent years, a great deal of effort has been put into approximated or relaxed models and into heuristic and metaheuristic algorithms to solve complex problems, mainly of non-linear and non-convex nature, in a reasonable time. On one hand, approximated/relaxed mathematical models often provide convergence guarantees and allow the problem to be solved to global optimality. On the other hand, there is no guarantee that the optimal solution of the modified problem is even feasible in the original one. In contrast, metaheuristic algorithms lack a mathematical proof of optimality, but because the obtained solutions can be tested against the original problem, feasibility can be ensured. In this sense, this work brings a new method combining exact solutions from a Mixed-Integer Linear Programming (MILP) Transmission Expansion Planning (TEP) model and stochastic solutions from metaheuristic algorithms to solve the non-linear and non-convex TEP problem. We identify the issues that come up with the linear approximations and metaheuristic procedures, and we introduce a MILP-Based Heuristic (MBH) algorithm to overcome them. We demonstrate our method on a single-stage TEP with the RTS 24-node system and on a multi-stage TEP with the IEEE 118-node test system. The AC TEP solution was obtained using Evolutionary Computation, while the DC TEP solution was obtained using a commercial solver. From the simulation results, the novel MBH method was able to reduce the investment cost of an evolutionary computation solution by 42% for the single-stage TEP and by 85% for the multi-stage TEP.
... The distribution of these local optima is referred to as ruggedness: a rugged landscape consists of neighboring points with very different fitness values. As mentioned in [MF04], the ruggedness of a landscape may affect the search, as an algorithm can get stuck in a local optimum, which can slow down or compromise the search for the global optimum. ...
Thesis
Radar networks are complex systems that need to be configured to maximize their coverage or the probability of detection of a target. The optimization of radar networks is a challenging task that is typically performed by experts with the help of simulators. Alternatively, black-box optimization algorithms can be used to solve these complex problems. Many heuristic algorithms have been developed to solve black-box optimization problems, and these algorithms exhibit complementary performance depending on the structure of the problem. Therefore, selecting the appropriate algorithm is a crucial task. The objective of this CIFRE PhD is to perform a landscape-aware algorithm selection of metaheuristics in order to optimize radar networks. The main contributions of this PhD thesis are twofold. First, we define six properties that landscape features should satisfy, and we study to what degree landscape features satisfy these properties. One of the six properties is invariance to the sampling strategy. We found that, contrary to what was recommended in the literature, the sampling strategy actually matters: there are important discrepancies in the feature values computed from different sampling strategies. Overall, we found that none of the features satisfy all the defined properties. These features form the core of a landscape-aware algorithm selection. Second, we applied the landscape-aware algorithm selection of metaheuristics to the optimization of radar network use-cases. On these use-cases, algorithms have similar performances, and the gain from performing an automated selection of algorithms is small. Nevertheless, the performance of the landscape-aware algorithm selection of metaheuristics is similar to that of the single best solver (SBS).
... Tackling difficult optimization problems requires metaheuristics [21]; very often it is necessary to create new ones [34], e.g., by hybridizing existing algorithms [30]. ...
Chapter
Full-text available
Socio-cognitive computing is a paradigm developed over the last several years; it consists of introducing into metaheuristics mechanisms inspired by inter-individual learning and cognition. It was successfully applied in hybridizing the ACO and PSO metaheuristics. In this paper we follow our previous experience in order to hybridize the acclaimed evolution strategies. The newly constructed hybrids were applied to popular benchmarks and compared with their reference versions.
... Due to the size and the complexity of management problems, the use of genetic algorithms is proposed as an optimization method. The concept of genetic algorithms is based on imitating the biological process of species evolution as a means of finding the optimum solution (Michalewicz 1996, Michalewicz & Fogel 2004). Figure 6 shows the genetic algorithm application flowchart. ...
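The evolutionary loop sketched in such a flowchart — selection, recombination, mutation, repeat — can be written in a few lines. Below is a minimal, illustrative sketch (function names, operators, and parameters are our own, not taken from the cited study), maximizing the classic OneMax toy fitness:

```python
import random

def genetic_algorithm(fitness, n_bits=16, pop_size=30, generations=100,
                      p_mut=0.02, seed=0):
    """Minimal generational GA: tournament selection, one-point crossover, bit mutation."""
    rng = random.Random(seed)
    pop = [[rng.randint(0, 1) for _ in range(n_bits)] for _ in range(pop_size)]
    best = max(pop, key=fitness)
    for _ in range(generations):
        def select():
            # Tournament selection: keep the fitter of two random individuals.
            a, b = rng.sample(pop, 2)
            return a if fitness(a) >= fitness(b) else b
        offspring = []
        while len(offspring) < pop_size:
            p1, p2 = select(), select()
            cut = rng.randrange(1, n_bits)             # one-point crossover
            child = p1[:cut] + p2[cut:]
            child = [1 - g if rng.random() < p_mut else g for g in child]  # mutation
            offspring.append(child)
        pop = offspring
        best = max(pop + [best], key=fitness)          # elitist bookkeeping
    return best

# Toy fitness ("OneMax"): number of ones in the bit string.
best = genetic_algorithm(fitness=sum)
```

The same skeleton adapts to management problems by changing the encoding, the fitness function, and the variation operators.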
... A deterministic technique follows at most one direction towards the solution; otherwise, it terminates. A randomized or stochastic technique, by contrast, executes randomly and is not bound by deterministic constraints [3], [4]. Overall, stochastic techniques are classified into heuristics and metaheuristics. ...
Preprint
Full-text available
Note: This paper has been accepted by the journal Neural Computing and Applications. Recently, a metaheuristic algorithm, the Whale Optimization Algorithm (WOA), was proposed; the idea behind it comes from the hunting behavior of the humpback whale. However, WOA suffers from poor performance in the exploitation phase and stagnates in local best solutions. Grey Wolf Optimization (GWO) is a very competitive algorithm compared to other common metaheuristic algorithms, as it has superior performance in the exploitation phase when tested on unimodal benchmark functions. Therefore, the aim of this paper is to hybridize GWO with WOA to overcome these problems, since GWO can perform well in exploiting optimal solutions. In this paper, a hybridization of WOA with GWO, called WOAGWO, is presented. The proposed hybridized model consists of two steps. Firstly, the hunting mechanism of GWO is embedded into the WOA exploitation phase with a new condition related to GWO. Secondly, a new technique is added to the exploration phase to improve the solution after each iteration. Experiments are run on three different standard sets of benchmark functions: 23 common functions, 25 CEC2005 functions, and 10 CEC2019 functions. The proposed WOAGWO is also evaluated against the original WOA, GWO, and three other commonly used algorithms. Results show that WOAGWO outperforms the other algorithms according to the Wilcoxon rank-sum test. Finally, WOAGWO is likewise applied to solve an engineering problem, the pressure vessel design. The results show that WOAGWO achieves an optimum solution which is better than those of WOA and the Fitness Dependent Optimizer (FDO).
... More specifically, each individual explores a new strategy with a small probability and switches to it only if the new variant provides a larger payoff in the same environment. This is akin to the ideas of reinforcement learning (Izquierdo et al. 2012; Sandholm 2010) and stochastic hill climbing (Michalewicz and Fogel 2013). ...
Article
Full-text available
Social insects allocate their workforce in a decentralised fashion, addressing multiple tasks and responding effectively to environmental changes. This process is fundamental to their ecological success, but the mechanisms behind it are not well understood. While most models focus on internal and individual factors, empirical evidence highlights the importance of ecology and social interactions. To address this gap, we propose a game theoretical model of task allocation. Our main findings are twofold: Firstly, the specialisation emerging from self-organised task allocation can be largely determined by the ecology. Weakly specialised colonies in which all individuals perform more than one task emerge when foraging is cheap; in contrast, harsher environments with high foraging costs lead to strong specialisation in which each individual fully engages in a single task. Secondly, social interactions lead to important differences in dynamic environments. Colonies whose individuals rely on their own experience are predicted to be more flexible when dealing with change than colonies relying on social information. We also find that, counter to intuition, strongly specialised colonies may perform suboptimally, whereas the group performance of weakly specialised colonies approaches optimality. Our simulation results fully agree with the predictions of the mathematical model for the regions where the latter is analytically tractable. Our results are useful in framing relevant and important empirical questions, where ecology and interactions are key elements of hypotheses and predictions.
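The stochastic hill climbing referenced in the snippet above — try a random variant, switch only if it pays off more — can be sketched in a few lines. This is a toy illustration with invented names and a toy payoff, not the cited model's implementation:

```python
import random

def stochastic_hill_climb(f, x0, step=0.1, iters=1000, seed=1):
    """Repeatedly try a random neighbour; move only when it improves the payoff f."""
    rng = random.Random(seed)
    x, fx = x0, f(x0)
    for _ in range(iters):
        cand = x + rng.uniform(-step, step)   # explore a nearby variant
        fc = f(cand)
        if fc > fx:                           # switch only for a larger payoff
            x, fx = cand, fc
    return x, fx

# Maximize a concave toy payoff whose maximum sits at x = 2.
x, fx = stochastic_hill_climb(lambda x: -(x - 2.0) ** 2, x0=0.0)
```

The same accept-only-if-better rule is what makes the strategy prone to local optima on rugged payoff surfaces.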
... A well-known disadvantage of using an EA (and similar methods) on a problem of this complexity is the presence of local objective minima in the evolution solution process (Michalewicz & Fogel, 2013). The following shows that, for the efficient frontier problem, this phenomenon can be understood in terms of the EA solution being trapped on non-optimal (i.e. ...
Article
Full-text available
Real-world portfolio optimisation problems are often NP-hard, their efficient frontiers (EFs) in practice being calculated by randomised algorithms. In this work, a deterministic method of decomposition of EFs into a short sequence of sub-EFs is presented. These sub-EFs may be calculated by a quadratic programming algorithm, the collection of such sub-EFs then being subjected to a sifting process to produce the full EF. Full EFs of portfolio optimisation problems with small-cardinality constraints are computed to a high resolution, providing a fast and practical alternative to randomised algorithms. The method may also be used with other practical classes of portfolio problems, complete with differing measures of risk. Finally, it is shown that the identified sub-EFs correspond closely to local optima of the objective function of a case study evolutionary algorithm.
... It finds the minimum fabrication cost, considering four design variables and constraints on the shear stress, the bending stress in the beam, the buckling load on the bar, and the end deflection of the beam (Michalewicz & Fogel, 2004). The mathematical model has the form: ...
Article
Full-text available
A simple version of a Swarm Intelligence algorithm called bacterial foraging optimization algorithm with mutation and dynamic stepsize (BFOAM-DS) is proposed. The bacterial foraging algorithm has the ability to explore and exploit the search space through its chemotactic operator. However, premature convergence is a disadvantage. This proposal uses a mutation operator in a swim, similar to evolutionary algorithms, combined with a dynamic stepsize operator to improve its performance and allows a better balance between the exploration and exploitation of the search space. BFOAM-DS was tested in three well-known engineering design optimization problems. Results were analyzed with basic statistics and common measures for nature-inspired constrained optimization problems to evaluate the behavior of the swim with a mutation operator and the dynamic stepsize operator. Results were compared against a previous version of the proposed algorithm to conclude that BFOAM-DS is competitive and better than a previous version of the algorithm.
... There are around 1,000 distinct species of bats [7]; their sizes vary widely, from the tiny bumblebee bat to giant species. Echolocation is used to a different degree by each group of species: microbats use echolocation extensively, while megabats do not. ...
Article
Full-text available
There is a significant need to compress medical images for the purposes of communication and storage. Most currently available compression techniques produce an extremely high compression ratio with a high quality loss. In medical applications, the diagnostically significant regions (regions of interest) should have high image quality. Therefore, it is preferable to compress the regions of interest using lossless compression techniques, whilst the diagnostically less significant regions (non-interest regions) can be compressed using lossy compression techniques. In this paper, a hybrid technique of Set Partitioning in Hierarchical Trees (SPIHT) and the Bat-inspired algorithm is utilized for lossless compression of the region of interest, and the non-interest region is lossily compressed with the Discrete Cosine Transform (DCT) technique. The experimental results show that the proposed hybrid technique enhances the compression performance and ratio. Also, the utilization of DCT increases compression performance with low computational complexity.
Article
Full-text available
Problems with multiple interdependent components offer a better representation of the real-world situations where globally optimal solutions are preferred over optimal solutions for the individual components. One such model is the Travelling Thief Problem (TTP); while it may offer a better benchmarking alternative to the standard models, only one form of inter-component dependency is investigated. The goal of this paper is to study the impact of different models of dependency on the fitness landscape using performance prediction models (regression analysis). To conduct the analysis, we consider a generalised model of the TTP, where the dependencies between the two components of the problem are tunable through problem features. We use regression trees to predict the instance difficulty using an efficient memetic algorithm that is agnostic to the domain knowledge to avoid any bias. We report all the decision trees resulting from the regression model, which is the core in understanding the relationship between the dependencies (represented by the features) and problem difficulty (represented by the runtime). The regression model was able to predict the expected runtime of the algorithm based on the problem features. Furthermore, the results show that the contribution of the item value drop dependency is significantly higher than the velocity change dependency.
Article
Large classes of standard single-field slow-roll inflationary models consistent with the required number of e-folds, the current bounds on the spectral index of scalar perturbations, the tensor-to-scalar ratio, and the scale of inflation can be efficiently constructed using genetic algorithms. The setup is modular and can be easily adapted to include further phenomenological constraints. A semi-comprehensive search for sextic polynomial potentials results in ∼O(300,000) viable models for inflation. The analysis of this dataset reveals a preference for models with a tensor-to-scalar ratio in the range 0.0001 ≤ r ≤ 0.0004. We also consider potentials that involve cosine and exponential terms. In the last part we explore more complex methods of search relying on reinforcement learning and genetic programming. While reinforcement learning proves more difficult to use in this context, the genetic programming approach has the potential to uncover a multitude of viable inflationary models with new functional forms.
Article
Full-text available
Multi-hop data transmission in a Vehicular Ad Hoc Network (VANET) is mostly affected by vehicle mobility, intermittent connection, insufficient bandwidth, and multichannel switching. The geographic routing technique in a cognitive vehicular ad hoc network (CR-VANET) resolves bandwidth scarcity and connectivity issues simultaneously. The proposed QoS-aware stochastic relaxation approach (QASRA) is a geographic routing protocol that additionally performs network exploration under inappropriate connectivity and exploits already existing valid solutions while discovering routes in urban CR-VANET. The candidate forwarders are prioritized depending upon their closeness to the destination, their relative velocity with respect to the sender, and their street efficiency in terms of connectivity and delay. Transmission is done over minimally occupied cognitive or service channels. Different sets of experiments were performed to evaluate the effect of growing vehicular density, primary users (PUs), and CBR connection pairs in an urban scenario. The simulation on the NS-2.24 platform demonstrates that at higher velocities (between 20 and 60 km/h), the average packet delivery ratio (PDR) is 60% when the density of vehicles is altered, 63.6% when the PU count is changed, and 69% when the number of CBR connection pairs is varied. The average end-to-end delay is 1.03 s when the density of vehicles is altered, 0.734 s when the PU count is changed, and 0.756 s when the number of CBR connection pairs is varied. The average PU success ratio is 68.4% when the density of vehicles is changed, 61.4% when the PU count is changed, and 64.4% when the number of CBR connection pairs is varied. The analysis done through simulation demonstrates that successful delivery for both secondary and primary users is achieved in minimum time when compared with other traditional methods.
Article
Centralized wastewater treatment has been the favorite wastewater treatment strategy until a few decades ago, in order to exploit each possible scale economy. Nowadays, water stress and resource scarcity, due to population growth and climate change, call for water reuse and resource recovery, and these goals do not often find in centralization the best solution. Today, the reuse of reclaimed water can take place at different levels and represents an option of primary importance; therefore, in some cases, centralized systems may be economically and environmentally unsustainable for this additional purpose, and the search for the optimal infrastructure centralization degree must take into account these goals. This review analyzes studies that investigated the search of the best centralization level of wastewater collection and treatment, focusing on the methodologies applied to take the decision and highlighting strengths and weaknesses of the different approaches and how they have evolved over time. The final goal is to guide planners and decision-makers in choosing and handling the most suitable method to assess the centralization level of wastewater infrastructures, based on the objectives set out. The reviewed studies cover a period of twenty years. The differences found along this time span show an ongoing paradigm shift towards hybrid systems, which combine centralized and decentralized wastewater treatments that promote the storage of treated water and various forms of local water reuse and resource recovery. The protection of human health and the environment (which primarily promotes water reuse and resource recovery) has become the main challenge of wastewater treatment systems, that will presumably improve further their economic, social and environmental sustainability to achieve urban development in the context of the water-energy-food security nexus.
Article
Full-text available
Chemical reactors are employed to produce several materials, which are utilized in numerous applications. The wide use of these chemical engineering units shows their importance, as their performance vastly affects the production process. Thus, improving these units will improve the process and/or the manufactured material. Multi-objective optimization (MOO) with evolutionary algorithms (EAs) has been used to solve several complex real-world problems for improving the performance of chemical reactors with conflicting objectives. These objectives are of different natures, as they can be related to economy, environment, safety, energy, exergy, and/or the process itself. In this review, a brief description of MOO and EAs and their several types and applications is given. Then, MOO studies related to the production of materials via chemical reactors that were conducted with EAs are classified into different classes and discussed. The studies were classified according to the produced material into hydrogen and synthesis gas, petrochemicals and hydrocarbons, biochemicals, polymerization, and other general processes. Finally, some guidelines are given to help in deciding on future research.
Thesis
Full-text available
Population growth and urbanisation trends bring many consequences related to the increase in global energy consumption, CO2 emissions and a decrease in arable land per person. High‑rises have been one of the inevitable buildings of metropoles to provide extra floor space since the early examples in the 19th century. Therefore, optimisation of high-rise buildings has been the focus of researchers because of significant performance enhancement, mainly in energy consumption and generation. Based on the facts of the 21st century, optimising high-rise buildings for multiple vital resources (such as energy, food, and water) is necessary for a sustainable future. This research suggests “self-sufficient high-rise buildings” that can generate and efficiently consume vital resources in addition to dense habitation for sustainable living in metropoles. The complexity of self-sufficient high-rise building optimisation is more challenging than optimising regular high-rises that have not been addressed in the literature. The main challenge behind the research is the integration of multiple performance aspects of self-sufficiency related to the vital resources of human beings (energy, food, and water) and consideration of large numbers of design parameters related to these multiple performance aspects. Therefore, the dissertation presents a framework for performance optimisation of self-sufficient high-rise buildings using artificial intelligence focusing on the conceptual phase of the design process. The output of this dissertation supports decision-makers to suggest well-performing high-rise buildings involving the aspects of self sufficiency in a reasonable timeframe.
Article
Full-text available
One of the most promising technologies to be employed in beyond-5G networks is based on the concept of cell-free (CF) massive MIMO, in which a predetermined set of access points (APs) jointly cooperate in the data transmission and reception to/from the user equipment (UE). In order to efficiently manage radio resources, the CF central processing unit implements uplink power control policies. These policies aim to optimize a given network utility function. In this paper, we investigate the max-min fairness optimization problem, in which the spectral efficiency performance of the UE with the worst channel conditions is prioritized, taking into account a per-UE power constraint and assuming linear maximum ratio combining at the APs. Existing solutions have typically relied on second-order cone programming with convex approximations, which exhibits high computational complexity and scalability issues. Therefore, meta-heuristics (MHs) are explored as alternative optimization schemes, capable of providing (near-)optimal solutions with reasonable computational effort. Three MH approaches with different operating principles are compared. Numerical results show that the differential evolution algorithm exhibits the best trade-off between solution quality and run time, being able to reach (near-)optimal solutions faster than the bisection approach while coping with the scalability issues of the geometric-programming-based algorithm.
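The differential evolution algorithm highlighted above can be sketched in its classic DE/rand/1/bin form. The parameters, bounds, and toy sphere objective below are illustrative assumptions, not details of the cited power-control study:

```python
import random

def differential_evolution(f, bounds, pop=20, F=0.8, CR=0.9, iters=150, seed=6):
    """Classic DE/rand/1/bin minimization sketch with greedy selection."""
    rng = random.Random(seed)
    dim = len(bounds)
    X = [[rng.uniform(lo, hi) for lo, hi in bounds] for _ in range(pop)]
    fit = [f(x) for x in X]
    for _ in range(iters):
        for i in range(pop):
            # Mutation: difference of two random members added to a third.
            a, b, c = rng.sample([j for j in range(pop) if j != i], 3)
            jr = rng.randrange(dim)               # guaranteed crossover index
            trial = [X[a][d] + F * (X[b][d] - X[c][d])
                     if (rng.random() < CR or d == jr) else X[i][d]
                     for d in range(dim)]
            # Clip the trial vector back into the search bounds.
            trial = [min(max(v, bounds[d][0]), bounds[d][1])
                     for d, v in enumerate(trial)]
            ft = f(trial)
            if ft <= fit[i]:                      # greedy one-to-one replacement
                X[i], fit[i] = trial, ft
    j = min(range(pop), key=lambda i: fit[i])
    return X[j], fit[j]

# Toy objective: sphere function, global minimum 0 at the origin.
x, val = differential_evolution(lambda x: sum(v * v for v in x), [(-5, 5)] * 2)
```

Its favourable run-time/quality trade-off comes from this cheap vector-difference mutation combined with one-to-one greedy selection.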
Article
The immensity of the string landscape and the difficulty of identifying solutions that match the observed features of particle physics have raised serious questions about the predictive power of string theory. Modern methods of optimisation and search can, however, significantly improve the prospects of constructing the standard model in string theory. In this paper we scrutinise a corner of the heterotic string landscape consisting of compactifications on Calabi-Yau three-folds with monad bundles and show that genetic algorithms can be successfully used to generate anomaly-free supersymmetric SO(10) GUTs with three families of fermions that have the right ingredients to accommodate the standard model. We compare this method with reinforcement learning and find that the two methods have similar efficacy but somewhat complementary characteristics.
Article
Full-text available
Motivated by the shortage of existing research on the impacts of dangerously contagious diseases on firms' financial performance, this study sheds light on the impacts of the Coronavirus (Covid-19) outbreak on financial performance based on the quarterly data of 126 Chinese listed firms across 16 industries. Overall, the Covid-19 outbreak reduced Chinese listed firms' financial performance as proxied by the revenue growth rate, ROA, ROE, and asset turnover. The outbreak's negative effects on Chinese firms' profitability were much smaller than those on their revenue growth rates. While the outbreak's negative effects on the financial performance of Chinese listed firms were bigger for those seriously affected by the pandemic, such as airlines, travel, and entertainment (ATE), the pandemic's effects were positive for the medicine industry. Meanwhile, Chinese listed firms located in high-risk regions suffered a bigger financial loss during the outbreak, and in particular there was a strong Hubei effect. Corporate culture and CSR moderated the inverse relationship between the outbreak and Chinese firms' financial performance. The findings of this study contribute to enriching the existing literature on the impacts of the Covid-19 outbreak on firms' financial performance worldwide and suggest helpful practical and theoretical implications.
Article
Full-text available
Time series forecasting supports decision-making in several areas such as marketing, economics, and industry; its main purpose is to estimate the future behavior of a sequence of observations. In this sense, hybrid ensembles, which combine machine learning and statistical models, have proven efficient for forecasting time series. However, the correct selection of the models and of the combination in an ensemble is important to ensure the system's performance. This work therefore proposes and compares different hybrid ensemble approaches to improve time series forecasting. A set of forecasting models and different combination strategies are proposed to handle time series with different patterns. The first ensemble approach combines a set of accurate models using four combination strategies. The second ensemble approach automatically selects the models and the combination using metaheuristics. The approaches are compared using a new dataset from a cosmetics distribution company and a public dataset. The results show that the proposed ensembles are efficient at reducing forecasting error.
Article
Full-text available
This paper addresses the capacitated multi-item dynamic lot-sizing problem with stochastic production and setup times. By considering the case where all times are stochastic, production plans that are both efficient and reliable are obtained. The objective of the problem is to minimize the total cost, which consists of classical production costs and overtime costs. The classical costs arise from production, setup, and inventory holding. The overtime costs arise from using the machine beyond its time capacity. First, a procedure is proposed that exactly computes the expected overtime for a given production and setup plan. To solve the problem efficiently, a solution approach based on tabu search is developed. This approach consists of three stages: initialization, improvement, and planning. In the first stage of the algorithm, an initialization method that generates feasible plans is proposed. The plans found are then improved with the tabu search method proposed in the paper. In the planning stage, a linear programming model is developed to improve the solutions found by the local search method. The performance of our solution method is validated using lower bounds published in the literature. Moreover, the results show that our tabu search method performs well, obtaining very good solutions in reasonable times.
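Tabu search, central to the study above, keeps a short-term memory of recent moves so the search can leave a local optimum without immediately cycling back into it. A minimal bit-flip sketch on a toy objective (illustrative only, not the lot-sizing implementation from the study):

```python
from collections import deque
import random

def tabu_search(f, n, tenure=5, iters=200, seed=2):
    """Bit-flip tabu search minimizing f; recently flipped bits are tabu."""
    rng = random.Random(seed)
    x = [rng.randint(0, 1) for _ in range(n)]
    best, best_val = x[:], f(x)
    tabu = deque(maxlen=tenure)            # short-term memory of flipped positions
    for _ in range(iters):
        # Evaluate all single-bit flips; skip tabu moves unless they beat the
        # best solution found so far (the classic aspiration criterion).
        moves = []
        for i in range(n):
            y = x[:]
            y[i] ^= 1
            v = f(y)
            if i not in tabu or v < best_val:
                moves.append((v, i, y))
        v, i, y = min(moves)               # take the best admissible move
        x = y
        tabu.append(i)                     # make the reverse move tabu for a while
        if v < best_val:
            best, best_val = y[:], v
    return best, best_val

# Toy objective: squared distance from a target bit pattern.
target = [1, 0] * 5
best, val = tabu_search(lambda b: sum((a - t) ** 2 for a, t in zip(b, target)), n=10)
```

Note that the best admissible move is taken even when it worsens the current solution; that, plus the tabu list, is what lets the method escape local optima.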
Article
Full-text available
Optimization problems are ubiquitous in scientific research, engineering, and daily life. However, solving a complex optimization problem often requires excessive computing resources and time, and faces the challenge of easily getting trapped in local optima. Here, we propose memristive optimizer hardware based on a Hopfield network, which introduces transient chaos into simulated annealing to aid in jumping out of local optima while ensuring convergence. A single memristor crossbar is used to store the weight parameters of a fully connected Hopfield network and adjust the network dynamics in situ. Furthermore, we harness the intrinsic nonlinearity of memristors within the crossbar to implement an efficient and simplified annealing process for the optimization. Solutions of continuous function optimizations on the sphere function and Matyas function, as well as combinatorial optimization on the Max-Cut problem, are experimentally demonstrated, indicating the great potential of the transiently chaotic memristive network for solving optimization problems in general.
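The simulated-annealing baseline that such hardware accelerates can be sketched in software: accept improving moves always, and worsening moves with a probability that decays as the temperature cools. The toy Max-Cut instance, parameters, and geometric cooling schedule below are illustrative assumptions, not the paper's memristive implementation:

```python
import math
import random

def simulated_annealing_maxcut(edges, n, t0=2.0, alpha=0.99, iters=2000, seed=3):
    """Plain simulated annealing for Max-Cut: flip one node's side per step."""
    rng = random.Random(seed)
    side = [rng.randint(0, 1) for _ in range(n)]
    cut = lambda s: sum(1 for u, v in edges if s[u] != s[v])
    t = t0
    val = cut(side)
    best, best_val = side[:], val
    for _ in range(iters):
        i = rng.randrange(n)
        side[i] ^= 1                       # propose flipping one node's side
        new_val = cut(side)
        # Accept improvements always; worsenings with Boltzmann probability.
        if new_val >= val or rng.random() < math.exp((new_val - val) / t):
            val = new_val
            if val > best_val:
                best, best_val = side[:], val
        else:
            side[i] ^= 1                   # reject: undo the flip
        t *= alpha                         # geometric cooling schedule
    return best, best_val

# 4-cycle graph: the maximum cut crosses all 4 edges (alternating sides).
edges = [(0, 1), (1, 2), (2, 3), (3, 0)]
best, val = simulated_annealing_maxcut(edges, n=4)
```

The paper's contribution is replacing this slow stochastic acceptance step with transiently chaotic analog dynamics in the memristor crossbar.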
Article
Full-text available
Recently, a metaheuristic algorithm, the Whale Optimization Algorithm (WOA), was proposed; the idea behind it comes from the hunting behavior of the humpback whale. However, WOA suffers from poor performance in the exploitation phase and stagnates in local best solutions. Grey Wolf Optimization (GWO) is a very competitive algorithm compared to other common metaheuristic algorithms, as it has superior performance in the exploitation phase when tested on unimodal benchmark functions. Therefore, the aim of this paper is to hybridize GWO with WOA to overcome these problems, since GWO can perform well in exploiting optimal solutions. In this paper, a hybridization of WOA with GWO, called WOAGWO, is presented. The proposed hybridized model consists of two steps. Firstly, the hunting mechanism of GWO is embedded into the WOA exploitation phase with a new condition related to GWO. Secondly, a new technique is added to the exploration phase to improve the solution after each iteration. Experiments are run on three different standard sets of benchmark functions: 23 common functions, 25 CEC2005 functions, and 10 CEC2019 functions. The proposed WOAGWO is also evaluated against the original WOA, GWO, and three other commonly used algorithms. Results show that WOAGWO outperforms the other algorithms according to the Wilcoxon rank-sum test. Finally, WOAGWO is likewise applied to solve an engineering problem, the pressure vessel design. The results prove that WOAGWO achieves an optimum solution which is better than those of WOA and the Fitness Dependent Optimizer (FDO).
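The exploration and exploitation phases discussed above can be illustrated with a minimal sketch of the standard WOA moves (encircling the best solution, random search when the coefficient is large, and the spiral bubble-net move). This is an illustrative baseline under our own simplifications, not the proposed WOAGWO hybrid:

```python
import math
import random

def woa(f, dim=2, pop=20, iters=200, seed=5):
    """Minimal Whale Optimization Algorithm sketch (encircling + spiral moves)."""
    rng = random.Random(seed)
    X = [[rng.uniform(-5, 5) for _ in range(dim)] for _ in range(pop)]
    best = min(X, key=f)
    for t in range(iters):
        a = 2 - 2 * t / iters                 # control parameter decreases 2 -> 0
        for i in range(pop):
            if rng.random() < 0.5:            # encircling / random-search branch
                A = [2 * a * rng.random() - a for _ in range(dim)]
                C = [2 * rng.random() for _ in range(dim)]
                # Exploit around the best whale when |A| < 1; otherwise
                # explore around a randomly chosen whale (may be X[i] itself).
                ref = best if abs(A[0]) < 1 else rng.choice(X)
                X[i] = [ref[d] - A[d] * abs(C[d] * ref[d] - X[i][d])
                        for d in range(dim)]
            else:                             # spiral bubble-net branch
                l = rng.uniform(-1, 1)
                X[i] = [abs(best[d] - X[i][d]) * math.exp(l)
                        * math.cos(2 * math.pi * l) + best[d]
                        for d in range(dim)]
        best = min(X + [best], key=f)         # elitist bookkeeping
    return best, f(best)

# Toy objective: sphere function, global minimum 0 at the origin.
best, val = woa(lambda x: sum(v * v for v in x))
```

The hybrid in the paper replaces part of the exploitation branch with GWO's hunting mechanism; the sketch shows where such a substitution would occur.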
Chapter
This chapter is devoted to single- and multi-objective optimization problems. It contains formulations of diverse mechanical and thermo-mechanical problems as well as practical applications of optimal design for the problems considered. Shape and topology optimization for various types of problems is considered. Several objectives for optimization problems are proposed, formulated, and implemented. Bio-inspired algorithms and hybrid algorithms coupled with the FEM or BEM are used in numerical examples. The formulation and solution of sample problems for linear elastic and nonlinear elastoplastic materials, composites, and structures with cracks are described in detail. Optimization problems with more than one criterion are presented in the context of optimization for coupled-field problems.
Article
Full-text available
Multimodal optimization is still one of the most challenging tasks in the evolutionary computation field when multiple global and local optima need to be located effectively and efficiently. In this paper, a niching particle swarm optimization (PSO) based on Euclidean distance and hierarchical clustering (EDHC) for multimodal optimization is proposed. This technique first uses the Euclidean-distance-based PSO algorithm to perform a preliminary search. In this phase, the particles are rapidly clustered around peaks. Secondly, hierarchical clustering is applied to identify and concentrate the particles distributed around each peak so that they finely search as a whole. Finally, a small-world network topology is adopted in each niche to improve the exploitation ability of the algorithm. At the end of this paper, the proposed EDHC-PSO algorithm is applied to the Traveling Salesman Problem (TSP) after being discretized. The experiments demonstrate that the proposed method outperforms existing niching techniques on benchmark problems and is effective for the TSP.
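The plain global-best PSO that the EDHC technique builds on can be sketched as follows; this baseline omits the niching, hierarchical clustering, and small-world topology contributed by the paper, and all parameters are illustrative:

```python
import random

def pso(f, dim=2, swarm=20, iters=200, w=0.7, c1=1.5, c2=1.5, seed=4):
    """Minimal global-best PSO minimizing f over [-5, 5]^dim."""
    rng = random.Random(seed)
    X = [[rng.uniform(-5, 5) for _ in range(dim)] for _ in range(swarm)]
    V = [[0.0] * dim for _ in range(swarm)]
    P = [x[:] for x in X]                      # personal best positions
    pbest = [f(x) for x in X]
    g = min(range(swarm), key=lambda i: pbest[i])
    G, gbest = P[g][:], pbest[g]               # global best position and value
    for _ in range(iters):
        for i in range(swarm):
            for d in range(dim):
                # Velocity: inertia + cognitive pull + social pull.
                V[i][d] = (w * V[i][d]
                           + c1 * rng.random() * (P[i][d] - X[i][d])
                           + c2 * rng.random() * (G[d] - X[i][d]))
                X[i][d] += V[i][d]
            fx = f(X[i])
            if fx < pbest[i]:
                P[i], pbest[i] = X[i][:], fx
                if fx < gbest:
                    G, gbest = X[i][:], fx
    return G, gbest

# Toy objective: sphere function, global minimum 0 at the origin.
G, gbest = pso(lambda x: sum(v * v for v in x))
```

Because every particle is pulled toward one shared global best, this baseline collapses onto a single optimum; niching variants such as EDHC-PSO partition the swarm precisely to avoid that collapse on multimodal landscapes.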