Article

Genetic Algorithms In Search, Optimization, and Machine Learning

Author: David E. Goldberg

... We observe the population of 100 organisms (i.e., chromosomes) during a time interval within which the relative fitnesses of the 2^5 imaginable chromosome variants (types) do not change. We use a simple model, fitness rule and evolutionary simulation algorithm from one of the first books from the pioneer epoch of genetic algorithms (GA) [8], and its implementation in C (SGA-C). The model is for discrete generations that do not overlap. ...
... 31 = n_max. In the model, the relative fitness [8], or viability [9], of each genome is (n/n_max)^10; the book gives some justification for this choice. Thus, the highest fitness is assigned to the haplotype 11111 and the lowest fitness to the haplotype 00000. ...
... Furthermore, the fitness model sets up a strong selection-pressure gradient, increasing from left (least significant bit) to right (most significant bit): substitutions of nucleotides (bits) at the right end will have dramatic effects on fitness, while the effects of substitutions at the left end will be far smaller. The algorithm takes care of updating or 'renormalizing' at each (non-overlapping) generation in such a way that the total population size of the following generation is again 100 ([8]; see also [9]). ...
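As a rough illustration of the model these excerpts describe, the sketch below implements the (n/n_max)^10 fitness rule and the per-generation renormalization to a population of 100 for 5-bit chromosomes. It is a minimal reconstruction under those stated assumptions, not the SGA-C code itself; the fitness-proportionate sampling is one plausible choice, and crossover and mutation are omitted.

```python
import random

N_BITS = 5
POP_SIZE = 100
N_MAX = 2 ** N_BITS - 1  # 31, the all-ones haplotype 11111

def fitness(bits):
    # Relative fitness (n / n_max)^10: n is the integer value of the
    # bitstring, so 11111 gets fitness 1.0 and 00000 gets 0.0.
    n = int("".join(map(str, bits)), 2)
    return (n / N_MAX) ** 10

def next_generation(pop):
    # Non-overlapping generations: draw 100 offspring with probability
    # proportional to fitness, 'renormalizing' the population to 100.
    weights = [fitness(ind) + 1e-12 for ind in pop]  # avoid all-zero weights
    return random.choices(pop, weights=weights, k=POP_SIZE)

pop = [[random.randint(0, 1) for _ in range(N_BITS)] for _ in range(POP_SIZE)]
for _ in range(50):
    pop = next_generation(pop)
```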
Preprint
Full-text available
A haplotype block, or simply a block, is a chromosomal segment, DNA base sequence or string that occurs in only a few variants or types in the genomes of a population of interest, and that has an encapsulated or 'private' frequency distribution of the string types that is not shared by neighbouring blocks or regions on the same chromosome. We consider two inverse problems of genetic interest: from just the frequencies of the symbol types (4 base types, possible single-base alleles) at each position (point, base/nucleotide) along the string, infer the location of the left and right boundaries of the block (block extent), and the number and relative frequencies of the string types occurring in the block (block structure). The large majority of variable positions in human and also other (e.g., fungal) genomes appear to be biallelic, i.e., the position allows only a choice between two possible symbols. The symbols can then be encoded as 0 (major) and 1 (minor), or as $\uparrow$ and $\downarrow$ as in Ising models, so the scenario reduces to problems on Boolean strings/bitstrings and Boolean matrices. The specifying of major allele frequencies (MAF) as used in genetics fits naturally into this framework. A simple example from human chromosome 9 is presented.
... In a typical approach, each chromosome is assigned a probability of reproduction, P_i, i = 1, 2, ..., so that its likelihood of being selected is proportional to its fitness relative to the other chromosomes in the population. If the fitness of each chromosome is a strictly positive number to be maximized, this is often accomplished using roulette-wheel selection [15]. Successive trials were conducted in which a chromosome was selected until all available positions were filled. ...
... Coding design for a particular problem is key to using GAs effectively. There are two basic principles for designing a GA coding [15]: the user should select a coding so that short, low-order schemata are relevant to the underlying problem and relatively unrelated to schemata over other fixed positions, and the user should select the smallest alphabet that permits a natural expression of the problem. ...
... The reproduction operator can be implemented in algorithmic form in a number of ways. In this study, we consider the simplest of these, the roulette wheel [15]. ...
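The roulette-wheel scheme referenced in these excerpts can be sketched as follows; the function names and the mating-pool usage are illustrative, not taken from [15].

```python
import random

def roulette_wheel_select(population, fitnesses):
    # Fitness-proportionate (roulette-wheel) selection: each chromosome's
    # chance of being picked is its fitness divided by the population's
    # total fitness; all fitness values are assumed strictly positive.
    total = sum(fitnesses)
    pick = random.uniform(0.0, total)
    running = 0.0
    for individual, fit in zip(population, fitnesses):
        running += fit
        if running >= pick:
            return individual
    return population[-1]  # guard against floating-point round-off

# Successive trials: select until all positions in the mating pool are filled.
# pool = [roulette_wheel_select(pop, fits) for _ in range(len(pop))]
```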
Preprint
Full-text available
Modeling the reaction time required for the removal of chemical oxygen demand (COD) from wastewater is important because it can be an indicator of harmful pollutants that may pose risks to human health. Here, we evaluate the use of an inverse artificial neural network (ANNi) to optimize the removal of COD from aqueous herbicide solutions using sonophotocatalysis as the treatment process. This algorithm takes the operating conditions as input and estimates the optimal reaction time to reach the required effluent COD. The model was validated by comparison with both experimental measurements and simulated analysis, resulting in low error (0.5%), high Pearson correlation (R² = 0.9956), and short computing time (less than one minute). The proposed methodology (ANNi-GAs) proved efficient, providing a satisfactory picture of the relationships between the optimal input parameters. Such parameters control the degradation process of herbicides online and can be used to predict the behavior of COD removal, enabling actions that reduce the impact of aqueous alazine and gesaprim herbicide treatment on aquatic biota, soil biota, and human health. Therefore, this technique constitutes a promising framework for predicting the environmental impact of herbicides within a tolerable degree of error.
... The first study on genetic algorithms in the literature is Holland's work on machine learning. Goldberg's [2] subsequent work on the control of gas pipelines, influenced by Holland's, proved that genetic algorithms can have practical uses. ...
... However, this method is not used directly in genetic algorithms because of the condition that fitness values be positive. A minimization problem solved by genetic algorithms is transformed into a maximization problem by subtracting the objective function value from a large constant determined at the outset [2]. ...
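A compact version of this transformation, with the constant (here c_max) as a hypothetical user-chosen bound:

```python
def to_maximization(objective, c_max):
    # F(x) = C_max - f(x): turns a minimization objective into a
    # non-negative fitness, with C_max a large constant fixed at the
    # start; clipping at zero keeps fitness valid if f(x) exceeds C_max.
    def fitness(x):
        return max(c_max - objective(x), 0.0)
    return fitness

# Example: minimizing f(x) = x^2 becomes maximizing F(x) = 1000 - x^2.
f = to_maximization(lambda x: x * x, 1000.0)
```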
... It is not possible to produce such a chromosome with the crossover operator alone. In this case, the gene at a randomly selected site on the chromosome is modified, altering the chromosomes that make up the population at a certain rate [2]. It is recommended to keep the mutation rate between 0.0001 and 0.05 [12]. ...
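A minimal sketch of such a bitwise mutation operator, using a rate within the recommended range (the default of 0.01 is an illustrative choice):

```python
import random

def mutate(bits, rate=0.01):
    # Flip each gene independently with a small probability; the excerpt
    # recommends mutation rates between 0.0001 and 0.05.
    return [1 - b if random.random() < rate else b for b in bits]
```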
Article
Full-text available
Genetic algorithms, a stochastic search method, emerged by adapting the development process of biological systems to the computer environment. Operations carried out in genetic algorithms are performed on units stored in computer memory, similar to natural populations. Today, many linear and nonlinear methods have been developed for the solution of optimization problems. Since genetic algorithms are heuristic, they may not find the optimum result for a given problem. However, they give values very close to the optimum for problems that cannot be solved by known methods or whose solution time increases exponentially with problem size. Genetic algorithms were initially applied to nonlinear optimization problems. In this study, a genetic algorithm was applied to a single-span beam with a gap in its body. In applying the genetic algorithm to these problems, back-controlled selection, randomly mixed crossover, and double-sensitivity mutation operators, together with a backward-controlled stopping criterion, were developed and used. The developed genetic algorithm operators were then applied to very large beam problems; these beams' dimensions were very large, but they were not deep beams according to ACI 318-95 rules. Keywords: Deep beams; genetic algorithms; reinforcement diameters; selection operator.
... GA's adaptability in adjusting search strategies reduces the overall optimization time for LSTM parameters. Because it is independent of gradients, GA is particularly useful for non-differentiable functions and well-suited to optimizing complex time series models [62]. The procedure commences with the initialization of a pool of potential solutions, each corresponding to a different set of hyperparameters of the LSTM network. ...
... The offspring, along with some of the initial solutions, form the next-generation population that replaces the previous one. This iterative process continues for a limited number of generations or until a specific termination condition is met, such as reaching the maximum number of iterations or meeting the convergence criteria [62]. After training and validating the LSTM-GA model, the prediction step is conducted at various time spans, including 1-hour, 2-hour, and 3-hour intervals. ...
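A schematic of such a GA-driven hyperparameter search might look like the following; the search space, the rates, and the score function are placeholders, not the paper's actual configuration.

```python
import random

SEARCH_SPACE = {                      # hypothetical hyperparameter ranges
    "units": [32, 64, 128, 256],
    "layers": [1, 2, 3],
    "learning_rate": [1e-4, 1e-3, 1e-2],
    "dropout": [0.0, 0.2, 0.4],
}

def random_individual():
    # One candidate solution = one hyperparameter set for the LSTM.
    return {k: random.choice(v) for k, v in SEARCH_SPACE.items()}

def evolve(score, pop_size=20, generations=10):
    # score(individual) -> validation fitness; higher is better.
    pop = [random_individual() for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=score, reverse=True)
        elite = pop[: pop_size // 2]          # survivors
        children = []
        while len(elite) + len(children) < pop_size:
            a, b = random.sample(elite, 2)    # parents
            child = {k: random.choice([a[k], b[k]]) for k in SEARCH_SPACE}
            if random.random() < 0.1:         # occasional mutation
                k = random.choice(list(SEARCH_SPACE))
                child[k] = random.choice(SEARCH_SPACE[k])
            children.append(child)
        pop = elite + children                # next generation
    return max(pop, key=score)
```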
Article
Full-text available
This paper introduces an innovative ensemble troposphere conditions forecasting method using wet refractivity within the context of Global Navigation Satellite System (GNSS) troposphere tomography. Current models lack coverage of diverse geographical locations and weather conditions, and they do not utilize high-spatial-resolution tropospheric data to cover a large area. Moreover, their deterministic prediction mode may introduce high uncertainty into the results. This paper leverages Long Short-Term Memory (LSTM) networks and Genetic Algorithms (GA) to optimize hyperparameters, enabling the prediction of Three-Dimensional (3-D) wet refractivity fields for ensemble forecasting under various weather conditions, including rain bands sweeping across Poland and a storm in California. A comparison of the 3-hour predictions with Weather Research and Forecasting (WRF) model outputs at levels below 3000 m shows Root Mean Squared Error (RMSE) values of 4.15 and 3.18 ppm for Poland and California, respectively. After utilizing a Generative Adversarial Network (GAN) to produce realistic time series, ensemble forecasting is conducted. The model demonstrates exceptional accuracy in both regions, yielding an optimal threshold of 0.41, the point at which the balance between True Positive (TP) and True Negative (TN) instances is optimized, achieving a sensitivity of 0.967 and precision of 0.973 in Poland. Additionally, it achieves an optimal threshold of 0.52, yielding a sensitivity of 0.982 and precision of 0.993 in California. The low False Positive Rates (FPR) of 0.027 in Poland and 0.011 in California underscore the adaptability and reliability of the model across diverse datasets.
... In the crossover process, two parent chromosomes mate to create better offspring chromosome(s), whereas the mutation process randomly alters some of the genetic material in the chromosomes. Among these processes, crossover is the most important for designing and implementing GAs (Goldberg, 1989). Conveniently, any crossover technique constructed for the TSP can be used for the GTSP with some modifications. ...
... There are numerous selection methods available in the literature. The roulette wheel selection (RWS) (Goldberg, 1989) method, using the fitness-proportional rule, is applied in our GAs. ...
Article
Full-text available
Here, we consider the generalized travelling salesman problem (GTSP), which is a generalization of the travelling salesman problem (TSP). This problem has several real-life applications. Since the problem is complex and NP-hard, solving it by exact methods is very difficult. Therefore, researchers have applied several heuristic algorithms to solve this problem. We propose the application of genetic algorithms (GAs) to obtain a solution. In a GA, three operators (selection, crossover, and mutation) are successively applied to a group of chromosomes to obtain a solution to an optimization problem. The crossover operator is applied to create better offspring and thus to converge the population, and the mutation operator is applied to explore areas that cannot be explored by the crossover operator and thus to diversify the search space. All the crossover and mutation operators developed for the TSP can be used for the GTSP with some modifications. A good combination of these two operators can create a very good GA for obtaining optimal solutions to GTSP instances. Therefore, four crossover and three mutation operators are used here to develop GAs for solving the GTSP. The GAs are then compared on several benchmark GTSPLIB instances. Our experiments show the effectiveness of the sequential constructive crossover operator combined with the insertion mutation operator for this problem.
... GA was initially proposed by John H. Holland in 1975, but its widespread recognition came with the publication of "Adaptation in Natural and Artificial Systems" in 1992 (Holland, 1992). In 1989, David E. Goldberg translated this theoretical framework into code, sparking the vibrant development and application of GA (Goldberg, 1989). Drawing inspiration from evolutionary biology, GA operates by selecting different chromosomes from parent generations for crossover and mutation. ...
Preprint
Full-text available
Due to the increasing demand for hyperspectral image (HSI) classification, there is a need for improvements and enhancements to achieve more accurate and cost-effective results. Image processing plays a significant role in HSI classification, primarily used for image smoothing and denoising. Filtering, a popular method in image processing, is typically based on mathematical equations. However, in this study, filtering is treated as an optimization problem to provide a novel filter for HSI processing and classification. An optimized filter (OF) was generated and optimized using genetic algorithm (GA) based on the Pavia University (PU) dataset, which preprocessed using Minimum Noise Fraction (MNF). Subsequently, the OF was applied to HSI classification for three datasets using Extreme Gradient Boosting (XGB). The results were compared with median filter (MF) and Gaussian filter (GF). The findings demonstrated that, in comparison to MF and GF, OF exhibited the strongest enhancement and achieved the highest accuracy in most situations, including different sampling scenarios for various datasets. Moreover, OF demonstrated excellent performance in aiding HSI classification, especially in classes with a higher number of samples. The study's outcomes highlight the feasibility of generating a filter specifically for HSI processing and classification using GA, which is deemed acceptable and effective. Based on the results, filtering has evolved into an optimization problem, expanding beyond being solely a mathematical problem. Filters can now be generated and optimized based on the goals and requirements of image-related tasks, extending beyond HSI applications.
... This operator functions by inverting the genetic values at specific loci within chromosomes, doing so with a predefined likelihood. Algorithms based on GA incorporate a variety of mutation operators, including but not limited to boundary, uniform, nonuniform, directional, and Gaussian mutations [47]. ...
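Of the mutation variants listed, the Gaussian one is easy to sketch; the sigma, rate, and bounds below are illustrative defaults, not values from [47].

```python
import random

def gaussian_mutation(genes, sigma=0.1, rate=0.1, bounds=(0.0, 1.0)):
    # Perturb each real-valued gene with zero-mean Gaussian noise at a
    # predefined likelihood, clipping the result to the feasible interval.
    lo, hi = bounds
    return [min(hi, max(lo, g + random.gauss(0.0, sigma)))
            if random.random() < rate else g
            for g in genes]
```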
Article
Full-text available
With the growth of online networks, understanding the intricate structure of communities has become vital. Traditional community detection algorithms, while effective to an extent, often fall short in complex systems. This study introduced a meta-heuristic approach for community detection that leveraged a memetic algorithm, combining genetic algorithms (GA) with the stochastic hill climbing (SHC) algorithm as a local optimization method to enhance modularity scores, which was a measure of the strength of community structure within a network. We conducted comprehensive experiments on five social network datasets (Zachary's Karate Club, Dolphin Social Network, Books About U.S. Politics, American College Football, and the Jazz Club Dataset). Also, we executed an ablation study based on modularity and convergence speed to determine the efficiency of local search. Our method outperformed other GA-based community detection methods, delivering higher maximum and average modularity scores, indicative of a superior detection of community structures. The effectiveness of local search was notable in its ability to accelerate convergence toward the global optimum. Our results not only demonstrated the algorithm's robustness across different network complexities but also underscored the significance of local search in achieving consistent and reliable modularity scores in community detection.
... Developed by pioneering studies such as Holland (1975), Holland (1987) and Goldberg (1989), among others, a GA mimics the process of natural selection; consequently, its vocabulary borrows extensively from the theory of evolution. A typical GA starts by encoding an array of randomly selected candidate solutions in binary form, assimilated to chromosomes, each a member of a larger population. ...
Article
Full-text available
We address the classical errors-in-variables (EIV) problem in multivariate linear regression with N dependent variables, where each left-hand-side variable is a function of a common predictor X subject to measurement error. Our contribution consists in employing the remaining N−1 regressions as extra information to obtain a filtered version of the mismeasured series X. We test the performance of our approach using simulations in which we control for different cases such as low vs. high R² models, small vs. large samples, and small vs. large measurement error variances. The results suggest that the multivariate-Compact Genetic Algorithm (mCGA) approach yields estimates with lower mean-square errors (MSEs). The MSEs decrease as the number of dependent variables increases. When there is no measurement error, our method gives results similar to those that would have been obtained by ordinary least squares.
... The GA is a widely used optimization technique known for its ability to minimize the cost function by creating diverse chromosomes using crossover and mutation functions (Goldberg, 1989). Each chromosome consists of genes that represent a combination of input variables, selected based on probability from a population of chromosomes. ...
Article
Full-text available
Yarn is a fundamental element in most textile products. Among various yarn manufacturing methods, the ring spinning system is particularly important due to its benefits, such as high yarn quality, evenness, low hairiness, and ease of handling. The parameters of the drafting zone in this system greatly impact yarn quality. Typically, adjusting these parameters by trial and error is time-consuming and costly. This study introduces an algorithmic approach using response surface methodology (RSM), experimental modeling, and multi-objective optimization to decrease the unevenness percentage (U%) and imperfection index (IPI). The input parameters optimized include the cots hardness of the front and back top rollers, spacer size, and break draft. Results showed that the artificial neural network (ANN) predicts the response parameters better, with a determination coefficient close to 1, than RSM, whose determination coefficient is about 0.72. Therefore, the ANN was chosen for optimization. Additionally, combining the genetic algorithm (GA) with the two ANN-based models reduced IPI from 39 to 33.67 and U% from 9.73% to 9.67%. The final input settings were a front-roller cots hardness of 70 Shore, a back-roller cots hardness of 76 Shore, a spacer size of 2.8 mm, and a break draft of 1.26. This method efficiently optimizes the drafting zone parameters, thus enhancing yarn quality.
... Finally, the LAA method was compared to the widely used genetic algorithm (GA) based optimization method for robustness and stability [35]. The LAA provides an operational cost of $192,152.8623
Article
Full-text available
This paper presents an improved optimization algorithm for the energy management of a renewable energy solar/wind microgrid with multiple diesel generators applied to off-grid remote communities. The main objective is to solve the economic emission dispatch problem with a price penalty factor to minimize the energy cost and the emission level. An enhanced metaheuristic optimization algorithm, the Lévy arithmetic algorithm, is applied to improve the search for the optimal solution compared to the conventional arithmetic algorithm. The Lévy arithmetic method is used for the management of the microgrid and compared to other metaheuristic optimization algorithms for the same application. Comparative analysis demonstrates good cost savings using the Lévy arithmetic algorithm compared to other optimization algorithms such as the arithmetic algorithm, crow search algorithm, hybrid modified grey wolf algorithm, interior search algorithm, cuckoo search algorithm, particle swarm algorithm, colony algorithm, and genetic algorithm.
... One of the most popular probabilistic selection methods, used frequently in optimization algorithms, especially the genetic algorithm [39,40], is roulette wheel selection. Based on their fitness values, the groups will occupy a specific wheel slice, and selection can occur by turning the wheel and checking the location of the pointer when it stops (see Figure 2(a)). ...
Article
Full-text available
In this study, a novel damage detection framework for skeletal structures is presented. The introduced scheme is based on the optimization-based model updating method. A new multipopulation framework, the Famine Algorithm, is introduced that aims to reduce the number of objective function evaluations needed. Furthermore, using static displacement patterns, a damage-sensitive feature named pseudo-kinetic energy is presented. By exploiting the new feature, an efficient cost function is developed. Two mathematical benchmark problems and a two-member truss damage detection problem are depicted in 2D space to track the search behavior of the Famine Algorithm and show the changes in the search space when using the new feature. Four numerical examples, including three trusses and a frame structure, are used to evaluate the overall performance of the proposed damage detection methods. Moreover, an experimental shear frame is studied to test the performance of the suggested method in real-life problems. The obtained results of the examples reveal that the proposed method can identify and quantify the damaged elements accurately by utilizing only the first five vibrating modes, even in noise-contaminated conditions.
... When it comes to optimization methods, the choice of algorithm is important. Genetic algorithms, a type of evolutionary algorithm, are commonly employed for MDO tasks due to their ability to search for optimal solutions in complex scenarios [22]. Inspired by natural selection and evolution, genetic algorithms excel in handling discontinuous, non-differentiable, or complex objective functions [23]. ...
... Particles are weighted according to estimated segmentation masks and resampled accordingly. We utilize roulette wheel sampling for particle resampling, a technique that incorporates elements from evolutionary computation methodologies [9]. ...
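The roulette-wheel resampling the authors mention maps directly onto fitness-proportionate selection in evolutionary computation; a minimal sketch (function and variable names are illustrative, not from [9]):

```python
import random

def resample(particles, weights):
    # Roulette-wheel (multinomial) resampling: each particle survives
    # with probability proportional to its weight, mirroring
    # fitness-proportionate selection in evolutionary computation.
    return random.choices(particles, weights=weights, k=len(particles))
```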
Preprint
Full-text available
Easily accessible sensors, like drones with diverse onboard sensors, have greatly expanded the study of animal behavior in natural environments. Yet analyzing vast, unlabeled video data, often spanning hours, remains a challenge for machine learning, especially in computer vision. Existing approaches often analyze only a few frames. Our focus is on long-term animal behavior analysis. To address this challenge, we utilize classical probabilistic methods for state estimation, such as particle filtering. By incorporating recent advancements in semantic object segmentation, we enable continuous tracking of rapidly evolving object formations, even in scenarios with limited data availability. Particle filters offer a provably optimal algorithmic structure for recursively adding new incoming information. We propose a novel approach for tracking schools of fish in the open ocean from drone videos. Our framework does not merely perform classical object tracking in 2D; instead, it tracks the position and spatial expansion of the fish school in world coordinates by fusing video data with the drone's onboard sensor information (GPS and IMU). The presented framework for the first time allows researchers to study the collective behavior of fish schools in their natural social and environmental context in a non-invasive and scalable way.
... Using the values of this factor analysis, we attempted to generate 200 people and 2,000 agents using the algorithm in Figure 16, which leads to the same questionnaire results as the factor analysis results. The GA used in this study was based on the tournament method [22] with 1,600 integer genes from 0 to 4 (for the generation of 200 individuals) or 16,000 genes (for the generation of 2,000 individuals), 100 chromosomes, a crossover rate of 0.8, and a mutation rate of 0.3 (Figure 18). The machine ran Windows 11 (64-bit) with an AMD Ryzen 9 4900H 3.30 GHz processor and 32.0 GB of installed RAM. ...
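The tournament method referenced here is straightforward to sketch; the tournament size k is an illustrative default, and the crossover/mutation rates mentioned in the excerpt would be applied elsewhere in the GA loop.

```python
import random

def tournament_select(population, fitness, k=2):
    # Draw k contenders uniformly at random and keep the fittest;
    # larger k increases selection pressure. The excerpt's GA couples
    # this with a crossover rate of 0.8 and a mutation rate of 0.3.
    contenders = random.sample(population, k)
    return max(contenders, key=fitness)
```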
... All values in SI units. Compared algorithms and their respective options [14,15,16,17]. Algorithms used from the MATLAB Global Optimization Toolbox R2023b. ...
Preprint
Full-text available
Air-spring-dampers offer a novel alternative to traditional hydraulic damping within suspension systems. However, due to the intricate damping behavior and a multitude of configuration possibilities, the design process of air-spring-dampers proves to be challenging. This paper proposes a solution to address these challenges by introducing a simulation framework integrated with parameter optimization, enabling the simulation and design of any air-spring-damper. The simulation framework incorporates three fundamental modules: volume, mass exchange, and heat exchange. By abstracting the air-spring-damper as a graph and representing it via adjacency matrices, these modules will automatically be connected to construct a simulation model for any air-spring-damper configuration. Furthermore, leveraging the framework simplifies the design process through a parameter optimization. This method allows for a target damping curve to be set. The optimization algorithm then adjusts the air-spring-damper's design parameters until the target curve is achieved. A comparison of algorithms is conducted to determine the most suitable for this optimization problem. The pattern search and surrogate algorithms emerge as strong performers, effectively producing the target damping curve. To simplify the design process further, the common objective of enhancing driving safety and comfort is transformed into an optimization problem within the framework. This leads to the generation of a pareto front, which presents design recommendations that balance safety and comfort optimally.
... Stochastic algorithms are frequently used to solve engineering optimization problems, with many imitating various natural processes. Examples of stochastic algorithms include the genetic algorithm (Goldberg 1989), differential evolution (Storn and Price 1997), charged system search (Kaveh and Talatahari 2010), and particle swarm optimization (Kennedy and Eberhart 1995). Another stochastic method used in engineering optimization applications is harmony search (HS) (Geem et al. 2001). ...
Article
Abstract: In this study, we utilize two metaheuristic algorithms, particle swarm optimization, and biogeography-based optimization, to select and scale ground motion (GM) records for use in the time history analysis of structures. This method ensures that there is no alteration to the phase or shape of the response spectra of the records. The proposed methodology demonstrates an ability to search through hundreds of earthquake records and propose a combination of 11 record pairs and scaling factors, resulting in a mean spectrum that aligns with the target spectrum. We applied the proposed research to two sites in separate geographical regions in the United States: Memphis and San Francisco, and we followed the ASCE 7-22 procedure. Selected ground motions underwent scaling adjustments represented by scalar values in a user-defined range. Furthermore, we present error metrics, comparing the target spectrum with the mean spectrum derived from the selected records. To showcase the effectiveness of our approach, we conducted a comparative analysis against results obtained from the PEER-NGA methodology. The outcomes show that the methodology can be viewed as an effective and reliable approach for obtaining appropriate GM records for the time history analysis of structures.
... To address this, we propose a novel approach using a genetic algorithm to optimize weight merging. Genetic algorithms efficiently search large, complex spaces by iteratively selecting, combining, and mutating weights [26,14,8]. ...
Preprint
Full-text available
In this paper, we introduce a novel method for merging the weights of multiple pre-trained neural networks using a genetic algorithm called MeGA. Traditional techniques, such as weight averaging and ensemble methods, often fail to fully harness the capabilities of pre-trained networks. Our approach leverages a genetic algorithm with tournament selection, crossover, and mutation to optimize weight combinations, creating a more effective fusion. This technique allows the merged model to inherit advantageous features from both parent models, resulting in enhanced accuracy and robustness. Through experiments on the CIFAR-10 dataset, we demonstrate that our genetic algorithm-based weight merging method improves test accuracy compared to individual models and conventional methods. This approach provides a scalable solution for integrating multiple pre-trained networks across various deep learning applications. GitHub: https://github.com/YUNBLAK/MeGA-Merging-Multiple-Independently-Trained-Neural-Networks-Based-on-Genetic-Algorithm
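One way such a crossover over network weights can work is a uniform crossover on flattened weight vectors; this is a generic sketch only, and the actual MeGA operator may differ.

```python
import numpy as np

def uniform_crossover(parent_a, parent_b, rng=np.random.default_rng()):
    # Uniform crossover on flattened weight vectors: each weight is
    # inherited from one parent at random, so the child mixes features
    # of both parent networks.
    mask = rng.random(parent_a.size) < 0.5
    return np.where(mask, parent_a, parent_b)

# child = uniform_crossover(flat_weights_model1, flat_weights_model2)
```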
... The novelty of this embedded global-local GA lies in its ability to address two issues raised by the genetic algorithm itself. First, the real-coded GA faces issues such as premature convergence, slow convergence, and being easily trapped in local optima because it employs discrete variables to look for solutions in a continuous search space (Deb & Deb, 2014; Goldberg, 1989; Subbaraj et al., 2011; Tang & Tseng, 2013). The gradient-based Brent's method resolves the problem by directly finding local optimal solutions in the continuous real space, so that the near-global optimal solution can be found with fast convergence and a small population size. ...
Article
Full-text available
Developing a three‐dimensional (3D) lithofacies model from boreholes is critical for providing a coherent understanding of complex subsurface geology, which is essential for groundwater studies. This study aims to introduce a new geostatistical method—interval kriging—to efficiently conduct 3D borehole‐based lithological modeling with sand/non‐sand binary indicators. Interval kriging is a best linear unbiased estimator for irregular interval supports. Interval kriging considers 3D anisotropies between two orthogonal components—a horizontal plane and a vertical axis. A new 3D interval semivariogram is developed. To cope with the nonconvexity of estimation variance, the minimization of estimation variance is regulated with an additional regularization term. The minimization problem is solved by a global‐local genetic algorithm embedded with quadratic programming and Brent's method to obtain kriging weights and kriging length. Four numerical and real‐world case studies demonstrate that interval kriging is more computationally efficient than 3D kriging because the covariance matrix is largely reduced without sacrificing borehole data. Moreover, interval kriging produces more realistic geologic characteristics than 2.5D kriging, while conditional to spatial borehole data. Compared to the multiple‐point statistics (MPS) algorithm—SNESIM, interval kriging can reproduce the geological architecture and spatial connectivity of channel‐type features, meanwhile producing tabular‐type features with better connectivity. Because the regularization term constrains kriged value toward 0 or 1, interval kriging produces more certainty in sand/non‐sand classification than 2.5D kriging, 3D kriging, and SNESIM. In conclusion, interval kriging is an effective and efficient 3D geostatistical algorithm that can capture the 3D structural complexity while significantly reducing computational time.
... As mentioned above, the GA and PSO are two metaheuristic algorithms that have been utilized in many studies. GA was first introduced by Holland [36] and later improved by Goldberg [37] and Davis [38]. Compared to other optimization methods, the GA has a greater ability to move from local optimum points toward global solutions. ...
... Monte Carlo simulations [28] may be used to get good suboptimal sensor locations. However, a better option would be to use genetic algorithms (GA) [29,30]. The genetic algorithm for this problem was developed in MATLAB. ...
... The GA is an evolutionary algorithm inspired by natural selection and survival of the fittest. It was developed by Goldberg in the 1980s and is widely used for such optimization problems [30]. ...
... However, Holland's original goal was to study the phenomenon of adaptation and evolution when he published his book, Adaptation in Natural and Artificial Systems. Goldberg then extended it to optimization and search problems in 1989 [7]. Since then, genetic algorithms have been applied in many fields, such as image and signal processing, computational finance, bioinformatics, and robotics. ...
Preprint
Full-text available
In this paper, we tackle the network delays in the Internet of Things (IoT) for an enhanced QoS through a stable and optimized federated fog computing infrastructure. Network delays contribute to a decline in the Quality-of-Service (QoS) for IoT applications and may even disrupt time-critical functions. Our paper addresses the challenge of establishing fog federations, which are designed to enhance QoS. However, instabilities within these federations can lead to the withdrawal of providers, thereby diminishing federation profitability and expected QoS. Additionally, the techniques used to form federations could potentially pose data leakage risks to end-users whose data is involved in the process. In response, we propose a stable and comprehensive federated fog architecture that considers federated network profiling of the environment to enhance the QoS for IoT applications. This paper introduces a decentralized evolutionary game theoretic algorithm built on top of a Genetic Algorithm mechanism that addresses the fog federation formation issue. Furthermore, we present a decentralized federated learning algorithm that predicts the QoS between fog servers without the need to expose users' location to external entities. Such a predictor module enhances the decision-making process when allocating resources during the federation formation phases without exposing the data privacy of the users/servers. Notably, our approach demonstrates superior stability and improved QoS when compared to other benchmark approaches.
... The task of determining the optimal weight vector for the distance function in our clustering model is notably complex due to the variety of factors influencing customer behavior. To address this challenge, we considered a robust optimization technique: genetic algorithms [13]. These algorithms are renowned for their robustness and effectiveness across a myriad of applications, making them an excellent choice for the distance function optimization. ...
Preprint
Full-text available
Customer segmentation is a fundamental process to develop effective marketing strategies, personalize customer experience and boost their retention and loyalty. This problem has been widely addressed in the scientific literature, yet no definitive solution for every case is available. A specific case study characterized by several individualizing features is thoroughly analyzed and discussed in this paper. Because of the case properties a robust and innovative approach to both data handling and analytical processes is required. The study led to a sound proposal for customer segmentation. The highlights of the proposal include a convenient data partition to decompose the problem, an adaptive distance function definition and its optimization through genetic algorithms. These comprehensive data handling strategies not only enhance the dataset reliability for segmentation analysis but also support the operational efficiency and marketing strategies of sports centers, ultimately improving the customer experience.
... GAPSO is a heuristic algorithm that combines the genetic algorithm [42] and particle swarm optimization [43]. It enhances the global search capability of particle swarm optimization by using the crossover and mutation operations of the genetic algorithm, and it guides the genetic process through the individual update rules of particle swarm optimization. ...
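A schematic of one such hybrid step is sketched below: a standard PSO velocity/position update followed by GA-style crossover and mutation of particle positions. This is illustrative only; the exact operators of references [42] and [43] may differ, and all coefficients are conventional defaults.

```python
import numpy as np

def gapso_step(x, v, pbest, gbest, w=0.7, c1=1.5, c2=1.5,
               p_cross=0.8, p_mut=0.1, rng=np.random.default_rng()):
    # Standard PSO update pulls each particle toward its personal best
    # and the global best.
    r1, r2 = rng.random(x.shape), rng.random(x.shape)
    v = w * v + c1 * r1 * (pbest - x) + c2 * r2 * (gbest - x)
    x = x + v
    # GA-style one-point crossover between neighboring particles and
    # Gaussian mutation of positions add global exploration.
    n, d = x.shape
    for i in range(0, n - 1, 2):
        if rng.random() < p_cross:
            cut = rng.integers(1, d)
            x[[i, i + 1], cut:] = x[[i + 1, i], cut:]  # swap tails
    mask = rng.random(x.shape) < p_mut
    x = x + mask * rng.normal(0.0, 0.1, x.shape)
    return x, v
```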
Article
Full-text available
Optimizing the architecture of superconducting quantum processors is crucial for improving the efficiency of executing quantum programs. Existing schemes either modify general-purpose architectures, which might increase the probability of qubit frequency collisions, or customize special-purpose architectures based on the quantum programs to reduce the gate operations after qubit mapping; however, such architectures lack support for optimizing the post-mapping gate operations of multiple programs, which reduces their reusability. In this study, we propose a new processor architecture design method that reduces the average growth of the total post-mapping gate count across multiple quantum programs and reduces the impact of processor architecture on frequency collisions, thus improving the reusability of special-purpose processors. The main idea is to construct a new architecture by finding the maximum common edge subgraph among multiple special-purpose processor architectures. To show the effectiveness of our method, we selected quantum programs with different functions covering 9 different qubit counts for comparison. Comprehensive simulation results show that the architecture schemes generated by our method outperform two general-purpose architecture schemes based on the square lattice as well as eff-5-freq's special-purpose architecture schemes. Compared to the all-2-qubit-bus and eff-5-freq architecture schemes, after qubit mapping, the architecture schemes of our method have the smallest average growth of gate operations across multiple quantum programs (the largest average growth is 5.63%), which further supports the execution of different quantum programs. Meanwhile, the architecture schemes of our method also reduce the probability of frequency collisions by at least 4.48% compared to all other schemes. Furthermore, we compared our method with another special-purpose design method. Among the schemes of the different special-purpose architecture design methods, our method is able to generate architectures that better match multiple quantum programs. Therefore, our method can provide superconducting quantum processor architecture designs with higher reusability across multiple quantum programs.
... For each region, the values of the non-extensive parameters q and α are estimated by fitting the observed distribution with the non-extensive model of Equation (10). This was achieved by means of a minimization procedure using two different algorithms, namely the differential genetic evolution (DGE) [56] and the Broyden-Fletcher-Goldfarb-Shanno (BFGS) optimization algorithm [57]. DGE is an optimization algorithm that works by iteratively improving a candidate solution with regard to a given measure of quality. ...
Article
Full-text available
Mexico is a well-known seismically active country, which is primarily affected by several tectonic plate interactions along the southern Pacific coastline and by active structures in the Gulf of California. In this paper, we investigate this seismicity using the classical Gutenberg–Richter (GR) law and a non-extensive statistical approach based on Tsallis entropy. The analysis is performed using data from the corrected Mexican seismic catalog provided by the National Seismic Service, spanning the period from January 2000 to October 2023, and unlike previous work, it includes six different regions along the entire west coastline of Mexico. The Gutenberg–Richter law fitting to the earthquake sub-catalogs for all six regions studied indicates magnitudes of completeness between 3.30 and 3.76, implying that the majority of seismic movements occur for magnitudes less than 4. The cumulative distribution of earthquakes as derived from the Tsallis entropy was fitted to the corrected catalog data to estimate the q-entropic index for all six regions, which for values greater than one is a measure of the non-extensivity (i.e., non-equilibrium) of the system. All regions display values of the entropic index in the range 1.52 ≲ q ≲ 1.61, slightly lower than previously estimated (1.54 ≲ q ≲ 1.70) using catalog data from 1988 to 2010. The reason for this difference is related to the use of modern recording devices, which are sensitive to the detection of a larger number of low-magnitude events compared to older instrumentation.
... This was done on all given datasets. Hyperparameters: There are several known methods we can choose from for each genetic operator (initializer, selection, crossover, mutation) [12,10]. We only considered those that return binary vectors, as our solutions are binary. ...
Preprint
Full-text available
In this paper, we deal with bias mitigation techniques that remove specific data points from the training set to aim for a fair representation of the population in that set. Machine learning models are trained on these pre-processed datasets, and their predictions are expected to be fair. However, such approaches may exclude relevant data, making the attained subsets less trustworthy for further usage. To enhance the trustworthiness of prior methods, we propose additional requirements and objectives that the subsets must fulfill in addition to fairness: (1) group coverage, and (2) minimal data loss. While removing entire groups may improve the measured fairness, this practice is very problematic as failing to represent every group cannot be considered fair. In our second concern, we advocate for the retention of data while minimizing discrimination. By introducing a multi-objective optimization problem that considers fairness and data loss, we propose a methodology to find Pareto-optimal solutions that balance these objectives. By identifying such solutions, users can make informed decisions about the trade-off between fairness and data quality and select the most suitable subset for their application.
Article
Particle Swarm Optimization (PSO) has drawn attention due to its widespread use in scientific and engineering fields. However, it suffers from a major limitation: a slow exploration capability that leads to stagnation. To overcome this limitation, various algorithms have been hybridized with PSO to improve its exploration phase, but there is still a need for further improvement. With this in mind, this paper proposes a novel hybrid meta-heuristic algorithm called Hybrid Pelican-Particle Swarm Optimization (HPPSO) for solving complex optimization problems. The hybridization is motivated by the excellent exploration capability of the Pelican Optimization Algorithm (POA). The performance of the proposed HPPSO has been tested on 33 standard benchmark functions in MATLAB (R2023a). For evaluation, the results of the proposed HPPSO algorithm are compared with conventional PSO and POA, along with numerous other hybridized PSO algorithms (PSOGSA, HFPSO, PSOBOA, and PSOGWO). The results are analyzed statistically through convergence curves, boxplots, and a non-parametric Wilcoxon signed-rank test. These analyses show that the proposed HPPSO algorithm achieves a better optimum than the other algorithms used in the present paper.
Article
Full-text available
Suspended sediment load estimation is vital for the development of river initiatives, water resources management, the ecological health of rivers, determination of the economic life of dams, and the quality of water resources. In this study, the potential of Feed Forward Neural Network (FFNN), Genetic Algorithm (GA), Particle Swarm Optimization (PSO), and Shuffled Frog Leaping Algorithm (SFLA) models was evaluated for suspended sediment load (SSL) estimation in the Yeşilırmak River. To estimate SSL values, a heat map of the Pearson correlation values of the meteorological and hydrological parameters for 1973–2021 that significantly impacted SSL estimation was examined. As a result of the analysis, prediction models were developed with three different combinations of precipitation, streamflow, and past SSL values (M1: streamflow; M2: streamflow and precipitation; M3: streamflow, precipitation, and SSL). The prediction accuracy of the models was compared visually and with the Coefficient of Determination (R²), Bias Factor (BF), Mean Absolute Error (MAE), Mean Bias Error (MBE), Root Mean Square Error (RMSE), Akaike Information Criterion (AIC), and Kling-Gupta Efficiency (KGE) statistical criteria, as well as Bland-Altman plots, boxplots, scatter plots, and line plots. Based on the analyses, the PSO-ANN model in the M1 model combination showed good estimation performance with an RMSE of 1739.92, MAE of 448.56, AIC of 1061.55, R² of 0.96, MBE of 448.56, and BF of 0.29. Similarly, the SFLA-ANN model in the M2 model combination had an RMSE of 1819.58, MAE of 520.64, AIC of 1069.9, R² of 0.96, MBE of 520.64, and BF of 0.19. In the M3 model combination, the SFLA-ANN model achieved an RMSE of 1423.09, MAE of 759.88, AIC of 1071.9, R² of 0.81, MBE of 411.31, and BF of -0.77. Overall, these models can be considered good estimators as their predicted values are generally close to the measured values. The study outputs can help ensure the effective lifespan and operation of water structures and support precautions against sediment-related disaster risks.
Article
Full-text available
This research delves into the intersection of machine learning and additive manufacturing, specifically focusing on predicting the mechanical strength of FDM-printed PEEK components. The impact of these components is felt across a variety of industries, including aerospace, biomedical, and automotive, where mechanical strength plays a key role in material selection. Throughout the study, the mechanical strength is investigated through experimental analysis of four process parameters: infill density, layer height, printing speed, and infill pattern. Support Vector Regression (SVR) and Random Forest Regression (RFR) are used to accurately predict the ultimate tensile strength of the printed parts, with an average deviation from the experimental value of less than 5%. The study also examines the mechanical strength variation in relation to the process parameters using contour and surface plots. A genetic algorithm (GA) is employed to optimize the more accurate mechanical strength data predicted by the RFR algorithm, which yielded a maximum ultimate tensile strength of 66.17 MPa for 80% infill density, 0.103 mm layer height, 25.001 mm/sec printing speed, and octet infill pattern. Microstructural studies and test results further support the outcomes obtained through parametric analysis and optimization.
Article
The Vehicle Routing Problem (VRP) is important in the transportation and logistics industries. The Vehicle Routing Problem with Time Windows (VRPTW) is a kind of VRP with an additional time-window constraint in the model and is classified as an NP-hard problem. In this study, we propose the Stas crossover in a Genetic Algorithm (GA) to solve the VRPTW, developing the problem with K-means clustering. The experiments use the standard Solomon benchmark problem instances for the VRPTW. The results show that with K-means clustering the algorithm achieves better minimum and average distances than without it. With respect to the location and dispersion characteristics of the customers, the routes obtained with K-means clustering are arranged into orderly groups, whereas the routes without K-means clustering are disordered. The paper then compares crossover operator performance on the Solomon benchmark instances, and appropriate crossover operators are recommended for each type of problem. The results of the proposed algorithm are better than the best-known solutions from previous studies for some instances. Moreover, our proposed research can serve as a guideline for real-world case studies.
Book
Full-text available
Buku "Artificial Intelligence : Teori dan Penerapan AI di Berbagai Bidang" memberikan pemahaman mendalam tentang kecerdasan buatan (AI) dan aplikasinya dalam berbagai sektor. Dimulai dengan pengantar dan sejarah AI, buku ini mengulas konsep dasar, jenis sistem AI, teknik pengolahan data, pembelajaran mesin, jaringan saraf tiruan, algoritma genetika, serta pengolahan bahasa alami dan citra. Bagian khusus membahas etika dan tanggung jawab dalam penggunaan AI, memastikan teknologi ini digunakan secara bijak. Buku ini juga mengeksplorasi penerapan AI dalam bisnis, kesehatan, pendidikan, dan instansi pemerintah. Dengan wawasan tentang masa depan AI dan tren teknologi yang berkembang, buku ini menjadi referensi penting bagi pembaca yang ingin memahami dan memanfaatkan AI untuk meningkatkan efisiensi dan inovasi dalam berbagai bidang.
Article
Nanofluids exhibit remarkable thermophysical properties, making them highly promising candidates for heat transfer applications. Viscosity is a crucial thermophysical property of nanofluids, significantly influencing heat transfer rates and pressure loss computations. In this study, the dynamic viscosity of water-based nanofluids containing Al2O3, TiO2, and ZnO nanoparticles was experimentally measured over a wide range of volumetric concentrations (0.1–1.0%) and temperatures (20–50 °C). The dynamic viscosity of the nanofluids is then predicted with a multi-layer perceptron artificial neural network (ANN). Moreover, the genetic algorithm (GA) is adopted for obtaining the dynamic viscosity of the nanofluids. Finally, the results obtained from the designed ANN model and the GA are compared. The results show the feasibility of predicting the dynamic viscosity with the designed ANN model. The proposed ANN model holds promise for determining the dynamic viscosity of nanofluids in place of theoretical estimation equations or experiments, which require substantial expertise or time.
Article
Addressing the Economic Load Dispatch (ELD) problem in power systems is crucial for minimizing generation cost and transmission losses while meeting the load demand. This research explores the application of Grey Wolf Optimization (GWO) to solve the ELD problem, leveraging GWO's inspiration from grey wolf social behavior. Through simulation, GWO's superior convergence speed and solution quality compared to traditional techniques are demonstrated. The findings highlight GWO's effectiveness in enhancing the economic and operational efficiency of power systems, offering promising avenues for sustainable energy management strategies.
Article
Full-text available
The artificial intelligence (AI) industry is increasingly integrating with diverse sectors such as smart logistics, FinTech, entertainment, and cloud computing. This expansion has led to the coexistence of heterogeneous applications within multi-tenant systems, presenting significant scheduling challenges. This paper addresses these challenges by exploring the scheduling of various machine learning workloads in large-scale, multi-tenant cloud systems that utilize heterogeneous GPUs. Traditional scheduling strategies often struggle to achieve satisfactory results due to low GPU utilization in these complex environments. To address this issue, we propose a novel scheduling approach that employs a genetic optimization technique, implemented within a process-oriented discrete-event simulation framework, to effectively orchestrate various machine learning tasks. We evaluate our approach using workload traces from Alibaba’s MLaaS cluster with over 6000 heterogeneous GPUs. The results show that our scheduling improves GPU utilization by 12.8% compared to Round-Robin scheduling, demonstrating the effectiveness of the solution in optimizing cloud-based GPU scheduling.
Article
Full-text available
Photovoltaic (PV) systems face challenges in achieving maximum energy extraction due to the non-linear nature of their current versus voltage (IxV) characteristics, which are influenced by temperature and solar irradiation. These factors lead to variations in power generation. The situation becomes even more complex under partial shading conditions, causing distortion in the characteristic curve and creating discrepancies between local and global maximum power points. Achieving the highest output is crucial to enhancing energy efficiency in such systems. However, conventional maximum power point tracking (MPPT) techniques often struggle to locate the global maximum point required to extract the maximum power from the PV system. This study employs genetic algorithms (GAs) to address this issue. The system can efficiently search for the global maximum point using genetic algorithms, maximizing power extraction from the PV arrangements. The proposed approach is compared with the traditional Perturb and Observe (P&O) method through simulations, demonstrating its superior effectiveness in achieving optimal power generation.
Article
The relevance of the research is demonstrated. The goal of the study was achieved: determining the directions and features of forecasting changes in the formation and use of intelligent economic systems to ensure the development of construction enterprises. The following tasks were solved within the framework of the study: substantiation of the existing theoretical provisions regarding the assessment of the level of formation and use of the intellectual economic system of construction enterprises; determination of trends and models for forecasting changes in the formation and use of intelligent economic systems to ensure the development of construction enterprises; and identification of the features of the forecasting results. As a result of the study, a cause-and-effect relationship was established between the integral factors of the development of construction enterprises and the formation and use of an intellectual economic system. The impact of the development and application of the intelligent economic system on the development of construction enterprises has declined. Moreover, there is a decrease in the completeness of the information support of the main economic indicators, a slowing of the pace of implementation of geo-information systems and the use of geospatial support, and slower implementation of social standards, information support, and security systems. Forecasting of changes in the integral factor of the development of construction enterprises due to the growth of the integral indicator of the formation and use of the intellectual economic system was carried out. This made it possible to identify growth points in the context of developing scientifically based recommendations for the formation and use of an intelligent economic system of construction enterprises. A scientific-methodical approach to forecasting the influence of the directions of formation and use of the intellectual economic system on the development of construction enterprises is proposed. The development of construction enterprises is determined by a set of directions for the formation and use of production, economic, and human potential aimed at forming strategic and tactical advantages to achieve a better position compared to the past state.
Preprint
Full-text available
Gene selection is an essential step for the classification of microarray cancer data. Gene expression cancer data (DNA microarray) facilitates computing the robust and concurrent expression of various genes. Particle swarm optimization (PSO) requires simple operators and fewer parameters for tuning the model in gene selection. The selection of a prognostic gene with small redundancy is a great challenge for the researcher, as there are a few complications in PSO-based selection methods. In this research, a new variant of PSO (self-inertia weight adaptive PSO) is proposed. In the proposed algorithm, SIW-APSO-ELM is explored to achieve high gene selection prediction accuracy. This novel algorithm establishes a balance between the exploitation and exploration capabilities of the improved inertia weight adaptive particle swarm optimization. The self-inertia weight adaptive particle swarm optimization (SIW-APSO) algorithm is employed for solution exploration. Each particle in the SIW-APSO updates its position and velocity iteratively through an evolutionary process. The extreme learning machine (ELM) has been designed for the selection procedure. The proposed method has been used to identify several genes in the cancer datasets. The classification stage combines ELM, K-centroid nearest neighbor (KCNN), and support vector machine (SVM) to attain high forecast accuracy compared to the state-of-the-art methods on microarray cancer datasets, showing the effectiveness of the proposed method.
Article
Full-text available
Wireless sensor network (WSN) clustering techniques play a crucial role in extending the network's lifespan through various methods. In a WSN, the clustering techniques elect the best cluster heads (CHs) among the deployed sensor nodes in terms of their computational, energy, and link capabilities. The CH nodes expend more energy than other sensor nodes due to a heavier workload, such as receiving messages from their cluster members and other cluster heads, aggregating all messages, and transmitting them to the base station with the help of non-cluster-head nodes in the layered sensor network. Thus, there is a dire need to develop an efficient CH election algorithm. In this paper, the modified particle swarm optimization (M-PSO) method, along with the genetic algorithm (GA), is considered for selecting cluster heads and non-cluster heads. The proposed method computes the probability of choosing the best nodes as cluster heads, and the GA is employed to discover the optimum shortest path. The selection of the optimum route is based on the employed objective function. The proposed method, DMPRP, demonstrates superior performance compared to existing state-of-the-art techniques, performing 12% better than NEST, EC-PSO, and GAPSO-H overall.
Article
Full-text available
The process of an empty backhaul truck returning to its home domicile after a regular delivery journey has attracted many logistics companies in the modern world economy. This paper studies a selective full truckload multi-depot vehicle routing problem with time windows (SFTMDVRPTW) in an empty return scenario. This problem aims at planning a set of backhaul routes for a fleet of trucks that serve a subset of selected transportation demands from a number of full truckload orders to maximize the overall profit, given constraints of availability and time windows. After reviewing the literature on full truckload vehicle routing problems, based on the problem characteristics encountered as well as on the resolution approaches used, we formulate a mixed-integer linear programming (MILP) model for the SFTMDVRPTW. Since the problem is NP-hard, we propose a genetic algorithm (GA) to yield a near-optimal solution. A new two-part chromosome is used to represent the solution to our problem. Through selection based on the elitist and roulette methods, a new crossover operator called "selected two-part crossover chromosome (S-TCX)", and an exchange mutation operator, new individuals are generated. The proposed MILP model and GA are evaluated on newly randomly generated instances. The findings show that the GA significantly outperforms the CPLEX solver in solution quality and CPU time.
Article
Full-text available
Evapotranspiration plays a pivotal role in the hydrological cycle. It is essential to develop an accurate computational model for predicting reference evapotranspiration (RET) for agricultural and hydrological applications, especially for the management of irrigation systems, allocation of water resources, assessments of utilization and demand, and water use allocations in rural and urban areas. Limited climatic data often restrict the use of the standard Penman–Monteith method recommended by the Food and Agriculture Organization (FAO-PM56). Therefore, the current study used climatic data such as minimum, maximum and mean air temperature (Tmax, Tmin, Tmean), mean relative humidity (RHmean), wind speed (U), and sunshine hours (N) to predict RET using the gene expression programming (GEP) technique. In this study, a total of 17 different input meteorological combinations were used to develop RET models. The results of each GEP model were compared with FAO-PM56 to evaluate its performance. The GEP-13 model (Tmax, Tmin, RHmean, U) showed the lowest errors (RMSE, MAE) and highest efficiencies (R², NSE) in semi-arid (Faisalabad and Peshawar) and humid (Skardu) conditions, while GEP-11 and GEP-12 performed best in arid (Multan, Jacobabad) conditions. In the testing data sets, however, GEP-11 performed best in Multan and Jacobabad, GEP-7 in Faisalabad, GEP-1 in Peshawar, and GEP-13 in Islamabad and Skardu. In the testing phase, the GEP models' R² values reach 0.99, RMSE values range from 0.27 to 2.65, MAE values from 0.21 to 1.85, and NSE values from 0.18 to 0.99. The study findings indicate that GEP is effective in predicting RET when there are minimal climatic data. Additionally, the mean relative humidity was identified as the most relevant factor across all climatic conditions. The findings of this study may be applied to the planning and management of water resources in practical situations, as they demonstrate the impact of the input variables on RET under different climatic conditions.