Figure 9: Graph 3 (uploaded by Theodore Manikas)

Source publication
Article
Full-text available
An important stage in circuit design is placement, where components are assigned to physical locations on a chip. A popular contemporary method for placement is the use of simulated annealing. While this approach has been shown to produce good placement solutions, recent work in genetic algorithms has produced promising results. The purpose o...

Similar publications

Article
Full-text available
Ising machines are hardware devices that can solve ground-state search problems of Ising spin models and could be of use in solving various practical combinatorial optimization problems. However, large-scale systems have to be implemented by partitioning into subsystems that are hard to synchronize and where communication between them is difficult....

Citations

... This element-level fitness is formally referred to as goodness. Despite this unique feature, the SE algorithm has received limited attention from researchers, and only a few studies have reported its use to solve NP-hard problems [15][16][17][18]. ...
Article
Full-text available
Wind energy is a potential replacement for traditional, fossil-fuel-based power generation sources. One important factor in the process of wind energy generation is the design of the optimal layout of a wind farm to harness maximum energy. This layout optimization is a complex, NP-hard optimization problem. Due to the sheer complexity of this layout design, intelligent algorithms, such as the ones from the domain of natural computing, are required. One such effective algorithm is the simulated evolution (SE) algorithm. This paper presents a simulated evolution algorithm engineered to solve the wind farm layout design (WFLD) optimization problem. In contrast to many non-deterministic algorithms, such as genetic algorithms and particle swarm optimization, which operate on a population, the SE algorithm operates on a single solution, decreasing the computational time. Furthermore, the SE algorithm has only one parameter to tune, as opposed to many algorithms that require tuning multiple parameters. A preliminary empirical study is done using data collected from a potential location in the northern region of Saudi Arabia. Experiments are carried out on a 10 × 10 grid with 15 and 20 turbines while considering turbines with a rated capacity of 1.5 MW. Results indicate that the simulated evolution algorithm is a viable option for this problem.
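To make the element-level "goodness" idea concrete, the sketch below places turbines on a grid and iterates the usual SE phases of evaluation, selection, and allocation. The Manhattan-spacing goodness measure, the bias value, and the random reallocation step are illustrative assumptions, not the paper's wake-based model.

    import random

    def nearest_dist(i, layout):
        """Manhattan distance from turbine i to its nearest neighbour."""
        x, y = layout[i]
        return min(abs(x - a) + abs(y - b)
                   for j, (a, b) in enumerate(layout) if j != i)

    def simulated_evolution(n_turbines=15, grid=10, bias=0.2, iters=200):
        """Toy simulated evolution (SE) loop: evaluate per-element goodness,
        probabilistically select low-goodness elements, and reallocate them."""
        cells = [(x, y) for x in range(grid) for y in range(grid)]
        layout = random.sample(cells, n_turbines)
        max_d = 2 * (grid - 1)
        for _ in range(iters):
            # Evaluation: goodness in [0, 1], here based on spacing to the nearest neighbour
            goodness = [nearest_dist(i, layout) / max_d for i in range(n_turbines)]
            # Selection: poorly spaced turbines are more likely to be picked for a move
            selected = [i for i, g in enumerate(goodness) if random.random() > g + bias]
            # Allocation: move each selected turbine to a random free cell
            free = [c for c in cells if c not in layout]
            random.shuffle(free)
            for i in selected:
                if free:
                    layout[i] = free.pop()
        return layout

    layout = simulated_evolution()  # 15 turbines on a 10 x 10 grid, as in the paper's setup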
... However, solution qualities are limited by computational effort or computing resource availability. Since algorithms based on SA cannot be parallelized, EAs often outperform SA for complex problems [Manikas and Cain, 1996]. In the work at hand, we consider a basic evolution strategy (ES) often referred to as (1 + 1)-ES. ...
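For reference, the (1+1)-ES named above maintains a single parent, generates one Gaussian-mutated offspring per generation, and keeps whichever of the two is better. A minimal sketch under that description follows; the mutation step size and the toy sphere objective are assumptions for illustration only.

    import random

    def one_plus_one_es(x0, objective, sigma=0.1, generations=1000):
        """Minimal (1+1) evolution strategy: one parent, one Gaussian-mutated
        offspring per generation, keep the better of the two (minimization)."""
        parent, parent_fit = x0, objective(x0)
        for _ in range(generations):
            child = [xi + random.gauss(0.0, sigma) for xi in parent]
            child_fit = objective(child)
            if child_fit <= parent_fit:
                parent, parent_fit = child, child_fit
        return parent, parent_fit

    # Example usage on a toy sphere function (illustrative objective)
    best, fit = one_plus_one_es([5.0, -3.0], lambda x: sum(v * v for v in x))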
... Evolutionary algorithms are a class of meta-heuristic optimization algorithms inspired by biological mechanisms [Holland, 1992], [Bäck, 1996]. They are frequently applied to address NP-hard combinatorial optimization problems. ...
... Hence, GDU = ∫ (T(t) − T_base) dt. Commonly, the interval is a day, which is why it is also referred to as Growing Degree Days (GDD) [McMaster and Wilhelm, 1997]. Obviously, this GDU accumulation undergoes seasonal (and geographical) variations, causing uncertainties in GDU forecasts. ...
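With a daily interval, the integral above is usually approximated from the daily temperature extremes. The short sketch below shows that standard approximation; the base temperature and the sample readings are illustrative values only.

    def growing_degree_days(t_max, t_min, t_base=10.0):
        """Standard daily GDD approximation: mean of the daily temperature
        extremes minus the base temperature, floored at zero."""
        return max(0.0, (t_max + t_min) / 2.0 - t_base)

    # Accumulate GDU over a season from (t_max, t_min) pairs (values are illustrative)
    season = [(24.0, 12.0), (27.0, 15.0), (19.0, 9.0)]
    gdu = sum(growing_degree_days(hi, lo) for hi, lo in season)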
Article
Full-text available
In times of climate change, growing world population, and the resulting scarcity of resources, efficient and economical usage of agricultural land is increasingly important and challenging at the same time. To avoid disadvantages of monocropping for soil and environment, it is advisable to practice intercropping of various plant species whenever possible. However, intercropping is challenging as it requires a balanced planting schedule due to individual cultivation time frames. Maintaining a continuous harvest throughout the season is important as it reduces logistical costs and related greenhouse gas emissions, and can also help to reduce food waste. Motivated by the prevention of food waste, this work proposes a flexible optimization method for a full harvest season of large crop ensembles that complies with given economical and environmental constraints. Our approach applies evolutionary algorithms, and we further combine our evolution strategy with a sophisticated hierarchical loss function and an adaptive mutation rate. We thus transform the multi-objective problem into a pseudo-single-objective optimization problem, for which we obtain faster and better solutions than those of conventional approaches.
... The Genetic Algorithm tends to be defined as a population-based algorithm. What we infer here is that GA [11] provides more than one solution, possibly hundreds, depending on the size of the population. ...
... In GA, each iteration is called a 'generation'. In every generation, individuals in the current population of solutions are rated according to their 'effectiveness' as solutions to the problem at hand [11]. Taking these ratings into consideration, a population of new candidate solutions is formed using biologically inspired [3] operators such as mutation, crossover, and selection [4]. ...
... C. Algorithm: GENETIC ALGORITHM [11]
• begin
• create initial population of size P
• repeat
• select parent 1 and parent 2 from P. ...
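The truncated pseudocode above can be expanded into a minimal, self-contained genetic algorithm. The bit-string encoding, tournament selection, one-point crossover, and mutation rate below are illustrative choices, not necessarily the operators used in [11].

    import random

    def genetic_algorithm(fitness, n_bits=20, pop_size=30, generations=100,
                          crossover_rate=0.9, mutation_rate=0.01):
        """Minimal GA on bit strings: tournament selection, one-point
        crossover, bit-flip mutation.  Maximizes the given fitness function."""
        pop = [[random.randint(0, 1) for _ in range(n_bits)] for _ in range(pop_size)]
        best = max(pop, key=fitness)
        for _ in range(generations):
            new_pop = []
            while len(new_pop) < pop_size:
                # Select parent 1 and parent 2 from the population (tournament of 3)
                p1 = max(random.sample(pop, 3), key=fitness)
                p2 = max(random.sample(pop, 3), key=fitness)
                # One-point crossover
                if random.random() < crossover_rate:
                    cut = random.randint(1, n_bits - 1)
                    c1, c2 = p1[:cut] + p2[cut:], p2[:cut] + p1[cut:]
                else:
                    c1, c2 = p1[:], p2[:]
                # Bit-flip mutation, then add the children to the next generation
                for child in (c1, c2):
                    for i in range(n_bits):
                        if random.random() < mutation_rate:
                            child[i] ^= 1
                    new_pop.append(child)
            pop = new_pop[:pop_size]
            best = max(pop + [best], key=fitness)
        return best

    # Example: maximize the number of ones in the bit string (illustrative objective)
    solution = genetic_algorithm(fitness=sum)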
Article
Full-text available
This paper reviews and revisits the concepts, the algorithm followed, the flow of the sequence of actions, and the different operators used by the Genetic Algorithm. GAs are metaheuristic algorithms used for solving search problems. We will see that the Genetic Algorithm has good searching properties and selects its operators depending on the nature of the problem at hand; that is, if the problem has one optimal solution, the Genetic Algorithm as well as Simulated Annealing can be used to solve it, but if a problem has more than one solution, then only the Genetic Algorithm proves to be suitable and the better choice, as it creates several solutions for a problem.
... The type of the problem and the representation of the solution determine which algorithm could be more beneficial. Both GA and SA have been empirically evaluated against each other in the literature for different application domains such as circuit partitioning [MC96], optimal path selection in network routing [NS10], training neural networks [SDJ99], and continuous network design [XWW09]. From these previous studies on GA versus SA, there is no generally accepted superiority of GA over SA. ...
Thesis
Full-text available
Cloud computing enables cloud providers to offer computing infrastructure as a service in the form of virtual machines (VMs). VM placement is a vital component of any cloud management platform (e.g. OpenStack). VM placement is the process of mapping VMs to physical machines (PMs) efficiently according to the cloud provider’s objectives and placement constraints. So far, VM placement solutions adopt either a reservation-based or a demand-based VM placement strategy. Reservation-based VM placement allocates VMs to PMs according to the reserved VM size regardless of the actual workload. If a VM is making use of only a fraction of its reservation, then this leads to PM underutilization, which wastes energy and results in higher costs. In contrast, demand-based VM placement consolidates VMs based on the actual workload demand, which may lead to better utilization. However, it may incur more service level agreement violations (SLAVs) resulting from overloaded PMs and/or VM migrations among PMs due to workload fluctuations. This thesis aims to introduce a novel VM placement strategy to control the tradeoff between PM utilization and SLAVs, allowing cloud providers to explore the whole space of VM placement options that range from demand-based to reservation-based with the help of a single parameter. The thesis first presents our strategy, called parameter-based VM placement, using a static parameter. Then it introduces various algorithms that adjust this parameter continuously at run-time so that a provider can maintain the number of SLAVs below a certain (predetermined) threshold while using the smallest possible number of PMs. These algorithms fine-tune the parameter both at the cloud data centre level and at the VM level using reactive and hybrid (reactive and proactive) approaches. An empirical evaluation using CloudSim confirms that the proposed parameter-based VM placement solution offers more flexibility in choosing between different tradeoffs.
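One plausible reading of the single-parameter idea is an interpolation between a VM's current demand and its full reservation when computing the size used for packing. The sketch below is only that reading, paired with a toy first-fit packer; it is an illustrative assumption, not the thesis's actual placement model.

    def effective_vm_size(demand, reservation, p):
        """Hypothetical parameter-based sizing: p = 0 gives pure demand-based
        placement, p = 1 gives pure reservation-based placement."""
        return demand + p * (reservation - demand)

    def first_fit_placement(vms, pm_capacity, p):
        """Toy first-fit packing of VMs onto identical PMs using the
        parameterized size; returns a list of PMs with their assigned VMs."""
        pms = []
        for vm_id, demand, reservation in vms:
            size = effective_vm_size(demand, reservation, p)
            for pm in pms:
                if pm["used"] + size <= pm_capacity:
                    pm["vms"].append(vm_id)
                    pm["used"] += size
                    break
            else:
                pms.append({"vms": [vm_id], "used": size})
        return pms

    # Illustrative data: (vm_id, current demand, reserved size), PM capacity of 16 units
    placement = first_fit_placement([("vm1", 2, 8), ("vm2", 4, 4), ("vm3", 1, 8)], 16, p=0.5)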
... The convergence of these methods is strongly tied to their initial parameter settings [146]. The stochastic algorithms most widely used for sizing energy production systems and for energy dispatching include simulated annealing [155], particle swarm optimization [156], ant colony optimization [157], the firefly algorithm [158], and genetic algorithms [156], [159]. Stochastic algorithms aim to find the global optimum, but this search requires a large number of objective function evaluations and consequently a relatively long solution time, depending on the problem [146]. ...
Thesis
Full-text available
Access to electrical energy is essential for industrial and socio-economic development in every country in the world. In Benin, the electricity deficit remains a serious concern and is particularly acute in the country's rural areas. Moreover, Benin has substantial photovoltaic (PV) and hydroelectric (hydro) potential, which nevertheless remains largely under-exploited. This doctoral thesis focuses on the optimal sizing of a hybrid hydro-PV-storage system for an isolated rural power supply. In this work, we modeled the main components of the hybrid hydro-PV-storage system, namely the penstock, the electromechanical equipment (turbine and generator), the PV generator, the batteries (Bat), and the AC/DC and DC/DC converters. Modeling and optimizing the penstock with the NSGA-II genetic algorithm showed that the penstock's investment cost (C_inv_cond) grows with its hydraulic power (P_cond). P_cond and C_inv_cond grow logarithmically and quadratically, respectively, with the diameter (D_cond). Likewise, modeling and optimizing the generator showed that its total mass grows with its efficiency. As for modeling the cost of the electromechanical equipment, taking continental factors into account made it possible to estimate this cost more accurately. The second part of the thesis is devoted to optimizing different configurations of energy sources, namely the hydroelectric plant and the PV, hydro-PV, and hydro-PV-Bat systems. Two objective functions were considered: the total energy produced and the production cost. The solutions obtained are presented as Pareto fronts. The production cost of the PV system grows linearly with its total energy production. For the hydroelectric plant and the hybrid hydro-PV system, the solutions are grouped into four categories according to the number of hydroelectric production units: {n_hyd = 1, 2, 3, 4}. For the hybrid hydro-PV-Bat system, the results are grouped into two main categories according to the number of batteries: {n_Bat = 64, 192}. For n_Bat = 64, the solutions are classified into four groups according to n_hyd: {n_hyd = 1, 2, 3, 4}, whereas for n_Bat = 192 we have three cases: {n_hyd = 2, 3, 4}. The total energy produced and the production cost grow with the nominal design flow rate Q_T_n. Specifically, the trade-off between the objective functions favors the total energy produced for n_hyd = 1 (hydroelectric plant), for {n_hyd = 1, 2} (hydro-PV), and for {n_Bat = 64 & n_hyd = 1 to 4} and {n_Bat = 192 & n_hyd = 2, 3} (hydro-PV-Bat); in these cases, increasing the total energy is preferred. In contrast, the production cost is favored for {n_hyd = 2, 3, 4} (hydroelectric plant), {n_hyd = 3, 4} (hydro-PV), and {n_Bat = 192 & n_hyd = 4} (hydro-PV-Bat); in those cases, reducing the production cost is the preferred option.
... to 0. While this scheme is based on the model that was proposed by Manikas et al. in [13], we improved their model by adapting the gain function for more than two sets and by making the AcceptGainChange(∆Gain, T) function load-aware. ...
Conference Paper
Resilience in the SDN control plane is a challenging goal when a single controller is employed. Thus, distributed controllers are deployed to realize a resilient and reliable software defined network. However, such a strategy cannot succeed without an efficacious controller-switch assignment scheme. In addition to zero-day assignment, online re-assignment is crucial since, due to network failures, the connections between controllers and switches may break off intermittently and impair the network operation. In this paper, we propose a reactive assignment model against network failures using integer linear programming based on the load distribution of controllers. We augment our proposal with simulated annealing and random assignment approaches for switch, link and controller failures. The experimental results show that our model gives resilience against network failures and that load-awareness is an effective strategy for controller assignment.
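The AcceptGainChange(∆Gain, T) function mentioned in the citation above is, in classic simulated annealing, a Metropolis-style acceptance test. A minimal sketch follows; the load_penalty term is a stand-in for the paper's load-aware extension and is purely an assumption.

    import math
    import random

    def accept_gain_change(delta_gain, temperature, load_penalty=0.0):
        """Metropolis-style acceptance: always accept improving moves and accept
        worsening moves with probability exp(delta / T).  The load_penalty term
        is an illustrative stand-in for a load-aware adjustment."""
        delta = delta_gain - load_penalty
        if delta >= 0:
            return True
        return random.random() < math.exp(delta / temperature)

    # Example: a slightly worsening move at a moderate temperature is sometimes accepted
    accepted = accept_gain_change(delta_gain=-0.5, temperature=2.0)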
... The model takes into account several practical constraints such as the attenuation and the coverage of the excited wave. Since GA was proven to be effective in sensor optimization problems (2,8,9), it was adopted in this study and further validated on plates of different shapes with non-convex surfaces. ...
Conference Paper
Full-text available
The optimization of the number and location of piezoelectric (PZT) wafers, used in sensor networks for continuous monitoring of structures, is not completely well developed. This paper presents an effective method based on genetic algorithm for network optimization of piezoelectric wafers, towards the application in the field of structural health monitoring. The proposed objective function is to maximize the coverage of the monitored area, represented by a set of control points, while using the least possible number of sensors. In the optimum solution, each control point should be covered by a user-defined number of sensing paths, defined as the coverage level. During the optimization process, any location on the plate is considered as a potential position for a PZT wafer. A MATLAB code was developed to implement the algorithm, and selected simulation cases have been executed to demonstrate the efficiency of the proposed optimization algorithm. The algorithm provides the flexibility of changing a wide range of problem parameters such as the number of piezoelectric wafers, their coverage range, the required coverage level and the number of control points. The tractability of the model proposed was improved by feeding the solver an initial solution that made the branch and bound technique less extensive.
... For a 95% confidence interval, (1 − α) = 0.95, so α = 0.05 and α/2 = 0.025. From the z-tables for the standard normal distribution (Table III in Freund [35]), z_0.025 = 1.96 [35]. In this study, index 1 refers to the genetic algorithm, while index 2 refers to the simulated annealing method. ...
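For context, the 95% confidence interval being set up in that passage is the standard z-interval for the difference of two means. A small sketch follows; the sample statistics are invented purely for illustration.

    import math

    def diff_of_means_ci(mean1, mean2, var1, var2, n1, n2, z=1.96):
        """Two-sample z confidence interval for (mean1 - mean2):
        (mean1 - mean2) +/- z * sqrt(var1/n1 + var2/n2)."""
        diff = mean1 - mean2
        margin = z * math.sqrt(var1 / n1 + var2 / n2)
        return diff - margin, diff + margin

    # Index 1 = genetic algorithm, index 2 = simulated annealing; numbers are illustrative
    low, high = diff_of_means_ci(mean1=0.82, mean2=0.78, var1=0.010, var2=0.015, n1=30, n2=30)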
Article
Full-text available
Determining the positions of facilities, and allocating demands to them, is a vitally important problem. Location-allocation problems are NP-hard optimization problems. This article evaluates the ordered capacitated multi-objective location-allocation problem for fire stations, using simulated annealing and a genetic algorithm, with goals such as minimizing the distance and time as well as maximizing the coverage. After tuning the parameters of the algorithms using sensitivity analysis, they were used separately to process data for Region 11, Tehran. The results showed that the genetic algorithm was more efficient than simulated annealing, and therefore, the genetic algorithm was used in later steps. Next, we increased the number of stations. Results showed that the model can successfully provide seven optimal locations and allocate high demands (280,000) to stations in a discrete space in a GIS, assuming that the stations’ capacities are known. Following this, we used a weighting program so that in each repetition, we could allot weights to each target randomly. Finally, by repeating the model over 10 independent executions, a set of solutions with the least sum and the highest number of non-dominated solutions was selected from among many non-dominated solutions as the best set of optimal solutions.
... GA is a heuristic search and optimization algorithm inspired by natural evolution [24]. Moreover, GA can find high-quality solutions much faster than any other heuristic algorithm [25], [26]. GA is composed of the following five steps. ...
Article
Adiabatic quantum-flux-parametron (AQFP) logic is a very energy-efficient superconductor logic due to zero static power dissipation and adiabatic switching operation. However, there is a shortage of EDA (Electronic Design Automation) software tools adaptable to AQFP logic. Such tools are essential for designing large-scale-integration (LSI) circuits efficiently. Therefore, we have developed a first set of EDA tools that handle cell placement and wiring, written in SKILL, the Cadence scripting language. Our tools also consider the maximum wire length constraint that exists in AQFP logic. First, AQFP logic cells are repositioned via the Genetic Algorithm (GA) to minimize the number of wires that violate the wiring length constraint. Buffers are then automatically inserted as signal repeaters for logic rows where the violations still exist. We then complete the logic wiring of the circuit using the channel routing algorithm. We demonstrate the usability of the EDA tools by designing a 16-bit adder as well as a randomly generated test circuit.
... Besides simulated annealing (SA), we have tried several other metaheuristics. In particular, we have also tried to apply a genetic algorithm (GA) [37,51] in Appendix D and the AutoPart [41] algorithm in Appendix E. Results show that SA performs much better. ...
Conference Paper
Modern data analytical tasks often witness very wide tables, from a few hundred columns to a few thousand. While it is commonly agreed that column stores are an appropriate data format for wide tables and analytical workloads, the physical order of columns has not been investigated. Column ordering plays a critical role in I/O performance, because in wide tables accessing the columns in a single horizontal partition may involve multiple disk seeks. An optimal column ordering will incur minimal cumulative disk seek costs for the set of queries applied to the data. In this paper, we aim to find such an optimal column layout to maximize I/O performance. Specifically, we study two problems for column stores on HDFS: column ordering and column duplication. Column ordering seeks an approximately optimal order of columns; column duplication complements column ordering in that some columns may be duplicated multiple times to reduce contention among the queries' diverse requirements on the column order. We consider an actual fine-grained cost model for column accesses and propose algorithms that take a query workload as input and output a column ordering strategy with or without storage redundancy that significantly improves the overall I/O performance. Experimental results over real-life data and production query workloads confirm the effectiveness of the proposed algorithms in diverse settings.
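Since the citing work above reports that simulated annealing performed best for this column-ordering problem, a generic sketch of SA over column permutations is given below. The swap neighbourhood, geometric cooling schedule, and toy seek-cost objective are assumptions for illustration, not the paper's actual cost model.

    import math
    import random

    def sa_column_order(n_cols, cost, temp=1.0, cooling=0.995, iters=5000):
        """Generic simulated annealing over column permutations: swap two
        columns, accept via the Metropolis criterion, cool geometrically.
        `cost` maps a column order (tuple of indices) to a seek-cost value."""
        order = list(range(n_cols))
        best = order[:]
        for _ in range(iters):
            i, j = random.sample(range(n_cols), 2)
            candidate = order[:]
            candidate[i], candidate[j] = candidate[j], candidate[i]
            delta = cost(tuple(candidate)) - cost(tuple(order))
            if delta <= 0 or random.random() < math.exp(-delta / temp):
                order = candidate
                if cost(tuple(order)) < cost(tuple(best)):
                    best = order[:]
            temp *= cooling
        return best

    # Toy objective (illustrative only): columns queried together should sit close together
    pairs = [(0, 3), (1, 2), (0, 1)]
    best_order = sa_column_order(5, lambda p: sum(abs(p.index(a) - p.index(b)) for a, b in pairs))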