Article

Aerodynamic Noise Control by Optimal Shape Design

... Audet and Dennis (2006) proposed MADS as a way to implement frames so that the normalized directions used in infinitely many POLL steps generate a dense set on the unit sphere at a MADS limit point. This yields strong convergence results and excellent computational results for the MADS algorithms (Audet & Orban, 2006; Marsden, 2004). ...
... Once the initial data set, x_1, ..., x_m, is chosen, the cost function is evaluated at these points, and an initial surrogate model is constructed (Marsden, Wang, Dennis, & Moin, 2004). A Kriging surrogate model is used to interpolate the data and to predict the value of the function at a particular location in the parameter space (Marsden, 2004). Kriging is a statistical method based on spatial correlation functions. ...
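To make the Kriging idea in the snippet above concrete, the sketch below implements a minimal ordinary-Kriging-style interpolator with a Gaussian spatial correlation function. The correlation model, the `theta` parameter, and the constant-trend formulation are illustrative assumptions, not the setup of the cited work.

```python
import numpy as np

def kriging_predict(X, y, x_new, theta=1.0):
    """Minimal Kriging-style prediction with the Gaussian spatial
    correlation R(a, b) = exp(-theta * ||a - b||^2)."""
    X = np.atleast_2d(X)
    n = X.shape[0]
    # Correlation matrix between the sampled points.
    d2 = ((X[:, None, :] - X[None, :, :]) ** 2).sum(axis=2)
    R = np.exp(-theta * d2)
    # Correlation between the new point and the samples.
    r = np.exp(-theta * ((X - x_new) ** 2).sum(axis=1))
    ones = np.ones(n)
    Ri = np.linalg.solve(R, np.eye(n))
    mu = (ones @ Ri @ y) / (ones @ Ri @ ones)   # constant-trend estimate
    return mu + r @ Ri @ (y - mu * ones)

# The predictor interpolates: at a sampled point it returns the sample value.
X = np.array([[0.0], [0.5], [1.0]])
y = np.array([0.0, 0.25, 1.0])
```

Because the correlation vector of a sampled point is a row of R, the predictor reproduces the data exactly there, which is the interpolation property the snippet refers to.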
... If it is, then she runs the more precise simulation. Her SEARCH includes applying an evolutionary algorithm to the DACE surrogates (Marsden, 2004). Another interesting application of surrogates is in Marsden et al. (2004), where a framework to identify good algorithmic parameter values is provided. ...
Article
The optimal utilization of multiple combined heat and power (CHP) systems is a complex problem. Therefore, efficient methods are required to solve it. In this paper, a recent optimization technique, namely mesh adaptive direct search (MADS), is implemented to solve the combined heat and power economic dispatch (CHPED) problem with a bounded feasible operating region. Three test cases taken from the literature are used to evaluate the exploring ability of MADS. Latin hypercube sampling (LHS), particle swarm optimization (PSO) and design and analysis of computer experiments (DACE) surrogate algorithms are used as powerful SEARCH strategies in the MADS algorithm to improve its effectiveness. The numerical results demonstrate that the MADS–LHS, MADS–PSO and MADS–DACE algorithms have acceptable performance when applied to the CHPED problems. The results obtained using the MADS–DACE algorithm are considerably better than, or as good as, the best known solutions reported previously in the literature. In addition to the superior performance, MADS–DACE provides significant savings of computational effort.
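Latin hypercube sampling, used above as a SEARCH strategy, stratifies each coordinate of the unit cube so that every one-dimensional stratum is sampled exactly once. A minimal sketch (the function name and interface are illustrative, not from any cited code):

```python
import numpy as np

def latin_hypercube(n_samples, n_dims, rng=None):
    """Latin hypercube sample in [0, 1]^n_dims: each dimension is split
    into n_samples equal strata and each stratum is hit exactly once."""
    rng = np.random.default_rng(rng)
    # One uniform draw inside each stratum, plus an independent
    # permutation of the strata per dimension.
    u = rng.random((n_samples, n_dims))
    strata = np.array([rng.permutation(n_samples) for _ in range(n_dims)]).T
    return (strata + u) / n_samples

pts = latin_hypercube(10, 2, rng=0)
```

In a MADS SEARCH step such points would be mapped onto the current mesh and evaluated with the (possibly expensive) objective before polling.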
... Audet and Dennis [13] suggested MADS as a way to implement frames so that the directions used in infinitely many POLL steps generate a dense set in the tangent cone at a MADS limit point x̂ ∈ X. This allows strong convergence results [13,5] and excellent computational results for the MADS algorithms [15,16,41]. ...
... Her SEARCH consists of applying an evolutionary algorithm to the DACE surrogates. See [41] for details. Another interesting application of surrogates is in [15], where a framework to identify good algorithmic parameter values is given. ...
Article
Full-text available
This paper is intended not as a survey, but as an introduction to some ideas behind the class of mesh adaptive direct search (MADS) methods. Space limitations dictate a brief description of various key topics to be provided along with several references, which themselves provide further references. The convergence theory for the methods presented here makes a case for closing the gap between nonlinear optimizers and nonsmooth analysts. However, these methods are certainly not of purely theoretical interest; they are successful on difficult practical problems. To encourage further use, we give references to available implementations. MADS is implemented in the direct search portion of the MathWorks MATLAB Genetic Algorithm and Direct Search (GADS) Toolbox.
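The defining MADS ingredient mentioned in the snippets above is a poll whose normalized directions eventually become dense on the unit sphere. The toy sketch below conveys that idea with independent random unit directions; it is not a faithful MADS implementation (real MADS restricts poll points to an underlying mesh and manages the mesh and poll sizes separately), and all names here are illustrative.

```python
import numpy as np

def random_poll_step(f, x, step, rng):
    """One poll step along a random unit direction and its negative.
    On success keep the step size; on failure refine it."""
    d = rng.standard_normal(x.size)
    d /= np.linalg.norm(d)          # normalized poll direction
    for trial in (x + step * d, x - step * d):
        if f(trial) < f(x):
            return trial, step      # success: accept the trial point
    return x, step / 2              # failure: shrink the step

def minimize(f, x0, step=1.0, iters=200, seed=0):
    rng = np.random.default_rng(seed)
    x = np.asarray(x0, dtype=float)
    for _ in range(iters):
        x, step = random_poll_step(f, x, step, rng)
    return x

x_star = minimize(lambda z: float((z ** 2).sum()), [2.0, -1.5])
```

Because the accumulated poll directions spread over the whole unit sphere, no descent direction can be missed forever, which is the intuition behind the Clarke-calculus convergence results cited above.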
... This might happen if the domain is disjoint, or if feasibility at every step requires the iteration to take small steps along an erratic boundary to get into a better part of the feasible region. Of course this might be mitigated in some problems for which checking the constraints may be less expensive than computing the objective function, and neither MADS version asks for function values at points outside X [19]. ...
... The SEARCH strategy may, for example, be based on a heuristic exploration of the domain, or it may employ surrogate functions based on response surfaces, interpolatory models or simplified physics models. Surrogates are most often tailored for specific applications, see, e.g., [5,7,9,19,20,22]. Let us simply denote by S_k the finite set of mesh points used in the SEARCH step at iteration k. ...
Article
Full-text available
We propose a new algorithm for general constrained derivative-free optimization. As in most methods, constraint violations are aggregated into a single constraint violation function. As in filter methods, a threshold, or barrier, is imposed on the constraint violation function, and any trial point whose constraint violation function value exceeds this threshold is discarded from consideration. In the new algorithm, unlike the filter method, the amount of constraint violation subject to the barrier is progressively decreased as the algorithm evolves. Using the Clarke nonsmooth calculus, we prove Clarke stationarity of the sequences of feasible and infeasible trial points. The new method is effective on two academic test problems with up to 50 variables, which were problematic for our GPS filter method. We also test on a chemical engineering problem. The proposed method generally outperforms our LTMADS in the case where no feasible initial points are known, and it does as well when feasible points are known.
... Applications of derivative-free optimization are widely seen in engineering fields, such as tuning of algorithmic parameters [4], optimization of neural networks [5], molecular geometry in biochemistry [6], automatic error analysis [7], dynamic pricing [8] and optimal design in engineering. Optimal design problems treated with derivative-free methods include helicopter rotor blade design [9], wing planform design [10], aeroacoustic shape design [11], hydrodynamic design [12] and so on. ...
Preprint
Full-text available
Derivative-free optimization problems are optimization problems where derivative information is unavailable. The least Frobenius norm updating quadratic interpolation model function is one of the essential under-determined model functions for model-based derivative-free trust-region methods. This article proposes derivative-free optimization with transformed objective functions and gives a trust-region method with the least Frobenius norm model. The model updating formula is based on Powell's formula. The method shares the same framework with those for problems without transformations, and its query scheme is given. We propose the definitions related to optimality-preserving transformations to understand the interpolation model in our method. We prove the existence of model optimality-preserving transformations beyond the translation transformation. The necessary and sufficient condition for such transformations is given. The affine transformation with a positive multiplication coefficient is not model optimality-preserving. We also analyze the corresponding least Frobenius norm updating model and its interpolation error when the objective function is affinely transformed. A convergence property of a provable algorithmic framework containing our model is given. Numerical results of solving test problems and a real-world problem with the implementation NEWUOA-Trans show that our method can successfully solve most problems with objective optimality-preserving transformations, even though such transformations will change the optimality of the model function. To the best of our knowledge, this is the first work providing the model-based derivative-free algorithm and analysis for transformed problems with the function evaluation oracle (not the function-value comparison oracle). This article also proposes the "moving-target" optimization problem.
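The least Frobenius norm updating model mentioned above can be stated explicitly. In Powell's standard (untransformed) setting, given interpolation points y^1, ..., y^m and the previous model Q_k, the next quadratic model solves (a sketch of the standard formulation, not of the transformed variant analyzed in the preprint):

```latex
Q_{k+1} \in \arg\min_{Q}\; \bigl\| \nabla^2 Q - \nabla^2 Q_k \bigr\|_F^2
\quad \text{subject to} \quad Q(y^i) = f(y^i), \qquad i = 1, \dots, m .
```

The transformed-objective method replaces the interpolated values f(y^i) with values of the transformed objective, which is what the preprint's interpolation-error analysis must account for.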
... For example, applications include tuning of algorithmic parameters [5], molecular geometry in biochemistry [50], automatic error analysis [36,37], and dynamic pricing [44]. Optimal design problems in engineering design [10,11,65] refer to derivative-free optimization as well, including wing planform design [2], aeroacoustic shape design [48,49], and hydrodynamic design [24]. There are still some disadvantages of derivative-free methods on account of the unavailable derivative information. ...
Preprint
Full-text available
Derivative-free optimization methods are numerical methods for optimization problems in which no derivative information is used. Such optimization problems are widely seen in many real applications. One particular class of derivative-free optimization algorithms is trust-region algorithms based on quadratic models given by under-determined interpolation. Different techniques in updating the quadratic model from iteration to iteration will give different interpolation models. In this paper, we propose a new way to update the quadratic model by minimizing the $H^2$ norm of the difference between neighbouring quadratic models. Motivation for applying the $H^2$ norm is given, and theoretical properties of our new updating technique are also presented. Projection in the sense of $H^2$ norm and interpolation error analysis of our model function are proposed. We obtain the coefficients of the quadratic model function by using the KKT conditions. Numerical results show advantages of our model, and the derivative-free algorithms based on our least $H^2$ norm updating quadratic model functions can solve test problems with fewer function evaluations than algorithms based on least Frobenius norm updating.
... DFO methods are also widely used in industry and engineering, especially for solving problems that involve heavy simulations. Such problems arise from helicopter rotor blade manufacturing [22,23,191], aeroacoustic shape design [131,132], computational fluid dynamics [62], worst-case analysis of analog circuits [124], rapid-cycling synchrotron accelerator modeling [63], nuclear energy engineering [117,119,118], reservoir engineering and engine calibration [122], and groundwater supply and bioremediation engineering [76,140,215], to name but a few. In general, problems that involve sophisticated models, simulations, or experiments, induce DFO problems. ...
Preprint
Full-text available
This thesis studies derivative-free optimization (DFO), particularly model-based methods and software. These methods are motivated by optimization problems for which it is impossible or prohibitively expensive to access the first-order information of the objective function and possibly the constraint functions. In particular, this thesis presents PDFO, a package we develop to provide both MATLAB and Python interfaces to Powell's model-based DFO solvers, namely COBYLA, UOBYQA, NEWUOA, BOBYQA, and LINCOA. Moreover, a significant part of this thesis is devoted to developing a new DFO method based on the sequential quadratic programming (SQP) method. Therefore, we present an overview of the SQP method and provide some perspectives on its theory and practice. In particular, we show that the objective function of the SQP subproblem is a natural quadratic approximation of the original objective function in the tangent space of a surface. Finally, we elaborate on developing our new DFO method, named COBYQA after Constrained Optimization BY Quadratic Approximations. This derivative-free trust-region SQP method is designed to tackle nonlinearly constrained optimization problems that admit equality and inequality constraints. An important feature of COBYQA is that it always respects bound constraints, if any, which is motivated by applications where the objective function is undefined when bounds are violated. We present extensive numerical experiments on COBYQA, showing clear advantages over Powell's DFO solvers. These experiments demonstrate that COBYQA is an excellent successor to COBYLA as a general-purpose DFO solver. This is the Ph.D. thesis finished under the supervision of Dr. Zaikun Zhang and Prof. Xiaojun Chen at The Hong Kong Polytechnic University. Financial support was provided by the UGC of Hong Kong under the Hong Kong Ph.D. Fellowship Scheme.
... Derivativefree methods have been successfully applied in many areas of study. These include engineering design [7,22,42], molecular geometry [2,44], geophysics [24,68] and finance [35,61], just to name a few. In this section, we describe two examples of these applications. ...
... Within the pattern search framework, the use of the search step for surrogate optimization [5,30] or global optimization [1] is an active area of research. Hart has also used evolutionary programming to design evolutionary pattern search methods (see [16] and the references therein). ...
Article
In this paper we develop, analyze, and test a new algorithm for the global minimization of a function subject to simple bounds without the use of derivatives. The underlying algorithm is a pattern search method, more specifically a coordinate search method, which guarantees convergence to stationary points from arbitrary starting points. In the optional search phase of pattern search we apply a particle swarm scheme to globally explore the possible nonconvexity of the objective function. Our extensive numerical experiments showed that the resulting algorithm is highly competitive with other global optimization methods also based on function values.
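A minimal sketch of the poll phase of the coordinate search method described above (the paper's particle swarm SEARCH phase is omitted, and the function name and interface are illustrative):

```python
import numpy as np

def coordinate_search(f, x0, step=1.0, tol=1e-6, max_iter=10_000):
    """Coordinate (compass) search: poll along +/- each coordinate
    direction, accept the first improving point, and halve the step
    size when no poll point improves on the incumbent."""
    x = np.asarray(x0, dtype=float)
    fx = f(x)
    it = 0
    while step > tol and it < max_iter:
        it += 1
        improved = False
        for i in range(x.size):
            for s in (+step, -step):
                trial = x.copy()
                trial[i] += s
                ft = f(trial)
                if ft < fx:
                    x, fx, improved = trial, ft, True
                    break
            if improved:
                break
        if not improved:
            step /= 2   # refine the mesh
    return x, fx

x_best, f_best = coordinate_search(lambda z: float(((z - 1.0) ** 2).sum()),
                                   [0.0, 0.0])
```

In the paper's algorithm, a particle swarm exploration is run before this poll at each iteration; the poll alone is what guarantees convergence to stationary points.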
... When these methods are used to solve expensive problems, a great deal of cost savings can occur from using searches well suited to the problem. Some convincing statistics on this point are given in [17]. We put no effort into finding effective searches because the main issues treated here concern different ways of polling and of handling constraints. ...
Article
Full-text available
The class of Mesh Adaptive Direct Search (Mads) algorithms is designed for the optimization of constrained black-box problems. The purpose of this paper is to compare instantiations of Mads under different strategies to handle constraints. Intensive numerical tests are conducted from feasible and/or infeasible starting points on three real engineering applications. The three instantiations are Gps, LTMads and OrthoMads. Constraints are handled by the extreme barrier, the progressive barrier, or by a mixture of both. The applications are the optimization of a styrene production process, an MDO mechanical engineering problem, and a well positioning problem, and the codes are publicly available. Keywords: Mesh Adaptive Direct Search algorithms (Mads), black-box optimization, constrained optimization, nonlinear programming, optimization test problems
Article
Blackbox optimization typically arises when the functions defining the objective and constraints of an optimization problem are computed through a computer simulation. The blackbox is expensive to compute, can have limited precision and can be contaminated with numerical noise. It may also fail to return a valid output, even when the input appears acceptable. Launching the simulation twice from the same input may produce different outputs. These unreliable properties are frequently encountered when dealing with real optimization problems. The term blackbox is used to indicate that the internal structure of the target problem, such as derivatives or their approximations, cannot be exploited as it may be unknown, hidden, unreliable, or nonexistent. There are situations where some structure such as bounds or linear constraints may be exploited, and in some cases a surrogate of the problem is supplied or a model may be constructed and trusted. This chapter surveys algorithms for this class of problems, including a supporting convergence analysis based on the nonsmooth calculus. The chapter also lists numerous published applications of these methods to real optimization problems. © 2014 Springer Science+Business Media New York. All rights reserved.
Article
We propose a new constraint-handling approach for general constraints that is applicable to a widely used class of constrained derivative-free optimization methods. As in many methods that allow infeasible iterates, constraint violations are aggregated into a single constraint violation function. As in filter methods, a threshold, or barrier, is imposed on the constraint violation function, and any trial point whose constraint violation function value exceeds this threshold is discarded from consideration. In the new algorithm, unlike the filter method, the amount of constraint violation subject to the barrier is progressively decreased adaptively as the iteration evolves. We test this progressive barrier (PB) approach versus the extreme barrier (EB) with the generalized pattern search (Gps) and the lower triangular mesh adaptive direct search (LTMads) methods for nonlinear derivative-free optimization. Tests are also conducted using the Gps-filter, which uses a version of the Fletcher-Leyffer filter approach. We know that Gps cannot be shown to yield kkt points with this strategy or the filter, but we use the Clarke nonsmooth calculus to prove Clarke stationarity of the sequences of feasible and infeasible trial points for LTMads-PB. Numerical experiments are conducted on three academic test problems with up to 50 variables and on a chemical engineering problem. The new LTMads-PB method generally outperforms our LTMads-EB in the case where no feasible initial points are known, and it does as well when feasible points are known. which leads us to recommend LTMads-PB. Thus the LTMads-PB is a useful practical extension of our earlier LTMads-EB algorithm, particularly in the common case for real problems where no feasible point is known. The same conclusions hold for Gps-PB versus Gps-EB.
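The two building blocks of the approach above, aggregating constraint violations into a single function h and screening trial points against a barrier threshold, can be sketched as follows. The quadratic aggregation and the function names are illustrative choices, not the paper's exact definitions.

```python
def violation(c_values):
    """Aggregate the violations of constraints c_i(x) <= 0 into a single
    constraint-violation function h(x): sum of squared positive parts.
    h(x) = 0 exactly when x is feasible."""
    return sum(max(0.0, c) ** 2 for c in c_values)

def admit(trial_c, h_max):
    """Barrier test on a trial point with constraint values trial_c.
    Extreme barrier: h_max = 0, so every infeasible point is discarded.
    Progressive barrier: h_max starts positive and is driven toward 0,
    so mildly infeasible trial points survive early iterations."""
    return violation(trial_c) <= h_max

# A trial point violating c_1(x) = 0.5 > 0 survives a progressive
# barrier threshold of 1.0 but is rejected by the extreme barrier.
```

This is why the progressive barrier helps when no feasible starting point is known: early iterates may approach the feasible region from outside as h_max is tightened.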
Article
In this work, a reduced order multidisciplinary optimization procedure is developed to enable efficient, low frequency, undamped and damped, fully coupled, structural–acoustic optimization of interior cavities backed by flexible structural systems. This new method does not require the solution of traditional eigenvalue problems to reduce computational time during optimization, but is instead based on the computation of Arnoldi vectors belonging to the induced Krylov subspaces. The key idea of constructing such a reduced order model is to remove the uncontrollable, unobservable and weakly controllable, observable parts without affecting the noise transfer function of the coupled system. In a unified approach, the validity of the optimization framework is demonstrated on a constrained composite plate/prism cavity coupled system. For the fully coupled, vibro-acoustic, unconstrained optimization problem, the design variables take the form of stacking sequences of a composite structure enclosing the acoustic cavity. The goal of the optimization is to reduce sound pressure levels at the driver's ear location. It is shown that by incorporating the reduced order modelling procedure within the optimization framework, a significant reduction in computational time can be obtained, without any loss of accuracy, when compared to the direct method. The method could prove a valuable tool for analyzing and optimizing complex coupled structural–acoustic systems, where, in addition to fast analysis, a fine frequency resolution is often required. Keywords: Krylov subspace, Arnoldi, structural–acoustic optimization, mesh adaptive direct search
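The Arnoldi vectors mentioned above come from the classical Arnoldi iteration, which builds an orthonormal basis of a Krylov subspace together with a small Hessenberg matrix; projecting the full system onto that basis is what yields the reduced order model. A generic sketch (not the paper's vibro-acoustic formulation):

```python
import numpy as np

def arnoldi(A, b, m):
    """Arnoldi iteration: orthonormal basis V of the Krylov subspace
    span{b, Ab, ..., A^(m-1) b} and Hessenberg matrix H satisfying
    A @ V[:, :m] = V @ H (up to roundoff)."""
    n = b.size
    V = np.zeros((n, m + 1))
    H = np.zeros((m + 1, m))
    V[:, 0] = b / np.linalg.norm(b)
    for j in range(m):
        w = A @ V[:, j]
        for i in range(j + 1):            # modified Gram-Schmidt
            H[i, j] = V[:, i] @ w
            w -= H[i, j] * V[:, i]
        H[j + 1, j] = np.linalg.norm(w)
        if H[j + 1, j] < 1e-12:           # happy breakdown
            return V[:, : j + 1], H[: j + 2, : j + 1]
        V[:, j + 1] = w / H[j + 1, j]
    return V, H

rng = np.random.default_rng(0)
A = rng.standard_normal((8, 8))
b = rng.standard_normal(8)
V, H = arnoldi(A, b, 4)
```

A reduced model of dimension m replaces A by the m-by-m leading block of H, which preserves the transfer-function moments matched by the Krylov subspace.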
Article
Full-text available
In this study, the accuracy of the k-ω turbulence model was improved for hydrodynamic predictions of an underwater vehicle. The closure coefficients were optimised by applying the Surrogate Management Framework optimisation algorithm [1, 2] and comparing with experimental data for the SUBOFF submarine model. The outcome revealed the sensitivity of RANS accuracy with respect to the various closure coefficients, and highlighted the improvements achieved by using the optimised coefficients.