Article

Image processing by simulated annealing

Authors:
  • P. Carnevali
  • L. Coletti
  • S. Patarnello

Abstract

It is shown that simulated annealing, a statistical mechanics method recently proposed as a tool in solving complex optimization problems, can be used in problems arising in image processing. The problems examined are the estimation of the parameters necessary to describe a geometrical pattern corrupted by noise, the smoothing of bi-level images, and the process of halftoning a continuous-level image. The analogy between the system to be optimized and an equivalent physical system, whose ground state is sought, is put forward by showing that some of these problems are formally equivalent to ground state problems for two-dimensional Ising spin systems. In the case of low signal-to-noise ratios (particularly in image smoothing), the methods proposed here give better results than those obtained with standard techniques.
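The mapping to an Ising ground-state problem suggests a straightforward Metropolis-style annealing loop over binary pixels. The Python sketch below is only a minimal illustration of that idea, not the authors' implementation; the coupling J, the data-fidelity weight h, and the cooling schedule are arbitrary values chosen for the example.

import numpy as np

def anneal_bilevel(noisy, J=1.0, h=1.0, T0=2.0, Tmin=0.05, alpha=0.95, rng=None):
    """Smooth a noisy {-1, +1} image by annealing an Ising-like energy.

    Total energy: E = -J * sum_<ij> s_i s_j  -  h * sum_i s_i * noisy_i
    (the first term favours smooth regions, the second keeps s close to the data)."""
    rng = np.random.default_rng() if rng is None else rng
    s = noisy.copy()
    H, W = s.shape
    T = T0
    while T > Tmin:
        for _ in range(H * W):                      # one sweep's worth of attempts per temperature
            i, j = rng.integers(H), rng.integers(W)
            nb = (s[(i - 1) % H, j] + s[(i + 1) % H, j] +
                  s[i, (j - 1) % W] + s[i, (j + 1) % W])
            dE = 2.0 * s[i, j] * (J * nb + h * noisy[i, j])   # energy change of flipping s[i, j]
            if dE <= 0 or rng.random() < np.exp(-dE / T):
                s[i, j] = -s[i, j]
        T *= alpha                                  # geometric cooling
    return s

# usage: corrupt a simple bi-level pattern, then smooth it
clean = np.ones((64, 64), dtype=int)
clean[16:48, 16:48] = -1
rng = np.random.default_rng(0)
noisy = np.where(rng.random(clean.shape) < 0.2, -clean, clean)
print("pixels wrong before/after:",
      int((noisy != clean).sum()), int((anneal_bilevel(noisy, rng=rng) != clean).sum()))

Lowering the temperature slowly enough lets the configuration settle into a low-energy (smooth but data-consistent) state, which is the essence of the approach described in the abstract.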

... Originally proposed by Kirkpatrick et al. (1983), SA draws inspiration from the solid annealing process to address optimization problems. Over the years, SA has shown remarkable success in solving complex optimization problems in various fields, including computer (VLSI) design, image processing, molecular physics, and chemistry (see, for example, Wong et al. 2012; Carnevali et al. 1987; Jones 1991; and Pannetier et al. 1990). This paper shows that the SA algorithm significantly improves the optimization of the estimator GCov in mixed causal-noncausal autoregressive models, ensuring accurate parameter estimates and correct inference on autoregressive orders. ...
Article
Full-text available
This paper investigates the performance of routinely used optimization algorithms in application to the Generalized Covariance estimator (GCov) for univariate and multivariate mixed causal and noncausal models. The GCov is a semi-parametric estimator with an objective function based on nonlinear autocovariances to identify causal and noncausal orders. When the number and type of nonlinear autocovariances included in the objective function are insufficient/inadequate, or the error density is too close to the Gaussian, identification issues can arise. These issues result in local minima in the objective function, which correspond to parameter values associated with incorrect causal and noncausal orders. Then, depending on the starting point and the optimization algorithm employed, the algorithm can converge to a local minimum. The paper proposes the Simulated Annealing (SA) optimization algorithm as an alternative to conventional numerical optimization methods. The results demonstrate that SA performs well in its application to mixed causal and noncausal models, successfully eliminating the effects of local minima. The proposed approach is illustrated by an empirical study of a bivariate series of commodity prices.
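As a rough illustration of why SA can escape the local minima described above, here is a generic annealing loop over continuous parameters with Gaussian proposals. The objective used is a toy multimodal function standing in for a GCov-type criterion; the name `objective` and all tuning constants are assumptions made for this sketch, not part of the paper.

import numpy as np

def simulated_annealing(objective, x0, T0=1.0, Tmin=1e-3, alpha=0.9,
                        step=0.5, moves_per_T=200, rng=None):
    """Generic SA for a continuous objective that may have local minima.

    `objective` maps a parameter vector to a scalar to be minimised; here it is only a
    stand-in for an estimation criterion such as a GCov-type objective function."""
    rng = np.random.default_rng() if rng is None else rng
    x = np.asarray(x0, dtype=float)
    fx = objective(x)
    best_x, best_f = x.copy(), fx
    T = T0
    while T > Tmin:
        for _ in range(moves_per_T):
            cand = x + rng.normal(scale=step * T, size=x.shape)   # smaller moves as T drops
            fc = objective(cand)
            if fc <= fx or rng.random() < np.exp(-(fc - fx) / T):
                x, fx = cand, fc
                if fx < best_f:
                    best_x, best_f = x.copy(), fx
        T *= alpha
    return best_x, best_f

# toy multimodal criterion with a misleading local minimum near +1 and a global one near -1
crit = lambda p: (p[0] ** 2 - 1.0) ** 2 + 0.3 * p[0] + p[1] ** 2
print(simulated_annealing(crit, x0=[1.5, 1.5], rng=np.random.default_rng(1)))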
... Originally proposed by Kirkpatrick, Gelatt Jr, and Vecchi (1983), SA draws inspiration from the solid annealing process to address optimization problems. Over the years, SA has shown remarkable success in solving complex optimization problems in various fields, including computer (VLSI) design, image processing, molecular physics, and chemistry (see, for example, Wong, Leong, and Liu (2012), Carnevali, Coletti, and Patarnello (1987), Jones (1991), and Pannetier, Bassas-Alsina, Rodriguez-Carvajal, and Caignaert (1990)). This paper shows that the SA algorithm significantly improves the optimization of the estimator GCov in mixed causal-noncausal autoregressive models, ensuring accurate parameter estimates and correct inference on autoregressive orders. ...
... The reference map of classes is produced by the model (1), which describes in a probabilistic manner any prior knowledge or expectation about their spatial configuration. Examples of such models are random geometric features, as in Carnevalli et al. (1985), a discrete Markov Random Field such as the Potts model (Frery et al., 2007; Moser et al., 2013), cellular automata (Yadav and Ghosh, 2021), or any suitable and tractable specification. Denote by 1 the outcome of (1) at the first epoch. ...
... Since its introduction as an optimisation tool by Kirkpatrick et al. (1983), it has proven very useful and has shown efficiency in handling optimisation problems including graph partitioning and colouring, route planning, layout design, sequencing and scheduling, timetabling and signal processing (see Carnevali et al., 1985; Sechen et al., 1988; Johnson et al., 1989; Ogbu & Smith, 1990; Abramson, 1991; Johnson et al., 1991; Thompson & Dowsland, 1998; Burke & Kendall, 1999; Tian et al., 1999; Liu, 1999; Chen & Luk, 1999; Bouleimen & Lecocq, 2003). Details on the algorithms and other applications can be found in Dowsland (1995) and Henderson et al. (2003). ...
Thesis
Full-text available
This research work focused on the performance of heuristics and metaheuristics for the recently defined Hostel Space Allocation Problem (HSAP), a new instance of the space allocation problem (SAP) in higher institutions of learning (HIL). SAP is a combinatorial optimisation problem that involves the distribution of the spaces available amongst a set of deserving entities (rooms, bed spaces, office spaces, etc.) so that the available spaces are optimally utilized while complying with the given set of constraints. HSAP deals with the allocation of bed spaces in available but limited halls of residence to competing groups of students such that the given requirements and constraints are satisfied as much as possible. The problem was recently introduced in the literature and a preliminary, baseline solution using a Genetic Algorithm (GA) was provided to show the viability of heuristics in solving the problem rather than resorting to the usual manual processing. Since the administration of hostel space allocation varies across institutions, countries and continents, the available instance is defined as obtained from a top institution in Nigeria. This instance is the point of focus for this research study. The main aim of this thesis is to study the strength and performance of some Local Search (LS) heuristics in solving this problem. In the process, some hybrid techniques that combine both population-based and LS heuristics in providing solutions are derived. This enables a comprehensive comparative study aimed at determining which heuristics and/or combination performs best for the given problem. HSAP is a multi-objective and multi-stage problem. Each stage of the allocation has different requirements and constraints. An attempt is made to formulate the problem as an optimisation problem and then to provide various inter-related heuristics and meta-heuristics to solve it at the different levels of the allocation process. Specifically, Hill Climbing (HC), Simulated Annealing (SA), Tabu Search (TS), Late Acceptance Hill Climbing (LAHC) and GA were applied to distribute the students at all three levels of allocation. At each level, a comparison of the algorithms is presented. In addition, variants of the algorithms were run from a multi-objective perspective, yielding promising and better solutions than the results obtained from the manual method used by the administrators of the institutions. Comparisons and analyses of the results obtained from the above methods were carried out. Obtaining datasets for HSAP is a very difficult task as most institutions either do not keep proper records of past allocations or are not willing to make such records available for research purposes. The only dataset available, which is also used for simulation in this study, is the one recently reported in the literature. However, to test the robustness of the algorithms, two new datasets that follow the pattern of the known dataset obtained from the literature were randomly generated. Results obtained with these datasets further demonstrate the viability of applying tested operations research techniques to efficiently solve this new instance of SAP.
... The method of simulated annealing has been reported to be used in power distribution systems (Chiang et al., 1990), optics (Kim, 1990), image processing (Carnevali, 1985), electronics, and biochemistry (Prabhakaran, 1985). It was reported to have successfully solved the travelling salesman problem and the global wiring problem of silicon chips. ...
Thesis
About air duct design optimizations
... In the design of the CGHs, we used a simulated annealing algorithm for optimizing the CGHs. In this algorithm, the CGH phase is gradually altered to obtain the desired far-field pattern [30]. The pattern distribution in the far field is f(x, y) = |f(x, y)| e^(−jΦ(x, y)). ...
Conference Paper
Full-text available
The achievable data rate in indoor wireless systems that employ visible light communication (VLC) can be limited by multipath propagation. Here, we use computer generated holograms (CGHs) in VLC system design to improve the achievable system data rate. The CGHs are utilized to produce a fixed broad beam from the light source, selecting the light source that offers the best performance. The CGHs direct this beam to a specific zone on the room's communication floor where the receiver is located. This reduces the effect of diffuse reflections, consequently decreasing the intersymbol interference (ISI) and enabling the VLC indoor channel to support higher data rates. We consider two settings to examine our proposed VLC system and consider lighting constraints. We evaluate the performance in idealistic and realistic room settings in a diffuse environment with up to second order reflections and also under mobility. The results show that using the CGHs enhances the 3 dB bandwidth of the VLC channel and improves the received optical power.
... In the design of the CGHs, we used a simulated annealing algorithm for optimizing the CGHs. In this algorithm, the CGH phase is gradually altered to obtain the desired far-field pattern [30]. The pattern distribution in the far field is f(x, y) = |f(x, y)| e^(−jΦ(x, y)). ...
Preprint
Full-text available
The achievable data rate in indoor wireless systems that employ visible light communication (VLC) can be limited by multipath propagation. Here, we use computer generated holograms (CGHs) in VLC system design to improve the achievable system data rate. The CGHs are utilized to produce a fixed broad beam from the light source, selecting the light source that offers the best performance. The CGHs direct this beam to a specific zone on the room's communication floor where the receiver is located. This reduces the effect of diffuse reflections, consequently decreasing the intersymbol interference (ISI) and enabling the VLC indoor channel to support higher data rates. We consider two settings to examine our proposed VLC system and consider lighting constraints. We evaluate the performance in idealistic and realistic room settings in a diffuse environment with up to second order reflections and also under mobility. The results show that using the CGHs enhances the 3 dB bandwidth of the VLC channel and improves the received optical power.
... • Glover and Laguna [GL97]: a metaheuristic refers to a master strategy that guides and modifies other heuristics to produce solutions beyond those normally generated in a search for local optimality. Metaheuristics include, but are not limited to, the following methods: constraint logic programming [LMY87], genetic algorithms [Hol92], neural networks [GM88], simulated annealing [CCP85], the ant colony algorithm [DMC96], tabu search [GL97], differential evolution [SP96], the greedy randomized adaptive search procedure [FR89], iterated local search [CD06], variable neighborhood search [MH97], guided local search [VT99], and the estimation of distribution algorithm [MP96]. • Effectiveness: it is preferable that a good metaheuristic be able to find the optimal solution for a large number of problems. • Efficiency: a good metaheuristic should find the final solution within a reasonable amount of computation time. ...
Thesis
Full-text available
Since the extraordinary technical revolution from analog to digital at the end of the 20th century, digital documents have become increasingly used because of their inexpensive and extremely fast diffusion. However, this passage from analog to digital has not been made without causing anxiety in terms of copyright. Unauthorized people can appropriate digital documents to make profits at the expense of the legitimate owners of the initial rights, since their contents can be easily copied, modified and distributed without risk of being altered. In this context, in the early 1990s a new technique was introduced which is based mainly on cryptography and steganography; it consists in inscribing a watermark into a digital document. This technique is called digital watermarking (in French, tatouage numérique). This thesis presents five different contributions relative to digital watermarking and to image processing. The first contribution consists of two new solutions to the problem of false positive detection of the watermark observed in some watermarking algorithms based on singular value decomposition. One of the proposed solutions is based on hash functions while the other uses image encryption. The second contribution is a new image encryption algorithm based on the principle of the Rubik's cube. The third contribution is the design of a new watermarking algorithm based on the lifting wavelet transform and singular value decomposition. A single scaling factor is used to control the strength of the embedded watermark and thus allows for the best compromise between watermark robustness and imperceptibility. However, using multiple scaling factors, instead of a single scaling factor, is more efficient [CKLS97]. Determining the optimal values of multiple scaling factors is a very difficult and complex problem. In order to find those optimal values, we studied a multi-objective genetic algorithm optimization (MOGAO) and a multi-objective ant colony optimization (MOACO); these are considered the fourth and fifth contributions of this thesis.
... In this paper, a simulated annealing algorithm was used to optimize the output of the CGH, where the phase of the CGH is gradually changed to obtain the desired far-field pattern [39]. The distribution of the pattern in the far field is f(x, y) = |f(x, y)| e^(jΦ(x, y)). ...
Article
Full-text available
In this paper, we propose, design and evaluate two indoor visible light communication (VLC) systems based on computer generated holograms (CGHs); a simple static CGH-VLC system and an adaptive CGH-VLC system. Each transmitter is followed by the CGH, and this CGH is utilized to direct part of the total power from the best transmitter and focus it to a specific area on the communication floor. This leads to a reduction in inter-symbol interference (ISI) and an increase in the received optical power, which enables higher data rates with a reliable connection. In the static CGH-VLC system, the CGH generates 100 beams (all these beams carry the same data) from the best transmitter and directs these beams to an area of 2 m × 2 m on the communication floor. In the adaptive CGH-VLC system, the CGH is used to generate eight beams from the best transmitter and steer these beams to the receiver's location. In addition, each one of these eight beams carries a different data stream. Whereas in the first system a single photodetector is used (added simplicity), an imaging receiver is used in the second one to obtain spatial multiplexing. We consider the lighting constraints where illumination should be at an acceptable level and consider diffuse reflections (up to second order) to find the maximum data rate that can be offered by each system. Moreover, due to the fact that each beam in the adaptive CGH-VLC system conveys a different data stream, co-channel interference (CCI) between beams is taken into account. We evaluate our proposed systems in two different indoor environments: an empty room and a realistic room using simple on-off-keying (OOK) modulation. The results show that the static CGH-VLC system offers a data rate of 8 Gb/s while the adaptive CGH-VLC system can achieve a data rate of 40 Gb/s.
... In this article, a method based on the simulated annealing and "ZKW" algorithms is proposed to deal with an application in such an area. The simulated annealing algorithm, developed by Kirkpatrick et al. [34,35], can efficiently tackle NP-hard problems, with the advantages of avoiding getting trapped in local optima and relying less on the initial solution; therefore, it is widely applied in many areas, such as image processing [36,37], vehicle routing [38,39], production scheduling [40], and machine learning [41]. The "ZKW" algorithm, proposed by ZKW [42], is an efficient and fast minimum-cost flow algorithm. ...
Article
Full-text available
To cope with the facility location problem, a method based on simulated annealing and the "ZKW" algorithm is proposed in this article. The method is applied to some real cases, which aim to deploy video content servers at appropriate nodes in an undirected graph to satisfy the requirements of the consumption nodes at the least cost. Simulated annealing can easily find the optimum with less reliance on the initial solution. The "ZKW" algorithm can find the shortest path and calculate the least cost from the server node to the consumption node quickly. The results of three kinds of cases illustrate the efficiency of our method, which can obtain the optimum within 90 s. A comparison with the Dijkstra and Floyd algorithms shows that, by using the "ZKW" algorithm, the method can perform a large number of iterations in limited time. Therefore, the proposed method is able to solve this video content server location problem.
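A heavily simplified sketch of the annealing layer of such a method is given below. The exact min-cost-flow routing of the "ZKW" algorithm is replaced here by plain Dijkstra shortest-path costs, and the cost model (a fixed server cost plus the demand-weighted distance to the nearest server) is an assumption made only for illustration.

import heapq, math, random

def dijkstra(adj, src):
    """Shortest-path costs from src in an undirected weighted graph {u: [(v, w), ...]}."""
    dist = {src: 0.0}
    pq = [(0.0, src)]
    while pq:
        d, u = heapq.heappop(pq)
        if d > dist.get(u, float("inf")):
            continue
        for v, w in adj[u]:
            nd = d + w
            if nd < dist.get(v, float("inf")):
                dist[v] = nd
                heapq.heappush(pq, (nd, v))
    return dist

def total_cost(adj, servers, consumers, server_cost):
    """Deployment cost plus demand-weighted distance from each consumer to its nearest server."""
    if not servers:
        return float("inf")
    cost = server_cost * len(servers)
    for node, demand in consumers.items():
        dist = dijkstra(adj, node)
        cost += demand * min(dist.get(s, float("inf")) for s in servers)
    return cost

def anneal_placement(adj, consumers, server_cost, T0=50.0, Tmin=0.5, alpha=0.9, moves=200):
    """Anneal over server placements; a move toggles one node in or out of the server set."""
    nodes = list(adj)
    servers = set(random.sample(nodes, 2))          # arbitrary initial placement of two servers
    cost = total_cost(adj, servers, consumers, server_cost)
    T = T0
    while T > Tmin:
        for _ in range(moves):
            cand = set(servers)
            cand.symmetric_difference_update({random.choice(nodes)})
            c = total_cost(adj, cand, consumers, server_cost)
            if c <= cost or random.random() < math.exp(-(c - cost) / T):
                servers, cost = cand, c
        T *= alpha
    return servers, cost

# usage: a toy 6-node ring with unit edge weights and one unit of demand per node
adj = {i: [((i + 1) % 6, 1.0), ((i - 1) % 6, 1.0)] for i in range(6)}
print(anneal_placement(adj, consumers={i: 1.0 for i in range(6)}, server_cost=2.0))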
... This error can be considered to be a cost function. Simulated annealing (SA) is used to minimise the cost function [21]. The phases and amplitudes of every spot are determined by the hologram pixel pattern and are given by its Fourier transform. ...
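A minimal sketch of this kind of CGH design loop is shown below, assuming a phase-only hologram whose far field is modelled by the FFT of exp(jΦ) and a cost equal to the squared error between the normalised far-field intensity and a target spot pattern (assumed to sum to one). Recomputing a full FFT after every single-pixel perturbation is done here only for clarity; these are illustrative assumptions, not the implementation of the cited works.

import numpy as np

def anneal_cgh(target, T0=1.0, Tmin=1e-3, alpha=0.95, moves_per_T=500, rng=None):
    """Phase-only CGH design by simulated annealing.

    The far field of the unit-amplitude hologram exp(j*phi) is modelled by its FFT;
    the cost is the squared error between the normalised far-field intensity and
    the target spot pattern."""
    rng = np.random.default_rng() if rng is None else rng
    N, M = target.shape
    phi = rng.uniform(0.0, 2.0 * np.pi, size=(N, M))

    def cost(p):
        inten = np.abs(np.fft.fft2(np.exp(1j * p))) ** 2
        return float(np.sum((inten / inten.sum() - target) ** 2))

    c = cost(phi)
    T = T0
    while T > Tmin:
        for _ in range(moves_per_T):
            i, j = rng.integers(N), rng.integers(M)
            old = phi[i, j]
            phi[i, j] = rng.uniform(0.0, 2.0 * np.pi)        # perturb one phase pixel
            c_new = cost(phi)                                # full FFT per move, for clarity only
            if c_new <= c or rng.random() < np.exp(-(c_new - c) / T):
                c = c_new
            else:
                phi[i, j] = old                              # reject: restore the pixel
        T *= alpha
    return phi

# usage: aim for two far-field spots on a 16x16 grid
target = np.zeros((16, 16)); target[4, 4] = target[12, 12] = 0.5
phase = anneal_cgh(target, rng=np.random.default_rng(0))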
... we find that the total energy E is that of an isotropic 2D Ising model with initial coupling constant T° = 2, in agreement with the values found by Carnevali et al. [8]. We will see below how to generalize this choice to multi-class images. ...
Article
Full-text available
We show in this paper the deep relationship between classic models from Statistical Physics and Markovian Random Field models used in image labelling. We present as an application a Markovian relaxation method for the enhancement and relaxation of previously classified images. An energy function is defined, which depends only on the labels and on their initial value. The main a priori pixel knowledge results from the confusion matrix of the reference samples used for the initial classification. The energy to be minimized also includes terms ensuring simultaneous spatial label regularity, growth of some classes and disappearance of some others. The method allows, for example, reclassifying pixels previously assigned to the rejection class according to their spatial environment. Finally, we present some results on remote sensing multispectral and geological ore images, comparing the performance of Iterated Conditional Modes (ICM) and Simulated Annealing (SA). Very low CPU time was obtained due to the principle of the method, which works on labels instead of gray levels.
... Frühwirth and Waltenberger [5] generalized the hybrid method further, allowing the shape of the estimator to change as a function of the temperature. SA has been used for image denoising before, but with limited success [16], [17]. Our approach, however, is based on DA and we achieve numerically and visually excellent results. ...
... The hologram is able to modulate only the phase of an incoming wave front, the transmittance amplitude being equal to unity. The analysis used in [27], [28], [51] was used for the design of the CGHs. The hologram H(u, v) is considered to be in the frequency domain. ...
Article
Full-text available
Visible light communication (VLC) systems have typically operated at data rates below 10 Gbps and operation at this data rate was shown to be feasible by using laser diodes (LDs), imaging receivers and delay adaptation techniques (DAT imaging LDs-VLC). However, higher data rates, beyond 10 Gbps, are challenging due to the low signal to noise ratio (SNR) and inter symbol interference (ISI). In this paper, for the first time, to the best of our knowledge, we propose, design and evaluate a VLC system that employs beam steering (of part of the VLC beam) using an adaptive finite vocabulary of holograms in conjunction with an imaging receiver and a delay adaptation technique to enhance SNR and to mitigate the impact of ISI at high data rates (20 Gbps). An algorithm was used to estimate the receiver location, so that part of the white light can be directed towards a desired target (receiver) using beam steering to improve SNR. Simulation results of our location estimation algorithm (LEA) indicated that the required time to estimate the position of the VLC receiver is typically within 224 ms in our system and environment. A finite vocabulary of stored holograms is introduced to reduce the computation time required by LEA to identify the best location to steer the beam to the receiver location. The beam steering approach improved the SNR of the fully adaptive VLC system by 15 dB at high data rates (20 Gbps) over the DAT imaging LDs-VLC system in the worst case scenario. In addition, we examined our new proposed system in a very harsh environment with mobility. The results showed that our proposed VLC system has strong robustness against shadowing, signal blockage and mobility. http://ieeexplore.ieee.org/xpl/articleDetails.jsp?reload=true&arnumber=7307090
... However, because of the finite resolution of the output device and the complex transmittance of the resulting hologram, the reconstruction will be in error. This error can be used as a cost function (CF), which was minimized by simulated annealing [23]. The amplitudes and phases of every spot are determined by the hologram pixels' pattern and are given by its Fourier transform. ...
Article
Full-text available
In this paper, we introduce a new adaptive optical wireless system that employs a finite vocabulary of stored holograms. We propose a fast delay, angle, and power adaptive holograms (FDAPA-Holograms) approach based on a divide and conquer (D&C) methodology and evaluate it with angle diversity receivers in a mobile optical wireless system. The ultimate goal is to increase the signal-to-noise ratio (SNR), reduce the effect of inter-symbol interference, and eliminate the need to calculate the hologram at each transmitter and receiver location. A significant improvement is achieved in the presence of demanding background illumination noise, receiver noise, multipath propagation, mobility, and shadowing typical in a realistic indoor environment. The combination of beam delay, angle, and power adaptation offers additional degrees of freedom in the link design, resulting in a system that is able to achieve higher data rates (5 Gb/s). At a higher data rate of 5 Gb/s and under eye safety regulations, the proposed FDAPA-Holograms system offers around 13 dB SNR with full mobility in a realistic environment where shadowing exists. The fast search algorithm introduced, which is based on the D&C approach, reduces the computation time required to identify the optimum hologram. Simulation results show that the proposed system, FDAPA-Holograms, can reduce the time required to identify the optimum hologram position from 64 ms taken by a classic adaptive hologram to about 14 ms.
... There are many generic SA algorithms; see Kirkpatrick, Gelatt, and Vecchi [37], Geman and Geman [30], Hajek [35], Gidas [31], and the references therein. In addition, this technique has many applications to image processing, such as Carnevali, Coletti, and Patarnello [11]. The term "annealing" is analogous to the cooling of a liquid or solid in a physical system. ...
... There are many generic SA algorithms; see Kirkpatrick, Gelatt, and Vecchi [37], Geman and Geman [30], Hajek [35], Gidas [31], and the references therein. In addition, this technique has many applications to image processing, such as Carnevali, Coletti, and Patarnello [11]. ...
Article
Full-text available
We study minimization of the difference of $\ell_1$ and $\ell_2$ norms as a non-convex and Lipschitz continuous metric for solving constrained and unconstrained compressed sensing problems. We establish exact (stable) sparse recovery results under a restricted isometry property (RIP) condition for the constrained problem, and a full-rank theorem of the sensing matrix restricted to the support of the sparse solution. We present an iterative method for $\ell_{1-2}$ minimization based on the difference of convex functions algorithm (DCA), and prove that it converges to a stationary point satisfying first order optimality condition. We propose a sparsity oriented simulated annealing (SA) procedure with non-Gaussian random perturbation and prove the almost sure convergence of the combined algorithm (DCASA) to a global minimum. Computation examples on success rates of sparse solution recovery show that if the sensing matrix is ill-conditioned (non RIP satisfying), then our method is better than existing non-convex compressed sensing solvers in the literature. Likewise in the magnetic resonance imaging (MRI) phantom image recovery problem, $\ell_{1-2}$ succeeds with 8 projections. Irrespective of the conditioning of the sensing matrix, $\ell_{1-2}$ is better than $\ell_1$ in both the sparse signal and the MRI phantom image recovery problems.
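Schematically, the DCA iteration for the penalised (unconstrained) form of the problem linearises the subtracted $\ell_2$ term at the current iterate, giving roughly the following update (a sketch only; the constrained variants analysed in the paper differ in detail):

\[
x^{k+1} \;=\; \arg\min_{x}\ \tfrac{1}{2}\|Ax-b\|_2^2 \;+\; \lambda\|x\|_1 \;-\; \lambda\,\langle x,\, q^k\rangle,
\qquad
q^k \;=\;
\begin{cases}
x^k/\|x^k\|_2, & x^k \neq 0,\\
0, & x^k = 0.
\end{cases}
\]

Each subproblem is a convex $\ell_1$-regularised least-squares problem, and the simulated annealing stage of DCASA perturbs the iterate between DCA runs to move away from poor stationary points.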
... This chapter presents the concepts related to Markov random fields and their importance for modeling in computer vision. The concepts presented were drawn from Kirkpatrick et al. (1983), Smith et al. (1983, 1985), Carnevalli et al. (1985), Friedland and Adam (1989), Haneishi et al. (1989), Lakshmanan and Derin (1989), Aarts and Korst (1990), Geman et al. (1984, 1990), Gelfand et al. (1992), Bustos and Ojeda (1993), Chellappa and Jain (1993), Frery (1993), Muzzolini et al. (1993) and Li (1995). ...
... where E_d(f, d) is the data energy and denotes the data constraints of the segmentation problem; E_s(f) is the prior energy and denotes the smoothness constraint. The formulated posterior energy can be minimized in different ways, such as iterated conditional modes [13], stochastic gradient descent [14,15], and simulated annealing [16,17]. We used the graph cut approach [4,18] (section 2.3) as the MRF solver to introduce the proposed idea. ...
Article
Full-text available
Graph cut minimization formulates image segmentation as a linear combination of problem constraints. The salient constraints of computer vision problems are data and smoothness, which are combined through a regularization parameter. The main task of the regularization parameter is to determine the weight of the smoothness constraint on the graph energy. However, the difference in the functional forms of the constraints forces the regularization weight to balance an inharmonious relationship between the constraints. This paper proposes a new idea: bringing the data and smoothness terms onto a common base decreases the difference between the constraint functions. Therefore the regularization weight regularizes the relationship between the constraints more effectively. Bringing the constraints onto a common base is carried out through statistical significance measurement. We measure the statistical significance of each term by evaluating the terms according to the other graph terms. Evaluating each term on its own distribution and expressing the cost in the same measurement unit decreases the scale and distribution differences between the constraints and brings the constraint terms onto a similar base. Therefore, the tradeoff between the terms is properly regularized. Naturally, the minimization algorithm produces better segmentation results. We demonstrated the effectiveness of the proposed approach on medical images. Experimental results revealed that the proposed idea regularizes the energy terms more effectively and improves the segmentation results significantly.
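In the notation of the preceding excerpt, the energy being minimised by the graph cut can be summarised as

\[
E(f) \;=\; E_d(f, d) \;+\; \lambda\, E_s(f),
\]

where λ is the regularization weight discussed in the abstract; the statistical-significance transformation aims to put E_d and E_s on comparable scales so that a single λ can balance them meaningfully.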
... Frühwirth and Waltenberger [8] generalized the hybrid method further, allowing the shape of the estimator to change as a function of the temperature. SA has been used for image denoising before, but with limited success [18], [19], [20], [21]. Our approach, however, is based on DA and we achieve numerically and visually excellent results. ...
Article
Full-text available
Image denoising continues to be an active research topic. Although state-of-the-art denoising methods are numerically impressive and approach theoretical limits, they suffer from visible artifacts. While they produce acceptable results for natural images, human eyes are less forgiving when viewing synthetic images. At the same time, current methods are becoming more complex, making analysis and implementation difficult. We propose image denoising as a simple physical process, which progressively reduces noise by deterministic annealing. The results of our implementation are numerically and visually excellent. We further demonstrate that our method is particularly suited for synthetic images. Finally, we offer a new perspective on image denoising using robust estimators.
... The application of SA was extended from discrete problems [108][109][110] to continuous problems by different methods [111][112][113]. The comparison among these methods made by Goffe et al. [114] concluded that the approach suggested by ...
Article
Full-text available
This work systematically investigates the acoustic interaction between an enclosure and resonators, and establishes systematic design tools based upon the interaction theory to optimize the physical characteristics and the locations of resonators. A general theoretical model is first established to predict the acoustic performance of multiple resonators placed in an acoustic enclosure of arbitrary shape. Analytical solutions for the sound pressure inside the enclosure are obtained when a single resonator is installed, which provide insight into the physics of the acoustic interaction between the enclosure and resonators. The theoretical model is experimentally validated, showing the effectiveness and reliability of the theoretical model. Using the validated acoustic interaction model and the analytical solutions, the internal resistance of a resonator is optimized to improve its performance in a frequency band enclosing acoustic resonances. An energy reduction index is defined to conduct the optimization. The dual process of the energy dissipation and radiation of the resonator is quantified. Optimal resistance and its physical effect on the enclosure-resonator interaction are numerically evaluated and categorized in terms of frequency bandwidths. Predictions on the resonator performance are confirmed by experiments. Comparisons with existing models based on different optimization criteria are also performed. It is shown that the proposed model serves as an effective design tool to determine the optimal internal-resistance of the resonator in a chosen frequency band. Due to the multi-modal coupling, the resonator performance is also affected by its location besides its physical characteristics. When multiple resonators are used, the mutual interaction among resonators leads to the requirement of a systematic optimization tool to determine their locations. In the present work, different optimization methodologies are explored. These include a sequential design approach, the simulated annealing algorithm, and the genetic algorithm. Simulations show that three optimization approaches can all achieve good control performance to different extent. Optimization results reveal the existence of multiple optimal location-configurations of the resonator array. These optimal configurations are verified by experiments. Considering the plural nature of the solutions, engineering design criteria are also proposed. The feasibility and usefulness of this research is demonstrated in the double-glazed window application.
... Modeling this dependence is crucial to achieve good classification accuracy. MRF-based classification methods incorporate this spatial dependence and therefore have been successfully used in image restoration (Bustos, Frery and Ojeda (1998); Carnevalli, Coletti and Patarnello (1985); Geman and Geman (1984); Winkler (2006)). ...
Article
Full-text available
When processing synthetic aperture radar (SAR) images, there is a strong need for statistical models of scattering to take into account multiplicative noise. For instance, the classification process needs to be based on the use of statistics. Our main contribution is the choice of an accurate model for SAR images over urban areas and its use in a Markovian classification algorithm. Clutter in SAR images becomes non-Gaussian when the resolution is high or when the area is manmade. Many models have been proposed to fit with non-Gaussian scattering statistics (K, Weibull, Log-normal, etc.), but none of them is flexible enough to model all kinds of surfaces. Frery et al. [IEEE Transactions on Geoscience and Remote Sensing 35 (1997) 648–659] proposed a new class of distributions, $\mathcal{G}$ distribution, arising from the multiplicative model. Classical distributions such as $\mathcal{K}$ are particular cases of this new class. A special case of this class called $\mathcal{G}^{0}$ is shown able to model extremely heterogeneous clutter, such as that of urban areas. The quality of the classification obtained by mixing this model and a Markovian segmentation is high.
Preprint
Full-text available
This paper presented a new comprehensive approach to selecting cutting parameters for surface roughness in the drilling of carbon fiber reinforced polymer composite (CFRP) material. The influence of drilling on the surface quality of woven CFRP materials was investigated experimentally. The CFRP material (0/90° fiber orientation) was drilled at different cutting parameters and the surface roughness of the hole was measured. Drilling tests were carried out using carbide drills of 8 mm in diameter at 50, 70, and 90 m/min cutting speeds; 2, 3 and 4 flute numbers; and 0.2, 0.3, and 0.4 mm/rev feed rates. Simulated Annealing (SA) and Genetic Algorithm (GA) methods were used for optimization. According to the results of the experimental study and the optimization techniques, the optimum cutting parameters giving the best surface quality were obtained.
Chapter
There are some practical means to create multiple diffusing spots. E. Simova et al. combined the diffusing and splitting functions in a single holographic optical element (HOE), while J. Carruthers and J. Kahn used eight laser diodes to produce eight collimated beams. This chapter investigates the use of a computer-generated hologram (CGH) as a beam-splitting element for a multiple-beam transmitter for indoor wireless infrared links. It presents simulation results for the channel parameters when such a transmitter is utilized over a communication link. Comparison with a pure diffuse link is made in order to distinguish the characteristic features of the multispot diffusing configuration (MSDC). A communication link utilizing angle diversity detection with a seven-branch composite receiver is computer simulated, and is compared to the case when a single element wide field-of-view (FOV) receiver is used. The system robustness against shadowing and blockage is also discussed.
Conference Paper
Image processing applications such as object tracking, medical imaging, satellite imaging, face recognition, and segmentation require image denoising as a preprocessing step. A problem with current image denoising methods is the blurring and artifacts introduced after noise removal. Current patch-based denoising methods have good denoising ability, but implementing such methods is difficult. The patch-less Progressive Image Denoising (PID) is a dual-domain image denoising method which progressively reduces the noise in the image at each iteration. It has a simple implementation using robust noise estimation and deterministic annealing, and its results are free of artifacts. It is particularly suited to artificial images, i.e., computer-generated or synthetic images. This paper presents comparative results of PID, Dual-Domain Image Denoising (DDID), and Block Matching and 3D Filtering (BM3D) for both natural and synthetic images contaminated with different levels of AWGN noise.
Conference Paper
The patch-less Progressive Image Denoising (PID) is a physical process for reducing the noise in an image based on deterministic annealing, i.e., the temperature decreases from high to low so that the shape of the kernel changes accordingly. The results of the PID implementation are good for natural images and excellent for computer-generated (artificial or synthetic) images. It estimates the noise using robust noise estimation. The PID algorithm only handles additive white Gaussian noise (AWGN). Using PID requires the original (noise-free) image and the amount of noise added to it. In real scenarios, it is not possible to know the noise level present in an image. This paper gives an approach to automatically estimate the noise level in the given input image and then denoise the image using PID. Experimental results demonstrate that the proposed algorithm performs well on both objective and subjective fidelity criteria in image denoising.
Chapter
Many econometric methods, such as nonlinear least squares, the generalized method of moments, and the maximum likelihood method, rely upon optimization to estimate model parameters. Not only do these methods lay the theoretical foundation for many estimators, they are often used to numerically estimate models. Unfortunately, this may be difficult because these algorithms are sometimes touchy and may fail. For the maximum likelihood method, Cramer (1986, p. 77) lists a number of “unpleasant possibilities”: the algorithm may not converge in a reasonable number of steps, it may diverge toward ridiculous values, or even loop through the same point time and again. Also, the algorithm may have difficulty with ridges and plateaus. When faced with such difficulties, the researcher is often reduced to trying different starting values (Cramer, 1986, p. 72 and Finch et al., 1989). Finally, even if the algorithm converges, there is no assurance that it will have converged to a global, rather than a local optimum since conventional algorithms cannot distinguish between them. In sum, there is a poor match between the power of these methods and the algorithms used to implement them.
Chapter
The architecture of an electronic retina for image processing by stochastic relaxation is described. It consists of a 2-D array whose PE implementation uses both analog and digital devices, and is specifically designed to take advantage of an optical random number generator. It is well suited to applications where the input data is a grey-level picture, the output data is a binary picture, and the energy model is a generalization of the Ising model with external field.
Article
In this paper, a novel cooperative particle swarm optimization (CPSO) algorithm which embodies two particle swarms is proposed to alleviate the premature convergence problem of the PSO algorithm. The underlying idea of this approach is to utilize random mutation, multiple swarms, and a hybrid of several heuristic optimization methods. Firstly, an improved PSO (IPSO) is proposed, which adopts a new learning scheme and a random mutation operator. Then, the two swarms execute IPSO independently to maintain the diversity of the populations; after certain iteration intervals, extremal optimization (EO) and simulated annealing (SA) are introduced to the two swarms separately. By cooperatively exchanging information and hybridizing the global exploration ability of PSO, the local exploitation of EO, and SA's statistical promise of delivering a globally optimal solution, the performance of the traditional single-swarm PSO is improved. Simulations on a suite of benchmark functions clearly demonstrate the superior performance of the proposed algorithm in terms of solution quality and convergence time.
Chapter
There are practical advantages for considering the image restoration or superresolution problem in terms of a neural network formalism. An advantage that has been found is the improved performance with respect to ill-conditioning difficulties. There is a large body of empirical evidence that the neural network approach enlarges the basins of attraction of the energy function minima, thus enhancing the chances of finding better solutions and making the final solution less dependent on the starting parameters. This chapter explains the way in which both binary (two-state) and nonbinary image reconstruction algorithms can be implemented on very similar (Hopfield) neural architectures. The image restoration algorithms discussed in the chapter were originally aimed at achieving performance beyond the diffraction limit, but are in fact capable of compensating simultaneously or separately for aberrations induced by the optical components and for the limitations of the detector. The chapter also describes some image restoration or superresolution algorithms that can be implemented on an artificial neural network. Image restoration methods are well known to be ill-conditioned; hence, there is a need to employ regularization techniques.
Article
We study analytical and numerical properties of the \(L_1-L_2\) minimization problem for sparse representation of a signal over a highly coherent dictionary. Though the \(L_1-L_2\) metric is non-convex, it is Lipschitz continuous. The difference of convex algorithm (DCA) is readily applicable for computing the sparse representation coefficients. The \(L_1\) minimization appears as an initialization step of DCA. We further integrate DCA with a non-standard simulated annealing methodology to approximate globally sparse solutions. Non-Gaussian random perturbations are more effective than standard Gaussian perturbations for improving sparsity of solutions. In numerical experiments, we conduct an extensive comparison among sparse penalties such as \(L_0, L_1, L_p\) for \(p\in (0,1)\) based on data from three specific applications (over-sampled discrete cosine basis, differential absorption optical spectroscopy, and image denoising) where highly coherent dictionaries arise. We find numerically that the \(L_1-L_2\) minimization persistently produces better results than \(L_1\) minimization, especially when the sensing matrix is ill-conditioned. In addition, the DCA method outperforms many existing algorithms for other nonconvex metrics.
Article
Color quantization is a common image processing technique where full color images are to be displayed using a limited palette of colors. The choice of a good palette is crucial as it directly determines the quality of the resulting image. Standard quantization approaches aim to minimize the mean squared error (MSE) between the original and the quantized image, which does not correspond well to how humans perceive the image differences. In this article, we introduce a color quantization algorithm that hybridizes an optimization scheme with an image quality metric that mimics the human visual system. Rather than minimizing the MSE, its objective is to maximize the image fidelity as evaluated by S-CIELAB, an image quality metric that has been shown to work well for various image processing tasks. In particular, we employ a variant of simulated annealing with the objective function describing the S-CIELAB image quality of the quantized image compared with its original. Experimental results based on a set of standard images demonstrate the superiority of our approach in terms of achieved image quality.
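The optimisation loop behind such an approach can be sketched as follows. This is not the authors' algorithm: the S-CIELAB fidelity measure is replaced by a plain mean-squared-error cost, and the palette perturbation and cooling parameters are arbitrary choices for the example.

import numpy as np

def anneal_palette(pixels, k=16, T0=10.0, Tmin=0.05, alpha=0.9, moves_per_T=300, rng=None):
    """Search a k-colour palette by simulated annealing.

    `pixels` is an (N, 3) array of RGB values in [0, 255]; the cost is the mean squared
    error of mapping each pixel to its nearest palette colour (a stand-in here for the
    S-CIELAB fidelity measure used in the article)."""
    rng = np.random.default_rng() if rng is None else rng
    palette = pixels[rng.choice(len(pixels), size=k, replace=False)].astype(float)

    def mse(pal):
        d = np.linalg.norm(pixels[:, None, :] - pal[None, :, :], axis=2)
        return float(np.mean(d.min(axis=1) ** 2))

    cost = mse(palette)
    T = T0
    while T > Tmin:
        for _ in range(moves_per_T):
            cand = palette.copy()
            idx = rng.integers(k)
            cand[idx] = np.clip(cand[idx] + rng.normal(scale=8.0, size=3), 0, 255)  # jitter one colour
            c = mse(cand)
            if c <= cost or rng.random() < np.exp(-(c - cost) / T):
                palette, cost = cand, c
        T *= alpha
    return palette, cost

# usage: quantise 500 randomly sampled pixels to 8 colours
pix = np.random.default_rng(0).uniform(0, 255, size=(500, 3))
pal, err = anneal_palette(pix, k=8)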
Conference Paper
Simulated annealing is a combinatorial optimization method based on randomization techniques. The method originates from the analogy between the annealing of solids, as described by the theory of statistical physics, and the optimization of large combinatorial problems. Here we review the basic theory of simulated annealing and recite a number of applications of the method. The theoretical review includes concepts of the theory of homogeneous and inhomogeneous Markov chains, an analysis of the asymptotic convergence of the algorithm, and a discussion of the finite-time behaviour. The list of applications includes combinatorial optimization problems related to VLSI design, image processing, code design and artificial intelligence.
Conference Paper
The paper outlines a unified treatment of the labeling and learning problems for the so-called hidden Markov chain model currently used in many speech recognition systems and the hidden Pickard random field image model (a small but interesting, causal sub-class of hidden Markov random field models). In both cases, labeling techniques are formulated in terms of Baum’s classical forward-backward recurrence formulae, and learning is accomplished by a specialization of the EM algorithm for mixture identification. Experimental results demonstrate that the approach is subjectively relevant to the image restoration and segmentation problems.
Conference Paper
The simulated annealing technique for solving combinatorial problems is applied to: cluster analysis, isomorphisms of attributed relational graphs, piecewise curve fitting, and feature selection. A novel class of clustering algorithms based on simulated annealing is presented. One such algorithm, ALKMEANS, is proposed as a contrast to the commonly used heuristic clustering algorithm KMEANS; test results demonstrate that ALKMEANS is superior to KMEANS. A simulated annealing algorithm, ALISON, is presented for the problem of isomorphisms of relational graphs.
Article
We introduce an optimization algorithm, based on Creutz's microcanonical simulation technique, which has proven very efficient for non-convex optimization tasks associated with image-processing applications. Our algorithm should also constitute a useful heuristic for applications in other domains requiring combinatorial optimization searches.
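For readers unfamiliar with the microcanonical idea, the sketch below shows a Creutz-style "demon" search over a binary vector: instead of a temperature and random acceptance, an energy reservoir pays for uphill moves as long as it stays within fixed bounds. The reservoir bounds, move set, and toy cost are illustrative assumptions, not the algorithm of the article; gradually shrinking the demon's capacity would play the role of cooling.

import numpy as np

def microcanonical_optimize(cost, state, demon_max=4.0, demon0=2.0, steps=20000, rng=None):
    """Creutz-style microcanonical search over a binary vector.

    A bit flip is accepted only if the demon reservoir can absorb or supply the energy
    change while staying within [0, demon_max]; no random acceptance rule is used."""
    rng = np.random.default_rng() if rng is None else rng
    s = state.copy()
    c = cost(s)
    demon = demon0
    best_s, best_c = s.copy(), c
    for _ in range(steps):
        i = rng.integers(len(s))
        s[i] ^= 1                         # flip one bit
        dE = cost(s) - c
        if 0.0 <= demon - dE <= demon_max:
            demon -= dE                   # the demon pays for (or absorbs) the change
            c += dE
            if c < best_c:
                best_s, best_c = s.copy(), c
        else:
            s[i] ^= 1                     # reject: flip the bit back
    return best_s, best_c

# toy usage: recover a hidden 40-bit target from Hamming-distance costs
target = np.random.default_rng(0).integers(0, 2, size=40)
sol, val = microcanonical_optimize(lambda b: float(np.sum(b != target)), np.zeros(40, dtype=int))
print(val)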
Article
In this lecture, a pedagogical introduction is given to the general methodology and practice of the Monte Carlo method. Both the static and the dynamic Monte Carlo method are discussed, and the significance of the Monte Carlo method to the modeling of various large scale problems in science and engineering is emphasized.
Article
A novel approach to photonic crystal fabrication is proposed. It is based on the use of computer-generated holograms (CGHs) for photonic crystal structure definition. CGH allows for defining arbitrary structures, perfect or with any type of defect in a single lithographic step. CGH lithography is applicable to both 2D and 3D structures. Furthermore, a CGH can be made reconfigurable if a liquid crystal is used as a hologram medium. Hence, a single device can be used for fabrication of any photonic structure.
Article
A prototype of a parallel machine for the simulation of two-dimensional Ising spin systems is described. To exploit the maximum possible parallelism there is a processor for each lattice site. A prototype machine with 12 processors has been built. Construction details and performance figures are given.
Article
Colour quantisation is a common image processing application where full colour images are to be displayed using a limited palette. The choice of a good palette is therefore crucial as it directly determines the quality of the resulting image. While standard quantisation algorithms typically employ domain specific knowledge, we apply a novel variant of simulated annealing as a standard black-box optimisation algorithm to the colour quantisation problem. The main advantage of black-box optimisation algorithms is that they do not require any domain specific knowledge yet are able to provide a near optimal solution. The effectiveness of our approach is evaluated by comparing its performance with several specialised colour quantisation algorithms. The results obtained show that even without any domain specific knowledge our SA based algorithm is able to outperform standard quantisation algorithms. To further improve the performance of the algorithm we combine the SA technique with a standard k-means clustering technique. This hybrid quantisation algorithm is shown to outperform all other algorithms and hence to provide images with superior image quality.
Article
Simulated Annealing (Černý 1983, Kirkpatrick et al. 1983) is a technique which allows one to find optimal or near optimal solutions to difficult optimization problems. It has been especially successful in applications to NP-complete or NP-hard problems, which occur in a variety of fields (Garey and Johnson 1979). These include mathematics with many graph problems (e.g. Brelaz 1979, Bonomi and Lutton 1987, Andresen et al. 1988), condensed matter physics, e.g. with the problem of finding the ground state of spin glasses (Ettelaie and Moore 1985), with the problem of solving the Ginzburg-Landau equations (Doria et al. 1989), engineering problems with the design of integrated circuits including the partitioning as well as the wiring problem (Vecchi and Kirkpatrick 1983, Sechen and Sangiovanni-Vincentelli 1985, Siarry et al. 1987), the design of binary sequences with low autocorrelation (Beenker et al. 1985, Bernasconi 1987, 1988), image processing (Carnevali et al. 1985), design of X-ray mirrors (Würtz and Schneider 1989), statistics with the application as a learning paradigm in neural network theory (Bernasconi 1990) and economics for instance with the travelling salesman problem (e.g. Bonomi and Lutton 1984, Kirkpatrick and Toulouse 1985, Hanf et al. 1990). Naturally, these are only some selected examples, since it is not possible here to give references to the few hundred simulated annealing papers which appeared during the last years.
Article
The extrapolation in the space domain of a partially observed low space-bandwidth product (SBP) sequence or, equivalently, the resolution of the Fourier spectra in the frequency domain, in the presence of appreciable noise, is considered. The unknown sequence estimate is based on a number of acquired samples on a given measurement interval and the prior knowledge of the signal frequency bandlimit. Using an approach similar to a Monte-Carlo method, the extrapolated sequence samples are constructed from variably sized elementary grains. The new iterative algorithm, at each iteration step, based on a random-number generator, decides both the sample position to be considered and the sign of a grain that might be added to the current sample value. A sample update in each iteration step is either accepted or rejected in accordance with an appropriate decision rule. While exploiting the extrapolated sequence frequency bandlimit as a constraint, this decision rule is based on a non-increasing l1-norm of a cumulative error vector. As the extrapolated sequence approaches its final form, the elementary grain size is decreased to allow for subtle sample updates. A heuristic schedule for gradual change of the elementary grain size, similar to the temperature schedule of the simulated annealing method, is used. A pre-processing, which compensates for the model inconsistency that is due to either the presence of noise and/or the lack of precision of the linear degradation operator, is also introduced. Furthermore, it is shown that an additional constraint, such as a given signal upper bound, greatly improves the quality of reconstruction. Several simulation examples, for the extrapolation of low-SBP sinusoidal and other arbitrary sequences and in the presence of a high level of noise, are presented.
Article
Metallic artificial index gratings are composed of metallic subwavelength structures and show high diffraction efficiency. In this publication a trade-off between different design parameters is carried out. The discussion is based on an approximate model and numerical results obtained by applying optimization techniques to the rigorous diffraction theory.
Article
It is shown that metallic subwavelength surface structures can be used as wavelength filters. The design principle is explained and a simple explanation of the filtering effect is given. Fabrication issues are discussed.
Article
The error diffusion binarization procedure is adapted to multi-level quantization. The threshold parameters then available have a noticeable influence on the process. Characteristic features of the technique are shown together with experimental results.
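For reference, a multi-level variant of Floyd-Steinberg error diffusion along these lines looks like the sketch below; choosing the nearest output level (i.e., midpoint thresholds) is just one possible threshold parameterisation, not the specific setting whose influence the article studies.

import numpy as np

def error_diffuse_multilevel(img, levels):
    """Quantise a grayscale image in [0, 1] to the given output levels,
    diffusing the quantisation error with Floyd-Steinberg weights."""
    levels = np.asarray(sorted(levels), dtype=float)
    out = img.astype(float).copy()
    H, W = out.shape
    q = np.zeros_like(out)
    for y in range(H):
        for x in range(W):
            old = out[y, x]
            new = levels[np.argmin(np.abs(levels - old))]   # nearest output level
            q[y, x] = new
            err = old - new
            if x + 1 < W:               out[y, x + 1]     += err * 7 / 16
            if y + 1 < H and x > 0:     out[y + 1, x - 1] += err * 3 / 16
            if y + 1 < H:               out[y + 1, x]     += err * 5 / 16
            if y + 1 < H and x + 1 < W: out[y + 1, x + 1] += err * 1 / 16
    return q

# usage: 4-level quantisation of a horizontal ramp
ramp = np.tile(np.linspace(0, 1, 64), (64, 1))
halftoned = error_diffuse_multilevel(ramp, levels=[0, 1/3, 2/3, 1])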
Article
An electronic halftone procedure based on an iterative Fourier transform algorithm is presented. Constraints are introduced in the image and spectrum to affect the binarization. Flexibility to influence the pixel distribution of the binary image is experimentally demonstrated.
Article
Full-text available
There is a deep and useful connection between statistical mechanics (the behavior of systems with many degrees of freedom in thermal equilibrium at a finite temperature) and multivariate or combinatorial optimization (finding the minimum of a given function depending on many parameters). A detailed analogy with annealing in solids provides a framework for optimization of the properties of very large and complex systems. This connection to statistical mechanics exposes new information and provides an unfamiliar perspective on traditional optimization problems and methods.
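The acceptance rule at the heart of this analogy is the Metropolis criterion combined with a slowly decreasing temperature; the geometric schedule shown here is one common choice rather than anything prescribed by the paper:

\[
P(\text{accept a move with energy change } \Delta E) \;=\;
\begin{cases}
1, & \Delta E \le 0,\\
\exp(-\Delta E / T_k), & \Delta E > 0,
\end{cases}
\qquad
T_{k+1} \;=\; \alpha\, T_k,\quad 0 < \alpha < 1 .
\]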
Article
Many types of images are composed of regions in which the gray level is approximately constant. Such images can be smoothed and segmented by constructing piecewise constant functions whose constant parts correspond to the regions. The desired function must be (piecewise) smooth and must also be close to the original image; thus one can regard it as minimizing a two-part cost measure, in which one component measures roughness and the other measures distance, e.g., from the original image. The functions obtained by minimizing the cost with respect to various measures of this type are compared. The method of steepest descent is used for cost minimization. The relationship of this approach to other methods of image smoothing, including relaxation methods, is also discussed.
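Written out, the two-part cost measure for a piecewise-constant approximation u of an image g takes the generic form below, where λ weights roughness against fidelity and ρ is whichever roughness measure a particular method adopts; steepest descent then follows the negative gradient of J:

\[
J(u) \;=\; \sum_{p}\bigl(u_p-g_p\bigr)^2 \;+\; \lambda \sum_{(p,q)\ \text{neighbours}} \rho\bigl(u_p-u_q\bigr),
\qquad
u^{(t+1)} \;=\; u^{(t)} - \eta\,\nabla J\bigl(u^{(t)}\bigr).
\]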
Article
Simulated annealing is a stochastic optimization procedure which is widely applicable and has been found effective in several problems arising in computer-aided circuit design. This paper derives the method in the context of traditional optimization heuristics and presents experimental studies of its computational efficiency when applied to graph partitioning and traveling salesman problems.
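For the traveling salesman application mentioned here, a standard SA formulation uses 2-opt segment reversals as the move set. The sketch below is a generic version with an illustrative cooling schedule, not the experimental setup of the paper.

import math, random

def tour_length(tour, dist):
    return sum(dist[tour[i]][tour[(i + 1) % len(tour)]] for i in range(len(tour)))

def anneal_tsp(dist, T0=10.0, Tmin=1e-2, alpha=0.95, moves_per_T=500, rng=random):
    """Simulated annealing for the TSP with 2-opt (segment reversal) moves."""
    n = len(dist)
    tour = list(range(n))
    rng.shuffle(tour)
    length = tour_length(tour, dist)
    T = T0
    while T > Tmin:
        for _ in range(moves_per_T):
            i, j = sorted(rng.sample(range(n), 2))
            cand = tour[:i] + tour[i:j + 1][::-1] + tour[j + 1:]   # reverse one segment
            cl = tour_length(cand, dist)
            if cl <= length or rng.random() < math.exp(-(cl - length) / T):
                tour, length = cand, cl
        T *= alpha
    return tour, length

# usage: eight random points in the unit square
pts = [(random.random(), random.random()) for _ in range(8)]
print(anneal_tsp([[math.dist(a, b) for b in pts] for a in pts]))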