Article

The Levenberg-Marquardt algorithm: implementation and theory


... Within every iteration, the source parameters, i.e., the moments of the dipoles, are gradually altered, yielding a new solution to the forward problem that is again compared to the measured potentials. Common search methods include the iterative Kozlov-Maz'ya-Fomin (KMF) method [10], genetic algorithms [22], the Newton-Raphson method, Gauss-Newton [23], the Levenberg-Marquardt algorithm [24], Tikhonov regularization [11,25,26], and neural networks [12,27]. The main research effort nowadays is directed toward finding more efficient regularization techniques to increase the accuracy of the above algorithms [9,28-33]. ...
... $V_{\mathrm{epic},m} = \int_{S_{\mathrm{epic}}} \sum_n w_n \tilde{V}_n N^*_m \,\mathrm{d}S$ (24). Having calculated the epicardium nodes' potential, the continuous voltage distribution over the epicardium surface can be estimated using (18). In summary, in order to extract the epicardial potential distribution, the following six steps can generally be identified: ...
... Considering the extracted weighting factor, the epicardium's potential distribution can now be calculated by utilizing Equation (24). ...
Article
Full-text available
An estimation of the electric sources in the heart was conducted using a novel method, based on Huygens’ Principle, aiming at a direct estimation of equivalent bioelectric sources over the heart’s surface in real time. The main scope of this work was to establish a new, fast approach to the solution of the inverse electrocardiography problem. The study was based on recorded electrocardiograms (ECGs). Based on Huygens’ Principle, measurements obtained from the surface of a patient’s thorax were interpolated over the surface of the employed volume conductor model and considered as secondary Huygens’ sources. These sources, being non-zero only over the surface under study, were employed to determine the weighting factors of the eigenfunctions’ expansion, describing the generated voltage distribution over the whole conductor volume. With the availability of the potential distribution stemming from measurements, the electromagnetics reciprocity theorem is applied once again to yield the equivalent sources over the pericardium. The methodology is self-validated, since the surface potentials calculated from these equivalent sources are in very good agreement with ECG measurements. The ultimate aim of this effort is to create a tool providing the equivalent epicardial voltage or current sources in real time, i.e., during the ECG measurements with multiple electrodes.
... Third, the turntable is operated to make the transmitter's observations for each control point the same. The target function is established to calibrate the relationship between the transmitter and the turntable, and the Levenberg-Marquardt algorithm is utilized [18]. Finally, the evaluation results are derived by comparing the reference and scanning angles. ...
... According to the objective function of the space-resection calibration method and the proposed method, the calibration residual of each function was determined by control point restrictions. According to Equation (18), the calibration residual revealed the calibration quality directly for the space-resection method, while Equation (15) reflected the calibration quality for the proposed method. The physical meanings of the two equations were similar, and represented the distance from the control point to the measurement vector formed by the two fanned lasers. ...
Article
Full-text available
Rotary laser scanning measurement systems, such as the workshop measurement positioning system (wMPS), play critical roles in manufacturing industries. The wMPS realizes coordinate measurement through the intersection of multiple rotating fanned lasers. The measurement model of multi-laser plane intersection poses challenges in terms of accurately evaluating the system, making it difficult to establish a standardized evaluation method. The traditional evaluation method is based on horizontal and vertical angles derived from scanning angles, which are the direct observation of wMPS. However, the horizontal- and vertical-angle-based methods ignore the assembly errors of fanned laser devices and mechanical shafts. These errors introduce calculation errors and affect the accuracy of angle measurement evaluation. This work proposes a performance evaluation method for the scanning angle independent of the assembly errors above. The transmitter of the wMPS is installed on a high-precision turntable that provides the angle reference. The coordinates of enhanced reference points (ERP) distributed in the calibration space are measured by the laser tracker multilateration method. Then, the spatial relationship between the transmitter and the turntable is reconstructed based on the high-precision turntable and the good rotational repeatability of the transmitter. The simulation was carried out to validate the proposed method. We also studied the effect of fanned laser devices and shaft assembly errors on horizontal and vertical angles. Subsequently, the calibration results were validated by comparing the residuals with those derived from the space-resection method. Furthermore, the method was also validated by comparing the reference and scanning angles. The results show that the maximum angle measurement error was approximately 2.79″, while the average angle measurement error was approximately 1.26″. The uncertainty (k = 1) of the scanning angle was approximately 1.7″. Finally, the coordinate measurement test was carried out to verify the proposed method by laser tracker. The results show that the average re-scanning error was 2.17″.
... The radar range covers wavelengths from a few millimeters to several meters and is divided into a number of bands, depending on the propagation properties (Table 1). Several bands are used in applications related to military reconnaissance, mainly C (4-8 GHz), X (8-12 GHz), and Ku (12-18 GHz). Thus, the authors focused on the radar properties of materials for these frequency bands. ...
... The leastsq function from SciPy's optimize library was used. leastsq iterates using Jacobians on the difference between the observed data (measurements for x) and a defined non-linear function of the parameters f(x, coefficients), whose sum of squares the least-squares approach minimizes [15]. The mathematical model was used to calculate the Gaussian curve function described by Formula (10): ...
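For readers unfamiliar with this workflow, the minimal sketch below shows how scipy.optimize.leastsq (which wraps MINPACK's Levenberg-Marquardt routine) can fit a Gaussian curve to an attenuation spectrum. The frequency grid, noise level, and coefficient values are illustrative assumptions, not data from the cited study.

import numpy as np
from scipy.optimize import leastsq

# Hypothetical measured attenuation spectrum: frequency in GHz, reflection loss in dB.
freq = np.linspace(4.0, 18.0, 57)
true_curve = -12.0 * np.exp(-((freq - 10.5) ** 2) / (2.0 * 1.8 ** 2))
measured = true_curve + np.random.normal(0.0, 0.2, freq.size)

def gaussian(coeffs, x):
    # Gaussian curve: amplitude, peak position and width are the fitted coefficients.
    a, mu, sigma = coeffs
    return a * np.exp(-((x - mu) ** 2) / (2.0 * sigma ** 2))

def residuals(coeffs, y, x):
    # Difference between observations and the model; leastsq minimises its sum of squares.
    return y - gaussian(coeffs, x)

initial_guess = [-10.0, 10.0, 2.0]   # amplitude (dB), peak position (GHz), width (GHz)
best_fit, ier = leastsq(residuals, initial_guess, args=(measured, freq))
print("amplitude, peak, width:", best_fit)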
Article
Full-text available
The article presents the Gaussian model of the electromagnetic radiation attenuation properties of two resin systems containing 75% or 80% of a carbonyl iron load as an absorber in the 4–18 GHz range. For the attenuation values obtained in the laboratory, mathematical fitting was performed in the range of 4–40 GHz to visualize the full curve characteristics. The simulated curves fitted up to a 0.998 R2 value of the experimental results. The in depth analysis of the simulated spectra allowed a thorough evaluation of the influence of the type of resin, absorber load, and layer thickness on reflection loss parameters such as the maximum attenuation, peak position, half-height width, and base slope of the peak. The simulated results were convergent with the literature findings, allowing a much deeper analysis. This confirmed that the suggested Gaussian model could provide additional information, useful in terms of comparative analyses of datasets.
... 2b), and volumetrically resolves the complex structure of a virtual vessel model (Fig. 4). For all the experiments, NeuDOT outperforms both a classic LM reconstruction [27] and the state-of-the-art CNN method [19]. The novelty of our proposed method lies in the hybridization of the NF-based continuous representation of optical absorbance and the adapted physics-based light propagation modeling using FEM. ...
... Next, the unknown Θ(·) is continuously represented [14] using an MLP network with a large number of weighting parameters Θ. Notably, classic methods like LM and the recently reported end-to-end CNN-based method assume a fixed-dimension solution [19,27]. NeuDOT achieves unprecedented spatial resolution in the reconstructed images and largely removes the over-smoothed effect resulting from the classic methods. ...
Preprint
Full-text available
Light scattering imposes a major obstacle for imaging objects seated deeply in turbid media, such as biological tissues and foggy air. Diffuse optical tomography (DOT) tackles scattering by volumetrically recovering the optical absorbance and has shown significance in medical imaging, remote sensing and autonomous driving. A conventional DOT reconstruction paradigm necessitates discretizing the object volume into voxels at a pre-determined resolution for modelling diffuse light propagation and the resulting spatial resolution of the reconstruction is generally limited. We propose NeuDOT, a novel DOT scheme based on neural fields (NF) to continuously encode the optical absorbance within the volume and subsequently bridge the gap between model accuracy and high resolution. Comprehensive experiments demonstrate that NeuDOT achieves submillimetre lateral resolution and resolves complex 3D objects at 14 mm-depth, outperforming the state-of-the-art methods. NeuDOT is a non-invasive, high-resolution and computationally efficient tomographic method, and unlocks further applications of NF involving light scattering.
... The material removal rate (MRR) and surface roughness (SR) have been predicted using an artificial neural network (ANN) model. The artificial neural network works like the human brain and can represent highly non-linear functions depending on the weight and bias values. For this study, the Levenberg-Marquardt [20] error back-propagation training algorithm (EBPTA) is used. The ANN structure for the present case is shown in Fig. 2. ...
... The first layer is the input layer, which consists of discharge current, pulse time and gap voltage. The middle layer is the hidden layer; here, a single hidden layer with 10 hidden neurons is considered sufficient for satisfactory accuracy [20]. The final layer is the output layer, which contains the MRR and SR. ...
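As a rough illustration of the network topology described above (three inputs, one hidden layer of ten sigmoid neurons, two linear outputs), the sketch below implements only the forward pass in NumPy. The weights are random placeholders, whereas in the cited work they would be fitted with Levenberg-Marquardt backpropagation, and the input values shown are hypothetical.

import numpy as np

rng = np.random.default_rng(0)

# Layer sizes as described: 3 inputs -> 10 sigmoid hidden neurons -> 2 linear outputs.
n_in, n_hidden, n_out = 3, 10, 2

# Placeholder weights and biases; in the cited work these would come from LM training.
W1, b1 = rng.normal(size=(n_hidden, n_in)), np.zeros(n_hidden)
W2, b2 = rng.normal(size=(n_out, n_hidden)), np.zeros(n_out)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def forward(x):
    # x = [discharge current, pulse time, gap voltage]; returns [MRR, SR].
    h = sigmoid(W1 @ x + b1)   # hidden layer, sigmoidal activation
    return W2 @ h + b2         # output layer, linear activation

print(forward(np.array([12.0, 50.0, 40.0])))   # hypothetical process settings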
... The LM method [253] adds a small multiple of the identity matrix to H to eliminate possible singularity. The quadratic approximation error to E(w) is minimized at step t under the constraint that the step length d(t) lies within a trust region, by using the Karush-Kuhn-Tucker (KKT) theorem [243]. ...
... The Jacobian J can be calculated by a modification of BP [254]. σ(t) can also be adapted according to the hook step [253], Powell's dogleg method [243], and other heuristics [243]. The LM method is an efficient algorithm for medium-sized neural networks [254]. ...
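In standard nonlinear least-squares notation (for a network error E(w) = ½‖r‖², with r the residual vector, J its Jacobian, d the step, σ the damping factor, and Δ the trust-region radius; the survey's own symbols may differ), the damping and trust-region view referred to above can be written as

\[
  H \approx J^{\mathsf T} J, \qquad \bigl(J^{\mathsf T} J + \sigma I\bigr)\, d = -\, J^{\mathsf T} r ,
\]

which, for an appropriate σ ≥ 0 given by the KKT conditions, solves the trust-region subproblem

\[
  \min_{d} \; \tfrac{1}{2}\, \lVert r + J\, d \rVert^{2}
  \quad \text{subject to} \quad \lVert d \rVert \le \Delta .
\]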
Article
Full-text available
The single-layer perceptron, introduced by Rosenblatt in 1958, is one of the earliest and simplest neural network models. However, it is incapable of classifying linearly inseparable patterns. A new era of neural network research started in 1986, when the backpropagation (BP) algorithm was rediscovered for training the multilayer perceptron (MLP) model. An MLP with a large number of hidden nodes can function as a universal approximator. To date, the MLP model is the most fundamental and important neural network model. It is also the most investigated neural network model. Even in this AI or deep learning era, the MLP is still among the few most investigated and used neural network models. Numerous new results have been obtained in the past three decades. This survey paper gives a comprehensive and state-of-the-art introduction to the perceptron model, with emphasis on learning, generalization, model selection and fault tolerance. The role of the perceptron model in the deep learning era is also described. This paper provides a concluding survey of perceptron learning, and it covers all the major achievements in the past seven decades. It also serves as a tutorial for perceptron learning.
... was used to control the display of experimental instructions, which were projected onto a display screen placed 100 cm above the heads of participants. One-hundred-sixty-channel MEG data were simultaneously relayed to the MEG data acquisition computer and to a separate MASK processing computer, which calculated coil positions from the MEG signals with an offline localisation algorithm, using a computational approach similar to those used in conventional MEG for localising fiducial coils with respect to MEG sensors (Wilson, 2014), with modifications to optimise performance of the smaller MASK tracking coils (More, 1978; Alves et al., 2016). ...
Article
Full-text available
Introduction Articulography and functional neuroimaging are two major tools for studying the neurobiology of speech production. Until now, however, it has generally not been feasible to use both in the same experimental setup because of technical incompatibilities between the two methodologies. Methods Here we describe results from a novel articulography system dubbed Magneto-articulography for the Assessment of Speech Kinematics (MASK), which is technically compatible with magnetoencephalography (MEG) brain scanning systems. In the present paper we describe our methodological and analytic approach for extracting brain motor activities related to key kinematic and coordination event parameters derived from time-registered MASK tracking measurements. Data were collected from 10 healthy adults with tracking coils on the tongue, lips, and jaw. Analyses targeted the gestural landmarks of reiterated utterances/ipa/ and /api/, produced at normal and faster rates. Results The results show that (1) Speech sensorimotor cortex can be reliably located in peri-rolandic regions of the left hemisphere; (2) mu (8–12 Hz) and beta band (13–30 Hz) neuromotor oscillations are present in the speech signals and contain information structures that are independent of those present in higher-frequency bands; and (3) hypotheses concerning the information content of speech motor rhythms can be systematically evaluated with multivariate pattern analytic techniques. Discussion These results show that MASK provides the capability, for deriving subject-specific articulatory parameters, based on well-established and robust motor control parameters, in the same experimental setup as the MEG brain recordings and in temporal and spatial co-register with the brain data. The analytic approach described here provides new capabilities for testing hypotheses concerning the types of kinematic information that are encoded and processed within specific components of the speech neuromotor system.
... The solution can be computed using an optimization algorithm such as Levenberg-Marquardt or Gauss-Newton [16]. ...
Article
Full-text available
In the Global Navigation Satellite System (GNSS) context, the growing number of available satellites has led to many challenges when it comes to choosing the most-accurate pseudorange contributions, given the strong impact of biased measurements on positioning accuracy, particularly in single-epoch scenarios. This work leverages the potential of machine learning in predicting linkwise measurement quality factors and, hence, optimize measurement weighting. For this purpose, we used a customized matrix composed of heterogeneous features such as conditional pseudorange residuals and per-link satellite metrics (e.g., carrier-to-noise-power-density ratio and its empirical statistics, satellite elevation, carrier phase lock time). This matrix is then fed as an input to a long short-term memory (LSTM) deep neural network capable of exploiting the hidden correlations between these features relevant to positioning, leading to the predictions of efficient measurement weights. Our extensive experimental results on real data, obtained from extensive field measurements, demonstrate the high potential of our proposed solution, which is able to outperform traditional measurement weighting and selection strategies from the state-of-the-art. In addition, we included detailed illustrations based on representative sessions to provide a concrete understanding of the significant gains of our approach, particularly in strongly GNSS-challenged operating conditions.
... In the intermediate layer, the sigmoidal function was used as the activation function, and in the output layer, the linear function was used as the activation function, corresponding to the heat transfer coefficient data. The training algorithm used was that of Levenberg and Marquardt [19]. Equations (9) and (10) give the neuron output in the form $(w_j \cdot f(n)) + b_j$. ... et al. [29], and the present paper, corresponding to pipe diameters of 4.0 mm and 4.8 mm, saturation temperatures from 15 to 50 °C, and mass fluxes from 150 to 1200 kg/(m² s). ...
Article
This study presents an experimental investigation into the condensation heat transfer of R1234yf in a 4.8-mm horizontal tube at various mass flux rates of 150 kg/(m2 s), 200 kg/(m2 s), 250 kg/(m2 s), and 300 kg/(m2 s) and saturation temperatures of 30 °C and 35 °C, covering a vapor quality range of 10–90%. In addition to analyzing the experimental results, comparisons were made with other correlations in the literature. The results showed that the heat transfer coefficient increases with vapor quality, and for the tested temperatures, the average heat transfer coefficient of R1234yf was found to be lower than that of R134a by 28% and 9% for average mass fluxes of 175 kg/(m2 s) and 275 kg/(m2 s), respectively. The study also investigated the influence of mass flux on the heat transfer coefficient. The heat transfer coefficient decreased with an increase in the condensation temperature, being 7% lower for 35 °C than for 30 °C. Additionally, the experimental values of the condensation heat transfer coefficient were compared to those predicted by ten correlations in the literature and a neural network model. The Haraguchi (Trans Jpn Soc Mech Eng 60:2117–2124, 1994b) correlation was found to be the most appropriate for estimating the heat transfer coefficient, despite having lower precision than the neural network model, which provides all necessary details for implementation by potential users.
... Within each I_θi value, the inverse analysis of K_i and σ_Ki was performed using a nonlinear (weighted) least-squares method that incorporates the Levenberg-Marquardt optimization algorithm. The procedure was implemented as a function that returns a vector of (weighted) residuals whose sum of squares is minimized (Bates and Watts, 1988; More, 1978; Bates and Chambers, 1992). To this end, the R programming language (R version 3.5.0; R Core Team, 2022) was employed. ...
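A minimal Python analogue of the workflow described above (the original used an R routine) is sketched below, assuming scipy.optimize.least_squares with method="lm": a function returns the vector of weighted residuals and the optimizer minimises its sum of squares. The two-parameter infiltration law, the observations, and the weights are purely illustrative, not the model or data of the cited study.

import numpy as np
from scipy.optimize import least_squares

# Hypothetical observations: time t (s), cumulative infiltration I_obs (cm), and weights.
t = np.array([10.0, 30.0, 60.0, 120.0, 300.0, 600.0])
I_obs = np.array([0.12, 0.31, 0.55, 0.98, 2.10, 3.90])
weights = np.ones_like(I_obs)          # e.g. inverse measurement variances

def model(params, t):
    # Illustrative two-parameter infiltration law I(t) = S*sqrt(t) + K*t.
    S, K = params
    return S * np.sqrt(t) + K * t

def weighted_residuals(params):
    # Vector of weighted residuals; LM minimises its sum of squares.
    return np.sqrt(weights) * (I_obs - model(params, t))

fit = least_squares(weighted_residuals, x0=[0.01, 0.001], method="lm")
print("estimated S and K:", fit.x)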
Article
The hydraulic characterization of hydrophobic substrates, which strongly depends on the water content, is an understudied topic in the field of soil physics. Based on laboratory experiments, this work presents a physically-based model, denoted as Soil Water Repellency Infiltration Model (SWRIM), to describe the double-slope cumulative infiltration curves commonly observed in hydrophobic substrates, whose behavior is conditioned by the dual hydraulic behavior (under dry and saturated conditions) of the water repellency materials. The experimental setup consists of 1D infiltration tests on hydrophobic substrates and tension of 0 cm, following by a free drainage. The SWRIM model assumes that water flow in the hydrophobic substrate is gravity driven, representing the pore system as a set of disconnected pathways or capillaries with dual behavior. Each capillary is governed by two hydraulic conductivity values corresponding to dry or partially saturated and saturated conditions, Kd and Ks respectively. During infiltration, water flow within each pathway is regulated by Kd, but once the pathways are filled and the substrate is fully saturated, the water flow is governed by Ks, whose value is directly measured from the slope of the drainage curve. In order to incorporate the variability of the pore system, unimodal and bimodal probability distributions of Kd values were considered. The application of the SWRIM model allowed determining the distribution of Kd and the percentage of water-conducting pores, δ, of the hydrophobic layer. Additional determinations of the soil hydraulic conductivity, K, and sorptivity under water repellent conditions were also obtained from the inverse analysis of the initial times of the infiltration test. The proposed model was validated on experimental soil columns of 5 cm internal diameter and 1.0 and 2.5 cm height filled with pine forest organic material, Pine, rosemary leaf litter, RS, and, blond peat, Peat. Overall, convex-to-linear curves with a monotonously increasing infiltration, typical of hydrophobic substrates, were obtained. SWRIM model allowed robust fits of the infiltration curves, showing in all cases a distribution of Kd with a bimodal shape. Pine and Peat presented the largest values of δ. The estimated average Kd was within the same order of magnitude as K values. Overall, the results showed that the proposed physically-based model allows to satisfactory describe and characterize the dynamic hydraulic behavior of hydrophobic soil substrates.
... However, the traditional NDT algorithm uses the Newton method to iterate, and the Hessian matrix of the objective function in its iterative equation is usually difficult to solve, which seriously affects the real-time performance of localization and mapping. Therefore, the traditional NDT algorithm is improved by the Levenberg-Marquardt iterative method [23]. The advantage of this method is that the Jacobian matrix of the objective function is used to approximate the Hessian matrix, which greatly improves the computational efficiency. ...
Article
Full-text available
Driverless technology refers to technology that allows vehicles to drive independently, with the help of a driverless system, without human intervention. The working environment of construction machinery is harsh, and the working conditions are complex. The use of driverless technology can greatly reduce the risk to the operator, reduce labor costs and improve economic benefits. Aiming at the problem that the GPS positioning signal is weak in the working environment of construction machinery and cannot achieve accurate positioning, this paper uses a fusion SLAM algorithm based on improved NDT to realize real-time positioning of the whole vehicle through reconstruction of the scene. Considering that the motion characteristics of crawler construction machinery are different from those of ordinary passenger cars, this paper improves the existing pure pursuit algorithm. Simulations and real vehicle tests show that the algorithm, combined with the fusion SLAM algorithm, can realize the motion control of driverless crawler construction machinery well, track the set trajectory accurately and maintain high robustness. Given that there is no mature locomotion method for driverless crawler construction machinery to draw on, this research lays a foundation for the development of driverless crawler construction machinery.
... The correction factors do not change during iterative computation after being determined. The Levenberg-Marquardt algorithm [65] is used to address the flow of each branch by satisfying the boundary conditions during calculation, as shown in Fig. 10. The detailed process is as follows: ...
Article
Full-text available
Blood flow and pressure calculated using the currently available methods have shown the potential to predict the progression of pathology, guide treatment strategies and help with postoperative recovery. However, the conspicuous disadvantage of these methods might be the time-consuming nature due to the simulation of virtual interventional treatment. The purpose of this study is to propose a fast novel physics-based model, called FAST, for the prediction of blood flow and pressure. More specifically, blood flow in a vessel is discretized into a number of micro-flow elements along the centerline of the artery, so that when using the equation of viscous fluid motion, the complex blood flow in the artery is simplified into a one-dimensional (1D) steady-state flow. We demonstrate that this method can compute the fractional flow reserve (FFR) derived from coronary computed tomography angiography (CCTA). 345 patients with 402 lesions are used to evaluate the feasibility of the FAST simulation through a comparison with three-dimensional (3D) computational fluid dynamics (CFD) simulation. Invasive FFR is also introduced to validate the diagnostic performance of the FAST method as a reference standard. The performance of the FAST method is comparable with the 3D CFD method. Compared with invasive FFR, the accuracy, sensitivity and specificity of FAST is 88.6%, 83.2% and 91.3%, respectively. The AUC of FFR FAST is 0.906. This demonstrates that the FAST algorithm and 3D CFD method show high consistency in predicting steady-state blood flow and pressure. Meanwhile, the FAST method also shows the potential in detecting lesion-specific ischemia.
... is similar to the Levenberg-Marquardt algorithm in form, except that the damping parameter is limited to a positive value close to zero without adjustment [24]. ...
Article
For massive multiple-input multiple-output (MIMO) communication systems, simple linear detectors such as zero forcing (ZF) and minimum mean square error (MMSE) can achieve near-optimal detection performance with reduced computational complexity. However, such linear detectors always involve complicated matrix inversion, which will suffer from high computational overhead in the practical implementation. Due to the massive parallel-processing and efficient hardware-implementation nature, the neural network has become a promising approach to signal processing for the future wireless communications. In this paper, we first propose an efficient neural network to calculate the pseudo-inverses for any type of matrices based on the improved Newton's method, termed as the PINN. Through detailed analysis and derivation, the linear massive MIMO detectors are mapped on PINNs, which can take full advantage of the research achievements of neural networks in both algorithms and hardwares. Furthermore, an improved limited-memory Broyden-Fletcher-Goldfarb-Shanno (L-BFGS) quasi-Newton method is studied as the learning algorithm of PINNs to achieve a better performance/complexity trade-off. Simulation results finally validate the efficiency of the proposed scheme.
... With P(i) and T(i), I_est(i; P, T) is computed from Eq. (2), which depends on P and T. Levenberg-Marquardt (LM) nonlinear optimization [25] is exploited to minimize Eq. (24). It requires initial values. ...
Article
Full-text available
Pan-tilt (PT) camera is an indispensable part of the video surveillance systems due to its rotatable property and low cost. As the primitive output of the PT camera limits its practical applications, an accurate calibration method is required. Previous single point calibration method (SPCM) was presented to estimate angles Pan and Tilt via single control point. For the more intuitive geometric interpretation and more robust performance, we propose a novel single point calibration method (novel SPCM). In this scheme, a nonlinear PT camera function (PT function) is established via a normalization approach. With PT function, calibration problem is converted as the intersection situation of two circles formed by Pan and Tilt. Solutions can be regarded as the intersection points of two circles in 3D space. Theoretical analysis shows that novel SPCM is stable to measurement noise, for it still finds the least-square solutions even if two circles have no intersection. In the simulation experiments, reprojection error of novel SPCM is 32.4% smaller than SPCM for the large noise situation. It is 25.1% faster than SPCM. With the angle smooth strategy, novel SPCM achieves accurate and stable performance in the real data experiment.
... In fact, the uncertainties in the lattice parameters calculated through the real-space method are at least one order of magnitude smaller than the same quantity derived via reciprocal-space mapping (see Table 2). The uncertainty in the a-lattice parameter (Δa) determined through the real-space method is a function of several uncertainty terms. The first terms are found to represent less than 1% of the total uncertainty, while the uncertainties related to the Δ_{h0h̄}^{+/−} and Δ_{000}^{+/−} terms contain around ~99% of the total uncertainty, distributed between the in-plane lattice parameter and the orthogonal direction. An exhaustive description of the calculation of the uncertainties in the lattice parameters is given in S1. ...
Article
Full-text available
In the present work, the importance of determining the strain states of semiconductor compounds with high accuracy is demonstrated. For the matter in question, new software titled LAPAs, the acronym for LAttice PArameters is presented. The lattice parameters as well as the chemical composition of Al1-xInxN and Ge1-xSnx compounds grown on top of GaN- and Ge- buffered c-Al2O3 and (001) oriented Si substrates, respectively, are calculated via the real space Bond’s method. The uncertainties in the lattice parameters and composition are derived, compared and discussed with the ones found via X-ray diffraction reciprocal space mapping. Broad peaks lead to increased centroid uncertainty and are found to constitute up to 99% of the total uncertainty in the lattice parameters. Refraction correction is included in the calculations and found to have an impact of 0.001 Å in the lattice parameters of both hexagonal and cubic crystallographic systems and below 0.01% in the quantification of the InN and Sn contents. Although the relaxation degrees of the nitride and tin compounds agree perfectly between the real and reciprocal-spaces methods, the uncertainty in the latter is found to be 10 times higher. The impact of the findings may be substantial for the development of applications and devices as the intervals found for the lattice match condition of Al1-xInxN grown on GaN templates vary between ~1.8% (0.1675-0.1859) and 0.04% (0.1708-0.1712) if derived via the real- and reciprocal spaces methods.
... The position error of semantic elements caused by trajectory errors can be corrected by using the formula P′ = RP + t, thus achieving alignment of the semantic elements observed by multiple trajectories. To perform iterative optimization, we use the Levenberg-Marquardt algorithm [36]. The algorithm is implemented using the third-party C++ toolkit GTSAM [37]. ...
Article
Full-text available
In this paper, a lightweight, high-definition mapping method is proposed for autonomous driving to address the drawbacks of traditional mapping methods, such as high cost, low efficiency, and slow update frequency. The proposed method is based on multi-source data fusion perception and involves generating local semantic maps (LSMs) using multi-sensor fusion on a vehicle and uploading multiple LSMs of the same road section, obtained through crowdsourcing, to a cloud server. An improved, two-stage semantic alignment algorithm, based on the semantic generalized iterative closest point (GICP), was then used to optimize the multi-trajectories pose on the cloud. Finally, an improved density clustering algorithm was proposed to instantiate the aligned semantic elements and generate vector semantic maps to improve mapping efficiency. Experimental results demonstrated the accuracy of the proposed method, with a horizontal error within 20 cm, a vertical error within 50 cm, and an average map size of 40 Kb/Km. The proposed method meets the requirements of being high definition, low cost, lightweight, robust, and up-to-date for autonomous driving.
... The data-driven model consists of an inverse analysis in which the numerical results are optimised against experimental results. The inverse analysis is performed using the Levenberg-Marquardt method (More 1977), a method for solving non-linear least-squares problems. ...
Conference Paper
Full-text available
The correct assessment of the condition of line infrastructure is of vital importance for modern society. Line infrastructure stimulates economic growth, enables trading operations and connects people. Disruptions to the services provided by line infrastructures (e.g. gas, water, telecommunication, transportation) can have a severe impact on their availability, which can lead to significant economic and societal consequences. This paper proposes a hybrid methodology for the geotechnical assessment of line infrastructure, combining an open-source finite element model with a data-driven approach. This methodology is applied to a section of the Dutch railway network (120 km), for which the long-term displacement caused by train services is computed. The finite element model has been developed with a focus on computational performance, since a network analysis requires a large number of calculations. Axle acceleration measurements of a train are used to optimise the numerical results and improve the prediction of the long-term displacement, illustrating the added value of the proposed methodology. The proposed methodology can easily be adapted to study other line infrastructure applications.
... developed by Elzhov, Mullen, Spiess, and Bolker (2022). The details of how the NLLS works in practice can be found in More (1978). ...
Article
Full-text available
The R package econet provides methods for estimating parameter-dependent network centrality measures with linear-in-means models. Both nonlinear least squares and maximum likelihood estimators are implemented. The methods allow for both link and node heterogeneity in network effects, endogenous network formation and the presence of unconnected nodes. The routines also compare the explanatory power of parameter-dependent network centrality measures with those of standard measures of network centrality. Benefits and features of the econet package are illustrated using data from Battaglini and Patacchini (2018) and Battaglini, Leone Sciabolazza, and Patacchini (2020).
... The output neurons have a linear activation function. The weights are adjusted during training by the Levenberg-Marquardt algorithm (More, 1977). The training target is a minimum RMSE. ...
Thesis
Full-text available
This thesis is written as a cumulative dissertation. It presents methods and results which contribute to an improved understanding of the spatio-temporal variability of fog and fog deposition. The questions to be answered are: When is there how much fog, and where and how much fog is deposited on the vegetation as fog precipitation? Freely available data sets serve as a database. The meteorological input data are obtained from the Climate Data Center (CDC) of the German Meteorological Service (DWD). Station data for temperature, relative humidity and wind speed in hourly resolution are used. In addition, visibility data are used for validation purposes. Furthermore, Global Forest Heights (GFH) data from the National Aeronautics and Space Administration (NASA) are used as vegetation height data. The data from NASA’s Shuttle Radar Topography Mission (SRTM) is used as a digital elevation model. The first publication deals with gap filling and data compression for further calculations. This is necessary since the station density for hourly data is relatively low, especially before the 2000s. In addition, there are more frequent gaps in hourly data than in, for instance, daily data, which can thus be filled. It is shown that gradient boosting (gb) enables high quality gap filling in a short computing time. The second publication deals with the determination of the fog, especially with the liquid water content (lwc). Here the focus is on the correction of measurement errors of the relative humidity as well as methods of spatial interpolation are dealt with. The resulting lwc data for Germany with a temporal resolution of one hour and a spatial resolution of one kilometre, are validated against measured lwc data as well as visibility data of the DWD. The last publication uses the data and methods of the two previous publications. The vegetation and wind speed data are also used to determine fog precipitation from the lwc data. This is validated using data from other publications and water balance calculations. In addition to the measured precipitation, the fog precipitation data are used as an input variable for the modelling. This is also one of the possible applications: To determine precipitation from fog, which is not recorded by standard measuring methods, and thus to make water balance modelling more realistic.
... The Levenberg-Marquardt algorithm (Levenberg, 1944;More, 1977) adaptively varies the parameter updates between the gradient descent update and the Gauss-Newton update: ...
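The update rule itself is not reproduced in the excerpt; the following is a minimal NumPy sketch of the standard adaptive scheme (not the authors' implementation): a large damping factor pushes the step toward gradient descent, a small one toward Gauss-Newton, and the factor is raised or lowered depending on whether the step reduces the misfit. The toy exponential-fit example at the end is purely illustrative.

import numpy as np

def levenberg_marquardt(residual, jacobian, x0, lam=1e-3, n_iter=50, tol=1e-10):
    # Minimise 0.5 * ||r(x)||^2 with an adaptive damping parameter lam.
    x = np.asarray(x0, dtype=float)
    for _ in range(n_iter):
        r, J = residual(x), jacobian(x)
        g = J.T @ r                                  # gradient of 0.5 * ||r||^2
        step = np.linalg.solve(J.T @ J + lam * np.eye(x.size), -g)
        if np.linalg.norm(step) < tol:
            break
        if np.sum(residual(x + step) ** 2) < np.sum(r ** 2):
            x, lam = x + step, lam / 10.0            # accepted: drift toward Gauss-Newton
        else:
            lam *= 10.0                              # rejected: drift toward gradient descent
    return x

# Toy usage: fit y = a * exp(b * t) to synthetic data with a = 2, b = -1.
t = np.linspace(0.0, 2.0, 20)
y = 2.0 * np.exp(-t)
res = lambda p: p[0] * np.exp(p[1] * t) - y
jac = lambda p: np.column_stack([np.exp(p[1] * t), p[0] * t * np.exp(p[1] * t)])
print(levenberg_marquardt(res, jac, x0=[1.0, 0.0]))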
Article
Full-text available
Seismic Moho depth is defined based on a significant seismic velocity change between the crust and the upper mantle. Receiver function studies provide the most reliable estimates of the Moho depth. However, they are limited to some profile strikes to the Zagros collision zone. Here, we present an automated inverse methodology to provide a 3D Moho relief beneath the Zagros collision zone. We use gravity data in a vast square to compute the Moho depth model to refine an initial Moho depth model obtained from shear wave tomography. The depths, estimated by long wavelength shear wave tomography initially derived for the entire Middle East, are considered for the initial depth model. This initial model is refined by using a very popular and robust method for nonlinear inversion accessible through open-source resources in Python. Our modeling results indicate that only a 100 kg/m³ density contrast between the crust and the upper mantle is sufficient to provide the Moho depth values comparable to receiver functions. This is interpreted as the high density and mafic affinity of the lower crust beneath the Zagros collision zone. It appears the lower crust returns to eclogite in the deep part. Alternatively, the cratonic core of the Zagros keel has a small density leading to only a 100 kg/m³ density contrast between the lower crust and the uppermost lithospheric mantle. The depths obtained by our inversion methodology are compared and assessed to other studies in the area of Moho relief modeling.
... What's more, the positions can only be changed near the uniformly distributed control points, to prevent the generation of invalid control volume frames. The optimisation adopts the Levenberg-Marquardt algorithm [46], which neglects the higher-order derivative terms so that the nonlinear least-squares problem is internally transformed into a linear one at each step, reducing the consumption of computing resources and converging quickly. After optimisation, the distribution of the new control points and the resulting configuration expression are shown in Fig. 11. ...
Article
Full-text available
FFD (free-form deformation method) is one of the most commonly used parameterisation methods at present. It places the parameterised objects inside the control volume through coordinate system transformation, and controls the control volume through control points, thus realising the deformation control of its internal objects. Firstly, this paper systematically analyses and compares the characteristics and technical requirements of Bernstein, B-spline and NURBS (non-uniform rational b-splines) basic functions that can be adopted by FFD, and uses the minimum number of control points required to achieve the specified control effect threshold to express the control capability. Aiming at the problem of discontinuity at the right end in the actual calculation of B-spline basis function, a method of adding a small epsilon is proposed to solve it. Then, three basic functions are applied to the FFD parameterisation method, respectively, and the differences are compared from two aspects of the accurate expression of the model and the ability of deformation control. It is found that the BFFD (b-spline free-form deformation) approach owns better comprehensive performance when the control points are distributed correctly. In this paper, the BFFD method is improved, and a p-BFFD (reverse solution points based BFFD) method based on inverse solution is proposed to realise the free distribution of control points under the specified topology. Further, for the lifting body configuration, the control points of the p-BFFD method are brought closer to the airframe forming the EDGE-p-BFFD (edge constraints based p-BFFD) method. For the case in this paper, the proposed EDGE-p-BFFD method not only has fairly high parameterisation accuracy, but also reduces the expression error from 1.01E-3 to 1.25E-4, which is nearly ten times. It can also achieve effective lifting body guideline constraints, and has the ability of local deformation adapting to the configuration characteristics. In terms of the proportion of effective control points, the EDGE-p-BFFD method increases the proportion of effective control points from 36.7% to 50%, and the more control points, the more obvious the proportion increase effect. The new method also has better effect on the continuity of geometric deformation. At the same time, this paper introduces the independent deformation method of the upper and lower surfaces based on the double control body frames, which effectively avoids the deformation coupling problem of the simultaneous change of the upper and lower surfaces caused by the movement of control points in the traditional single control framework.
... Based on the results of the elliptical orbit loading test, "figure-8" pattern loading test, and unidirectional compression-shear loading test, the parameters of the DHI model for the two specimens are calibrated. The seven parameters of the model are optimized by the Levenberg-Marquardt algorithm [42,43]; that is, the model parameters are determined by minimizing the following objective function: ...
Article
High damping rubber bearings (HDRBs) are commonly used in base isolation systems, which can protect the structures from heavy earthquake damage by elongating natural vibration periods of the structures and improving the energy dissipation capacity of the systems. For the HDRBs, the stiffness hardening effect is exhibited at large shear strain, which is favorable to control the displacement of the bearings. Meanwhile, the horizontal hysteretic behavior in the two orthogonal directions is coupled and related to the path history. Here, the influence of the stiffness hardening and bidirectional coupling effects of the bearings on the seismic responses of the isolation system is studied. The full-scale quasistatic cyclic tests of two HDRB specimens with different dimensions were carried out under unidirectional and bidirectional loads. In the numerical analysis, the DHI and the classical Bouc-Wen model are used to simulate the hysteretic behavior of the HDRBs, in which the DHI model can capture the increased damping and stiffness, and bidirectional coupling properties. The parameters of both bearing models are determined according to the test results. Time-history analyses of a six-story reinforced concrete (RC) frame building with an HDRB isolation system are carried out for a suite of earthquake ground motions. The influence on seismic response of superstructure and isolation layer is analyzed with or without considering the hardening effect and bidirectional coupling effect of HDRBs. The results demonstrate that with consideration of the stiffness hardening and coupling effects, the peak responses of the superstructure significantly increase under high-intensity earthquakes. The analytical results suggest that the stiffness hardening and the effect of bidirectional coupling of the HDRBs should be considered in the design for HDRB isolation systems.
... The classical nonlinear optimization methods (e.g., Gauss-Newton, trust-region, and Levenberg-Marquardt type methods) can also be applied to solve (5.4). We refer to Kelley [16], More [27] and Yuan [39] for such references. ...
Article
Full-text available
This paper studies loss functions for finite sets. For a given finite set S, we give sum-of-square type loss functions of minimum degree. When S is the vertex set of a standard simplex, we show such loss functions have no spurious minimizers (i.e., every local minimizer is a global one). Up to transformations, we give similar loss functions without spurious minimizers for general finite sets. When S is approximately given by a sample set T, we show how to get loss functions by solving a quadratic optimization problem. Numerical experiments and applications are given to show the efficiency of these loss functions.
Article
Full-text available
Specifying the role of genetic mutations in cancer development is crucial for effective screening or targeted treatments for people with hereditary cancer predispositions. Our goal here is to find the relationship between a number of cancerogenic mutations and the probability of cancer induction over the lifetime of cancer patients. We believe that the Avrami–Dobrzyński biophysical model can be used to describe this mechanism. Therefore, clinical data from breast and ovarian cancer patients were used to validate this model of cancer induction, which is based on a purely physical concept of the phase-transition process with an analogy to the neoplastic transformation. The obtained values of model parameters established using clinical data confirm the hypothesis that the carcinogenic process strongly follows fractal dynamics. We found that the model’s theoretical prediction and population clinical data slightly differed for patients with the age below 30 years old, and that might point to the existence of an ancillary protection mechanism against cancer development. Additionally, we reveal that the existing clinical data predict breast or ovarian cancers onset two years earlier for patients with BRCA1/2 mutations.
Article
Full-text available
This paper focuses on the computational optimization of RANSAC. We describe the Parallel Efficient Sample Consensus (PESAC) framework that allows efficient utilization of SIMD extensions and provides memory locality due to a special way of storing the input sequence of correspondences and generating a batch of samples per one main loop iteration. It is inspired by the USAC framework and has a block structure capable of implementing most modern RANSAC-based methods. We enhance it with individual blocks of sample and model restrictors that are aimed at the rejection of “bad” samples and model hypothesis before time-consuming model computation and verification blocks. We also provide a detailed description implementing 2D homography estimation problem in PESAC and benchmark the running time on the MIDV-2020 dataset of identity documents. Comparing to naive implementation, we accelerated our framework by 122 times for the document classification task (with a 6 % increase in accuracy) and by 18 times for document tracking (with a 46 % decrease in tracking failure rate) by using both restrictors and vector processing. This version also outperformed a number of USAC implementations from OpenCV-4.6.0 in runtime and accuracy of estimation (3 times faster, 6 % greater accuracy for the classification task, and 2 times faster, 33 % lower failure rate for tracking if comparing with USAC_MAGSAC).
Article
Ground-motion models (GMMs) typically include a source-to-site path model that describes the attenuation of ground motion with distance due to geometric spreading and anelastic attenuation. In contemporary GMMs, the anelastic component is typically derived for use in one or more broad geographical regions such as California or Japan, which necessarily averages spatially variable path effects within those regions. We extend that path modeling framework to account for systematic variations of anelastic attenuation for ten physiographic subregions in California that are defined in consideration of geological differences. Using a large database that is approximately doubled in size for California relative to Next Generation Attenuation (NGA)-West2, we find relatively high attenuation in Coast Range areas (North Coast, Bay area, and Central Coast), relatively low attenuation in eastern California (Sierra Nevada, eastern California shear zone), and state-average attenuation elsewhere, including southern California. As part of these analyses, we find for the North Coast region relatively weak ground motions on average from induced events (from the Geysers), similar attenuation rates for induced and tectonic events, and higher levels of ground-motion dispersion than other portions of the state. The proposed subregional path model appreciably reduces within-event and single-station variability relative to an NGA-West2 GMM for ground motions at large distance (RJB>100 km). The approach presented here can readily be adapted for other GMMs and regions.
Article
When employing traditional low-order approximation equations to forecast the Hopf bifurcation phenomenon in the wake of a circular cylinder at low Reynolds numbers, inaccuracies may arise in estimating the phase. This is due to the fact that, in this transition process, the frequency varies with time. In this paper, we propose a method for analyzing and predicting the vortex shedding behind a cylinder at low Reynolds numbers. The proposed method is based on coordinate transformation and description function and is demonstrated using data from computational fluid dynamics simulation of flow around a cylinder at Reynolds number 100. The resulting governing equations explicitly contain the flow amplitude and implicitly contain the flow frequency. The proposed method is found to have higher accuracy compared to other methods for nonlinear identification and order reduction. Finally, the method is extended to predict nonlinear vortex shedding in the Reynolds number range of 80–200.
Article
Objective: we propose a procedure for calibrating 4 parameters governing the mechanical boundary conditions (BCs) of a thoracic aorta (TA) model derived from one patient with ascending aortic aneurysm. The BCs reproduce the visco-elastic structural support provided by the soft tissue and the spine and allow for the inclusion of the heart motion effect. Methods: we first segment the TA from magnetic resonance imaging (MRI) angiography and derive the heart motion by tracking the aortic annulus from cine-MRI. A rigid-wall fluid-dynamic simulation is performed to derive the time-varying wall pressure field. We build the finite element model considering patient-specific material properties and imposing the derived pressure field and the motion at the annulus boundary. The calibration, which involves the zero-pressure state computation, is based on purely structural simulations. After obtaining the vessel boundaries from the cine-MRI sequences, an iterative procedure is performed to minimize the distance between them and the corresponding boundaries derived from the deformed structural model. A strongly-coupled fluid-structure interaction (FSI) analysis is finally performed with the tuned parameters and compared to the purely structural simulation. Results and conclusion: the calibration with structural simulations allows to reduce maximum and mean distances between image-derived and simulation-derived boundaries from 8.64 mm to 6.37 mm and from 2.24 mm to 1.83 mm, respectively. The maximum root mean square error between the deformed structural and FSI surface meshes is 0.19 mm. This procedure may prove crucial for increasing the model fidelity in replicating the real aortic root kinematics.
Article
Full-text available
A calibration method for line-structured light (LSL) by using a virtual binocular vision system (VBVS) composed of one camera and a front coating plane mirror is promoted in this work. The front coating plane in the VBVS can generate much less coplanarity error in lithographic feature points and remarkably decline the imaging distortion during back coating. An encoded target is proposed to distinguish between real corners and virtual corners (mirrored corners) and achieve high-precision matching between real and virtual corners when the target is occluded during the VBVS calibration. A parameter optimization method based on 3D constraints is presented in the work to obtain accurate structural parameters and thus guarantee precise reconstruction of the LSL. Moreover, the laser stripe and its mirrored image meet the auto-epipolar constraint. Therefore, the matching between the real and virtual stripes can be realized based on the vanish point. The performance of our method is verified in the experiments.
Article
Full-text available
Periodic structures are often found in various areas of nanoscience and nanotechnology with many of them being used for metrological purposes either to calibrate instruments, or forming the basis of measuring devices such as encoders. Evaluating the period of one or two-dimensional periodic structures from topography measurements, e.g. performed using scanning probe microscopy (SPM), can be achieved using different methodologies with many grating evaluation methods having been proposed in the past and applied to a handful of examples. The optimum methodology for determining the grating period/pitch is not immediately obvious. This paper reports the results of extensive large-scale simulations and analysis to evaluate the performance of both direct and Fourier space data processing methods. Many thousands of simulations have been performed on a variety of different gratings under different measurement conditions and including the simulation of defects encountered in real life situations. The paper concludes with a summary of the merits and disadvantages of the methods together with practical recommendations for the measurements of periodic structures and for developing algorithms for processing them.
Article
Purpose: Previous work used phantoms to calibrate the nonlinear relationship between the gadolinium contrast concentration and the intensity of the magnetic resonance imaging signal. This work proposes a new nonlinear calibration procedure without phantoms and considers the variation of contrast agent mass minimum combined with the multiple input blood flow system. This also proposes a new single-input method with meaningful variables that is not influenced by reperfusion or noise generated by aliasing. The reperfusion in the lung is usually neglected and is not considered by the indicator dilution method. However, in cases of lung cancer, reperfusion cannot be neglected. A new multiple input method is formulated, and the contribution of the pulmonary artery and bronchial artery to lung perfusion can be considered and evaluated separately. Methods: The calibration procedure applies the minimum variation of contrast agent mass in 3 different regions: (1) pulmonary artery, (2) left atrium, and (3) aorta. It was compared with four dimensional computerized tomography with iodine, which has a very high proportional relationship between contrast agent concentration and signal intensity. Results: Nonlinear calibration was performed without phantoms, and it is in the range of phantom calibration. It successfully separated the contributions of the pulmonary and bronchial arteries. The proposed multiple input method was verified in 6 subjects with lung cancer, and perfusion from the bronchial artery, rich in oxygen, was identified as very high in the cancer region. Conclusions: Nonlinear calibration of the contrast agent without phantoms is possible. Separate contributions of the pulmonary artery and aorta can be determined.
Article
Full-text available
MRI T2 mapping sequences quantitatively assess tissue health and depict early degenerative changes in musculoskeletal (MSK) tissues like cartilage and intervertebral discs (IVDs) but require long acquisition times. In MSK imaging, small features in cartilage and IVDs are crucial for diagnoses and must be preserved when reconstructing accelerated data. To these ends, we propose region of interest-specific postprocessing of accelerated acquisitions: a recurrent UNet deep learning architecture that provides T2 maps in knee cartilage, hip cartilage, and lumbar spine IVDs from accelerated T2-prepared snapshot gradient-echo acquisitions, optimizing for cartilage and IVD performance with a multi-component loss function that most heavily penalizes errors in those regions. Quantification errors in knee and hip cartilage were under 10% and 9% from acceleration factors R = 2 through 10, respectively, with bias for both under 3 ms for most of R = 2 through 12. In IVDs, mean quantification errors were under 12% from R = 2 through 6. A Gray Level Co-Occurrence Matrix-based scheme showed knee and hip pipelines outperformed state-of-the-art models, retaining smooth textures for most R and sharper ones through moderate R. Our methodology yields robust T2 maps while offering new approaches for optimizing and evaluating reconstruction algorithms to facilitate better preservation of small, clinically relevant features.
Article
Full-text available
Eight global and eight local optimization methods were used to calibrate the HBV-TEC hydrological model on the upper Toro river catchment in Costa Rica for four calibration periods (4, 8, 12 and 16 years). To evaluate their sensitivity to getting trapped in local minima, each method was tested against 50 sets of randomly generated initial model parameters. All methods were then evaluated in terms of optimization performance and computational cost. Results show comparable performance among the various global and local methods, whose results are highly correlated with one another. Nonetheless, local methods are in general more sensitive to getting trapped in local minima, irrespective of the duration of the calibration period. Performance of the various methods appears to be independent of the total number of model calls, which may vary by several orders of magnitude depending on the selected optimization method. The selection of an optimization method is therefore largely driven by its efficiency and by the available computational resources, regardless of whether it belongs to the global or local class.
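The experimental design, restarting a local optimizer from many random initial parameter sets and comparing it with a global run, can be sketched as below; a standard multimodal test function stands in for the hydrological calibration objective, and SciPy's Nelder-Mead and differential evolution stand in for the sixteen methods actually compared.

# Illustrative sketch of the sensitivity test, not the HBV-TEC study itself.
import numpy as np
from scipy.optimize import minimize, differential_evolution

def objective(p):                       # surrogate "model error" with many local minima
    return np.sum(p ** 2 - 10.0 * np.cos(2.0 * np.pi * p) + 10.0)

bounds = [(-5.12, 5.12)] * 2
rng = np.random.default_rng(0)

local_best = [minimize(objective, rng.uniform(-5.12, 5.12, 2), method="Nelder-Mead").fun
              for _ in range(50)]                       # 50 random initial parameter sets
global_best = differential_evolution(objective, bounds, seed=0).fun

trapped = sum(f > 1e-3 for f in local_best)             # runs that missed the global optimum
print(f"local runs trapped: {trapped}/50, global optimum found: {global_best:.2e}")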
Preprint
Full-text available
Separating the contributions of shallow and deep portions of the crust to the Bouguer anomaly is a long-standing challenge. Several approaches, including filtering of the data, have been attempted; filtering outcomes, however, are strongly disputed because the choice of cut-off wavelength is debatable. Here, we develop a novel strategy to separate the contributions of shallow and deep crustal structures in the Bouguer anomaly. The Moho relief is estimated by inversion of the Bouguer anomalies. The gravity effect of the volume of mass between the estimated Moho and the ground surface is computed by parametrizing that volume with different meshes (tensor, quadtree and octree); the octree mesh is selected as the best after assessing the different meshing results visually and statistically. This gravity effect is then subtracted from the Bouguer anomalies to obtain Moho-free Bouguer anomalies, which are inverted to obtain the uppermost density contrast, a proxy for sedimentary thickness and/or magmatic intrusions. The inversions are carried out with a popular and robust method for non-linear problems, sparse norm inversion, accessible through SimPEG (Simulation and Parameter Estimation in Geophysics) in Python. Importantly, the inversion process needs neither an initial geometry model nor an initial density contrast and is completely automatic.
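A greatly simplified sketch of the decomposition step (the study uses full 3-D octree-mesh forward modelling in SimPEG; here the crustal column beneath each station is approximated as an infinite slab of assumed density, and the Moho depths and anomalies are synthetic): the gravity effect of the surface-to-Moho volume is subtracted from the Bouguer anomaly to leave a residual attributable to shallow density contrasts.

# Simplified sketch only: infinite-slab stand-in for the 3-D forward model.
import numpy as np

G = 6.674e-11                 # gravitational constant, m^3 kg^-1 s^-2
RHO_CRUST = 2670.0            # assumed mean crustal density, kg/m^3

def slab_effect_mgal(thickness_m, density=RHO_CRUST):
    """Gravity effect of an infinite horizontal slab, in mGal (1 m/s^2 = 1e5 mGal)."""
    return 2.0 * np.pi * G * density * thickness_m * 1e5

# Synthetic profile: Bouguer anomaly and estimated Moho depth at five stations.
moho_depth_m = np.array([30e3, 32e3, 35e3, 33e3, 31e3])
bouguer_mgal = np.array([-120.0, -135.0, -160.0, -142.0, -128.0])

crustal_effect = slab_effect_mgal(moho_depth_m)
# Remove only the spatial variation of the deep (Moho) contribution, keeping the level.
moho_free = bouguer_mgal - (crustal_effect - crustal_effect.mean())
print(moho_free)    # residual to be inverted for shallow density contrasts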
Article
Combined with the terrain observation by progressive scans (TOPS) synthetic aperture radar (SAR) mode, the near-nadir SAR (N-SAR) system has great potential for surface water and ocean topography (SWOT) observation. One practical problem of airborne N-SAR, however, is the non-ideal attitude angle caused by the environment. Normally, the influence of attitude angle error can be treated as a constant squint angle of the system. In fact, for the near-nadir SAR system operating in TOPS mode, the attitude angle error causes a non-ideal two-dimensional space-variant Doppler centroid in the received data and makes traditional imaging algorithms fail. To deal with these problems, an imaging algorithm with attitude angle estimation for the near-nadir TOPS SAR system is proposed in this paper. First, the influence of attitude angle error and the signal properties are analyzed. Second, based on this analysis, an attitude angle estimation algorithm (AAE) utilizing the nonlinear least squares method (NLSM) and a minimum entropy criterion is proposed. Then, based on the estimated attitude angle and without data blocking, a full data imaging algorithm (FDA) is proposed; its main idea is to obtain an unambiguous azimuth spectrum and to design an appropriate range-variant azimuth filter through frequency chirp scaling (FCS). Numerical simulations and real data processing demonstrate that the proposed algorithm performs well in estimating the attitude angle error and in obtaining well-focused images.
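A one-dimensional toy of the minimum-entropy idea (our own sketch; the paper applies it, together with nonlinear least squares, to the 2-D space-variant Doppler centroid of near-nadir TOPS data): the unknown angle is taken as the value whose refocused image has the lowest entropy, found here by a coarse grid search and using a hypothetical quadratic-phase defocus model.

# Toy 1-D minimum-entropy estimation of a defocus parameter.
import numpy as np

n = 512
f = np.fft.fftfreq(n)                                  # normalized frequency axis
true_angle = 0.8                                       # "unknown" attitude parameter
scene = np.zeros(n)
scene[[100, 180, 300, 420]] = 1.0                      # a few point-like targets

def apply_phase(signal, angle):
    # Hypothetical defocus model: quadratic phase error scaled by the angle.
    return np.fft.ifft(np.fft.fft(signal) * np.exp(1j * 2000.0 * angle * f ** 2))

raw = apply_phase(scene, true_angle)                   # simulated mis-focused data

def image_entropy(angle):
    img = np.abs(apply_phase(raw, -angle)) ** 2        # refocus with the candidate angle
    p = img / img.sum()
    return -np.sum(p * np.log(p + 1e-12))

angles = np.linspace(0.0, 2.0, 401)                    # coarse search grid
estimate = angles[np.argmin([image_entropy(a) for a in angles])]
print(estimate)                                        # recovers 0.8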
Conference Paper
Rice husk ash (RHA) is a by-product of the rice paddy milling industry; owing to its rough surface and abrasive nature, it has little tendency to degrade naturally and poses serious disposal problems. Much research is ongoing into the feasibility of putting RHA to better use as a cementitious material and as a partial replacement for cement in concrete. In the current work, an Artificial Neural Network (ANN) is used to predict the compressive strength of concrete containing RHA as a cementitious material. The author's experimental results, together with results taken from the published literature, are used to develop an ANN model for this prediction. Experiments were carried out to determine the cube compressive strength of concrete with five different percentages of RHA as the cementitious material at various ages (3, 7, 28 and 56 days). The ANN model is developed as a function of seven variables: cement content (C), sand content (S), coarse aggregate (CA), rice husk ash (RHA), water (W), superplasticizer content (SP) and the age of concrete at testing (Age). Three different training algorithms available for ANNs in MATLAB (Levenberg–Marquardt, Bayesian regularization and scaled conjugate gradient) were used to construct, train and test networks on the experimental data available in various research works, covering a large range of concrete compressive strengths at various ages. The experimental compressive strengths of the RHA concrete mixes were compared with the values predicted by the ANN model, and the model was found to assess the compressive strength with high accuracy.
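A minimal sketch of the modelling setup, with synthetic placeholder data and scikit-learn's L-BFGS solver standing in for MATLAB's Levenberg-Marquardt and Bayesian-regularization trainers (which scikit-learn does not provide): a small network maps the seven mix variables to compressive strength.

# Sketch only: synthetic mixes, a made-up target rule, and a small MLP regressor.
import numpy as np
from sklearn.neural_network import MLPRegressor
from sklearn.preprocessing import StandardScaler
from sklearn.pipeline import make_pipeline
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(1)
# Columns: C, S, CA, RHA, W, SP, Age (placeholder ranges, not the paper's data).
X = rng.uniform([250, 600, 900, 0, 140, 0, 3],
                [450, 800, 1200, 90, 200, 8, 56], size=(300, 7))
# Placeholder nonlinear "strength" rule, used only so the example runs end to end.
y = 0.12 * X[:, 0] + 0.05 * X[:, 3] - 0.15 * X[:, 4] + 8.0 * np.log(X[:, 6]) \
    + rng.normal(0.0, 2.0, 300)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)
model = make_pipeline(StandardScaler(),
                      MLPRegressor(hidden_layer_sizes=(10,), solver="lbfgs",
                                   max_iter=5000, random_state=0))
model.fit(X_tr, y_tr)
print("R^2 on held-out mixes:", round(model.score(X_te, y_te), 3))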
Article
This article describes a method for calibrating 3-D freehand ultrasound systems using phantoms with parallel wires forming two perpendicular planes, such as the usual general-purpose commercial phantoms. In our algorithm, the phantom pose is co-optimized with the calibration to avoid the need to track the phantom precisely. We provide a geometrical analysis to explain the proposed acquisition protocol. Finally, we estimate the system accuracy and precision from measurements acquired on an independent test phantom, obtaining average error norms of 1.6 mm up to 6 cm of depth and 3.5 mm between 6 and 14 cm of depth. In conclusion, it is possible to calibrate tracked-probe ultrasound systems with reasonable accuracy using a general-purpose phantom. Contrary to most calibration methods, which require building a dedicated phantom, the present algorithm relies on a standard phantom geometry that is commercially available.
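A compact toy of the joint estimation idea (our own sketch, not the authors' full pipeline): the calibration transform and the phantom pose are optimized together so that segmented wire points, mapped through tracking, calibration and phantom pose, fall on the known wire lines of the phantom model. A single wire and 3-D image points are used for brevity, so this toy is deliberately under-constrained; the real method uses multiple wires in two perpendicular planes and 2-D (in-plane) image points.

# Toy joint optimization of calibration transform and phantom pose.
import numpy as np
from scipy.optimize import least_squares
from scipy.spatial.transform import Rotation as R

def to_mat(p):                          # 6 params (rotation vector + translation) -> 4x4
    T = np.eye(4)
    T[:3, :3] = R.from_rotvec(p[:3]).as_matrix()
    T[:3, 3] = p[3:]
    return T

# Known phantom wire (a line along x at y = 10, z = 20 mm) and known tracking poses.
wire_point, wire_dir = np.array([0.0, 10.0, 20.0]), np.array([1.0, 0.0, 0.0])
rng = np.random.default_rng(2)
tracking = [to_mat(rng.normal(0.0, 0.3, 6) * [1, 1, 1, 50, 50, 50]) for _ in range(20)]

# Ground-truth calibration and phantom pose, used only to simulate wire detections.
cal_true = to_mat([0.1, -0.2, 0.05, 12.0, -3.0, 40.0])
phantom_true = to_mat([0.0, 0.1, 0.2, 5.0, 8.0, -15.0])

observations = []                       # synthetic segmented wire points (image frame, mm)
for T in tracking:
    world = phantom_true @ np.append(wire_point + rng.uniform(-30, 30) * wire_dir, 1.0)
    observations.append(np.linalg.solve(T @ cal_true, world)[:3])

def residuals(params):
    cal, phantom = to_mat(params[:6]), to_mat(params[6:])
    res = []
    for T, x in zip(tracking, observations):
        q = np.linalg.solve(phantom, T @ cal @ np.append(x, 1.0))[:3]   # into phantom frame
        res.extend(np.cross(q - wire_point, wire_dir))                  # distance-to-line residual
    return res

sol = least_squares(residuals, np.zeros(12))
print("final RMS wire distance (mm):", np.sqrt(np.mean(np.square(sol.fun))))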