Book

Numerical Recipes: The Art of Scientific Computing

Authors: William H. Press, Saul A. Teukolsky, William T. Vetterling, Brian P. Flannery

Abstract

Do you want easy access to the latest methods in scientific computing? This greatly expanded third edition of Numerical Recipes has it, with wider coverage than ever before, many new, expanded and updated sections, and two completely new chapters. The executable C++ code, now printed in color for easy reading, adopts an object-oriented style particularly suited to scientific applications. Co-authored by four leading scientists from academia and industry, Numerical Recipes starts with basic mathematics and computer science and proceeds to complete, working routines. The whole book is presented in the informal, easy-to-read style that made earlier editions so popular. Highlights of the new material include: a new chapter on classification and inference, Gaussian mixture models, HMMs, hierarchical clustering, and SVMs; a new chapter on computational geometry, covering KD trees, quad- and octrees, Delaunay triangulation, and algorithms for lines, polygons, triangles, and spheres; interior point methods for linear programming; MCMC; an expanded treatment of ODEs with completely new routines; and many new statistical distributions.
... Numerical solutions for these partial differential equations (PDEs) using, for example, a finite difference method [38,34] lead to formulations with a very large number of ordinary differential equations (ODEs) at a global set of spatial grid points. In global models the number of these degrees of freedom (ODEs) may range from 10^8 to 10^10, while for regional models this may still be large, say 10^6 to 10^7. ...
... Even though we can choose any RBF for ψ, the C(χ) is always linear in the weights w_{αq}, c_{αj}, enabling the use of the linear algebra of Ridge Regression or Tikhonov regularization [38] in estimating them. ...
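The weight estimation referred to above reduces to a regularized linear least-squares solve. Below is a minimal Python sketch of a Tikhonov-regularized (ridge) fit of RBF weights on synthetic data; the Gaussian kernel, the data, and all names are illustrative assumptions, not the authors' code.

```python
import numpy as np

# Minimal Tikhonov-regularized (ridge) fit of RBF weights: the model is linear in the
# weights, so the fit is a single regularized linear solve. Data and centers are synthetic.
rng = np.random.default_rng(0)
x = rng.uniform(-1, 1, size=(200, 1))                        # observed inputs
y = np.sin(3 * x[:, 0]) + 0.05 * rng.standard_normal(200)    # observed targets

centers = x[rng.choice(len(x), size=20, replace=False)]      # N_c <= N centers drawn from the data
sigma = 0.3
Phi = np.exp(-((x[:, None, :] - centers[None, :, :]) ** 2).sum(-1) / (2 * sigma**2))

lam = 1e-3                                                   # regularization strength
w = np.linalg.solve(Phi.T @ Phi + lam * np.eye(Phi.shape[1]), Phi.T @ y)
y_hat = Phi @ w                                              # fitted values
```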
... The "dimension reduction' [58] is achieved via a proper orthogonal decomposition approach (another name for principal component analysis [38]), a linear method, and not by using time delay embedding on the regional observations as discussed in this paper. This differs from our work, but the performance meteric is in the same spirit as we discuss here. ...
Preprint
Full-text available
Using data alone, without knowledge of underlying physical models, nonlinear discrete time regional forecasting dynamical rules are constructed employing well tested methods from applied mathematics and nonlinear dynamics. Observations of environmental variables such as wind velocity, temperature, pressure, etc., allow the development of forecasting rules that predict the future of these variables only. A regional set of observations with appropriate sensors allows one to forgo standard considerations of spatial resolution and uncertainties in the properties of detailed physical models. Present global or regional models require specification of details of physical processes globally or regionally, and the ensuing, often heavy, computational requirements provide information on the time variation of many quantities not of interest locally. In this paper we formulate the construction of data driven forecasting (DDF) models of geophysical processes and demonstrate how this works within the familiar example of a 'global' model of shallow water flow on a mid-latitude beta plane. A sub-region of the global flow, where observations are made, is selected. A discrete time dynamical forecasting system is constructed from these observations. DDF forecasting accurately predicts the future of observed variables.
... An exception is the Markov Chain Monte Carlo (MCMC), where an existing library has been used [66]. In most other cases [67] has been used as a primary reference for the mathematical formulation and implementation details, with some adaptations by the author based on the problem specifics. ...
... Multiple such methods for extending this basic definition through further point evaluations or interpolation of the function in the region with polynomials are described in Chapter 5 of [67]. For the purpose of the results in this work, we have used a 4-point evaluation which can be obtained by considering further terms in the Taylor expansion. ...
... Higher-order methods can be constructed in an equivalent way with higher-order interpolations, but these have not been needed in the present work since the number of points has not been a significant restriction and integrals have generally been performed only once after numerical integration of certain functions. For further reference on higher-order methods as well as methods for the evaluation of improper integrals, refer to Chapter 4 of [67]. Section 4.7 explores a different method of evaluating higher dimensional integrals, particularly well suited to statistical inference, which is described further down. ...
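For concreteness, here is a minimal sketch of a composite quadrature rule of the kind discussed in Chapter 4 of [67] (a composite Simpson rule); the integrand and the point count are illustrative assumptions.

```python
import numpy as np

def composite_simpson(f, a, b, n=101):
    """Composite Simpson rule on n (odd) equally spaced points."""
    if n % 2 == 0:
        n += 1                                   # Simpson needs an odd number of points
    x = np.linspace(a, b, n)
    y = f(x)
    h = (b - a) / (n - 1)
    return h / 3 * (y[0] + y[-1] + 4 * y[1:-1:2].sum() + 2 * y[2:-1:2].sum())

# Example: integral of exp(-x^2) on [0, 2]
approx = composite_simpson(lambda x: np.exp(-x**2), 0.0, 2.0)
```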
Thesis
Full-text available
The primary topic of this dissertation is the numerical exploration and constraint of three alternative theories of gravity. The theories explored are Scalar-Tensor Theories (STTs) of gravity, Tensor-Multi-Scalar Theories (TMSTs) of gravity and Gauss-Bonnet (GB) theories. In each case, an entire class of theories has been considered. These have been parametrized by certain coupling functions and numerical parameters. In order to obtain final results and constraints on the numerical values of the parameters, certain physically-founded forms of the functions have been assumed. For each of the theories, numerical methods have been used to integrate the structure equations and obtain solutions for static and in some cases slowly rotating neutron stars. The relevant parameters of the compact objects such as their mass, radius, moment of inertia and in some cases matter interaction characteristics have been extracted from the solutions. Based on these, certain universal relations have been explored, which are largely independent of the Equation of State (EOS) of high-density nuclear matter and can provide insights into the gravitational theory independently of the exact underlying microscopic physics.
... We integrated the equations with the method of lines [59] in the following way: first we discretized the tortoise coordinate z, fixing the boundaries of the domain and the number N of points, then we evolved ϕ_1 and M_2 at each discretized point using the 4th-order Runge-Kutta method [60]. ...
... The spatial derivatives present in the equations were computed using second order accurate finite differences methods [59][60][61]: ...
... 2. we integrated numerically eq. (3.43) with the Simpson rule [60] using the scalar field at t=0 to obtain the initial profile of M_2; ...
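A minimal sketch of the workflow described in these excerpts (method of lines: second-order central differences in space, classical 4th-order Runge-Kutta in time) is given below; the model equation, grid, and initial data are illustrative and are not those of the paper.

```python
import numpy as np

# Method-of-lines sketch for a 1D advection-like equation u_t = c * u_x:
# second-order central differences in space, classical RK4 in time.
N, L, c = 400, 10.0, 1.0
z = np.linspace(-L, L, N)
dz = z[1] - z[0]
u = np.exp(-z**2)                                # illustrative initial profile

def rhs(u):
    dudz = np.zeros_like(u)
    dudz[1:-1] = (u[2:] - u[:-2]) / (2 * dz)     # second-order central difference
    return c * dudz                              # boundary values held fixed for simplicity

dt = 0.4 * dz / c
for _ in range(1000):                            # RK4 time stepping
    k1 = rhs(u)
    k2 = rhs(u + 0.5 * dt * k1)
    k3 = rhs(u + 0.5 * dt * k2)
    k4 = rhs(u + dt * k3)
    u = u + dt / 6 * (k1 + 2 * k2 + 2 * k3 + k4)
```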
Preprint
We study the instability of Schwarzschild black holes and the appearance of scalarized solutions in Einstein-scalar-Gauss-Bonnet gravity performing a time-domain analysis in a perturbative scheme. First we consider a quadratic coupling function and we perform an expansion for a small perturbation of the scalar field around the Schwarzschild solution up to the second order; we do not observe any stable scalarized configuration, in agreement with previous studies. We then consider the cases of quartic and exponential coupling, using an expansion for small values of Newton's constant, in order to include the nonlinear terms introduced by the coupling in the field equations; in this case we observe the appearance of stable scalarized solutions different from those found in the literature. The discrepancy can be an artifact of the perturbative approach.
... The N_c {u_c(q)} are denoted as centers, and they are selected from the observed data; N_c ≤ N. The training of the {c_aj, w_aq} is a linear algebra problem [29]. ...
... which we regularize in a well established way [29]. The details of this are presented in Appendix A. ...
... In these equations we use the parameters given in [38,21]. Data is generated by solving Eq. (6) using a fourth-order Runge-Kutta method [29]. ...
Preprint
Full-text available
Using methods from nonlinear dynamics and interpolation techniques from applied mathematics, we show how to use data alone to construct discrete time dynamical rules that forecast observed neuron properties. These data may come from simulations of a Hodgkin-Huxley (HH) neuron model or from laboratory current clamp experiments. In each case the reduced dimension data driven forecasting (DDF) models are shown to predict accurately for times after the training period. When the available observations for neuron preparations are, for example, membrane voltage V(t) only, we use the technique of time delay embedding from nonlinear dynamics to generate an appropriate space in which the full dynamics can be realized. The DDF constructions are reduced dimension models relative to HH models as they are built on and forecast only observables such as V(t). They do not require detailed specification of ion channels, their gating variables, and the many parameters that accompany an HH model for laboratory measurements, yet all of this important information is encoded in the DDF model. As the DDF models use only voltage data and forecast only voltage data they can be used in building networks with biophysical connections. Both gap junction connections and ligand gated synaptic connections among neurons involve presynaptic voltages and induce postsynaptic voltage response. Biophysically based DDF neuron models can replace other reduced dimension neuron models, say of the integrate-and-fire type, in developing and analyzing large networks of neurons. When one does have detailed HH model neurons for network components, a reduced dimension DDF realization of the HH voltage dynamics may be used in network computations to achieve computational efficiency and the exploration of larger biological networks.
... Motivated by the aforementioned optimization approaches, we simultaneously employ the optimization of initial state preparation, Hamiltonian control and final measurements to achieve the highest estimation precision. Numerical optimization algorithms include the gradient ascent pulse engineering (GRAPE), [68] Newton's and the quasi-Newton method, [69] etc. Due to the slow convergence of the GRAPE algorithm and the computational burden of Newton's method involving the computation of second-order derivatives, we employ the quasi-Newton method in this work, which accelerates the speed of convergence by constructing a matrix approximately equal to the Hessian matrix but using only the first-order derivatives. Through the quasi-Newton method, we investigate the optimal scheme involving the optimal initial state preparation, the optimal controls and the optimal measurements to enhance the precision of estimating the multiple unknown frequencies by measuring the qubit evolving in the oscillating magnetic field. ...
... The quasi-Newton method is a development of Newton's method. [69] The advantage of Newton's method lies in its second-order convergence speed, compared to the steepest descent method, but its disadvantage is the complexity of calculating the Hessian matrix, which involves the second-order derivatives of the optimized function with respect to different parameters. The quasi-Newton method preserves the advantage of Newton's method by constructing an approximate matrix for the Hessian matrix and also maintains a fast convergence speed as the approximate matrix only involves the first-order derivatives of the optimized function, which keeps a good balance between the convergence speed and the computational complexity. ...
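As a concrete illustration of the quasi-Newton approach described above, the sketch below minimizes a stand-in objective with a BFGS routine, which builds its Hessian approximation from gradients only; the objective is an assumption for illustration, not the actual estimation-precision cost of the paper.

```python
import numpy as np
from scipy.optimize import minimize

# Quasi-Newton (BFGS) minimization: the Hessian is approximated from first-order
# derivatives only. The quadratic objective below is an illustrative stand-in.
def cost(x):
    return (x[0] - 1.0) ** 2 + 10.0 * (x[1] + 0.5) ** 2

def grad(x):
    return np.array([2.0 * (x[0] - 1.0), 20.0 * (x[1] + 0.5)])

res = minimize(cost, x0=np.zeros(2), jac=grad, method="BFGS")
# res.x -> approximately [1.0, -0.5]; res.hess_inv holds the accumulated Hessian approximation
```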
Article
Full-text available
Quantum multi-parameter estimation has recently attracted increased attention due to its wide applications, with a primary goal of designing high-precision measurement schemes for unknown parameters. While existing research has predominantly concentrated on time-independent Hamiltonians, little has been known about quantum multi-parameter estimation for time-dependent Hamiltonians due to the complexity of quantum dynamics. This work bridges the gap by investigating the precision limit of multi-parameter quantum estimation for a qubit in an oscillating magnetic field model with multiple unknown frequencies. As the well-known quantum Cramér–Rao bound is generally unattainable due to the potential incompatibility between the optimal measurements for different parameters, we use the most informative bound instead which is always attainable and equivalent to the Holevo bound in the asymptotic limit. Moreover, we apply additional Hamiltonian to the system to engineer the dynamics of the qubit. By utilizing the quasi-Newton method, we explore the optimal schemes to attain the highest precision for the unknown frequencies of the magnetic field, including the simultaneous optimization of initial state preparation, the control Hamiltonian and the final measurement. The results indicate that the optimization can yield much higher precisions for the field frequencies than those without the optimizations. Finally, we study the robustness of the optimal control scheme with respect to the fluctuation of the interested frequencies, and the optimized scheme exhibits superior robustness to the scenario without any optimization.
... The simplest approach is gradient descent, but it might take a long time to reach a point of convergence. Other methods, such as conjugate gradient, have a greater rate of convergence, at least near local minima, but are more complex to implement than gradient descent [11]. ...
... The article demonstrated that a matrix can be factorized in a variety of ways utilizing the NMF models from equations (1), (2), (4), (6), (8), (10), (11), and (12). NMF models such as bilinear, symmetric, semiorthogonal, three factors, orthogonal three-factor, non-smooth, filtering, and multi-layer NMF models are also displayed. ...
Conference Paper
Matrix and tensor factorization are significant for the investigation of non-negative datasets in matrix and higher-order tensor form. Such factorization is known as non-negative matrix (NM) and non-negative tensor decomposition. We investigate non-negative matrix factorization (NMF) with several models in this research. Here, we look at some key models and their equations before looking at two different multiplicative algorithms for NMF. Further, we show that the generalized Kullback-Leibler divergence is minimized by one algorithm while the conventional least-squares error is minimized by another. The NMF algorithm has been studied, as well as certain key theorems.
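A minimal sketch of one of the multiplicative NMF algorithms discussed above is given below (the update rules that decrease the least-squares error ||V - WH||²; the KL-divergence variant uses analogous rules); the data matrix, rank, and iteration count are illustrative assumptions.

```python
import numpy as np

# Multiplicative-update NMF minimizing the Frobenius error ||V - W H||_F^2.
rng = np.random.default_rng(1)
V = rng.random((50, 40))               # non-negative data matrix
r = 5                                  # factorization rank
W = rng.random((50, r))
H = rng.random((r, 40))

eps = 1e-12                            # guards against division by zero
for _ in range(200):
    H *= (W.T @ V) / (W.T @ W @ H + eps)
    W *= (V @ H.T) / (W @ H @ H.T + eps)

reconstruction_error = np.linalg.norm(V - W @ H)
```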
... Way II (Vershkov and Pashkevich, 2023): computation of the most significant systematic (secular) and periodic terms of the geodetic rotation by the methods of numerical integration, least squares, and spectral analysis. The numerical time series of the geodetic rotation velocities of the body under study in the considered rotation parameters (the Euler angles, the perturbing terms of its physical libration, and the absolute magnitude of the angular velocity vector of rotation) are integrated numerically with the 10-point Gauss-Legendre method (Press et al., 1986). As a result, numerical time series of the geodetic rotation values of the body under study (Vershkov and Pashkevich, 2023) are computed in the considered angles and in the absolute magnitude of the angular rotation vector of the geodetic rotation of the body, $\vec{\Lambda} = \int \vec{\sigma}\,dt$. ...
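For illustration, a 10-point Gauss-Legendre rule of the kind cited from Press et al. (1986) can be sketched as follows; the integrand and interval are illustrative assumptions.

```python
import numpy as np

# 10-point Gauss-Legendre quadrature: nodes/weights on [-1, 1], mapped to [a, b].
nodes, weights = np.polynomial.legendre.leggauss(10)
a, b = 0.0, 2.0
x = 0.5 * (b - a) * nodes + 0.5 * (b + a)               # map nodes to [a, b]
integral = 0.5 * (b - a) * np.sum(weights * np.sin(x))  # integral of sin on [0, 2] ~ 1.4161
```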
Article
In the article, the relativistic effect of geodetic nutation in the rotation of Jupiter and its Galilean moons (Io, Europa, Ganymede and Callisto) was investigated in two ways over an 800-year time interval. As a result, for the first time, the most significant periodic terms of the geodetic rotation of these celestial bodies were obtained relative to: a) the barycenter of the Solar system and the plane of the mean orbit of Jupiter of the epoch J2000.0 in the Euler angles, in the disturbing terms of the physical libration and in the absolute magnitude of the angular rotation vector of the geodetic rotation of the body under study; b) (with the exception of Jupiter) the barycenter of the Jupiter satellite system and the mean orbit of the studied satellite of the epoch J2000.0 in the disturbing terms of the physical libration and in the absolute magnitude of the angular rotation vector of the geodetic rotation of the body under study. It is shown that the use of Way II in this study, which uses the numerical integration procedure, is preferable, and the values of the geodetic nutation terms obtained in this way are more accurate. The accuracy level of calculating the parameters (obtained by Way II) for the geodetic nutation values of the studied celestial bodies was 0.1 microarcseconds. The obtained analytical values of geodetic nutation of the studied celestial bodies can be used for numerical investigation of the rotation of these bodies in the relativistic approximation.
... Bilinear interpolation considers the four nearest pixels to the target location and calculates a weighted average based on their values. The weights are determined by the distances between the target location and the surrounding pixels; thereby, an image scaling task can be performed [20]. ...
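A minimal sketch of bilinear interpolation as described above, where the value at a non-integer location is a distance-weighted average of the four surrounding pixels; the image and sample point are illustrative.

```python
import numpy as np

def bilinear(img, x, y):
    """Distance-weighted average of the four pixels surrounding (x, y)."""
    x0, y0 = int(np.floor(x)), int(np.floor(y))
    x1, y1 = min(x0 + 1, img.shape[1] - 1), min(y0 + 1, img.shape[0] - 1)
    dx, dy = x - x0, y - y0
    return ((1 - dx) * (1 - dy) * img[y0, x0] + dx * (1 - dy) * img[y0, x1]
            + (1 - dx) * dy * img[y1, x0] + dx * dy * img[y1, x1])

img = np.arange(16, dtype=float).reshape(4, 4)
value = bilinear(img, 1.5, 2.25)       # interpolated intensity between pixel centers (10.5 here)
```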
Conference Paper
Full-text available
In stochastic computing (SC), data is represented using random bit- streams. The efficiency and accuracy of SC systems rely heavily on the stochastic number generator (SNG), which converts data from binary to stochastic bit-streams. While previous research has shown the benefits of using low-discrepancy (LD) sequences like Sobol and Halton in the SNG, the potential of other well-known random sequences remains unexplored. This study investigates new random sequences for potential use in SC. We find that Van Der Corput (VDC) sequences hold promise as a random number generator for accurate and energy-efficient SC, exhibiting intriguing correlation properties. Our evaluation of VDC-based bit-streams includes basic SC operations (multiplication and addition) and image processing tasks like image scaling. Our experimental results demonstrate high accuracy, reduced hardware cost, and lower energy consumption compared to state-of-the-art methods.
... Similarly, by (60) and (63) we have ...
Preprint
The goal of this paper is to understand how exponential-time approximation algorithms can be obtained from existing polynomial-time approximation algorithms, existing parameterized exact algorithms, and existing parameterized approximation algorithms. More formally, we consider a monotone subset minimization problem over a universe of size $n$ (e.g., Vertex Cover or Feedback Vertex Set). We have access to an algorithm that finds an $\alpha$-approximate solution in time $c^k \cdot n^{O(1)}$ if a solution of size $k$ exists (and more generally, an extension algorithm that can approximate in a similar way if a set can be extended to a solution with $k$ further elements). Our goal is to obtain a $d^n \cdot n^{O(1)}$ time $\beta$-approximation algorithm for the problem with $d$ as small as possible. That is, for every fixed $\alpha,c,\beta \geq 1$, we would like to determine the smallest possible $d$ that can be achieved in a model where our problem-specific knowledge is limited to checking the feasibility of a solution and invoking the $\alpha$-approximate extension algorithm. Our results completely resolve this question: (1) For every fixed $\alpha,c,\beta \geq 1$, a simple algorithm (``approximate monotone local search'') achieves the optimum value of $d$. (2) Given $\alpha,c,\beta \geq 1$, we can efficiently compute the optimum $d$ up to any precision $\varepsilon > 0$. Earlier work presented algorithms (but no lower bounds) for the special case $\alpha = \beta = 1$ [Fomin et al., J. ACM 2019] and for the special case $\alpha = \beta > 1$ [Esmer et al., ESA 2022]. Our work generalizes these results and in particular confirms that the earlier algorithms are optimal in these special cases.
... We want to detect this signal in the presence of the noise that is the sample variance of the full V(τ). This "measurement variance of the variance" for any Gaussian process whose variance σ² is sampled N times, is 2σ⁴/N [40], so ...
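The quoted variance-of-the-sample-variance result, 2σ⁴/N, can be checked with a quick Monte Carlo sketch (all numbers here are illustrative):

```python
import numpy as np

# For a Gaussian process of variance sigma^2 sampled N times, the variance of the
# sample variance is approximately 2*sigma^4/N.
rng = np.random.default_rng(2)
sigma, N, trials = 1.5, 100, 20000
samples = rng.normal(0.0, sigma, size=(trials, N))
sample_vars = samples.var(axis=1, ddof=1)

empirical = sample_vars.var()
theoretical = 2 * sigma**4 / N         # ~0.101; the empirical value agrees to within a few percent
```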
Preprint
Trade prices of about 1000 New York Stock Exchange-listed stocks are studied at one-minute time resolution over the continuous five year period 2018--2022. For each stock, in dollar-volume-weighted transaction time, the discrepancy from a Brownian-motion martingale is measured on timescales of minutes to several days. The result is well fit by a power-law shot-noise (or Gaussian) process with Hurst exponent 0.465, that is, slightly mean-reverting. As a check, we execute an arbitrage strategy on simulated Hurst-exponent data, and a comparable strategy in backtesting on the actual data, obtaining similar results (annualized returns $\sim 60$\% if zero transaction costs). Next examining the cross-correlation structure of the $\sim 1000$ stocks, we find that, counterintuitively, correlations increase with time lag in the range studied. We show that this behavior can be quantitatively explained if the mean-reverting Hurst component of each stock is uncorrelated, i.e., does not share that stock's overall correlation with other stocks. Overall, we find that $\approx 45$\% of a stock's 1-hour returns variance is explained by its particular correlations to other stocks, but that most of this is simply explained by the movement of all stocks together. Unexpectedly, the fraction of variance explained is greatest when price volatility is high, for example during COVID-19 year 2020. An arbitrage strategy with cross-correlations does significantly better than without (annualized returns $\sim 100$\% if zero transaction costs). Measured correlations from any single year in 2018--2022 are about equally good in predicting all the other years, indicating that an overall correlation structure is persistent over the whole period.
... The greater size of the simulation cohort relative to the database cohort is consistent with principles of MCS, with bootstrapping allowing for multiple replications per individual. 11 With a sample size of 2 million, diagnostic plots representing key summary results from simulation versus sample size were stable (variation <0.1% with successive runs) and insensitive to further cohort size increases. Sampling weights were used in bootstrapping to account for quantifiable differences in characteristics of the real-world cohort relative to the overall US NPDR population (online supplemental eMethods). ...
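A minimal sketch of weighted bootstrap resampling of the kind described above (sampling individuals with replacement using per-individual weights); the cohort data and weights are synthetic stand-ins, not the study data.

```python
import numpy as np

# Weighted bootstrap: resample individuals with replacement, with sampling weights
# correcting for known differences from the target population. Data are synthetic.
rng = np.random.default_rng(3)
cohort = rng.normal(60, 10, size=5000)          # e.g. an attribute of database patients
weights = rng.uniform(0.5, 2.0, size=5000)      # per-individual sampling weights
p = weights / weights.sum()

simulated = rng.choice(cohort, size=2_000_000, replace=True, p=p)  # simulation cohort
```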
Article
Full-text available
Objective A simulation model was constructed to assess long-term outcomes of proactively treating severe non-proliferative diabetic retinopathy (NPDR) with anti-vascular endothelial growth factor (anti-VEGF) therapy versus delaying treatment until PDR develops. Methods and analysis Simulated patients were generated using a retrospective real-world cohort of treatment-naive patients identified in an electronic medical records database (IBM Explorys) between 2011 and 2017. Impact of anti-VEGF treatment was derived from clinical trial data for intravitreal aflibercept (PANORAMA) and ranibizumab (RISE/RIDE), averaged by weighted US market share. Real-world risk of PDR progression was modelled using Cox multivariable regression. The Monte Carlo simulation model examined rates of progression to PDR and sustained blindness (visual acuity <20/200) for 2 million patients scaled to US NPDR disease prevalence. Simulated progression rates from severe NPDR to PDR over 5 years and blindness rates over 10 years were compared for delayed versus early-treatment patients. Results Real-world data from 77 454 patients with mild-to-severe NPDR simulated 2 million NPDR patients, of which 86 680 had severe NPDR. Early treatment of severe NPDR with anti-VEGF therapy led to a 51.7% relative risk reduction in PDR events over 5 years (15 704 early vs 32 488 delayed), with a 19.4% absolute risk reduction (18.1% vs 37.5%). Sustained blindness rates at 10 years were 4.4% for delayed and 1.9% for early treatment of severe NPDR. Conclusion The model suggests treating severe NPDR early with anti-VEGF therapy, rather than delaying treatment until PDR develops, could significantly reduce PDR incidence over 5 years and sustained blindness over 10 years.
... The rapid development of information technology has also driven the progress of power system technology, including wind and photovoltaic power generation. The previous power system was a relatively conservative one, as electromechanical transient simulation tools have been used for 20-30 years [1][2][3][4]. Although they can provide reasonable support for power system operation analysis, a single power system simulation means or power technology field can no longer adapt to the current multi-domain and multi-disciplinary technology development environment [5][6][7][8], due to the rapid development of Internet and information technology and the deep integration and mutual interaction of power systems, communications and markets. It is essential to improve the openness of the platform and realize an open co-simulation environment on the basis of the original power system tools. ...
Article
Full-text available
As large-scale wind and photovoltaic power generation incorporating into the AC power grid, the impact of new energy fluctuation and uncertainty on the transient stability of power grid have attracted widespread attention among researchers and engineers in the power industry. It is necessary to improve the transient simulation and modelling ability of new energy to provide a more efficient simulation platform for the stability research of new energy generation connecting to large power grid. In this paper, the ADPSS technology of large-scale new energy electromagnetic transient simulation is mainly studied in depth, including the following contents: the design and development of the interface and program of ADPSS-Matlab co-simulation, the improvement and optimization of the parallel high performance computing, and the verification of the accuracy of co-simulation for the modelling and experimental research with large-scale new energy simulation.
... The Sherman-Morrison linear algebra formula, named after J. Sherman and W. Morrison, can be used to invert the sum of an invertible matrix X and the outer product of two vectors (u and v). The statement of this formula is as follows [33], [34]: if u and v are column vectors belonging to R^n and X is an invertible matrix belonging to R^(n×n), then the matrix (X + uv^T) can be inverted in one case only, i.e., when 1 + v^T X^{-1} u ≠ 0. If this condition is satisfied, the inverse is determined as follows: (X + uv^T)^{-1} = X^{-1} - (X^{-1} u v^T X^{-1}) / (1 + v^T X^{-1} u). ...
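The statement above can be checked numerically; the following sketch compares the Sherman-Morrison update with a direct inverse on a random well-conditioned matrix (all inputs are illustrative).

```python
import numpy as np

# Numerical check of the Sherman-Morrison formula, assuming 1 + v^T X^{-1} u != 0.
rng = np.random.default_rng(4)
n = 5
X = rng.standard_normal((n, n)) + n * np.eye(n)   # well-conditioned invertible matrix
u = rng.standard_normal((n, 1))
v = rng.standard_normal((n, 1))

X_inv = np.linalg.inv(X)
denom = 1.0 + (v.T @ X_inv @ u).item()
sm_inverse = X_inv - (X_inv @ u @ v.T @ X_inv) / denom

direct_inverse = np.linalg.inv(X + u @ v.T)
max_error = np.abs(sm_inverse - direct_inverse).max()   # ~1e-15, i.e. rounding level
```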
Article
Full-text available
The new concept of the Beamspace Multiple Input Multiple Output system (BS-MIMO) was developed to address the issue of reduced Energy Efficiency (EE) of traditional MIMO in millimeter-Wave (mm-Wave) wireless communication systems by decreasing the large number of radio frequency chains (RF-chains) while maintaining the same number of antennas in those systems. On the other hand, fewer RF-chains lead to a smaller number of users that the system can serve, since the number of users that can be served must be equal to or less than the number of RF-chains. To overcome the above issue, BS-MIMO is proposed to be integrated with the Non-Orthogonal Multiple-Access (NOMA) scheme to produce the novel approach of BS-MIMO-NOMA. As a result, the novel scheme is capable of serving a group of users with correlated channels via a single RF-chain. In this paper, we address the issue of EE of the above-mentioned communication systems. Specifically, we propose and develop an iterative algorithm with low complexity that achieves near-perfect performance. The proposed scheme, named MSE-DPA (for Mean-Square-Error-Based Dynamic Power Allocation Algorithm), is checked to ensure the validity of this figure of merit. The simulation results indicate that the EE is approximately 85% greater than that of traditional (fully-digital) MIMO systems for a certain fair system and environment scenario.
... PCBP can also be extended to two-dimensional (2D) and three-dimensional (3D) by employing Bicubic Interpolation [19,20] and Tricubic Interpolation [21,22], respectively. However, the proposed method has a limitation. ...
Conference Paper
A Bézier curve is a parametric polynomial defined by control points. Many researchers have been using piecewise Bézier interpolating polynomials to estimate missing values of datasets. However, the smoothness and accuracy of these interpolating polynomials can still be improved. This article proposes a new piecewise cubic Bézier polynomial by adopting a strategy similar to that for deriving cubic splines, with modified boundary conditions. Hence, the resultant parametric interpolating polynomial is stable and flexible. Besides satisfying second-order continuity C², the numerical results also indicate that the newly constructed polynomial is more accurate than the existing methods when approximating missing values of the same datasets.
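As a small illustration of the building block used here, the sketch below evaluates a single cubic Bézier segment from its four control points in Bernstein form; it is not the authors' piecewise construction, and the control points are arbitrary.

```python
import numpy as np

def cubic_bezier(p0, p1, p2, p3, t):
    """Evaluate a cubic Bezier segment at parameter values t in [0, 1] (Bernstein form)."""
    t = np.asarray(t)[:, None]
    return ((1 - t) ** 3 * p0 + 3 * (1 - t) ** 2 * t * p1
            + 3 * (1 - t) * t ** 2 * p2 + t ** 3 * p3)

control = np.array([[0.0, 0.0], [1.0, 2.0], [2.0, -1.0], [3.0, 1.0]])
curve = cubic_bezier(*control, np.linspace(0.0, 1.0, 50))   # 50 points along the segment
```

Piecewise interpolation then chains such segments, choosing the inner control points so that the continuity conditions (here C²) hold at the joins.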
... Then it runs AOC and post-processes its results in order to extract the objective values. Three standard multidimensional optimization routines have been implemented (Downhill Simplex Method, Powell's Direction Set Method and Simulated Annealing Method) [8]. ...
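For illustration, two of the named routines (downhill simplex and Powell's direction set method) are available off the shelf; the sketch below applies them to a stand-in objective, since the paper's actual objective (the post-processed AOC beam properties) is not reproduced here.

```python
from scipy.optimize import minimize

# Downhill simplex (Nelder-Mead) and Powell's direction set method on a toy objective.
def objective(x):
    return (x[0] - 0.3) ** 2 + (x[1] + 1.2) ** 2 + 0.5 * x[0] * x[1]

res_simplex = minimize(objective, x0=[0.0, 0.0], method="Nelder-Mead")
res_powell = minimize(objective, x0=[0.0, 0.0], method="Powell")
```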
Conference Paper
Full-text available
At IBA a high-intensity compact self-extracting cyclotron is being studied. There is no dedicated extraction device but instead, a special shaping of the magnetic iron and the use of harmonic coils to create large turn-separation. Proton currents up to 5 mA are aimed for. This would open new ways for large-scale production of medical radioisotopes. The main features of the cyclotron are presented. A major variable of the beam simulations is the space charge effect in the cyclotron centre. Using the SCALA-solver of Opera3D, we attempt to find the ion source plasma meniscus and the beam phase space and current extracted from it. With these properties known, we study the bunch formation and acceleration under high space charge condition with our in-house tracking code AOC. We also discuss a new tool that automatizes optimization of cyclotron settings for maximizing beam properties such as extraction efficiency.
... Solving (4) requires the use of nonlinear optimization algorithms. We adopted the quasi-Newton Broyden-Fletcher-Goldfarb-Shanno (BFGS) method [18] for the numerical computations. We suggest using as the initial point estimate for β the ordinary least squares estimate of β, obtained from a linear regression of the ...
Preprint
Full-text available
This letter proposes a regression model for nonnegative signals. The proposed regression estimates the mean of Rayleigh distributed signals by a structure which includes a set of regressors and a link function. For the proposed model, we present: (i) parameter estimation; (ii) large data record results; and (iii) a detection technique. In this letter, we present closed-form expressions for the score vector and Fisher information matrix. The proposed model is submitted to extensive Monte Carlo simulations and to measured data. The Monte Carlo simulations are used to evaluate the performance of maximum likelihood estimators. Also, an application is performed comparing the detection results of the proposed model with Gaussian-, Gamma-, and Weibull-based regression models in SAR images.
... The χ² was computed for each model of the grid and we interpolated between these points with a step of ΔT_eff = 100 K and Δlog g = 0.01. The uncertainties at 1σ, 2σ, and 3σ on T_eff and log g are estimated from Δχ² = 2.30, 6.18, and 11.83, respectively (two degrees of freedom, Press et al. 2007). ...
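The quoted Δχ² thresholds follow directly from the χ² distribution with two degrees of freedom; a short check (using SciPy, as an illustration) is:

```python
from scipy.stats import chi2, norm

# Delta-chi-square thresholds for two free parameters at the 1-, 2- and 3-sigma levels.
levels = [norm.cdf(k) - norm.cdf(-k) for k in (1, 2, 3)]   # 0.683, 0.954, 0.997
thresholds = [chi2.ppf(p, df=2) for p in levels]           # ~2.30, 6.18, 11.83
```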
... where c_i are the respective eigenvector coefficients. A set of eigenvectors and coefficients was obtained using an SVD solution (Press et al. 1992). By truncating the profile after a given number of eigenvectors, the data was 'de-noised'. If the number of retained eigenvectors is correctly chosen, this can be achieved with minimal loss of signal. ...
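A minimal sketch of SVD-based de-noising as described: decompose the data, retain the leading eigenvectors and coefficients, and reconstruct. The synthetic profiles and the number of retained eigenvectors are illustrative assumptions.

```python
import numpy as np

# Truncated-SVD de-noising of a stack of noisy profiles (synthetic data).
rng = np.random.default_rng(5)
t = np.linspace(0, 1, 200)
clean = np.vstack([np.sin(2 * np.pi * (t + phase)) for phase in rng.random(50)])
data = clean + 0.2 * rng.standard_normal(clean.shape)

U, s, Vt = np.linalg.svd(data, full_matrices=False)
k = 3                                            # number of retained eigenvectors
denoised = U[:, :k] @ np.diag(s[:k]) @ Vt[:k]    # minimal signal loss if k is well chosen
```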
Thesis
Full-text available
It has been known for some time that there exists a substantial amount of 'hidden' magnetic energy in the quiet solar photosphere, and unlocking an understanding of this phenomenon requires the study of magnetic activity on the smallest scales accessible to observations. With the advent of next generation high resolution telescopes, our understanding of how the magnetic field is organized in the internetwork (IN) photosphere is likely to advance significantly. Presented are high spatio-temporal resolution observations that reveal the dynamics of two disk-centre IN regions taken by the GREGOR Infrared Spectrograph Integral Field Unit (GRIS-IFU) with the highly magnetically sensitive photospheric Fe I line at 15648.52 Å. Inversions are applied with the Stokes inversion based on response functions (SIR) code to retrieve the parameters characterizing the atmosphere, tracking the dynamics of small-scale magnetism. Linear polarization features (LPFs) are found with magnetic flux density 130 − 150 G, appearing preferentially at granule-intergranular lane boundaries. The weak magnetic field appears to be organized in terms of complex 'loop-like' structures, with transverse fields often flanked by opposite polarity longitudinal fields. A snapshot produced from a high resolution three-dimensional radiative magnetohydrodynamic (MHD) simulation is used with SIR to produce synthetic observables in the same spectral window as observed by the GRIS-IFU. A parallelized wrapper to SIR is then used to perform nearly 14 million inversions of the synthetic spectra to test how well the 'true' MHD atmospheric parameters can be constrained statistically. Finally, the synthetic Stokes vector is degraded spectrally and spatially to GREGOR resolutions and the impact of unpolarized stray light contamination, spatial resolution and signal-to-noise is considered. An LPF exhibiting very similar magnetic flux density to those observed by the GRIS-IFU is studied. Thus, it is demonstrated that MHD simulations are capable of showing close agreement with real observations.
Article
We propose a simple imperative programming language, ERC, that features arbitrary real numbers as primitive data type, exactly. Equipped with a denotational semantics, ERC provides a formal programming language-theoretic foundation to the algorithmic processing of real numbers. In order to capture multi-valuedness, which is well-known to be essential to real number computation, we use a Plotkin powerdomain and make our programming language semantics computable and complete: all and only real functions computable in computable analysis can be realized in ERC. The base programming language supports real arithmetic as well as implicit limits; expansions support additional primitive operations (such as a user-defined exponential function). By restricting integers to Presburger arithmetic and real coercion to the `precision' embedding $\mathbb{Z}\ni p\mapsto 2^p\in\mathbb{R}$, we arrive at a first-order theory which we prove to be decidable and model-complete. Based on said logic as specification language for preconditions and postconditions, we extend Hoare logic to a sound (w.r.t. the denotational semantics) and expressive system for deriving correct total correctness specifications. Various examples demonstrate the practicality and convenience of our language and the extended Hoare logic.
Article
Full-text available
Iris biometrics is a phenotypic biometric trait that has proven to be agnostic to human natural physiological changes. Research on iris biometrics has progressed tremendously, partly due to publicly available iris databases. Various databases have been available to researchers that address pressing iris biometric challenges such as constraint, mobile, multispectral, synthetics, long-distance, contact lenses, liveness detection, etc. However, these databases mostly contain subjects of Caucasian and Asian descent with very few Africans. Despite many investigative studies on racial bias in face biometrics, very few studies on iris biometrics have been published, mainly due to the lack of racially diverse large-scale databases containing sufficient iris samples of Africans in the public domain. Furthermore, most of these databases contain a relatively small number of subjects and labelled images. This paper proposes a large-scale African database named Chinese Academy of Sciences Institute of Automation (CASIA)-Iris-Africa that can be used as a complementary database for the iris recognition community to mediate the effect of racial biases on Africans. The database contains 28 717 images of 1 023 African subjects (2 046 iris classes) with age, gender, and ethnicity attributes that can be useful in demographically sensitive studies of Africans. Sets of specific application protocols are incorporated with the database to ensure the database’s variability and scalability. Performance results of some open-source state-of-the-art (SOTA) algorithms on the database are presented, which will serve as baseline performances. The relatively poor performances of the baseline algorithms on the proposed database despite better performance on other databases prove that racial biases exist in these iris recognition algorithms. The database will be made available on our website: http://www.idealtest.org.
Article
Dental biomaterials are commonly used in vital pulp therapy to protect dentin against degradation. However, the protection role of the ions diffusing in affected dentin in indirect pulp capping remains not fully understood due to the limitations of experimental techniques to validate it at the atomic scale. In this study, molecular dynamics (MD) is used for studying two bioactive materials, during mineral apatite formation and remineralization. LAMMPS code with DREIDING Force Field and Universal Force-Fields (UFF) simulated the behaviour of calcium hydroxide and mineral trioxide aggregate in the tooth structure in the oral environment. The comparison of the physical parameters provided by the simulation is discussed in detail to explore the possibilities of crystallization depending on potential energy, lattice constant, XRD pattern, atomic volume and radius of gyration. MD results show that the crystallization process occurs in both materials after about 10 ns, at 310 K and 1 bar.
Article
Full-text available
The mechanism, temperature, and timescale of granite intrusion remain controversial, with wide-ranging implications for understanding continental growth, differentiation, rheology, and deformation dynamics. In this paper we present a method for determining intrusion emplacement temperature and timescale using the characteristics of the surrounding metamorphic aureole, and apply it to the Skiddaw granite in northern England. The estimated emplacement timescale (0.1–2 Myr) implies magma transport velocities of 1–100 mm/year. At the absent or low melt fractions relevant to our estimated emplacement temperature (580–650 °C), such velocities are incompatible with pluton formation by successive injections through dykes. Instead, our results indicate the intrusion of a diapir of crystal-rich slurry, solidifying before emplacement, with a rheology governed by the solid crystals. The emplacement depth is likely to be governed by the depth-dependent rheology of the surrounding rocks, occurring close to the brittle-ductile transition. The wider implications of our results relate to (1) the appreciation that much of the chemical and textural characteristics of plutons may relate to pre-emplacement crystallisation at depth, passively transported to higher crustal levels, and (2) an explanation of the difficulty of seismically imaging active plutonism.
Preprint
In recent decades, a growing number of discoveries in fields of mathematics have been assisted by computer algorithms, primarily for exploring large parameter spaces that humans would take too long to investigate. As computers and algorithms become more powerful, an intriguing possibility arises - the interplay between human intuition and computer algorithms can lead to discoveries of novel mathematical concepts that would otherwise remain elusive. To realize this perspective, we have developed a massively parallel computer algorithm that discovers an unprecedented number of continued fraction formulas for fundamental mathematical constants. The sheer number of formulas discovered by the algorithm unveils a novel mathematical structure that we call the conservative matrix field. Such matrix fields (1) unify thousands of existing formulas, (2) generate infinitely many new formulas, and most importantly, (3) lead to unexpected relations between different mathematical constants, including multiple integer values of the Riemann zeta function. Conservative matrix fields also enable new mathematical proofs of irrationality. In particular, we can use them to generalize the celebrated proof by Ap\'ery for the irrationality of $\zeta(3)$. Utilizing thousands of personal computers worldwide, our computer-supported research strategy demonstrates the power of experimental mathematics, highlighting the prospects of large-scale computational approaches to tackle longstanding open problems and discover unexpected connections across diverse fields of science.
Preprint
Ways of formation of azimuthal resonant patterns in circumstellar planetesimal disks with planets are considered. Our analytical estimates and massive numerical experiments show that the disk particles that initially reside in zones of low-order mean-motion resonances with the planet may eventually concentrate into potentially observable azimuthal patterns. The structuring process is rapid, usually taking ~100 orbital periods of the planet. It is found that the relative number of particles that retain their resonant position increases with decreasing the mass parameter $\mu$ (the ratio of masses of the perturbing planet and the parent star), but a significant fraction of the particle population is always removed from the disk due to accretion of the particles onto the star and planet, as well as due to their transition to highly elongated and hyperbolic orbits. Expected radio images of azimuthally structured disks are constructed. In the considered models, azimuthal patterns associated with the 2:1 and 3:2 resonances are most clearly manifested; observational manifestations of the 1:2 and 2:3 resonances are also possible.
Article
Full-text available
Square-root unscented Kalman filter (SRUKF) is a widely used state estimator for several state-of-the-art, highly nonlinear, and critical applications. It improves the stability and numerical accuracy of the system compared to the non-square root formulation, the unscented Kalman filter (UKF). At the same time, SRUKF is less computationally intensive compared to UKF, making it suitable for portable and battery-powered applications. This paper proposes a low-complexity and power-efficient architecture design methodology for SRUKF presented with a use case of the simultaneous localization and mapping (SLAM) problem. Implementation results show that the proposed SRUKF methodology is highly stable and achieves higher accuracy than the extensively used extended Kalman filter and UKF when developed for highly critical nonlinear applications such as SLAM. The design is synthesized and implemented on the resource-constrained Zynq-7000 XC7Z020 FPGA-based Zedboard development kit and compared with the state-of-the-art Kalman filter-based FPGA designs. Synthesis results show that the architecture is highly stable and has significant computation savings in DSP cores and clock cycles. The power consumption was reduced by 64% compared to the state-of-the-art UKF design methodology. ASIC design was synthesized using UMC 90-nm technology, and the results for on-chip area and power consumption have been discussed.
Article
In this paper, we introduce the inflated beta autoregressive moving average (IβARMA) models for modeling and forecasting time series data that assume values in the intervals (0,1], [0,1) or [0,1]. The proposed model considers a set of regressors, an autoregressive moving average structure and a link function to model the conditional mean of the inflated-beta-distributed variable observed over time. We develop partial likelihood estimation and derive closed-form expressions for the score vector and the cumulative partial information matrix. Hypothesis testing, confidence intervals, some diagnostic tools and forecasting are also proposed. We evaluate the finite sample performances of partial maximum likelihood estimators and confidence intervals using Monte Carlo simulations. Two empirical applications related to forecasting hydro-environmental data are presented and discussed.
Chapter
Nowadays, an Optimal Control problem tends to fall into the non-standard setting, especially in the economic field. This research deals with the non-standard Optimal Control problem with the involvement of the royalty payment. In maximizing the performance index, the difficulty arises when the final state value is unknown, resulting in a non-zero final costate value. In addition, the royalty function cannot be differentiated at a certain time frame. Therefore, an approximation of the hyperbolic tangent (tanh) was used as a continuous approach and the shooting method was implemented to resolve the issue. The shooting method was implemented in C++ software. At the end of the study, the results produced are the optimal solution. Future academics may build on this innovative discovery as they create mathematical modeling techniques to address practical economic issues. Moreover, the new method can advance the academic field in line with today’s technological advances. Keywords: Optimal Control, Royalty Payment Problem, Shooting Method
Article
Full-text available
The goal of NASA's Europa Clipper Mission is to investigate the habitability of the subsurface ocean within the Jovian moon Europa using a suite of ten investigations. The Europa Clipper Magnetometer (ECM) and Plasma Instrument for Magnetic Sounding (PIMS) investigations will be used in unison to characterize the thickness and electrical conductivity of Europa's subsurface ocean and the thickness of the ice shell by sensing the induced magnetic field, driven by the strong time-varying magnetic field of the Jovian environment. However, these measurements will be obscured by the magnetic field originating from the Europa Clipper spacecraft. In this work, a magnetic field model of the Europa Clipper spacecraft is presented, characterized with over 260 individual magnetic sources comprising various ferromagnetic and soft-magnetic materials, compensation magnets, solenoids, and dynamic electrical currents flowing within the spacecraft. This model is used to evaluate the magnetic field at arbitrary points around the spacecraft, notably at the locations of the three fluxgate magnetometer sensors and four Faraday cups which make up ECM and PIMS, respectively. The model is also used to evaluate the magnetic field uncertainty at these locations via a Monte Carlo approach. Furthermore, both linear and non-linear gradiometry fitting methods are presented to demonstrate the ability to reliably disentangle the spacecraft field from the ambient using an array of three fluxgate magnetometer sensors mounted along an 8.5-meter (m) long boom. The method is also shown to be useful for optimizing the locations of the magnetometer sensors along the boom. Finally, we illustrate how the model can be used to visualize the magnetic field lines of the spacecraft, thus providing very insightful information for each investigation. Supplementary information: The online version contains supplementary material available at 10.1007/s11214-023-00974-y.
Preprint
This literature review presents a comprehensive overview of machine learning (ML) applications in proton magnetic resonance spectroscopy (MRS). As the use of ML techniques in MRS continues to grow, this review aims to provide the MRS community with a structured overview of the state-of-the-art methods. Specifically, we examine and summarize studies published between 2017 and 2023 from major journals in the magnetic resonance field. We categorize these studies based on a typical MRS workflow, including data acquisition, processing, analysis, and artificial data generation. Our review reveals that ML in MRS is still in its early stages, with a primary focus on processing and analysis techniques, and less attention given to data acquisition. We also found that many studies use similar model architectures, with little comparison to alternative architectures. Additionally, the generation of artificial data is a crucial topic, with no consistent method for its generation. Furthermore, many studies demonstrate that artificial data suffers from generalization issues when tested on in-vivo data. We also conclude that risks related to ML models should be addressed, particularly for clinical applications. Therefore, output uncertainty measures and model biases are critical to investigate. Nonetheless, the rapid development of ML in MRS and the promising results from the reviewed studies justify further research in this field.
Article
Research activities in the KAERI Atomic Data Center on the basic atomic structure and collision cross-sections needed for spectroscopy analysis in various atomic and molecular, optical, and plasma physics fields are introduced. The methodologies of our research and the present and future aspects of the applications are explained. In addition, our constructed numerical database for the atomic data and the running of a collisional-radiative spectroscopic modeling code on the web are demonstrated.
Article
Full-text available
Near-Earth solar winds are separated into two groups: slow solar wind (SSW) with plasma speed [\(V_{\mathrm{sw}}\)] \(< 500\) km s−1 and high-speed solar wind (HSW) with \(V_{\mathrm{sw}} > 700\) km s−1. A comparative study is performed on the plasma and interplanetary magnetic field (IMF) properties of the near-Earth SSW and HSW, using solar wind measurements propagated to Earth’s bow shock nose from 1963 through 2022. On average, HSW is characterized by higher alpha-to-proton density ratio [\(N_{\mathrm{a}}/N_{\mathrm{p}}\)] (67%), ram pressure [\(P_{ \mathrm{sw}}\)] (95%), proton temperature [\(T_{\mathrm{p}}\)] (370%), reconnection electric field [\(VB_{\mathrm{s}}\)] (141%), Alfvén speed [\(V_{\mathrm{A}}\)] (76%), magnetosonic speed [\(V_{\mathrm{ms}}\)] (65%), and lower proton density [\(N_{\mathrm{p}}\)] (52%) and plasma-\(\beta\) (54%) than SSW. In \(VB_{\mathrm{s}}\), \(V = V_{\mathrm{sw}}\), \(B_{\mathrm{s}}\) is the southward component of IMF. \(V_{\mathrm{A}} = B_{0}/\sqrt{\mu_{0}\rho}\), \(V_{\mathrm{ms}} = \sqrt{V_{\mathrm{A}}^{2}+V_{\mathrm{S}}^{2}}\), where \(B_{0}\) is the IMF magnitude, \(\mu_{0}\) is the free space permeability, \(\rho\) is the solar wind mass density, and \(V_{\mathrm{S}}\) is the sound speed. \(\beta\) is defined as the plasma pressure to the magnetic-pressure ratio. The geomagnetic activity is found to be enhanced during HSW, as reflected in higher average auroral electrojet index [AE] (213%) and stronger geomagnetic Dst index (367%) compared to those during SSW. The SSW characteristic parameters \(N_{\mathrm{a}}/N_{\mathrm{p}}\), \(T_{\mathrm{p}}\), \(B_{0}\), \(V_{\mathrm{A}}\), and \(V_{\mathrm{ms}}\) exhibit medium to strong correlations (correlation coefficients \(r = 0.51\) to 0.87) with the \(F_{\mathrm{10.7}}\) solar flux, while \(\beta\) and Mach numbers exhibit strong anti-correlations (\(r = -0.82\) to −0.90) with \(F_{\mathrm{10.7}}\). The associations are weaker or insignificant for HSW.
Preprint
Full-text available
Many existing extremal data span only a few decades, often resulting in large bias and uncertainty in the estimated shape parameter of the extreme hazard model. This in turn leads to unreliable predicted extreme values at high average recurrence intervals (ARIs). This paper illustrates a statistical method that provides a mechanism to obtain a hazard model that produces return levels at high ARIs with reduced bias. The method makes use of the maximum recorded values of extremal data independently recorded from a number of observational sites. The logarithmically transformed probability of the maximum recorded value at a site is shown to follow the Gumbel (Type I extreme-value) distribution, therefore multiple, say m, sites provide a sample of size m of transformed probabilities of extreme values, each from a distinct site. The sample can be treated as being drawn from a Gumbel distribution, irrespective of the underlying hazard-generating mechanisms or the statistical hazard models. The method is demonstrated by an analysis of the extreme wind gust data collected from automatic weather stations in South Australia. The results are compared to the specifications in the Australian standard AS/NZS 1170.2:2021 and indicate that the standard may have overestimated the wind gust hazard, hence the specified design wind speeds may fall on the conservative side for South Australia.
Article
This study examines how LIBS data, collected using a downhole-deployable LIBS prototype for geochemical analysis in a fashion that imitates downhole deployment, may be used for mineralogical investigations. Two chemically and mineralogically practically identical felsic rocks, namely granite and microgranite, are used to assess the effects of rock texture on mineral classification, and high-resolution SEM-TIMA mineral maps are used to reveal the mineralogical composition of each LIBS ablation crater. Additionally, in order to extend the LIBS application for fast mineralogical studies to a greenfield scenario (i.e., no previous knowledge), a clustering methodology is presented for mineralogical classification from LIBS data. Results indicate that most LIBS spot analyses sample mineral mixtures, 91.2% and 100% for granite and microgranite, respectively, which challenges mineralogical classification, particularly for fine-grained rocks. Positive identification and classification of minerals of slightly different compositions relative to the bulk rock (i.e., fluorite and biotite in granitic rocks) demonstrates how minerals or mineral groups of distinct and interesting chemical compositions (e.g., sulphides or oxides in silicate dominated rocks) can be rapidly recognised in a mineral exploration scenario. Strategies for overcoming mineral mixture issues are presented and recommendations are given for effective workflows for mineralogical analysis using LIBS data in different mineral exploration stages. Supplementary material: https://doi.org/10.6084/m9.figshare.c.6444482
Article
The Zhang-Torquato conjecture [G. Zhang and S. Torquato, Phys. Rev. E, 2020, 101, 032124.] states that any realizable pair correlation function g2(r) or structure factor S(k) of a translationally invariant nonequilibrium system can be attained by an equilibrium ensemble involving only (up to) effective two-body interactions. To further test and study this conjecture, we consider two singular nonequilibrium models of recent interest that also have the exotic hyperuniformity property: a 2D "perfect glass" and a 3D critical absorbing-state model. We find that each nonequilibrium target can be achieved accurately by equilibrium states with effective one- and two-body potentials, lending further support to the conjecture. To characterize the structural degeneracy of such a nonequilibrium-equilibrium correspondence, we compute higher-order statistics for both models, as well as those for a hyperuniform 3D uniformly randomized lattice (URL), whose higher-order statistics can be very precisely ascertained. Interestingly, we find that the differences in the higher-order statistics between nonequilibrium and equilibrium systems with matching pair statistics, as measured by the "hole" probability distribution, provide measures of the degree to which a system is out of equilibrium. We show that all three systems studied possess the bounded-hole property and that holes near the maximum hole size in the URL are much rarer than those in the underlying simple cubic lattice. Remarkably, upon quenching, the effective potentials for all three systems possess local energy minima (i.e., inherent structures) with stronger forms of hyperuniformity compared to their target counterparts. Our methods are expected to facilitate the self-assembly of tunable hyperuniform soft-matter systems.
Chapter
Time series analysis is used to investigate the temporal behavior of a variable x(t). Examples include investigations into long-term records of mountain uplift, sea-level fluctuations, orbitally induced insolation variations (and their influence on the ice-age cycles), millennium-scale variations in the atmosphere–ocean system, the effect of the El Niño/Southern Oscillation on tropical rainfall and sedimentation (Fig. 5.1), and tidal influences on noble gas emissions from bore holes. The temporal pattern of a sequence of events can be random, clustered, cyclic, or chaotic.
Article
The present research work has been carried out for the Kollur River Basin, Kundapura Taluk of Udupi District of Karnataka. The Kollur River is a tributary of the Chakra and Souparnika Rivers. The problem of seawater mixing with underground water tables has become acute in many gram panchayats such as Maravanthe and Senapura, located near the seashore. Villages such as Vandse and Chittur, situated some distance away from the sea, face another problem: their drinking water sources have dried up due to depletion of the water table. To understand the hydrology of this complex landscape, the SWAT-CUP model was calibrated and validated using the SUFI-2 algorithm, considering 14 important hydrologic parameters selected from literature sources. The SUFI-2 tool employs stochastic calibration, which recognizes and expresses model errors and uncertainties as ranges that account for all underlying variables, the conceptual framework, parameters, and observed values. Our watershed model has eight sub-basins and 126 Hydrological Response Units (HRUs) to simulate hydrological processes. Climate data from 2007 to 2021 revealed that most precipitation occurred from June to September, with a maximum of 789 mm in June and a low of 0 mm in January. The 95PPU hydrographs were obtained from a single iteration (500 simulations). The p-factor and r-factor were found to be 0.15 and 1.59, respectively. The agreement between observed and model-generated streamflow values was satisfactory. SWAT-CUP enhanced the streamflow models by lowering parameter uncertainty. It can be concluded that less sensitive parameters require more time to reduce the uncertainty than more sensitive ones, due to their wider confidence intervals.
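For readers unfamiliar with the SUFI-2 uncertainty metrics mentioned above, the sketch below computes the p-factor (fraction of observations falling inside the 95PPU band) and the r-factor (mean band width divided by the standard deviation of the observations) from a hypothetical simulation ensemble. The arrays are placeholders, and the formulas follow the commonly cited definitions rather than the authors' implementation.

```python
# Illustrative sketch of SUFI-2 style uncertainty metrics.
# p-factor: fraction of observations bracketed by the 95PPU band.
# r-factor: average 95PPU band width divided by the std. dev. of observations.
# The simulated ensemble and observations below are synthetic placeholders.
import numpy as np

rng = np.random.default_rng(1)
ensemble = rng.gamma(2.0, 5.0, size=(500, 365))   # 500 simulations x 365 daily flows
observed = rng.gamma(2.0, 5.0, size=365)          # observed daily streamflow

lower = np.percentile(ensemble, 2.5, axis=0)      # lower bound of the 95PPU band
upper = np.percentile(ensemble, 97.5, axis=0)     # upper bound of the 95PPU band

p_factor = np.mean((observed >= lower) & (observed <= upper))
r_factor = np.mean(upper - lower) / np.std(observed)

print(f"p-factor = {p_factor:.2f}, r-factor = {r_factor:.2f}")
```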
Article
The purpose of this study is to discuss the statistical distributions of the inter-occurrence times of successive large earthquakes. We examine the Global Centroid Moment Tensor Catalog from 1976 to 2021 to analyze shallow earthquakes with moment magnitude Mw ≥ 7.0. After removing the aftershocks that occur in and around the faults of the mainshock within a given time–space window, we select the main events and search for successive ones in the space–time window to group them into clusters. We use four renewal models (Brownian passage time, gamma, lognormal, and Weibull) to fit the data. We estimate the models’ parameters using the maximum likelihood estimation method. Then, we apply two goodness-of-fit measures, the Akaike information criterion and the Kolmogorov–Smirnov test, to evaluate the suitability of the model distributions to the observed data. The results reveal that the lognormal distribution provides the best fit to the observed data in at least 50% of the regions under consideration. An intermediate fit comes from the Weibull distribution, whereas the Brownian passage time and gamma distributions exhibit a poor fit. We then estimate the conditional probability of the occurrence of successive large earthquakes for the 10-year period between 2022 and 2032; estimates range from 16% to 96%. To evaluate the usefulness of interevent-time-dependent earthquake modeling, we compare the results with the time-independent Poisson distribution. The results show that the renewal model, associated with a time-dependent earthquake hazard, performs significantly better than a time-independent Poisson model.
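As a generic illustration of the model-comparison step described above (not the authors' code), the snippet fits lognormal, Weibull, and gamma renewal models to hypothetical inter-event times with scipy, then compares them by AIC and the Kolmogorov–Smirnov statistic. The data and the fixed-location choice are assumptions, and the Brownian passage time model is omitted because it is not available in scipy.stats.

```python
# Illustrative sketch: fit candidate renewal distributions to inter-event times,
# then compare them with AIC and a Kolmogorov-Smirnov test.
# The inter-event times below are synthetic placeholders.
import numpy as np
from scipy import stats

rng = np.random.default_rng(2)
interevent_years = rng.lognormal(mean=1.0, sigma=0.6, size=80)   # synthetic data

candidates = {
    "lognormal": stats.lognorm,
    "weibull":   stats.weibull_min,
    "gamma":     stats.gamma,
}

for name, dist in candidates.items():
    params = dist.fit(interevent_years, floc=0.0)          # MLE with location fixed at 0
    loglik = np.sum(dist.logpdf(interevent_years, *params))
    k = len(params) - 1                                    # free parameters (location fixed)
    aic = 2 * k - 2 * loglik
    ks = stats.kstest(interevent_years, dist.cdf, args=params)
    print(f"{name:9s}  AIC={aic:7.1f}  KS={ks.statistic:.3f}  p={ks.pvalue:.3f}")
```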
Book
Full-text available
A course in Numerical Methods in Computational Engineering, oriented to engineering education, originates at first from the course in numerical analysis for graduate students of the Faculty of Civil Engineering and Architecture of Nis (GAF), and then from the course Numerical Methods held in English at the Faculty of Civil Engineering in Belgrade within the project DYNET (Dynamical Network), shared among the Faculty of Civil Engineering of the University of Bochum, the Faculty of Civil Engineering and Architecture of the University of Nis, the Faculty of Civil Engineering of the University of Belgrade, and IZIIS (Institute for Earthquake Engineering and Seismology) of the University of Skopje. The subject Numerical Analysis was taught in the first semester of postgraduate studies at GAF by Prof. G.V. Milovanovic for years. Following the Bologna process, the newly structured subject entitled Numerical Analysis is to be introduced to PhD students at GAF. In addition, since a course in numerical analysis has become accepted as an important ingredient of undergraduate education in engineering and technology, its main topics were incorporated into the undergraduate subject Informatics II at GAF Nis (as a collateral case, Appendix A.4, in electronic form, gives numerical methods in Informatics, which could be interesting for students of this orientation). The backbone of this script is the well-known books of G.V. Milovanovic, Numerical Analysis, Parts I, II, and III, Naucna knjiga, Beograd, 1988 (Serbian). In addition, the book Programming Numerical Methods in Fortran, by G.V. Milovanovic and Dj. R. Djordjevic, University of Nis, 1981 (Serbian), with its engineering-oriented text and codes, was used extensively. As previously noted, this textbook supports undergraduate, master, and doctoral studies at GAF, as well as the international master study within the DYNET project. Presentation on the GAF site enables distance learning and on-line consultations with the lecturer. Up-to-date, engineering-oriented applications will support the lifelong education of civil engineers. This script will be available on the GAF site (http://www.gaf.ni.ac.yu) under International Projects and can be reached chapter by chapter at http://www.gaf.ni.ac.yu/cdp/subject_syllabus.htm. Each chapter concludes with a basic bibliography and suggested further reading. Tutorial exercises in the form of selected assignments are also presented on the GAF site, with some hints for solutions given in the same files. Devoted primarily to students of Civil Engineering (undergraduate and graduate - master & PhD), this textbook is also intended for industry and research purposes.
Thesis
As an attempt to overcome the unfortunate division between data and physical modelling, this thesis is devoted to the development of a framework which allows the derivation of a physical model that at the same time includes errors and uncertainties about this system which are as unspecified with respect to their statistical properties as possible. The work can be summarized as: 1. Grey box modelling: An appropriate semi-physical model of a ballistic missile with very limited knowledge about the aerodynamical properties of the airframe is developed. It is shown which kinds of dynamics the missile may exhibit and with which methods these dynamics can be analyzed. It is demonstrated that the missile may also show chaotic behaviour, and the methods to measure this type of dynamics are presented. The lack of data points may, however, prevent the applicability of these methods, which therefore suggests the theoretical framework of an: 2. Endo-observer for linear systems: It is proved that real-life observers require a framework such as a stochastic one for incorporating their uncertainties into a physical model. It is shown how minimally specified uncertainties can enter the dynamics of a free particle. The dynamics turn out to be similar to those given by the Schrödinger equation. 3. Endo-observer for the nonlinear state space: The concepts of physical dynamics with uncertainties which have been developed for the free particle are exhaustively re-developed for the case when the variance of the uncertainties remains within an arbitrary but fixed bound defined prior to the experiment. The dynamics of a nonlinear vector field with uncertainties are expressed in a form which closely resembles the Schrödinger equation with a 'momentum' consisting of the 'mechanical momentum' from which an 'electromagnetic momentum' is subtracted, which in this case is the vector field of the state space. 4. Endo-observer with bias: It is argued that an observer which has to select permanently among all available variable and parameter values has to favour the more probable ones rather than the less probable ones. The dynamics then lead to the well-known nonlinear Schrödinger equation.
Article
We propose a new method to calculate the relaxation time spectrum (RTS) and enable conversions between viscoelastic functions. The exact relations between the viscoelastic functions are derived simply using complex analysis of the higher-order derivatives of those functions. Hence, a stable numerical differentiation method is required to obtain genuine solutions without interference from numerical errors. In this study, we adopted the double-logarithmic B-spline and its recursive relation to obtain higher-order derivatives. The proposed algorithm is tested and compared with previous methods using simulated and experimental data. When creep data obtained through experiments are converted to dynamic moduli, significant improvement in the terminal behavior is observed compared to the previous method, because the Runge phenomenon is greatly reduced by using a low-order polynomial. Moreover, the spectra obtained for experimental data are almost identical to those obtained through a previously verified algorithm. Thus, our results agree well with both simulated and experimental data.
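The paper's recursive double-logarithmic B-spline scheme is not reproduced here; as a rough, hedged sketch of the general idea of smooth differentiation in log-log coordinates, the snippet fits a smoothing spline to log G(t) versus log t with scipy and evaluates its first and second derivatives. The synthetic relaxation modulus and the smoothing factor are placeholders.

```python
# Rough sketch: smooth differentiation of a viscoelastic function in log-log space.
# This is a generic smoothing-spline approach, not the recursive B-spline scheme
# proposed in the paper; data and smoothing factor are placeholders.
import numpy as np
from scipy.interpolate import splrep, splev

t = np.logspace(-3, 3, 200)                       # time grid
G = 1e5 * np.exp(-(t / 10.0) ** 0.5) + 1e2        # synthetic relaxation modulus G(t)

x, y = np.log(t), np.log(G)
tck = splrep(x, y, s=1e-4)                        # smoothing cubic B-spline in log-log space

dlogG = splev(x, tck, der=1)                      # d(log G)/d(log t): local power-law slope
d2logG = splev(x, tck, der=2)                     # second log-log derivative

print(dlogG[:5], d2logG[:5])
```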
Article
The generation of random values corresponding to an underlying Gamma distribution is a key capability in many areas of knowledge such as Probability and Statistics, Signal Processing, or Digital Communication, among others. Throughout history, different algorithms have been developed for the generation of such values, and advances in computing have made them increasingly faster and more efficient from a computational point of view. These advances also allow the generation of higher-quality inputs (from the point of view of randomness and uniformity) for these algorithms, which are easily tested by different statistical batteries such as NIST, Dieharder, or TestU01, among others. This article describes the existing algorithms for the generation of (independent and identically distributed, i.e. i.i.d.) Gamma-distributed values as well as the theoretical and mathematical foundations that support their validity.
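One widely used generator of this kind is the Marsaglia-Tsang squeeze method; the sketch below implements it for shape k >= 1, with the standard u^(1/k) boost for k < 1, on top of NumPy's normal and uniform sources. This is a textbook algorithm offered as an illustration and is not taken from the article.

```python
# Illustrative implementation of the Marsaglia-Tsang (2000) method for sampling
# Gamma(shape, 1) variates; a scale parameter can be applied by multiplication.
import numpy as np

def gamma_marsaglia_tsang(shape: float, rng: np.random.Generator) -> float:
    """Draw one Gamma(shape, scale=1) variate, shape > 0."""
    if shape < 1.0:
        # Boost: Gamma(a) = Gamma(a + 1) * U^(1/a) for 0 < a < 1.
        u = rng.uniform()
        return gamma_marsaglia_tsang(shape + 1.0, rng) * u ** (1.0 / shape)

    d = shape - 1.0 / 3.0
    c = 1.0 / np.sqrt(9.0 * d)
    while True:
        x = rng.normal()
        v = (1.0 + c * x) ** 3
        if v <= 0.0:
            continue
        u = rng.uniform()
        # Squeeze test (fast acceptance), then the full logarithmic test.
        if u < 1.0 - 0.0331 * x ** 4:
            return d * v
        if np.log(u) < 0.5 * x ** 2 + d * (1.0 - v + np.log(v)):
            return d * v

rng = np.random.default_rng(3)
samples = np.array([gamma_marsaglia_tsang(2.5, rng) for _ in range(10_000)])
print(samples.mean())   # should be close to the shape parameter, 2.5
```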
Article
Full-text available
Increased ambient temperature causes heat stress in mammals, which affects physiological and molecular functions. We have recently reported that the dietary administration of a postbiotic from Aspergillus oryzae (AO) improves tolerance to heat stress in fruit flies and cattle. Furthermore, heat-induced gut dysfunction and systemic inflammation have been ameliorated in part by nutritional interventions. The objective of this study was to characterize the phenotypic response of growing calves to heat stress compared to thermoneutral ad libitum fed and thermoneutral feed-restricted counterparts, and to examine the physiologic alterations associated with the administration of the AO postbiotic to heat-stressed calves, with emphasis on intestinal permeability. In this report, we expand previous work by first demonstrating that heat stress reduced the partial energetic efficiency of growth in control (45%) but not in AO-fed calves (62%) compared to thermoneutral animals (66%). While heat stress increased the permeability of the intestine by 20%, the AO postbiotic and thermoneutral treatments did not show this effect. In addition, the AO postbiotic reduced fecal water content relative to the thermoneutral and heat stress treatments. Heat stress increased plasma concentrations of serum amyloid A, haptoglobin, and lipocalin-2, and administration of the AO postbiotic did not ameliorate this effect. In summary, our findings indicate that heat stress led to reduced nutrient-use efficiency and increased systemic inflammation. Results suggest that the AO postbiotic improved energy-use efficiency, water absorption, and intestinal permeability under heat stress, but did not reduce the heat-stress-mediated rise in markers of systemic inflammation.
Conference Paper
Full-text available
A bird’s-eye overview of the innovative, on-board and Multi-Purpose, random vibration based MAIANDROS Condition Monitoring system for railway vehicles and infrastructure is presented. The system includes Modules for Suspension Monitoring (SM), Wheel Monitoring (WM), Track Monitoring (TM) for track segment condition characterization, Lateral Stability Monitoring (LSM), and Remaining Useful Life Estimation (RULE) for critical components such as wheels. It is based on Statistical Time Series type methods and proper decision making, and aims at overcoming various challenges of current systems while pushing their performance limits. Its unique advantages include high diagnostic performance, ability to detect early-stage (incipient) faults, robustness to varying Operating Conditions, early detection of the onset of hunting, operation with a minimal number of low-cost sensors, and minimal computational complexity for achieving real-time or almost real-time operation. Its high achievable performance is demonstrated via indicative assessments using a prototype system onboard an Athens Metro vehicle and Monte Carlo simulations with a SIMPACK-based high-fidelity vehicle model.
Conference Paper
Full-text available
Hole cleaning efficiency is one of the major factors affecting well drilling performance, and the rate of penetration (ROP) is highly dependent on it. Hole cleaning performance can be monitored in real time to ensure that the drilled cuttings generated are efficiently transported to surface. The objective of this paper is to present a real-time automated model of hole cleaning efficiency so that drilling parameters can be adjusted as required to improve drilling performance. The approach adopts a modified real-time carrying capacity indicator. Many hole cleaning models, methodologies, chemicals, and correlations exist, but the majority do not simulate drilling operation sequences and are not tied to the practicalities of drilling operations. The developed real-time hole cleaning indicator enables continuous monitoring and evaluation of hole cleaning performance during drilling operations. The model was developed by selecting offset mechanical drilling parameters and drilling fluid parameters, which were collected, analyzed, tested, and validated to build a robust hole cleaning efficiency indicator that can support drilling automation and the fourth industrial revolution. The automated model uses real-time drilling sensors and validates the strongest relationships among the variables; this analysis, testing, and validation reveal the significant parameters that contribute most to the model development procedure. The model can also be run by feeding real-time sensor readings into the developed automated workflow. The resulting real-time carrying capacity indicator profile is presented as a function of depth, drilling fluid density, mud pump flow rate (pump output), and other important factors, which are illustrated in detail. The model has been developed and validated in field drilling operations to give drilling teams clearer monitoring and evaluation of hole cleaning efficiency while drilling. It can also provide a vision for better control of mud additives, contributing to mud cost effectiveness. The automated hole cleaning model improved the rate of penetration (ROP) by 50%, a noticeable and valuable improvement in well drilling performance that saved rig time and drilling cost and helped accelerate well delivery. The model was developed to optimize drilling and operational efficiency by using surface rig sensors to interpret downhole behavior, and it can lead to other hole cleaning indicators and tactics for improved downhole measurement models that support optimized drilling efficiency.
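The paper's modified indicator is not given in the abstract; as a hedged illustration of the kind of quantity involved, the snippet below computes a commonly cited rule-of-thumb form of the carrying capacity index (CCI) from the mud's power-law rheology and the annular velocity, where values around 1 or higher are usually read as adequate hole cleaning. The formula, field units, threshold, and input values are assumptions drawn from standard drilling-engineering references, not from this paper.

```python
# Hedged sketch of a basic carrying capacity index (CCI) calculation using a
# common rule-of-thumb form; the paper's modified real-time indicator differs.
import math

def carrying_capacity_index(mw_ppg: float, pv_cp: float, yp_lbf100ft2: float,
                            annular_velocity_ftmin: float) -> float:
    """CCI from mud weight, plastic viscosity, yield point, and annular velocity."""
    # Approximate power-law parameters from PV/YP (Fann 300/600 equivalents).
    n = 3.32 * math.log10((2 * pv_cp + yp_lbf100ft2) / (pv_cp + yp_lbf100ft2))
    k = 510.0 * (pv_cp + yp_lbf100ft2) / (511.0 ** n)   # consistency index, eq. cP
    return k * annular_velocity_ftmin * mw_ppg / 400_000.0

cci = carrying_capacity_index(mw_ppg=10.0, pv_cp=18.0, yp_lbf100ft2=16.0,
                              annular_velocity_ftmin=120.0)
print(f"CCI = {cci:.2f}  (values >= ~1 are usually taken as adequate cleaning)")
```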