Book

Numerical Recipes in C: The Art of Scientific Computing

Authors: William H. Press, Saul A. Teukolsky, William T. Vetterling, Brian P. Flannery

Abstract

The product of a unique collaboration among four leading scientists in academic research and industry, Numerical Recipes is a complete text and reference book on scientific computing. In a self-contained manner it proceeds from mathematical and theoretical considerations to actual practical computer routines. With over 100 new routines bringing the total to well over 300, plus upgraded versions of the original routines, the new edition remains the most practical, comprehensive handbook of scientific computing available today.
... where α = A, B is the index identifying the conductor, L^α_m is the length of the mth linear element in the discretization of the contour ∂S_α, m = 1, ..., M_α, and n = 1, ..., N_α. The evaluation of the line integral in (31) can be performed, after subtraction of the singular ln-part of the integral kernel, via a Gauss–Legendre quadrature rule [29], while the singular part can be integrated over the line element analytically [30]. The kernel of the integral operators (7) and ...
... and (29), correspondingly, are dense. The higher the frequency, the more elements in the MoM discretization of (7) and (8) can be neglected, and the usage of the sparse matrix format [29] ...
... The MoM discretized integral operators in the sparse matrix form (using β skin-depth sparsification criteria) (32), (33), (36) result in a similar SLAE that can be solved numerically using sparse matrix techniques [29] ...
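As an illustration of the Gauss–Legendre quadrature step mentioned in the first excerpt above, the following minimal C sketch integrates a placeholder smooth kernel over a line element with a fixed 5-point rule; it is not the cited paper's code, and the analytic treatment of the log-singular part described in the excerpt is omitted.

/* Minimal sketch: 5-point Gauss-Legendre quadrature of a smooth kernel
 * over a line element [a, b].  The kernel f() below is a placeholder;
 * in the cited formulation the log-singular part is split off and
 * integrated analytically before applying such a rule. */
#include <stdio.h>
#include <math.h>

static double f(double x) { return exp(-x) * cos(x); }  /* placeholder smooth kernel */

static double gauss_legendre_5(double (*g)(double), double a, double b)
{
    static const double xi[5] = { 0.0, -0.5384693101056831, 0.5384693101056831,
                                  -0.9061798459386640, 0.9061798459386640 };
    static const double wi[5] = { 0.5688888888888889, 0.4786286704993665,
                                  0.4786286704993665, 0.2369268850561891,
                                  0.2369268850561891 };
    const double c = 0.5 * (b - a), m = 0.5 * (b + a);
    double s = 0.0;
    for (int i = 0; i < 5; ++i)
        s += wi[i] * g(m + c * xi[i]);   /* map node from [-1,1] to [a,b] */
    return c * s;
}

int main(void)
{
    printf("integral over [0,1] = %.12f\n", gauss_legendre_5(f, 0.0, 1.0));
    return 0;
}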
Article
Full-text available
We recently proposed a novel single-source integral equation (SSIE) for accurate broadband resistance and inductance extraction and current flow modeling in 2-D conductors. The new surface integral equation is advantageous compared with the traditional volume electric field integral equation (V-EFIE) used for the inductance extraction, since the unknown function is defined on the surface of conductors as opposed to the volumetric unknown current density in V-EFIE. The new SSIE is also more suitable for the solution of inductance extraction problems than the traditional surface integral equation formulations, as it features only a single unknown surface function as opposed to having the unknown equivalent electric and magnetic surface current densities. The new equation also features only the electric field Green's functions unlike the previously known SSIE formulations. The latter property makes the new SSIE equation particularly suitable to the inclusion of the multilayered substrate effect into the inductance extraction model. This paper describes the generalization of the new SSIE formulation to the case of transmission line models embedded into the multilayered lossy substrates. This paper also shows how the matrix sparsity in the method of moments discretization of the novel integral equation can be exploited to accelerate its numerical solution and reduce associated memory use. This sparsity arises due to the skin-effect-based attenuation of the fields in conductors' cross sections leading to vanishing levels of the matrix elements corresponding to the distant interactions. Typical examples of inductance extraction in complex interconnects situated in lossy substrate are considered to validate the proposed techniques against traditional approaches.
... We deal with these errors by introducing an LSQ computation of the center of rotation. We use a modified version of the Levenberg–Marquardt method [10] for all of our least squares computations. Depending on the complexity of the movements, the errors sum up or compensate each other, the worst cases being presented in [12]. ...
... corresponding to the LSQ minimization [10] of the function: ...
... The maximum length of all intervals occurs for p = 1/2 and is plotted as a function of n in Fig. 5 (panels include (a) the Wald interval and (d) the HPD interval). The widest interval is the exact interval, which is the inevitable price for the guarantee of P_cov(p) ≥ 1−α with a greater P_cov more often than not. Curiously enough, the maximum length of the Wald interval is greater than that of the Wilson or HPD interval, although its coverage probability is smaller. (Footnote 11: Apart from an understanding of probability theory, it also requires knowledge about the generation of random numbers (transformation method, rejection method [19]).) ...
... The expectation value of this distribution is 3/4, and random numbers drawn from this distribution can be generated by means of the transformation method [19] with runif(N, min=0, max=1) ** (1/3) ...
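A minimal C analogue of the quoted R one-liner follows; it draws from the density p(x) = 3x^2 on [0, 1] via X = U^(1/3) and checks that the sample mean approaches 3/4. The system rand() generator is used only for brevity; Numerical Recipes recommends better generators for serious work.

/* Transformation (inverse-CDF) method: if U ~ Uniform(0,1), then X = U^(1/3)
 * has density p(x) = 3x^2 on [0,1] and expectation 3/4. */
#include <stdio.h>
#include <stdlib.h>
#include <math.h>

int main(void)
{
    const int N = 1000000;
    double sum = 0.0;
    srand(12345u);
    for (int i = 0; i < N; ++i) {
        double u = (rand() + 0.5) / ((double)RAND_MAX + 1.0); /* u in (0,1) */
        sum += cbrt(u);                                       /* X = U^(1/3) */
    }
    printf("sample mean = %.4f (expected 0.75)\n", sum / N);
    return 0;
}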
Technical Report
Full-text available
Introductory texts on statistics typically only cover the classical ``two sigma'' confidence interval for the mean value and do not describe methods to obtain confidence intervals for other estimators. The present technical report fills this gap by first defining different methods for the construction of confidence intervals, and then by their application to a binomial proportion, the mean value, and to arbitrary estimators. Beside the frequentist approach, the likelihood ratio and the highest posterior density approach are explained. Two methods to estimate the variance of general maximum likelihood estimators are described (Hessian, Jackknife), and for arbitrary estimators the bootstrap is suggested. For three examples, the different methods are evaluated by means of Monte Carlo simulations with respect to their coverage probability and interval length. R code is given for all methods, and the practitioner obtains a guideline which method should be used in which cases.
... where A is the photopeak area, x_0 the centroid and s the standard deviation, which determines the FWHM of the Gaussian through the relation FWHM = 2.3548 s [8]. When a sum of k multiple Gaussian peaks is de-convoluted, the overall complex is given by [9] ...
... Certain commands of the program (as in photopeak de-convolution or background assessment) require a fitting procedure. The method used in most cases in SPECTRW (except in linear fitting) is based on the Levenberg–Marquardt algorithm (LMA) for nonlinear least-squares [9]. The problem for which the LM algorithm can be used to provide a solution is called Nonlinear Least Squares Minimization [10]. ...
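For reference, the single-Gaussian photopeak model and one common form of the damped Levenberg–Marquardt update are (generic textbook expressions, not necessarily SPECTRW's exact implementation)

\[
G(x) = A \exp\!\left(-\frac{(x - x_0)^2}{2 s^2}\right), \qquad
\mathrm{FWHM} = 2\sqrt{2\ln 2}\; s \approx 2.3548\, s,
\]
\[
\big(J^{\mathsf T} J + \lambda\,\mathrm{diag}(J^{\mathsf T} J)\big)\,\delta = J^{\mathsf T} r,
\]

where r is the residual vector, J its Jacobian with respect to the fit parameters, and the damping factor λ is decreased after a step that lowers χ² and increased otherwise.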
... To model this cycle, the reference engine flow scheme shown in Figure 5.9 is used. The nonlinear system of equations is given by: ... The numerical method applied to simulate the engine cycles was taken from the book of Press et al. (2007). Newton's method and the multidimensional secant method known as Broyden's method gave similar results; both methods were implemented. ...
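As a sketch of the multidimensional Newton iteration mentioned here, the following C program solves a toy 2×2 nonlinear system (not the engine-cycle equations of the cited work); each step solves J·d = −f using the analytic Jacobian.

/* Newton's method on a toy system:
 *   f1 = x^2 + y^2 - 4 = 0,   f2 = x*y - 1 = 0. */
#include <stdio.h>
#include <math.h>

int main(void)
{
    double x = 2.0, y = 0.3;                      /* initial guess */
    for (int it = 0; it < 20; ++it) {
        double f1 = x * x + y * y - 4.0;
        double f2 = x * y - 1.0;
        if (fabs(f1) + fabs(f2) < 1e-12) break;   /* converged */
        double j11 = 2.0 * x, j12 = 2.0 * y;      /* Jacobian entries */
        double j21 = y,       j22 = x;
        double det = j11 * j22 - j12 * j21;
        x += (-f1 * j22 + f2 * j12) / det;        /* Cramer's rule for J*d = -f */
        y += ( f1 * j21 - f2 * j11) / det;
    }
    printf("root: x = %.10f, y = %.10f\n", x, y);
    return 0;
}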
... To explore the full spatio-temporal dynamics at D = 0 we solved Eqs. (8) and (10) numerically by using a Crank–Nicolson algorithm [46]. We sample over N_t = 30,000 temporal and N_y = 150 spatial discrete steps with corresponding step sizes dt = 0.005 and dy = 0.007, yielding a total integration time of T = N_t · dt = 150, and a gap size of L = 1.0. ...
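The cited Eqs. (8) and (10) are not reproduced in the excerpt; as a generic illustration, for a one-dimensional diffusion term ∂u/∂t = κ ∂²u/∂y² the Crank–Nicolson scheme averages the explicit and implicit second differences,

\[
\frac{u_j^{n+1} - u_j^{n}}{\Delta t}
= \frac{\kappa}{2\,\Delta y^{2}}
\left[\big(u_{j+1}^{n+1} - 2u_j^{n+1} + u_{j-1}^{n+1}\big)
+ \big(u_{j+1}^{n} - 2u_j^{n} + u_{j-1}^{n}\big)\right],
\]

which is second-order accurate in both Δt and Δy and requires solving a tridiagonal linear system at each time step.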
Article
Suspensions of elongated micelles under shear display complex nonlinear behavior including shear banding, spatiotemporal oscillatory patterns, and chaotic response. Based on a suitable rheological model [S. M. Fielding and P. D. Olmsted, Phys. Rev. Lett. 92, 084502 (2004)], we here explore possibilities to manipulate the dynamical behavior via closed-loop (feedback) control involving a time delay $\tau$. The model considered relates the viscoelastic stress of the system to a structural variable, that is, the length of the micelles, yielding two time- and space-dependent dynamical variables $\xi_1$, $\xi_2$. As a starting point we perform a systematic linear stability analysis of the uncontrolled system for (i) an externally imposed average shear rate and (ii) an imposed total stress, and compare the results to those from extensive numerical simulations. We then apply the so-called Pyragas feedback scheme where the equations of motion are supplemented by a control term of the form $K[a(t)−a(t−\tau)]$ with a being a measurable quantity depending on the rheological protocol. For the choice of an imposed shear rate, the Pyragas scheme for the total stress reduces to a nondiagonal scheme concentrating on the viscoelastic stress. Focusing on parameters close to a Hopf bifurcation, where the uncontrolled system displays oscillatory states as well as hysteresis in the shear rate controlled protocol, we demonstrate that (local) Pyragas control leads to a full stabilization to the steady-state solution of the total stress, while a global control scheme does not work. In contrast, for the case of imposed total stress, global Pyragas control fully stabilizes the system. In both cases, the control does not change the space of solutions, rather it selects the steady-state solutions out of the existing solutions. This underlines the noninvasive character of the Pyragas scheme.
... Each optimization run took less than 20 min to complete. The model was optimized by minimizing the cost function with a modified version of the Davidon–Fletcher–Powell optimizer (dfpmin; Fletcher and Powell, 1963; Press et al., 1992). For efficiency, the cost function was coded for parallel computation. ...
Article
Full-text available
Kinetic mechanisms predict how ion channels and other proteins function at the molecular and cellular levels. Ideally, a kinetic model should explain new data but also be consistent with existing knowledge. In this two-part study, we present a mathematical and computational formalism that can be used to enforce prior knowledge into kinetic models using constraints. Here, we focus on constraints that quantify the behavior of the model under certain conditions, and on constraints that enforce arbitrary parameter relationships. The penalty-based optimization mechanism described here can be used to enforce virtually any model property or behavior, including those that cannot be easily expressed through mathematical relationships. Examples include maximum open probability, use-dependent availability, and nonlinear parameter relationships. We use a simple kinetic mechanism to test multiple sets of constraints that implement linear parameter relationships and arbitrary model properties and behaviors, and we provide numerical examples. This work complements and extends the companion article, where we show how to enforce explicit linear parameter relationships. By incorporating more knowledge into the parameter estimation procedure, it is possible to obtain more realistic and robust models with greater predictive power.
... In this work we have approximated the derivatives at the points of the table by two different methods, using finite differences and using cubic splines. For more details we refer the reader to [5]. ...
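A minimal C sketch of the finite-difference option mentioned in this excerpt is given below: second-order central differences at interior table points and one-sided second-order formulas at the ends. The tabulated function is a placeholder, and the cubic-spline alternative is not shown.

/* Derivatives of a uniformly spaced table of values y[i] with spacing h. */
#include <stdio.h>

static void table_derivative(const double *y, double h, int n, double *dydx)
{
    dydx[0]     = (-3.0 * y[0] + 4.0 * y[1] - y[2]) / (2.0 * h);       /* forward, O(h^2) */
    for (int i = 1; i < n - 1; ++i)
        dydx[i] = (y[i + 1] - y[i - 1]) / (2.0 * h);                   /* central, O(h^2) */
    dydx[n - 1] = (3.0 * y[n - 1] - 4.0 * y[n - 2] + y[n - 3]) / (2.0 * h);
}

int main(void)
{
    /* tabulated y = x^2 on x = 0, 0.5, 1.0, 1.5, 2.0 */
    double y[5] = { 0.0, 0.25, 1.0, 2.25, 4.0 }, d[5];
    table_derivative(y, 0.5, 5, d);
    for (int i = 0; i < 5; ++i)
        printf("x = %.1f  dy/dx ~ %.4f\n", 0.5 * i, d[i]);
    return 0;
}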
Technical Report
Full-text available
The traditional approach of flight trajectory planning for commercial airplanes aligns the routes to a finite air travel network (ATN) graph. A new alternative is the free-flight trajectory planning, where routes can use the entire 4D space (3D+time) for more fuel-efficient trajectories that minimize the travel costs. In this work, we focus on the vertical optimization part of such trajectories for a fixed horizontal trajectory, computed or manually derived beforehand. The idea is to assign to each of the trajectory's segments an optimal altitude and speed for the cruise phase of the flight. We formulate this problem as a non-linear programming (NLP) problem. As for the input of the model, information about the airplane's fuel consumption is provided for discrete levels of speed and weight values. Thus a continuous formulation of this input data is required, to meet the NLP requirements. We implement different interpolation and approximation techniques for this. Using AMPL as modeling language, along with non-linear commercial solvers such as SNOPT, CONOPT, KNITRO, and MINOS, we present numerical results on test instances for real-world instance data and compare the resulting trajectories in terms of the fuel consumption and the computation times.
... The subset data was used for regression of the coefficients and estimation error with new form I and new form II. The results show that a higher goodness of fit and a lower estimation error can be obtained with the new forms than with the others (Figure 5 and Table 4), especially for new form II. Both new form I and new form II consist of linear equations and our data distribution is rather uniform, making the regression analysis using a least squares method via a common matrix inversion technique easy and quick. The LU decomposition technique (Press et al., 1992) was used in our computer code to solve the linear equations and find coefficients for the equations. ...
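As a sketch of the LU-decomposition solve mentioned above, the following C program factors a small matrix (standing in for a least-squares normal-equations matrix) and solves A x = b by forward and back substitution. It uses Doolittle factorization without pivoting for brevity; the Numerical Recipes routines ludcmp/lubksb add partial pivoting, which matters for general matrices.

#include <stdio.h>

#define N 3

int main(void)
{
    double A[N][N] = { { 4.0, 2.0, 1.0 },
                       { 2.0, 5.0, 3.0 },
                       { 1.0, 3.0, 6.0 } };
    double b[N] = { 7.0, 10.0, 10.0 };            /* solution is x = (1, 1, 1) */
    double x[N], w[N];

    /* in-place LU: L has unit diagonal, U is stored in the upper part */
    for (int k = 0; k < N; ++k)
        for (int i = k + 1; i < N; ++i) {
            A[i][k] /= A[k][k];                   /* multiplier L[i][k] */
            for (int j = k + 1; j < N; ++j)
                A[i][j] -= A[i][k] * A[k][j];
        }

    for (int i = 0; i < N; ++i) {                 /* forward solve L w = b */
        w[i] = b[i];
        for (int j = 0; j < i; ++j) w[i] -= A[i][j] * w[j];
    }
    for (int i = N - 1; i >= 0; --i) {            /* back solve U x = w */
        x[i] = w[i];
        for (int j = i + 1; j < N; ++j) x[i] -= A[i][j] * x[j];
        x[i] /= A[i][i];
    }
    for (int i = 0; i < N; ++i) printf("x[%d] = %.6f\n", i, x[i]);
    return 0;
}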
Article
This study employs strong-motion data from the 1999 Chi-Chi earthquake, the 1999 Kocaeli earthquake, the 1999 Duzce earthquake, the 1995 Kobe earthquake, the 1994 Northridge earthquake and the 1989 Loma Prieta earthquake to refine the relationship among critical acceleration (Ac), Arias intensity (Ia), and Newmark displacement (Dn). The results reveal that, as expected, logDn is proportional to logIa when Ac is large. As Ac gets smaller, however, the linearity weakens. We also found that logDn is proportional to Ac, and that the linearity is very stable through all Ia values. These features are common to all six sets of data. Therefore, we add a third term in addition to Jibson's form, which addresses the aforementioned problem, and propose a new form for the relationship among Ia, Ac and Dn. Two alternative forms were tested using each of the six data sets, before a final form was selected. The final analyses grouped the data into a worldwide data set and a Taiwanese data set. Coefficients for the selected form were derived from regression with the data, and two final empirical formulas are proposed, one global and the other local. Site conditions are also considered in this study, with empirical formulas being developed for soil and rock sites, respectively. The estimation error is smaller and the goodness of fit is higher for both the local soil-site and rock-site formulas. Since landslides are more likely to occur on hillsides, the rock-site formula may be more applicable for the landslide cases, whereas the soil-site formula should be used for the side slopes of landfills.
... Moreover, when applying this function to interpolate the dataset, we specify that a cubic interpolation is used in space and also in time. The system (7) is integrated using a Cash–Karp Runge–Kutta scheme (Press et al., 1992) with a time step of 1 h. The condition w = 0 is imposed at h = 0 in order to constrain trajectories in the vertical. ...
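The Cash–Karp scheme cited here is an embedded Runge–Kutta 4(5) pair. In the Numerical Recipes formulation the local error is estimated from the difference between the fifth-order and embedded fourth-order solutions and the step size is adapted roughly as

\[
\Delta = y^{(5)}_{n+1} - y^{(4)}_{n+1}, \qquad
h_{\mathrm{new}} \approx 0.9\, h \left|\frac{\Delta_{\mathrm{tol}}}{\Delta}\right|^{1/5},
\]

with the more conservative exponent 1/4 (and a retry of the step) used when the estimated error exceeds the tolerance.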
Article
Full-text available
In this paper we study the three-dimensional (3-D) Lagrangian structures in the stratospheric polar vortex (SPV) above Antarctica. We analyse and visualize these structures using Lagrangian descriptor function M. The procedure for calculation with reanalysis data is explained. Benchmarks are computed and analysed that allow us to compare 2-D and 3-D aspects of Lagrangian transport. Dynamical systems concepts appropriate to 3-D, such as normally hyperbolic invariant curves, are discussed and applied. In order to illustrate our approach we select an interval of time in which the SPV is relatively undisturbed (August 1979) and an interval of rapid SPV changes (October 1979). Our results provide new insights into the Lagrangian structure of the vertical extension of the stratospheric polar vortex and its evolution. Our results also show complex Lagrangian patterns indicative of strong mixing processes in the upper troposphere and lower stratosphere. Finally, during the transition to summer in the late spring, we illustrate the vertical structure of two counterrotating vortices, one the polar and the other an emerging one, and the invariant separatrix that divides them.
... The numerical model is calibrated to the experimentally obtained polarization curves, using the Nelder–Mead optimization scheme [52]. The fuel cell's area-specific contact resistance (R_contact) and the anode and cathode reference exchange current densities (i_a,ref and i_c,ref, respectively) are used as calibration parameters. ...
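The Nelder–Mead scheme cited here is the derivative-free downhill-simplex search (amoeba in Numerical Recipes). For vertices ordered f(x_1) ≤ ... ≤ f(x_{n+1}) and centroid x̄ of all vertices except the worst one x_{n+1}, the standard moves are (whether the cited implementation uses exactly these coefficients is not stated in the excerpt)

\[
x_r = \bar{x} + \alpha(\bar{x} - x_{n+1}), \qquad
x_e = \bar{x} + \gamma(x_r - \bar{x}), \qquad
x_c = \bar{x} + \rho(x_{n+1} - \bar{x}), \qquad
x_i \leftarrow x_1 + \sigma(x_i - x_1),
\]

with the usual choices α = 1 (reflection), γ = 2 (expansion), ρ = 1/2 (contraction) and σ = 1/2 (shrink).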
Article
The performance impact of using bio-inspired interdigitated and non-interdigitated flow fields (I-FF and NI-FF, respectively) within a DMFC is investigated. These two flow fields, as well as a conventional serpentine flow field (S-FF, used as a reference), were examined as possible anode and cathode flow field candidates. To examine the performance of each of these candidates, each flow field was manufactured and experimentally tested under different anode and cathode flow rate combinations (1.3 mL/min [methanol] and 400 mL/min [oxygen], as well as 2 and 3 times these flow rates), and different methanol concentrations (0.50 M, 0.75 M, and 1.00 M). To help understand the experimental results and the underlying physics, a three-dimensional numerical model was developed. Of the examined flow fields, the S-FF and the I-FF yielded the best performance on the anode and cathode, respectively. This finding was mainly due to the enhanced under-rib convection of both of these flow fields. Although the I-FF provided a higher mean methanol concentration on the anode catalyst layer surface, its distribution was less uniform than that of the S-FF. This caused the rate of methanol permeation to the cathode to increase (for the anode I-FF configuration), along with the anode and cathode activation polarizations, deteriorating the fuel cell performance. The NI-FF provided the lowest pressure drops of the examined configurations. However, the hydrodynamics within the flow field made the reactants susceptible to traveling directly from inlet to outlet, leading to several low concentration pockets. This significantly decreased the reactant uniformity across its respective catalyst layer, and caused this FF's performance to be the lowest of the examined configurations.
... This is the general expression for evaluating the flow velocity from panels with arbitrary order polynomial shapes and strength distributions. So far, the only numerical step is to find the roots of the polynomial in the denominator, a well-studied problem for which efficient algorithms exist [11,15]. For numerical issues to consider when using this expression, see Appendix A. ...
Article
We develop an efficient and high order panel method with applications in airfoil design. Through the use of analytic work and careful considerations near singularities our approach is quadrature-free. The resulting method is examined with respect to accuracy and efficiency and we discuss the different trade-offs in approximation order and computational complexity. A reference implementation within a package for a two-dimensional fast multipole method is distributed freely.
... As for model validation, we run a set of kernels that we call codelets. They are extracted from Numerical Recipes (NR) [19,20], where many families of algorithms are represented: Linear Algebraic Equations, Eigensystems, Fast Fourier Transform and Partial Differential Equations. All the codelets were carefully selected and they cover a wide range of performance and code structure characteristics [19]: single, double and mixed precision data, non-vectorized, partially vectorized and fully vectorized, 1D and 2D loops, 1D and 2D arrays, unit stride and non-unit stride memory accesses. ...
Conference Paper
This paper presents an empirical approach to measuring and modeling the energy consumption of multicore processors. The modeling approach allows us to find a breakdown of the energy consumption among a set of key hardware components, also called HW nodes. We explicitly model the front-end and the back-end in terms of the number of instructions executed. We also model the L1, L2 and L3 caches. Furthermore, we explicitly model the static and dynamic energy consumed by the uncore and core components. From a software perspective, our methodology allows us to correlate energy to the executed code, which helps find opportunities for code optimization and tuning. We use binary analysis and hardware counters for performance characterization. Although we use the on-chip counters (RAPL) for energy measurement, our methodology does not rely on a specific method for energy measurement. Thus, it is portable and easy to deploy in various computing environments. We validate our energy model using two Intel processors with a set of HPC codelets, where data sizes are varied to come from the L1, L2 and L3 caches, and show 3% average modeling error. We present a comprehensive analysis and show energy consumption differences between kernels and relate those differences to the algorithms that are implemented. Finally, we discuss how vectorization leads to energy savings compared to non-vectorized codes.
... The program combines aqueous complexation, surface complexation equilibria, surface charge density and mass balance conditions (Vancappellen et al. 1993; Hiemstra & VanRiemsdijk 1996). The set of equations obtained is solved iteratively by the classical Newton–Raphson technique (Press et al. 1989). Table 3 summarizes, in matrix form, the set of equations relative to the calculation of surface and solution speciation for fixed values of pH and partial pressure of CO2 (pCO2). ...
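A minimal C sketch of the classical Newton–Raphson iteration, x_{k+1} = x_k − f(x_k)/f'(x_k), is shown below on a placeholder scalar function; the cited program applies the same idea to its coupled speciation equations.

#include <stdio.h>
#include <math.h>

static double f(double x)  { return x * x * x - 2.0; }   /* root at 2^(1/3) */
static double df(double x) { return 3.0 * x * x; }

int main(void)
{
    double x = 1.0;                           /* initial guess */
    for (int it = 0; it < 50; ++it) {
        double step = f(x) / df(x);           /* Newton-Raphson update */
        x -= step;
        if (fabs(step) < 1e-14) break;        /* converged */
    }
    printf("root = %.15f, cbrt(2) = %.15f\n", x, cbrt(2.0));
    return 0;
}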
Article
Full-text available
When pH and alkalinity increase, calcite frequently precipitates and hence modifies the petrophysical properties of porous media. The complex conductivity method can be used to directly monitor calcite precipitation in porous media because it is sensitive to the evolution of the mineralogy, pore structure and its connectivity. We have developed a mechanistic grain polarization model considering the electrochemical polarization of the Stern and diffuse layer surrounding calcite particles. Our complex conductivity model depends on the surface charge density of the Stern layer and on the electrical potential at the onset of the diffuse layer, which are computed using a basic Stern model of the calcite/water interface. The complex conductivity measurements of Wu et al. (2010) on a column packed with glass beads where calcite precipitation occurs are reproduced by our surface complexation and complex conductivity models. The evolution of the size and shape of calcite particles during the calcite precipitation experiment is estimated by our complex conductivity model. At the early stage of the calcite precipitation experiment, modeled particle sizes increase and calcite particles flatten with time because calcite crystals nucleate at the surface of glass beads and grow into larger calcite grains around glass beads. At the later stage of the calcite precipitation experiment, modeled sizes and cementation exponents of calcite particles decrease with time because large calcite grains aggregate over multiple glass beads, a percolation threshold is achieved, and small and discrete calcite crystals polarize.
... The computational model was implemented using C on a UNIX workstation in which the fourth-order Runge-Kutta method with adaptive time steps was included [33]. For modeling, the combination of free parameters D, Y s , Y a , and A was defined using two methods. ...
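For reference, a single classical fourth-order Runge–Kutta step looks as follows in C; the adaptive step-size control used in the Numerical Recipes routine (step doubling or an embedded pair) is omitted here, and the right-hand side is a placeholder, not the cited model.

#include <stdio.h>
#include <math.h>

static double f(double t, double y) { (void)t; return -2.0 * y; }  /* toy ODE dy/dt = -2y */

static double rk4_step(double t, double y, double h)
{
    double k1 = f(t, y);
    double k2 = f(t + 0.5 * h, y + 0.5 * h * k1);
    double k3 = f(t + 0.5 * h, y + 0.5 * h * k2);
    double k4 = f(t + h, y + h * k3);
    return y + (h / 6.0) * (k1 + 2.0 * k2 + 2.0 * k3 + k4);
}

int main(void)
{
    double t = 0.0, y = 1.0, h = 0.01;
    while (t < 1.0 - 1e-12) { y = rk4_step(t, y, h); t += h; }
    printf("y(1) = %.8f, exact = %.8f\n", y, exp(-2.0));
    return 0;
}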
Article
Full-text available
Due to the huge number of neuronal cells in the brain and their complex circuit formation, computer simulation of neuronal activity is indispensable to understanding whole brain dynamics. Recently, various computational models have been developed based on whole-brain calcium imaging data. However, these analyses monitor only the activity of neuronal cell bodies and treat the cells as point units. This point-neuron model is inexpensive in computational cost, but the model is unrealistically simplistic at representing intact neural activities in the brain. Here, we describe a novel three-unit Ordinary Differential Equation (ODE) model based on the neuronal responses derived from a Caenorhabditis elegans salt-sensing neuron. We recorded calcium responses in three regions of the ASER neuron using a simple downstep of NaCl concentration. Our simple ODE model generated from a single recording can adequately reproduce and predict the temporal responses of each part of the neuron to various types of NaCl concentration changes. Our strategy, which combines simple recording data with an ODE mathematical model, may be extended to realistically understand whole brain dynamics by computational simulation.
... Concomitant with computer speed increases have been algorithmic improvements in the random number generators at the heart of every Monte Carlo code. Press et al. (2000) tell the fascinating story of bad early generators with hidden correlations and biases that might not have ruined a 10K-photon run but could be disastrous for a 1B-photon run. These problems have receded now, although each time the number of photons increases by a factor of ten, one must reconsider them. ...
Chapter
Full-text available
This introductory chapter is a wide-ranging philosophical meander around the subject of clouds and how we model them for purposes of atmospheric radiative transfer, with excursions into the author's history in the atmospheric radiative transfer and climate field dating back to the early 1970s.
... An independent t-test was used to compare means between the two groups according to [11], using SPSS 16.0. ...
... In practical application, one may not know beforehand what regime should be used for a good approximation. Besides the complexity of the new Green's function in the image charge method, to use the FFT to calculate the discrete convolution, one needs to double the computational domain with zero padding [8, 22]. This increases both the computational time and the memory usage. ...
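The zero padding mentioned here is the standard remedy for wrap-around: the FFT implements a circular convolution, so the linear convolution of two sequences of lengths N_1 and N_2 requires padding both to a common length N ≥ N_1 + N_2 − 1 (roughly a doubling when N_1 ≈ N_2),

\[
(f * g)_k = \mathrm{FFT}^{-1}\!\big[\,\mathrm{FFT}(f_{\mathrm{pad}})\cdot \mathrm{FFT}(g_{\mathrm{pad}})\,\big]_k,
\qquad N \ge N_1 + N_2 - 1 .
\]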
Article
A three-dimensional (3D) Poisson solver with longitudinal periodic and transverse open boundary conditions can have important applications in beam physics of particle accelerators. In this paper, we present a fast, efficient method to solve the Poisson equation using a spectral finite-difference method. This method uses a computational domain that contains the charged particle beam only and has a computational complexity of $O(N_u \log N_{mode})$, where $N_u$ is the total number of unknowns and $N_{mode}$ is the maximum number of longitudinal or azimuthal modes. This saves both the computational time and the memory usage by using an artificial boundary condition in a large extended computational domain.
... The studies mentioned here illustrate that drifts played ... [Figure 9, top panel: Global radial gradients from the model (in Figure 8) are compared to observational gradients calculated in this study (circles). Also shown are two sets of calculated gradients from Gieseler and Heber (2016), who based their analysis on two statistical fitting algorithms, namely bootstrap (triangles) and the fitexy function from Numerical Recipes (Press et al., 1996; squares). The radial gradient calculated by De Simone et al. (2011) is given by the diamond symbol.] ...
Article
Full-text available
Global gradients for cosmic-ray (CR) protons in the heliosphere are computed with a comprehensive modulation model for the recent prolonged solar minimum of Cycle 23/24. Fortunately, the PAMELA (Payload for Antimatter Matter Exploration and Light-nuclei Astrophysics) and Ulysses/KET (Kiel Electron Telescope) instruments simultaneously observed proton intensities for the period between July 2006 and June 2009. Radial and latitudinal gradients are calculated from measurements, with the latter possible because Ulysses changed its position significantly in the heliocentric meridional plane during this period. The modulation model is set up for the conditions that prevailed during this unusual solar minimum period to gain insight into the role that particle drifts played in establishing the observed gradients for this period. Four year-end PAMELA proton spectra were reproduced with the model, from 2006 to 2009, followed by corresponding radial profiles that were computed along the Voyager-1 trajectory, and compared to available observations. It is found that the computed intensity levels are in agreement with solar minimum observations from Voyager-1 at multiple energies. The model also reproduces the steep intensity increase observed when Voyager-1 crossed the heliopause region. Good agreement is found between computed and observed latitudinal gradients so that we conclude that the model gives a most reasonable representation of modulation conditions from the Earth to the heliopause for the period from 2006 to 2009. As a characteristic feature of CR drifts, the most negative latitudinal gradient is computed for 2009, with a value of -0.15%/degree around 600 MV. The maximum radial gradient in the inner heliosphere (as covered by Ulysses) also occurs in this range, with the highest value of 4.25%/AU in 2009.
Article
Background: Muscle response in older adults is believed to decrease with maximal muscle strength, although it has not been adequately assessed; further, the relationship between frailty and muscle response remains unexamined. Objectives: This study aimed to develop a practical method for measuring muscle response using grip strength in older adults and to clarify the relationship between frailty and grip strength response. Design, Setting, and Participants: We performed a cross-sectional, clinical, observational study. A total of 248 patients (94 men and 154 women, mean age: 78.2 years) who visited the outpatient unit in the Integrated Healthy Aging Clinic of our Hospital for the first time were enrolled. Measurements: Using a grip strength measuring device originally developed by us, we measured grip strength response indices, such as reaction time, time constant, rate of force development (response speed), and maximum grip strength. Grip strength response indices were compared among three groups (robust, pre-frail, and frail) according to the Fried and Kihon checklist assessments for frailty. Results: Based on Fried's assessment, marked differences were found between groups not only in maximal grip strength but also in response time and response speed. Based on the Kihon checklist assessment, there was no significant difference in response time; however, a considerable difference in response speed for the left hand was observed. Moreover, according to the Kihon checklist assessment, some cases showed differences in muscle response although not in maximal muscle strength. Conclusions: The response speed of grip strength was suggested to decrease with frailty. The results suggest that measurement of grip strength response in both hands is useful to examine the relationship between frailty and grip strength response.
Article
This fingerprint identification system is based on the Galton-Henry theory, in which two fingerprints can be said to be identical if they share the same pattern and at least 12 matching minutiae points. The implementation follows the AFIS (Automated Fingerprint Identification System) approach, i.e., it uses image processing techniques adapted to the type of input image and to the goal of the Galton-Henry theory. Some of the image processing techniques used are bilinear interpolation, histogram equalization, the Fourier transform and Gabor filters. The aim of this research is to identify latent fingerprints. A latent fingerprint here means a fingerprint image that is not captured directly through an imaging device (such as a fingerprint reader). Latent fingerprints can be obtained from fingerprint impressions or from a crime scene.
Article
Full-text available
The previously published work by Othmani et al. invites a further useful discussion on how the viscoelastic rheological model affects the second harmonic and single modes of guided waves. However, this paper has two main objectives. The first objective is to establish a simple relationship between the Rayleigh wave velocity (VR), the fundamental Lamb modes (A0 and S0), and the shear wave velocity (VS) in unidirectional viscoelastic carbon–epoxy at high frequency. The second objective is to study the second harmonic Lamb modes. Numerical codes based on Legendre polynomial and Newton–Raphson methods are developed and implemented to predict dispersion and attenuation curves. The second harmonic generation was investigated with the hysteretic and Kelvin–Voigt viscoelastic models. Results showed that rheological models have a much larger effect on the attenuation curves than on the dispersion trajectories. Additionally, the optimal angles of Lamb and SH modes in unidirectional carbon–epoxy under the Snell–Descartes law were investigated. The present numerical results may serve as a useful message for the rheological community.
Article
Full-text available
To assess genotypic variation in drought response of silver birch (Betula pendula Roth), we studied the plasticity of 16 physiological traits in response to a 12-14-week summer drought imposed on four clones in two consecutive years. In a common garden experiment, 1-year-old clonal trees from regions with low (550 mm year-1) to high rainfall (1270 mm year-1) were grown in 45-l pots, and leaf gas exchange parameters, leaf water potentials, leaf osmotic potentials and leaf carbon isotope signatures were repeatedly measured. There were no clonal differences in leaf water potential, but stomatal conductance (g_s), net photosynthesis at ambient carbon dioxide concentration, photosynthetic water-use efficiency, leaf carbon isotope composition (δ13C) and leaf osmotic potentials at saturation (Π_0) and at incipient plasmolysis (Π_p) were markedly influenced by genotype, especially g_s and osmotic adjustment. Genotypes of low-rainfall origin displayed larger osmotic adjustment than genotypes of high-rainfall origin, although their Π_0 and Π_p values were similar or higher with ample water supply. Genotypes of low-rainfall origin had higher g_s than genotypes of high-rainfall origin under both ample and limited water supply, indicating a higher water consumption that might increase competitiveness in drought-prone habitats. Although most parameters tested were significantly influenced by genotype and treatment, the genotype × treatment interactions were not significant. The genotypes differed in plasticity of the tested parameters and in their apparent adaptation to drought; however, among genotypes, physiological plasticity and drought adaptation were not related to each other. Reduction of g_s was the first and most plastic response to drought in all genotypes, and allowed the maintenance of high predawn leaf water potentials during the drought. None of the clones exhibited non-stomatal limitation of photosynthesis. Leaf g_s, photosynthetic capacity, magnitude of osmotic adjustment and δ13C were all markedly lower in 2000 than in 1999, indicating root limitation in the containers in the second year.
Article
To investigate the effect of background noise on visual summation, we measured the contrast detection thresholds for targets with or without a white noise mask in luminance contrast. The targets were Gabor patterns placed at 3° eccentricity to either the left or right of the fixation and elongated along an arc of the same radius to ensure equidistance from fixation for every point along the long axis. The task was a spatial two-alternative forced-choice (2AFC) paradigm in which the observer had to indicate whether the target was on the left or the right of the fixation. The threshold was measured at 75% accuracy with a staircase procedure. The detection threshold decreased with target length with slope -1/2 on log-log coordinates for target lengths between 30' and 300' half-height full-width (HHFW), defining a range of ideal matched-filter summation extending up to about 200' (or about 16× the center width of the Gabor targets). The summation curves for different noise contrasts were shifted copies of each other. For the threshold versus mask contrast (TvN) functions, the target threshold was constant for noise levels up to about -22 dB, then increased with noise contrast to a linear asymptote on log-log coordinates. Since the "elbow" of the target threshold versus noise function is an index of the level of the equivalent noise experienced by the visual system during target detection, our results suggest that the signal-to-noise ratio was invariant with target length. We further show that a linear-nonlinear-linear gain-control model can fully account for these results with far fewer parameters than a matched-filter model.
Article
Full-text available
We present a detailed report of the method, setup, analysis and results of a precision measurement of the positive muon lifetime. The experiment was conducted at the Paul Scherrer Institute using a time-structured, nearly 100%-polarized, surface muon beam and a segmented, fast-timing, plastic scintillator array. The measurement employed two target arrangements; a magnetized ferromagnetic target with a ~4 kG internal magnetic field and a crystal quartz target in a 130 G external magnetic field. Approximately 1.6 x 10^{12} positrons were accumulated and together the data yield a muon lifetime of tau_{mu}(MuLan) = 2196980.3(2.2) ps (1.0 ppm), thirty times more precise than previous generations of lifetime experiments. The lifetime measurement yields the most accurate value of the Fermi constant G_F (MuLan) = 1.1663787(6) x 10^{-5} GeV^{-2} (0.5 ppm). It also enables new precision studies of weak interactions via lifetime measurements of muonic atoms.
Article
A foundation is laid for making the characterization of physical relationships among design variables, process parameters, and performance variables a primary focus of designing statistical process control (SPC) systems. Such characterizations will make it possible to construct and investigate collections of performance measures that managers can use to evaluate and choose policies. An example model is given that describes the interacting sources of variability in an SPC system using X-bar and standard deviation charts. This is followed by a collection of performance measures, examples of how they can be investigated through trade-off curves, and an illustration of interplay between the example model and the behavior of a real system.
Article
The positioning of ocean bottom seismometers (OBS) is a key step in the processing flow of OBS data, especially in the case of self popup types of OBS instruments. The use of first arrivals from airgun shots, rather than relying on the acoustic transponders mounted in the OBS, is becoming a trend and generally leads to more accurate positioning due to the statistics from a large number of shots. In this paper, a linearization of the OBS positioning problem via the multilateration technique is discussed. The discussed linear solution solves jointly for the average water layer velocity and the OBS position using only shot locations and first arrival times as input data.
Article
In this paper, we use (1) the 20 year record of Schumann resonance (SR) signals measured at West Greenwich Rhode Island, USA, (2) the 19 year Lightning Imaging Sensor (LIS)/Optical Transient Detector (OTD) lightning data, and (3) the normal mode equations for a uniform cavity model to quantify the relationship between the observed Schumann resonance modal intensity and the global-average vertical charge moment change M (C km) per lightning flash. This work, by integrating SR measurements with satellite-based optical measurements of global flash rate, accomplishes this quantification for the first time. To do this, we first fit the intensity spectra of the observed SR signals to an eight-mode, three parameter per mode, (symmetric) Lorentzian line shape model. Next, using the LIS/OTD lightning data and the normal mode equations for a uniform cavity model, we computed the expected climatological-daily-average intensity spectra. We then regressed the observed modal intensity values against the expected modal intensity values to find the best fit value of the global-average vertical charge moment change of a lightning flash (M) to be 41 C km per flash with a 99% confidence interval of ±3.9 C km per flash, independent of mode. Mode independence argues that the model adequately captured the modal intensity, the most important fit parameter herein considered. We also tested this relationship for the presence of residual modal intensity at zero lightning flashes per second and found no evidence that modal intensity is significantly different than zero at zero lightning flashes per second, setting an upper limit to the amount of nonlightning contributions to the observed modal intensity.
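A symmetric Lorentzian line-shape model of the kind described, with three parameters per mode (modal intensity I_n, center frequency f_n and half-width γ_n), can be written as

\[
S(f) = \sum_{n=1}^{8} \frac{I_n}{1 + \big((f - f_n)/\gamma_n\big)^{2}},
\]

although the exact parameterization used in the study may differ in detail.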
Article
Full-text available
Math boxes is a recently introduced pen-based user interface for simplifying the task of hand writing difficult mathematical expressions. Visible bounding boxes around subexpressions are automatically generated as the system detects relevant spatial relationships between symbols including superscripts, subscripts, and fractions. Subexpressions contained in a math box can then be extended by adding new terms directly into its given bounds. When new characters are accepted, box boundaries are dynamically resized and neighboring terms are translated to make room for the larger box. Feedback on structural recognition is given via the boxes themselves. In this work, we extend the math boxes interface to include support for subexpression modifications via a new set of pen-based interactions. Specifically, techniques to expand and rearrange terms in a given expression are introduced. To evaluate the usefulness of our proposed methods, we first conducted a user study in which participants wrote a variety of equations ranging in complexity from a simple polynomial to the more difficult expected value of the logistic distribution. The math boxes interface is compared against the commonly used offset typeset (small) method, where recognized expressions are typeset in a system font near the user's unmodified ink. In this initial study, we find that the fluidness of the offset method is preferred for simple expressions but that, as difficulty increases, our math boxes method is overwhelmingly preferred. We then conducted a second user study that focused only on modifying various mathematical expressions. In general, participants worked faster with the math boxes interface, and most new techniques were well received. On the basis of the two user studies, we discuss the implications of the math boxes interface and identify areas where improvements are possible.
Article
Several recent studies have suggested that much of the winter-time Antarctic ice is thin (<0.3m). The presence of extensive areas of thin ice has a significant effect on ocean-atmosphere energy exchange. This is investigated using the Maykut (1978) thin-ice energy-budget model in a study for typical September Antarctic ice and climatic conditions. To study the sensitivity of turbulent heat loss to ice concentrations, the Maykut model is combined with an empirical parameterization for the turbulent fluxes from leads (Andreas, 1980) which takes account of the non-linear relationship between heat loss and lead width (or ice concentration). In this one-dimensional sensitivity study, a constant floe-size is assumed, and ice-concentration variations are simulated by changing the width of the leads between floes. The modelled results, for the floe size considered, indicate that at 80% ice concentration the turbulent heat loss through the thin ice component can be greater than that from leads. As concentration decreases, however, the fractional loss through the ice, and, hence, the ice-thickness distribution, becomes less significant. For the concentrations lower than 50%, there is little change in turbulent loss with further decrease in ice cover, as the atmosphere effectively “sees” open ocean.
Article
Full-text available
Glycosylation of proteins is a key function of the biosynthetic-secretory pathway in the endoplasmic reticulum (ER) and Golgi apparatus. Glycosylated proteins play a crucial role in cell trafficking and signaling, cell-cell adhesion, blood-group antigenicity, and immune response. In addition, the glycosylation of proteins is an important parameter in the optimization of many glycoprotein-based drugs such as monoclonal antibodies. In vitro glycoengineering of proteins requires glycosyltransferases as well as expensive nucleotide sugars. Here, we present a designed pathway consisting of five enzymes, glucokinase (Glk), phosphomannomutase (ManB), mannose-1-phosphate-guanyltransferase (ManC), inorganic pyrophosphatase (PmPpA) and 1-domain polyphosphate kinase 2 (1D-Ppk2) expressed in E. coli for the cell-free production and regeneration of GDP-mannose from mannose and polyphosphate with catalytic amounts of GDP and ADP. It was shown that GDP-mannose is produced under various conditions, i.e. pH 7–8, temperature 25–35°C and co-factor concentrations of 5–20 mM MgCl2. The maximum reaction rate of GDP-mannose achieved was 2.7 µM/min at 30°C and 10 mM MgCl2, producing 566 nmol GDP-mannose after a reaction time of 240 min. With respect to the initial GDP concentration (0.8 mM) this is equivalent to a yield of 71%. Additionally, the cascade was coupled to purified, transmembrane-deleted Alg1 (ALG1ΔTM), the first mannosyltransferase in the ER-associated lipid-linked oligosaccharide (LLO) assembly. Thereby, in a one-pot reaction, phytanyl-PP-(GlcNAc)2-Man1 was produced with efficient nucleotide sugar regeneration for the first time. Phytanyl-PP-(GlcNAc)2-Man1 can serve as a substrate for the synthesis of LLO for the cell-free in vitro glycosylation of proteins. A high-performance anion exchange chromatography method with UV and conductivity detection (HPAEC-UV/CD) assay was optimized and validated to determine the enzyme kinetics. The established kinetic model enabled the optimization of the GDP-mannose regenerating cascade and can further be used to study coupling of the GDP-mannose cascade with glycosyltransferases. Overall, the study envisages a first step towards the development of a platform for the cell-free production of LLOs as precursors for in vitro glycoengineering of proteins.
Article
Full-text available
Stratosphere-to-troposphere transport (STT) provides an important natural source of ozone to the upper troposphere, but the characteristics of STT events in the Southern Hemisphere extratropics and their contribution to the regional tropospheric ozone budget remain poorly constrained. Here, we develop a quantitative method to identify STT events from ozonesonde profiles. Using this method we estimate the seasonality of STT events and quantify the ozone transported across the tropopause over Davis (69° S, 2006–2013), Macquarie Island (54° S, 2004–2013), and Melbourne (38° S, 2004–2013). STT seasonality is determined by two distinct methods: a Fourier bandpass filter of the vertical ozone profile and an analysis of the Brunt–Väisälä frequency. Using a bandpass filter on 7–9 years of ozone profiles from each site provides clear detection of STT events, with maximum occurrences during summer and minimum during winter for all three sites. The majority of tropospheric ozone enhancements owing to STT events occur within 2.5 and 3 km of the tropopause at Davis and Macquarie Island respectively. Events are more spread out at Melbourne, occurring frequently up to 6 km from the tropopause. The mean fraction of total tropospheric ozone attributed to STT during STT events is ∼1.0–3.5% at each site; however, during individual events, over 10% of tropospheric ozone may be directly transported from the stratosphere. The cause of STTs is determined to be largely due to synoptic low-pressure frontal systems, determined using coincident ERA-Interim reanalysis meteorological data. Ozone enhancements can also be caused by biomass burning plumes transported from Africa and South America, which are apparent during austral winter and spring and are determined using satellite measurements of CO. To provide regional context for the ozonesonde observations, we use the GEOS-Chem chemical transport model, which is too coarsely resolved to distinguish STT events but is able to accurately simulate the seasonal cycle of tropospheric ozone columns over the three southern hemispheric sites. Combining the ozonesonde-derived STT event characteristics with the simulated tropospheric ozone columns from GEOS-Chem, we estimate STT ozone flux near the three sites and see austral-summer-dominated yearly amounts of between 5.7 and 8.7 × 10^17 molecules cm−2 a−1.
Article
On average, secondary impact craters are expected to deepen and become more symmetric as impact velocity (v_i) increases with downrange distance (L). We have used high-resolution topography (1–2 m/pixel) to characterize the morphometry of secondary craters as a function of L for several well-preserved primary craters on Mars. The secondaries in this study (N = 2,643) span a range of diameters (25 m ≤ D ≤ 400 m) and estimated impact velocities (0.4 km/s ≤ v_i ≤ 2 km/s). The range of diameter-normalized rim-to-floor depth (d/D) broadens and reaches a ceiling of d/D ≈ 0.22 at L ≈ 280 km (v_i = 1–1.2 km/s) whereas average rim height shows little dependence on v_i for the largest craters (h/D ≈ 0.02, D > 60 m). Populations of secondaries that express the following morphometric asymmetries are confined to regions of differing radial extent: planform elongations (L < 110–160 km), taller downrange rims (L < 280 km), and cavities that are deeper uprange (L < 450–500 km). Populations of secondaries with lopsided ejecta were found to extend to at least L ∼ 700 km. Impact hydrocode simulations with iSALE-2D for strong, intact projectile and target materials predict a ceiling for d/D vs. L whose trend is consistent with our measurements. This study illuminates the morphometric transition from subsonic to hypervelocity cratering and describes the initial state of secondary crater populations. This has applications to understanding the chronology of planetary surfaces and the long-term evolution of small crater populations.
Article
Transient enhanced diffusion of boron inhibits the formation of ultrashallow junctions needed in the next-generation of microelectronic devices. Reducing the junction depth using rapid thermal annealing with high heating rates comes at a cost of increasing sheet resistance. The focus of this study is to design the optimal annealing temperature program that gives the minimum junction depth while maintaining satisfactory sheet resistance. Comparison of different parameterizations of the optimal trajectories shows that linear profiles gave the best combination of minimizing junction depth and sheet resistance. Worst-case robustness analysis of the optimal control trajectory motivates improvements in feedback control implementations for these processes
Article
Full-text available
A physical system can be studied by carrying out an experiment in which one of its variables is modified and the resulting output of the system is observed and measured. For a given study, this kind of experiment can generate a set of coordinate pairs (x_i, y_i), which can have a graphical representation or be represented by a given function y(x; a_1, a_2, ..., a_n) obtained from a curve-fit procedure. The parameters a_1, a_2, ..., a_n are determined in such a way that the points (x_i, y_i) have the maximum likelihood of belonging to the fitted function. In general, such a procedure is carried out by applying the Least Squares Method. On the other hand, the simulation of the resulting mathematical model generates a point, denoted herein by y(x)_m, which has some uncertainty associated with it. This paper proposes to determine the uncertainties associated with models that can be reduced to a first-order representation, in terms of the central second moments sigma_y(x)_m, which are obtained from the statistical properties of the experimental points around y(x). Equations to determine the uncertainties are obtained for such models and applied to two sets of experimental data. The results obtained can be considered good. When the fluctuations can be represented by a normal distribution, the fitted function can be given in the following form: y(x) = y(x)_m ± sigma_y(x)_m or y(x) = y(x)_m ± 3 sigma_y(x)_m, with a probability of 68.3% and 99.7%, respectively. Herein, the graphical representation is given in terms of three lines, and the distance between the upper and lower limits gives an indication of the experimental precision. Moreover, the visualization of the points outside those limits allows one to verify the existence of outliers arising during the measurement of the variables when performing the experiment.
Article
Full-text available
The Dynamic Global Core Plasma Model (DGCPM) is an empirical dynamical model of the plasmasphere which, despite its simple mathematical form, or perhaps because of its simple mathematical form, has enjoyed wide use in the space physics modeling community. In this paper we present some recent observations from the European quasi-Meridional Magnetometer Array (EMMA) and compare these with the DGCPM. The observations suggest more rapid daytime refilling and loss than what is described in the DGCPM. We then modify the DGCPM by changing the values of some of its parameters, leaving the functional form intact. The modified DGCPM agrees much better with the EMMA observations. The modification resulted in an order-of-magnitude faster daytime refilling and nighttime loss. These results are also consistent with previous observations of daytime refilling.
Article
Full-text available
We study theoretically the edge fracture instability in sheared complex fluids, by means of linear stability analysis and direct nonlinear simulations. We derive an exact analytical expression for the onset of edge fracture in terms of the shear-rate derivative of the fluid's second normal stress difference, the shear-rate derivative of the shear stress, the jump in shear stress across the interface between the fluid and the outside medium (usually air), the surface tension of that interface, and the rheometer gap size. We provide a full mechanistic understanding of the edge fracture instability, carefully validated against our simulations. These findings, which are robust with respect to choice of rheological constitutive model, also suggest a possible route to mitigating edge fracture, potentially allowing experimentalists to achieve and accurately measure stronger flows than hitherto.
Article
Probability is a vital measure in numerous disciplines, from bioinformatics and econometrics to finance/insurance and computer science. Developed from a successful course, Fundamental Probability provides an engaging and hands-on introduction to this important topic. Whilst the theory is explored in detail, this book also emphasises practical applications, with the presentation of a large variety of examples and exercises, along with generous use of computational tools.
Article
We give a simple method to calculate without approximation the balanced density field of an axisymmetric vortex in a compressible atmosphere in various coordinate systems given the tangential wind speed as a function of radius and height and the vertical density profile at large radius. The method is generally applicable, but the example considered is relevant to tropical cyclones. The exact solution is used to investigate the accuracy of making the anelastic approximation in a tropical cyclone, i.e. the neglect of the radial variation of density when calculating the gradient wind. We show that the core of a baroclinic vortex with tangential wind speed decreasing with height is positively buoyant in terms of density differences compared at constant height, but at some levels may be interpreted as cold-cored or warm-cored depending on the surfaces along which the temperature deviation is measured. However, it is everywhere warm-cored if the potential temperature deviation is considered. In contrast, a barotropic vortex in a stably-stratified atmosphere is cold-cored at all levels when viewed in terms of the temperature deviation at constant height or constant σ, but warm-cored when viewed in terms of the potential temperature deviation along these surfaces. The calculations provide a possible explanation for the observed reduction in surface air temperature in the inner core of tropical cyclones.
Article
Full-text available
The effect of intermittent and Gaussian inflow conditions on wind energy converters is studied experimentally. Two different flow situations were created in a wind tunnel using an active grid. Both flows exhibit nearly equal mean velocity values and turbulence intensities but strongly differ in their two point statistics, namely their distribution of velocity increments on a variety of timescales, one being Gaussian distributed, and the other one being strongly intermittent. A horizontal axis model wind turbine is exposed to both flows, isolating the effect on the turbine of the differences not captured by mean values and turbulence intensities. Thrust, torque and power data were recorded and analyzed, showing that the model turbine does not smooth out intermittency. Intermittent inflow is converted to similarly intermittent turbine data on all scales considered, reaching down to sub-rotor scales in space. This indicates that it is not correct to assume a smoothing of intermittent wind speed increments below the size of the rotor.
Article
We report in-situ measurements of plasma irregularities associated with a reverse flow event (RFE) in the cusp F region ionosphere. The Investigation of Cusp Irregularities 3 (ICI-3) sounding rocket, while flying through an RFE, encountered several regions with density irregularities down to meter-scales. We address in detail the region with the most intense small-scale fluctuations in both the density and in the AC electric field, which were observed on the equatorward edge of a flow shear, and coincided with a double-humped jet of fast flow. Due to its long-wavelength and low-frequency character, the Kelvin-Helmholtz instability (KHI) alone cannot be the source of the observed irregularities. Using ICI-3 data as inputs we perform a numerical stability analysis of the inhomogeneous energy-density-driven instability (IEDDI) and demonstrate that it can excite electrostatic ion cyclotron waves in a wide range of wavenumbers and frequencies for the electric field configuration observed in that region, which can give rise to the observed small-scale turbulence. The IEDDI can seed as a secondary process on steepened vortices created by a primary KHI. Such an interplay between macro-processes and micro-processes could be an important mechanism for ion heating in relation to RFEs.
Conference Paper
An efficient interpolation scheme which is bilinear interpolation for multilevel fast multipole algorithm (MLFMA) is presented. Bilinear interpolation is simple, easy to implement and has quadratic interpolation behavior. Numerical results show that bilinear interpolation achieves good performance even in complex model when compared to bicubic and Lagrange interpolation which is commonly used in MLFMA implementations.
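A minimal C sketch of the elementary bilinear interpolation operation on a unit cell is given below; the MLFMA-specific angular grids, sampling rates and periodic wrapping of the cited work are not modeled here.

/* Bilinear interpolation from four corner samples f00, f10, f01, f11 at
 * local coordinates 0 <= u, v <= 1: interpolate along u, then along v. */
#include <stdio.h>

static double bilinear(double f00, double f10, double f01, double f11,
                       double u, double v)
{
    double a = f00 + u * (f10 - f00);   /* interpolate along u at v = 0 */
    double b = f01 + u * (f11 - f01);   /* interpolate along u at v = 1 */
    return a + v * (b - a);             /* interpolate the results along v */
}

int main(void)
{
    /* corner samples of f(u,v) = 1 + 2u + 3v + uv, reproduced exactly */
    printf("f(0.25, 0.5) ~ %.4f\n",
           bilinear(1.0, 3.0, 4.0, 7.0, 0.25, 0.5));
    return 0;
}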
Conference Paper
Anisotropic diffusion is a powerful image processing technique, which allows noise to be removed and sharp features to be enhanced simultaneously in two and three dimensional images. Anisotropic diffusion filtering concentrates on preservation of important surface features, such as sharp edges and corners, by applying direction dependent smoothing. This feature is very important in image smoothing, edge detection, image segmentation and image enhancement. For instance, in the image segmentation case, it is necessary to smooth images as accurately as possible in order to use gradient-based segmentation methods. If image edges are seriously polluted by noise, these methods would not be able to detect them, so edge features cannot be retained. The aim of this paper is to present a comparative study of three methods that have been used for smoothing using anisotropic diffusion techniques. These methods have been compared using the root mean square error (RMSE) and the Nash-Sutcliffe error. Numerical results are presented for both artificial data and real data.
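The two error measures used for the comparison are standard; for a reference image x and a filtered image y over N pixels they can be written as

\[
\mathrm{RMSE} = \sqrt{\frac{1}{N}\sum_{i=1}^{N}\big(y_i - x_i\big)^{2}}, \qquad
\mathrm{NSE} = 1 - \frac{\sum_{i=1}^{N}\big(y_i - x_i\big)^{2}}{\sum_{i=1}^{N}\big(x_i - \bar{x}\big)^{2}},
\]

where x̄ is the mean of the reference values and NSE = 1 indicates a perfect match (the Nash–Sutcliffe measure is borrowed from hydrology and applied here to image intensities).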
Conference Paper
Computing derivatives from observed integral data is known as an ill-posed inverse problem. The ill-posed qualifier refers to the noise amplification that can occur in the numerical solution if appropriate measures are not taken (small errors for measurement values on specified points may induce large errors in the derivatives). For example, the accurate computation of the derivatives is often hampered in medical images by the presence of noise and a limited resolution, affecting the accuracy of segmentation methods. In our case, we want to obtain an upper airways segmentation, so it is necessary to compute the first derivatives as accurately as possible, in order to use gradient-based segmentation techniques. For this reason, the aim of this paper is to present a comparative analysis of several methods (finite differences, interpolation, operators and regularization), that have been developed for numerical differentiation. Numerical results are presented for artificial and real data sets.