Article

Guide to the Expression of Uncertainty in Measurement

... The efficiency for heating and cooling can be taken as COP and EER, respectively, according to Australian Standard 5151 (AS/NZS ISO 5151:2022) [53,56]. Therefore, for heating loads: ...
... For air conditioning systems, EER and COP are assumed to be 1.6 and 0.6 [54][55][56][57][58][59]. • The additional light coefficient is assumed to be 0.9 [54][55][56][57][58][59][60]. ...
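The cited studies' exact load equations are not reproduced in these excerpts. As a generic illustration of how COP and EER enter such calculations, the sketch below converts thermal heating and cooling loads into electricity consumption; the function names and numeric values are hypothetical, not taken from the cited papers.

```python
# Minimal sketch (not the cited studies' exact model): converting thermal
# loads to electricity consumption with COP (heating) and EER (cooling).
def electricity_for_heating(heating_load_kwh: float, cop: float) -> float:
    """Electrical energy [kWh] needed to deliver a thermal heating load."""
    return heating_load_kwh / cop

def electricity_for_cooling(cooling_load_kwh: float, eer: float) -> float:
    """Electrical energy [kWh] needed to remove a thermal cooling load."""
    return cooling_load_kwh / eer

# Hypothetical values for illustration only.
print(electricity_for_heating(1000.0, cop=3.5))   # ~285.7 kWh of electricity
print(electricity_for_cooling(1000.0, eer=3.0))   # ~333.3 kWh of electricity
```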
Article
Full-text available
The global demand for energy is significantly impacted by the consumption patterns within the building sector. As such, the importance of energy simulation and prediction is growing exponentially. This research leverages Building Information Modelling (BIM) methodologies, creating a synergy between traditional software methods and algorithm-driven approaches for comprehensive energy analysis. The study also proposes a method for monitoring select energy management factors, a step that could potentially pave the way for the integration of digital twins in energy management systems. The research is grounded in a case study of a newly constructed educational building in New South Wales, Australia. The digital physical model of the building was created using Autodesk Revit, a conventional software for BIM methodology. EnergyPlus, facilitated by OpenStudio, was employed for the traditional software-based energy analysis. The energy analysis output was then used to develop preliminary algorithm models using regression strategies in Python. In this regression analysis, the temperature and relative humidity of each energy unit were used as independent variables, with their energy consumption being the dependent variable. The sigmoid algorithm model, known for its accuracy and interpretability, was employed for advanced energy simulation. This was combined with sensor data for real-time energy prediction. A basic digital twin (DT) example was created to simulate the dynamic control of air conditioning and lighting, showcasing the adaptability and effectiveness of the system. The study also explores the potential of machine learning, specifically reinforcement learning, in optimizing energy management in response to environmental changes and usage conditions. Despite the current limitations, the study identifies potential future research directions. These include enhancing model accuracy and developing complex algorithms to boost energy efficiency and reduce costs.
... At least three samples were taken and tested each time. Analysed for representativeness, the material was then tested. In estimating measurement uncertainty, this study used procedures compliant with Polish and German standards [42,43]. The energy value results obtained for the samples tested over the year were averaged. ...
Article
Full-text available
Using a wide range of organic substrates in the methane fermentation process enables efficient biogas production. Nonetheless, in many cases, the efficiency of electricity generation in biogas plant cogeneration systems is much lower than expected, close to the calorific value of the applied feedstock. This paper analyses the energy conversion efficiency in a 1 MWel agricultural biogas plant fed with corn silage or vegetable waste and pig slurry as a feedstock dilution agent, depending on the season and availability. Biomass conversion studies were carried out for 12 months, during which substrate samples were taken once a month. The total primary energy in the substrates was estimated in laboratory conditions by measuring the released heat (17,760 MWh·year⁻¹), and, in the case of pig slurry, biochemical methane potential (BMP, 201.88 ± 3.21 m³·Mg VS⁻¹). Further, the substrates were analysed in terms of their chemical composition, from protein, sugar and fat content to mineral matter determination, among other things. The results obtained during the study were averaged. Based on such things as the volume of the biogas, the amount of chemical (secondary) energy contained in methane as a product of biomass conversion (10,633 MWh·year⁻¹) was calculated. Considering the results obtained from the analyses, as well as the calculated values of the relevant parameters, the biomass conversion efficiency was determined as the ratio of the chemical energy in methane to the (primary) energy in the substrates, which was 59.87%, as well as the electricity production efficiency, as the ratio of the electricity produced (4913 MWh·year⁻¹) to the primary energy, with a 35% cogeneration system efficiency. The full energy conversion efficiency, related to electricity production, reached a low value of 27.66%. This article provides an insightful, unique analysis of energy conversion in an active biogas plant as an open thermodynamic system.
... Estimation of the uncertainty in the measurement results by the externally calibrated HPLC-UV and UV-VIS-NIR spectrophotometer is based on the mathematical model in Equation (4) and the calculations were performed according to ISO GUM [30]. From this equation, the explicit sources of uncertainty are the measured area/absorbance, the slope and intercept of the calibration lines. ...
... The sensitivity coefficients, c i were calculated by differentiating Equation (6) and were used to calculate the combined standard uncertainty according to Equation (10) [30] [31]. ...
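Equations (4), (6) and (10) of the cited work are not shown in these excerpts. The sketch below illustrates the generic GUM recipe the excerpt describes, for a hypothetical calibration-line model c = (A − b0)/b1 with sensitivity coefficients obtained by differentiation; the numbers are invented, and the correlation between slope and intercept is neglected.

```python
import math

# Generic GUM-style sketch for a calibration-line model
#   c = (A - b0) / b1
# where A is the measured area/absorbance, b0 the intercept and b1 the slope.
# Not the cited Equations (4), (6) or (10); slope-intercept correlation is neglected.
def concentration_with_uncertainty(A, b0, b1, u_A, u_b0, u_b1):
    c = (A - b0) / b1
    # Sensitivity coefficients c_i = partial derivatives of the model
    dc_dA = 1.0 / b1
    dc_db0 = -1.0 / b1
    dc_db1 = -(A - b0) / b1**2
    # Combined standard uncertainty for uncorrelated inputs
    u_c = math.sqrt((dc_dA * u_A) ** 2 + (dc_db0 * u_b0) ** 2 + (dc_db1 * u_b1) ** 2)
    return c, u_c

# Hypothetical numbers for illustration only.
print(concentration_with_uncertainty(A=1.250, b0=0.012, b1=0.085,
                                      u_A=0.004, u_b0=0.003, u_b1=0.001))
```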
... Uncertainty budgets have been generated following the guide to the expression of uncertainty in measurements (GUM) [16]. ...
... Solid-state detectors, including both unshielded diodes and synthetic diamond detectors, have been shown to demonstrate over-response in small fields. In the case of unshielded silicon diodes, it is understood that this is a result of the increased electron fluence and higher stopping power within the relatively dense sensitive layer [14][15][16][17][18], while for the microDiamond, it is suspected this is largely caused by the relatively dense layers encasing the sensitive volume [19], [20]. The concern with these attributes is that they are a function of energy and, therefore, potentially depth. ...
Article
Full-text available
Clinical implementation of SRS cones demands particular experimental care and dosimetric considerations in order to deliver precise and safe radiotherapy to patients. The purpose of this work was to present the commissioning data of recent Aktina cones combined with a 6MV flattened beam produced by an Elekta VersaHD linear accelerator. Additionally, the modelling process, and an assessment of dosimetric accuracy of the RayStation Monte Carlo dose calculation algorithm for cone based SRS was performed. There are currently no studies presenting beam data for this equipment and none that outlines the modelling parameters and validation of dose calculation using RayStation's photon Monte Carlo dose engine with cones. Beam data was measured using an SFD and a microDiamond and benchmarked against EBT3 film for cones of diameter 5-39 mm. Modelling was completed and validated within homogeneous and heterogeneous phantoms. End-to-end image-guided validation was performed using a StereoPHAN™ housing, an SRS MapCHECK and EBT3 film, and calculation time was investigated as a function of statistical uncertainty and field diameter. The TPS calculations agreed with measured data within their estimated uncertainties and clinical treatment plans could be calculated in under a minute. The data presented serves as a reference for others commissioning Aktina stereotactic cones and the modelling parameters serve similarly, while providing a starting point for those commissioning the same TPS algorithm for use with cones. It has been shown in this work that RayStation's Monte Carlo photon dose algorithm performs satisfactorily in the presence of SRS cones.
... Following the ISO GUM rule [36], both of the two components of uncertainty (type A and type B) were evaluated. ...
... This last uncertainty is obtained from the diagonal elements of the covariance matrix of the unknown [37]. In any case, it turns out to be much lower than the dispersion due to repeated tests and the other type B sources, so it has been neglected in the quadrature sum of the components [36]. ...
Article
Full-text available
This experimental work presents the results of measurements of thermal conductivity λ and convection heat transfer coefficient h on regular-structure PLA and aluminium foams with a low density ratio (~0.15), carried out with a TCP (thermal conductivity probe) built in the authors’ laboratory. Measurements were performed with two fluids, water and air: pure fluids, and samples with the PLA and aluminium foams immersed in both fluids, were tested. Four temperatures (10, 20, 30, 40 °C) and various temperature differences ΔT during the tests (between 0.35 and 9 °C) were applied. Also, tests in water mixed with 0.5% of a gel (agar agar) were run in order to increase the water viscosity and to prevent the onset of convection. For these tests, at the end of the heating, the temperature of the probe reaches steady-state values, when all the thermal power supplied by the probe is transferred to the cooled cell wall; thermal conductivity was also evaluated through the guarded hot ring (GHR) method. A difference was found between the results of λ in steady-state and transient regimes, likely due to the difference in the sample volume involved in heating during the tests. Also, the effect of the temperature difference ΔT on the behaviour of the pure fluid and foams was outlined. The mutual effect of thermal conductivity and free-convection heat transfer proves extremely important for describing the behaviour of such composites when they are used to increase or to reduce heat transfer, as heat conductors or insulators. Very few works on this subject are present in the literature, above all regarding low-density regular structures.
... The above error analysis can be scaled by Type-B uncertainties, which come from all error sources except repeatability (Type-A), while the expanded uncertainty includes both and is evaluated according to GUM [54]. These values are very critical and limit a system's ability to measure a quantity. ...
... Moreover, while measuring radii ≤ 12.5 mm, f/0.65 TS has reported the lowest normalization error value in this study. The measurement uncertainty is evaluated for each used TS according to Guide to the expression of Uncertainty in Measurement (GUM) [54]. Future studies could focus on untreated error sources like phase calibration. ...
Article
Full-text available
Spherical surfaces are essential components of optical systems and imaging devices. Moreover, precision spheres are calibration standards for many accurate instruments in dimensional and mass metrology. A spherical surface's main property is its radius of curvature, which can be measured using contact or non-contact methods. Interferometry is an accurate non-contact technique, but some error sources impact it. This study investigates seventeen error sources that affect a laser interferometric system for measuring the radius of curvature of a precision sphere. The measurements are obtained using a Fizeau laser interferometer (GPI-XP, Zygo) with phase-shifting capability and a displacement measuring interferometer (ZMI-1000, Zygo). A silicon–nitride precision sphere with a nominal radius of 12.49965 mm is dealt with in this study. One of the main contributions of this study is proposing three additional error sources: focal shift, optical distortion, and y -axis vibration. Besides, deadpath, nulling, and focal shift error sources contributed 70% of the total uncertainty budget. Also, to correlate measurement accuracy with the reference surface, three transmission spheres ( f /3.3, f /1.5, and f /0.65) are employed; f /0.65 reported the most accurate radius measurement of 12.49922 ± 0.00089 mm. This study also investigates the dependence of the nulling error on the coverage factor that defines the tested surface area. The analysis of the measurement uncertainty and the optimum conditions that minimize the system's potential error sources are described in this work.
... Further, assumptions and models can introduce biases, for example, over- or underestimation of CO2 production by litter in CO2 balances used to estimate ventilation rate. Spurious errors (ISO, 1995) are sometimes difficult to detect, but in some cases they can be identified as outlier values when representing a measurement trend. When detected, these must be withdrawn from the analysis. ...
... In general terms, the basic expression to calculate emission rates is the product of two input variables: ventilation rate and the concentration difference between outlet and inlet air. Applying the law of propagation of uncertainty (ISO, 1995), the input variable with the highest relative uncertainty will be the one with the greatest impact on the uncertainty of the emission value. As an example, if a ventilation measurement is very imprecise compared to a concentration measurement (relative to the measured value), improving the latter will hardly yield a more precise emission value. ...
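For uncorrelated inputs, the first-order law of propagation of uncertainty applied to this product model can be written as follows (the symbols are generic placeholders, not taken from the chapter):

```latex
E = Q\,\Delta C, \qquad
\left(\frac{u(E)}{E}\right)^{2}
  = \left(\frac{u(Q)}{Q}\right)^{2}
  + \left(\frac{u(\Delta C)}{\Delta C}\right)^{2}
```

This makes explicit why the input with the larger relative uncertainty dominates the relative uncertainty of the emission value.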
Chapter
The aim of this chapter is to summarize dietary measures to mitigate methane at the animal level. The chapter briefly summarizes methane measurement techniques. The focus is on the mitigation potential studied in vivo, but when such data were not available, in vitro measurements were included. The chapter covers main dietary ingredients such as forage quality, inclusion of concentrate, grazing management and the inclusion of primary (e.g. lipids) and secondary (e.g. tannins) plant compounds as well as chemical inhibitors (e.g. 3-NOP) in the diet. This chapter can be used as guidance on what to use and at which concentrations in the diet (farmers) and how to quantify the effect (researchers).
... The analytical method for the determination of DHA, SA, BA, MP and EP in the products was validated in terms of linearity, limit of detection (LOD), limit of quantification (LOQ), selectivity, accuracy (recovery), precision (as percent relative standard deviation (%RSD)) and measurement uncertainty. The method was assessed according to the AOAC (Association of Official Analytical Chemistry) guideline for selectivity, accuracy, precision, LOD and LOQ (AOAC, 2012b), and the GUM (guide to the expression of uncertainty in measurement) and EURACHEM guide for measurement uncertainty (ISO, 1993; EURACHEM, 2000). Linearity was evaluated using three replicates of the calibration curve and calculating the correlation coefficient (r²). ...
... The intra-day (three repetitions on the same day) and inter-day (three repetitions over three different days) precision tests were performed. The measurement uncertainty of the concentration of DHA, SA, BA, MP, EP in processed foods was estimated according to the GUM and EURACHEM guide (ISO, 1993;EURACHEM, 2000). The uncertainty factors were measured based on the four components: The calibration curve, foodstuffs matrix (repeatability), preparation of standards and sample preparation. ...
Article
Full-text available
In this study, an analytical method was established and validated to determine preservatives such as dehydroacetic acid, benzoic acid, sorbic acid, methylparaben and ethylparaben. The level of preservatives was measured by a solvent extraction method with an added purification step using Carrez reagent, followed by high-performance liquid chromatography (HPLC). The developed analytical method was successfully applied to determine the concentration of preservatives in various food samples including jam, cheese and soy sauce, displaying high accuracy (recoveries between 87.8% and 110%) and precision (%RSD less than 5.92% and 7.72% for intra-day and inter-day, respectively). To verify the applicability of the improved test method, 13 selected food items and 521 collected samples were monitored. As a result, all the cases met the Korean standard guidelines. Consequently, this study is expected to contribute to the safety management of preservatives in domestically distributed and imported food.
... In order to eliminate the influence of electromagnetic interference on electronic devices, the test and measuring part of the equipment were separated by a professional measuring cabin for protection against electric fields greater than 100 dB and protection against magnetic fields greater than 40 dB. The connection between the test and measurement part was non-galvanic [26], [27]. The measuring system was fully automated. ...
... The combined measurement uncertainty of the experimental procedure was about 5% [27], [28]. Based on Figures 1 and 2, it can be concluded that neutron and gamma radiation have a different effect on the value of the breakdown voltage of the gas surge arrester. ...
Article
Full-text available
The study examines the effect of neutron and gamma radiation on commercial gas surge arresters. The research is of an experimental-theoretical nature. The experimental part of the research was performed under well-controlled laboratory conditions. The combined measurement uncertainty was about 5%. The experimental system is specially designed for the observed problem and has certain original solutions. The test procedure was fully automated and had software support for experiment management as well as for data collection and statistical processing. The obtained results show that neutron and gamma radiation improves the functional characteristics of gas surge arresters with a memory effect. The obtained results are explained in accordance with the theory of interaction of neutron and gamma fields with material and with the theory of electric discharge in gases. The results presented in this study are important for the design of surge protection in systems that may be exposed to neutron and gamma radiation, because a positive synergistic protection effect can be achieved in hybrid schemes with other surge-protection components whose characteristics are degraded by this radiation.
... Absolute permeability k, the Poiseuille number Po and the hydraulic diameter of a fracture dh, and the experimental relative permeabilities of water-oil flow (krw and kro) were calculated by Equations (1)–(5). From the uncertainty propagation law [41], in this experiment, the combined relative standard uncertainties of Po, k, krw and kro are 0.1%, 0.9%, 1.4% and 1.4%, respectively. ...
Article
Full-text available
The influence of wettability on the permeability performance of water–oil two-phase flow has attracted increasing attention. Dispersed flow and stratified flow are two flow regimes for water–oil two-phase flow in capillary fractures. The theoretical models of relative permeability considering wettability were developed for these two water–oil flow regimes from the momentum equations of the two-fluid model. Wettability coefficients were proposed to study the impact of wettability on relative permeabilities. Experiments were conducted to study the relative permeabilities of laminar water–oil two-phase flow in water-saturated and oil-saturated horizontal capillary fractures with different hydraulic diameters. These fractures were made of polymethylmethacrylate (PMMA) and polytetrafluoroethylene (PTFE), which had different surface wettabilities. In this experiment, the regimes are dispersed flow and stratified flow. The results show that the effect of wettability on the relative permeabilities increases as the hydraulic diameters of capillary fractures decrease for water–oil two-phase flow. The relative permeabilities in a water-saturated capillary fracture are higher than those in an oil-saturated capillary fracture of the same material. The relative permeabilities in a PTFE capillary fracture are larger than those in a PMMA capillary fracture under the same saturated condition. Wettability has little effect on the permeability performances of water–oil two-phase flow in water-saturated capillary fractures, but is significant for those in oil-saturated capillary fractures.
... Furthermore, quality control (QC) testing performed during release and stability studies demands homogeneity assessment of product batches for the justification of the sample size used [19]. Uncertainty is usually defined as the parameter associated with the measurement that characterizes the dispersion of values reasonably attributed to the measurand [20]. Every measurement has a degree of uncertainty irrespective of precision and accuracy. ...
Article
Full-text available
Aqueous solutions containing different concentrations (0.5, 0.6 and 1.0% w/v) of Polyvinylpyrrolidone-Iodine (PVP-I) complex, a well-known antiseptic, were prepared, and their stability and homogeneity were assessed as per the ICH Guidelines and the International Harmonized Protocol, respectively. The solutions were found to be sufficiently homogeneous and stable for a year at 25 °C (60% RH). The measurement uncertainty of the prepared PVP-I solutions was estimated by identifying possible sources of uncertainty using an Ishikawa diagram and preparing an uncertainty budget based on the scope of the calibration laboratory. The stable and homogenized PVP-I solution is to be used in a clinical trial for application on the oro- and nasopharynx against the novel SARS-CoV-2 virus.
... The diagonal elements of both reflect the random noise level. Since Equation (10) represents implicit overdetermined equations, Equation (14) and Equation (16) are different from the error transfer of implicit well-determined equations [12] provided by the Joint Committee for Guides in Metrology (JCGM). They can be considered as more general error transfer formulas. ...
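The cited Equations (10), (14) and (16) are not reproduced in this excerpt. As a rough sketch of the underlying idea, first-order covariance propagation through an overdetermined linear least-squares problem can be written as below; the model, data and noise level are hypothetical, not the paper's star-sensor equations.

```python
import numpy as np

# Generic first-order covariance propagation for an overdetermined least-squares
# problem J x ~= b with measurement covariance Sigma_b. The diagonal of Sigma_x
# reflects the random noise level of the estimated parameters.
def lsq_with_covariance(J: np.ndarray, b: np.ndarray, Sigma_b: np.ndarray):
    x, *_ = np.linalg.lstsq(J, b, rcond=None)
    pinv = np.linalg.pinv(J)                 # (J^T J)^{-1} J^T for full-rank J
    Sigma_x = pinv @ Sigma_b @ pinv.T        # propagated covariance of the solution
    return x, Sigma_x

# Hypothetical example: 5 noisy observations of a 2-parameter linear model.
rng = np.random.default_rng(0)
J = np.column_stack([np.ones(5), np.arange(5.0)])
b = J @ np.array([1.0, 0.5]) + rng.normal(0.0, 0.01, 5)
x, Sigma_x = lsq_with_covariance(J, b, 0.01**2 * np.eye(5))
print(x, np.sqrt(np.diag(Sigma_x)))          # estimates and their standard uncertainties
```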
Article
Full-text available
In the conventional methods of star sensor accuracy evaluation, neither the accuracy calculation formula nor the Monte Carlo method can comprehensively reflect the attitude measurement accuracy of star sensors in real-time. In this paper, a real-time analysis and calculation model for the attitude measurement accuracy of star sensors is proposed. Firstly, the basic attitude measurement model of the star sensor is established through the pinhole imaging model. Moreover, a moving frame of star sensor attitude measurement is established to avoid the defects of attitude representation by the Euler angle in the star sensor accuracy analysis. Then, combined with the error transfer of implicit overdetermined equations proposed in this paper, the error transfers of starspot extraction random error, star catalog random error, and intrinsic parameter system error are provided. Finally, simulation and field experiments are conducted to verify the accuracy of the proposed error analysis theory. The experimental results show that the attitude measurement accuracy of star sensors can be accurately estimated by using single-frame data of guide stars.
... nm). This is related to the systematic error of our setup [26]. These experimental results are in the typical range of both types of the lasers as specified in their datasheets. ...
Article
A wavemeter using near-field Talbot diffraction is simplified. High-accuracy wavelength measurement can be obtained from each spacing between two adjacent intensity maxima of the periodicity along multiples of the Talbot distance. Our experimental results are confirmed by our calculations. In contrast to previous works, we use a diffraction grating with a sufficiently large grating period. Therefore, the setup is practical and the pixel size of the camera used to measure the interference pattern can be large. Moreover, the obtained Talbot patterns are sharp without using post-image processing. In our recent setup, we use a grating with a period of 12.5 μm. With visible-light lasers, the Talbot distances are in the range of a few hundred micrometers. These distances are much larger than the pixel size of a normal camera. An external cavity diode laser with rubidium saturated absorption spectroscopy is used to calibrate our setup. High accuracy at the 1 pm level can be achieved.
... Often, the order of magnitude of the parameter to be assessed, e.g., form error, is similar to that of the uncertainty of the measurement system. In evaluating the uncertainty associated with the fitted parameter, a standard GUM approach [1] is likely to define a coverage interval that includes negative values. Furthermore, the noise in the instrument can introduce significant bias in estimates of the geometrical parameters. ...
Conference Paper
Full-text available
The assessment of the geometry of artefacts in length metrology is concerned with estimating dimension, form and roughness. While all three are positive quantities, the evaluation of form and roughness is problematic, mainly because the values of the parameters to be measured are often of the same order as the measurement uncertainty. Furthermore, random effects associated with a measurement system will in general bias the estimate of the parameter. This paper discusses these issues using a Bayesian approach in which prior distributions are used to ensure parameter estimates are physically meaningful.
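A toy illustration of the idea described in this abstract, under the assumption of a Gaussian likelihood and a flat prior restricted to non-negative values; all numbers are invented and the paper's actual priors and models are not reproduced here.

```python
import numpy as np

# Toy illustration: estimate a positive quantity (e.g. a form-error parameter)
# observed with Gaussian noise, using a flat prior on [0, +inf) so the
# resulting interval cannot include negative values, unlike a plain y +/- k*u
# coverage interval when y is comparable to u.
y_obs, u = 0.05, 0.10                        # observation and its standard uncertainty
theta = np.linspace(0.0, 1.0, 20001)         # grid over admissible (non-negative) values
dtheta = theta[1] - theta[0]

posterior = np.exp(-0.5 * ((y_obs - theta) / u) ** 2)   # likelihood x flat positive prior
posterior /= posterior.sum() * dtheta                   # normalize on the grid

cdf = np.cumsum(posterior) * dtheta
lo, hi = np.interp([0.025, 0.975], cdf, theta)
print(f"posterior mean: {np.sum(theta * posterior) * dtheta:.3f}")
print(f"95% credible interval: [{lo:.3f}, {hi:.3f}]")   # lower bound >= 0 by construction
```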
... No measurement done as a part of any scientific research, no matter how carefully we look, can be considered exact. The quantification of the uncertainty associated with the measurements is very important to identify the value of the measurand precisely [44]. ...
Chapter
Full-text available
The blown film production process involves several stages, such as extrusion and film cooling. The demand for high productivity with adequate film quality has led researchers to investigate the bubble kinematics and the thin-film cooling process in detail using powerful Computational Fluid Dynamics tools, which allow them to study the flow field and the related heat transfer from the hot thin film to the surrounding media in depth; several runs can be performed at minimal cost, producing a large amount of scientific data, in addition to the experimental measurements. The combination of both yields better understanding and more accurate results.
... Based on the analyses performed, the uncertainty of the results was also calculated as a numerical value indicating the degree to which the obtained measurement result can be regarded as correct. In estimating measurement uncertainty, this study used procedures compliant with Polish and German standards [41,42]. The energy value results obtained for the samples tested over the year were averaged. ...
Preprint
Full-text available
Using a wide range of organic substrates in the methane fermentation process enables efficient biogas production. Nonetheless, in many cases, the efficiency of electricity generation in biogas plant cogeneration systems is much lower than expected, close to the calorific value of the applied feedstock. This paper analyses energy conversion efficiency in a 1 MWel agricultural biogas plant fed with corn silage or vegetable waste and pig slurry as a feedstock dilution agent, depending on the season and availability. Biomass conversion studies were carried out for 12 months, during which substrate samples were taken once a month. The total primary energy in substrates was estimated in laboratory conditions by measuring the heat of combustion in a ballistic bomb calorimeter (17,760 MWh·year⁻¹), and in the case of pig slurry, biochemical methane potential (BMP, 201.88 ± 3.21 m³·Mg VS⁻¹). Further, the substrates were analysed in terms of their chemical composition, from protein, sugar and fat content to mineral matter determination, among other things. The results obtained during the study were averaged. Based on such things as the amount of biogas produced at the plant, the amount of chemical (secondary) energy contained in methane as a product of biomass conversion (10,633 MWh·year⁻¹) was calculated. Considering the results obtained from the analyses, as well as the calculated values of the relevant parameters, biomass conversion efficiency was determined as a ratio of chemical energy in methane to (primary) energy in substrates, which was 59.87%, as well as electricity production efficiency, as a ratio of electricity produced (4,913 MWh·year⁻¹) to primary energy, with a 35% cogeneration system efficiency. Full energy conversion efficiency, related to electricity production, reached a low value of 27.66%. This article provides an insightful, unique analysis of energy conversion in an active biogas plant as an open thermodynamic system.
... For the allergen ingredients, uncertainties for water and nitrogen content were evaluated following the principles of ISO Guide 35 (ISO, 2017b) and the ISO Guide to the expression of uncertainty in measurement (GUM) (BIPM et al., 2008; ISO, 2008). The general model used was ...
... [116] They are based on the pair potential of Przybytek et al. [133] and the three-body potential of Cencek et al. [134] Uncertainties in the density virial coefficients were propagated into uncertainties in the acoustic virial coefficients by the Monte Carlo method recommended in Supplement 1 to the "Guide to the Expression of Uncertainty in Measurement". [135] Gokul et al. [132] formulated the acoustic virial equation of state as expansion in terms of density or pressure. The uncertainty of speeds of sound calculated with the acoustic virial equation of state was estimated from the uncertainty of the acoustic virial coefficients. ...
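The Monte Carlo method of GUM Supplement 1 propagates probability distributions, rather than only standard uncertainties, through the measurement model. A minimal generic sketch follows; the model and input distributions are illustrative placeholders, not the virial-coefficient calculation discussed in the excerpt.

```python
import numpy as np

# Sketch of a GUM Supplement 1 style Monte Carlo propagation: sample the input
# quantities from their assigned distributions, evaluate the model, and
# summarize the resulting output distribution.
rng = np.random.default_rng(42)
N = 200_000

# Example model: y = x1 / x2 (purely illustrative)
x1 = rng.normal(10.0, 0.05, N)             # Gaussian input, u = 0.05
x2 = rng.uniform(1.98, 2.02, N)            # rectangular (Type B) input
y = x1 / x2

y_est = np.mean(y)
u_y = np.std(y, ddof=1)                    # standard uncertainty from the MC sample
lo, hi = np.percentile(y, [2.5, 97.5])     # probabilistically symmetric 95% coverage interval
print(f"y = {y_est:.4f}, u(y) = {u_y:.4f}, 95% interval [{lo:.4f}, {hi:.4f}]")
```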
Preprint
Recent advances regarding the interplay between ab initio calculations and metrology are reviewed, with particular emphasis on gas-based techniques used for temperature and pressure measurements. Since roughly 2010, several thermophysical quantities - in particular, virial and transport coefficients - can be computed from first principles without uncontrolled approximations and with rigorously propagated uncertainties. In the case of helium, computational results have accuracies that exceed the best experimental data by at least one order of magnitude and are suitable to be used in primary metrology. The availability of ab initio virial and transport coefficients contributed to the recent SI definition of temperature by facilitating measurements of the Boltzmann constant with unprecedented accuracy. Presently, they enable the development of primary standards of temperature in the range 2.5-552 K and pressure up to 7 MPa using acoustic gas thermometry, dielectric constant gas thermometry, and refractive index gas thermometry. These approaches will be reviewed, highlighting the effect of first-principles data on their accuracy. The recent advances in electronic structure calculations that enabled highly accurate solutions for the many-body interaction potentials and polarizabilities of atoms - particularly helium - will be described, together with the subsequent computational methods, most often based on quantum statistical mechanics and its path-integral formulation, that provide thermophysical properties and their uncertainties. Similar approaches for molecular systems, and their applications, are briefly discussed. Current limitations and expected future lines of research are assessed.
... In this paper, the uncertainties of all experimental measurement parameters were evaluated according to an analysis of uncertainty propagation [38]. For example, the change in the subcooled water temperature during the data acquisition process is less than 0.2 °C, and the accuracy of the T-type thermocouple was 0.5 °C. ...
... Table 3 shows the main parameters of the experimental apparatus. According to the standard of ISO GUM [35], Type A uncertainty refers to the statistical analysis of observations, while Type B refers to the system uncertainty caused by the experimental instrument and system [36,37]. The combined standard uncertainty u_C can be calculated through Eqs. ...
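A minimal sketch of the Type A / Type B combination described in this excerpt, assuming a rectangular distribution for the instrument specification; the readings and the specification limit are placeholder values, and the cited equations are not reproduced.

```python
import math
import statistics

# Generic GUM recipe: Type A from repeated observations, Type B from an
# instrument specification, combined in quadrature.
readings = [25.12, 25.15, 25.10, 25.14, 25.11]               # placeholder repeated observations
u_A = statistics.stdev(readings) / math.sqrt(len(readings))  # Type A: std. dev. of the mean

spec_limit = 0.05                                            # +/- limit from a datasheet (placeholder)
u_B = spec_limit / math.sqrt(3)                              # Type B: rectangular distribution assumed

u_C = math.sqrt(u_A ** 2 + u_B ** 2)                         # combined standard uncertainty
print(f"u_A = {u_A:.4f}, u_B = {u_B:.4f}, u_C = {u_C:.4f}")
```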
... The relevant standards [48], [49] provide guidance on how the sources of uncertainty in equipment calibration and measurements should be combined to determine the overall measurement uncertainty. Based on these guidelines and the mathematical methodologies described in [50] we formed the uncertainty model of our EFS measurements as detailed below. ...
Article
Full-text available
Everyday living environments concentrate a growing amount of wireless communications, leading to increased public concern about radiofrequency (RF) electromagnetic field (EMF) exposure. Recent technological advances are turning the focus to Internet of Things (IoT) systems that enable automated and continuous real-time EMF monitoring, facing however several challenges, mainly stemming from infrastructural costs. This paper seeks to provide a comprehensive view of RF-EMF levels in Greece and evidence-based decision support for a spatially prioritized deployment of an IoT RF-EMF monitoring system. We applied the stratified sampling method to estimate Electric Field Strength (EFS) in the 27 MHz–3 GHz range in 661 schools. Three different residential areas were considered, i.e. urban, semi-urban and rural. Results showed that the 95% confidence interval for the EFS is (0.40, 0.44) V/m, with a central value equal to the sample mean of 0.42 V/m. We obtained strong evidence that the mean EFS value for all Greek schools is 0.42 V/m, which is 52 times lower than the Greek safety limit and equal to 1% of international limits. Mean EFS values of individual residential areas were also significantly below safety limits. Rural areas displayed the highest EFS peaks, making them the strongest candidate area from which to start deploying an IoT RF-EMF monitoring system.
... Guide to the expression of Uncertainty in Measurement (GUM), the expanded uncertainty of a measurand is expressed as a large fraction of the distribution of values it is likely to take [110]. A calibration certificate documents the expanded uncertainty, reflecting the different sources of error associated with the instrument used for the calibration of the material measure. ...
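In GUM terms, the expanded uncertainty quoted on a calibration certificate is the combined standard uncertainty scaled by a coverage factor; stated generically (not as the review's own notation):

```latex
U = k\,u_c(y), \qquad \text{coverage interval } [\,y - U,\; y + U\,]
```

where k ≈ 2 corresponds to a coverage probability of roughly 95 % when the output distribution is approximately normal.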
Article
Full-text available
As the need for the manufacturing of complex surface topographies increases, traceable measurement with known uncertainties can allow a manufacturing process to remain stable. Material measures are the link in the chain that connects the surface topography measurement instrument’s output to the definition of the metre. In this review, the use of material measures is examined for the purposes of instrument calibration and performance verification based on the metrological characteristics framework, as introduced in ISO 25178 part 600. The material measures associated with each metrological characteristic are investigated in terms of fabrication, geometry and functionality. Material measures for metrological characteristics are discussed in a sequential approach, focusing on material measures that have been developed for specific measurement technologies and optical surface topography measurement instruments. There remains a gap in the metrological characteristic framework for the characteristic, topography fidelity, and the review highlights current methods using reference metrology and alternative approaches using virtual instruments to quantify the effects of topography fidelity. The influence of primary instruments is also reviewed in the context of uncertainty propagation. In the conclusion, the current challenges are identified with regards to the scarcity of available material measures in the lower nanometre range, and the limitations in terms of cost, complexity, manufacturing time and industrial applicability.
... which uses the equivalence 1 ≡ 1 . Finally, applying (the multivariate version of) the law of propagation of uncertainty [JCGM100,JCGM102] to this measurement function gives the covariance matrix ′ for the corrected measured values as ...
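The symbols in this excerpt did not survive extraction, so the specific correction model cannot be reconstructed here. Generically, the multivariate law of propagation referenced via JCGM 102 carries the covariance of the input estimates through the sensitivity (Jacobian) matrix of the measurement function, as in the sketch below; the matrices are invented for illustration.

```python
import numpy as np

# Generic multivariate law of propagation of uncertainty (JCGM 102 idea):
# for y = f(x) with sensitivity (Jacobian) matrix C evaluated at the estimate,
# the covariance of the outputs is U_y = C U_x C^T.
C = np.array([[1.0, 0.5],
              [0.0, 1.0]])                  # hypothetical sensitivity matrix
U_x = np.array([[0.04, 0.01],
                [0.01, 0.09]])              # covariance of the input estimates
U_y = C @ U_x @ C.T                         # covariance of the corrected measured values
print(U_y)
print(np.sqrt(np.diag(U_y)))                # standard uncertainties of the outputs
```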
Technical Report
Full-text available
This report is the Final Report for Key Comparison CCAUV.W-K2. This Key Comparison covers primary free-field standards for sound in water at frequencies between 250 Hz and 500 kHz. This project is one of the Key Comparisons organised under the auspices of the Consultative Committee on Acoustics, Ultrasound and Vibration (CCAUV) of the CIPM. This report has the status of a Final Report and has been submitted to the Key Comparison Database (KCDB). In the report, the results of participants are presented with the Key Comparison Reference Values and Degrees of Equivalence. The results are calculated according to the procedures agreed after consideration of the Draft A1 and A2 reports, and the Draft B report has been approved by the CCAUV. All participants have had the opportunity to give final agreement on the contents and amendments have been made to account for their comments. In many respects, the comparison has been a success with good agreement achieved over an extended lower frequency range compared to the previous CCAUV.W-K1 comparison, the lower frequency limit for CCAUV.W-K2 being extended down by two octaves to 250 Hz. The generally more difficult frequency range from 100 kHz to 500 kHz has also shown very good agreement between the participants. However, in the range 60 kHz to 100 kHz the agreement was not as good, with three participants exhibiting some discrepant results. See: https://iopscience.iop.org/article/10.1088/0026-1394/59/1A/09003
... The standard uncertainty must be evaluated according to the GUM [19], and it is carried out separately for the three measured quantities: thermal conductivity, thermal diffusivity, and convection heat transfer coefficient. As prescribed by the GUM, two components of the total uncertainty are present, type A uncertainty (evaluated with statistical methods) and type B (all other uncertainty sources). ...
Article
Four probes for the measurement of thermal conductivity, thermal diffusivity and convection heat transfer coefficient have been designed, built, and tested. In two of these probes (SP-1 and SP-2), three thermocouples were located at 25%, 50%, and 75% of the total length of 150 mm, while the third one (SP-3) has the three thermocouples at 10%, 50% and 90%. The fourth probe (DP-1) is a double probe, used to measure thermal diffusivity with the pulse method. First results show good performance of the devices when used to measure a reference material (glycerol). In fact, an accuracy of about ±5% was achieved in both thermal conductivity and thermal diffusivity measurements. Due to their size (length 150 mm, diameter 2 mm), the probes are especially suited to measuring the thermophysical properties of bulk materials, such as soils, composites, walls of buildings, etc. Convection can be studied thanks to the presence of three thermocouples, so the boundary layer can also be evaluated.
Preprint
Full-text available
Boric acid is authorized as a food additive in the EU. However, it is not authorized for use in Korea, and an analytical method for boric acid in foods has not been reported there. In this study, an analytical method was developed and validated for the determination of boric acid in caviars. We established the analytical assay for boric acid in caviars by inductively coupled plasma mass spectrometry (ICP-MS). Caviar samples were digested in closed PTFE vessels in an automatic microwave digestion system. We also performed a method validation including linearity, intra-day and inter-day precision, accuracy, limit of detection (LOD), limit of quantitation (LOQ) and recovery. The calibration curve was obtained from 0.2 µg/mL to 10 µg/mL with a satisfactory correlation coefficient of 0.99. The LOD and LOQ were 0.04 µg/mL and 0.16 µg/mL, respectively. The recoveries of boric acid from spiked samples at levels of 0.5, 2 and 10 µg/mL ranged from 91.8 to 98.3%, with relative standard deviations (RSD) between 0.4 and 2.4%.
Article
Full-text available
Optimal quality of health services depends on the accuracy of medical devices. In an effort to improve quality standards, Dr. Iskak Tulungagung General Hospital implements a program to assist health workers in understanding and applying the results of medical device calibration. This research aims to increase health workers' knowledge about calibration, strengthen patient confidence, and improve the accuracy of diagnosis and treatment. This research contributes to improving the competency of health workers and increasing patient satisfaction through assistance in implementing calibration results. This contribution is important in building public trust in health services. The mentoring program involves regular counseling, practical workshops, skills training, and individual mentoring by a team of calibration experts. A feedback system is also implemented for continuous evaluation and improvement. This research shows a significant increase in health workers' understanding of medical device calibration. Patients report higher levels of satisfaction with services provided by healthcare professionals who use calibrated devices. The mentoring program was successful in increasing health workers' understanding of calibration, building patient trust, and increasing the accuracy of diagnosis and treatment. This success indicates the importance of assistance in the context of implementing calibration results in hospitals. The results of this research have positive implications for improving health service quality standards. These implications extend to the health education sector and clinical practice, strengthening evidence-based practice and increasing public trust in the health system. Thus, the mentoring program becomes a viable model for improving the quality of health services in hospitals and other health institutions.
Article
Full-text available
Hepatic steatosis is characterized by an abnormal accumulation of lipids within hepatocytes. Magnetic resonance imaging (MRI) is a widely used noninvasive method that can accurately and objectively quantify liver fat. To evaluate the accuracy of the quantitatively measured fat fraction, stable and homogenous qualified material is needed as a reference. Surfactant-free micro-emulsions of three fat fractions I, II, and III, corresponding to (9.12 ± 0.02) %, (18.32 ± 0.04) %, and (27.86 ± 0.05) %, respectively, were prepared using a high-intensity focused ultrasonic emulsification technique. The targeted fat fraction of 10-30 % covers the range of grade I moderate non-alcoholic fatty liver disease (NAFLD), which occurs in the early stages that require early detection. Water contents as the main component of the emulsified reference materials (RMs) were determined using the Karl Fisher titration method to evaluate the stability and homogeneity of the RMs. The water contents of fat fraction I, II, and III were (89.12 ± 1.08) %, (79.87 ± 0.81) %, and (72.71 ± 1.29) %, respectively. The RMs were stable for six months and showed good homogeneity with both standard deviations between and within units in the range of 0.3 – 0.6 %. The physical phantom consisted of nine vials of RMs surrounded by agarose gel. The phantom was scanned on 3 T MRI (Siemens MAGNETOM Vida, Siemens Healthineers, Erlangen, Germany). The correlation between the measured proton density fat fraction (PDFF) values and the fabricated fat fraction values was evaluated using linear regression analysis. The slope of the linear fitting was 0.99, and the intercept was –0.88 %. These results show that the developed RMs can provide a reference value for the measured fat fraction from a medical imaging system to evaluate the effectiveness of a measurement procedure. It is also expected that the developed RMs can be utilized to harmonize the measured values across multi-site.
Article
Full-text available
Laser Detection and Ranging (LiDAR) systems possess the capability to generate high-resolution three-dimensional (3D) data of indoor environments. The inherent uncertainties pertaining to relative spatial positioning and the centimeter-level precision of LiDAR ranging, however, contribute to discernible constraints within contexts requiring elevated degrees of precision, particularly in the domain of high-precision sensing applications. In response to this concern, this paper introduces an approach designed to mitigate and appraise the uncertainty associated with plane positioning through the utilization of point cloud fitting methodologies, concurrently integrating principles of Building Information Modeling (BIM) and Anisotropic Affine Transformations (AAT). Primarily, the methodology involves the extraction of precise plane characteristics employing the tenets of robustly weighted total least squares theory within the context of point cloud fitting. Subsequently, the method synergistically incorporates geometric information emanating from the Building Information Model alongside the accurately determined plane positioning data derived from LiDAR point clouds via Anisotropic Affine Transformations. This integration markedly enhances the precision of the ranging system's datasets. Ultimately, the assessment of ranging uncertainty is conducted by quantifying the deviations of individual points from the conforming plane and employing a probability approximative scheme grounded in higher-order moments. Experimental results demonstrate the method's precision and efficacy, offering a solution to the challenge of achieving higher perception precision in LiDAR-based ranging systems.
Article
Full-text available
The size of human speech or cough droplets decides their air-borne transport distance, life span and virus infection risk. We have investigated the measurement accuracy of artificial saliva and saline droplet size for more effective COVID-19 infection control. A spray generator was used for polydisperse droplet generation and a special test chamber was designed for droplet measurement. Saline and artificial saliva were gravimetrically prepared and used to generate droplets. The droplet spray generator and the test chamber were circulated among four metrology institutes (NMC, CMS/ITRI, NIM and KRISS) for droplet size measurement and evaluation of deviations. The composition of artificial saliva was determined by measuring the mass fraction of the inorganic ions. The density of dried artificial saliva droplets was estimated using its composition and the density of each non-volatile component. The volume equivalent diameter (VED) of droplets have been measured by aerodynamic particle sizer (APS) and optical particle size spectrometer (OPSS). As a response to the COVID-19 pandemic, this is the first time that a comparative study among four metrology institutes has been conducted to evaluate the accuracy of saliva and saline droplet size measurement. For artificial saliva droplets measured by OPSS, the deviations from the reference VED (~ 4 μm) were below 5.3%. For saline droplets measured by APS, the deviations from the reference VED were below 10.0%. The potential droplet size measurement errors have been discussed. This work underscores the need for new reference size standards to improve the accuracy and establish traceability in saliva and saline droplet size measurement.
Preprint
Full-text available
The size of human speech or cough droplets decides their air-borne transport distance, life span and virus infection risk. We have investigated the measurement accuracy of artificial saliva and saline droplet size for more effective COVID-19 infection control. A spray generator was used for polydisperse droplet generation and a special test chamber was designed for droplet measurement. Saline and artificial saliva were gravimetrically prepared and used to generate droplets. The droplet spray generator and test chamber were circulated as travelling standard among four metrology institutes (NMC, CMS/ITRI, NIM and KRISS) for droplet size measurement comparison and evaluation of deviations. The composition of artificial saliva was determined by measuring the mass fraction of the inorganic ions. The density of dried artificial saliva droplets was estimated using its composition and the density of each non-volatile component. The volume equivalent diameter (VED) of droplets have been measured by aerodynamic particle sizer (APS) and optical particle size spectrometer (OPSS). As a response to the COVID-19 pandemic, this is the first time that a comparative study among four metrology institutes has been conducted to evaluate the accuracy of saliva and saline droplet size measurement. For artificial saliva droplets measured by OPSS, the deviations from the reference VED (~4 μm) were below 5.3%. For saliva droplet sizes measured by APS, two institutes showed higher deviations up to 21.9% from the reference VED. For saline droplets measured by APS, the deviations from the reference VED were below 10.0%. The potential droplet size measurement errors using OPSS and APS have been discussed. This work underscores the need for new reference size standards to improve the accuracy and establish traceability in saliva and saline droplet size measurement.
Article
Full-text available
This study details the occurrence and concentrations of organic micropollutants (OMPs) in stormwater collected from a highway bridge catchment in Sweden. The prioritized OMPs were bisphenol-A (BPA), eight alkylphenols, sixteen polycyclic aromatic hydrocarbons (PAHs), and four fractions of petroleum hydrocarbons (PHCs), along with other global parameters, namely, total organic carbon (TOC), total suspended solids (TSS), turbidity, and conductivity (EC). A Monte Carlo (MC) simulation was applied to estimate the event mean concentrations (EMC) of OMPs based on intra-event subsamples during eight rain events, and analyze the associated uncertainties. Assessing the occurrence of all OMPs in the catchment and comparing the EMC values with corresponding environmental quality standards (EQSs) revealed that BPA, octylphenol (OP), nonylphenol (NP), five carcinogenic and four non-carcinogenic PAHs, and C16-C40 fractions of PHCs can be problematic for freshwater. On the other hand, alkylphenol ethoxylates (OPnEO and NPnEO), six low molecule weight PAHs, and lighter fractions of PHCs (C10-C16) do not occur at levels that are expected to pose an environmental risk. Our data analysis revealed that turbidity has a strong correlation with PAHs, PHCs, and TSS; and TOC and EC highly associated with BPA concentrations. Furthermore, the EMC error analysis showed that high uncertainty in OMP data can influence the final interpretation of EMC values. As such, some of the challenges that were experienced in the presented research yielded suggestions for future monitoring programs to obtain more reliable data acquisition and analysis.
Conference Paper
Along-hole Depth (AHD) is the most fundamental subsurface wellbore measurement made. Well depth is the main descriptor of wellbore position, measured from zero depth point (ZDP). This is translated into vertical depth (V) using inclination (I). V is the main descriptor subsurface wellbore events and then North (N) and East (E) act as the Qualifying descriptions of V. Well depth is commonly described as measured depth (MD) and is used to describe well construction, navigation and collision avoidance, drilled geologies, reservoir properties, fluid gradients and interfaces, production, and subsurface positioning of well services. AHD is a calibrated and corrected well depth measurement defined using a specific rig-state and can deliver improved subsurface position and positional uncertainty. Well depth, I and azimuth (A) are used to calculate subsurface 3D position. Well depth is measured at surface, represented by drillpipe or wireline length. I and A are subsurface measurements referenced to the provided well depth. These together provide the navigation information required to arrive at the 3D positions of N, E, and V. These positions are used to define the location of subsurface events such as well placement, geological horizons and fluid contacts. This paper outlines a method ("3D method") for defining 3D subsurface positional locations using "way-points" (Bolt 2021). Way-points represent a sequential series of specific, calibrated, 3D positional locations each defined by calibrated and corrected AHD. Based on Pythagorean geometry using AHD, I, and A measurements, these are converted into N, E, and V positions. Each way-point has a specific N, E, and V positional uncertainty. Four component accuracies are used to describe the individual AHD, I and A measurement uncertainties at each way-point: calibration and observation, applied correction, model-fit, and a fixed-term applicable to all observations. AHD, I, and A measurement uncertainties which are converted into individual interval N, E, and V positional uncertainties and sequentially concatenated. The method provides a simplified yet accurate solution to 3D positional and positional uncertainty. The calculations demonstrate the dependency of the positional and positional uncertainty results on both interval spacing between way-points and measurement accuracy. The example results demonstrate that each well has its own specific and unique N, E and V positional uncertainty description. Specific positional uncertainty requirements of operators can be answered to through instrumentation accuracies and way-point interval spacing defined in the well survey program. Well placement can be more easily portrayed, reservoir characteristics more confidently reported, and asset volume estimation improved.
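The paper's way-point formulation is not reproduced in this abstract. As a minimal sketch of how an along-hole depth increment, together with inclination and azimuth, maps to north/east/vertical increments, a simple tangential approximation can be written as follows; this is an illustrative assumption, not the author's 3D method or its uncertainty concatenation.

```python
import math

# Minimal sketch (tangential approximation): convert an along-hole depth
# increment with inclination I (from vertical) and azimuth A (from north)
# into north/east/vertical increments.
def ahd_increment_to_nev(d_ahd: float, incl_deg: float, azim_deg: float):
    i, a = math.radians(incl_deg), math.radians(azim_deg)
    dN = d_ahd * math.sin(i) * math.cos(a)
    dE = d_ahd * math.sin(i) * math.sin(a)
    dV = d_ahd * math.cos(i)
    return dN, dE, dV

# Hypothetical 30 m interval at 20 deg inclination and 45 deg azimuth.
print(ahd_increment_to_nev(30.0, 20.0, 45.0))
```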
Article
Full-text available
The aim of this work is to experimentally determine and evaluate the value of the correction factor for ultrasonic flow meters in order to improve their accuracy. This article concerns flow velocity measurement with the use of an ultrasonic flow meter in the area of disturbed flow behind a distorting element. Clamp-on ultrasonic flow meters are popular among measurement technologies due to their high accuracy and easy, non-invasive installation, because the sensors are mounted directly on the outer surface of the pipe. In industrial applications, installation space is usually limited and, therefore, flow meters frequently have to be mounted directly behind flow disturbances. In such cases, it is necessary to determine the value of the correction factor. The disturbing element was a knife gate valve, a valve often used in flow installations. Water flow velocity tests were performed using an ultrasonic flow meter with clamp-on sensors on the pipeline. The research was performed in two series of measurements with different Reynolds numbers of 35,000 and 70,000, which correspond to velocities of approximately 0.9 m/s and 1.8 m/s. The tests were carried out at different distances from the source of interference, within the range of 3–15 DN (pipe nominal diameters). The position of the sensors at successive measurement points around the circumference of the pipeline was changed by 30 degrees. Flow velocity measurements were carried out for two different levels of the valve’s closure: 1/3 and 1/2 of the valve’s height. For the collected velocity values at single measurement points, the values of the correction coefficient, K, were determined. The results of the tests and calculations prove that compensation of the error of a measurement performed behind the disturbance, without keeping the required straight sections of the pipeline, is possible by using the factor K*. The analysis of the results made it possible to identify an optimal measuring point at a distance from the knife gate valve smaller than that specified in the standards and recommendations.
Chapter
The quantification of gas emissions from livestock housings is a complex and challenging measurement task, because performing emission measurements under practice conditions requires a high level of expertise and poses significant challenges to careful evaluation and reliable validation. Many measurement methods have been developed in recent decades to improve the knowledge of emissions from livestock housings, to such an extent that it becomes difficult to choose the most suitable method for the system under study and especially for the measurement objectives. The aim of this chapter is to present and discuss different measurement approaches as well as analytical instruments. Additional information is given concerning data preparation, analysis and reporting, and uncertainty assessment. Further, it is shown how measurements and modelling can be combined and which models could be used for various scientific and applied purposes.
Article
Full-text available
Natural and anthropogenic factors strongly influence the concentrations of major (Na, Mg, K, Ca) and trace (Sr, Ba, Mn, Li) elements, anions (HCO₃⁻, NO₃⁻, SO₄²⁻, Cl⁻), and Sr isotopic signatures. The current study identified the Sr isotopic signature in groundwaters from the Southern Carpathians and Apuseni Mountains karst areas of Romania and its relation to the water's chemistry. The Sr concentration ranged between 16.5 and 658 µg/L, but in most groundwaters it was below 200 µg/L. A considerable spatial variation and a low temporal variation, with a slightly lower Sr concentration in winter than in spring, were observed. The strong positive correlation of Sr with Ca, Mg, K, and Na indicated a common source of these elements. The main source of the Sr in groundwaters was the dissolution of carbonates, especially calcite and, to a lesser extent, dolomite. The ⁸⁷Sr/⁸⁶Sr isotopic ratio ranged between 0.7038 and 0.7158. Generally, waters with a high Sr concentration and moderate ⁸⁷Sr/⁸⁶Sr ratios indicated carbonate dissolution, whereas samples with low Sr concentrations and high ⁸⁷Sr/⁸⁶Sr ratios suggested the dissolution of silicates.
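Where the abstract reports strong positive correlations between Sr and the major cations, the underlying computation is simply a pairwise correlation on the measured concentrations; a minimal sketch is shown below, with entirely hypothetical values and column names used only to illustrate the call.

```python
import pandas as pd

# Hypothetical concentrations (columns and values are illustrative
# assumptions, not data from the study).
samples = pd.DataFrame({
    "Sr": [16.5, 120.0, 340.0, 658.0],
    "Ca": [20.0, 55.0, 90.0, 140.0],
    "Mg": [5.0, 14.0, 22.0, 38.0],
})

# Pairwise Pearson correlation; coefficients near +1 between Sr and Ca/Mg
# would be consistent with a common (carbonate) source.
print(samples.corr(method="pearson"))
```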
Article
The long-term stabilities of arsenobetaine (AsB), arsenate (As(V)), and dimethylarsinic acid (DMA), arsenic (As) compounds in the certified reference materials (CRMs) NMIJ CRMs 7901-a, 7912-a, and 7913-a, were monitored. The CRMs were developed and certified by the National Metrology Institute of Japan (NMIJ) and the National Institute of Advanced Industrial Science and Technology (AIST) in 2009 to provide calibrants for the speciation analysis of As species. The CRMs were prepared from high-purity reagent powders as raw materials, and each reagent was dissolved in water or diluted acid. The certification of the CRMs for AsB, As(V), and DMA was conducted by NMIJ. The concentration of total As was determined by more than three independent analytical techniques. The obtained As concentrations were then converted into the concentration of each chemical species, and the mass fractions were certified. The long-term stability of the As species in the CRMs under storage was monitored by liquid chromatography-inductively coupled plasma-mass spectrometry, and this report presents long-term stability data covering approximately 13 years. The monitoring results were evaluated using both the measurement results with their uncertainties and a statistical parameter method, complying with ISO Guide 35. According to the results, the long-term stability of all certified mass fractions was confirmed.
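An ISO Guide 35-style trend check on such monitoring data can be sketched as a linear regression of the measured mass fraction against time, testing whether the slope differs significantly from zero. The sketch below is illustrative only and does not reproduce the NMIJ evaluation; the function name and the significance criterion are assumptions.

```python
from scipy import stats

def stability_slope_test(time_years, mass_fraction, alpha=0.05):
    """Fit a straight line to the monitoring data and test whether the
    slope is statistically significant (|b1| > t * s(b1)); a non-significant
    slope supports the conclusion that the material is stable."""
    res = stats.linregress(time_years, mass_fraction)
    n = len(time_years)
    t_crit = stats.t.ppf(1.0 - alpha / 2.0, n - 2)
    significant_trend = abs(res.slope) > t_crit * res.stderr
    return res.slope, res.stderr, significant_trend
```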
Article
A laboratory model study is performed to investigate the characteristics of wave impact pressures on a monopile substructure for offshore wind turbines subjected to breaking waves in intermediate water depth. Laboratory experiments are conducted in a 30 m long, 2 m wide, and 1.8 m deep wave flume. Breaking waves were generated by focusing the wave energy of a wave group at a pre-defined time and location in the flume. High-resolution impact measurements are carried out under well-controlled conditions for different wave loading conditions of varying incident wave steepness and wave impact conditions. The evolution of the local wave characteristics and the geometric properties of the breaking waves is investigated along the wave flume. The sensitivity of the vertical distribution and the variation of the peak pressures to small changes in time and spatial scale is assessed during the impact. During the breaking wave interaction with the monopile, wave impact characteristics such as pressure rise time, duration of impact, maximum pressure, and pressure impulse are analyzed. The measured geometric properties and impact characteristics are in good agreement with previous studies. The impact region and its correlation with the wave characteristics and geometric properties are evaluated and discussed.
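The impact characteristics named above (peak pressure, rise time, pressure impulse) can be extracted from a sampled pressure trace roughly as in the sketch below; the 5 % threshold used to delimit the impact and the function name are assumptions for illustration, not the study's actual processing chain.

```python
import numpy as np

def impact_characteristics(t, p):
    """Return peak pressure, rise time (impact onset to peak) and pressure
    impulse (time integral of pressure over the impact duration) from a
    sampled pressure trace. Onset/end are detected with an assumed 5 %
    threshold of the peak value."""
    t = np.asarray(t, float)
    p = np.asarray(p, float)
    i_peak = int(np.argmax(p))
    p_max = p[i_peak]
    above = np.where(p >= 0.05 * p_max)[0]
    i_start, i_end = above[0], above[-1]
    rise_time = t[i_peak] - t[i_start]
    seg_p = p[i_start:i_end + 1]
    seg_t = t[i_start:i_end + 1]
    # Trapezoidal integration of pressure over the impact duration.
    impulse = float(np.sum(0.5 * (seg_p[:-1] + seg_p[1:]) * np.diff(seg_t)))
    return p_max, rise_time, impulse
```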
Preprint
Full-text available
This study details the occurrence and concentrations of organic micropollutants (OMPs) in stormwater collected from a highway bridge catchment in Sweden. The prioritized OMPs were bisphenol-A (BPA), eight alkylphenols, sixteen polycyclic aromatic hydrocarbons (PAHs), and four fractions of petroleum hydrocarbons (PHCs), along with other global parameters, namely total organic carbon (TOC), total suspended solids (TSS), turbidity, and conductivity (EC). A Monte Carlo (MC) simulation was applied to estimate the event mean concentrations (EMCs) of the OMPs based on intra-event subsamples from eight rain events and to analyze the associated uncertainties. Assessing the occurrence of all OMPs in the catchment and comparing the EMC values with the corresponding environmental quality standards (EQSs) revealed that BPA, octylphenol (OP), nonylphenol (NP), five carcinogenic and four non-carcinogenic PAHs, and the C16–C40 fractions of PHCs can be problematic for freshwater. On the other hand, the alkylphenol ethoxylates (OPnEO and NPnEO), six low molecular weight PAHs, and the lighter fractions of PHCs (C10–C16) do not occur at levels that are expected to pose an environmental risk. Our data analysis suggests that three water quality parameters (turbidity, TOC, and EC) hold strong potential as surrogate parameters for PAHs, PHCs, BPA, OP, and TSS. Therefore, continuously measuring these parameters could complement data from monitoring programs in which long-term, high-resolution time series are of interest. Furthermore, the EMC error analysis showed that high uncertainty in the OMP data can influence the final interpretation of the EMC values. As such, some of the challenges experienced in the presented research yielded suggestions for future monitoring programs to obtain more reliable data acquisition and analysis.
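A Monte Carlo estimate of an event mean concentration from intra-event subsamples can be sketched as below, using the flow-weighted definition EMC = Σ(Cᵢ·Vᵢ) / ΣVᵢ and assumed Gaussian relative uncertainties on the subsample concentrations and volumes. The uncertainty model, function name, and parameters are illustrative assumptions, not the paper's exact setup.

```python
import numpy as np

rng = np.random.default_rng(42)

def emc_monte_carlo(conc, vol, conc_rel_u, vol_rel_u, n_draws=10_000):
    """Monte Carlo estimate of the event mean concentration (EMC):
    EMC = sum(C_i * V_i) / sum(V_i), with assumed Gaussian relative
    standard uncertainties on subsample concentrations and volumes."""
    conc = np.asarray(conc, float)
    vol = np.asarray(vol, float)
    c = rng.normal(conc, conc_rel_u * conc, size=(n_draws, conc.size))
    v = rng.normal(vol, vol_rel_u * vol, size=(n_draws, vol.size))
    emc = (c * v).sum(axis=1) / v.sum(axis=1)
    # Mean EMC and an approximate 95 % coverage interval from the draws.
    return emc.mean(), np.percentile(emc, [2.5, 97.5])
```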
Chapter
GNSS integrity monitoring requires proper bounding to characterize all ranging error sources. Unlike classical approaches based on probabilistic assumptions, our alternative integrity approach depends on deterministic interval bounds as inputs. The intrinsically linear uncertainty propagation with intervals is adequate to describe the remaining systematic uncertainty, the so-called imprecision. In this contribution, we propose how to derive the required intervals in order to quantify and bound the residual error of empirical troposphere models, based on a refined sensitivity analysis via interval arithmetic. We experimentally evaluated the Saastamoinen model with (i) the a priori ISO standard atmosphere and (ii) on-site meteorological measurements from IGS and Deutscher Wetterdienst (DWD) stations as inputs. We obtain a consistent and complete enclosure of the residual ZPD errors w.r.t. IGS ZPD products. Thanks to the dense DWD network, interval maps of meteorological parameters and residual ZPD errors are generated for Germany as by-products. These experimental results and products are finally validated, taking advantage of the high-quality tropospheric delays estimated by the Vienna Ray Tracer. Overall, the results indicate that our strategy based on interval analysis successfully bounds the tropospheric model uncertainty. This will contribute to a realistic uncertainty assessment of GNSS-based single point positioning.
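The flavour of interval propagation through an empirical troposphere model can be sketched on the hydrostatic part of the Saastamoinen model: because the zenith hydrostatic delay is monotonically increasing in surface pressure, evaluating the formula at the endpoints of a pressure interval yields a tight enclosure. This is a simplified illustration under assumed constants (the commonly cited 0.0022768 m/hPa factor) with latitude and height treated as exact; it does not reproduce the paper's refined sensitivity analysis or the wet term.

```python
import math

def saastamoinen_zhd_interval(p_lo_hpa, p_hi_hpa, lat_deg, height_m):
    """Interval enclosure of the Saastamoinen zenith hydrostatic delay (m)
    for a surface pressure interval [p_lo, p_hi] in hPa. Latitude and
    station height are taken as exact inputs (an assumption)."""
    f = 1.0 - 0.00266 * math.cos(2.0 * math.radians(lat_deg)) \
            - 0.00000028 * height_m
    zhd = lambda p: 0.0022768 * p / f
    # Monotone in pressure, so the endpoint evaluations bound the delay.
    return zhd(p_lo_hpa), zhd(p_hi_hpa)
```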
Article
Full-text available
The paper presents experimental research on the thermal management of a 150 W LED lamp with a heat sink inside a synthetic jet actuator (SJA). The luminous flux was generated by 320 SMD LEDs with a nominal luminous efficacy of 200 lm/W mounted on a single PCB. Characteristic temperatures were measured with three different measurement techniques: thermocouples, an infrared camera, and an estimation of the junction temperature from its calibrated dependence on the LED forward voltage. The temperature budget between the LED junction and the ambient, as well as the thermal resistance network, was determined and analyzed. The energy balance of the LED lamp is presented along with the values of the heat flow rate and the heat transfer coefficient in different regions of the LED lamp surface. For an input power supplied to the SJA of 4.50 W, the synthetic jet dissipated approximately 89% of the total heat generated by the LED lamp. The heat from the PCB was transferred through the front and rear surfaces of the board. For the input power of 4.50 W, approximately 91% of the heat generated by the LEDs was conducted through the PCB substrate to the heat-spreading plate, while the remaining 9% was dissipated by the front surface of the PCB, mostly by radiation. The thermal balance revealed that, for the luminous efficacy of the investigated LEDs, approximately 60% of the electrical energy supplied to the LED lamp was converted into heat, while the rest was converted into light.
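The quantities in such a temperature budget and thermal resistance network follow directly from R = ΔT / Q; a minimal sketch is given below. The function names are assumptions, and the heat-generation helper simply encodes the abstract's statement that roughly 60% of the electrical input becomes heat (i.e. an optical conversion fraction of about 0.40).

```python
def thermal_resistance(t_hot_c, t_cold_c, heat_flow_w):
    """Thermal resistance (K/W) between two nodes of the network,
    e.g. LED junction to ambient: R = (T_hot - T_cold) / Q."""
    return (t_hot_c - t_cold_c) / heat_flow_w

def heat_generated(p_electrical_w, optical_fraction):
    """Heat dissipated by the lamp, assuming the fraction of electrical
    power not converted into light becomes heat; the abstract's ~60 %
    heat share corresponds to optical_fraction ≈ 0.40."""
    return p_electrical_w * (1.0 - optical_fraction)

# Illustrative only: 150 W input, 40 % converted to light, junction at 85 °C
# and ambient at 25 °C gives a junction-to-ambient resistance of ~0.67 K/W.
q = heat_generated(150.0, 0.40)
print(thermal_resistance(85.0, 25.0, q))
```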