Fig 17 - uploaded by Marc Pollefeys
Another example of stereo sequence alignment. (Top) Some samples from the stereo sequence. (Bottom) Texture-mapped


Source publication
Article
Full-text available
In many computer vision systems, it is assumed that the image brightness of a point directly reflects the scene radiance of the point. However, the assumption does not hold in most cases due to a nonlinear camera response function, exposure changes, and vignetting. The effects of these factors are most visible in image mosaics and textures of 3D mode...

Similar publications

Article
Full-text available
This work develops a new robust statistical framework for blind image denoising. Robust statistics addresses the problem of estimation when the idealized assumptions about a system are occasionally violated. The contaminating noise in an image is considered as a violation of the assumption of spatial coherence of the image intensities and is treate...
Article
Full-text available
Principal component (PC) analysis has received considerable attention as a technique for the extraction of meteorological signals from hyperspectral infra-red sounders such as the Infrared Atmospheric Sounding Interferometer (IASI) and the Atmospheric Infrared Sounder (AIRS). In addition to achieving substantial bit-volume reductions for disseminat...

Citations

... The first one is the atmosphere, which has effects such as scattering and absorption. These atmospheric effects are addressed through atmospheric correction [35]. The second source is related to noise that was created during the imaging operation, and this noise is the main factor in reducing the SNR [24]. ...
Article
Full-text available
Soil organic carbon (SOC) is a crucial factor for soil fertility, directly impacting agricultural yields and ensuring food security. In recent years, remote sensing (RS) technology has been highly recommended as an efficient tool for producing SOC maps. The PRISMA hyperspectral satellite was used in this research to predict the SOC map in Fars province, located in southern Iran. The main purpose of this research is to investigate the capabilities of the PRISMA satellite in estimating SOC and examine hyperspectral processing techniques for improving SOC estimation accuracy. To this end, denoising methods and a feature generation strategy have been used. For denoising, three distinct algorithms were employed over the PRISMA image, including Savitzky–Golay + first-order derivative (SG + FOD), VisuShrink, and total variation (TV), and their impact on SOC estimation was compared in four different methods: Method One (reflectance bands without denoising, shown as M#1), Method Two (denoised with SG + FOD, shown as M#2), Method Three (denoised with VisuShrink, shown as M#3), and Method Four (denoised with TV, shown as M#4). Based on the results, the best denoising algorithm was TV (Method Four or M#4), which increased the estimation accuracy by about 27% (from 40% to 67%). After TV, the VisuShrink and SG + FOD algorithms improved the accuracy by about 23% and 18%, respectively. In addition to denoising, a new feature generation strategy was proposed to enhance accuracy further. This strategy comprised two main steps: first, estimating the number of endmembers using the Harsanyi–Farrand–Chang (HFC) algorithm, and second, employing Principal Component Analysis (PCA) and Independent Component Analysis (ICA) transformations to generate high-level features based on the estimated number of endmembers from the HFC algorithm. 
The feature generation strategy was unfolded in three scenarios to compare the ability of PCA and ICA transformation features: Scenario One (without adding any extra features, shown as S#1), Scenario Two (incorporating PCA features, shown as S#2), and Scenario Three (incorporating ICA features, shown as S#3). Each of these three scenarios was repeated for each denoising method (M#1–4). After feature generation, high-level features were added to the outputs of Methods One, Three, and Four. Subsequently, three machine learning algorithms (LightGBM, GBRT, RF) were employed for SOC modeling. The results showcased the highest accuracy when features obtained from PCA transformation were added to the results from the TV algorithm (Method Four—Scenario Two or M#4–S#2), yielding an R² of 81.74%. Overall, denoising and feature generation methods significantly enhanced SOC estimation accuracy, escalating it from approximately 40% (M#1–S#1) to 82% (M#4–S#2). This underscores the remarkable potential of hyperspectral sensors in SOC studies.
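The PCA branch of the feature-generation strategy described in this abstract can be sketched as follows. This is an illustrative stand-in only: `pca_features` is a hypothetical helper, and `n_components=3` stands in for the endmember count that the HFC algorithm would supply.

```python
import numpy as np

def pca_features(cube, n_components):
    """Project a hyperspectral cube (H, W, B) onto its first
    n_components principal components to get high-level features."""
    h, w, b = cube.shape
    X = cube.reshape(-1, b).astype(float)
    X -= X.mean(axis=0)                      # center each band
    # Eigen-decomposition of the band-covariance matrix
    cov = np.cov(X, rowvar=False)
    vals, vecs = np.linalg.eigh(cov)
    order = np.argsort(vals)[::-1][:n_components]
    return (X @ vecs[:, order]).reshape(h, w, n_components)

# Toy cube: 8x8 pixels, 20 bands; pretend HFC estimated 3 endmembers
rng = np.random.default_rng(0)
cube = rng.random((8, 8, 20))
feats = pca_features(cube, n_components=3)
print(feats.shape)  # (8, 8, 3)
```

ICA features would be generated analogously, swapping the eigen-projection for an ICA unmixing step; the resulting feature planes would then be stacked with the denoised reflectance bands before regression.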
... The images in CRAID-2022 from the 2022 growing season were first photometrically calibrated using the Macbeth Color Checker card and a well-established approach of estimating the optimal radiometric correction from measurements of the card under uniform illumination (Kim and Pollefeys, 2008; Debevec and Malik, 2008; mit, 1999). Radiometric or photometric calibration is needed to correct for the effects of changing camera parameters between imaging sessions. ...
Preprint
Full-text available
Agricultural domains are being transformed by recent advances in AI and computer vision that support quantitative visual evaluation. Using drone imaging, we develop a framework for characterizing the ripening process of cranberry crops. Our method consists of drone-based time-series collection over a cranberry growing season, photometric calibration for albedo recovery from pixels, and berry segmentation with semi-supervised deep learning networks using point-click annotations. By extracting time-series berry albedo measurements, we evaluate four different varieties of cranberries and provide a quantification of their ripening rates. Such quantification has practical implications for 1) assessing real-time overheating risks for cranberry bogs; 2) large scale comparisons of progeny in crop breeding; 3) detecting disease by looking for ripening pattern outliers. This work is the first of its kind in quantitative evaluation of ripening using computer vision methods and has impact beyond cranberry crops including wine grapes, olives, blueberries, and maize.
... This can also be done jointly with other tasks, e.g. vignetting correction [35]. ...
Preprint
Light plays an important role in human well-being. However, most computer vision tasks treat pixels without considering their relationship to physical luminance. To address this shortcoming, we present the first large-scale photometrically calibrated dataset of high dynamic range \ang{360} panoramas. Our key contribution is the calibration of an existing, uncalibrated HDR Dataset. We do so by accurately capturing RAW bracketed exposures simultaneously with a professional photometric measurement device (chroma meter) for multiple scenes across a variety of lighting conditions. Using the resulting measurements, we establish the calibration coefficients to be applied to the HDR images. The resulting dataset is a rich representation of indoor scenes which displays a wide range of illuminance and color temperature, and varied types of light sources. We exploit the dataset to introduce three novel tasks: where per-pixel luminance, per-pixel temperature and planar illuminance can be predicted from a single input image. Finally, we also capture another smaller calibrated dataset with a commercial \ang{360} camera, to experiment on generalization across cameras. We are optimistic that the release of our datasets and associated code will spark interest in physically accurate light estimation within the community.
... Portions of the RGB imagery exhibited vignetting; thus, color correction was applied before the orthomosaic was generated. Vignetting is defined as the reduction in an image's brightness towards the edge when compared with its center [73]. Vignetting arises due to changes in irradiance over the image plane caused by sensor geometry [74], and color correction balances the brightness variation across the imagery block (Agisoft, 2017b). ...
Thesis
Full-text available
Natural habitat communities are an important element of any forest ecosystem. Mapping and monitoring Laurentian Mixed Forest natural communities using high spatial resolution imagery is vital for management and conservation purposes. This study developed integrated spatial, spectral, and Machine Learning (ML) approaches for mapping complex vegetation communities. The study utilized ultra-high and high spatial resolution National Agriculture Imagery Program (NAIP) and Unmanned Aerial Vehicle (UAV) datasets, together with a Digital Elevation Model (DEM), to map complex natural vegetation community habitats in the Laurentian Mixed Forest of the Upper Midwest. A detailed workflow is presented to effectively process UAV imagery in a dense forest environment where the acquisition of ground control points (GCPs) is extremely difficult. Statistical feature selection methods were used to discriminate spectrally similar habitat communities, including Joint Mutual Information Maximization (JMIM), which is not widely used in the natural resource field, and variable importance (varImp). A comprehensive approach to training set delineation was implemented, including the use of Principal Components Analysis (PCA), Independent Components Analysis (ICA), soils data, and expert image interpretation. The developed approach resulted in robust training sets to delineate and accurately map natural community habitats. Three ML algorithms were implemented: Random Forest (RF), Support Vector Machine (SVM), and Averaged Neural Network (avNNet). RF outperformed SVM and avNNet. Overall RF accuracies across the three study sites ranged from 79.45% to 87.74% for NAIP and from 87.31% to 93.74% for the UAV datasets. Different ancillary datasets, including spectral enhancement and image transformation techniques (PCA and ICA), GLCM texture, spectral indices, and topography features (elevation, slope, and aspect), were evaluated using the JMIM and varImp feature selection methods, overall accuracy assessment, and kappa calculations.
The robustness of the workflow was evaluated with three study sites which are geomorphologically unique and contain different natural habitat communities. This integrated approach is recommended for accurate natural habitat community classification in ecologically complex landscapes.
... The appearance of this effect is particularly undesirable when there is a need for radiometric or quantitative image analysis, which is very common in different areas, e.g., astronomy [1,2]; microscopy [3][4][5][6]; and remote sensing applications using terrestrial [7,8], airborne [9][10][11][12][13] and spaceborne sensors [14,15], to name just a few of them. This phenomenon is also undesirable in the case of the use of computational imaging algorithms, such as the creation of high dynamic range (HDR) images [16,17], the stitching of static images to create panoramic [18][19][20] or mosaic images [3][4][5][6]21], as well as a panoramic real-time view [22]. Vignetting also affects the results of image analysis, including the results obtained using neural networks [23,24]. ...
... Physically-based models of vignetting [28,30]: using these methods requires detailed knowledge of the parameters of the lens-camera system, which is often unavailable; however, this approach is very useful during the design of the optical system. • A single image [31][32][33][34] or a sequence of images [18,19,25,26] of a natural scene or scenes used to estimate the vignetting: with these methods, the vignetting estimate is obtained by minimizing an objective function under the assumption that vignetting is a radial function, which limits the number of lens-camera systems for which these methods can be used. Their effectiveness also depends strongly on other factors, such as the precision of localizing corresponding pixels and the uniformity of the analyzed scene, which limits their applicability. ...
Article
Full-text available
Image vignetting is one of the major radiometric errors that occur in lens-camera systems. In many applications, vignetting is an undesirable effect; therefore, when it is impossible to fully prevent its occurrence, it is necessary to use computational methods for its correction. In probably the most frequently used approach to vignetting correction, that is, flat-field correction, the use of appropriate vignetting models plays a pivotal role. The radial polynomial (RP) model is commonly used, but for its proper use, the actual vignetting of the analyzed lens-camera system has to be a radial function. However, this condition is not fulfilled by many systems. More universal models of vignetting exist; however, they are much more sophisticated than the RP model. In this article, we propose a new model of vignetting, named the Deformable Radial Polynomial (DRP) model, which joins the simplicity of the RP model with the universality of more sophisticated models. The DRP model uses a simple distance transformation and minimization method to match the radial vignetting model to the non-radial vignetting of the analyzed lens-camera system. A real-data experiment confirms that the DRP model in general gives better results (up to 35% or 50%, depending on the measure used) than the RP model.
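The flat-field idea behind RP-style models can be illustrated with a minimal sketch: fit a polynomial in the squared normalized radius to a flat-field image, then divide the image by the fitted gain. This is a generic radial-polynomial fit under assumed degree and synthetic falloff coefficients, not the DRP implementation from the article:

```python
import numpy as np

def fit_rp_gain(flat, degree=3):
    """Fit a radial gain g(r) = 1 + a1*r^2 + a2*r^4 + ... to a
    flat-field image, with r the normalized distance from the center."""
    h, w = flat.shape
    yy, xx = np.mgrid[0:h, 0:w]
    cy, cx = (h - 1) / 2, (w - 1) / 2
    r2 = (yy - cy) ** 2 + (xx - cx) ** 2
    r2 /= r2.max()                            # normalize squared radius to [0, 1]
    # Design matrix in even powers of r; the constant term is fixed to 1
    A = np.stack([r2 ** k for k in range(1, degree + 1)], axis=-1)
    b = flat / flat.max() - 1.0               # relative falloff
    coef, *_ = np.linalg.lstsq(A.reshape(-1, degree), b.ravel(), rcond=None)
    return 1.0 + A @ coef                     # per-pixel gain map

# Synthetic, purely radial flat field: I = 1 - 0.4 r^2
h = w = 65
yy, xx = np.mgrid[0:h, 0:w]
r2 = (yy - (h - 1) / 2) ** 2 + (xx - (w - 1) / 2) ** 2
flat = 1.0 - 0.4 * (r2 / r2.max())
gain = fit_rp_gain(flat)
corrected = flat / gain                       # flat-field correction
print(float(corrected.std()))                 # ≈ 0: falloff fully removed
```

For a real lens-camera system the flat field is noisy and, as the article stresses, often not radial, which is exactly the case where a plain RP fit breaks down.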
... The radiometric response function of the camera may also affect the effectiveness of the CMF in color correcting a pair of images. To reduce this problem, the monotonicity of the CMF has to be ensured using, for example, dynamic programming [43]. In [44,45], the CMF estimation is performed with a 2D tensor voting approach followed by a heuristic local adjustment method to force monotonicity. ...
Article
Full-text available
Texture mapping can be defined as the colorization of a 3D mesh using one or multiple images. In the case of multiple images, this process often results in textured meshes with unappealing visual artifacts, known as texture seams, caused by the lack of color similarity between the images. The main goal of this work is to create textured meshes free of texture seams by color correcting all the images used. We propose a novel color-correction approach, called sequential pairwise color correction, capable of color correcting multiple images from the same scene, using a pairwise-based method. This approach consists of sequentially color correcting each image of the set with respect to a reference image, following color-correction paths computed from a weighted graph. The color-correction algorithm is integrated with a texture-mapping pipeline that receives uncorrected images, a 3D mesh, and point clouds as inputs, producing color-corrected images and a textured mesh as outputs. Results show that the proposed approach outperforms several state-of-the-art color-correction algorithms, both in qualitative and quantitative evaluations. The approach eliminates most texture seams, significantly increasing the visual quality of the textured meshes.
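As a rough illustration of the sequential pairwise idea, the sketch below corrects each image toward its predecessor along a path, using simple per-channel mean/std matching as a hypothetical stand-in for the article's pairwise color model (which this excerpt does not detail):

```python
import numpy as np

def pairwise_transfer(src, ref):
    """Map src's per-channel statistics onto ref's (mean/std matching),
    a simple stand-in for a pairwise color-correction model."""
    out = np.empty_like(src, dtype=float)
    for c in range(src.shape[-1]):
        s, r = src[..., c], ref[..., c]
        out[..., c] = (s - s.mean()) / (s.std() + 1e-12) * r.std() + r.mean()
    return out

def correct_along_path(images, path):
    """Sequentially correct each image toward its predecessor on the path,
    so every image ends up consistent with the reference images[path[0]]."""
    corrected = {path[0]: images[path[0]].astype(float)}
    for prev, cur in zip(path, path[1:]):
        corrected[cur] = pairwise_transfer(images[cur], corrected[prev])
    return corrected

rng = np.random.default_rng(1)
base = rng.random((16, 16, 3))
imgs = [base, base * 0.7 + 0.1, base * 1.2 - 0.05]   # exposure-shifted copies
out = correct_along_path(imgs, path=[0, 1, 2])
# After correction, image 1's channel means match the reference's
print(np.allclose(out[1].mean(axis=(0, 1)), base.mean(axis=(0, 1))))  # True
```

In the article, the path itself comes from a weighted graph over the image set, so each image is corrected through its most similar neighbors rather than through an arbitrary order.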
... A variety of methods for generating such maps exists, based for example on computational methods using regular photographs [330,[373][374][375][376], simply averaging many exposures [329], and simply imaging white paper [377]. These maps are typically parametrised, for which various methods also exist [118,329,330,369,[374][375][376][377], the simplest being the cos⁴ model, a combination of inverse-square falloff, Lambert's law, and foreshortening [374]. ...
... Alternately, a pixel-by-pixel map of vignetting correction coefficients may be used. ...
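The cos⁴ model mentioned in these excerpts has a closed form: relative illuminance falls off as the fourth power of the cosine of the off-axis angle. A minimal sketch, with an arbitrary focal length in pixels:

```python
import numpy as np

def cos4_falloff(h, w, focal_px):
    """Relative illuminance under the cos^4 law: combining inverse-square
    falloff, Lambert's law, and foreshortening gives cos(theta)^4, where
    theta is the angle between a pixel's ray and the optical axis."""
    yy, xx = np.mgrid[0:h, 0:w]
    cy, cx = (h - 1) / 2, (w - 1) / 2
    r = np.hypot(yy - cy, xx - cx)            # off-axis distance in pixels
    cos_theta = focal_px / np.hypot(focal_px, r)
    return cos_theta ** 4

fall = cos4_falloff(481, 641, focal_px=600.0)
print(fall.max())   # 1.0 on the optical axis
print(fall.min())   # darkest value, in the corners
```

Dividing an image by such a map is the simplest parametric form of the flat-field correction these excerpts discuss; real lenses usually deviate from pure cos⁴, which is why the fitted or pixel-by-pixel maps exist.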
... The highly linear nature of RAW data was previously demonstrated in [118,361,366] and may be a result of internal linearity corrections in the CMOS chip [104]. Furthermore, RAW data are not affected by white balance, a colour correction in JPEG processing which severely affects colourimetric measurements, is difficult to calibrate, and differs strongly between measurements and cameras [56,94,285,317,318,323,342,349,351,376]. This variable gamma correction and white balance make it impossible to invert the JPEG algorithm and recover RAW data. ...
Thesis
Full-text available
Water is all around us and is vital for all aspects of life. Studying the various compounds and life forms that inhabit natural waters lets us better understand the world around us. Remote sensing enables global measurements with rapid response and high consistency. Citizen science provides new knowledge and greatly increases the scientific and social impact of research. In this thesis, we investigate several aspects of citizen science and remote sensing of water, with a focus on uncertainty and accessibility. We improve existing techniques and develop new methods to use smartphone cameras for accessible remote sensing of water.
... The nonlinearity in colour intensity is mainly caused by the nonlinear CRF transformations during the image formation. Its presence reduces the performance of computer vision tasks that require linearity of colour intensity such as image mosaic [32], high dynamic range imaging [33], and deblurring [34]. To estimate the CRF, the most accurate way is to image a standard grey chart [35] or a steady scene with multiple known exposures [36], [37]. ...
... A major obstacle in camera radiometric calibration is the ambiguity between E, D, and CRF in the image formation model [32], [40]. The root of this ambiguity is due to the immeasurability of E that might have been scaled or offset by some value and inaccessible camera properties such as exposure and aperture settings in the image formation. ...
Article
Full-text available
Relative colour constancy is an essential requirement for many scientific imaging applications. However, most digital cameras differ in their image formations and native sensor output is usually inaccessible, e.g., in smartphone camera applications. This makes it hard to achieve consistent colour assessment across a range of devices, and that undermines the performance of computer vision algorithms. To resolve this issue, we propose a colour alignment model that considers the camera image formation as a black-box and formulates colour alignment as a three-step process: camera response calibration, response linearisation, and colour matching. The proposed model works with non-standard colour references, i.e., colour patches without knowing the true colour values, by utilising a novel balance-of-linear-distances feature. It is equivalent to determining the camera parameters through an unsupervised process. It also works with a minimum number of corresponding colour patches across the images to be colour aligned to deliver the applicable processing. Three challenging image datasets collected by multiple cameras under various illumination and exposure conditions, including one that imitates uncommon scenes such as scientific imaging, were used to evaluate the model. Performance benchmarks demonstrated that our model achieved superior performance compared to other popular and state-of-the-art methods.
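The response-linearisation step described in this abstract can be illustrated with the simplest CRF family, a pure gamma curve. This toy sketch is not the article's balance-of-linear-distances method; the 2.2 gamma and the exposure ratio are assumptions:

```python
import numpy as np

def estimate_gamma(p1, p2, exposure_ratio):
    """Estimate a gamma-type CRF f(x) = x**(1/gamma) from patches
    observed at two exposures, using p2/p1 = ratio**(1/gamma)."""
    g = np.log(exposure_ratio) / np.log(p2 / p1)
    return float(np.median(g))               # median is robust over patches

def linearise(pixels, gamma):
    """Invert the gamma CRF so intensities become proportional to radiance."""
    return pixels ** gamma

# Synthetic patches rendered through a gamma-2.2 response
rng = np.random.default_rng(2)
radiance = rng.uniform(0.05, 0.4, size=50)
p1 = radiance ** (1 / 2.2)
p2 = (2.0 * radiance) ** (1 / 2.2)           # same patches, 2x exposure
gamma = estimate_gamma(p1, p2, exposure_ratio=2.0)
print(round(gamma, 3))                        # ≈ 2.2
lin = linearise(p1, gamma)
print(np.allclose(lin / lin[0], radiance / radiance[0]))  # linear in radiance
```

Real CRFs are rarely a single power law, which is why the article calibrates the response from colour patches rather than assuming a parametric form.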
... Portions of the RGB imagery exhibited vignetting; thus, color correction was applied before the orthomosaic was generated. Vignetting is defined as the reduction in an image's brightness towards the edge when compared with its center (Kim & Pollefeys, 2008). ...
Article
Full-text available
Description: A detailed workflow using Structure from Motion (SfM) techniques for processing high-resolution Unmanned Aerial System (UAS) NIR and RGB imagery in a dense forest environment where obtaining control points is difficult due to limited access and safety issues. Abstract: Imagery collected via Unmanned Aerial System (UAS) platforms has become popular in recent years due to improvements in Digital Single-Lens Reflex (DSLR) camera resolution (centimeter and sub-centimeter), lower operation costs compared to human-piloted aircraft, and the ability to collect data over areas with limited ground access. Many different applications (e.g., forestry, agriculture, geology, archaeology) already utilize the advantages of UAS data. Although there are numerous UAS image-processing workflows, the approach can differ for each application. In this study, we developed a workflow for processing UAS imagery collected over a dense forest area (e.g., coniferous/deciduous forest and contiguous wetlands), allowing users to process large datasets with acceptable mosaicking and georeferencing errors. Imagery was acquired with near-infrared (NIR) and red, green, blue (RGB) cameras with no ground control points. Image quality from two different UAS collection platforms was assessed. Agisoft Metashape, a photogrammetric suite that uses SfM (Structure from Motion) techniques, was used to process the imagery. The results showed that a UAS with a consumer-grade Global Navigation Satellite System (GNSS) onboard achieved better image alignment than a UAS with a lower-quality GNSS.
... on the optical parameters of the lens-camera system used. The negative impact of vignetting is particularly visible when images need to be combined to create panoramic [1,10] or mosaic images [5,18], as well as for radiometric or quantitative image analysis [7,20]. Such needs arise, among others, ...
Conference Paper
Full-text available
Image vignetting is one of the main causes of radiometric errors in lens-camera systems. This phenomenon is usually undesirable, but its influence can be corrected with computational methods. Proper vignetting correction requires the use of appropriate vignetting models. This article describes a new vignetting model, the SNILP (Smooth Non-Iterative Local Polynomial) model. The results of the conducted experiment show that the SNILP model usually gives better vignetting-correction results than the other tested models. Moreover, for images larger than the UXGA format (1600×1200), the proposed model is also faster than the other tested models. Compared with the other tested models, the SNILP model also requires the fewest hardware resources to determine, which makes it suitable for use in systems with limited computing power.