Figure 1 - uploaded by Russell Epstein
Examples of faces under different lighting conditions. 

Source publication
Article
Full-text available
We describe a method of learning generative models of objects from a set of images of the object under different, and unknown, illumination. Such a model allows us to approximate the object's appearance under a range of lighting conditions. This work is closely related to photometric stereo with unknown light sources and, in particular, to the use...

Contexts in source publication

Context 1
... changes in lighting conditions can cause large changes in appearance, often bigger than those due to viewpoint changes [21]. The magnitude of these variations can be appreciated by looking at images of the same object taken under different, but calibrated, lighting conditions; see Figure 1. Accurate lighting models are also required for the related reconstruction problem of photometric stereo. ...
Context 2
... that G is known we can solve M = G^T R by least squares while imposing the condition that R is a rotation matrix. Figure 10 shows the results on the face. ...
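The step quoted in this context, solving M = G^T R by least squares subject to R being a rotation matrix, is an instance of the orthogonal Procrustes problem, which has a closed-form SVD solution. A minimal sketch; the array shapes are illustrative assumptions, not taken from the paper:

```python
import numpy as np

def nearest_rotation(M, G):
    """Solve M ~= G.T @ R for a rotation matrix R (orthogonal Procrustes).

    Assumed shapes (illustrative): G is 3 x N, M is N x 3, R is 3 x 3.
    Minimizing ||M - G.T @ R||_F over rotations has a closed-form
    solution from the SVD of H = G @ M (Kabsch/Procrustes).
    """
    U, _, Vt = np.linalg.svd(G @ M)
    # Correct the sign so det(R) = +1 (a proper rotation, not a reflection)
    D = np.diag([1.0, 1.0, np.sign(np.linalg.det(U @ Vt))])
    return U @ D @ Vt
```

With noisy data, the same formula returns the rotation closest to the unconstrained least-squares fit in the Frobenius sense.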

Similar publications

Article
Full-text available
Cross-hatching is an artistic drawing method in which lines of variable thickness and orientation approximate tonal variations associated with shading and shadowing. Research in computer graphics has focused primarily on creating illustrations with cross-hatching that conforms to the three-dimensional surface of virtual objects. Cross-hatched shado...
Conference Paper
Full-text available
Over the last few decades, several approaches have been introduced to deal with cast shadows in background subtraction applications. However, very few algorithms exist that address the same problem for still images. In this paper we propose a figure-ground segmentation algorithm to segment objects in still images affected by shadows. Instead of modeling the...
Conference Paper
Full-text available
Robust face recognition under various illumination environments is essential for successful commercialization, but difficult to achieve. For robust face recognition with respect to illumination variations, illumination normalization of face images is usually necessary as a preprocessing step. Most previously proposed illumination normalization m...
Article
Full-text available
How do observers recognize faces despite dramatic image variations that arise from changes in illumination? This paper examines 1) whether face recognition is sensitive to illumination direction, and 2) whether cast shadows improve performance by providing information about illumination, or hinder performance by introducing spurious edges. In Exper...

Citations

... Semi-calibrated algorithms [8] could also be employed for automatically inferring the coefficients ψ_i. Provided that the integrability constraint [28] is adapted to the refractive case, uncalibrated algorithms [12] would even provide the s_i up to a generalized bas-relief ambiguity [2], which could be resolved a posteriori using one of the methods discussed in [24]. ...
Chapter
Full-text available
We conduct a discussion on the problem of 3D-reconstruction by calibrated photometric stereo, when the surface of interest is embedded in a refractive medium. We explore the changes refraction induces on the problem geometry (surface and normal parameterization), and we put forward a complete image formation model accounting for refracted lighting directions, change of light density and Fresnel coefficients. We further show that as long as the camera is orthographic, lighting is directional and the interface is planar, it is easy to adapt classic methods to take into account the geometric and photometric changes induced by refraction. Moreover, we show on both simulated and real-world experiments that incorporating these modifications of PS methods drastically improves the accuracy of the 3D-reconstruction.
... In fact, these global illumination effects (cast shadows, self-reflections, ambient light) are one of the most challenging aspects of PS. [34,66] tackle the case of fixed ambient light, which, however, is too simple a model to cover realistic inter-reflections. The global illumination issue was first adequately addressed in [21] by employing a Convolutional Neural Network (CNN). ...
Preprint
Reconstructing the 3D shape of an object using several images under different light sources is a very challenging task, especially when realistic assumptions such as light propagation and attenuation, perspective viewing geometry and specular light reflection are considered. Many works tackling Photometric Stereo (PS) problems relax most of the aforementioned assumptions; in particular, they ignore specular reflection and global illumination effects. In this work, we propose a CNN-based approach capable of handling these realistic assumptions by leveraging recent improvements of deep neural networks for far-field Photometric Stereo and adapting them to the point light setup. We achieve this by employing an iterative procedure of point-light PS for shape estimation which has two main steps. First, we train a per-pixel CNN to predict surface normals from reflectance samples. Second, we compute the depth by integrating the normal field in order to iteratively estimate light directions and attenuation, which is used to compensate the input images to compute reflectance samples for the next iteration. Our approach significantly outperforms the state-of-the-art on the DiLiGenT real-world dataset. Furthermore, in order to measure the performance of our approach on near-field point-light source PS data, we introduce LUCES, the first real-world 'dataset for near-fieLd point light soUrCe photomEtric Stereo', comprising 14 objects of different materials, where the effects of point light sources and perspective viewing are far more significant. Our approach outperforms the competition on this dataset as well. Data and test code are available at the project page.
... In fact, these global illumination effects (cast shadows, self-reflections, ambient light) are one of the most challenging aspects of PS. Logothetis et al. (2016) and Yuille et al. (1999) tackle the case of fixed ambient light, which, however, is too simple a model to cover realistic inter-reflections. The global illumination issue was first adequately addressed in Ikehata (2018) by employing a Convolutional Neural Network (CNN). ...
Article
Full-text available
Reconstructing the 3D shape of an object using several images under different light sources is a very challenging task, especially when realistic assumptions such as light propagation and attenuation, perspective viewing geometry and specular light reflection are considered. Many works tackling Photometric Stereo (PS) problems relax most of the aforementioned assumptions; in particular, they ignore specular reflection and global illumination effects. In this work, we propose a CNN-based approach capable of handling these realistic assumptions by leveraging recent improvements of deep neural networks for far-field Photometric Stereo and adapting them to the point light setup. We achieve this by employing an iterative procedure of point-light PS for shape estimation which has two main steps. First, we train a per-pixel CNN to predict surface normals from reflectance samples. Second, we compute the depth by integrating the normal field in order to iteratively estimate light directions and attenuation, which is used to compensate the input images to compute reflectance samples for the next iteration. Our approach significantly outperforms the state-of-the-art on the DiLiGenT real-world dataset. Furthermore, in order to measure the performance of our approach on near-field point-light source PS data, we introduce LUCES, the first real-world ’dataset for near-fieLd point light soUrCe photomEtric Stereo’, comprising 14 objects of different materials, where the effects of point light sources and perspective viewing are far more significant. Our approach outperforms the competition on this dataset as well. Data and test code are available at the project page.
... If the surface is integrable, the ambiguity can be reduced to a three-parameter generalized bas-relief (GBR) ambiguity [10,11]. Most of the existing traditional methods [12][13][14] for UPS are based on the Lambertian assumption and focus on resolving the GBR ambiguity. Methods such as those of [15] and [16] can handle surfaces with general bidirectional reflectance distribution functions (BRDFs), but only work under the assumption of uniformly distributed light sources. ...
Article
Full-text available
The surfaces of real objects can visually appear to be glossy, matte, or anywhere in between, but essentially, they display varying degrees of diffuse and specular reflectance. Diffuse and specular reflectance provides different clues for light estimation. However, few methods simultaneously consider the contributions of diffuse and specular reflectance for light estimation. To this end, we propose ReDDLE-Net, which performs Reflectance Decomposition for Directional Light Estimation. The primary idea is to take advantage of diffuse and specular clues and adaptively balance the contributions of estimated diffuse and specular components for light estimation. Our method achieves a superior performance advantage over state-of-the-art directional light estimation methods on the DiLiGenT benchmark. Meanwhile, the proposed ReDDLE-Net can be combined with existing calibrated photometric stereo methods to handle uncalibrated photometric stereo tasks and achieve state-of-the-art performance.
... [3,13,30] Representation of the light field up to its first order (light density and vector) still explains 94% of the appearance variations for Lambertian surfaces [31,32]. Therefore, we employed the first-order approach for its practicality and ease of implementation with the current state of the art. ...
Article
Full-text available
Chromatic properties of the effective light in a space are hard to predict, measure and visualise. This is due to complex interactions between materials and illuminants. Here, we describe, measure and visualise the effects of inter-reflections on the structure of the physical light field for diffusely scattering scenes. The spectral properties of inter-reflections vary as a function of the number of bounces they went through. Via a computational model, these spectral variations were found to be systematic and correspond with brightness, saturation and hue shifts. We extended our light-field methods to measure and understand these spectral effects on the first-order properties of light fields, the light density and light vector. We tested the model via a set of computer renderings and cubic spectral illuminance measurements in mock-up rooms under different furnishing scenarios for two types of illuminants. The predicted spectral variations were confirmed and indeed varied systematically within the resulting light field, spatially and directionally. Inter-reflections predominantly affect the light density spectrum and have less impact on the light vector spectrum. It is important to consider these differential effects for their consequences on the colour rendering of 3-dimensional objects and people.
... This ambiguity can be reduced to a 3-parameter GBR ambiguity using the surface integrability constraint, which also holds true at the presence of attached and cast shadows [12], [31]. Previous work used additional clues like albedo priors [9], [10], inter-reflections [32], specular spikes [33], Torrance and Sparrow reflectance model [34], reflectance symmetry [35], [36], multi-view images [37], and local diffuse maxima [11], to resolve the GBR ambiguity. ...
Article
Full-text available
This paper addresses the problem of photometric stereo, in both calibrated and uncalibrated scenarios, for non-Lambertian surfaces based on deep learning. We first introduce a fully convolutional deep network for calibrated photometric stereo, which we call PS-FCN. Unlike traditional approaches that adopt simplified reflectance models to make the problem tractable, our method directly learns the mapping from reflectance observations to surface normal, and is able to handle surfaces with general and unknown isotropic reflectance. At test time, PS-FCN takes an arbitrary number of images and their associated light directions as input and predicts a surface normal map of the scene in a fast feed-forward pass. To deal with the uncalibrated scenario where light directions are unknown, we introduce a new convolutional network, named LCNet, to estimate light directions from input images. The estimated light directions and the input images are then fed to PS-FCN to determine the surface normals. Our method does not require a pre-defined set of light directions and can handle multiple images in an order-agnostic manner. Thorough evaluation of our approach on both synthetic and real datasets shows that it outperforms state-of-the-art methods in both calibrated and uncalibrated scenarios.
... In photometric stereo, the shape at each point can be solved from the observed variation in shading across the images. Data from n texture charts are stacked into an n × N_p matrix M for estimating the initial shape S and lighting L by factorizing M = LS via SVD (Yuille et al., 1999). ...
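The SVD factorization referenced in this context can be sketched as follows: stack the n images (N_p pixels each) into an n × N_p matrix M and take its rank-3 truncation, recovering lighting L and albedo-scaled normals S up to an invertible 3 × 3 ambiguity. A minimal sketch; splitting each singular value evenly between the two factors is a common convention, not the only choice:

```python
import numpy as np

def factorize_ps(M):
    """Rank-3 factorization M ~= L @ S for uncalibrated Lambertian PS.

    M: (n_images, n_pixels) matrix of intensities.
    Returns L (n_images, 3) and S (3, n_pixels). Both are only defined
    up to an invertible 3x3 matrix A, since (L @ inv(A)) @ (A @ S)
    produces the same images -- the ambiguity later reduced to GBR.
    """
    U, s, Vt = np.linalg.svd(M, full_matrices=False)
    sqrt_s = np.sqrt(s[:3])           # split each singular value evenly
    L = U[:, :3] * sqrt_s             # per-image lighting (scaled)
    S = sqrt_s[:, None] * Vt[:3]      # per-pixel albedo-scaled normals
    return L, S
```

For noiseless Lambertian data without shadows, M is exactly rank 3 and the truncation is lossless; with real data it acts as a least-squares denoiser.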
Article
Full-text available
A number of methods have been proposed for face reconstruction from single/multiple image(s). However, it is still a challenge to do reconstruction from a limited number of wild images, in which there exist complex and varied imaging conditions, diverse face appearances, and a limited number of high-quality images. Most current mesh-model-based methods cannot generate high-quality face models because of the local mapping deviation in geometric optics and the distortion error introduced by discrete differential operations. In this paper, accurate geometrical consistency modeling on the B-spline parameter domain is proposed to reconstruct high-quality face surfaces from varied images. The modeling is completely consistent with the laws of geometric optics, and the B-spline representation reduces distortion during surface deformation. In our method, 0th- and 1st-order consistency of stereo are formulated based on low-rank texture structures and local normals, respectively, to approach pinpoint geometric modeling for face reconstruction. A practical solution combining the two consistencies, as well as an iterative algorithm, is proposed to optimize a highly detailed B-spline face effectively. Extensive empirical evaluations on synthetic data and unconstrained data are conducted, and the experimental results demonstrate the effectiveness of our method in challenging scenarios, e.g., a limited number of images with different head poses, illuminations, and expressions.
... In fact, these global illumination effects (cast shadows, self-reflections, ambient light) are one of the most challenging aspects of PS. [15,34] tackle the case of fixed ambient light, which, however, is too simple a model to cover realistic inter-reflections. The global illumination issue was first adequately addressed in [8] by employing a Convolutional Neural Network (CNN). ...
Preprint
Full-text available
Reconstructing the 3D shape of an object using several images under different light sources is a very challenging task, especially when realistic assumptions such as light propagation and attenuation, perspective viewing geometry and specular light reflection are considered. Many works tackling Photometric Stereo (PS) problems relax most of the aforementioned assumptions; in particular, they ignore specular reflection and global illumination effects. In this work, we propose the first CNN-based approach capable of handling these realistic assumptions in Photometric Stereo. We leverage recent improvements of deep neural networks for far-field Photometric Stereo and adapt them to the near-field setup. We achieve this by employing an iterative procedure for shape estimation which has two main steps. First, we train a per-pixel CNN to predict surface normals from reflectance samples. Second, we compute the depth by integrating the normal field in order to iteratively estimate light directions and attenuation, which is used to compensate the input images to compute reflectance samples for the next iteration. To the best of our knowledge, this is the first near-field framework able to accurately predict 3D shape from highly specular objects. Our method outperforms competing state-of-the-art near-field Photometric Stereo approaches in both synthetic and real experiments.
... Firstly, when lighting is unknown (uncalibrated PS), the local estimation of surface normals is underconstrained. As in SfS, the problem must be reformulated globally, and the integrability constraint must be imposed [133]. But even then, a low-frequency ambiguity known as the generalised bas-relief ambiguity remains [12]: it is necessary to introduce additional priors, see [106] for an overview of existing uncalibrated photometric stereo approaches, and [19] for a modern solution based on deep learning. ...
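The integrability constraint invoked in the context above can be checked numerically: the gradient field (p, q) implied by the normals must satisfy ∂p/∂y = ∂q/∂x, i.e. it must be the gradient of some depth map. A toy verification on a synthetic surface; the surface and grid are illustrative assumptions:

```python
import numpy as np

# Synthetic smooth depth map on a 64x64 grid (index coordinates).
y, x = np.mgrid[0:64, 0:64] / 64.0
z = np.sin(3 * x) * np.cos(2 * y)

p = np.gradient(z, axis=1)   # dz/dx
q = np.gradient(z, axis=0)   # dz/dy

# Integrability: mixed partials must agree, so the "curl" of (p, q)
# vanishes (up to discretization error) for a true gradient field.
curl = np.gradient(p, axis=0) - np.gradient(q, axis=1)
```

A non-integrable field (one that is not the gradient of any depth map) would leave a visible residual in the same quantity, which is what uncalibrated methods exploit as a global constraint.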
Chapter
Photometric 3D-reconstruction techniques aim at inferring the geometry of a scene from one or several images by inverting a physical model describing the image formation. This chapter presents an introductory overview of the main photometric 3D-reconstruction techniques, which are shape-from-shading, photometric stereo and shape-from-polarisation.
... A Lambertian surface's normals can be recovered up to a 3 × 3 linear ambiguity when light directions are unknown [19]. By considering the surface integrability constraint, this linear ambiguity can be reduced to a 3-parameter generalized bas-relief (GBR) ambiguity [15,6,52,26]. To further resolve the GBR ambiguity, many methods make use of additional clues like inter-reflections [7], specularities [13,16,12], albedo priors [4,44], isotropic reflectance symmetry [48,51], special light source distributions [54], or Lambertian diffuse reflectance maxima [35]. ...
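The linear ambiguity described in this passage is easy to demonstrate: for any invertible 3 × 3 matrix A, transforming the albedo-scaled normals by A and the lights by A⁻ᵀ leaves every Lambertian image unchanged, and the GBR matrices are the 3-parameter sub-family that additionally preserves integrability. A toy demonstration; parameter values and array sizes are illustrative:

```python
import numpy as np

def gbr(mu, nu, lam):
    """Generalized bas-relief matrix: the 3-parameter sub-family of the
    linear ambiguity that also preserves surface integrability."""
    return np.array([[1.0, 0.0, 0.0],
                     [0.0, 1.0, 0.0],
                     [mu,  nu,  lam]])

rng = np.random.default_rng(2)
b = rng.normal(size=(3, 100))    # albedo-scaled normals, one per pixel
s = rng.normal(size=(3, 5))      # one light direction per image (columns)
I = s.T @ b                      # Lambertian intensities (shadows ignored)

G = gbr(0.3, -0.2, 1.5)
b_t = G @ b                      # transformed surface
s_t = np.linalg.inv(G).T @ s     # correspondingly transformed lights
I_t = s_t.T @ b_t                # same images as I: the ambiguity
```

Since s_t.T @ b_t = s.T @ inv(G) @ G @ b = s.T @ b, the two surface/lighting pairs are indistinguishable from the images alone, which is why the extra clues listed above are needed.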
Conference Paper
Full-text available
This paper aims at discovering what a deep uncalibrated photometric stereo network learns to resolve the problem’s inherent ambiguity, and at designing an effective network architecture based on the new insight to improve performance. The recently proposed deep uncalibrated photometric stereo method achieved promising results in estimating directional lightings. However, what specifically inside the network contributes to its success remains a mystery. In this paper, we analyze the features learned by this method and find that they strikingly resemble attached shadows, shadings, and specular highlights, which are known to provide useful clues in resolving the generalized bas-relief (GBR) ambiguity. Based on this insight, we propose a guided calibration network, named GCNet, that explicitly leverages object shape and shading information for improved lighting estimation. Experiments on synthetic and real datasets show that GCNet achieves improved results in lighting estimation for photometric stereo, which echoes the findings of our analysis. We further demonstrate that GCNet can be directly integrated with existing calibrated methods to achieve improved results on surface normal estimation. Our code and model can be found at https://guanyingc.github.io/UPS-GCNet.