Conference Paper

Estimating specular roughness from polarized second order spherical gradient illumination

Authors:
  • Abhijeet Ghosh et al. (Rhythm & Hues Studios)

Abstract

Measurement of spatially varying BRDFs of real-world materials has been an active area of research in computer graphics, with image-based measurement being the preferred approach in practice. To restrict the total number of measurements, existing techniques typically trade spatial variation for angular variation of the surface BRDF [Marschner et al. 1999]. Recently, Ma et al. [2007] introduced a technique for estimating high-quality specular normals and albedo (Fig. 1, (a) & (b) respectively) of a specular object using polarized first order spherical gradient illumination conditions. In this work, we extend this technique to estimate per-pixel specular roughness using polarized second order spherical gradients as a measure of the variance about the mean (reflection vector). We demonstrate that for isotropic BRDFs, only three spherical gradient illumination patterns related to the second order spherical harmonics are sufficient for a robust estimate of per-pixel specular roughness (Fig. 1, (c)). We thus go further than previous work on image-based measurement of specular BRDFs, which typically obtains only sparse estimates of the spatial variation. Our technique also provides a direct estimate of the per-pixel specular roughness and hence has the added advantage of not requiring the off-line numerical optimization that is typical of the measure-and-fit approach to BRDF modeling.
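In outline, the estimator can be read as a method of moments over the sphere of incident directions. The following is a hedged sketch in our own notation (not the paper's exact formulas), treating the specular lobe at a pixel as a distribution D(ω) that each illumination pattern P(ω) integrates against:

```latex
% Each photograph under an illumination pattern P measures a moment of the
% specular lobe D (notation ours; a sketch of the estimator structure only):
I_P = \rho_s \int_{\Omega} P(\omega)\, D(\omega)\, d\omega
% 0th order (constant pattern, P = 1): the specular albedo
I_0 = \rho_s
% 1st order (linear patterns, P = \omega_i): the mean, i.e. the reflection vector
\mathbb{E}[\omega_i] = I_i / I_0
% 2nd order (quadratic patterns, P = \omega_i^2, from the second-order
% harmonics): the variance about the mean, which maps to specular roughness
\sigma_i^2 = \frac{I_{ii}}{I_0} - \left(\frac{I_i}{I_0}\right)^2
```

For an isotropic lobe the per-axis variances are redundant up to rotation, which is consistent with the abstract's claim that three second-order patterns suffice.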
... Ghosh et al. [Ghosh et al. 2009] proposed a setup suitable for roughly specular objects of any shape, based on a LED sphere with 150 controllable lights linearly polarised, with the subject placed at the centre of the sphere. It can be used to estimate spatially varying BRDFs for both isotropic and anisotropic materials, using up to 9 polarised second order spherical gradient illumination patterns. ...
... It can be used to estimate spatially varying BRDFs for both isotropic and anisotropic materials, using up to 9 polarised second order spherical gradient illumination patterns. For specular reflections, specular albedo, reflection vector and specular roughness can be directly estimated from the 0th, 1st [Ma et al. 2007] and 2nd order [Ghosh et al. 2009] statistics respectively. In the same work two additional setups are described. ...
... The analysis of the Stokes reflectance field of circularly polarised spherical illumination has been exploited by Ghosh et al. [Ghosh et al. 2010a] to estimate the specular and diffuse albedo, index of refraction and specular roughness for isotropic SVBRDFs, assuming known surface orientation. Three different setups are used to demonstrate the technique, similar to the ones described in [Ghosh et al. 2009] but with the light sources covered with right circular polarisers. Four pictures of the subject are required to measure the Stokes field, three of them with differently oriented linear polarisers in front of the camera and one with a circular polariser. ...
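As a concrete illustration of the four-picture Stokes measurement, here is a minimal sketch assuming ideal polarisers and linear analyser orientations of 0°, 45°, and 90° (a common convention; the exact angles used in the paper may differ):

```python
import numpy as np

def stokes_from_four_images(i0, i45, i90, i_circ):
    """Per-pixel Stokes vector (s0, s1, s2, s3) from four photographs.

    i0, i45, i90 : images taken through a linear polariser at 0/45/90 degrees
    i_circ       : image taken through a circular polariser
    Assumes ideal polarisers; a linear analyser at angle t measures
    (s0 + s1*cos(2t) + s2*sin(2t)) / 2, a circular analyser (s0 + s3) / 2.
    """
    s0 = i0 + i90              # total intensity
    s1 = i0 - i90              # horizontal vs. vertical linear polarisation
    s2 = 2.0 * i45 - s0        # +45 vs. -45 degree linear polarisation
    s3 = 2.0 * i_circ - s0     # right vs. left circular polarisation
    return np.stack([s0, s1, s2, s3], axis=0)
```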
Conference Paper
Photorealistic and physically-based rendering of real-world environments with high-fidelity materials is important to a range of applications, including special effects, architectural modelling, cultural heritage, computer games, automotive design, and virtual reality (VR). Our perception of the world depends on lighting and surface material characteristics, which determine how light is reflected, scattered, and absorbed. In order to reproduce appearance, we must therefore understand all the ways objects interact with light, and the acquisition and representation of materials have thus been an important part of computer graphics from its early days. Nevertheless, no material model or acquisition setup is without limitations in terms of the variety of materials represented, and different approaches vary widely in terms of compatibility and ease of use. In this course, we describe the state of the art in material appearance acquisition and modelling, ranging from mathematical BSDFs to data-driven capture and representation of anisotropic materials, and volumetric/thread models for patterned fabrics. We further address the problem of material appearance constancy across different rendering platforms. We present two case studies in architectural and interior design. The first demonstrates Yulio, a new platform for the creation, delivery, and visualization of acquired material models and reverse-engineered cloth models in immersive VR experiences. The second shows an end-to-end process of capture and data-driven BSDF representation using the physically-based Radiance system for lighting simulation and rendering.
... One approach involves moving a point light source, such as a flashlight on a mobile phone [Hui et al. 2017; Riviere et al. 2016; Kang et al. 2018, 2019; Ma et al. 2021]. Display photometric stereo exploits off-the-shelf displays as cost-effective, versatile active-illumination modules capable of generating spatially-varying trichromatic intensity variation [Clark 2010; Francken et al. 2008; Ghosh et al. 2009; Lattas et al. 2022; Liu et al. 2018; Nogue et al. 2022]. Lattas et al. [2022] demonstrated facial capture using multiple off-the-shelf monitors and multi-view cameras with trichromatic complementary illumination, enabling explicit surface reconstruction. ...
... and compute the diffuse and specular reflections; the resulting diffuse-reflection image is what we use for robust photometric stereo. Note that this diffuse-specular separation using polarized illumination and imaging has been used in other systems [Francken et al. 2008; Ghosh et al. 2009], and we apply the same principle to the polarized-monitor and polarization-camera setup. ...
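The polarization-difference separation referred to here has a standard closed form; a minimal sketch, assuming linearly polarised illumination, an ideal analyser, and fully depolarised diffuse reflection (function name is ours):

```python
import numpy as np

def separate_diffuse_specular(i_parallel, i_cross):
    """Classic polarisation-difference separation (a sketch, not any
    particular system's exact pipeline). Under linearly polarised
    illumination, specular reflection preserves polarisation while
    diffuse reflection is depolarised: the cross-polarised view sees
    half the diffuse light only, and the parallel view sees the other
    diffuse half plus the full specular component."""
    diffuse = 2.0 * i_cross
    specular = np.clip(i_parallel - i_cross, 0.0, None)
    return diffuse, specular
```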
Preprint
Full-text available
Photometric stereo leverages variations in illumination conditions to reconstruct per-pixel surface normals. The concept of display photometric stereo, which employs a conventional monitor as an illumination source, has the potential to overcome limitations often encountered in bulky and difficult-to-use conventional setups. In this paper, we introduce Differentiable Display Photometric Stereo (DDPS), a method designed to achieve high-fidelity normal reconstruction using an off-the-shelf monitor and camera. DDPS addresses a critical yet often neglected challenge in photometric stereo: the optimization of display patterns for enhanced normal reconstruction. We present a differentiable framework that couples basis-illumination image formation with a photometric-stereo reconstruction method. This facilitates the learning of display patterns that lead to high-quality normal reconstruction through automatic differentiation. To address the synthetic-real domain gap inherent in end-to-end optimization, we propose the use of a real-world photometric-stereo training dataset composed of 3D-printed objects. Moreover, to reduce the ill-posedness of photometric stereo, we exploit the linearly polarized light emitted from the monitor to optically separate diffuse and specular reflections in the captured images. We demonstrate that DDPS allows for learning display patterns optimized for a target configuration and is robust to initialization. We assess DDPS on 3D-printed objects with ground-truth normals and on diverse real-world objects, validating that DDPS enables effective photometric-stereo reconstruction.
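For reference, the classical reconstruction that display photometric stereo builds on is a per-pixel linear solve; a minimal Lambertian sketch (not DDPS's learned pipeline, and assuming calibrated directional lights):

```python
import numpy as np

def photometric_stereo_normals(images, light_dirs):
    """Least-squares Lambertian photometric stereo (a minimal sketch).

    images     : (k, h, w) stack of observations under k lightings
    light_dirs : (k, 3) unit lighting directions (calibrated)
    returns    : (h, w, 3) unit normals and (h, w) albedo
    """
    k, h, w = images.shape
    i = images.reshape(k, -1)                           # (k, h*w)
    g, *_ = np.linalg.lstsq(light_dirs, i, rcond=None)  # solve L @ g = i
    g = g.T.reshape(h, w, 3)                            # albedo-scaled normals
    albedo = np.linalg.norm(g, axis=-1)
    normals = g / np.clip(albedo[..., None], 1e-8, None)
    return normals, albedo
```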
... Inverse rendering is a longstanding challenge in computer vision and graphics. Most early works [Dong et al. 2014;Gardner et al. 2003;Ghosh et al. 2009;Guarnera et al. 2016;Xia et al. 2016] involve stringent conditions for scene capture, such as the need for specific lighting or complex camera setups. These extra settings provide sufficient priors. ...
Preprint
Traditional inverse rendering techniques are based on textured meshes, which adapt naturally to modern graphics pipelines, but costly differentiable multi-bounce Monte Carlo (MC) ray tracing poses challenges for modeling global illumination. Recently, neural fields have demonstrated impressive reconstruction quality but fall short in modeling indirect illumination. In this paper, we introduce a simple yet efficient inverse rendering framework that combines the strengths of both methods. Specifically, given a pre-trained neural field representing the scene, we can obtain an initial estimate of the signed distance field (SDF) and create a Neural Radiance Cache (NRC), an enhancement over the traditional radiance cache used in real-time rendering. By using the former to initialize differentiable marching tetrahedra (DMTet) and the latter to model indirect illumination, we can compute the global illumination via single-bounce differentiable MC ray tracing and jointly optimize the geometry, material, and lighting through backpropagation. Experiments demonstrate that, compared to previous methods, our approach effectively prevents indirect illumination effects from being baked into materials, thus obtaining high-quality reconstructions of the triangle mesh, physically-based (PBR) materials, and the high dynamic range (HDR) light probe.
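To make the single-bounce-plus-cache idea concrete, here is a hedged sketch in which indirect light is read from a learned radiance cache instead of recursing; every callable (brdf, occluded, cache) is an assumed interface of ours, not the paper's API:

```python
import numpy as np

def cosine_sample_hemisphere(n, rng):
    """Cosine-weighted direction around unit normal n (pdf = cos/pi)."""
    u1, u2 = rng.random(), rng.random()
    r, phi = np.sqrt(u1), 2.0 * np.pi * u2
    local = np.array([r * np.cos(phi), r * np.sin(phi), np.sqrt(1.0 - u1)])
    # build an orthonormal tangent frame around n
    up = np.array([0.0, 1.0, 0.0]) if abs(n[1]) < 0.9 else np.array([1.0, 0.0, 0.0])
    t = np.cross(n, up); t /= np.linalg.norm(t)
    b = np.cross(n, t)
    return local[0] * t + local[1] * b + local[2] * n

def direct_plus_cached_indirect(x, n, wo, light_dir, light_radiance,
                                brdf, occluded, cache, rng, n_samples=16):
    """One-bounce estimator with cached indirect light (a sketch).

    brdf(wi, wo, n) -> BRDF value; occluded(x, wi) -> shadow-ray test;
    cache(x, wi)    -> learned incoming radiance (stands in for the NRC).
    """
    # direct lighting from a single distant light
    direct = 0.0 if occluded(x, light_dir) else \
        light_radiance * brdf(light_dir, wo, n) * max(float(n @ light_dir), 0.0)
    # indirect lighting: cosine-sample the hemisphere, query the cache;
    # the cosine/pi pdf cancels the cosine term, leaving a factor of pi
    indirect = 0.0
    for _ in range(n_samples):
        wi = cosine_sample_hemisphere(n, rng)
        indirect += cache(x, wi) * brdf(wi, wo, n) * np.pi / n_samples
    return direct + indirect
```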
... Yet the challenges shift to synchronization across the light sources and between the lights and the camera, as well as calibration of the light sources. PS solutions are epitomized by the USC Light Stage [15,30,86], which utilizes thousands of light sources to provide controllable illumination [23,25,52,53,78], with a number of recent extensions [2,27,34,35,40,64,72,93]. A key benefit of PS is that it can produce very high-quality normal maps, significantly surpassing MVS reconstruction. ...
Preprint
Human modeling and relighting are two fundamental problems in computer vision and graphics, where high-quality datasets can largely facilitate related research. However, most existing human datasets only provide multi-view human images captured under the same illumination. Although valuable for modeling tasks, they are not readily usable in relighting problems. To promote research in both fields, in this paper, we present UltraStage, a new 3D human dataset that contains more than 2K high-quality human assets captured under both multi-view and multi-illumination settings. Specifically, for each example, we provide 32 surrounding views illuminated with one white light and two gradient illuminations. In addition to regular multi-view images, the gradient illuminations help recover detailed surface normal and spatially-varying material maps, enabling various relighting applications. Inspired by recent advances in neural representation, we further interpret each example as a neural human asset which allows novel view synthesis under arbitrary lighting conditions. We show our neural human assets can achieve extremely high capture performance and are capable of representing fine details such as facial wrinkles and cloth folds. We also validate UltraStage in single image relighting tasks, training neural networks with virtually relighted data from neural assets and demonstrating realistic rendering improvements over prior art. UltraStage will be publicly available to the community to stimulate significant future developments in various human modeling and rendering tasks.
... The methods generally observe the material sample with a fixed camera position, and solve for the parameters of a spatially-varying BRDF model such as diffuse albedo, roughness (glossiness) and surface normal. They differ in the number of light patterns required and their type; the patterns used include moving linear light [10], Gray code patterns [8] and spherical harmonic illumination [13]. In these approaches, the model and its optimization are specific to the light patterns and the optical setup of the method, as general non-linear optimization was historically deemed inefficient and not robust enough. ...
Preprint
Procedural material models have been gaining traction in many applications thanks to their flexibility, compactness, and easy editability. In this paper, we explore the inverse rendering problem of procedural material parameter estimation from photographs using a Bayesian framework. We use summary functions for comparing unregistered images of a material under known lighting, and we explore both hand-designed and neural summary functions. In addition to estimating the parameters by optimization, we introduce a Bayesian inference approach using Hamiltonian Monte Carlo to sample the space of plausible material parameters, providing additional insight into the structure of the solution space. To demonstrate the effectiveness of our techniques, we fit procedural models of a range of materials—wall plaster, leather, wood, anisotropic brushed metals and metallic paints—to both synthetic and real target images.
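The sampling idea can be illustrated with the simplest MCMC variant; a random-walk Metropolis sketch over material parameters (the paper uses Hamiltonian Monte Carlo, which additionally exploits posterior gradients; log_post and its summary-function likelihood are illustrative names of ours):

```python
import numpy as np

def metropolis_material_posterior(log_post, theta0, n_steps=5000,
                                  step=0.05, seed=0):
    """Random-walk Metropolis over procedural material parameters.

    log_post : callable mapping a parameter vector to its log posterior,
               e.g. a summary-function match term such as
               -||summary(render(theta)) - summary(photo)||^2 / (2*sigma^2)
               plus a log prior (illustrative, not the paper's exact form).
    Returns the chain of sampled parameter vectors.
    """
    rng = np.random.default_rng(seed)
    theta = np.asarray(theta0, dtype=float)
    lp = log_post(theta)
    samples = []
    for _ in range(n_steps):
        proposal = theta + step * rng.standard_normal(theta.shape)
        lp_new = log_post(proposal)
        if np.log(rng.random() + 1e-300) < lp_new - lp:  # accept w.p. min(1, ratio)
            theta, lp = proposal, lp_new
        samples.append(theta.copy())
    return np.array(samples)
```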
Preprint
Full-text available
We present a novel differentiable ray-tracing based face reconstruction approach where scene attributes - 3D geometry, reflectance (diffuse, specular and roughness), pose, camera parameters, and scene illumination - are estimated from unconstrained monocular images. The proposed method models scene illumination via a novel, parameterized virtual light stage, which, in conjunction with differentiable ray-tracing, introduces a coarse-to-fine optimization formulation for face reconstruction. Our method not only handles unconstrained illumination and self-shadow conditions, but also estimates diffuse and specular albedos. To estimate the face attributes consistently and with practical semantics, a two-stage optimization strategy systematically uses a subset of parametric attributes, where subsequent attribute estimations factor in those previously estimated. For example, self-shadows estimated during the first stage later prevent shadowing from being baked into the personalized diffuse and specular albedos in the second stage. We show the efficacy of our approach in several real-world scenarios, where face attributes can be estimated even under extreme illumination conditions. Ablation studies, analyses, and comparisons against several recent state-of-the-art methods show the improved accuracy and versatility of our approach. With consistent reconstruction of face attributes, our method enables several style edit and transfer applications - illumination, albedo, self-shadow - as discussed in the paper.
Article
Full-text available
Procedural material models have been gaining traction in many applications thanks to their flexibility, compactness, and easy editability. We explore the inverse rendering problem of procedural material parameter estimation from photographs, presenting a unified view of the problem in a Bayesian framework. In addition to computing point estimates of the parameters by optimization, our framework uses a Markov Chain Monte Carlo approach to sample the space of plausible material parameters, providing a collection of plausible matches that a user can choose from, and efficiently handling both discrete and continuous model parameters. To demonstrate the effectiveness of our framework, we fit procedural models of a range of materials—wall plaster, leather, wood, anisotropic brushed metals and layered metallic paints—to both synthetic and real target images.
Article
Full-text available
Empowered by deep learning, recent methods for material capture can estimate a spatially-varying reflectance from a single photograph. Such lightweight capture is in stark contrast with the tens or hundreds of pictures required by traditional optimization-based approaches. However, a single image is often simply not enough to observe the rich appearance of real-world materials. We present a deep-learning method capable of estimating material appearance from a variable number of uncalibrated and unordered pictures captured with a handheld camera and flash. Thanks to an order-independent fusing layer, this architecture extracts the most useful information from each picture, while benefiting from strong priors learned from data. The method can handle both view and light direction variation without calibration. We show how our method improves its prediction with the number of input pictures, and reaches high-quality reconstructions with as few as 1 to 10 images - a sweet spot between existing single-image and complex multi-image approaches.
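The order-independent fusing layer can be illustrated with symmetric pooling; a minimal sketch, assuming per-image feature maps have already been extracted (the exact pooling operator used by the paper is not specified here):

```python
import numpy as np

def order_independent_fuse(features):
    """Permutation-invariant fusion of per-image features (a sketch).

    features : array of shape (n_images, h, w, c), one feature map per photo
    returns  : array of shape (h, w, 2*c) with max- and mean-pooled features
    """
    pooled_max = features.max(axis=0)    # strongest response per location
    pooled_mean = features.mean(axis=0)  # average evidence across photos
    return np.concatenate([pooled_max, pooled_mean], axis=-1)
```

Because max and mean are symmetric in their arguments, the fused features, and hence the prediction, are invariant to the ordering of the input pictures and accept any number of them.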
Conference Paper
Full-text available
We estimate surface normal maps of an object from either its diffuse or specular reflectance using four spherical gradient illumination patterns. In contrast to traditional photometric stereo, the spherical patterns allow normals to be estimated simultaneously from any number of viewpoints. We present two polarized lighting techniques that allow the diffuse and specular normal maps of an object to be measured independently. For scattering materials, we show that the specular normal maps yield the best record of detailed surface shape while the diffuse normals deviate from the true surface normal due to subsurface scattering, and that this effect is dependent on wavelength. We show several applications of this acquisition technique. First, we capture normal maps of a facial performance simultaneously from several viewing positions using time-multiplexed illumination. Second, we show that high-resolution normal maps based on the specular component can be used with structured-light 3D scanning to quickly acquire high-resolution facial surface geometry using off-the-shelf digital still cameras. Finally, we present a real-time shading model that uses independently estimated normal maps for the specular and diffuse color channels to reproduce some of the perceptually important effects of subsurface scattering.
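A minimal sketch of the normal recovery from the four photographs, assuming gradient patterns of the form P_i(ω) = (1 + ω_i)/2 so that the ratio of each gradient image to the constant image yields the mean direction of the reflectance lobe (applied to either the diffuse or the specular component after polarization separation):

```python
import numpy as np

def normals_from_spherical_gradients(i_const, i_x, i_y, i_z):
    """Per-pixel normals from four spherical gradient photographs (a sketch).

    i_const       : image under constant full-sphere illumination
    i_x, i_y, i_z : images under gradient illumination along each axis
    Inverting P_i = (1 + omega_i)/2 gives the mean lobe direction as
    2 * i_grad / i_const - 1 per axis.
    """
    eps = 1e-8
    n = np.stack([2.0 * i_x / (i_const + eps) - 1.0,
                  2.0 * i_y / (i_const + eps) - 1.0,
                  2.0 * i_z / (i_const + eps) - 1.0], axis=-1)
    return n / np.clip(np.linalg.norm(n, axis=-1, keepdims=True), eps, None)
```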
Conference Paper
We present a new image-based process for measuring the bidirectional reflectance of homogeneous surfaces rapidly, completely, and accurately. For simple sample shapes (spheres and cylinders) the method requires only a digital camera and a stable light source. Adding a 3D scanner allows a wide class of curved near-convex objects to be measured. With measurements for a variety of materials from paints to human skin, we demonstrate the new method's ability to achieve high resolution and accuracy over a large domain of illumination and reflection directions. We verify our measurements by tests of internal consistency and by comparison against measurements made using a gonioreflectometer.
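The core of the image-based measurement is that, with known geometry and calibrated radiometry, every pixel is one BRDF sample, so a single photo of a sphere covers many surface orientations at once; a hedged per-pixel sketch for the sphere case (names and radiometric conventions are ours):

```python
import numpy as np

def brdf_sample_from_sphere_pixel(p, center, light_pos, cam_pos,
                                  radiance, irradiance_perp):
    """Turn one calibrated pixel observation of a sphere into one BRDF sample.

    p               : 3D surface point seen by the pixel (known geometry)
    radiance        : reflected radiance measured at the pixel (calibrated)
    irradiance_perp : incident irradiance at p measured perpendicular to the
                      light direction (e.g. source intensity / distance^2)
    Returns the BRDF value and the local (wi, wo, n) directions to index it by.
    """
    n = p - center; n = n / np.linalg.norm(n)           # sphere normal
    wi = light_pos - p; wi = wi / np.linalg.norm(wi)    # toward the light
    wo = cam_pos - p;  wo = wo / np.linalg.norm(wo)     # toward the camera
    cos_i = max(float(n @ wi), 1e-8)
    # BRDF = reflected radiance / incident irradiance on the surface,
    # where the surface irradiance is irradiance_perp * cos(theta_i)
    fr = radiance / (irradiance_perp * cos_i)
    return fr, (wi, wo, n)
```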