Conference Paper

... Enhanced perception [15,28,40,53,81,87,91,95,98,145]; Showing the invisible [15,91]; Pigments [28,47]; Surface defects [95]; Interpretation/Monitoring [6,15,40,67,92,146,147]; Dissemination/Inspection [6,40,67,81,87,89,95,98,145,147] Enhanced perception [54,80,81]; Automatic illustration [54]; Species identification/Interpretation [55]; Dissemination/Inspection [54,80,81]. ...
... In the direct interpolation scenario, Radial Basis Function (RBF) [14] interpolation has typically been used to produce an image relighted with a new virtual light from the MLIC stack [47]. Given the N images corresponding to the closest neighbors of the new light direction, RBF interpolation is performed by computing the parameters that define a sum of N radial functions. ...
... For this reason, these techniques have been successfully employed in many application domains (e.g., Cultural Heritage - CH, Sec. 1.3), and they are well suited to directly handling complex materials whose behavior is difficult to model [47]. ...
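The RBF relighting mentioned in the excerpts above can be illustrated with a minimal per-pixel sketch; the Gaussian kernel, the width parameter sigma, and the array layout are illustrative assumptions rather than details of the cited implementations.

```python
import numpy as np

def gaussian_rbf_relight(stack, light_dirs, new_dir, sigma=0.35):
    """Relight a MLIC stack with a per-pixel Gaussian RBF interpolant.

    stack      : (N, H, W) observed intensities, one image per light
    light_dirs : (N, 2) projected (lx, ly) light directions
    new_dir    : (2,) target (virtual) light direction
    sigma      : Gaussian kernel width (assumed value, to be tuned)
    """
    N, H, W = stack.shape
    # Pairwise Gaussian kernel between the N acquisition light directions
    d2 = np.sum((light_dirs[:, None, :] - light_dirs[None, :, :]) ** 2, axis=-1)
    Phi = np.exp(-d2 / (2.0 * sigma ** 2))                      # (N, N)
    # Solve Phi @ w = observations for every pixel at once
    w = np.linalg.solve(Phi, stack.reshape(N, -1))              # (N, H*W)
    # Evaluate the sum of N radial functions at the new light direction
    phi_new = np.exp(-np.sum((light_dirs - new_dir) ** 2, axis=1)
                     / (2.0 * sigma ** 2))                      # (N,)
    return (phi_new @ w).reshape(H, W)
```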
Thesis
Full-text available
Multi-Light Image Collections (MLICs) are stacks of photos of a scene acquired with a fixed viewpoint and a varying surface illumination that provide large amounts of visual and geometric information. Over the last decades, a wide variety of methods have been devised to extract information from MLICs, and their use has been demonstrated in different application domains to support daily activities. In this thesis, we present methods that leverage MLICs for surface analysis and visualization. First, we provide background information: acquisition setups, light calibration, and application areas where MLICs have been successfully used to support daily analysis work. Next, we discuss the use of MLICs for surface visualization and analysis and the available tools used to support the analysis. Here, we discuss methods that strive to support the direct exploration of the captured MLIC, methods that generate relightable models from a MLIC, non-photorealistic visualization methods that rely on MLICs, and methods that estimate normal maps from MLICs, and we point out visualization tools used for MLIC analysis. In Chapter 3 we propose novel benchmark datasets (RealRTI, SynthRTI and SynthPS) that can be used to evaluate algorithms that rely on MLICs, and we discuss available benchmarks for the validation of photometric algorithms that can also be used to validate other MLIC-based algorithms. In Chapter 4, we evaluate the performance of different photometric stereo algorithms using SynthPS for cultural heritage applications; RealRTI and SynthRTI have been used to evaluate the performance of the (Neural)RTI method. Then, in Chapter 5, we present a neural network-based RTI method, NeuralRTI, a framework for pixel-based encoding and relighting of RTI data. In this method, using a simple autoencoder architecture, we show that it is possible to obtain a highly compressed representation that better preserves the original information and provides increased quality of virtual images relighted from novel directions, particularly in the case of challenging glossy materials. Finally, in Chapter 6, we present a method for the detection of cracks on the surface of paintings from multi-light image acquisitions, which can also be used on single images, and we conclude our presentation.
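The pixel-based encoding idea summarized in this abstract can be sketched roughly as follows; the layer sizes, activation choice, and 9-value code length are illustrative assumptions, not the exact NeuralRTI architecture described in the thesis.

```python
import torch
import torch.nn as nn

class PixelRelightingAutoencoder(nn.Module):
    """Toy per-pixel encoder/decoder in the spirit of a neural RTI codec.

    The encoder compresses the stack of per-pixel RGB observations into a
    short code; the decoder maps (code, light direction) to an RGB value.
    Sizes below are illustrative assumptions.
    """
    def __init__(self, n_lights, code_size=9, hidden=50):
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Linear(n_lights * 3, hidden), nn.ELU(),
            nn.Linear(hidden, hidden), nn.ELU(),
            nn.Linear(hidden, code_size),
        )
        self.decoder = nn.Sequential(
            nn.Linear(code_size + 2, hidden), nn.ELU(),
            nn.Linear(hidden, hidden), nn.ELU(),
            nn.Linear(hidden, 3),
        )

    def forward(self, pixel_obs, light_dir):
        # pixel_obs: (B, n_lights * 3) RGB samples of one pixel under all lights
        # light_dir: (B, 2) projected (lx, ly) direction of the virtual light
        code = self.encoder(pixel_obs)
        return self.decoder(torch.cat([code, light_dir], dim=1))
```

Training would minimize the reconstruction error against the acquired images; at display time only the per-pixel codes and the decoder weights need to be stored.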
... This image type is increasingly used in the Cultural Heritage (CH) field because the way the light interacts with the object of interest allows disclosing important information on the surface conservation status or the constituent materials. The reflectance behavior of the materials and the perception of the fine details are very important for the study of bas-reliefs [1], coins [2,3], paintings [4,5,6], or epigraphy [7]. RTI images do not encode a plain color for each pixel, but a function that allows computing the specific color associated with each pixel given the light direction. ...
... Recently, RTI technology has been extended to enable multi-spectral analysis of the acquired data [6] under different lighting conditions. In this recent work, the per-pixel fitting function is replaced by an interpolation approach. ...
... This paper is an extended version which presents more in-depth results. The approach was designed to, first, support more sophisticated RTI encoding, following the approach proposed in [6], and, second, enable interpolation-based RTI visualization on the Web, thus requiring only a reasonable amount of data to be transmitted. Our focus is therefore to produce an encoding that preserves information while also applying a considerable degree of compression. ...
Article
Full-text available
Relightable images have been widely used as a valuable tool for the study of Cultural Heritage (CH) artifacts, including coins, bas-reliefs, paintings, and epigraphs. Reflection Transformation Imaging (RTI), a commonly used type of relightable image, consists of a per-pixel function which encodes the reflection behavior, estimated from a set of digital photographs acquired from a fixed view. Web visualisation tools for RTI images currently require the transmission of substantial quantities of data in order to achieve high-fidelity renderings. We propose a web-friendly compact representation for RTI images based on a joint interpolation-compression scheme that combines a PCA-based data reduction with a Gaussian Radial Basis Function (RBF) interpolation, exhibiting superior performance in terms of quality/size ratio. This approach can also be adapted to other data interpolation schemes and is not limited to Gaussian RBF. The rendering part is simple to implement and computationally efficient, allowing real-time rendering on low-end devices.
... This type of image is increasingly used in the Cultural Heritage (CH) field because the way the light interacts with the object of interest allows disclosing important information on the surface conservation status or the constituent materials. The reflectance behavior of the materials and the perception of the fine details are very important for the study of bas-reliefs [8], coins [16,19], paintings [6,7,18], or epigraphy [12]. ...
... Recently, the RTI approaches have been extended to enable multispectral analysis of the acquired data [6] under different lighting conditions. In this recent work, the per-pixel function fitting is replaced by an interpolation approach. ...
... Recently, Multi-Spectral Reflectance Imaging (MS-RTI) has been proposed by Giachetti et al. [6]. In MS-RTI, five frequency bands (IR, UV and visible spectrum) are acquired under different lighting conditions. ...
Conference Paper
Full-text available
Relightable images have proven to be a valuable tool for the study and analysis of coins, bas-reliefs, paintings, and epigraphy in the Cultural Heritage (CH) field. Reflection Transformation Imaging (RTI) images are the most widespread type of relightable images. An RTI image consists of a per-pixel function which encodes the reflection behavior, estimated from a set of digital photographs acquired from a fixed view. Even if web visualization tools for RTI images are available, high fidelity of the relighted images still requires a large amount of data to be transmitted. To overcome this limit, we propose a web-friendly compact representation for RTI images which allows very high quality of the rendered images with a relatively small amount of data (in the order of 6-9 standard JPEG color images). The proposed approach is based on a joint interpolation-compression scheme that combines a PCA-based data reduction with a Gaussian Radial Basis Function (RBF) interpolation. We will see that the proposed approach can also be adapted to other data interpolation schemes and is not limited to Gaussian RBF. The proposed approach has been compared with several techniques, demonstrating its superior performance in terms of quality/size ratio. Additionally, the rendering part is simple to implement and very efficient in terms of computational cost. This allows real-time rendering also on low-end devices.
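A minimal sketch of the joint PCA/RBF idea, under the assumption that per-pixel reflectance samples are stacked into a pixels-by-lights matrix, reduced with PCA, and relighted by RBF-interpolating the per-light quantities; the component count and kernel width are illustrative, not the paper's settings.

```python
import numpy as np

def build_compact_representation(stack, n_components=9):
    """PCA-reduce a MLIC stack so each pixel keeps n_components coefficients."""
    N, H, W = stack.shape
    X = stack.reshape(N, -1).T                     # (H*W, N): one row of light samples per pixel
    mean = X.mean(axis=0)                          # (N,) per-light mean
    _, _, Vt = np.linalg.svd(X - mean, full_matrices=False)
    basis = Vt[:n_components]                      # (k, N) principal directions in "light space"
    coeffs = (X - mean) @ basis.T                  # (H*W, k) compact per-pixel coefficients
    return mean, basis, coeffs

def relight(mean, basis, coeffs, light_dirs, new_dir, sigma=0.35):
    """RBF-interpolate the per-light mean and each basis row at new_dir, then project back."""
    d2 = np.sum((light_dirs[:, None, :] - light_dirs[None, :, :]) ** 2, axis=-1)
    Phi = np.exp(-d2 / (2.0 * sigma ** 2))
    phi_new = np.exp(-np.sum((light_dirs - new_dir) ** 2, axis=1) / (2.0 * sigma ** 2))

    def interp(samples):                           # scalar Gaussian RBF interpolant
        return phi_new @ np.linalg.solve(Phi, samples)

    b_new = np.array([interp(row) for row in basis])   # (k,) basis evaluated at the new direction
    return interp(mean) + coeffs @ b_new               # (H*W,) relit intensities
```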
... RTI was originally developed at HP Labs by Malzbender et al. [8,9] under the name Polynomial Texture Mapping (PTM), which refers to the implemented modeling method. Other modeling methods have been developed, such as the HSH (Hemispherical Harmonics) model [10][11][12][13], the DMD (Discrete Modal Decomposition) method [14][15][16][17], and the approach based on RBF (Radial Basis Functions) [18,19]. Particularly important developments have occurred in the field of the digitization of historical and cultural heritage objects [20][21][22]. ...
... RTI acquisition and characterization make it possible to more directly measure and describe the appearance function, i.e., the effects that are not intrinsic to the material but are instead created by the geometry of the surface [7,31]. Therefore, depending on the density of the acquisition angles and the methods chosen, RTI acquisition can produce a voluminous and complex set of data [18,32,33]. ...
Article
Full-text available
Reflectance Transformation Imaging (RTI) is a non-contact technique which consists of acquiring a set of multi-light images by varying the direction of the illumination source on a scene or a surface. This technique provides access to a wide variety of local surface attributes which describe the angular reflectance of surfaces as well as their local microgeometry (photometric stereo approach). In the context of the inspection of the visual quality of surfaces, an essential issue is to be able to estimate the local visual saliency of the inspected surfaces from the often-voluminous acquired RTI data in order to quantitatively evaluate the local appearance properties of a surface. In this work, a multi-scale and multi-level methodology is proposed, and the approach is extended to allow for the global comparison of different surface roughnesses in terms of their visual properties. The methodology is applied to different industrial surfaces, and the results show that the visual saliency maps thus obtained allow an objective quantitative evaluation of the local and global visual properties of the inspected surfaces.
... From this set of images, each pixel is associated with a set of discrete values (measured gray levels, considered to be proportional to the luminance [10]). To model the surface visual appearance continuously and to allow relighting the surface for any virtual direction of light, this set of luminance values can be approximated or interpolated locally [4], [11], [12]. The main approximation methods used to model this information are the Polynomial Texture Mapping (PTM) approach, based on second-order polynomial functions [13], the Hemispherical Harmonics (HSH) approach [14], and the Discrete Modal Decomposition (DMD) [8], [15]-[17]. ...
... These natural modes form the projection basis, named the modal basis. Thus, the modes derive from the resolution of the dynamics equation (4). ...
Conference Paper
Full-text available
In this paper, we propose to evaluate the quality of the reconstruction and relighting from images acquired by a Reflectance Transformation Imaging (RTI) device. Three relighting models, namely PTM, HSH, and DMD, are evaluated using PSNR and SSIM. A visual assessment of how the reconstructed surfaces are perceived is also carried out through a sensory experiment. This study allows estimating the relevance of these models for reproducing the appearance of the manufactured surfaces. It also shows that DMD produces the reconstruction/relighting closest to the acquired measurement, and that a higher sampling density does not necessarily mean a higher perceptual quality.
... RTI's integration with other imaging modalities has demonstrated profound utility in the domain of cultural heritage. A significant advancement in RTI is the incorporation of High Dynamic Range Reflectance Transformation Imaging (HDR-RTI) [29], Multi-Spectral Reflectance Transformation Imaging (MS-RTI) [18,14,2], Fluorescence Transformation Imaging (FTI) [5,19] and Focus Variation Reflectance Transformation Imaging [21] techniques. These techniques provide better opportunities for surface analysis, bringing new knowledge to the field of cultural heritage and archeology. ...
Conference Paper
This paper investigates the optimization of acquisition in Reflectance Transformation Imaging (RTI). Current methods for RTI acquisition are either computationally expensive or impractical, which leads to continued reliance on conventional schemes, such as homogeneous, equally spaced light sampling, in museums. We propose a methodology aimed at dynamic collaboration between automated analysis and cultural heritage expert knowledge to obtain optimized light positions. Our approach is cost-effective and adaptive to both linear and non-linear reflectance profile scenarios. The practical contribution of research in this field has a considerable impact on the cultural heritage context and beyond.
... Many new RTI acquisition modalities have recently been developed, including multispectral approaches [34,35], approaches to measure the complete luminance dynamic range (HD-RTI, [36][37][38]), self-adaptive approaches to determine the relevant lighting directions (NBLP-RTI, [39]), and even robot-based RTI systems [40]. Within the framework of this research, we focus on the RTI acquisition parameters associated with the conventional approach. ...
Article
Full-text available
This work investigates the use of Reflectance Transformation Imaging (RTI) rendering for visual inspection. This imaging technique is being used more and more often for the inspection of the visual quality of manufactured surfaces. It allows reconstructing a dynamic virtual rendering of a surface from the acquisition of a sequence of images where only the illumination direction varies. We investigate, through psychometric experimentation, the influence of different essential parameters in the RTI approach, including modeling methods, the number of lighting positions and the measurement scale. In addition, to include the dynamic aspect of perception mechanisms in the methodology, the psychometric experiments are based on a design of experiments approach and conducted on reconstructed visual rendering videos. The proposed methodology is applied to different industrial surfaces. The results show that the RTI approach can be a relevant tool for computer-aided visual inspection. The proposed methodology makes it possible to objectively quantify the influence of RTI acquisition and processing factors on the perception of visual properties, and the results obtained show that their impact in terms of visual perception can be significant.
... Following this trend, some researchers have already paved the way for coupling RTI with multispectral imaging. Important contributions on the development of Multispectral RTI (MSRTI) were presented in [8,9,10]. More recently, Ono et al. [11] demonstrated the feasibility and potential of multispectral RTI for the investigation of varnish cleaning of paintings. ...
... In addition to architectural elements, we also documented rock art, which can be found at the sites or a short distance from them. The rock art is one of the elements that presented an opportunity to use modern research methods in a wider context - from documentation of the state of preservation to attempts to extract invisible elements using three-dimensional digitization (e.g., Duffy 2018a; Duffy 2018b; Earl et al. 2010; Giachetti et al. 2017; Goskar 2017; Pires, Rubio & Arana 2015). Because of weather conditions and human activity, the condition of the petroglyphs is occasionally very poor, which sometimes makes their interpretation or documentation by traditional methods difficult or even impossible. ...
Preprint
Full-text available
Our paper focuses on modern techniques of documentation, such as photogrammetry and laser scanning, and later analysis in a virtual environment of the Ancestral Pueblo sites, with sandstone architecture and rock art, that are located in several canyons of the central Mesa Verde region, southwestern Colorado, USA. All of the sites roughly date back to the 13th century A.D. and could have functioned as a community of allied sites. The research was conducted over the course of several seasons by the Sand Canyon-Castle Rock Community Archaeological Project led by the Institute of Archaeology at the Jagiellonian University in Kraków. The goal of digitising the Ancestral Puebloan sites was to accurately document and analyse the condition of preservation of the architectural features and rock art. The registered data have been used to generate accurate 2D documentation together with 3D models. The 3D models that were generated have also been used to interpret some barely visible details, for example of the petroglyphs, by varying the position of the light with the use of Reflectance Transformation Imaging (RTI) software. Another element is the virtual three-dimensional models that we used in a game engine, and a Digital Elevation Model that encompasses the sites and the associated environment. We also briefly discuss the potential, benefits, and disadvantages of using specific methods in the field in the research area, mainly photogrammetry, laser scanning, and RTI analysis.
... Or, stakeholders may want to disseminate interactive, photorealistic virtual models to viewers. This can even be combined with multispectral applications (Van der Perre et al. 2016; Hanneken 2016; Giachetti et al. 2017). Depending on the processing method and the applied visualisation filters, all such variations of output are possible. ...
Technical Report
Full-text available
Interactive Pixel Based file formats have been produced over the course of the last two decades by many stakeholders active in the Heritage sector. It is a global story. Their use has facilitated research and dissemination strategies for vast numbers of artefacts, conservation interventions and scientific studies. The technology has been explored and elaborated via varying technical strategies, each one of them delivering specific results with visual benefits and paths for computational research (Pintus et al. 2019). This situation is also reflected in the many naming conventions used for this technology: Relightable Images, Multi-Light Image Collections, Reflectance Transformation Imaging, Multi-Light Images, Multi-Light Reflectance, and so on. It has been the aim of the pixel+ project to bring the existing spectrum of these technologies closer together, by A. creating a universal web interface for its most popular processed output formats (objective 1), B. finding solutions to store and disseminate these types of multi-layered datasets in a new open, web-optimized file format (objective 2), and C. providing a conversion tool allowing stakeholders to convert their existing datasets into the new file format (objective 2). Throughout the pixel+ project, the general collective term for the targeted technology is Single Camera Multi-Light (SCML), and the newly proposed file format is .scml. SCML techniques have been used extensively to study objects under varying light conditions. Intuitively, we humans perform this technique to study local surface detail, the shape of an object or reflectance properties, e.g. by using a flashlight at raking angles to reveal tiny surface details. These visual cues that change under varying light conditions are captured with cameras and serve as input for various algorithms that model this light-dependent pixel intensity (i.e. formulate the final pixel color as a function of the light direction) or even learn surface and material properties from these cues. Two major SCML approaches lie within the scope of the pixel+ project: on the one hand, the Belgian Portable Light Dome (PLD) system, and on the other hand, the US solution, Reflectance Transformation Imaging (RTI), sometimes also known as Polynomial Texture Mapping (PTM), piloted by Cultural Heritage Imaging (CHI, San Francisco). The latter includes the line of work expanded mainly by ISTI/CNR researchers in Italy. Both approaches have proven their added value over the last 20 years for the visual documentation, study and multi-layered digital preservation of heritage objects. In the meantime, countless past and ongoing research and imaging projects, initiated and funded all over the world, illustrate this clearly. For all these projects and their outcomes, pixel+ has provided new solutions. Both platforms have been developed by separate research groups with a different focus. This resulted in dissimilar interactive pixel based file formats, making it challenging to interchange the information processed and stored in their respective output datasets. Therefore, the pixel+ project aimed to merge the technologies of both approaches into a single consultation platform capable of displaying all existing interactive pixel based file formats with their respective viewing modes and metadata. This viewer has the ability to illuminate the virtual model of the heritage object and to consult the processed datasets with visual styles developed within the research spheres of both platforms.
Moreover, as both methods are alike in terms of required input and processed output, pixel+ focuses on other types of integration, resulting in new additional visual styles for processed data, as well as a novel reprocessing pipeline for existing source image sets. Because RTI and PLD are still relatively young technologies, knowledge of their technicalities and wide range of potential benefits is still limited. A new dissemination website has been launched which explains these technologies and their derivatives. It contains best practices and use cases, and shares community updates. As it is in the interest of the whole community, the website does not shy away from discussing other, future computational photography techniques that further elaborate the viewing, processing and/or digital preservation of interactive pixel based file formats. The source code of both the pixel+ viewer and this companion website is on GitHub.
... RTI has been applied in many useful applications across a wide range of cultural heritage domains, such as condition monitoring, treatment documentation, and surface analysis. With its mathematical enhancement functions, it is possible to interactively observe features that are difficult to see with the naked eye (Manrique Tamayo et al., 2013; Giachetti et al., 2017; Clarricoates, Kotoula, 2019). In recent research, Pamart et al. developed an integrated tool to cross-reference the qualitative depth information of RTI and the quantitative depth information of photogrammetry (Pamart et al., 2019). ...
Article
Full-text available
In the past few decades, a number of scholars have studied painting classification based on image processing or computer vision technologies. Further, as machine learning technology has rapidly developed, painting classification using machine learning has been carried out. However, due to the lack of information about brushstrokes in a photograph, typical models cannot use more precise information about the painter's painting style. We hypothesized that the visualized depth information of brushstrokes is effective in improving the accuracy of machine learning models for painting classification. This study proposes a new data utilization approach in machine learning with Reflectance Transformation Imaging (RTI) images, which maximizes the visualization of the three-dimensional shape of brushstrokes. A certain artist's unique brushstrokes can be revealed in RTI images, which are difficult to obtain with regular photographs. If these new types of images are applied as training data in a machine learning model, classification can be conducted using not only the shape of the color but also the depth information. We used the Convolutional Neural Network (CNN), a model optimized for image classification, with the VGG-16, ResNet-50, and DenseNet-121 architectures. We conducted a two-stage experiment using the works of two Korean artists. In the first experiment, we obtained a key part of the painting from RTI data and photographic data. In the second experiment, on the second artist's work, a larger quantity of data was acquired and the whole artwork was captured. The results showed that the RTI-trained model achieved higher accuracy than the non-RTI-trained model. In this paper, we propose a method which uses machine learning and RTI technology to analyze and classify paintings more precisely, to verify our hypothesis.
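For readers unfamiliar with the training setup, the following is a generic transfer-learning sketch using torchvision's ResNet-50 (one of the three architectures mentioned); the two-class head (one class per artist), learning rate, and preprocessing are assumptions for illustration, not the paper's exact configuration.

```python
import torch
import torch.nn as nn
from torchvision.models import resnet50, ResNet50_Weights

# Assumed setup: two classes (one per artist); inputs are RTI-derived
# renderings resized to 224x224 and normalized like ImageNet images.
model = resnet50(weights=ResNet50_Weights.IMAGENET1K_V1)   # pretrained backbone
model.fc = nn.Linear(model.fc.in_features, 2)              # replace the classifier head

optimizer = torch.optim.Adam(model.parameters(), lr=1e-4)
criterion = nn.CrossEntropyLoss()

def train_step(images, labels):
    """One optimization step; images: (B, 3, 224, 224), labels: (B,) class ids."""
    optimizer.zero_grad()
    loss = criterion(model(images), labels)
    loss.backward()
    optimizer.step()
    return loss.item()
```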
... On one hand, multi-view and image-based modeling (CRP and SfM) are periodically discussed regarding their gain in completeness and/or accuracy in necessary critical overviews (Remondino et al., 2014), whereas they can nowadays be exploited as a stable and reliable technique. On the other hand, multi-light and RTI techniques are developing rapidly from the original open-source code (Malzbender et al., 2001; Mudge et al., 2006), with numerous enhancements concerning the fitting-viewing side (Palma et al., 2010; Giachetti et al., 2017a) and extending to multispectral (Hanneken, 2014; Giachetti et al., 2017b) and new device (Schuster et al., 2014) support. ...
Article
Full-text available
Close-Range Photogrammetry (CRP) and Reflectance Transformation Imaging (RTI) are two of the most used image-based techniques for documenting and analyzing Cultural Heritage (CH) objects. Nevertheless, their potential impact in supporting the study and analysis of the conservation status of CH assets is reduced because they remain mostly applied and analyzed separately. This is mostly because easy-to-use tools for the spatial registration of multimodal data and for joint visualisation are missing. The aim of this paper is to describe a complete framework for effective data fusion and to present a user-friendly viewer enabling the joint visual analysis of 2D/3D data and RTI images. This contribution is framed by the on-going implementation of automatic multimodal registration (3D, 2D RGB and RTI) into a collaborative web platform (AIOLI), enabling the management of hybrid representations through an intuitive visualization framework and also supporting semantic enrichment through spatialized 2D/3D annotations.
Article
Full-text available
We study the relationship between reflectance and the degree of linear polarization of radiation that bounces off the surface of an unvarnished oil painting. We design a VNIR-SWIR (400 nm to 2500 nm) polarimetric reflectance imaging spectroscopy setup that deploys unpolarized light and allows us to estimate the Stokes vector at the pixel level. We observe a strong negative correlation between the S0 component of the Stokes vector (which can be used to represent the reflectance) and the degree of linear polarization in the visible interval (average -0.81), while the correlation is weaker and varying in the infrared range (average -0.50 in the NIR range between 780 and 1500 nm, and average -0.87 in the SWIR range between 1500 and 2500 nm). By tackling the problem with multi-resolution image analysis, we observe a dependence of the correlation on the local complexity of the surface. Indeed, we observe a general trend that strengthens the negative correlation for the effect of artificial flattening provoked by low image resolutions.
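The per-pixel quantities involved follow directly from the standard Stokes formalism; the sketch below computes the degree of linear polarization and its pixel-wise correlation with S0 for one spectral band (array names are assumed for illustration).

```python
import numpy as np

def dolp_and_correlation(S0, S1, S2):
    """Degree of linear polarization and its pixel-wise correlation with S0.

    S0, S1, S2 : (H, W) Stokes component images for a single spectral band.
    Returns the DoLP map and the Pearson correlation coefficient with S0.
    """
    dolp = np.sqrt(S1 ** 2 + S2 ** 2) / np.maximum(S0, 1e-12)
    r = np.corrcoef(S0.ravel(), dolp.ravel())[0, 1]
    return dolp, r
```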
Thesis
Full-text available
Controlling the visual perception of manufactured product surfaces is a central issue for industry. Yet, in companies, surface quality is often assessed by human inspectors; only a few specific cases use an instrumental or photometric approach. Among the photometric approaches, one is experiencing significant growth: Reflectance Transformation Imaging (RTI). However, this technique has limitations in terms of data acquisition and processing. The objective is therefore to address some of these limitations in order to improve RTI and, consequently, the visual quality control of surface states in industry. Current RTI systems are limited and cannot meet our needs in terms of implementing and experimenting with RTI-related modalities and methods. We have therefore developed an RTI measurement system coupled with control software. This setup gives us access to the hardware and to the software code in order to add, modify, and control the acquisition parameters and modalities. One of the developments consisted in implementing a new acquisition modality that couples High Dynamic Range (HDR) imaging with RTI (HD-RTI). This coupling corrects a measurement bias linked to the camera exposure time and to the sensor's limited dynamic range. HD-RTI measures the full dynamic range of the luminance response of the inspected surfaces. With the HD-RTI stereo-photometric data, we can virtually reconstruct the scene by simulating an arbitrary exposure time, and also better characterize and thus discriminate surface anomalies. RTI generates large amounts of data, which become more complex depending on the acquisition modalities used, such as HD-RTI. We propose a methodology to characterize surface appearance from RTI measurements, based on descriptors of the geometry and of the photometric behavior of surfaces. The variety of descriptors allows a fine characterization of different surface states. From the descriptors extracted from RTI acquisitions, we propose a method to estimate the multi-scale and multi-level visual saliency at each pixel, thereby making it possible to discriminate surface anomalies. A methodology for segmenting RTI data using saliency and for determining the most relevant descriptors according to a global criterion is then applied to an application case. Next, distance computation is extended to RTI acquisitions in order to compare surface states; the distance correlates with the degree of difference between the characteristics of the surface states. Finally, a distance is also computed between the appearance reconstruction models.
Article
Full-text available
Reflectance Transformation Imaging (RTI) is a technique for estimating surface local angular reflectance from a set of stereo-photometric images captured with variable lighting directions. The digitization of this information fully fits into the industry 4.0 approach and makes it possible to characterize the visual properties of a surface. The proposed method, namely HD-RTI, is based on the coupling of RTI and HDR imaging techniques. This coupling is carried out adaptively according to the response at each angle of illumination. The proposed method is applied to five industrial samples which have high local variations of reflectivity because of their heterogeneity of geometric texture and/or material. Results show that coupling HDR and RTI improves the relighting quality compared to RTI, and makes the proposed approach particularly relevant for glossy and heterogeneous surfaces. Moreover, HD-RTI enhances significantly the characterization of the local angular reflectance, which leads to more discriminating visual saliency maps, and more generally to an increase in robustness for visual quality assessment tasks.
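The HDR side of the coupling can be illustrated with a generic exposure-merging sketch for a single light direction; the hat weighting and the assumption of a linear camera response are common simplifications, not necessarily the authors' adaptive scheme.

```python
import numpy as np

def merge_exposures(images, exposure_times):
    """Merge an exposure bracket into a relative radiance map (one light direction).

    images         : list of (H, W) arrays with values in [0, 1] (linear response assumed)
    exposure_times : list of exposure times in seconds
    """
    num = np.zeros_like(images[0], dtype=np.float64)
    den = np.zeros_like(images[0], dtype=np.float64)
    for img, t in zip(images, exposure_times):
        w = 1.0 - np.abs(2.0 * img - 1.0)      # hat weights: down-weight under/over-exposed pixels
        num += w * img / t                      # radiance estimate contributed by this exposure
        den += w
    return num / np.maximum(den, 1e-12)
```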
Conference Paper
Full-text available
We present MLIC-Synthetizer, a Blender plugin specifically designed for the generation of synthetic Multi-Light Image Collections using physically-based rendering. This tool makes it easy to generate large amounts of test data that can be useful for the evaluation of Photometric Stereo algorithms, the validation of Reflectance Transformation Imaging calibration and processing methods, relighting methods, and more. Multi-pass rendering allows the generation of images with associated shadow and specularity ground-truth maps, ground-truth normals, and material segmentation masks. Furthermore, loops on material parameters allow the automatic generation of datasets with pre-defined material parameter ranges that can be used to train robust learning-based algorithms for 3D reconstruction, relighting, and material segmentation. CCS Concepts • Computing methodologies → Computer graphics; • Software and its engineering → Software notations and tools; Multi-light image collections (MLICs) are an effective means to gather detailed information on the shape and appearance of objects. For this reason, many visualization and analysis methods are built upon this kind of data, i.e. multiple images of the surface of interest captured from a fixed point of view, changing the illumination conditions (typically light direction) at each shot [PDC * 19]. In particular, Photometric Stereo approaches [AG15] and Reflectance Transformation Imaging/relightable images [MGW01, GCD * 17] are quite popular, with many applications, especially in the Cultural Heritage domain [Mac15]. MLICs have also been used for material segmentation tasks, e.g. in [WGSD09]. The development and validation of algorithms for image relighting (e.g. generation of images of the surface with arbitrary illumination), normal reconstruction and 3D shape estimation, and material segmentation is not easy to perform, as the acquisition methods used in practice have relevant calibration issues. Furthermore, learning-based approaches for these tasks are emerging [XSHR18, RDL * 15], requiring lots of annotated images with known lights, shape and/or materials. For these reasons, easy generation of custom sets of realistic data with known parameters would be extremely useful in order to create datasets for the evaluation of different kinds of tools and the training of different kinds of learning-based algorithms.
Article
Full-text available
Multi‐Light Image Collections (MLICs), i.e., stacks of photos of a scene acquired with a fixed viewpoint and a varying surface illumination, provide large amounts of visual and geometric information. In this survey, we provide an up‐to‐date integrative view of MLICs as a means to gain insight into objects through the analysis and visualization of the acquired data. After a general overview of MLIC capturing and storage, we focus on the main approaches to produce representations usable for visualization and analysis. In this context, we first discuss methods for direct exploration of the raw data. We then summarize approaches that strive to emphasize shape and material details by fusing all acquisitions into a single enhanced image. Subsequently, we focus on approaches that produce relightable images through intermediate representations. This can be done either by fitting various analytic forms of the light transform function, or by locally estimating the parameters of physically plausible models of shape and reflectance and using them for visualization and analysis. We finally review techniques that improve object understanding by using illustrative approaches to enhance relightable models, or by extracting features and derived maps. We also review how these methods are applied in several main application domains, and which tools are available to perform MLIC visualization and analysis. We finally point out relevant research issues, analyze research trends, and offer guidelines for practical applications.
Conference Paper
Full-text available
Reflectance Transformation Imaging (RTI) is widely used to produce relightable models from multi-light image collections. These models are used for a variety of tasks in the Cultural Heritage field. In this work, we carry out an objective and subjective evaluation of RTI data visualization. We start from the acquisition of a series of objects with different geometry and appearance characteristics using a common dome-based configuration. We then transform the acquired data into relightable representations using different approaches: PTM, HSH, and RBF. We then perform an objective error estimation by comparing ground truth images with relighted ones in a leave-one-out framework using PSNR and SSIM error metrics. Moreover, we carry out a subjective investigation through perceptual experiments involving end users with a variety of backgrounds. Objective and subjective tests are shown to behave consistently, and significant differences are found between the various methods. While the proposed analysis has been performed on three common and state-of-the-art RTI visualization methods, our approach is general enough to be extended and applied in the future to new developed multi-light processing pipelines and rendering solutions, to assess their numerical precision and accuracy, and their perceptual visual quality.
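The objective part of such an evaluation can be sketched as a leave-one-out loop using scikit-image's metrics; relight_fn is a placeholder for any fitted relighting model (PTM, HSH, RBF, ...), not an interface from the paper.

```python
import numpy as np
from skimage.metrics import peak_signal_noise_ratio, structural_similarity

def leave_one_out_scores(stack, light_dirs, relight_fn):
    """Mean PSNR/SSIM of a relighting method in a leave-one-out framework.

    stack      : (N, H, W) acquired images
    light_dirs : (N, 2) light directions
    relight_fn : callable(stack, light_dirs, new_dir) -> (H, W) predicted image
    """
    psnrs, ssims = [], []
    for i in range(len(stack)):
        keep = np.arange(len(stack)) != i
        pred = relight_fn(stack[keep], light_dirs[keep], light_dirs[i])
        gt = stack[i]
        rng = gt.max() - gt.min()
        psnrs.append(peak_signal_noise_ratio(gt, pred, data_range=rng))
        ssims.append(structural_similarity(gt, pred, data_range=rng))
    return float(np.mean(psnrs)), float(np.mean(ssims))
```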
Conference Paper
Full-text available
We present a practical acquisition and processing pipeline to characterize the surface structure of cultural heritage objects. Using a free-form Reflectance Transformation Imaging (RTI) approach, we acquire multiple digital photographs of the studied object shot from a stationary camera. In each photograph, a light is freely positioned around the object in order to cover a wide variety of illumination directions. Multiple reflective spheres and white Lambertian surfaces are added to the scene to automatically recover light positions and to compensate for non-uniform illumination. An estimation of geometry and reflectance parameters (e.g., albedo, normals, polynomial texture maps coefficients) is then performed to locally characterize surface properties. The resulting object description is stable and representative enough of surface features to reliably provide a characterization of measured surfaces. We validate our approach by comparing RTI-acquired data with data acquired with a high-resolution microprofilometer.
Conference Paper
Full-text available
Controlling surface appearance has become essential in the supplier/customer relationship. In this context, many industries have implemented new methods to improve sensory inspection, particularly in terms of variability. A trend is to develop both hardware and methods for moving towards the automation of appearance inspection and analysis. While devices inspired by dimensional control solutions generally allow identifying defects that fall far outside the expected product quality, they do not allow finely quantifying appearance anomalies and deciding on their acceptance. To address this issue, new methods devoted to appearance modelling and rendering have been implemented, such as the Reflectance Transformation Imaging (RTI) technique. By varying the illumination positions, the RTI technique aims at enriching the classical information conveyed by images. Thus, each pixel is described by a set of values rather than by a single value as in classical imaging, each value corresponding to a specific illumination position. This set of values can be interpolated or approximated by a continuous model (function), associated with the reflectance of the pixel, generally based on a second-order polynomial (namely, the Polynomial Texture Mapping technique). This paper presents a new approach to evaluate this information from RTI acquisitions. A modal projection based on dynamics (Discrete Modal Decomposition) is used to estimate reflectance surfaces at each measurement point. After presenting the acquisition device, an application on an industrial surface is proposed in order to validate the approach and compare it to the more classical polynomial transformation. Results show that the proposed projection basis not only provides a closer assessment of the reflectance surface (modelling) but also yields a more realistic rendering.
Article
Full-text available
Polynomial texture mapping (PTM) uses simple polynomial regression to interpolate and re-light image sets taken from a fixed camera but under different illumination directions. PTM is an extension of the classical photometric stereo (PST), replacing the simple Lambertian model employed by the latter with a polynomial one. The advantage and hence wide use of PTM is that it provides some effectiveness in interpolating appearance including more complex phenomena such as interreflections, specularities and shadowing. In addition, PTM provides estimates of surface properties, i.e., chromaticity, albedo and surface normals. The most accurate model to date utilizes multivariate Least Median of Squares (LMS) robust regression to generate a basic matte model, followed by radial basis function (RBF) interpolation to give accurate interpolants of appearance. However, robust multivariate modelling is slow. Here we show that the robust regression can find acceptably accurate inlier sets using a much less burdensome 1D LMS robust regression (or 'mode-finder'). We also show that one can produce good quality appearance interpolants, plus accurate surface properties using PTM before the additional RBF stage, provided one increases the dimensionality beyond 6D and still uses robust regression. Moreover, we model luminance and chromaticity separately, with dimensions 16 and 9 respectively. It is this separation of colour channels that allows us to maintain a relatively low dimensionality for the modelling. Another observation we show here is that in contrast to current thinking, using the original idea of polynomial terms in the lighting direction outperforms the use of hemispherical harmonics (HSH) for matte appearance modelling. For the RBF stage, we use Tikhonov regularization, which makes a substantial difference in performance. The radial functions used here are Gaussians; however, to date the Gaussian dispersion width and the value of the Tikhonov parameter have been fixed. Here we show that one can extend a theorem from graphics that generates a very fast error measure for an otherwise difficult leave-one-out error analysis. Using our extension of the theorem, we can optimize on both the Gaussian width and the Tikhonov parameter.
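As a generic illustration of the kind of fast leave-one-out error measure referred to above, the sketch below uses Rippa-style closed-form residuals e_k = c_k / (A^-1)_kk for a Tikhonov-regularized Gaussian RBF system; this is a simplified stand-in under those assumptions, not the paper's extension of the theorem.

```python
import numpy as np

def loo_residuals_gaussian_rbf(light_dirs, values, sigma, lam):
    """Closed-form leave-one-out residuals for regularized Gaussian RBF fitting.

    Solves (Phi + lam*I) c = values and returns e_k = c_k / (A^-1)_kk,
    which can drive a grid search over the Gaussian width sigma and the
    Tikhonov parameter lam without refitting N separate models.
    """
    d2 = np.sum((light_dirs[:, None, :] - light_dirs[None, :, :]) ** 2, axis=-1)
    A = np.exp(-d2 / (2.0 * sigma ** 2)) + lam * np.eye(len(values))
    A_inv = np.linalg.inv(A)
    c = A_inv @ values
    return c / np.diag(A_inv)

# Illustrative hyperparameter search over assumed candidate values:
# best = min(((s, l) for s in (0.2, 0.35, 0.5) for l in (1e-4, 1e-2)),
#            key=lambda p: np.mean(loo_residuals_gaussian_rbf(L, y, *p) ** 2))
```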
Conference Paper
Full-text available
This paper presents a photometric stereo method that is purely pixelwise and handles general isotropic surfaces in a stable manner. Following the recently proposed sum-of-lobes representation of the isotropic reflectance function, we constructed a constrained bivariate regression problem where the regression function is approximated by smooth, bivariate Bernstein polynomials. The unknown normal vector was separated from the unknown reflectance function by considering the inverse representation of the image formation process, and then we could accurately compute the unknown surface normals by solving a simple and efficient quadratic programming problem. Extensive evaluations were performed, showing state-of-the-art performance on both synthetic and real-world images.
Conference Paper
Full-text available
In this paper we propose a simple method to extract edges from Polynomial Texture Maps (PTM) or other kinds of Reflection Transformation Image (RTI) files. It is based on the idea of following 2D lines where the variation of the corresponding 3D normals computed from the PTM coefficients is maximal. Normals are estimated using a photometric stereo approach, derivatives along the image axis directions are computed in a multiscale framework providing normal discontinuity and orientation maps, and lines are finally extracted using non-maxima suppression and hysteresis thresholds as in Canny's algorithm. In this way it is possible to automatically discover potential structures of interest (inscriptions, small reliefs) on Cultural Heritage artifacts without the necessity of interactively recreating images using different light directions. Experimental results obtained on test data and on new PTMs acquired in an archaeological site in the Holy Land with a simple low-end camera show that the method provides potentially useful results.
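The core of the pipeline (normal-variation magnitude followed by hysteresis thresholding) can be sketched as follows; the single-scale gradient and the omission of non-maxima suppression are simplifications of the multiscale, Canny-style scheme described above, and the thresholds are arbitrary.

```python
import numpy as np
from skimage.filters import apply_hysteresis_threshold

def normal_discontinuity_edges(normals, low=0.1, high=0.3):
    """Edge mask from a per-pixel normal field (e.g., photometric-stereo normals).

    normals : (H, W, 3) unit normal map
    Returns a boolean mask where the variation of the normals is large.
    """
    # Finite differences of each normal component along the image axes
    gy, gx = np.gradient(normals, axis=(0, 1))
    magnitude = np.sqrt((gx ** 2 + gy ** 2).sum(axis=-1))     # normal variation strength
    # Keep strong responses plus weaker ones connected to them (hysteresis)
    return apply_hysteresis_threshold(magnitude, low, high)
```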
Conference Paper
Full-text available
The presentation of CH artefacts is technically demanding because it has to meet a variety of requirements: a plethora of file formats, compatibility with numerous application scenarios from powerwall to web browser, sustainability and long-term availability, extensibility with respect to digital model representations, and, last but not least, good usability. Instead of a monolithic application, we propose a viewer architecture that builds upon a module concept and a scripting language. This permits designing, with reasonable effort, non-trivial interaction components for the exploration and inspection of individual models as well as of complex 3D scenes. Furthermore, some specific CH models are discussed in more detail.
Conference Paper
Full-text available
We offer two new methods of documenting and communicating cultural heritage information using Reflection Transformation Imaging (RTI). One imaging method is able to acquire Polynomial Texture Maps (PTMs) of 3D rock art possessing a large range of sizes, shapes, and environmental contexts. Unlike existing PTM capture methods requiring known light source positions, we rely on the user to position a handheld light source, and recover the lighting direction from the specular highlights produced on a black sphere included in the field of view captured by the camera. The acquisition method is simple, fast, very low cost, and easy to learn. A complementary method of integrating digital RTI representations of subjects from multiple viewpoints is also presented. It permits RTI examination "in the round" in a unified, interactive, image-based representation. Collaborative tests between Cultural Heritage Imaging, Hewlett- Packard Labs, and the UNESCO Prehistoric Rock-Art Sites in the Côa Valley, a World Heritage Site in Portugal, suggest this approach will be very beneficial when applied to paleolithic petroglyphs of various sizes, both in the field and in the laboratory. These benefits over current standards of best practice can be generalized to a broad range of cultural heritage material.
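The highlight-based light recovery relies on simple mirror-reflection geometry; the sketch below assumes an orthographic camera looking along -z and a sphere with known image-space center and radius (names are illustrative, not from the cited tools).

```python
import numpy as np

def light_from_highlight(cx, cy, radius, hx, hy):
    """Recover a light direction from a specular highlight on a reflective sphere.

    (cx, cy), radius : sphere center and radius in image coordinates
    (hx, hy)         : highlight position in image coordinates
    Assumes an orthographic view, i.e. the view vector is (0, 0, 1) everywhere.
    """
    # Surface normal of the sphere at the highlight
    nx = (hx - cx) / radius
    ny = (hy - cy) / radius
    nz = np.sqrt(max(0.0, 1.0 - nx ** 2 - ny ** 2))
    n = np.array([nx, ny, nz])
    v = np.array([0.0, 0.0, 1.0])                  # direction toward the camera
    l = 2.0 * np.dot(n, v) * n - v                 # mirror reflection of v about n
    return l / np.linalg.norm(l)
```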
Conference Paper
Full-text available
When the shape of an object is known, its appearance is determined by the spatially-varying reflectance function defined on its surface. Image-based rendering methods that use geometry seek to estimate this function from image data. Most existing methods recover a unique angular reflectance function (e.g., BRDF) at each surface point and provide reflectance estimates with high spatial resolution. Their angular accuracy is limited by the number of available images, and as a result, most of these methods focus on capturing parametric or low-frequency angular reflectance effects, or allowing only one of lighting or viewpoint variation. We present an alternative approach that enables an increase in the angular accuracy of a spatially-varying reflectance function in exchange for a decrease in spatial resolution. By framing the problem as scattered-data interpolation in a mixed spatial and angular domain, reflectance information is shared across the surface, exploiting the high spatial resolution that images provide to fill the holes between sparsely observed view and lighting directions. Since the BRDF typically varies slowly from point to point over much of an object's surface, this method enables image-based rendering from a sparse set of images without assuming a parametric reflectance model. In fact, the method can even be applied in the limiting case of a single input image.
Conference Paper
Full-text available
In this paper we present a new form of texture mapping that produces increased photorealism. Coefficients of a biquadratic polynomial are stored per texel, and used to reconstruct the surface color under varying lighting conditions. Like bump mapping, this allows the perception of surface deformations. However, our method is image based, and photographs of a surface under varying lighting conditions can be used to construct these maps. Unlike bump maps, these Polynomial Texture Maps (PTMs) also capture variations due to surface self-shadowing and interreflections, which enhance realism. Surface colors can be efficiently reconstructed from polynomial coefficients and light directions with minimal fixed-point hardware. We have also found PTMs useful for producing a number of other effects such as anisotropic and Fresnel shading models and variable depth of focus. Lastly, we present several reflectance function transformations that act as contrast enhancement operators. We have found these particularly useful in the study of ancient archeological clay and stone writings.
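The per-texel fit reduces to ordinary least squares over the six biquadratic terms; the following is a compact sketch using the basis (lu^2, lv^2, lu*lv, lu, lv, 1), with function and array names chosen for illustration.

```python
import numpy as np

def fit_ptm(stack, light_dirs):
    """Fit per-pixel biquadratic PTM coefficients by least squares.

    stack      : (N, H, W) luminance images
    light_dirs : (N, 2) projected light directions (lu, lv)
    Returns coefficients of shape (6, H, W) for the basis
    [lu^2, lv^2, lu*lv, lu, lv, 1].
    """
    lu, lv = light_dirs[:, 0], light_dirs[:, 1]
    B = np.stack([lu ** 2, lv ** 2, lu * lv, lu, lv, np.ones_like(lu)], axis=1)   # (N, 6)
    N, H, W = stack.shape
    coeffs, *_ = np.linalg.lstsq(B, stack.reshape(N, -1), rcond=None)             # (6, H*W)
    return coeffs.reshape(6, H, W)

def relight_ptm(coeffs, lu, lv):
    """Evaluate the fitted polynomial at a new light direction (lu, lv)."""
    b = np.array([lu ** 2, lv ** 2, lu * lv, lu, lv, 1.0])
    return np.tensordot(b, coeffs, axes=1)    # (H, W) reconstructed luminance
```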
Article
Full-text available
Virtual reconstruction and representation of historical environments and objects have been of research interest for nearly two decades. Physically based and historically accurate illumination allows archaeologists and historians to authentically visualise a past environment to deduce new knowledge. This report reviews the current state of illuminating cultural heritage sites and objects using computer graphics for scientific, preservation and research purposes. We present the most noteworthy and up-to-date examples of reconstructions employing appropriate illumination models in object and image space, and in the visual perception domain. Finally, we also discuss the difficulties in rendering, documentation, validation and identify probable research challenges for the future. The report is aimed for researchers new to cultural heritage reconstruction who wish to learn about methods to illuminate the past.
Article
Full-text available
We propose a set of dynamic shading enhancement techniques for improving the perception of details, features, and overall shape characteristics from images created with Reflectance Transformation Imaging (RTI) techniques. Selection of these perceptual enhancement filters can significantly improve the user's ability to interactively inspect the content of 2D RTI media by zooming, panning, and changing the illumination direction. In particular, we present two groups of strategies for RTI image enhancement based on two main ideas: exploiting the unsharp masking methodology in the RTI-specific context, and locally optimizing the incident light direction for improved RTI image sharpness and illumination of surface features. The Results section presents a number of datasets and compares them with existing techniques.
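As a point of reference for the first group of strategies, plain unsharp masking on a relighted luminance image looks as follows; the RTI-specific variants in the article operate on other quantities, which this generic sketch does not reproduce.

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def unsharp_mask(image, sigma=2.0, amount=1.5):
    """Classic unsharp masking: boost the detail layer (image minus its blur).

    image  : (H, W) relighted luminance image with values in [0, 1]
    sigma  : blur scale selecting which details are emphasized
    amount : enhancement strength
    """
    blurred = gaussian_filter(image, sigma)
    return np.clip(image + amount * (image - blurred), 0.0, 1.0)
```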
Article
Full-text available
We present a non-photorealistic rendering approach to capture and convey shape features of real-world scenes. We use a camera with multiple flashes that are strategically positioned to cast shadows along depth discontinuities in the scene. The projective-geometric relationship of the camera-flash setup is then exploited to detect depth discontinuities and distinguish them from intensity edges due to material discontinuities. We introduce depiction methods that utilize the detected edge features to generate stylized static and animated images. We can highlight the detected features, suppress unnecessary details or combine features from multiple images. The resulting images more clearly convey the 3D structure of the imaged scenes. We take a very different approach to capturing geometric features of a scene than traditional approaches that require reconstructing a 3D model. This results in a method that is both surprisingly simple and computationally efficient. The entire hardware/software setup can conceivably be packaged into a self-contained device no larger than existing digital cameras.
Book
Computer graphics systems are capable of generating stunningly realistic images of objects that have never physically existed. In order for computers to create these accurately detailed images, digital models of appearance must include robust data to give viewers a credible visual impression of the depicted materials. In particular, digital models demonstrating the nuances of how materials interact with light are essential to this capability. This is the first comprehensive work on the digital modeling of material appearance: it explains how models from physics and engineering are combined with keen observation skills for use in computer graphics rendering. Written by the foremost experts in appearance modeling and rendering, this book is for practitioners who want a general framework for understanding material modeling tools, and also for researchers pursuing the development of new modeling techniques. The text is not a "how to" guide for a particular software system. Instead, it provides a thorough discussion of foundations and detailed coverage of key advances. Practitioners and researchers in applications such as architecture, theater, product development, cultural heritage documentation, visual simulation and training, as well as traditional digital application areas such as feature film, television, and computer games, will benefit from this much needed resource. ABOUT THE AUTHORS Julie Dorsey and Holly Rushmeier are professors in the Computer Science Department at Yale University and co-directors of the Yale Computer Graphics Group. François Sillion is a senior researcher with INRIA (Institut National de Recherche en Informatique et Automatique), and director of its Grenoble Rhône-Alpes research center. *Provides sound technical advice, tips, and techniques to create the most realistic surface appearances of graphic designs *Assembles a great variety of graphics rendering techniques into a one-stop resource *Readers will walk away with a superior knowledge base about creating more convincing and technically accurate and detailed designs.
Conference Paper
This research investigation used digital photography in a hemispherical dome, enabling a set of 64 photographic images of an object to be captured in perfect pixel register, with each image illuminated from a different direction. This representation turns out to be much richer than a single 2D image, because it contains information at each point about both the 3D shape of the surface (gradient and local curvature) and the directionality of reflectance (gloss and specularity). Thereby it enables not only interactive visualisation through viewer software, giving the illusion of 3D, but also the reconstruction of an actual 3D surface and highly realistic rendering of a wide range of materials. The following seven outcomes of the research are claimed as novel and therefore as representing contributions to knowledge in the field: (1) a method for determining the geometry of an illumination dome; (2) an adaptive method for finding surface normals by bounded regression; (3) generating 3D surfaces from photometric stereo; (4) the relationship between surface normals and specular angles; (5) modelling surface specularity by a modified Lorentzian function; (6) determining the optimal wavelengths of colour laser scanners; (7) characterising colour devices by synthetic reflectance spectra.
Conference Paper
We present an automated light calibration pipeline for free-form acquisition of shape and reflectance of objects using common off-the-shelf illuminators, such as LED lights, that can be placed arbitrarily close to the objects. We acquire multiple digital photographs of the studied object shot from a stationary camera. In each photograph, a light is freely positioned around the object in order to cover a wide variety of illumination directions. While common free-form acquisition approaches are based on the simplifying assumptions that the light sources are either sufficiently far from the object that all incoming light can be modeled using parallel rays, or that lights are local points emitting uniformly in space, we use the more realistic model of a scene lit by a moving local spot light with exponential fall-off depending on the cosine of the angle between the spot light optical axis and the illumination direction, raised to the power of the spot exponent. We recover all spot light parameters using a multipass numerical method. First, light positions are determined using standard methods used in photometric stereo approaches. Then, we exploit measures taken on a Lambertian reference planar object to recover the spot light exponent and the per-image spot light optical axis; we minimize the difference between the observed reflectance and the reflectance synthesized by using the near-field Lambertian equation. The optimization is performed in two passes, first generating a starting solution and then refining it using a Levenberg-Marquardt iterative minimizer. We demonstrate the effectiveness of the method based on an error analysis performed on analytical datasets, as well as on real-world experiments.
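A simplified sketch of the spot-light model and its fitting, assuming known light positions and a Lambertian reference plane; scipy's least_squares stands in for the Levenberg-Marquardt refinement, and the two-pass initialization described above is omitted.

```python
import numpy as np
from scipy.optimize import least_squares

def spot_model(params, points, normals, light_pos):
    """Near-field Lambertian spot-light model.

    params    : [albedo, ax, ay, az, exponent]; (ax, ay, az) is the spot axis
    points    : (M, 3) 3D points on the Lambertian reference plane
    normals   : (M, 3) unit normals at those points
    light_pos : (3,) light position recovered beforehand (e.g., from reflective spheres)
    """
    albedo, ax, ay, az, k = params
    axis = np.array([ax, ay, az])
    axis = axis / np.linalg.norm(axis)
    to_light = light_pos - points
    dist = np.linalg.norm(to_light, axis=1)
    l = to_light / dist[:, None]                       # unit light direction per point
    n_dot_l = np.clip((normals * l).sum(axis=1), 0.0, None)
    spot = np.clip((-l) @ axis, 0.0, None) ** k        # cosine fall-off raised to the spot exponent
    return albedo * n_dot_l * spot / dist ** 2

def fit_spot_parameters(observed, points, normals, light_pos):
    """Fit albedo, spot axis and exponent by minimizing the reflectance residual."""
    x0 = np.array([1.0, 0.0, 0.0, -1.0, 1.0])          # crude starting guess (assumption)
    res = least_squares(
        lambda p: spot_model(p, points, normals, light_pos) - observed, x0)
    return res.x
```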
Article
We present a method to detect edges in polynomial texture maps (PTMs) directly from their polynomial coefficients. The method treats the PTM as a mapping from the two-dimensional image domain to a higher-dimensional coefficient space. The direction and magnitude of the largest change at each texel are first obtained from the singular value decomposition of the mapping's Jacobian. Edges are then extracted using non-maxima suppression, and edge lines are finally traced out with hysteresis thresholding. Both geometric and texture discontinuities can be measured with this detector. The proposed algorithm performs well on a variety of real-world datasets and compares favorably with known methods, recovering more subtle edge details.
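The core Jacobian step can be sketched as follows: the six PTM coefficient maps are treated as a mapping from R^2 to R^6, the Jacobian is estimated with finite differences, and the largest singular value per texel gives the edge strength (equivalently, the dominant eigenvalue of the structure tensor J^T J). Non-maxima suppression and hysteresis thresholding would follow, as in a Canny-style pipeline; the function and variable names below are illustrative, not the authors' code.

```python
import numpy as np

def ptm_edge_strength(coeffs):
    """Per-texel edge magnitude and direction from PTM coefficients.

    coeffs: (H, W, 6) array of polynomial coefficients a0..a5.
    Returns (magnitude, angle): largest singular value of the per-texel
    Jacobian of the coefficient mapping, and the corresponding direction
    in the image plane.
    """
    # Finite-difference gradients of each coefficient channel.
    dy = np.stack([np.gradient(coeffs[..., c], axis=0) for c in range(coeffs.shape[-1])], -1)
    dx = np.stack([np.gradient(coeffs[..., c], axis=1) for c in range(coeffs.shape[-1])], -1)

    # Structure tensor J^T J (2x2 per texel); its top eigenvalue is sigma_max^2.
    jxx = np.sum(dx * dx, axis=-1)
    jyy = np.sum(dy * dy, axis=-1)
    jxy = np.sum(dx * dy, axis=-1)
    trace, det = jxx + jyy, jxx * jyy - jxy**2
    lam_max = 0.5 * (trace + np.sqrt(np.maximum(trace**2 - 4.0 * det, 0.0)))

    magnitude = np.sqrt(lam_max)
    angle = 0.5 * np.arctan2(2.0 * jxy, jxx - jyy)   # dominant change direction
    return magnitude, angle
```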
Article
Reconstructing the shape of an object from images is an important problem in computer vision that has led to a variety of solution strategies. This survey covers photometric stereo, i.e., techniques that exploit the intensity variations observed under changing illumination to recover the orientation of the surface. In the most basic setting, a diffuse surface is illuminated from at least three directions and captured with a static camera; under some conditions, this allows per-pixel surface normals to be recovered. Modern approaches generalize photometric stereo in various ways, e.g., by relaxing constraints on lighting, surface reflectance and camera placement, or by creating different types of local surface estimates. Starting with an introduction for readers unfamiliar with the subject, we discuss the foundations of this field of research. We then summarize important trends and developments that have emerged over the last three decades, focusing on approaches with the potential to be applied in a broad range of scenarios. This implies, e.g., simple capture setups, relaxed model assumptions, and increased robustness requirements. The goal of this review is to provide an overview of the diverse concepts and ideas on the way towards techniques that are more general than traditional photometric stereo.
Article
A novel technique called photometric stereo is introduced. The idea of photometric stereo is to vary the direction of incident illumination between successive images, while holding the viewing direction constant. It is shown that this provides sufficient information to determine surface orientation at each image point. Since the imaging geometry is not changed, the correspondence between image points is known a priori. The technique is photometric because it uses the radiance values recorded at a single image location, in successive views, rather than the relative positions of displaced features. Photometric stereo is used in computer-based image understanding. It can be applied in two ways. First, it is a general technique for determining surface orientation at each image point. Second, it is a technique for determining object points that have a particular surface orientation. These applications are illustrated using synthesized examples.
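In the classic Lambertian setting with at least three known distant lights, the observed intensities at a pixel satisfy I = L (rho n), so the albedo-scaled normal follows from a linear least-squares solve. A minimal sketch under those assumptions (shadows and specular highlights ignored; names are illustrative):

```python
import numpy as np

def photometric_stereo(images, lights):
    """Classic Lambertian photometric stereo.

    images: (K, H, W) intensities under K distant lights (K >= 3).
    lights: (K, 3) unit light directions.
    Returns per-pixel unit normals (H, W, 3) and albedo (H, W).
    """
    k, h, w = images.shape
    I = images.reshape(k, -1)                       # K x (H*W) intensity matrix
    G, *_ = np.linalg.lstsq(lights, I, rcond=None)  # solves lights @ G = I; G is 3 x (H*W)
    albedo = np.linalg.norm(G, axis=0)              # |rho n| = rho
    normals = G / np.maximum(albedo, 1e-8)          # unit normals
    return normals.T.reshape(h, w, 3), albedo.reshape(h, w)
```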
Article
The painting Il ritratto della figliastra (Portrait of the Stepdaughter) by Giovanni Fattori (1889, Gallery of Modern Art, Pitti Palace, Florence) was investigated using non-invasive fibre optics reflectance spectroscopy (FORS). The use of compact and transportable instrumentation made it possible to easily record spectra of the polychrome surface at the restorer’s atelier during the restoration work. The results of colour analysis before and after the cleaning procedure of the painting are reported and discussed, together with an attempt at pigment identification.
Article
Recent progress in the measurement of surface reflectance has created a demand for non-parametric appearance representations that are accurate, compact, and easy to use for rendering. Another crucial goal, which has so far received little attention, is editability: for practical use, we must be able to change both the directional and spatial behavior of surface reflectance (e.g., making one material shinier, another more anisotropic, and changing the spatial "texture maps" indicating where each material appears). We introduce an Inverse Shade Tree framework that provides a general approach to estimating the "leaves" of a user-specified shade tree from high-dimensional measured datasets of appearance. These leaves are sampled 1- and 2-dimensional functions that capture both the directional behavior of individual materials and their spatial mixing patterns. In order to compute these shade trees automatically, we map the problem to matrix factorization and introduce a flexible new algorithm that allows for constraints such as non-negativity, sparsity, and energy conservation. Although we cannot infer every type of shade tree, we demonstrate the ability to reduce multi-gigabyte measured datasets of the Spatially-Varying Bidirectional Reflectance Distribution Function (SVBRDF) into a compact representation that may be edited in real time.
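The factorization at the heart of such approaches can be illustrated with a plain non-negative matrix factorization: the measured appearance matrix (pixels x light/view samples) is split into non-negative spatial blend weights and per-material curves. The multiplicative-update sketch below is only a toy stand-in for the constrained factorization described in the paper, which additionally supports sparsity and energy-conservation constraints.

```python
import numpy as np

def nmf(A, rank, iters=200, eps=1e-9):
    """Non-negative factorization A ~ W @ H via Lee-Seung multiplicative updates.

    A: (pixels, samples) non-negative appearance measurements.
    W: (pixels, rank) spatial mixing weights; H: (rank, samples) material curves.
    """
    rng = np.random.default_rng(0)
    m, n = A.shape
    W = rng.random((m, rank)) + eps
    H = rng.random((rank, n)) + eps
    for _ in range(iters):
        H *= (W.T @ A) / (W.T @ W @ H + eps)   # update material curves
        W *= (A @ H.T) / (W @ H @ H.T + eps)   # update spatial weights
    return W, H
```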
Conference Paper
In order to produce visually appealing digital models of cultural heritage artefacts, a meticulous reconstruction of the 3D geometry alone is often not sufficient, as colour and reflectance information give essential clues about the object's material. Standard texturing methods can compensate for this only under strict limitations on material and lighting conditions, and the realistic reconstruction of complex yet frequently encountered materials such as fabric, leather, wood or metal is still a challenge. In this paper, we describe a novel system that acquires the 3D geometry of an object from its visual hull, recorded in multiple 2D images with a multi-camera array. At the same time, the material properties of the object are measured as Bidirectional Texture Functions (BTFs), which faithfully capture the mesostructure of the surface and reconstruct the look-and-feel of its material. The high rendering fidelity of the acquired BTF data with respect to reflectance and self-shadowing also alleviates the limited precision of the visual hull approach for 3D geometry acquisition.
Conference Paper
Current research trends demonstrate that, for a wide range of applications in cultural heritage, 3D shape acquisition alone is not sufficient. To generate a digital replica of a real-world object, the digitized geometric models have to be complemented with information about the optical properties of the object's surface. We therefore propose an integrated system for acquiring both the 3D shape and the reflectance properties needed to obtain a photo-realistic digital replica. The proposed method is suitable for digitizing objects exhibiting the complex reflectance behaviour, such as specularities and meso-scale interreflections, that is often encountered in the field of cultural heritage. We demonstrate the performance of our system on four challenging examples. By using Bidirectional Texture Functions, our structured-light-based approach achieves good geometric precision while preserving tiny details such as scratches and engravings.
Conference Paper
Recent progress in acquisition technology has increased the availability and quality of measured appearance data. Although representations based on dimensionality reduction provide the greatest fidelity to measured data, they require assembling a high-resolution and regularly sampled matrix from sparse and non-uniformly scattered input. Constructing and processing this immense matrix becomes a significant computational bottleneck. We describe a technique for performing basis decomposition directly from scattered measurements. Our approach is flexible in how the basis is represented and can accommodate any number of linear constraints on the factorization. Because its time- and space-complexity is proportional to the number of input measurements and the size of the output, we are able to decompose multi-gigabyte datasets faster and at lower error rates than currently available techniques. We evaluate our approach by representing measured spatially-varying reflectance within a reduced linear basis defined over radial basis functions and a database of measured BRDFs.
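As a concrete example of a reduced basis defined over radial basis functions, the sketch below fits per-pixel Gaussian-RBF weights to intensities sampled at scattered light directions and evaluates the interpolant at a novel direction; the Gaussian kernel, the bandwidth sigma and the Tikhonov regularisation are assumptions made for illustration, not the paper's exact formulation.

```python
import numpy as np

def fit_rbf_weights(light_dirs, intensities, sigma=0.3, reg=1e-6):
    """Fit Gaussian RBF weights to intensities sampled at N light directions.

    light_dirs: (N, 3) unit light directions, used as RBF centres.
    intensities: (N, P) per-pixel intensities for P pixels.
    Returns (N, P) weights such that Phi @ weights ~ intensities.
    """
    d2 = np.sum((light_dirs[:, None, :] - light_dirs[None, :, :])**2, axis=-1)
    phi = np.exp(-d2 / (2.0 * sigma**2))                     # N x N kernel matrix
    return np.linalg.solve(phi + reg * np.eye(len(light_dirs)), intensities)

def relight(new_dir, light_dirs, weights, sigma=0.3):
    """Evaluate the RBF interpolant at a novel light direction."""
    d2 = np.sum((light_dirs - new_dir)**2, axis=-1)
    return np.exp(-d2 / (2.0 * sigma**2)) @ weights          # (P,) relit pixel values
```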
[MMC * 08] MUDGE M., MALZBENDER T., CHALMERS A., SCOPIGNO R., DAVIS J., WANG O., GUNAWARDANE P., ASHLEY M., DOERR M., PROENCA A., ET AL.: Image-based empirical information acquisition, scientific reliability, and long-term digital preservation for the natural sciences and cultural heritage. In Eurographics (Tutorials) (2008).
[PPY * 16] PINTUS R., PAL K., YANG Y., WEYRICH T., GOBBETTI E., RUSHMEIER H.: A survey of geometric analysis in cultural heritage. In Computer Graphics Forum (2016), vol. 35, Wiley Online Library, pp. 4-31.
RUITERS R., SCHWARTZ C., KLEIN R.: Data driven surface reflectance from sparse and irregular samples. In Computer Graphics Forum (2012), vol. 31, Wiley Online Library, pp. 315-324.
SHI B., TAN P., MATSUSHITA Y., IKEUCHI K.: Bipolynomial modeling of low-frequency reflectances. IEEE Transactions on Pattern Analysis and Machine Intelligence 36, 6 (2014), 1078-1091.
WU H., DORSEY J., RUSHMEIER H.: A sparse parametric mixture model for BTF compression, editing and rendering. In Computer Graphics Forum (2011), vol. 30, Wiley Online Library, pp. 465-473.
[WVM * 05] WILLEMS G., VERBIEST F., MOREAU W., HAMEEUW H., VAN LERBERGHE K., VAN GOOL L.: Easy and cost-effective cuneiform digitizing. In The 6th International Symposium on Virtual Reality, Archaeology and Cultural Heritage (VAST 2005) (2005), Eurographics Assoc., pp. 73-80.
KOTOULA E., KYRANOUDI M.: Study of ancient Greek and Roman coins using reflectance transformation imaging. E-Conservation Magazine 25 (2013), 74-88.