Figure 3 - uploaded by Gabriel Taubin
5: Projector calibration sequence containing multiple views of a checkerboard projected on a white plane marked with four printed fiducials in the corners. As for camera calibration, the plane must be moved to various positions and orientations throughout the scene.  


Source publication
Article
Full-text available
Over the last decade, digital photography has entered the mainstream with inexpensive, miniaturized cameras for consumer use. Digital projection is poised to make a similar breakthrough, with a variety of vendors offering small, low-cost projectors. As a result, active imaging is a topic of renewed interest in the computer graphics community. In pa...

Citations

... In order to effectively demonstrate the prowess of our learning-based framework in comparison to classical matching algorithms, we carry out further experiments using Gray code methods with the same patterns for both. We implement both the fixed-threshold (w/o Inv.) and inverse-projection (w/ Inv.) algorithms for pixel recognition, following Lanman's tutorial [17]. Employing up to 9 projected patterns with an interval of 2.5 pixels, we achieve superior accuracy compared to Gray code methods with the same patterns (Fig. 7). ...
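The two decoding strategies this snippet contrasts can be sketched in a few lines. The following is a minimal illustration (NumPy assumed; all names are hypothetical), not the cited implementation: with inverse patterns, each bit comes from comparing a capture against its inverted counterpart; without them, a fixed intensity threshold is used.

```python
import numpy as np

def decode_gray(images, inv_images=None, threshold=128):
    """Decode a stack of captured Gray-code images into projector columns.

    images:     (N, H, W) captures of the N projected patterns.
    inv_images: optional captures of the inverted patterns; if given,
                bits come from direct comparison (w/ Inv.) instead of a
                fixed threshold (w/o Inv.).
    """
    images = np.asarray(images, dtype=np.int32)
    if inv_images is not None:
        bits = images > np.asarray(inv_images, dtype=np.int32)  # w/ Inv.
    else:
        bits = images > threshold                               # w/o Inv.
    # Gray-to-binary conversion: b[0] = g[0], b[i] = b[i-1] XOR g[i].
    binary = np.zeros_like(bits)
    binary[0] = bits[0]
    for i in range(1, len(bits)):
        binary[i] = np.logical_xor(binary[i - 1], bits[i])
    # Pack the binary bit-planes (MSB first) into a column index per pixel.
    weights = 2 ** np.arange(len(bits) - 1, -1, -1)
    return np.tensordot(weights, binary.astype(np.int64), axes=1)
```

The inverse-projection variant is more robust in practice because the per-pixel comparison cancels albedo and ambient-light variation that a single global threshold cannot.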
Preprint
Full-text available
We introduce a novel depth estimation technique for multi-frame structured light setups using neural implicit representations of 3D space. Our approach employs a neural signed distance field (SDF), trained through self-supervised differentiable rendering. Unlike passive vision, where joint estimation of radiance and geometry fields is necessary, we capitalize on known radiance fields from projected patterns in structured light systems. This enables isolated optimization of the geometry field, ensuring convergence and network efficacy with fixed device positioning. To enhance geometric fidelity, we incorporate an additional color loss based on object surfaces during training. Real-world experiments demonstrate our method's superiority in geometric performance for few-shot scenarios, while achieving comparable results with increased pattern availability.
... The quality of the reconstruction depends on the accuracy of the model used to simulate the camera and projector, usually leading to the use of expensive electronics and high-quality lenses in the hardware setup [Koch et al. 2021] to achieve high resolution and accuracy. The calibration of the scanner leads to a small set of parameters describing the geometry of the scanner and lenses for the camera and projector [Lanman and Taubin 2009a]. When many patterns are used, this approach is limited to objects moving at low speed due to the necessity of projecting and acquiring multiple coded patterns for each depth reconstruction. ...
... This is often done by triangulating matching samples from calibrated projector and camera pairs. We refer to [Lanman and Taubin 2009a]'s excellent tutorial for detailed coverage of the triangulation process and how to perform camera and projector calibration and to [Koch et al. 2021] for a detailed open-hardware and open-software implementation. ...
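As a minimal illustration of the triangulation step these excerpts refer to, one common formulation intersects a camera ray with the decoded projector light plane. This is a hedged sketch under that assumption (names hypothetical), not the tutorial's code:

```python
import numpy as np

def triangulate_ray_plane(ray_origin, ray_dir, plane_n, plane_d):
    """Intersect a camera ray X(t) = o + t*d with a light plane
    n . X + plane_d = 0, returning the 3D surface point."""
    # Substitute the ray into the plane equation and solve for t.
    t = -(plane_n @ ray_origin + plane_d) / (plane_n @ ray_dir)
    return ray_origin + t * ray_dir
```

In a calibrated system, `ray_origin`/`ray_dir` come from back-projecting a camera pixel through the intrinsics, and the plane comes from the decoded projector column plus the projector calibration.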
... We refer to [Lanman and Taubin 2009a] for a complete description of traditional calibration methods for structured light scanning, and here we focus on our method. Instead of scanning an object with known geometry to find the parameters of a projector and camera model, we are interested in finding the normalized color for all points within the scanning volume. ...
Preprint
We introduce a novel calibration and reconstruction procedure for structured light scanning that foregoes explicit point triangulation in favor of a data-driven lookup procedure. The key idea is to sweep a calibration checkerboard over the entire scanning volume with a linear stage and acquire a dense stack of images to build a per-pixel lookup table from colors to depths. Imperfections in the setup, lens distortion, and sensor defects are baked into the calibration data, leading to a more reliable and accurate reconstruction. Existing structured light scanners can be reused without modifications while enjoying the superior precision and resilience that our calibration and reconstruction algorithms offer. Our algorithm shines when paired with a custom-designed analog projector, which enables 1-megapixel high-speed 3D scanning at up to 500 fps. We describe our algorithm and hardware prototype for high-speed 3D scanning and compare them with commercial and open-source structured light scanning methods.
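The per-pixel lookup the abstract describes can be illustrated with a toy sketch: given a stack of calibration captures at known depths, reconstruction reduces to a nearest-value search per pixel. This is a simplified illustration under assumed data layouts, not the authors' implementation:

```python
import numpy as np

def depth_from_lut(image, lut, depths):
    """Per-pixel depth lookup.

    lut:    (D, H, W) stack of calibration captures at the D known depths.
    image:  (H, W) observation; each pixel is matched to the calibration
            value closest to it and assigned that slice's depth.
    """
    diff = np.abs(lut - image[None])   # (D, H, W) distances in value space
    idx = np.argmin(diff, axis=0)      # (H, W) index of the best depth slice
    return np.asarray(depths)[idx]
```

Because each pixel has its own table, lens distortion and sensor defects are absorbed into the calibration data rather than modeled explicitly, which is the key point of the approach.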
... An exposure time adjustment rule is developed to pinpoint the projector pixels that are oversaturated at high exposure times, ensuring that reliable results from low exposure times can be used. Finally, the results of the decoding inheritance obtained at the maximum exposure time are reconstructed using a triangulation method [20]. ...
... The corresponding AIBP is regenerated for projection, and decoding inheritance is performed to obtain D_n^inherit. The process is repeated iteratively in this way until t_n = t_max, and the final decoding inheritance result D_n^inherit is used to complete the three-dimensional reconstruction process [20]. ...
... In implementing the adaptive blooming suppression system for the metal faucet, the exposure parameters were set as follows: t_min = 0.1 s, t_max = 4.1 s, and t_step = 2 s. Moreover, in the blooming detector, the size of the averaging filter was set to 7 × 7 and T_Avg was set to 250 (Figure 15 of [20]). Figures 5(a-1)-(a-3) show that the fringes around the faucet become clearer as the exposure time increases. ...
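The blooming detector described in these excerpts (a 7 × 7 averaging filter followed by a threshold on the local mean) can be sketched as follows. This is an assumed reading of the cited method, not its actual code; the box filter is implemented with an integral image:

```python
import numpy as np

def detect_blooming(image, ksize=7, t_avg=250):
    """Flag pixels whose ksize x ksize local average exceeds t_avg."""
    pad = ksize // 2
    padded = np.pad(image.astype(float), pad, mode="edge")
    # Box filter via an integral image (summed-area table).
    ii = np.cumsum(np.cumsum(np.pad(padded, ((1, 0), (1, 0))), axis=0), axis=1)
    s = (ii[ksize:, ksize:] - ii[:-ksize, ksize:]
         - ii[ksize:, :-ksize] + ii[:-ksize, :-ksize])
    return (s / ksize ** 2) > t_avg
```

Pixels flagged at a given exposure would then be excluded from decoding at longer exposures, so only reliably exposed regions contribute to the fused result.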
Article
Full-text available
Structured light systems often suffer interference of the fringes by blooming when scanning metal objects. Unfortunately, this problem cannot be reliably solved using conventional methods such as the high dynamic range (HDR) method or adaptive projection technique. Therefore, this study proposes a method to adaptively suppress the oversaturated areas that cause blooming as the exposure time increases and then fuse the multi-exposure time decoding results using a decoding inheritance method. The experimental results demonstrate that the proposed method provides a more effective suppression of blooming interference than existing methods.
... Therefore, small gaps will appear when reconstructing the pot; as shown in Fig. 1, there are thirteen fragments, all of which have been saved as (p1.obj, p2.obj, ..., p13.obj) on the website. These fragments have been used by the authors [13,14]. ...
... This work reconstructs broken pieces of a ceramic object by exploiting the fact that two fragment edges match when they have identical geometry [14]. Accordingly, the homogeneity of the variances of the angle measurements for the two fragments was tested using an F test, which found no significant difference at the 0.05 level via the SPSS application. ...
... passive stereo, structured light, and time-of-flight cameras. Static 3D scanning [Lanman and Taubin 2009; Yamaguchi et al. 2014] achieves high-quality reconstruction at millimetric precision, with detailed wrinkles on the cloth, at high computational speed, which has brought it success in the fashion and movie industries; however, it is restricted to small-scale capture. Moreover, missing parts occur frequently in the acquisition due to partial observations and occlusion, so heavy manual postprocessing by artists is required. ...
Thesis
With the development of 3D vision techniques, in particular neural-network-based methods, the 3D neural avatar representation has gained growing interest both in academia and in industry. Such digital representations have been applied to movie, video game, fashion and virtual reality environments to enrich user experiences. In terms of 3D representation and reconstruction, classic methods rely on heavy setups and costly computation, while neural networks open up the possibility of handling this problem from partial observations thanks to their tolerance to insufficient information. In particular, neural networks achieve promising results for reconstruction tasks with a high-speed inference process. However, at the time of the thesis, few neural network designs used spatial constraints on the 3D human shape and the temporal coherence of motion for dense reconstruction and completion. This thesis proposes to build 3D and 4D models for dense human shape estimation/reconstruction from sparse or incomplete point clouds and investigates how the proposed network and training strategy contribute. To evaluate the effectiveness of the proposed methods, we collect data from synthetic and real datasets, with both dressed and undressed humans. We first examine a static intermediate task, in which we deform the key points of a reference template to fit input sparse point clouds and densify the deformed points with our proposed Gaussian Process layer. Our Gaussian Process layer enforces the smoothness of the 3D geometry, and adversarial training can further improve robustness across the datasets, which allows us to reconstruct 3D human shapes from sparse unstructured point clouds and to avoid local optima during inference. Rather than static frame-by-frame poses, humans perform dynamic motion in daily life. We thus examine temporal continuity in dense shape inference.
We develop a continuous representation of human motion sequences from partial observations with neural implicit modeling, which enables us to complete spatial information and to enhance temporal frames. Our proposed method outperforms static methods, which lack temporal coherence, by correcting artefacts due to holes or noise. However, we still miss some high-frequency details in our results when using a naive training strategy. Therefore, we investigate how to represent fine details of humans with a coarse-to-fine strategy and temporal feature aggregation from an input sequence of depth images. This allows us to pyramidally learn a signed distance field in both the spatial and temporal directions in order to recover fine details of cloth wrinkles and facial expressions.
... Based on these different methodologies being implemented to make 3D scanning affordable and widespread, we can generalize that modern manufacturing technologies such as 3D scanning and 3D printing are now becoming available to the general public and the masses [11][12][13][14][15]. ...
Conference Paper
This paper describes the design and fabrication of a 3D scanner that uses low-cost hardware and open-source software. The cost of the hardware is reduced significantly by using wood as the construction material and by eliminating the IR sensor normally used to capture the dimensions of objects, a task performed instead by Netfabb in this scanner; this limits its application to small objects. The hardware is controlled by an Arduino UNO ATmega328P microcontroller, and free, open-source software is used throughout: 3D Zephyr for 2D-to-3D model generation, Netfabb for scaling, MeshLab for clean-up and alignment, and GOM Inspect to compare the generated scans with professional scans and their respective CAD models.
... Then the printer, which produces the output of the device, must be built so as to have the necessary accuracy and quality [2]. In addition to acceptable accuracy, non-contact 3D scanners can provide high speed when taking the dimensions of parts of different sizes [3]. Due to the wide use of 3D scanners in industry [4], especially in quality control [5], the use of megapixel-resolution cameras has increased accuracy, but on the other hand there are limitations such as increased price [6]. ...
... Combined with data for the rest of the head and shoulders captured by a structured light 3D scanner, the Smithsonian team was able to recreate a complete and accurate 3D representation with which to produce the physical bust using 3D printing [15] (Figure 5a). In many ways, 3D scanning shares or utilises some of the same techniques as photography, capturing traditional visual information with a camera while using enhanced assistive means, such as structured light patterns or photogrammetry, to capture additional spatial distance information and record not only colour and light but also three-dimensional form [127] [24]. In this post-photographic era, 3D scanning can be seen as a form of photography in the broadest sense, or Augmented Photography. ...
Preprint
Full-text available
The metaverse, an enormous virtual-physical cyberspace, has brought unprecedented opportunities for artists to blend every corner of our physical surroundings with digital creativity. This article conducts a comprehensive survey of computational arts, covering seven critical topics relevant to the metaverse and describing novel artworks in blended virtual-physical realities. The topics first cover the building elements of the metaverse, e.g., virtual scenes and characters and auditory and textual elements. Next, several remarkable types of novel creation in the expanded horizons of metaverse cyberspace are reflected upon, such as immersive arts, robotic arts, and other user-centric approaches fuelling contemporary creative output. Finally, we propose several research agendas: democratising computational arts, digital privacy and safety for metaverse artists, ownership recognition for digital artworks, technological challenges, and so on. The survey also serves as introductory material for artists and metaverse technologists beginning creations in the realm of surrealistic cyberspace.
... We place a single stereo camera pair (in our case, a T261) behind the view of the headset. We then use binary-coded structured light homography [3] to compute the depth of each pixel displayed on the HMD, in order to collect a series of undistortion points. These points allow us to create a lookup table, resolving a 3rd-order 2D polynomial from the 2D screen space to the corresponding point on the now-undistorted virtual viewport. ...
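The polynomial fit this excerpt mentions, mapping 2D screen space to the undistorted viewport, can be done by least squares over the collected point correspondences. A hedged NumPy sketch (function names are hypothetical, not from the cited work):

```python
import numpy as np

def poly3_features(x, y):
    # All monomials x^i * y^j with i + j <= 3 (10 terms).
    return np.stack([x ** i * y ** j
                     for i in range(4) for j in range(4 - i)], axis=-1)

def fit_undistortion(src_xy, dst_xy):
    """Least-squares fit of a 3rd-order 2D polynomial map src -> dst."""
    A = poly3_features(src_xy[:, 0], src_xy[:, 1])
    coef, *_ = np.linalg.lstsq(A, dst_xy, rcond=None)
    return coef                        # (10, 2) coefficient table

def apply_undistortion(coef, xy):
    return poly3_features(xy[:, 0], xy[:, 1]) @ coef
```

At least 10 well-spread correspondences are needed for a unique fit; in practice many more are collected so the least-squares solution averages out measurement noise.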
... We do not need to rotate the camera because it already has a wide field of view with a proper lens (e.g., fish-eye lens). With this setup, the first step is to precisely estimate this rotation axis with respect to the camera coordinates, to measure the depth of an object using triangulation [23]. Owing to various reasons, such as the fabrication error of mechanical parts, line laser misalignment to the rotation axis, line laser optical properties, and so on, the rotation axis of the line laser is somewhat different from the ideal CAD design. ...
Article
Full-text available
In a 3D scanning system, using a camera and a line laser, it is critical to obtain the exact geometrical relationship between the camera and laser for precise 3D reconstruction. With existing depth cameras, it is difficult to scan a large object or multiple objects in a wide area because only a limited area can be scanned at a time. We developed a 3D scanning system with a rotating line laser and wide-angle camera for large-area reconstruction. To obtain 3D information of an object using a rotating line laser, we must be aware of the plane of the line laser with respect to the camera coordinates at every rotating angle. This is done by estimating the rotation axis during calibration and then by rotating the laser at a predefined angle. Therefore, accurate calibration is crucial for 3D reconstruction. In this study, we propose a calibration method to estimate the geometrical relationship between the rotation axis of the line laser and the camera. Using the proposed method, we could accurately estimate the center of a cone or cylinder shape generated while the line laser was rotating. A simulation study was conducted to evaluate the accuracy of the calibration. In the experiment, we compared the results of the 3D reconstruction using our system and a commercial depth camera. The results show that the precision of our system is approximately 65% higher for plane reconstruction, and the scanning quality is also much better than that of the depth camera.
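The calibration described in this abstract requires knowing the laser plane at every rotation angle. Given the calibrated axis and a reference plane, each rotated plane follows from a Rodrigues rotation about that axis; the following is a hedged sketch of that geometric step (names hypothetical, not the authors' code):

```python
import numpy as np

def rodrigues(axis, theta):
    """Rotation matrix for angle theta about a (unit) axis."""
    a = np.asarray(axis, float)
    a = a / np.linalg.norm(a)
    K = np.array([[0, -a[2], a[1]],
                  [a[2], 0, -a[0]],
                  [-a[1], a[0], 0]])
    return np.eye(3) + np.sin(theta) * K + (1 - np.cos(theta)) * (K @ K)

def laser_plane_at_angle(n0, p0, axis_dir, axis_pt, theta):
    """Rotate the reference laser plane (normal n0 through point p0)
    by theta about the calibrated axis; returns (n, d) with n.X + d = 0."""
    R = rodrigues(axis_dir, theta)
    n = R @ n0
    p = axis_pt + R @ (p0 - axis_pt)
    return n, -float(n @ p)
```

Each camera pixel lit by the laser can then be triangulated against the plane returned for the current rotation angle, which is why errors in the estimated axis propagate directly into the reconstruction.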