
3D reconstruction method based on N-step phase unwrapping


Abstract and Figures

Reducing the number of images in fringe projection profilometry has emerged as a significant research focus. Traditional temporal phase unwrapping algorithms typically require an additional set of coding fringe or phase shift fringe images to determine the fringe order and facilitate phase unwrapping, in addition to the essential sinusoidal phase shift fringe for calculating the wrapped phase. In order to reduce the required number of fringe images and increase reconstruction speed, this paper proposes a three-dimensional (3D) reconstruction method inspired by spatial phase unwrapping. The proposed method is based on the N-step temporal phase unwrapping algorithm and can solve the wrapped phase and fringe order using only a set of sinusoidal phase shift fringe images. Our method achieves a further reduction in the required number of images without compromising reconstruction accuracy. In the calculation of the absolute phase, our proposed method only requires an N-step standard phase shift sinusoidal fringe image, eliminating the need for additional fringe images to determine the fringe order. Firstly, we employ the standard N-step phase shift algorithm to compute the wrapped phase and apply a mask for background removal. Next, we directly calculate the fringe order using the wrapped phase and mask and solve for the absolute phase based on the connected region labeling theorem. Our method achieves 3D reconstruction using a minimum of three fringe images, while maintaining reconstruction precision comparable to that of the traditional temporal phase unwrapping technique. As no additional fringe image is required to solve the fringe order, our method has the potential to achieve significantly faster reconstruction speed.
Composite computational framework: the wrapped phase φ(x,y) is acquired by an N-step phase-shifting algorithm; the Mask is created via the data modulation γ(x,y); the fringe order K(x,y) is determined jointly by the wrapped phase φ(x,y) and the Mask; the final absolute phase Φ(x,y) is derived by Eq. (15).
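The pipeline described in the abstract (an N-step phase shift yields the wrapped phase, and the data modulation yields a background mask) can be sketched as follows. This is a minimal illustration of the standard N-step algorithm, assuming the fringe model I_n = A + B·cos(φ + 2πn/N); the function name and threshold are placeholders, not the authors' implementation.

```python
import numpy as np

def wrapped_phase_and_mask(images, gamma_thresh=0.1):
    """Standard N-step phase shifting: I_n = A + B*cos(phi + 2*pi*n/N).

    Returns the wrapped phase phi in (-pi, pi] and a background mask
    built from the data modulation gamma = B / A.
    """
    I = np.asarray(images, dtype=float)
    N = I.shape[0]
    delta = 2 * np.pi * np.arange(N) / N             # phase shifts
    S = np.tensordot(np.sin(delta), I, axes=(0, 0))  # sum_n I_n sin(delta_n) = -(N*B/2) sin(phi)
    C = np.tensordot(np.cos(delta), I, axes=(0, 0))  # sum_n I_n cos(delta_n) =  (N*B/2) cos(phi)
    phi = np.arctan2(-S, C)                          # wrapped phase via arctangent
    A = I.mean(axis=0)                               # background intensity
    B = (2.0 / N) * np.hypot(S, C)                   # fringe modulation
    gamma = B / np.maximum(A, 1e-12)                 # data modulation
    return phi, gamma > gamma_thresh
```

With three or more shifted images, the same function covers the minimum-image case (N = 3) highlighted in the abstract.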
The Visual Computer (2024) 40:3601–3613
https://doi.org/10.1007/s00371-023-03054-y
ORIGINAL ARTICLE
3D reconstruction method based on N-step phase unwrapping
Lin Wang1,2,3 · Lina Yi1,2,3 · Yuetong Zhang1,2,3 · Xiaofang Wang4 · Wei Wang1,2,3 · Xiangjun Wang1,2,3 · Xuan Wang5
Accepted: 3 August 2023 / Published online: 19 August 2023
© The Author(s), under exclusive licence to Springer-Verlag GmbH Germany, part of Springer Nature 2023, corrected publication 2024
Keywords: 3D reconstruction · Structured light · Absolute phase retrieval · Phase unwrapping
1 Introduction
Fringe projection profilometry (FPP) is a widely employed optical reconstruction method that holds significant importance in scientific applications and engineering fields, including machine vision, intelligent manufacturing, product inspection, and biometrics [18]. This technique offers several advantages, such as high precision, nondestructiveness,
Corresponding author: Lin Wang, Nchuwl@163.com
1 State Key Laboratory of Precision Measuring Technology and Instruments, Tianjin University, Tianjin, China
2 Key Laboratory of MOEMS of the Ministry of Education, Tianjin University, Tianjin, China
3 School of Precision Instrument and Opto-Electronics Engineering, Tianjin University, Tianjin, China
4 Unit 32382 of PLA, Wuhan, China
5 North University of China, Taiyuan, China
and full-field flexibility. Among the various implementations of FPP, the digital fringe projection technique, also known as phase shift profilometry (PSP), stands out for its exceptional reconstruction efficiency, high sample density, and precision [3,9–11]. In FPP, a fringe pattern is projected onto the target object, and the resulting distorted fringe image is captured by an angled camera. Different fringe analysis algorithms are then employed to recover the corresponding wrapped phase from the captured image. In PSP, the phase information of the measured object height is extracted using the phase shift algorithm, which involves an arctangent calculation. The wrapped phase is confined to (−π, π) with 2π phase jumps [2,12], necessitating phase unwrapping to eliminate discontinuities and obtain a continuous phase map.
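As a concrete 1D illustration of why this step is needed (using NumPy's generic spatial unwrapper, not the method proposed in this paper):

```python
import numpy as np

# A continuous phase ramp spanning three fringe periods.
absolute = np.linspace(0.0, 6.0 * np.pi, 200)

# The arctangent calculation only yields the principal value,
# wrapping the ramp into (-pi, pi] and introducing 2*pi jumps.
wrapped = np.angle(np.exp(1j * absolute))

# Unwrapping adds the appropriate integer multiple of 2*pi at
# each jump, recovering the continuous phase map.
recovered = np.unwrap(wrapped)
assert np.allclose(recovered, absolute)
```

Temporal methods instead determine the integer fringe order per pixel from extra patterns, which is the cost the reviewed paper aims to remove.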
After decades of development, researchers have proposed
many fast phase unwrapping algorithms [7], which can be
divided into two categories: spatial phase unwrapping algo-
rithms and temporal phase unwrapping algorithms. Classical
spatial phase unwrapping algorithms [12], such as least
Chapter
To autonomously explore and densely recover an unknown indoor scene is a nontrivial task in 3D scene reconstruction. It is challenging for scenes composed of compact and complicated interconnected rooms with no priors. To address this issue, we use autonomous scanning to reconstruct multi-room scenes and produce a complete reconstruction in as few scans as possible. With a progressive discrete motion planning module, we introduce submodular-based planning for automated scanning scenarios to efficiently guide the active scanning by Next-Best-View until marginal gains diminish. The submodular-based planning gives an approximately optimal solution to the "Next-Best-View" problem, which is NP-hard in the case of no prior knowledge. Experiments show that our method can significantly improve scanning efficiency for multi-room scenes while keeping reconstruction errors low.
Article
Full-text available
We present a method for reconstructing accurate and consistent 3D hands from a monocular video. We observe that the detected 2D hand keypoints and the image texture provide important cues about the geometry and texture of the 3D hand, which can reduce or even eliminate the requirement for 3D hand annotation. Accordingly, in this work, we propose $\mathrm{{S}^{2}HAND}$, a self-supervised 3D hand reconstruction model that can jointly estimate pose, shape, texture, and the camera viewpoint from a single RGB input through the supervision of easily accessible 2D detected keypoints. We leverage the continuous hand motion information contained in unlabeled video data and explore $\mathrm{{S}^{2}HAND(V)}$, which uses a weight-shared $\mathrm{{S}^{2}HAND}$ to process each frame and exploits additional motion, texture, and shape consistency constraints to obtain more accurate hand poses and more consistent shapes and textures. Experiments on benchmark datasets demonstrate that our self-supervised method achieves hand reconstruction performance comparable to recent fully-supervised methods in the single-frame setting, and notably improves reconstruction accuracy and consistency when using video training data.
Article
Full-text available
Gray-code plus phase-shifting is currently a commonly used method for structured light three-dimensional (3D) measurement that is able to measure complex surfaces. However, the Gray-code fringe patterns tend to be complicated, making the measurement process time-consuming. To solve this problem and to obtain faster speed without sacrificing accuracy, a 3D measurement method based on three-step phase-shifting and a binary fringe is proposed; the method contains three phase-shifting fringe patterns and an additional binary fringe pattern. The period of the binary fringe is designed to be the same as the three-step phase-shifting fringe. Because of the specific pattern design strategy, the three-step phase-shifting algorithm is used to obtain the wrapped phase, and the connected region labeling theorem is used to calculate the fringe order. A theoretical analysis, simulation, and experiments validate the efficiency and robustness of the proposed method. It can achieve high-precision 3D measurement, which performs almost the same as the Gray-code plus phase-shifting method. Since only one additional binary fringe pattern is required, it has the potential to achieve higher measurement speed.
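The connected-region labeling idea used in this line of work can be illustrated with `scipy.ndimage.label`, which assigns sequential integer labels to the stripes of a binary pattern; mapping those labels to fringe orders is then straightforward. The toy data below is purely illustrative, not the paper's pipeline.

```python
import numpy as np
from scipy import ndimage

# Toy binary fringe: three bright stripes on a dark background.
pattern = np.array([
    [0, 1, 1, 0, 0, 1, 1, 0, 0, 1],
    [0, 1, 1, 0, 0, 1, 1, 0, 0, 1],
])

# Connected-component labeling numbers the stripes 1, 2, 3 from
# left to right; such labels can index the fringe order K.
labels, num_stripes = ndimage.label(pattern)
assert num_stripes == 3
```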
Preprint
Full-text available
We are witnessing an explosion of neural implicit representations in computer vision and graphics. Their applicability has recently expanded beyond tasks such as shape generation and image-based rendering to the fundamental problem of image-based 3D reconstruction. However, existing methods typically assume constrained 3D environments with constant illumination captured by a small set of roughly uniformly distributed cameras. We introduce a new method that enables efficient and accurate surface reconstruction from Internet photo collections in the presence of varying illumination. To achieve this, we propose a hybrid voxel- and surface-guided sampling technique that allows for more efficient ray sampling around surfaces and leads to significant improvements in reconstruction quality. Further, we present a new benchmark and protocol for evaluating reconstruction performance on such in-the-wild scenes. We perform extensive experiments, demonstrating that our approach surpasses both classical and neural reconstruction methods on a wide variety of metrics.
Article
Full-text available
With the advances in scientific foundations and technological implementations, optical metrology has become versatile problem-solving backbones in manufacturing, fundamental research, and engineering applications, such as quality control, nondestructive testing, experimental mechanics, and biomedicine. In recent years, deep learning, a subfield of machine learning, is emerging as a powerful tool to address problems by learning from data, largely driven by the availability of massive datasets, enhanced computational power, fast data storage, and novel training algorithms for the deep neural network. It is currently promoting increased interests and gaining extensive attention for its utilization in the field of optical metrology. Unlike the traditional “physics-based” approach, deep-learning-enabled optical metrology is a kind of “data-driven” approach, which has already provided numerous alternative solutions to many challenging problems in this field with better performances. In this review, we present an overview of the current status and the latest progress of deep-learning technologies in the field of optical metrology. We first briefly introduce both traditional image-processing algorithms in optical metrology and the basic concepts of deep learning, followed by a comprehensive review of its applications in various optical metrology tasks, such as fringe denoising, phase retrieval, phase unwrapping, subset correlation, and error compensation. The open challenges faced by the current deep-learning approach in optical metrology are then discussed. Finally, the directions for future research are outlined.
Article
Full-text available
A phase unwrapping method for phase-shifting projected fringe profilometry is presented. It did not require additional projections to identify the fringe orders. The pattern used for the phase extraction could be used for phase unwrapping directly. By spatially encoding the fringe patterns that were used to perform the phase-shifting technique with binary contrasts, fringe orders could be discerned. For spatially isolated objects or surfaces with large depth discontinuities, unwrapping could be identified without ambiguity. Even though the surface color or reflectivity varied periodically with position, it distinguished the fringe order very well.
Article
Full-text available
This paper reviews recent developments of non-contact three-dimensional (3D) surface metrology using an active structured optical probe. We focus primarily on those active non-contact 3D surface measurement techniques that could be applicable to the manufacturing industry. We discuss principles of each technology, and its advantageous characteristics as well as limitations. Towards the end, we discuss our perspectives on the current technological challenges in designing and implementing these methods in practical applications.
Article
Full-text available
Two-wavelength fringe projection profilometry (FPP) unwraps a phase with the unambiguous phase range (UPR) of the least common multiple (LCM) of the two wavelengths. It is accurate, convenient, and robust, and thus plays an important role in shape measurement. However, when two non-coprime wavelengths are used, only a small UPR can be generated, and the unwrapping performance is compromised. In this Letter, a spatial pattern-shifting method (SPSM) is proposed to generate the maximum UPR (i.e., the product of the two wavelengths) from two non-coprime wavelengths. For the first time, to the best of our knowledge, the SPSM breaks the constraint of wavelength selection and enables complete (i.e., either coprime or non-coprime) two-wavelength FPP. The SPSM only requires spatially shifting the low-frequency pattern by the designed amounts and accordingly adjusting the fringe order determination, which is extremely convenient in implementation. Both numerical and experimental analyses verify its flexibility and correctness.
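For reference, the UPR of conventional two-wavelength unwrapping is the LCM of the two wavelengths (here in pixels), which collapses for non-coprime pairs; that collapse is the gap the SPSM closes. A minimal numeric sketch:

```python
from math import gcd

def upr(w1, w2):
    """UPR of conventional two-wavelength unwrapping: lcm(w1, w2)."""
    return w1 * w2 // gcd(w1, w2)

# Coprime wavelengths already reach the product:
assert upr(21, 20) == 420
# Non-coprime wavelengths collapse far below the product 24*18 = 432,
# which is the maximum UPR the SPSM recovers:
assert upr(24, 18) == 72
```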
Article
Full-text available
Binary coding methods have been widely used for phase unwrapping. However, traditional temporal binary coding methods require a sequence of binary patterns to encode the fringe order information. This paper presents a spatial binary coding (SBC) method that encodes the fringe order into only one binary pattern. Each stripe of the sinusoidal phase-shifting patterns corresponds to an $N$-bit codeword of the binary pattern. A robust stripe-wise decoding scheme is also developed to extract the $N$-bit codeword; the fringe order can then be determined, and stripe-wise phase unwrapping can be performed. Experimental results confirm that the SBC method can correctly recover the absolute phase of measured objects with only one additional binary pattern.
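A minimal sketch of the decoding idea: each stripe carries an N-bit binary codeword, and interpreting it as an integer yields a fringe order index. The function and codeword below are hypothetical illustrations, not the SBC paper's exact scheme.

```python
def codeword_to_order(bits):
    """Interpret an N-bit codeword (MSB first) as an integer fringe order."""
    order = 0
    for b in bits:
        order = (order << 1) | b
    return order

# A 3-bit codeword read off one stripe of the binary pattern:
assert codeword_to_order([1, 0, 1]) == 5
```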
Article
Full-text available
Fringe projection profilometry has been increasingly sought and applied in dynamic three-dimensional (3D) shape measurement. In this work, a robust and high-efficiency 3D measurement based on Gray-code light is proposed. Unlike the traditional method, a novel tripartite phase unwrapping method is proposed to avoid the jump errors on the boundary of code words, which are mainly caused by the defocusing of the projector and the motion of the tested object. Subsequently, the time-overlapping coding strategy is presented to greatly increase the coding efficiency, decreasing the projected number in each group from 7 (i.e., 3 + 4) to 4 (i.e., 3 + 1) for one restored 3D frame. The combination of the two proposed techniques allows reconstruction of a pixel-wise and unambiguous 3D geometry of dynamic scenes with strong noise using every 4 projected patterns. To our knowledge, the presented techniques preserve the high anti-noise ability of the Gray-code-based method while overcoming the drawbacks of jump errors and low coding efficiency for the first time. Experiments have demonstrated that the proposed method can achieve robust and high-efficiency 3D shape measurement of high-speed dynamic scenes even when polluted by strong noise.
Chapter
Traditionally, 3D indoor scene reconstruction from posed images happens in two phases: per-image depth estimation, followed by depth merging and surface reconstruction. Recently, a family of methods have emerged that perform reconstruction directly in final 3D volumetric feature space. While these methods have shown impressive reconstruction results, they rely on expensive 3D convolutional layers, limiting their application in resource-constrained environments. In this work, we instead go back to the traditional route, and show how focusing on high quality multi-view depth prediction leads to highly accurate 3D reconstructions using simple off-the-shelf depth fusion. We propose a simple state-of-the-art multi-view depth estimator with two main contributions: 1) a carefully-designed 2D CNN which utilizes strong image priors alongside a plane-sweep feature volume and geometric losses, combined with 2) the integration of keyframe and geometric metadata into the cost volume which allows informed depth plane scoring. Our method achieves a significant lead over the current state-of-the-art for depth estimation and close or better for 3D reconstruction on ScanNet and 7-Scenes, yet still allows for online real-time low-memory reconstruction. Code, models and results are available at https://nianticlabs.github.io/simplerecon.