Article

Design of multipath error correction algorithm of coarse and fine sparse decomposition based on compressed sensing in time-of-flight cameras


Abstract

A single pixel in a time-of-flight camera can receive reflected light from multiple scene points, resulting in erroneous depth information. In this paper, a coarse and fine sparse decomposition based on compressed sensing is applied to multipath separation. The method modulates the light source with a linear combination of multiple frequency signals. The measurement vector, obtained through a finite number of random measurements, is subjected to two sparse decompositions – coarse separation followed by fine positioning – and the minimum (direct-path) depth is finally recovered accurately. With the same number of measurements, computational cost, and storage space, the accuracy of the coarse and fine sparse decomposition based on compressed sensing improves by nearly an order of magnitude over sparse decomposition without compressed sensing, and the method achieves multipath separation accuracy at the sub-millimeter level.
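A minimal numerical sketch of the coarse-to-fine idea the abstract describes, using orthogonal matching pursuit as the sparse decomposition; the frequencies, grids, amplitudes, and measurement count below are illustrative assumptions, not the paper's actual parameters:

```python
import numpy as np

rng = np.random.default_rng(0)
c = 3e8
freqs = np.arange(50e6, 850e6, 50e6)          # 16 modulation frequencies (assumed)

def dictionary(depths):
    # Column = stacked cos/sin response of one candidate depth (tau = 2d/c).
    ph = 4 * np.pi * freqs[:, None] * depths[None, :] / c
    return np.vstack([np.cos(ph), np.sin(ph)])

def omp(A, y, k):
    # Orthogonal matching pursuit: greedily select k atoms, re-fit each time.
    resid, support = y.copy(), []
    for _ in range(k):
        support.append(int(np.argmax(np.abs(A.T @ resid))))
        coef, *_ = np.linalg.lstsq(A[:, support], y, rcond=None)
        resid = y - A[:, support] @ coef
    return support

# Two interfering returns: direct path at 1.20 m, indirect at 2.05 m.
signal = dictionary(np.array([1.20, 2.05])) @ np.array([1.0, 0.4])
Phi = rng.standard_normal((24, signal.size)) / np.sqrt(24)   # random projection
y = Phi @ signal

# Stage 1 (coarse): locate the returns on a 5 cm grid.
coarse = np.arange(0.5, 3.0, 0.05)
sup_c = omp(Phi @ dictionary(coarse), y, 2)
# Stage 2 (fine): re-solve on a 1 mm grid around the coarse hits only.
fine = np.unique(np.concatenate(
    [np.arange(coarse[s] - 0.06, coarse[s] + 0.06, 0.001) for s in sup_c]))
sup_f = omp(Phi @ dictionary(fine), y, 2)
direct_depth = fine[sup_f].min()              # shortest path = direct return
```

The coarse pass narrows the search to a few centimeters around each return, so the fine 1 mm dictionary stays small; the shortest recovered path is taken as the direct depth.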


Article
Time-of-Flight (ToF) ranging has been used as a method for acquiring depth information in a variety of applications. However, Multi-Path Interference (MPI) can severely affect ToF imaging quality. Deep learning can achieve significant improvements in correcting MPI compared to conventional methods. In this paper, we use an attention Generative Adversarial Network (GAN) consisting of three components: a residual attention network, an encoder-decoder network, and a discriminator network. Our approach introduces a residual structure and an attention mechanism to capture the feature distribution of spatial scenes, generating attention-aware features. Due to the lack of large real ToF datasets, we train and test on synthetic images with ground truth and a small number of real images, using a combination of supervised and unsupervised training. We use Mean Absolute Error (MAE) and relative error metrics to quantitatively evaluate our model. The experimental results show that our method is effective in removing MPI error from depth images at different frequencies and scenes, greatly improving the accuracy of depth estimation while remaining robust to the differences between real and simulated ToF images.
Article
Full-text available
Time-of-flight (ToF) imaging has become a widespread technique for depth estimation, allowing affordable off-the-shelf cameras to provide depth maps in real time. However, multipath interference (MPI) resulting from indirect illumination significantly degrades the captured depth. Most previous works have tried to solve this problem by means of complex hardware modifications or costly computations. In this work, we avoid these approaches and propose a new technique to correct errors in depth caused by MPI, which requires no camera modifications and takes just 10 milliseconds per frame. Our observations about the nature of MPI suggest that most of its information is available in image space; this allows us to formulate the depth imaging process as a spatially-varying convolution and use a convolutional neural network to correct MPI errors. Since the input and output data present similar structure, we base our network on an autoencoder, which we train in two stages. First, we use the encoder (convolution filters) to learn a suitable basis to represent MPI-corrupted depth images; then, we train the decoder (deconvolution filters) to correct depth from synthetic scenes, generated by using a physically-based, time-resolved renderer. This approach allows us to tackle a key problem in ToF, the lack of ground-truth data, by using a large-scale captured training set with MPI-corrupted depth to train the encoder, and a smaller synthetic training set with ground truth depth to train the decoder stage of the network. We demonstrate and validate our method on both synthetic and real complex scenarios, using an off-the-shelf ToF camera, and with only the captured, incorrect depth as input.
Article
Full-text available
Time of flight cameras may emerge as the 3-D sensor of choice. Today, time of flight sensors use phase-based sampling, where the phase delay between emitted and received high-frequency signals encodes distance. In this paper, we present a new time of flight architecture that relies only on frequency---we refer to this technique as frequency-domain time of flight (FD-TOF). Inspired by optical coherence tomography (OCT), FD-TOF excels when the frequency bandwidth is high. With the increasing frequency of TOF sensors, new challenges to time of flight sensing continue to emerge. At high frequencies, FD-TOF offers several potential benefits over phase-based time of flight methods.
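The OCT analogy can be illustrated in a few lines: if the sensor records one complex response per swept modulation frequency, the depth appears as a peak of the inverse Fourier transform of that frequency-domain response. The band, step, and depth below are hypothetical:

```python
import numpy as np

c = 3e8
freqs = np.arange(0, 2e9, 10e6)               # sweep 0..2 GHz in 10 MHz steps
depth = 1.8                                   # metres (assumed)
tau = 2 * depth / c                           # round-trip delay
# At each frequency the sensor records the phasor of the delayed return.
resp = np.exp(-2j * np.pi * freqs * tau)

# As in OCT, the inverse transform of the frequency response is the
# impulse response; its peak bin gives the delay.
spectrum = np.abs(np.fft.ifft(resp))
n = int(np.argmax(spectrum))
df = freqs[1] - freqs[0]
tau_est = n / (len(freqs) * df)               # delay from the peak bin index
depth_est = c * tau_est / 2
```

The depth resolution of this sketch is c / (2 * N * df), i.e. the wider the swept bandwidth, the finer the bins, matching the abstract's point that FD-TOF excels when frequency bandwidth is high.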
Article
Full-text available
Multipath interference of light is the cause of important errors in Time of Flight (ToF) depth estimation. This paper proposes an algorithm that removes multipath distortion from a single depth map obtained by a ToF camera. Our approach does not require information about the scene, apart from ToF measurements. The method is based on fitting ToF measurements with a radiometric model. Model inputs are depth values free from multipath interference whereas model outputs consist of synthesized ToF measurements. We propose an iterative optimization algorithm that obtains model parameters that best reproduce ToF measurements, recovering the depth of the scene without distortion. We show results with both synthetic and real scenes captured by commercial ToF sensors. In all cases, our algorithm accurately corrects the multipath distortion, obtaining depth maps that are very close to ground truth data.
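A toy version of the model-fitting idea, with the radiometric model reduced to a direct return plus one lumped global return, and the paper's iterative optimizer replaced by an exhaustive grid with a linear amplitude subproblem; the frequencies and depths are illustrative assumptions:

```python
import numpy as np

c = 3e8
freqs = np.array([20e6, 40e6, 60e6, 80e6])    # modulation frequencies (assumed)
tau = lambda d: 2 * d / c                     # depth -> round-trip delay

def model(tau_d, tau_g, a):
    # Two-component model: direct return plus one lumped global return.
    e = np.exp(-2j * np.pi * freqs[:, None] * np.array([tau_d, tau_g])[None, :])
    return e @ a

# Simulated measurement: direct path 1.5 m, global return 2.4 m.
meas = model(tau(1.5), tau(2.4), np.array([1.0, 0.35]))

# Fit: for each candidate (direct, global) depth pair the amplitudes are a
# linear least-squares subproblem; keep the pair with the smallest residual.
best = (np.inf, None)
for d_dir in np.arange(0.5, 3.0, 0.01):
    for d_glb in np.arange(d_dir + 0.1, 4.0, 0.05):
        E = np.exp(-2j * np.pi * freqs[:, None]
                   * np.array([tau(d_dir), tau(d_glb)])[None, :])
        a, *_ = np.linalg.lstsq(E, meas, rcond=None)
        r = np.linalg.norm(meas - E @ a)
        if r < best[0]:
            best = (r, d_dir)
resid, depth_direct = best                    # multipath-free direct depth
```

The model input (a multipath-free depth) reproduces the synthesized ToF measurements only at the true parameters, which is the core of the fitting idea; the paper's optimizer explores this space far more efficiently than the brute-force grid used here.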
Article
Full-text available
Time-of-flight (ToF) cameras calculate depth maps by reconstructing phase shifts of amplitude-modulated signals. For broad illumination of transparent objects, reflections from multiple scene points can illuminate a given pixel, giving rise to an erroneous depth map. We report here a sparsity-regularized solution that separates K interfering components using multiple modulation frequency measurements. The method maps ToF imaging to the general framework of spectral estimation theory and has applications in improving depth profiles and exploiting multiple scattering.
Article
Full-text available
Recently, Range Imaging (RIM) cameras have become available that capture high-resolution range images at video rate. Such cameras measure the distance to the scene for each pixel independently, based on a measured time of flight (TOF). Some cameras, such as the SwissRanger(tm) SR-3000, measure the TOF based on the phase shift of reflected light from a modulated light source. Such cameras are shown to be susceptible to severe distortions in the measured range due to light scattering within the lens and camera. Earlier work compensated for such distortions using a simplified Gaussian point spread function and inverse filtering. In this work a method is proposed for identifying and using generally shaped empirical models of the point spread function to obtain a more accurate compensation. The otherwise difficult inverse problem is solved by applying the forward model iteratively, according to well-established procedures from image restoration. Each iteration is a sequential process, starting with the brightest parts of the image and moving to the least bright parts, with each step subtracting the estimated effects from the measurements. This approach gives faster and more reliable compensation convergence. An average error reduction of more than 60% is demonstrated on real images. The computational load corresponds to one or two convolutions of the measured complex image with a real filter of the same size as the image.
Article
This paper demonstrates the separation of multi-path components caused by specular reflection using temporally compressive time-of-flight (CToF) depth imaging. Because a multi-aperture ultra-high-speed (MAUHS) CMOS image sensor is utilized, no sweeping or changing of frequency, delay, or shutter code is necessary, making the proposed scheme suitable for capturing dynamic scenes. A short impulse of light is used for excitation, and each aperture compresses the temporal impulse response with a different shutter pattern at the pixel level. In the experiment, a transparent acrylic plate was placed 0.3 m away from the camera, and an objective mirror was placed at a distance of 1.1 m or 1.9 m from the camera. A set of 15 compressed images was captured at an acquisition rate of 25.8 frames per second, and 32 subsequent images were reconstructed from them. The multi-path interference from the transparent acrylic plate was distinguished. Copyright © 2018 by ITE Transactions on Media Technology and Applications (MTA).
Article
Transient imaging is a technique in photography that records the process of light propagation before it reaches a stationary state such that events at the light speed level can be observed. In this review we introduce three main models for transient imaging with a time-of-flight (ToF) camera: correlation model, frequency-domain model, and compressive sensing model. Transient imaging applications usually involve resolving the problem of light transport and separating the light rays arriving along different paths. We discuss two of the applications: imaging objects inside scattering media and recovering both the shape and texture of an object around a corner.
Article
Consumer time-of-flight depth cameras like Kinect and PMD are cheap, compact and produce video-rate depth maps in short-range applications. In this paper we apply energy-efficient epipolar imaging to the ToF domain to significantly expand the versatility of these sensors: we demonstrate live 3D imaging at over 15 m range outdoors in bright sunlight; robustness to global transport effects such as specular and diffuse inter-reflections---the first live demonstration for this ToF technology; interference-free 3D imaging in the presence of many ToF sensors, even when they are all operating at the same optical wavelength and modulation frequency; and blur-free, distortion-free 3D video in the presence of severe camera shake. We believe these achievements can make such cheap ToF devices broadly applicable in consumer and robotics domains.
Conference Paper
3D Time-of-Flight sensing technology provides distance measurements from the camera to the scene across the field of view, producing a complete depth map. It works by illuminating the scene with a modulated light source and measuring the phase change between the illuminated and reflected light, which is translated to distance for each pixel simultaneously. The sensor receives radiance that is a combination of light arriving along multiple paths due to global illumination, and this global radiance causes multi-path interference. Separating these components to recover scene depths is challenging for corner-shaped and Cornell-box-like scenes, where the number of light paths increases. It is observed that, for different scenes, the global radiance vanishes as the modulation frequency increases beyond some threshold. This observation is used to develop a novel technique to recover an unambiguous depth map of a scene. It requires a minimum of two frequencies and three to four measurements, keeping the computational cost low.
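The two-frequency depth-unwrapping step such an approach relies on can be sketched as follows; the frequency pair and depth are assumed for illustration (the pair is unambiguous up to c / (2 * gcd(f1, f2)) = 7.5 m):

```python
import numpy as np

c = 3e8
f1, f2 = 80e6, 100e6              # two high modulation frequencies (assumed)

def wrapped_phase(depth, f):
    # Phase reported by the sensor, wrapped into [0, 2*pi).
    return (4 * np.pi * f * depth / c) % (2 * np.pi)

def unwrap_depth(ph1, ph2, max_depth=7.5):
    # Try every wrap count of f1 within range and keep the candidate depth
    # on which the second frequency's phase agrees best.
    best = (np.inf, None)
    wraps = int(2 * f1 * max_depth / c) + 1
    for k in range(wraps):
        d = (ph1 + 2 * np.pi * k) * c / (4 * np.pi * f1)
        err = abs(np.angle(np.exp(1j * (wrapped_phase(d, f2) - ph2))))
        if err < best[0]:
            best = (err, d)
    return best[1]

true_depth = 5.3                  # metres, beyond either single-frequency range
d = unwrap_depth(wrapped_phase(true_depth, f1), wrapped_phase(true_depth, f2))
```

Each frequency alone is ambiguous every c / (2f) metres (1.875 m at 80 MHz); the agreement check between the two wrapped phases selects the correct wrap count, which is why two frequencies suffice for an unambiguous map.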
Conference Paper
Multipath interference (MPI) adds noise to time-of-flight (TOF) measurements and thereby causes depth estimation inaccuracy. This paper combines multi-frequency TOF (MFT) acquisition and compressive sensing (CS) approaches to reconstruct the multipath reflections. Model mismatch and depth resolution problems are analyzed to achieve good measurement accuracy with high probability.
Article
Time of flight (ToF) range cameras illuminate the scene with an amplitude-modulated continuous wave light source and measure the returning modulation envelope: phase and amplitude. The phase change of the modulation envelope encodes the distance travelled. This technology suffers from measurement errors caused by multiple propagation paths from the light source to the receiving pixel. The multiple paths can be represented as the summation of a direct return, which is the return from the shortest path length, and a global return, which includes all other returns. We develop the use of a sinusoidal pattern from which a closed-form solution for the direct and global returns can be computed in nine frames, under the constraint that the global return is of spatially lower frequency than the illuminated pattern. In a demonstration on a scene constructed to have strong multipath interference, we find the direct return is not significantly different from the ground truth in 33/136 pixels tested; for the full-field measurement, it is significantly different for every pixel tested. The variance in the estimated direct phase and amplitude increases by a factor of eight compared with the standard time of flight range camera technique. © 2015 Society of Photo-Optical Instrumentation Engineers (SPIE).
Article
Multipath interference (MPI) is one of the major sources of both depth and amplitude measurement errors in Time-of-Flight (ToF) cameras, and the problem has seen a lot of attention recently. In this work, we discuss the MPI problem within the framework of spectral estimation theory and multi-frequency measurements. Compared to previous approaches that consider up to two interfering paths, our model considers the general case of K interfering paths. In the theoretical setting, we show that for K interfering paths of light, 2K + 1 frequency measurements suffice to recover the depth and amplitude values corresponding to each of the K optical paths. What singles out our method is that the algorithm is non-iterative, leading to a closed-form solution that is computationally attractive. Also, for the first time, we demonstrate the effectiveness of our model on an off-the-shelf Microsoft Kinect for the Xbox One.
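The 2K + 1 measurement claim has a classical closed-form illustration via Prony's method (an annihilating-filter solve followed by root finding). The base frequency, amplitudes, and depths below are assumptions for the sketch, not taken from the paper:

```python
import numpy as np

c, f0, K = 3e8, 10e6, 2                        # base frequency, K paths (assumed)
depths, amps = np.array([1.1, 2.6]), np.array([1.0, 0.5])
u = np.exp(-2j * np.pi * f0 * 2 * depths / c)  # one complex "pole" per path
n = np.arange(2 * K + 1)                       # 2K + 1 = 5 frequency measurements
m = (u[None, :] ** n[:, None]) @ amps          # m[n] = sum_i a_i * u_i**n

# Annihilating filter (Prony): m obeys a K-tap linear recurrence whose
# characteristic roots are exactly the poles u_i.
A = np.array([[m[1], m[0]],
              [m[2], m[1]],
              [m[3], m[2]]])
b = -m[2:5]
h, *_ = np.linalg.lstsq(A, b, rcond=None)      # recurrence coefficients
roots = np.roots(np.concatenate(([1.0], h)))   # roots = estimated poles
est = np.sort(-np.angle(roots) * c / (4 * np.pi * f0))   # pole phase -> depth
```

The amplitude of each path would follow from one further linear solve (a Vandermonde system in the recovered poles); the whole pipeline is non-iterative, matching the abstract's point.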
Article
Time of flight cameras produce real-time range maps at a relatively low cost using continuous wave amplitude modulation and demodulation. However, they are geared to measure range (or phase) for a single reflected bounce of light and suffer from systematic errors due to multipath interference. We re-purpose the conventional time of flight device for a new goal: to recover per-pixel sparse time profiles expressed as a sequence of impulses. With this modification, we show that we can not only address multipath interference but also enable new applications such as recovering depth of near-transparent surfaces, looking through diffusers and creating time-profile movies of sweeping light. Our key idea is to formulate the forward amplitude modulated light propagation as a convolution with custom codes, record samples by introducing a simple sequence of electronic time delays, and perform sparse deconvolution to recover sequences of Diracs that correspond to multipath returns. Applications to computer vision include ranging of near-transparent objects and subsurface imaging through diffusers. Our low cost prototype may lead to new insights regarding forward and inverse problems in light transport.
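The forward model (convolution of a sparse Dirac train with a custom code) and the sparse deconvolution can be sketched with a greedy solver; the code length, time binning, and path amplitudes below are hypothetical:

```python
import numpy as np

rng = np.random.default_rng(1)
n = 64                                         # time bins per pixel
code = rng.choice([-1.0, 1.0], size=31)        # hypothetical +/-1 modulation code

# Forward model: recorded samples = sparse impulse train (one Dirac per
# light path) convolved with the emitted code.
x_true = np.zeros(n)
x_true[[12, 29]] = [1.0, 0.4]                  # direct return + one multipath return
y = np.convolve(x_true, code)[:n]

# Express the convolution as an explicit matrix so the inverse is y = C x.
C = np.column_stack([np.convolve(np.eye(n)[:, i], code)[:n] for i in range(n)])

# Greedy sparse deconvolution (matching pursuit with least-squares re-fitting).
resid, support = y.copy(), []
for _ in range(2):                             # two light paths expected
    support.append(int(np.argmax(np.abs(C.T @ resid))))
    coef, *_ = np.linalg.lstsq(C[:, support], y, rcond=None)
    resid = y - C[:, support] @ coef
```

The recovered support gives the time bins of the multipath returns, i.e. the sequence of Diracs the abstract describes; a +/-1 code is used here because its sharp autocorrelation makes the greedy picks reliable.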
Conference Paper
A transient image is the optical impulse response of a scene which visualizes light propagation during an ultra-short time interval. In this paper we discover that the data captured by a multifrequency time-of-flight (ToF) camera is the Fourier transform of a transient image, and identify the sources of systematic error. Based on the discovery we propose a novel framework of frequency-domain transient imaging, as well as algorithms to remove systematic error. The whole process of our approach is of much lower computational cost, especially lower memory usage, than Heide et al.'s approach using the same device. We evaluate our approach on both synthetic and real-datasets.
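The Fourier-transform relationship stated here is easy to verify numerically: sampling one complex correlation value per modulation frequency yields the DFT of the per-pixel transient, and the inverse transform recovers it. The bin width and return times below are assumed:

```python
import numpy as np

# Discrete transient profile of one pixel: two returns at different times.
nt, dt = 128, 1e-10                  # 12.8 ns window, 0.1 ns bins (assumed)
h = np.zeros(nt)
h[20], h[55] = 1.0, 0.3              # direct return and a later multipath return

# A multifrequency ToF camera samples the Fourier transform of h: one
# complex value H(f_k) per modulation frequency f_k.
freqs = np.fft.fftfreq(nt, d=dt)
H = np.array([np.sum(h * np.exp(-2j * np.pi * f * np.arange(nt) * dt))
              for f in freqs])

# The transient image is recovered by the inverse transform.
h_rec = np.real(np.fft.ifft(H))
```

In practice the measured H carries the systematic errors the paper identifies (harmonics, fixed delays), so the inverse step must be preceded by their removal; the sketch shows only the ideal noiseless relationship.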
Conference Paper
Multipath interference is inherent to the working principle of a Time-of-flight camera and can influence the measurements by several centimeters. Especially in applications that demand high accuracy, such as object localization for robotic manipulation or ego-motion estimation of mobile robots, multipath interference is not tolerable. In this paper we formulate a multipath model in order to estimate the interference and correct the measurements. The proposed approach incorporates the measured scene structure: all distracting surfaces are assumed to be Lambertian radiators, and the directional interference is simulated for correction purposes. The positive impact of these corrections is demonstrated experimentally.
Fast Multipath Estimation for PMD Sensors
  • M Conde
  • T Kerstein
  • B Buxbaum
Optimized scattering compensation for time-of-flight camera
  • J Mure-Dubois
  • H Hügli