Figure 4 - available from: Machine Vision and Applications
Basic principle of a Flash LIDAR camera. An eye-safe laser flood-illuminates an object of interest


Source publication
Article
Time-of-flight (TOF) cameras are sensors that can measure the depths of scene-points, by illuminating the scene with a controlled laser or LED source, and then analyzing the reflected light. In this paper we will first describe the underlying measurement principles of time-of-flight cameras, including: (i) pulsed-light cameras, which measure direct...
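To make the two measurement principles referenced above concrete, here is a minimal numerical sketch (values are illustrative, not taken from the paper): a pulsed-light camera converts a measured round-trip time directly to depth, while a continuous-wave camera converts a measured phase shift at the modulation frequency.

```python
# Minimal numerical sketch of the two ToF measurement principles
# (illustrative values only; none of these numbers come from the paper).
import math

C = 299_792_458.0  # speed of light [m/s]

def pulsed_depth(round_trip_time_s: float) -> float:
    """Pulsed-light (direct) ToF: depth from the measured round-trip time."""
    return C * round_trip_time_s / 2.0

def cw_depth(phase_shift_rad: float, modulation_freq_hz: float) -> float:
    """Continuous-wave ToF: depth from the phase shift at the modulation frequency.
    Unambiguous only within half the modulation wavelength."""
    return C * phase_shift_rad / (4.0 * math.pi * modulation_freq_hz)

print(pulsed_depth(66.7e-9))   # ~10 m for a 66.7 ns round trip
print(cw_depth(2.09, 20e6))    # ~2.5 m at 20 MHz modulation and ~2.09 rad phase shift
```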

Citations

... Therefore, these methods cannot exclude the influence of surface roughness on the material property acquisition performance. In contrast, non-contact methods mainly rely on a time-of-flight (ToF) camera, because this type of camera actively emits infrared waves to irradiate a scene, thus avoiding interference from ambient light [5][6][7][8][9]. At the same time, it can extract parameters such as reflectivity and transmittance from ToF imaging models as classification features to realize transparent material classification. ...
Article
    Classification of transparent materials with various roughness types has been widely used in the field of computer vision. However, the surface roughness of a transparent material affects the extraction effect of classification features, thus affecting the performance of transparent material classification. In this study, a classification method of transparent materials with various surface roughness types and transparencies, which uses the microfacet shape factor, reflectivity, and transmissivity as classification characteristics, is proposed. First, a transparent material feature extraction method based on the microfacet distribution function is proposed for the first time, and the microfacet shape factor, reflectivity, and transmissivity are extracted by our model as classification features. The microfacet distribution model is combined with the time-of-flight imaging model to achieve an accurate classification of surfaces with various roughness types. Then, according to the nonlinear and discrete characteristics of the data, an appropriate classifier is selected to realize the transparent material classification. The transparent material classification experiments are performed using four types of material appearances, and the proposed method is compared with the methods of Shim et al. and Lang et al. The average classification accuracy of the proposed method for the transparent materials with four material appearances is 92.62%, which represents improvements of 56.18% and 11.58% compared with the methods of Shim et al. and Lang et al., respectively. These improvements are achieved because the proposed method uses the microfacet shape factors as classification features, which ensures that the classification effect of transparent materials is not affected by surface roughness. Finally, the proposed method is suitable for all transparent materials.
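As a rough illustration of the feature-based classification step described in this abstract, the sketch below trains a generic classifier on the three extracted features (microfacet shape factor, reflectivity, transmissivity); the classifier choice and all feature values are placeholder assumptions, not the authors' data or model.

```python
# Hedged sketch: classify transparent materials from three extracted features
# (microfacet shape factor, reflectivity, transmissivity). The classifier and
# the feature values are illustrative; the paper selects its own classifier
# based on the nonlinear, discrete nature of its data.
import numpy as np
from sklearn.ensemble import RandomForestClassifier

# Each row: [shape_factor, reflectivity, transmissivity]; labels are material classes.
X = np.array([
    [0.10, 0.04, 0.92],   # smooth glass (placeholder values)
    [0.55, 0.08, 0.70],   # ground glass (placeholder values)
    [0.20, 0.06, 0.85],
    [0.60, 0.10, 0.60],
])
y = np.array(["glass", "ground_glass", "glass", "ground_glass"])

clf = RandomForestClassifier(n_estimators=100, random_state=0).fit(X, y)
print(clf.predict([[0.15, 0.05, 0.90]]))  # expected: "glass"
```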
    ... meaning that a 100 nm laser wavelength range can yield a scanning range in θ of 5-10°. However, in combination with other LIDAR-compatible laser requirements such as high output power (typically >100 mW), short pulse width (<10 ns for time-of-flight systems 21,22 ) or narrow linewidths (<100 kHz for heterodyne detection schemes 23,24 ) and fundamental mode operation (for efficient optical coupling to the photonic circuit), obtaining a laser that is efficient over such a wide spectral range is very challenging at low unit cost. ...
    Article
    Three dimensional sensing is essential in order that machines may operate in and interact with complex dynamic environments. Solid-state beam scanning devices are seen as being key to achieving required system specifications in terms of sensing range, resolution, refresh rate and cost. Integrated optical phased arrays fabricated on silicon wafers are a potential solution, but demonstrated devices with system-level performance currently rely on expensive widely tunable source lasers. Here, we combine silicon nitride photonics and micro-electromechanical system technologies, demonstrating the integration of an active photonic beam-steering circuit into a piezoelectric actuated micro cantilever. An optical phased array, operating at a wavelength of 905 nm, provides output beam scanning over a range of 17° in one dimension, while the inclination of the entire circuit and consequently the angle of the output beam in a second dimension can be independently modified over a range of up to 40° using the piezoelectric actuator.
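For context, the beam-steering principle of an optical phased array can be sketched with the standard linear phase ramp applied across the emitters; only the 905 nm wavelength comes from the abstract, while the element pitch and array size below are assumed for illustration.

```python
# Sketch of optical phased array beam steering: a linear phase ramp across the
# emitters steers the main lobe. Element pitch and array size are placeholder
# values, not the device parameters from the paper.
import numpy as np

wavelength = 905e-9     # m (operating wavelength mentioned in the abstract)
pitch = 2.0e-6          # m, emitter spacing (assumed)
n_elements = 32         # number of emitters (assumed)

def element_phases(steer_angle_deg: float) -> np.ndarray:
    """Per-element phase (rad) steering the main lobe to steer_angle_deg."""
    k = 2 * np.pi / wavelength
    dphi = k * pitch * np.sin(np.radians(steer_angle_deg))  # phase step between neighbours
    return (np.arange(n_elements) * dphi) % (2 * np.pi)

print(element_phases(8.5)[:4])  # phases of the first four emitters for 8.5° steering
```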
    ... The acquisition of point clouds has become increasingly cost- and time-efficient due to advances in remote sensing technology, such as light detection and ranging (LiDAR) [64], photogrammetric approaches for reconstructing point clouds from image and video data [36], as well as particle simulations and systems [34,86,112]. These advances have led to increased availability of point clouds for a variety of entities and scenes. ...
    ... Technically, this is not correct, as depth cameras also mostly use LiDAR technology, which refers to the emission of laser beams and the measurement of the time-of-flight (pulsed-light technology) and possibly also the phase-shift (continuous-wave technology) using a corresponding receiver. Horaud et al. provide a good overview of range sensing technology and available hardware [64]. For our consideration, a differentiation between laser-scanned point clouds originating from a mechanical scanning process and those obtained by depth cameras ("scannerless devices" as Horaud et al. denote them, referring to the absence of a mechanical scanning procedure) seems helpful. ...
    Article
    Point clouds are widely used as a versatile representation of 3D entities and scenes for all scale domains and in a variety of application areas, serving as a fundamental data category to directly convey spatial features. However, due to point sparsity, lack of structure, irregular distribution, and acquisition-related inaccuracies, results of point cloud visualization are often subject to visual complexity and ambiguity. In this regard, non-photorealistic rendering can improve visual communication by reducing the cognitive effort required to understand an image or scene and by directing attention to important features. In the last 20 years, this has been demonstrated by various non-photorealistic rendering approaches that were proposed to target point clouds specifically. However, they do not use a common language or structure for assessment which complicates comparison and selection. Further, recent developments regarding point cloud characteristics and processing, such as massive data size or web-based rendering are rarely considered. To address these issues, we present a survey on non-photorealistic rendering approaches for point cloud visualization, providing an overview of the current state of research. We derive a structure for the assessment of approaches, proposing seven primary dimensions for the categorization regarding intended goals, data requirements, used techniques, and mode of operation. We then systematically assess corresponding approaches and utilize this classification to identify trends and research gaps, motivating future research in the development of effective non-photorealistic point cloud rendering methods.
    ... The ToF sensors considered in this work rely primarily on frequency hopping for eliminating multicamera interference. This frequency-hopping approach generally operates by transmitting IR light at different frequencies that vary over time [63][72][78][79]. Due to hardware constraints, a limited number of frequencies are typically available for a device to use. ...
    Thesis
    Three-dimensional (3D) sensors provide the ability to perform contactless measurements of objects and distances that are within their field of view. Unlike traditional two-dimensional (2D) cameras, which only provide RGB data about objects within a scene, 3D sensors are able to directly provide depth information for objects within a scene. Of these 3D sensing technologies, Time-of-Flight (ToF) sensors are becoming more compact which allows them to be more easily integrated with other devices and to find use in more applications. ToF sensors also provide several benefits over other 3D sensing technologies that increase the types of applications where ToF sensors can be used. For example, over the last decade, ToF sensors have become more widely used in applications such as 3D scanning, drone positioning, robotics, logistics, structural health monitoring, and road surveillance. To further extend the applications where ToF sensors can be employed, this work focuses on how to improve the performance of ToF sensors by suppressing and mitigating the effects of noise artifacts that are associated with ToF sensors. These issues include multipath interference, motion blur, and multicamera interference in 3D depth maps and point clouds.
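The frequency-hopping strategy mentioned in the citing passage above can be sketched as each camera drawing its modulation frequency per frame from a small, hardware-limited set; the frequency values and the per-frame scheme below are illustrative assumptions, not the sensors' actual firmware behaviour.

```python
# Hedged sketch of frequency hopping against multicamera ToF interference:
# each camera draws its modulation frequency per frame from a small fixed set,
# so two cameras rarely modulate at the same frequency at the same time.
# Frequency values and the per-frame scheme are illustrative assumptions.
import random

AVAILABLE_FREQS_MHZ = [16.0, 20.0, 24.0, 80.0]  # limited set due to hardware constraints

def hop_schedule(camera_seed: int, n_frames: int) -> list[float]:
    rng = random.Random(camera_seed)            # per-camera seed decorrelates cameras
    return [rng.choice(AVAILABLE_FREQS_MHZ) for _ in range(n_frames)]

cam_a = hop_schedule(camera_seed=1, n_frames=8)
cam_b = hop_schedule(camera_seed=2, n_frames=8)
collisions = sum(a == b for a, b in zip(cam_a, cam_b))
print(cam_a, cam_b, f"collisions: {collisions}/8")
```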
    ... Furthermore, they assume that people are captured with a high pixel resolution, which is not the case when imaging over longer distances (unless the field-of-view (FoV) is limited). We therefore adopt direct Time-of-Flight (dToF) imaging, based on a Single-Photon Avalanche Diode (SPAD) sensor, which estimates depth by illuminating the scene with a pulsed light source, and measuring the time of arrival of backscattered photons [22]. These sensors are well-suited for long-range LIDAR as they can obtain precise depth estimates even from low photon returns [23][24][25][26]. ...
    Article
    Single-Photon Avalanche Diode (SPAD) direct Time-of-Flight (dToF) sensors provide depth imaging over long distances, enabling the detection of objects even in the absence of contrast in colour or texture. However, distant objects are represented by just a few pixels and are subject to noise from solar interference, limiting the applicability of existing computer vision techniques for high-level scene interpretation. We present a new SPAD-based vision system for human activity recognition, based on convolutional and recurrent neural networks, which is trained entirely on synthetic data. In tests using real data from a 64×32 pixel SPAD, captured over a distance of 40 m, the scheme successfully overcomes the limited transverse resolution (in which human limbs are approximately one pixel across), achieving an average accuracy of 89% in distinguishing between seven different activities. The approach analyses continuous streams of video-rate depth data at a maximal rate of 66 FPS when executed on a GPU, making it well-suited for real-time applications such as surveillance or situational awareness in autonomous systems.
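The dToF depth estimation step referenced above (timing backscattered photons with a SPAD) can be sketched as histogramming photon arrival times over many pulses and converting the peak bin to a distance; the bin width and the synthetic histogram below are assumptions for illustration.

```python
# Sketch of direct ToF depth estimation for a SPAD pixel: accumulate photon
# time-of-arrival counts into a histogram, locate the peak (signal return)
# above the background, and convert the peak time to distance. The bin width
# and the synthetic histogram are illustrative assumptions.
import numpy as np

C = 299_792_458.0
bin_width_s = 500e-12                  # 500 ps timing bins (assumed)
histogram = np.random.poisson(2, 600)  # background photons (e.g., solar interference)
histogram[533] += 80                   # injected signal return for the example

peak_bin = int(np.argmax(histogram))
depth_m = C * (peak_bin * bin_width_s) / 2.0
print(f"peak bin {peak_bin} -> depth ≈ {depth_m:.1f} m")  # ≈ 40 m for bin 533
```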
    ... The selection of the optical technique for distance measurement must consider the demanding performance requirements specific to the application, which are quite stringent. When a guitar is played, the player's fingers shorten the distance between the string and the fingerboard through the act of pressing. ...
    ... In addressing the specified requirements, one potential avenue worth exploring involves the utilization of a time-of-flight measurement system [2][3][4][5], employing either pulsed or continuous-wave lasers. However, this approach, commonly employed for long-distance measurements, proves unsuitable in the present context due to its inherent limitations in resolution and high associated costs. ...
    Article
    To attain a direct MIDI output from an electric guitar, we devised and implemented a sophisticated laser sensor system capable of measuring finger positions. This sensor operates on the principle of optical triangulation, employing six lasers and seven position-sensing detectors that are time-multiplexed. The speed and precision of this sensor system meet the necessary criteria for creating an electric guitar with a direct digital output, perfectly satisfying the application’s requirements.
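The optical triangulation principle behind this sensor can be sketched with the standard relation between the imaged spot position and the target distance; the baseline, focal length, and detector reading below are assumed values, not the instrument's actual parameters.

```python
# Sketch of laser triangulation ranging: a laser spot on the target is imaged
# onto a position-sensing detector (PSD); the spot displacement on the detector
# encodes the target distance. Baseline, focal length, and the PSD reading are
# assumed values (simple geometry, laser beam parallel to the optical axis).
def triangulation_distance(baseline_m: float, focal_length_m: float, spot_pos_m: float) -> float:
    """Distance to the target for a laser beam parallel to the camera axis."""
    return baseline_m * focal_length_m / spot_pos_m

# 30 mm baseline, 8 mm lens, spot imaged 2.4 mm off the detector centre
print(triangulation_distance(0.030, 0.008, 0.0024))  # -> 0.1 m (100 mm)
```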
    ... There is a large literature on the acquisition of 3D geometry [Lanman and Taubin 2009b], using a variety of hardware solutions including touch probes [Dobosz and Woźniak 2005; Ferreira et al. 2013], time-of-flight cameras [Gao et al. 2015; Horaud et al. 2016; Tang et al. 2010], structured light [Salvi et al. 2004; Zhang 2018], and stereo reconstruction [Furukawa and Hernández 2015]. We focus our review on structured light scanning (SLS) methods as they are popular due to their quality and accuracy tradeoffs and they are the closest to our method. ...
    Preprint
    We introduce a novel calibration and reconstruction procedure for structured light scanning that foregoes explicit point triangulation in favor of a data-driven lookup procedure. The key idea is to sweep a calibration checkerboard over the entire scanning volume with a linear stage and acquire a dense stack of images to build a per-pixel lookup table from colors to depths. Imperfections in the setup, lens distortion, and sensor defects are baked into the calibration data, leading to a more reliable and accurate reconstruction. Existing structured light scanners can be reused without modifications while enjoying the superior precision and resilience that our calibration and reconstruction algorithms offer. Our algorithm shines when paired with a custom-designed analog projector, which enables 1-megapixel high-speed 3D scanning at up to 500 fps. We describe our algorithm and hardware prototype for high-speed 3D scanning and compare them with commercial and open-source structured light scanning methods.
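A much-simplified sketch of the per-pixel lookup idea described in this abstract follows: record, for each pixel, the codes observed while a calibration target sweeps through known depths, then map an observed code back to depth by interpolation. The array shapes, the scalar-code simplification, and the use of np.interp are assumptions, not the authors' implementation.

```python
# Much-simplified sketch of per-pixel lookup-table calibration for structured
# light: sweep a calibration target through known depths, record the observed
# per-pixel code (here a single scalar per pixel), then map an observed code
# back to depth by interpolation at reconstruction time.
import numpy as np

def build_lut(code_stack: np.ndarray, depths: np.ndarray):
    """code_stack: (n_depths, H, W) observed codes; depths: (n_depths,) stage positions."""
    return code_stack, depths   # the 'LUT' is simply the calibration samples per pixel

def reconstruct(observed: np.ndarray, lut) -> np.ndarray:
    code_stack, depths = lut
    h, w = observed.shape
    depth_map = np.empty((h, w))
    for y in range(h):
        for x in range(w):
            samples = code_stack[:, y, x]        # codes seen at this pixel over the sweep
            order = np.argsort(samples)          # np.interp needs increasing abscissae
            depth_map[y, x] = np.interp(observed[y, x], samples[order], depths[order])
    return depth_map

# Toy example: codes increase linearly with depth, plus a per-pixel offset
depths = np.linspace(100.0, 500.0, 9)            # mm, known stage positions
codes = depths[:, None, None] / 500.0 + np.random.rand(4, 4)  # (9, 4, 4) calibration stack
lut = build_lut(codes, depths)
print(reconstruct(codes[4], lut))                # recovers ~300 mm everywhere
```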
    ... Point clouds can efficiently represent diverse 3D entities, capturing various shapes, topologies, and scales. Recent advances in remote sensing technologies, particularly LiDAR (Horaud et al., 2016) and photogrammetry (Westoby et al., 2012), have made point cloud acquisition more accessible and efficient. Consequently, point clouds have become an integral part of spatial computational models and digital twins, serving various sectors such as autonomous driving (Li et al., 2021) or infrastructure management (Mirzaei et al., 2022). ...
    Conference Paper
    3D point clouds are a widely used representation for surfaces and object geometries. However, their visualization can be challenging due to point sparsity and acquisition inaccuracies, leading to visual complexity and ambiguity. Non-photorealistic rendering (NPR) addresses these challenges by using stylization techniques to abstract from certain details or emphasize specific areas of a scene. Although NPR effectively reduces visual complexity, existing approaches often apply uniform styles across entire point clouds, leading to a loss of detail or saliency in certain areas. To address this, we present a novel segment-based NPR approach for point cloud visualization. Utilizing prior point cloud segmentation, our method applies distinct rendering styles to different segments, enhancing scene understanding and directing the viewer’s attention. Our emphasis lies in integrating aesthetic and expressive elements through image-based artistic rendering, such as watercolor or cartoon filtering. To combine the per-segment images into a consistent final image, we propose a user-controllable depth inpainting algorithm. This algorithm estimates depth values for pixels that lacked depth information during point cloud rendering but received coloration during image-based stylization. Our approach supports real-time rendering of large point clouds, allowing users to interactively explore various artistic styles.
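The compositing step above requires depth estimates for pixels that were colored by stylization but received no depth during point rendering; the sketch below uses a simple local average of valid depths as an illustrative stand-in for the paper's user-controllable inpainting algorithm.

```python
# Illustrative stand-in for depth inpainting: fill pixels that received color
# during stylization but no depth during point rendering with the average of
# valid depths in a small window. This is NOT the paper's user-controllable
# algorithm, only a minimal sketch of the problem it solves.
import numpy as np

def inpaint_depth(depth: np.ndarray, valid: np.ndarray, radius: int = 2) -> np.ndarray:
    """depth: (H, W) depth values; valid: boolean mask of pixels that have a depth."""
    out = depth.copy()
    h, w = depth.shape
    for y, x in zip(*np.where(~valid)):
        y0, y1 = max(0, y - radius), min(h, y + radius + 1)
        x0, x1 = max(0, x - radius), min(w, x + radius + 1)
        window = depth[y0:y1, x0:x1][valid[y0:y1, x0:x1]]
        if window.size:
            out[y, x] = window.mean()        # simple local estimate
    return out

depth = np.full((5, 5), 10.0)
valid = np.ones((5, 5), bool)
valid[2, 2] = False                          # one stylized pixel without depth
print(inpaint_depth(depth, valid)[2, 2])     # -> 10.0
```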
    ... Recently, however, the use of 3D data, mainly in point cloud representation, has become more frequent. This is caused by the increased availability of sensing devices such as Light Detection and Ranging (LiDAR); even mobile phones with a time-of-flight (TOF) [2] depth camera feature enable easy point cloud acquisition [1]. A point cloud is a collection of points where each point has certain features. ...
    Article
    Many studies stack an SVM and a neural network by utilizing the SVM as the output layer of the neural network. However, those studies use a kernel before the SVM, which is unnecessary. In this study, we propose an alternative to the kernel SVM and prove why a kernel is unnecessary when the SVM is stacked on top of a neural network. The experiments are performed on Dublin City LiDAR data. We stack PointNet and an SVM, but instead of using a kernel, we simply utilize the last hidden layer of the PointNet. As an alternative to the SVM kernel, this study performs dimension expansion by increasing the number of neurons in the last hidden layer. We prove that expanding the dimension by increasing the number of neurons in the last hidden layer can increase the F-Measure score, and that it performs better than an RBF kernel both in terms of F-Measure score and computation time.
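The stacking idea in this abstract (a linear SVM on a widened last hidden layer, in place of a kernel) can be sketched as follows; the feature extractor is a generic stand-in for PointNet and the data are synthetic.

```python
# Sketch of the stacking idea: use the (widened) last hidden layer of a network
# as features for a linear SVM, instead of applying a kernel. The feature
# extractor below is a generic stand-in for PointNet and the data are synthetic.
import numpy as np
from sklearn.svm import LinearSVC

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 16))                  # stand-in for per-object input features
y = (X[:, :3].sum(axis=1) > 0).astype(int)      # synthetic labels

# "Last hidden layer": a fixed random projection widened to 256 units
# (dimension expansion in place of a kernel), followed by a ReLU.
W = rng.normal(size=(16, 256))
hidden = np.maximum(X @ W, 0.0)

svm = LinearSVC(C=1.0, max_iter=5000).fit(hidden, y)
print("train accuracy:", svm.score(hidden, y))
```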
    ... Specifically, there is an increasing demand for short- to medium-range 3D cameras to be used in low-cost robot platforms. The go-to solutions for 3D perception have mainly been light detection and ranging (lidar) systems, which use the time-of-flight (ToF) of a laser pulse to determine distance [1,2]. These devices can be scanning single-point systems or imaging systems based on a ToF sensor array. ...
    Article
    A 3D camera based on laser light absorption of atmospheric oxygen at 761 nm is presented. The camera uses a current-tunable single frequency distributed feedback laser for active illumination and a silicon-based image sensor as a receiver. This simple combination enables capturing 3D images with a compact and mass producible set-up. The 3D camera is validated in indoor environments. Distance accuracy of better than 4 cm is demonstrated between 4 m and 10 m distances. Future potential and improvements are discussed.
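The absorption-based ranging principle can be illustrated with the Beer-Lambert law: the ratio of received intensity on the 761 nm oxygen line to the intensity off the line gives the optical path length through air; the absorption coefficient and intensity values below are placeholder assumptions, not the paper's calibration.

```python
# Sketch of absorption-based ranging (Beer-Lambert law): the ratio of received
# intensity on the 761 nm oxygen line to the intensity off the line gives the
# round-trip path length through air. The absorption coefficient and intensity
# values are placeholder assumptions, not the paper's calibration.
import math

alpha_on_line = 2.6e-4      # 1/m, assumed effective O2 absorption coefficient at 761 nm

def path_length(i_on: float, i_off: float, alpha: float = alpha_on_line) -> float:
    """One-way distance from the on-/off-line intensity ratio (round trip / 2)."""
    round_trip = -math.log(i_on / i_off) / alpha
    return round_trip / 2.0

print(path_length(i_on=0.9948, i_off=1.0))   # ≈ 10 m one-way for this ratio
```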