Figure - available from: Machines
Schematic of epipolar geometry.

Source publication
Article
Full-text available
Vision-based three-dimensional (3D) shape measurement techniques have been widely applied over the past decades in numerous applications owing to their high precision, high efficiency, and non-contact nature. Recently, great advances in computing devices and artificial intelligence have facilitated the development of vision-based measureme...

Citations

... With the rapid development of 3D measurement technologies such as laser scanning, oblique photogrammetry, monocular and binocular measurement, and depth cameras, point clouds have gradually been applied in various fields, including simultaneous localization and mapping, underground tunnel inspection, autonomous driving, cultural relic restoration, and medical 3D image construction [1][2][3][4][5][6], among others. ...
Article
Full-text available
Traditional iterative closest point (ICP) registration algorithms are sensitive to initial positions and easily fall into locally optimal solutions. To address this problem, this study puts forward a point cloud registration algorithm based on adaptive neighborhood eigenvalue loading ratios. In the algorithm, the resolution of the point cloud is first calculated and used as an adaptive basis to determine the raster widths and the radii of the spherical neighborhoods in raster filtering; the adaptive raster filtering is then applied to the point cloud for denoising, while the eigenvalue loading ratios of point neighborhoods are calculated to extract and match contour feature points; subsequently, sample consensus initial alignment (SAC-IA) is used to carry out coarse registration; and finally, fine registration is performed with KD-tree-accelerated ICP. The experimental results demonstrate that the feature points extracted with this method are highly representative while requiring only 35.6% of the time consumed by other feature point extraction algorithms. Additionally, in noisy and low-overlap scenarios, the registration error of this method can be kept at the 0.1 mm level, with the registration speed improved by 56% on average over that of other algorithms. Taken together, the method not only ensures strong robustness in registration but also delivers high registration accuracy and efficiency.
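For readers who want to experiment with this coarse-to-fine pattern, the sketch below uses the open-source Open3D library, with FPFH features standing in for the paper's eigenvalue-loading-ratio descriptors; voxel sizes, thresholds, and criteria are illustrative assumptions, not the authors' parameters, and the exact API signatures vary slightly across Open3D versions.

```python
# Minimal coarse-to-fine registration sketch (not the paper's implementation):
# RANSAC over FPFH feature matches stands in for SAC-IA coarse alignment,
# followed by point-to-point ICP refinement (Open3D uses KD-trees internally).
import open3d as o3d

def preprocess(pcd, voxel):
    # Downsample, estimate normals, and compute FPFH descriptors.
    down = pcd.voxel_down_sample(voxel)
    down.estimate_normals(
        o3d.geometry.KDTreeSearchParamHybrid(radius=2 * voxel, max_nn=30))
    fpfh = o3d.pipelines.registration.compute_fpfh_feature(
        down, o3d.geometry.KDTreeSearchParamHybrid(radius=5 * voxel, max_nn=100))
    return down, fpfh

def register(source, target, voxel=0.005):
    src, src_fpfh = preprocess(source, voxel)
    tgt, tgt_fpfh = preprocess(target, voxel)
    # Coarse alignment: RANSAC on feature correspondences.
    coarse = o3d.pipelines.registration.registration_ransac_based_on_feature_matching(
        src, tgt, src_fpfh, tgt_fpfh, True, voxel * 1.5,
        o3d.pipelines.registration.TransformationEstimationPointToPoint(False), 3,
        [], o3d.pipelines.registration.RANSACConvergenceCriteria(100000, 0.999))
    # Fine alignment: ICP seeded with the coarse transformation.
    fine = o3d.pipelines.registration.registration_icp(
        source, target, voxel * 0.4, coarse.transformation,
        o3d.pipelines.registration.TransformationEstimationPointToPoint())
    return fine.transformation
```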
... Recent advances in precision manufacturing and semiconductor packaging have been driving a growing demand for online 3D automated optical inspection [1]. Fringe projection profilometry (FPP) [2] is an important optical 3D measurement technique for quality control on production lines, since it offers fast speed, full-field data acquisition, independence from the measured features, and adaptability to complex industrial environments. As the parts and devices involved in the above industries become increasingly precise and miniaturized, FPP faces challenges in small-field-of-view measurement, especially when measuring surfaces with intricate structures and high dynamic range. ...
... , N-1; (x, y) represents the pixel coordinate; A(x, y) is the background intensity; B(x, y) is the modulated intensity; φ(x, y) represents the phase value, which is related to the fringe contrast and surface reflectivity. It can be calculated by Eq. (2), and at least three phase-shifted images are required for the phase calculation. ...
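For context, this excerpt refers to the standard N-step phase-shifting model; a hedged reconstruction of the equations it cites (the exact notation of the source's Eq. (1) and Eq. (2) may differ slightly) is:

```latex
I_n(x,y) = A(x,y) + B(x,y)\cos\!\big(\varphi(x,y) - 2\pi n/N\big), \qquad n = 0, 1, \ldots, N-1,
```

from which the wrapped phase is recovered by least squares as

```latex
\varphi(x,y) = \arctan\frac{\sum_{n=0}^{N-1} I_n(x,y)\,\sin(2\pi n/N)}{\sum_{n=0}^{N-1} I_n(x,y)\,\cos(2\pi n/N)},
```

which has three unknowns per pixel (A, B, φ) and therefore needs at least N = 3 phase-shifted images.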
Article
Full-text available
Fringe projection profilometry plays an important role in quality control on production lines. However, it faces challenges in the measurement of objects with intricate structures and high dynamic range, as encountered in precision manufacturing and semiconductor packaging. In this paper, a multi-view fringe projection profilometry system, which deploys a vertical telecentric projector and four oblique tilt-shift cameras, is presented to address the “blind spots” caused by shadowing, occlusion and local specular reflection. A flexible and accurate system calibration method is proposed, in which a corrected pinhole imaging model is used to calibrate the telecentric projection and the unified calibration is performed by bundle adjustment. Experimental results show that the repeated 3D measurement error and standard deviation are no more than 10 μm within a measurable volume of 70 × 40 × 20 mm³. Furthermore, a group of experiments proves that the developed system can achieve complete and accurate 3D measurement of high-dynamic-range surfaces with complex structures.
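The unified bundle-adjustment calibration mentioned above can be summarized, in its generic form, as a minimization of total reprojection error over all view parameters and calibration points (a standard formulation; the paper's corrected pinhole model for the telecentric projector defines its own projection function π):

```latex
\min_{\{K_i, R_i, t_i\},\, \{X_j\}} \; \sum_{i} \sum_{j} \big\lVert\, x_{ij} - \pi(K_i, R_i, t_i, X_j) \,\big\rVert^2
```

where x_ij is the observed image point of calibration point X_j in view i, and (K_i, R_i, t_i) are the intrinsic and extrinsic parameters of that view.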
... Phase measuring profilometry (PMP) is a technique that has been widely used for three-dimensional (3D) shape measurement [1] in a wide range of areas, including industrial inspection, reverse engineering, cultural heritage preservation, bionic design, human body modeling, and medical diagnosis [2][3][4][5][6]. ...
Article
Full-text available
Phase measuring profilometry (PMP) has been widely used in industry for three-dimensional (3D) shape measurement. However, phase information is often lost due to image saturation resulting from highly reflective object surfaces, leading to subsequent 3D reconstruction errors. To address this problem, we propose an adaptive phase retrieval algorithm that accurately fits the sinusoidal fringes damaged by high reflection in the saturated regions to retrieve the lost phase information. In the proposed method, saturated regions are first identified through a minimum-error thresholding technique to narrow down the regions of interest and reduce computation costs. Then, images with differing exposures are fused to locate the peak-valley coordinates of the fitted sinusoidal fringes, and the corresponding values of the peak-valley pixels are obtained with a least squares method. Finally, an adaptive piecewise sine function is constructed to recover the sinusoidal fringe pattern by fitting the pattern intensity distribution, and existing PMP technology is used to obtain phase information from the retrieved sinusoidal fringes. To apply the developed method, only one (or two) images with different exposure times are needed. Compared with existing methods for measuring reflective objects, the proposed method has the advantages of short operation time, reduced system complexity, and low demands on hardware. The effectiveness of the proposed method is verified through two experiments. The developed methodology provides industry with an alternative way to measure highly reflective objects in a wide range of applications.
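The least-squares fitting step described above can be illustrated with a linear formulation: writing I(x) = a + b·cos(ωx) + c·sin(ωx) makes the fit linear in (a, b, c) for a known fringe frequency ω. Below is a minimal numpy sketch, assuming the fringe period is known and saturation is flagged by a simple intensity threshold (the paper instead uses minimum-error thresholding and peak-valley fitting of a piecewise sine):

```python
import numpy as np

def fit_fringe_row(intensity, period_px, sat_level=250):
    """Fit I(x) = a + b*cos(w x) + c*sin(w x) to the unsaturated pixels of
    one image row, then evaluate the model to replace saturated samples."""
    x = np.arange(intensity.size)
    w = 2.0 * np.pi / period_px
    ok = intensity < sat_level          # keep unsaturated pixels only
    # Linear least squares in (a, b, c) over the unsaturated samples.
    A = np.column_stack([np.ones(ok.sum()), np.cos(w * x[ok]), np.sin(w * x[ok])])
    coef, *_ = np.linalg.lstsq(A, intensity[ok].astype(float), rcond=None)
    model = coef[0] + coef[1] * np.cos(w * x) + coef[2] * np.sin(w * x)
    out = intensity.astype(float).copy()
    out[~ok] = model[~ok]               # retrieve the lost (saturated) samples
    return out
```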
... Advances in optical sensing and computing technologies have improved the resolution and speed of machine vision (MV) technologies [1]. The quality of information conveyed by vision systems has also improved, paving the way for the adoption of MV as a fundamental technology in the advanced manufacturing paradigm of Industry 4.0 [2][3][4]. Previously time-consuming and complex measurement verification processes can now be automated and integrated into Industry 4.0 information architectures [5,6]. However, it is necessary to ensure that new strategies in the automated measurement and data management processes remain compliant with existing measurement verification standards. ...
... MV technologies and algorithms enable the use of cameras for carrying out vision-based measurements in industrial environments. Driven by the desire to automate inspection, improve quality and reduce cost [13], vision-based measurement has progressed to become a core technology in intelligent manufacturing industries [2,6]. At the centre of vision-based measurement systems is the projective mapping of 3D geometric shapes to 2D images, in which depth information is lost. ...
Article
Full-text available
New developments in vision algorithms prioritise identification and perception over accurate coordinate measurement, owing to the complex problem of resolving object form and pose from images. Consequently, many vision algorithms for coordinate measurement rely on known targets of primitive form, typically planar targets with coded patterns placed in the field of view of the vision system. Although planar targets are commonly used, they have some drawbacks, including calibration difficulties, limited viewing angles, and increased localisation uncertainties. While traditional tactile coordinate measurement systems (CMSs) adopt spherical targets as the de facto artefacts for calibration and 3D registration, the use of spheres in vision systems is limited to occasional performance verification tasks. Despite being simple to calibrate and having no orientation-dependent limitations, sphere targets are infrequently used for vision-based in-situ coordinate metrology due to the lack of efficient multi-view vision algorithms for accurate sphere measurement. Here, we propose an edge-based vision measurement system that uses a multi-sphere artefact and new measurement models to extract sphere information and derive 3D coordinate measurements. Using the spatially encoded sphere identities embedded in the artefact, a sphere matching algorithm is developed to support pose determination and tracking. The proposed algorithms are evaluated for robustness, measurement quality and computational speed. At a range of 500 mm to 750 mm, sphere size errors of less than 25 μm and sphere-to-sphere length errors of less than 100 μm are achievable. In addition, the proposed algorithms are shown to improve robustness by up to a factor of four and to boost computational speed.
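Sphere measurement of the kind described above ultimately reduces to estimating a centre and radius from observed points; a minimal linear least-squares sphere fit (a textbook algebraic fit, not the authors' edge-based multi-view models) looks like:

```python
import numpy as np

def fit_sphere(points):
    """Algebraic least-squares sphere fit. Expanding |p - c|^2 = r^2 gives
    |p|^2 = 2 c·p + (r^2 - |c|^2), which is linear in c and k = r^2 - |c|^2."""
    P = np.asarray(points, dtype=float)        # (N, 3) observed points
    A = np.column_stack([2.0 * P, np.ones(len(P))])
    b = (P ** 2).sum(axis=1)
    sol, *_ = np.linalg.lstsq(A, b, rcond=None)
    centre = sol[:3]
    radius = np.sqrt(sol[3] + centre @ centre)
    return centre, radius
```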
... 24 The binocular stereo vision 3D-imaging method, which integrates object images from left- and right-eye views, has been developed over the past two decades, boasting the merits of non-contact operation, low cost, and high precision. [25][26][27] Many 3D surface imaging techniques have been developed, such as stereoscopic vision, 28 structured light, 29 and time-of-flight, 30 which are capable of offering depth information on a scene and thus fully characterizing the spatial distribution of an object, such as blood vessel structures or tumor vessels. Later, the potential of NIR-I stereo imaging for visualizing 3D blood vessel structures in vivo was demonstrated. ...
Article
Full-text available
Significance: Optical imaging in the second near-infrared (NIR-II, 1000 to 1700 nm) region is capable of deep tumor vascular imaging due to low light scattering and low autofluorescence. Non-invasive real-time NIR-II fluorescence imaging is instrumental in monitoring tumor status. Aim: Our aim is to develop an NIR-II fluorescence rotational stereo imaging system for 360-deg three-dimensional (3D) imaging of whole-body blood vessels, tumor vessels, and the 3D contour of mice. Approach: Our study combined an NIR-II camera with a 360-deg rotational stereovision technique for tumor vascular imaging and 3D surface contouring of mice. Moreover, self-made NIR-II fluorescent polymer dots were applied for high-contrast NIR-II vascular imaging, along with a 3D blood vessel enhancement algorithm for acquiring high-resolution 3D blood vessel images. The system was validated with a custom-made 3D-printed phantom and in vivo experiments on 4T1 tumor-bearing mice. Results: The results showed that the NIR-II 3D 360-deg tumor blood vessels and mouse contour could be reconstructed with 0.15 mm spatial resolution, 0.3 mm depth resolution, and 5 mm imaging depth in an ex vivo experiment. Conclusions: This NIR-II 3D 360-deg rotational stereo imaging system was applied for the first time to small-animal tumor blood vessel imaging and 3D surface contour imaging, demonstrating its capability of reconstructing tumor blood vessels and the mouse contour. The 3D imaging system can therefore be instrumental in monitoring tumor therapy effects.
... In computer vision, stereo vision has also been widely developed. By simulating the principles of human binocular vision, and combining camera models, triangulation, and depth map methods, two cameras can be used to obtain the distance between an object and the camera, thus enabling, for example, 3D morphometry [103]. ...
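The two-camera distance recovery mentioned in this excerpt follows the standard rectified-stereo triangulation: with focal length f (in pixels), baseline B, principal point (c_x, c_y), and disparity d = x_l − x_r between the left and right image columns of the same point,

```latex
Z = \frac{f\,B}{d}, \qquad X = \frac{(x_l - c_x)\,Z}{f}, \qquad Y = \frac{(y - c_y)\,Z}{f}.
```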
Preprint
Full-text available
This article presents a detailed review and categorization of the marker displacement method (MDM) used in vision-based tactile sensors. Vision-based tactile sensors have proven to be a promising solution for robot tactile perception. Among such sensors, MDM is one of the most commonly used contact characterization and extraction methods. It uses visual approaches to obtain contact deformation and achieves multimodal tactile perception using physical models and post-processing algorithms. In recent years, many tactile sensors using MDM have been developed. However, the existing research does not strictly distinguish between the different types of methods, which are uniformly grouped under MDM. Without this differentiation, there is a lack of systematic and comprehensive guidance for analyzing and optimizing the characteristics of MDM and selecting the most suitable method. This article is the first to classify MDM into three typical categories from the dimensionality perspective: 2D MDM, 2.5D MDM, and 3D MDM. 2D MDM relies only on a monocular camera to acquire the marker array's 2D displacement field. 2.5D MDM supplements 2D MDM with selected indirect features reflecting the location of the markers in the third dimension. 3D MDM employs a multi-camera system and can obtain the 3D displacement field using the common stereo vision method. Based on the latest literature, we compare the principles, characteristics, advantages and disadvantages, and applications of the three categories in detail. This work can provide a valuable reference for researchers interested in applying MDM in fields such as vision-based tactile sensors.
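For the 2D MDM category described above, the marker array's 2D displacement field can be obtained with ordinary optical tracking; below is a minimal sketch using OpenCV blob detection and pyramidal Lucas-Kanade flow (illustrative parameters, not taken from any of the reviewed sensors):

```python
import cv2
import numpy as np

def marker_displacements(ref_gray, cur_gray):
    """Track marker dots from a reference frame to the current frame and
    return marker positions and their 2D displacement vectors (2D MDM)."""
    detector = cv2.SimpleBlobDetector_create()   # default params find dark dots
    keypoints = detector.detect(ref_gray)
    p0 = np.array([k.pt for k in keypoints], dtype=np.float32).reshape(-1, 1, 2)
    # Pyramidal Lucas-Kanade optical flow from reference to current frame.
    p1, status, _ = cv2.calcOpticalFlowPyrLK(
        ref_gray, cur_gray, p0, None, winSize=(21, 21), maxLevel=3)
    ok = status.ravel() == 1                     # keep successfully tracked dots
    return p0[ok].reshape(-1, 2), (p1[ok] - p0[ok]).reshape(-1, 2)
```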
... On the other hand, non-contact methods provide more advantages [10]: they do not damage the bridge surface with equipment [11], they offer high precision, high efficiency and high flexibility [12], and they can be operated in real time [13]. Non-contact methods usually utilize optical devices such as laser beams [14], radar [15,16], acoustics [17], thermal models [18], and image-based measurements [19,20]. ...
Article
Full-text available
Burgeoning off-the-shelf Digital Single-Lens Reflex (DSLR) cameras have been gaining attention as a fast and affordable tool for deformation monitoring of man-made engineering structures. When sub-millimetre accuracy is sought, their usage must be considered deliberately, since lingering systematic errors in the imaging process plague such non-metric cameras. This paper discusses a close-range photogrammetric method for structural deformation monitoring of a bridge using a digital DSLR camera. The bridge is located in Malang Municipality, East Java province, Indonesia. More than 100 images of the bridge's concrete pillars were photographed in a convergent photogrammetric network at distances varying between 5 m and 30 m in each epoch. The coordinates of around 550 retro-reflective markers attached to the pillar facades were then calculated using the self-calibrating bundle adjustment method. The coordinate differences of the markers between the two consecutive epochs were detected with magnitudes between 0.03 mm and 6 mm at a sub-millimetre precision level. However, using global congruency testing and localization of deformation testing, it is confirmed that the bridge pillar structures remained stable between those epochs.
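The epoch-to-epoch comparison described above can be illustrated with a simplified per-marker significance test: flag a marker as displaced when its coordinate difference exceeds k times the combined precision of the two epochs (a stand-in sketch; the paper applies the more rigorous global congruency and deformation localization tests):

```python
import numpy as np

def displaced_markers(x1, x2, sigma1, sigma2, k=2.0):
    """Per-marker stability check between two epochs.
    x1, x2: (N, 3) marker coordinates per epoch; sigma1, sigma2: (N,)
    coordinate precisions. Returns a mask of significantly moved markers."""
    d = np.linalg.norm(np.asarray(x2) - np.asarray(x1), axis=1)
    sigma_d = np.sqrt(np.asarray(sigma1) ** 2 + np.asarray(sigma2) ** 2)
    return d > k * sigma_d   # displacement exceeds k-sigma of its uncertainty
```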
... Nevertheless, the spatial resolution of those "depth from defocus" [37] techniques is limited because the blurred information cannot be fully recovered [44]. So far, they do not achieve a resolution close to the Nyquist limit of the pixel array used, as is claimed for structured light setups [45,46]. ...
... Those techniques realize single-shot, monocular 3D imaging. Although the 3D data quality they achieve, in terms of spatial resolution and accuracy, is not yet competitive with established 3D sensor systems [44,45], it is worth investigating whether their performance might be improved by tailored chromatic aberration of the lens in addition to the coded aperture. Single-shot methods could potentially surpass our proposed method regarding acquisition time. ...
Article
Full-text available
Close-range 3D sensors based on the structured light principle have a constrained measuring range due to their depth of field (DOF). Focus stacking is a method to extend the DOF. The additional time required to change the focus is a drawback in high-speed measurements. In our research, the method of chromatic focus stacking was applied to a high-speed 3D sensor with a 180 fps frame rate. The extended DOF was evaluated via the distance-dependent 3D resolution derived from the 3D-MTF of a tilted edge. The conventional DOF of 14 mm was extended to 21 mm by stacking two foci at 455 and 520 nm wavelengths. The 3D sensor allowed shape measurements with extended DOF within 44 ms.
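For orientation, the depth of field that focus stacking extends is governed, for a thin lens focused at distance u with f-number N, focal length f, and circle of confusion c, by the standard approximation (a textbook relation, not a formula from the paper):

```latex
\mathrm{DOF} \approx \frac{2\,N\,c\,u^2}{f^2} \qquad (u \gg f).
```

Stacking two focal positions concatenates two such depth ranges; partial overlap between the ranges explains why the reported extension (14 mm to 21 mm) is less than a doubling.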