Figure 2 - uploaded by Andrew Willis
(a) A functional block diagram of the embedded system showing system devices and their electrical interconnections. (b) An image of the as-built system with devices labeled, running from a 14.4 V supply provided by two 9-cell 7.2 V Ni-Cd batteries (bottom right).

Source publication
Article
Full-text available
This paper describes a novel embedded system capable of estimating 3D positions of surfaces viewed by a stereoscopic rig consisting of a pair of calibrated cameras. Novel theoretical and technical aspects of the system are tied to two aspects of the design that deviate from typical stereoscopic reconstruction systems: (1) incorporation of a 10x zo...

Contexts in source publication

Context 1
... this coordinate system, the translation vector $T = (t_x, t_y, t_z)^t = R_l^t (O_r - O_l)$ defines the position of the right camera with respect to the coordinate system defined by the left camera, and $R = R_l^t R_r$ defines the orientation of the right camera with respect to that same coordinate system. In contrast to the typical stereoscopic system, each camera in our system is mounted on a servo motor which rotates the cameras in the horizontal plane, roughly the world-coordinate $xz$-plane (see Figure 2). The intrinsic parameters of a camera within our system are referred to as the parameter vector $x_{int} = (m, k)^t$, where $m = (f, s_x, s_y, o_x, o_y)^t$ specifies the lens focal length $f$, the width and height dimensions of a pixel $(s_x, s_y)$, and the location of the image center $o = (o_x, o_y)$, defined as the point where the optical axis intersects the plane of the camera sensor. ...
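The two relations in this excerpt can be sketched directly in code. The following is a minimal numpy sketch (not the authors' implementation) that computes the right camera's pose in the left camera's frame from each camera's world-frame rotation and center; the example rig values are hypothetical.

```python
import numpy as np

def relative_extrinsics(R_l, O_l, R_r, O_r):
    """Express the right camera's pose in the left camera's frame.

    Following the relations in the text: T = R_l^t (O_r - O_l) gives the
    right camera's position, and R = R_l^t R_r its orientation, both in
    the coordinate system defined by the left camera.
    """
    T = R_l.T @ (O_r - O_l)
    R = R_l.T @ R_r
    return T, R

# Hypothetical rig: both cameras axis-aligned with the world frame,
# right camera offset 0.1 m along the world x-axis.
R_l = np.eye(3)
R_r = np.eye(3)
O_l = np.array([0.0, 0.0, 0.0])
O_r = np.array([0.1, 0.0, 0.0])
T, R = relative_extrinsics(R_l, O_l, R_r, O_r)
print(T)  # [0.1 0.  0. ]
```

With the servo-mounted cameras described in the text, `R_l` and `R_r` would change with each horizontal rotation, so these quantities must be recomputed per pose rather than calibrated once.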
Context 2
... hardware design of our system involved nine different devices integrated using a variety of communication interfaces for chip setup and data transmission. The overall system block diagram is shown in Figure 2(a) and a view of the as-built system with labeled components is shown in Figure 2(b). The system devices are listed below: ...
Context 4
... interfaces are specific to the Blackfin, such as the Serial Port Interface (SPORT) and the Parallel Peripheral Interface (PPI). Our project makes use of the PPI to transfer image data from the CMOS image sensors to the DSP system; both UART interfaces, which allow the DSP to communicate with the motorized lens controller and allow the host PC to issue commands to the DSP embedded system through a command-line interface (bash console); and the I2C interface to communicate directly with the system's integrated circuits: the Xilinx FPGA and the two Micron CMOS image sensors (see Figure 2). ...
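Configuring image-sensor registers over I2C typically means writing a big-endian register address followed by a value. As a hedged illustration of that transaction format (the register and value below are placeholders, not actual Micron sensor registers), a helper that packs the payload bytes might look like:

```python
def pack_i2c_reg16_write(reg: int, value: int) -> bytes:
    """Big-endian payload for a 16-bit-register, 16-bit-value I2C write,
    the transaction style commonly used to configure CMOS image sensor
    registers. Register addresses are sensor-specific; the values used
    below are illustrative only."""
    return bytes([(reg >> 8) & 0xFF, reg & 0xFF,
                  (value >> 8) & 0xFF, value & 0xFF])

# Hypothetical register/value pair for illustration.
payload = pack_i2c_reg16_write(0x3002, 0x0040)
print(payload.hex())  # "30020040"
```

On the actual hardware this payload would be handed to the DSP's I2C (TWI) peripheral addressed to the sensor's bus address.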
Context 5
... system is mounted on an aluminum platform and is powered by a battery (14 V in practice). Internal system components require two different voltage levels: 12 V for the V3LC lens controller and 7.2 V for all other system devices. Power distribution is accomplished via two voltage regulators that provide the 12 V and 7.2 V outputs to power the system. ...

Citations

... Stereo-photogrammetry for reconstruction of three-dimensional (3D) geometry using pairs of two-dimensional (2D) images is a non-contact, non-destructive technique used for quantitative measurement of surface geometry from optical digital images [1]. The fundamental principle used to determine 3D geometry and provide depth from 2D images is triangulation [2]. ...
... It required selection of the same point in the right and left 2D images. An epipolar line [1] from the focal point of one camera to the point of the left image assisted with point selection on the right image. 3D coordinates obtained from the triangulation were used for the measurements. ...
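The triangulation principle referenced in these excerpts can be sketched with the standard linear (DLT) method: given corresponding image points in two calibrated views, the 3D point is the least-squares solution of the stacked projection constraints. The toy projection matrices below are assumptions for illustration, not values from either paper.

```python
import numpy as np

def triangulate_dlt(P_l, P_r, x_l, x_r):
    """Linear (DLT) triangulation: recover a 3D point from its projections
    x_l, x_r in two views with 3x4 projection matrices P_l and P_r."""
    A = np.vstack([
        x_l[0] * P_l[2] - P_l[0],
        x_l[1] * P_l[2] - P_l[1],
        x_r[0] * P_r[2] - P_r[0],
        x_r[1] * P_r[2] - P_r[1],
    ])
    _, _, Vt = np.linalg.svd(A)
    X = Vt[-1]          # null-space vector of A (homogeneous 3D point)
    return X[:3] / X[3]

# Toy rig: identity left camera, right camera translated along x.
P_l = np.hstack([np.eye(3), np.zeros((3, 1))])
P_r = np.hstack([np.eye(3), np.array([[-0.1], [0.0], [0.0]])])
X_true = np.array([0.2, 0.1, 2.0])
x_l = P_l @ np.append(X_true, 1.0); x_l = x_l[:2] / x_l[2]
x_r = P_r @ np.append(X_true, 1.0); x_r = x_r[:2] / x_r[2]
print(triangulate_dlt(P_l, P_r, x_l, x_r))  # ≈ [0.2 0.1 2. ]
```

The epipolar line mentioned above constrains where the corresponding point can lie in the second image, which is what makes the manual point selection described in the excerpt tractable.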
Article
Full-text available
This paper presents a quantitative assessment of uncertainty for stereo calibration at different camera angles and for 3D reconstruction of a control object. Stereo-photogrammetry with high-resolution, commercially available digital cameras is applied to deliver 3D data from a pair of 2D images. The main goal of the project is the reconstruction of a 3D model from images obtained underwater in a high-vibration manufacturing environment. The crucial step in obtaining high-accuracy 3D data is stereo-rig calibration. Robust camera calibration is a critical step in 3D reconstruction in order to extract precise quantitative measurements from 2D images. Two AV 6600 monochrome 29-megapixel GigE cameras were calibrated at four different angles to ensure that the angle of the stereo rig and variation in image contrast would not significantly influence the final results. All the images were taken in the high-vibration manufacturing environment. It was shown that calibrating the cameras set at an angle of 15° significantly reduces 3D reconstruction accuracy. However, it does not influence the reprojection error, which is less than 0.5 pixels.
... Owing to the susceptibility of self-calibration methods to errors due to inaccurate determination of the conic images [9,10], hybrid approaches for zoom-lens calibration were proposed by Oh et al. [11] and Sturm [12]. Zoom-lens calibration can be categorized into three approaches: pattern-based calibration [13], self-calibration [14], and hybrid (or semiautomatic) calibration [11]. The challenges in zoom-lens calibration are mainly due to variations in the camera's zoom parameters and because a simple pinhole camera model cannot represent the entire range of lens settings [1]. ...
... Liu et al. [13] reported Heikkila's method [55,56] as a calibration approach for the zoom-lens calibration of a stereo camera and determined the camera's parameters for the 3D reconstruction under variable zoom. They modeled f x and f y with zoom settings using a polynomial fitting. ...
... Modern zoom lenses do not function in the same way as expected in this method [6]. Pattern-based approaches to the zoom-lens calibration of stereo cameras [13,50] use homography-based calibration methods [51,55,56], while calibrating the zoom of a single camera is carried out using homography-based self-calibration in [14,16,18]. Oh and Sohn [11] pointed out that homography-based calibration is limited insofar as it can only determine the intrinsic parameters when employed for zoom-lens calibration using planar patterns, and the calibration parameters cannot be accurately estimated for long focal lengths. ...
Article
Full-text available
This paper surveys zoom-lens calibration approaches, such as pattern-based calibration, self-calibration, and hybrid (or semiautomatic) calibration. We describe the characteristics and applications of various calibration methods employed in zoom-lens calibration and offer a novel classification model for zoom-lens calibration approaches in both single and stereo cameras. We elaborate on these calibration techniques to discuss their common characteristics and attributes. Finally, we present a comparative analysis of zoom-lens calibration approaches, highlighting the advantages and disadvantages of each approach. Furthermore, we compare the linear and nonlinear camera models proposed for zoom-lens calibration and enlist the different techniques used to model the camera’s parameters for zoom (or focus) settings.
... However, the extracted data usually lack accuracy. Some methods [17], [18] calibrate the camera at selected positions and estimate the focal length using interpolation methods. In this paper, we propose a new method to estimate the camera's focal length and principal point efficiently. ...
... Several problems in employing zoom-lens calibration have been described [4][5][6][7][8][9][15]. Some of these problems include the following: 1) Inaccurate, non-linear zoom-lens control systems pose a serious problem for zoom-lens calibration. ...
... Ahmed and Farag [6] proposed a fitting method using multi-layered feedforward neural networks (MLFNs) to model the variation of camera parameters over lens settings. The overall distortion at variable magnification is modeled by the first four radial distortion coefficients [7], by the first two radial and tangential distortion coefficients [9], and by the first two radial distortion coefficients [15]. Several approaches have been presented for zoom-lens calibration. ...
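The radial distortion models compared in this excerpt all share the same polynomial form; they differ only in how many coefficients are kept. A minimal sketch of that model, with illustrative (not paper-specific) coefficient values:

```python
def apply_radial_distortion(x, y, k):
    """Apply a polynomial radial distortion model to normalized image
    coordinates (x, y); k holds the radial coefficients k1, k2, ...
    (the works cited above retain between two and four of them)."""
    r2 = x * x + y * y
    factor = 1.0 + sum(ki * r2 ** (i + 1) for i, ki in enumerate(k))
    return x * factor, y * factor

# Illustrative coefficients: k1 = 0.1, k2 = -0.01.
xd, yd = apply_radial_distortion(0.5, 0.0, [0.1, -0.01])
print(xd, yd)  # xd = 0.5 * (1 + 0.1*0.25 - 0.01*0.0625)
```

Under variable zoom, the coefficients `k` themselves become functions of the zoom setting, which is what the fitting methods above estimate.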
... Xian et al. proposed a calibration method based on perspective projection [8], and the variation of extrinsic and intrinsic parameters for a stereo camera was reported, but this approach did not model the distortion over zoom settings, which is unavoidable in zoom-lens calibration [7,15]. An embedded system for stereoscopic 3D reconstruction based on variable zoom was presented [9], and the calibration parameters for each zoom setting were computed using the MATLAB camera calibration toolbox [20]. However, this calibration is restricted to finding only the intrinsic parameters when used for zoom-lens calibration [15]. ...
Article
In this paper, the design and implementation of a 3D sensing system with adjustable zoom based on stereo vision and structured light is discussed. The proposed system consists of a stereo camera with an illumination projector and zoom lenses to collect high-resolution data from a distant object. A calibration target suitable for zoom lens calibration is designed along with the development of a zoom lens calibration method for a stereo camera based on linear and non-linear models. The robust zoom lens control system, based on a precise automation hardware and image processing technique, is developed to produce high-quality images. Furthermore, normalized cross correlation (NCC), being robust with intensity offsets and contrast changes, is used for stereo matching of the coded images to generate the reconstruction results using a linear equation method. The integration of NCC with structured light and epipolar geometry for the proposed 3D sensing system further yields accurate 3D reconstruction results as demonstrated in the experimental results of this research. The complex relationship of the parameters of each camera on variable zoom is determined using optimal fitting. Linear interpolation is used to estimate calibration data at an arbitrary zoom setting to validate zoom lens calibration. The 3D reconstruction on variable zoom shows that higher lens magnification results in a more accurate 3D sensing system.
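The abstract's claim that NCC is robust to intensity offsets and contrast changes follows from its definition: each patch is mean-centered and normalized by its energy before correlation. A minimal sketch (illustrative patch values, not the paper's data):

```python
import numpy as np

def ncc(a, b):
    """Normalized cross-correlation of two equally sized patches;
    invariant to intensity offset and positive contrast scaling,
    which is why it suits stereo matching under lighting changes."""
    a = a - a.mean()
    b = b - b.mean()
    denom = np.sqrt((a * a).sum() * (b * b).sum())
    return float((a * b).sum() / denom)

patch = np.array([[1.0, 2.0], [3.0, 4.0]])
# A gain/offset-transformed copy of the patch still correlates perfectly.
print(ncc(patch, 2.0 * patch + 5.0))  # 1.0
```

In the system described above, this score would be computed along the epipolar line for each coded-image patch to find the best stereo match.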
... Several researchers have proposed their own methods. Liu et al. proposed motorized zoom lenses for Stereoscopic 3D (S3D) reconstruction using two digital image sensors [2]. Heinzle et al. proposed a computational camera system for their automatic camera rig by embedding image-processing capability inside [3]. ...
Article
Camera rigs for shooting 3D video are classified as manual, motorized, or fully automatic. Even with an automatic camera rig, the process of Stereoscopic 3D (S3D) video capture is very complex and time-consuming. One of the key time-consuming operations is capturing the distance parameters: near distance, far distance, and convergence distance. Traditionally, these distances are measured by tape measure or by triangular indirect measurement methods, both of which take a long time for every scene shot. In our study, a compact laser distance-sensing system with long-range distance sensitivity is developed. The system is small enough to be installed on top of a camera, and the measuring accuracy is within 2% even at a range of 50 m. The shooting time of an automatic camera rig equipped with the laser distance-sensing system can be reduced significantly, to less than a minute.
... The processing board is provided with a Virtex II FPGA and a TriMedia DSP; the whole system runs at 19 fps at a 256×256 resolution with 18 disparities. In 2009, P. Liu et al. [12] realized a stereo vision system with a motorized lens that enables 10× zoom. The system is based on a Spartan 3 FPGA and a Blackfin BF537 DSP architecture. ...
Conference Paper
Full-text available
This paper describes the architecture of a new smart vision system called BiSeeMos. This smart camera is designed for stereo vision purposes and the implementation of a simple dense stereo vision algorithm. The architecture has been designed for dedicated parallel algorithms using a high-performance FPGA. This chip provides the user with useful features for vision processing, such as integrated RAM blocks, embedded multipliers, phase-locked loops, and plenty of logic elements. In this paper, we describe our architecture and compare it with other works. A dense stereo vision algorithm has been implemented on the platform using the Census method.
Article
This paper presents a quantitative assessment of uncertainty for the 3D reconstruction of stents. This study investigates a CP stent (Numed, USA) used in congenital heart disease applications, with a focus on the variance in measurements of stent geometry. The stent was mounted on a model of patient implantation-site geometry, reconstructed from magnetic resonance images, and imaged using micro-computed tomography (μCT), conventional CT, biplane fluoroscopy, and optical stereo-photogrammetry. Image data were post-processed to retrieve the 3D stent geometry. Stent strut length, separation angle, and cell asymmetry were derived, and repeatability was assessed for each technique along with variation relative to the μCT data, assumed to represent the gold standard. The results demonstrate that the performance of biplanar reconstruction methods is comparable with volumetric CT scans in evaluating 3D stent geometry. Uncertainty in the evaluation of strut length, separation angle, and cell asymmetry using biplanar fluoroscopy is of the order of ±0.2 mm, 3°, and 0.03, respectively. These results support the use of biplanar fluoroscopy for in vivo measurement of 3D stent geometry and provide a quantitative assessment of uncertainty in the measurement of geometric parameters.
Article
Full-text available
The papers included in this volume were part of the technical conference cited on the cover and title page. Papers were selected and subject to review by the editors and conference program committee. Some conference presentations may not be available for publication. The papers published in these proceedings reflect the work and thoughts of the authors and are published herein as submitted. The publishers are not responsible for the validity of the information or for any outcomes resulting from reliance thereon.