Fig. 6
All cameras are placed on the same plane. Vertices of the solid-line squares are recovered by the proposed algorithm. View 0 is regarded as a scaled orthographic view, while views 1, 2, and 3 are regarded as perspective views. Note that, under this assumption, view 0 is equivalent to a perspective view located between views 1 and 3 at 1 m depth.
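The equivalence stated in the caption follows from the standard projection models. The sketch below is ours, and the symbols f, Z0, and s are our notation rather than the paper's: a perspective view of a shallow object agrees, to first order in the depth variation, with a scaled orthographic view whose scale is the focal length divided by the reference depth.

```latex
% Perspective vs. scaled orthographic projection (x-coordinate; y is analogous).
% Z_0 is the reference depth of the object, Delta Z the per-point depth offset.
\[
  x_{\mathrm{persp}} = \frac{f\,X}{Z_0 + \Delta Z}
  = \frac{f}{Z_0}\,X\!\left(1 - \frac{\Delta Z}{Z_0}
      + O\!\big((\Delta Z/Z_0)^2\big)\right),
  \qquad
  x_{\mathrm{orth}} = s\,X, \quad s = \frac{f}{Z_0}.
\]
% For |Delta Z| << Z_0 the two coincide to first order, which is why view 0
% can be identified with a perspective view at Z_0 = 1 m between views 1 and 3.
```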

Source publication
Article
Full-text available
This paper presents a novel framework for Euclidean structure recovery utilizing a scaled orthographic view and perspective views simultaneously. A scaled orthographic view is introduced in order to automatically obtain camera parameters such as camera positions, orientation, and focal length. Scaled orthographic properties enable all camera parame...
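As a quick illustration of the two camera models the framework combines, the minimal sketch below projects the same points under both; the function names, focal length, and object size are illustrative assumptions, with only the 1 m depth taken from the figure caption.

```python
import numpy as np

def perspective(X, f):
    """Pinhole perspective projection of N x 3 points: (x, y) = f * (X, Y) / Z."""
    return f * X[:, :2] / X[:, 2:3]

def scaled_orthographic(X, s):
    """Scaled orthographic (weak perspective): depth is dropped, one global scale s."""
    return s * X[:, :2]

# A shallow object around Z0 = 1 m depth (the depth quoted in the caption).
rng = np.random.default_rng(0)
X = rng.uniform(-0.1, 0.1, size=(48, 3)) + np.array([0.0, 0.0, 1.0])
f, Z0 = 0.05, 1.0
err = np.abs(perspective(X, f) - scaled_orthographic(X, f / Z0)).max()
print(f"max image-plane discrepancy: {err:.4f}")  # small: the models agree to first order
```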

Contexts in source publication

Context 1
... We ignore lens distortion and assume that the principal point of each image is the image center. A cube (200 × 200 × 200 mm) on which a number of squares are drawn was used as the target object (Fig. 4). The vertices of the squares are exploited as feature points. The target object was captured from the four views shown in Fig. 6. The images were captured with one focal length in view 0 and a different focal length in views 1, 2, and 3. In applying the proposed algorithm, view 0 was regarded as a scaled orthographic view and views 1, 2, and 3 as perspective views. Note that, under this assumption, view 0 is equivalent to a perspective view located between views 1 and 3 at 1 m ...
Context 2
... 6. The images were captured with one focal length in view 0 and a different focal length in views 1, 2, and 3. In applying the proposed algorithm, view 0 was regarded as a scaled orthographic view and views 1, 2, and 3 as perspective views. Note that, under this assumption, view 0 is equivalent to a perspective view located between views 1 and 3 at 1 m depth (Fig. 6). Fig. 5 shows the feature points recovered from views 0, 1, and 2. In this reconstruction, all vertices of the 12 squares on the cube's surfaces are exploited as feature points to acquire the camera parameters, and the 3D coordinates of the 48 vertices are estimated. In comparing the real object with this result (Fig. 5), it was found ...
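For concreteness, the target geometry in this excerpt (12 drawn squares, 48 vertices on a 200 mm cube) can be reproduced synthetically. This is a hypothetical reconstruction: the square size and placement below are placeholder values, since the exact dimensions are garbled in the excerpt.

```python
import numpy as np

def squares_on_face(center, u, v, square=50.0, offset=45.0):
    """Two coplanar squares on one cube face, shifted +/-offset along u.
    Square size and offset (mm) are placeholder assumptions."""
    half = square / 2.0
    corners = np.array([(-half, -half), (half, -half), (half, half), (-half, half)])
    verts = []
    for sign in (-1.0, 1.0):
        c = center + sign * offset * u
        verts.extend(c + a * u + b * v for a, b in corners)
    return np.array(verts)

def cube_feature_points(cube=200.0):
    """48 feature points: 12 squares, two per face of a cube (6 faces x 8 vertices)."""
    h, e = cube / 2.0, np.eye(3)
    faces = []
    for axis in range(3):
        n, u, v = e[axis], e[(axis + 1) % 3], e[(axis + 2) % 3]
        for s in (-1.0, 1.0):
            faces.append(squares_on_face(s * h * n, u, v))
    return np.vstack(faces)

print(cube_feature_points().shape)  # (48, 3)
```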

Similar publications

Article
Full-text available
We discuss eight new(?) configuration theorems of classical projective geometry in the spirit of the Pappus and Pascal theorems.
Article
Full-text available
If the vertices of a triangle are projected onto a given line, the perpendiculars from the projections to the corresponding sidelines of the triangle intersect at one point, the orthopole of the line with respect to the triangle. We prove several theorems on orthopoles using the Pappus theorem, a fundamental result of projective geometry. Theorem...
Article
Full-text available
We study the local behaviour of inflection points of families of plane curves in the projective plane. We develop normal forms and versal deformation concepts for holomorphic function germs f: (ℂ², 0) → (ℂ, 0) which take into account the inflection points of the fibres of f. We give a classification of such function germs which is a projective analog of...
Article
Full-text available
In this paper, we propose a well-justified synthetic construction of the projective space. We define the concepts of plane and space of incidence and adopt the statement of Gallucci as an axiom for our classical projective space. From these axioms, we prove the theorems of Desargues and Pappus, and the fundamental theorem of projectivities,...

Citations

... Recently, several works [6,15] have addressed the problem of structure and motion recovery from a combination of camera models, i.e., perspective and weak perspective cameras, for point correspondences. A weak perspective camera is an orthographic camera with an unknown aspect ratio. ...
Article
We introduce a linear algorithm to recover the Euclidean motion between an orthographic camera and two perspective cameras from straight-line correspondences, filling a gap in the analysis of motion estimation from line correspondences across projection models. The general relationship between lines in three views is described by the trifocal tensor. Euclidean structure from motion for three perspective views is a special case in which the relationship is defined by a collection of three matrices. Here, we describe the case of two calibrated perspective views and an orthographic view. As in the other cases, our linear algorithm requires 13 or more line correspondences to recover the 27 coefficients of the trifocal tensor.
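The counting in this abstract (13 line correspondences for 27 tensor entries) matches a standard DLT-style linear solve. The sketch below is a generic version of that step under our own constraint construction, where each correspondence contributes a rank-2 set of equations via the line-line-line incidence relation; it is not the paper's actual algorithm.

```python
import numpy as np

def trifocal_rows(l1, l2, l3):
    """Linear constraints on t = vec(T) from one line correspondence (l1, l2, l3),
    using the incidence relation l1 ~ (l2^T T_i l3)_{i=1..3}: the 3-vector
    s_i = l2^T T_i l3 must be parallel to l1, so [l1]_x s = 0 (rank 2)."""
    M = np.zeros((3, 27))
    for i in range(3):                      # ds_i/dt has Kronecker structure
        M[i, 9 * i:9 * (i + 1)] = np.kron(l2, l3)
    L = np.array([[0.0, -l1[2], l1[1]],     # cross-product matrix [l1]_x
                  [l1[2], 0.0, -l1[0]],
                  [-l1[1], l1[0], 0.0]])
    return L @ M

def solve_trifocal(lines1, lines2, lines3):
    """Stack >= 13 correspondences and take the SVD null vector (T up to scale)."""
    A = np.vstack([trifocal_rows(a, b, c)
                   for a, b, c in zip(lines1, lines2, lines3)])
    _, _, Vt = np.linalg.svd(A)
    return Vt[-1].reshape(3, 3, 3)
```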
Article
Three-dimensional shape reconstruction is one of the important research areas in object recognition and image understanding. The structure-from-motion problem, as originally posed by C. Tomasi and T. Kanade in 1992, has attracted a lot of attention; their solution is based on the singular value decomposition (SVD). In this paper, it is extended to cope with the multi-target case: given a sequence of 2D video images of multiple moving targets, the goal is to compute the 3D motion of the targets and reconstruct their 3D shapes. This is further extended to the multi-camera, multi-target problem. First, a robust algorithm that enhances the reliability of block-matching techniques is proposed for fast tracking of feature points in a sequence of images. The feature points are then mapped onto their corresponding objects using an algebraic method based on subspace clustering and the principal singular vector (PSV). Thereafter, the motion and shape may be estimated by matrix factorization using the SVD. We demonstrate the effectiveness of the algorithms in tracking and shape reconstruction using both artificially created data and a real image sequence in somewhat controlled environments.
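The factorization step this abstract builds on (Tomasi-Kanade) is compact enough to sketch. The code below shows only the generic rank-3 SVD factorization of the centered measurement matrix, leaving out the paper's tracking, subspace clustering, and the metric (orthonormality) upgrade.

```python
import numpy as np

def factorize(W):
    """Tomasi-Kanade-style factorization of a 2F x P measurement matrix W
    (image coordinates of P tracked points over F frames). Returns motion
    M (2F x 3) and shape S (3 x P), determined up to a 3x3 affine ambiguity;
    the metric upgrade via orthonormality constraints is omitted."""
    W = W - W.mean(axis=1, keepdims=True)      # register each row to its centroid
    U, s, Vt = np.linalg.svd(W, full_matrices=False)
    M = U[:, :3] * np.sqrt(s[:3])              # split the rank-3 part evenly
    S = np.sqrt(s[:3])[:, None] * Vt[:3]
    return M, S
```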
Article
A novel method for three-dimensional (3D) reconstruction from two uncalibrated images is described. The camera intrinsic parameters can be estimated linearly using three pairs of corresponding vanishing points from three mutually orthogonal space directions; the camera motion parameters between the two views can then be estimated from three groups of lines that are mutually orthogonal in 3D space. After the camera projection matrices are calculated, 3D coordinates can be computed by triangulation. Compared with active-camera self-calibration using three mutually orthogonal translations, our approach transfers the orthogonal-motion constraints onto the orthogonal structure of the scene, so it is easier to realize and more widely applicable, and 3D models of a scene can be recovered from 2D images taken with an uncalibrated hand-held digital camera. The approach has been applied to real images of architectural scenes, and a 3D model of a building was reconstructed with good results. New images generated from the reconstructed 3D model at new viewpoints are consistent with the perception of the real scene, and the error in plane angles between the reconstructed 3D model and the real scene is within 1.5%-2.6%.
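In the common square-pixel, zero-skew, known-principal-point special case, the calibration step described here reduces to a one-line formula per pair of orthogonal vanishing points. The sketch below shows that special case; it is our simplification, not the paper's full linear estimation of all intrinsics from three pairs, and the example values are hypothetical.

```python
import numpy as np

def focal_from_orthogonal_vps(v1, v2, pp):
    """Focal length from two vanishing points (pixel coords) of orthogonal
    3-D directions, assuming square pixels, zero skew, and principal point pp.
    Back-projected rays K^-1 [v, 1] must be orthogonal, which gives
    f^2 = -(v1 - pp) . (v2 - pp)."""
    d = -np.dot(np.asarray(v1) - pp, np.asarray(v2) - pp)
    if d <= 0:
        raise ValueError("pair is inconsistent with the square-pixel assumptions")
    return float(np.sqrt(d))

# Hypothetical example: principal point at the centre of a 640 x 480 image.
pp = np.array([320.0, 240.0])
print(focal_from_orthogonal_vps([620.0, 260.0], [-180.0, 200.0], pp))
```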
Article
An original mobile robot self-location technique for three-dimensional environments is presented in this paper. The method solves the 3D location problem using a single colour camera on board the robot. A very simple pattern consisting of a few colour points has been designed to achieve two main objectives: the pose of the robot in the room and the location of the robot in the building. Perspective projection analysis of the pattern is enough to obtain the camera viewing parameters in the room coordinate system, and colour-code analysis identifies the specific floor and room in the building. Our approach is being used for applications inside buildings. Experimentation with a mobile robot, and the advantages and restrictions of this technique, are presented in the paper.
Conference Paper
An original mobile robot self-location technique for three-dimensional environments is presented in this paper. The method solves the 3D location problem using a single colour camera on board the robot. A very simple pattern consisting of a few colour points has been designed to achieve two main objectives: the pose of the robot in the room and the location of the robot in the building. Perspective projection analysis of the pattern is enough to obtain the camera viewing parameters in the room coordinate system, and colour-code analysis identifies the specific floor and room in the building. Our approach is being used for applications inside buildings. Experimentation with a mobile robot, and the advantages and restrictions of this technique, are presented in the paper.
Article
Full-text available
Construction of three-dimensional structures from video sequences has wide applications in intelligent video analysis. This paper summarizes the key issues of the theory and surveys recent advances in the state of the art. Reconstruction of a scene object from video sequences usually follows the basic principle of structure from motion with an uncalibrated camera. This paper lists the typical strategies and summarizes the typical solutions and algorithms for modeling complex three-dimensional structures. Difficult open problems are also suggested for further study.
Article
An approach for the three-dimensional (3D) reconstruction of architectural scenes from two uncalibrated images is described in this paper. From two views of one architectural structure, three pairs of corresponding vanishing points in three major mutually orthogonal directions can be extracted. The simple but powerful constraints of parallel and orthogonal lines in architectural scenes can be used to calibrate the cameras and to recover the 3D information of the structure. This approach is applied to real images of architectural scenes, and a 3D model of a building in Virtual Reality Modeling Language (VRML) format is presented, illustrating the method's successful performance.
Article
This paper presents a bibliography of nearly 1700 references related to computer vision and image analysis, arranged by subject matter. The topics covered include computational techniques; feature detection and segmentation; image and scene analysis; two-dimensional shape; pattern; color and texture; matching and stereo; 2-dimensional recovery and analysis; three-dimensional shape; and motion. A few references are also given on related topics, including geometry and graphics, compression and processing, sensors and optics, visual perception, neural networks, artificial intelligence and pattern recognition, as well as on applications.
Conference Paper
An approach for the 3D reconstruction of architectural scenes from two uncalibrated images is described in this paper. From two views of one architectural structure, three pairs of corresponding vanishing points in three major mutually orthogonal directions can be extracted. The simple but powerful constraints of parallelism and orthogonality in architectural scenes can be used to calibrate the cameras and to recover the projection matrices for each viewpoint. The projection matrices are used to reconstruct a partial 3D model of an architectural scene from two uncalibrated photographs taken from arbitrary viewpoints. The approach is applied to real images of architectural scenes, and a 3D model of a building in VRML format is presented, illustrating the method's successful performance. It is applied to recovering 3D models using hand-held digital cameras whose motion cannot be controlled.
Article
This paper describes a novel framework for object extraction from images utilizing multiple cameras. Focused regions in images and disparities of point correspondences among multiple images are 3D clues for the extraction. We examine the extraction of focused objects from images using these automatically acquired clues. Edges in the images captured by the cameras are detected, and the disparities of edges in focused regions become the clues, called disparity keys. A focused object is extracted from an image as a set of edge intervals carrying the disparity keys. Falsely extracted parts can be detected from discontinuous contours of the object and recovered by contour morphing. Experimental results under different conditions demonstrate the effectiveness and robustness of the proposed method. The method can be applied to image synthesis methods, such as synthetic/natural hybrid coding (SNHC), and to object-scalable coding in MPEG-4.