Context in source publication

Context 1
... to Figure 4, from the bitmap array A_B where f(c,r) = a_B, a peak array A_P is created of local horizontal maxima, i.e. peaks at the centre of each stripe, where ...
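A minimal sketch of how such a peak array could be computed, assuming A_B is stored as a rows x columns intensity bitmap and that a "peak" is marked at the centre column of each contiguous horizontal run of lit pixels (the exact thresholding and peak criterion of the source are not reproduced here):

    import numpy as np

    def peak_array(A_B, threshold=0):
        """Mark the centre of each horizontal run of lit pixels in every row.

        A_B: 2D array of pixel intensities (the bitmap array).
        Returns A_P, a boolean array of the same shape with one peak per
        stripe crossing in each row.
        """
        A_P = np.zeros_like(A_B, dtype=bool)
        for r in range(A_B.shape[0]):
            lit = A_B[r] > threshold
            c = 0
            while c < lit.size:
                if lit[c]:
                    start = c
                    while c < lit.size and lit[c]:
                        c += 1
                    A_P[r, (start + c - 1) // 2] = True  # centre of the run
                else:
                    c += 1
        return A_P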

Similar publications

Article
Full-text available
In this reported work, Knuth's balancing scheme, which was originally developed for unconstrained binary codewords, is adapted. Presented is a simple method to balance NRZ runlength-constrained block codes corresponding to (d, k)-constrained NRZI sequences. A short marker violating the maximum runlength or k constraint is used to indicate the ba...
Article
Full-text available
Egg-laying chickens are usually selected for breeding on a selection index of individual and family records (or part records) of laying performance. The generation interval is 1 yr or longer. An alternative would be to mate all potential parents early in the laying period and to select the next generation as juveniles before lay, based on the avera...
Conference Paper
Full-text available
The inverse system mass matrix factorization of multibody systems by A. Jain, G. Rodriguez, and K. Kreutz-Delgado [1-4] leads to the solution of system accelerations from the dynamics equations with O(N) arithmetic operations, N being the number of bodies in the system. ... Ref. 1 used Φ and identities based on it to establish the factorization...
Article
Full-text available
With the increasing popularity of indoor positioning system technologies, many applications have become available that allow moving objects to be monitored and queried on the basis of their indoor locations. At the center of these applications is a data structure that is used for indexing the moving objects. For most of the current applications, th...
Article
Full-text available
A well-prepared abstract enables the reader to identify the basic content of a document quickly and accurately, to determine its relevance to their interests, and thus to decide whether to read the document in its entirety. The abstract should be informative and completely self-explanatory, provide a clear statement of the problem, the proposed a...

Citations

... These applications are usually affected by external perturbations; in particular, image noise and reflections can be an important error source for these kinds of sensors when high precision is required or when the scanning task is developed in an industrial environment. Several techniques are used to extract 3D geometry data from a laser sensor [2,3], but most of them, especially those used with laser triangulation sensors, need to run an algorithm for laser detection in images [4-6]. Laser line detection has been a basic problem of computer vision and has become a significant source of error when a laser triangulation system is applied to high precision measurement tasks [7]. The centre of mass method has been found to provide the best results when the laser stripe detection is carried out in an industrial environment, as in the case presented in this work [8,9]. ...
Article
Full-text available
The use of laser triangulation systems is widespread in industrial applications, especially in industrial metrology. These applications are usually affected by external perturbations; in particular, image noise and reflections can be an important error source for these kinds of sensors when high precision is required or when the scanning task is performed in an industrial environment. This research is focused on improving the behavior of a laser triangulation sensor working with high-noise images. The aim of the image analysis technique is to avoid or reduce the effect of the background noise on the measurement results (flatness). The analysis technique is tested with images captured in the industrial environment of the measurement system. The results show proper behavior of the algorithm with high-noise images and the feasibility of the technique for use in the inspection of 100% of the production.
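The centre-of-mass detection mentioned in the citing context above can be sketched as follows: for each image column, the sub-pixel stripe position is taken as the intensity-weighted mean of the row coordinates. This is a minimal illustration; the thresholding and noise-rejection steps of the cited work are not reproduced.

    import numpy as np

    def stripe_centre_of_mass(image, threshold=30):
        """Sub-pixel laser stripe position per column via intensity-weighted mean.

        image: 2D array (rows x cols) of grey-level intensities.
        Returns an array of length cols with the estimated stripe row per
        column (NaN where no pixel exceeds the threshold).
        """
        img = np.where(image > threshold, image.astype(float), 0.0)
        rows = np.arange(image.shape[0], dtype=float)[:, None]
        weight = img.sum(axis=0)
        centre = np.full(image.shape[1], np.nan)
        valid = weight > 0
        centre[valid] = (rows * img).sum(axis=0)[valid] / weight[valid]
        return centre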
... Structured light techniques, one of the widely used 3D reconstruction methods, aim to recover the profile of an object by applying a unique color coded pattern of light. The principle is based on parallax and the use of the geometry of triangles and triangulation, using either a stereo or a monocular camera, as we can see in [2][3][4]. Structured light systems (SLS) have found universal applications for macroscopic detection and depth profiling of objects (volumes with 0.05-5 m side length) due to the advantages of speed, accuracy and robustness in 3D reconstruction of featureless objects (e.g. objects with large smooth surfaces) [1]. ...
Conference Paper
Full-text available
Endoscopes, especially structured-light endoscopes, have been widely used in clinical applications for inspecting the interior of the patient. The purpose of this study is to provide an approach to reconstruct the 3D articular surface of the knee joint with an endoscope (diameter ø = 7 mm) and a line laser through a microfiber (ø = 1 mm), facilitating computer-assisted diagnosis (CAD) and improving operation quality for some open surgeries. Plane reconstruction of a wood model is designed to examine the accuracy of this scanning system, and the experiment shows that the average accuracy is 1.186 mm. Finally, a 3D full view of the articular surface of the partial knee joint is presented to show the feasibility and simplicity of our proposed method.
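A hedged illustration of the parallax/triangle-geometry principle the citing context refers to, assuming a rectified camera-projector pair; the function name and the example numbers are illustrative and do not describe the cited endoscopic system:

    def depth_from_disparity(disparity_px, focal_px, baseline_m):
        """Textbook triangulation for a rectified camera/projector pair.

        disparity_px: horizontal shift (pixels) of a pattern feature between
                      its reference position and its observed position.
        focal_px:     focal length expressed in pixels.
        baseline_m:   camera-projector baseline in metres.
        Returns the depth in metres (similar triangles: Z = f * b / d).
        """
        if disparity_px == 0:
            raise ValueError("zero disparity corresponds to a point at infinity")
        return focal_px * baseline_m / disparity_px

    # Example: f = 800 px, baseline = 0.10 m, disparity = 20 px -> Z = 4.0 m
    print(depth_from_disparity(20, 800, 0.10))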
... The output image of the segmentation stage is scanned row by row for rows where all the lines are indexable. Once those are identified and labelled, a flood fill algorithm [16] is used to propagate those indexes through all the detections. 4) Triangulation: With the labelling and the calibration, each 3D point p(t) can be computed by triangulating its corresponding laser plane π_n to the line formed by joining the segmented pixel to the camera focal point, which depends on the scale factor t. ...
Conference Paper
Full-text available
A one-shot sensor for underwater 3D reconstruction is presented and tested underwater in a water tank. The system is composed of a RGB CCD camera and a 532 nm green laser with a Diffractive Optical Element attached to it. The laser projects a pattern of parallel lines into the scene. The deformed pattern obtained in the camera frame is then processed to obtain a non-dense 3D point cloud that can be later used for autonomous manipulation and grasping, or for detailed mapping of textureless objects or scenarios.
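The triangulation step described in the citing context above (intersecting the laser plane π_n with the ray through the segmented pixel and the camera focal point, parameterised by the scale factor t) can be sketched as below. Expressing the plane and the ray in the camera frame is an assumption for the sketch, not the cited calibration procedure.

    import numpy as np

    def triangulate_ray_plane(pixel_dir, plane_normal, plane_d):
        """Intersect the camera ray p(t) = t * pixel_dir with the plane n.x + d = 0.

        pixel_dir:    3-vector, direction of the back-projected ray through
                      the pixel (camera centre taken as the origin).
        plane_normal: 3-vector n of the laser plane pi_n.
        plane_d:      scalar d of the plane equation n . x + d = 0.
        Returns the 3D point where the ray meets the plane.
        """
        pixel_dir = np.asarray(pixel_dir, dtype=float)
        n = np.asarray(plane_normal, dtype=float)
        denom = n.dot(pixel_dir)
        if abs(denom) < 1e-12:
            raise ValueError("ray is parallel to the laser plane")
        t = -plane_d / denom        # scale factor along the ray
        return t * pixel_dir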
... The stripe extraction method is summarized in Figure 5. Most laser stripe extraction algorithms perform a simple column-wise maximum computation to find the peak in light intensity, e.g., Robinson et al. (2003); Orghidan et al. (2006). Accordingly, for the DVS the simplest approach to extract the laser stripe would be to accumulate all events after a laser pulse and find the column-wise maximum in activity. ...
Article
Full-text available
Mobile robots need to know the terrain in which they are moving for path planning and obstacle avoidance. This paper proposes the combination of a bio-inspired, redundancy-suppressing dynamic vision sensor (DVS) with a pulsed line laser to allow fast terrain reconstruction. A stable laser stripe extraction is achieved by exploiting the sensor's ability to capture the temporal dynamics in a scene. An adaptive temporal filter for the sensor output allows a reliable reconstruction of 3D terrain surfaces. Laser stripe extractions up to pulsing frequencies of 500 Hz were achieved using a line laser of 3 mW at a distance of 45 cm using an event-based algorithm that exploits the sparseness of the sensor output. As a proof of concept, unstructured rapid prototype terrain samples have been successfully reconstructed with an accuracy of 2 mm.
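The simple baseline mentioned in the citing context (accumulate all events after a laser pulse and take the column-wise maximum of activity) could look like the sketch below. The event format (one (x, y) pair per event) and the image size are assumptions; this is not the adaptive temporal filter proposed in the article itself.

    import numpy as np

    def stripe_from_events(events_xy, width, height):
        """Naive DVS stripe extraction: column-wise maximum of event counts.

        events_xy: iterable of (x, y) pixel coordinates of events recorded
                   after one laser pulse.
        Returns an array of length `width` with the row of peak activity per
        column (-1 where a column received no events).
        """
        histogram = np.zeros((height, width), dtype=np.int32)
        for x, y in events_xy:
            histogram[y, x] += 1
        stripe = histogram.argmax(axis=0)
        stripe[histogram.sum(axis=0) == 0] = -1   # columns with no activity
        return stripe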
... This paper presents a contribution towards solving real-time issues in 3D face recognition. The works of [5, 13, 14, 16, 17] have described structured light methods for fast 3D reconstruction from line projection. While those or alternative structured light methods (such as fringe processing) or stereo vision can be used, there are prescribed steps that need to be performed in order to achieve a fully automatic 3D face recognition system. Our scanner has three major components: a near-infrared (NIR) projector capable of projecting a pattern of sharp lines that remain in focus over a large distance of up to 5 m. ...
... Given that we know the geometry of the camera and projector, knowing the stripe indices allows us to fully reconstruct in 3D by trigonometry. Details of the process have been published in [16]. ...
... 3D reconstruction is achieved by mapping the image space to system space (camera + projector) in a Cartesian coordinate system. We have developed a number of successful algorithms to deal with the mapping as described in [5, 16]. Once this mapping is achieved, a 3D point cloud is calculated and the output is triangulated using the connectivity of the vertices as depicted in Figure 2 (point cloud and triangulation from the detected stripe pattern in 2D). Once the surface shape has been modeled as a polygonal mesh, a number of 3D post-processing operations are required: hole filling, mesh subdivision, smoothing, and noise removal. ...
Conference Paper
Full-text available
The main contribution of this paper is to present a novel method for automatic 3D face recognition based on sampling a 3D mesh structure in the presence of noise. A structured light method using line projection is employed where a 3D face is reconstructed from a single 2D shot. The process from image acquisition to recognition is described with a focus on its real-time operation. Recognition results are presented and it is demonstrated that recognition can be performed in just over one second per subject in continuous operation mode, making the method suitable for real-time operation.
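One way the explicit vertex connectivity mentioned in the contexts above can be exploited is sketched below: vertices recovered on a regular stripe/sample grid are joined into triangles by splitting each grid cell, so no general surface reconstruction is needed. The row-major grid indexing is an assumption; the meshing used in the cited papers is not reproduced.

    def grid_triangles(n_rows, n_cols):
        """Triangulate an n_rows x n_cols grid of vertices.

        Vertices are assumed to be stored row-major, index = r * n_cols + c.
        Each grid cell is split into two triangles; returns a list of
        (i, j, k) vertex-index triples.
        """
        triangles = []
        for r in range(n_rows - 1):
            for c in range(n_cols - 1):
                v00 = r * n_cols + c
                v01 = v00 + 1
                v10 = v00 + n_cols
                v11 = v10 + 1
                triangles.append((v00, v10, v01))   # lower-left triangle
                triangles.append((v01, v10, v11))   # upper-right triangle
        return triangles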
... For these patterns, however, the correspondence problem has proved to be difficult to overcome reliably. Following [9] we refer to it as the Stripe Indexing Problem. Previous attempts assumed some level of surface continuity that preserves stripe adjacency to some extent in the recorded image [8, 9]. ...
... Following [9] we refer to it as the Stripe Indexing Problem. Previous attempts assumed some level of surface continuity that preserves stripe adjacency to some extent in the recorded image [8, 9]. The pattern would then be indexed piecemeal, relative to indices already determined in local neighbourhoods. ...
... This parallel arrangement differs from more conventional ones (see e.g. [5, 9, 12]) where the projector and camera axes intersect. But it simplifies analysis and provides parallel epipolar lines [7]. Figure 2 (left) defines a coordinate system in relation to the projector. ...
Conference Paper
Structured light is a well-known technique for capturing 3D surface measurements but has yet to achieve satisfactory results for applications demanding high resolution models at frame rate. For these requirements a dense set of uniform uncoded white stripes seems attractive. But the problem of relating projected and recorded stripes, here called the Indexing Problem, has proved to be difficult to overcome reliably for uncoded patterns. We propose a new algorithm that uses the maximum spanning tree of a graph defining potential connectivity and adjacency in recorded stripes. Results are significantly more accurate and reliable than previous attempts. We do however also identify an important limitation of uncoded patterns and claim that, in general, additional stripe coding is necessary. Our algorithm adapts easily to accommodate a minimal coding scheme that increases neither sample size nor acquisition time.
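A minimal sketch of the maximum-spanning-tree idea named in the abstract above, using Kruskal's algorithm on edges weighted by a hypothetical adjacency confidence between recorded stripe segments; the weighting and graph construction used in the paper are not reproduced here.

    def maximum_spanning_tree(n_nodes, edges):
        """Kruskal's algorithm on descending weights -> maximum spanning tree.

        edges: list of (weight, u, v) with u, v in range(n_nodes); the weight
               would encode how confidently two stripe segments are adjacent.
        Returns the list of chosen (weight, u, v) edges.
        """
        parent = list(range(n_nodes))

        def find(a):                      # union-find with path compression
            while parent[a] != a:
                parent[a] = parent[parent[a]]
                a = parent[a]
            return a

        tree = []
        for weight, u, v in sorted(edges, reverse=True):
            ru, rv = find(u), find(v)
            if ru != rv:                  # keep the edge only if it joins two components
                parent[ru] = rv
                tree.append((weight, u, v))
        return tree

    # e.g. maximum_spanning_tree(4, [(0.9, 0, 1), (0.2, 0, 2), (0.8, 1, 2), (0.7, 2, 3)])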
... Within our research group we have developed methods for fast 3D reconstruction using line projection (e.g. [9], [2]). The method is based on projecting a pattern of lines on the target surface and processing the captured 2D image from a single shot into a point cloud of vertices in 3D space. ...
... Our research into 3D scanning has developed a novel uncoded structured light method [9], which projects a pattern of evenly-spaced white stripes onto the subject, and records the deformation of the stripes in a video camera placed in a fixed geometric relationship to the stripe projector. A camera and projector configuration is depicted in Fig 1 (top: the projector and camera axes meet at the calibration plane, which defines the origin of the coordinate system). ...
... p = c + (0, −h_P F, v_P F).   (2) We have shown [9] ...
Article
In this paper we discuss methods for 3D reconstruction from a single 2D image using multiple stripe line projection. The method allows 3D reconstruction in 40 milliseconds, which renders it suitable for on-line reconstruction with applications in the security, manufacturing, medical engineering and entertainment industries. We start by discussing the mathematical fundamentals of 3D reconstruction and the required 3D post-processing operations, such as noise removal, hole filling, smoothing and mesh subdivision, that render the models suitable for biometric applications. The incorporation of data acquired as 3D surface scans of human faces into such applications presents particular challenges concerning identification and modelling of features of interest. The challenge is to accurately and consistently find predefined features in 3D, such as the position of the eyes and the tip of the nose for instance. A method is presented with recognition rates of up to 97%, and a preliminary sensitivity analysis is carried out concerning reconstructed and subdivided models.
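The mapping relation quoted in the context above, equation (2), can be evaluated directly once c, h_P, v_P and F are known. The sketch below simply implements that arithmetic; interpreting h_P and v_P as horizontal/vertical pixel offsets and F as a calibration scale factor is an assumption, not a detail taken from the cited paper.

    import numpy as np

    def map_pixel_to_point(c, h_P, v_P, F):
        """Evaluate p = c + (0, -h_P * F, v_P * F), i.e. equation (2) above.

        c:        3-vector reference point on the calibration plane.
        h_P, v_P: pixel offsets of the detected stripe point (interpretation assumed).
        F:        scale factor from the calibration (interpretation assumed).
        """
        return np.asarray(c, dtype=float) + np.array([0.0, -h_P * F, v_P * F])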
... Our existing research into 3D scanning uses a novel uncoded structured light method [6], which projects a pattern of evenly-spaced white stripes onto the subject, and records the deformation of the stripes in a video camera placed in a fixed geometric relationship to the stripe projector. A camera and projector configuration is depicted in Fig 1. A detail from a video frame is depicted in Fig 2 (top), clearly showing the deformed stripes. ...
Article
In this paper, we discuss methods for incorporating data acquired as 3D surface scans of human faces into applications such as 3D animation and biometric 3D facial recognition. In both applications the challenge is to accurately and consistently find predefined features such as the corners of the eyes and the tip of the nose. In the field of biometry, if 3D face recognition is to compete with 2D methods, these features must be found to an accuracy greater than 1:1000. In multimedia, the greatest problem occurs with animated 3D faces, where very small inaccuracies are clearly seen in moving faces. Therefore any inconsistencies must be found and rectified. Our work starts by providing a high-speed, accurate 3D model, and then developing methods to recognise the required features.
... Using dense, uncoded stripes presents greater feature correspondence problems, but provides greater resolution of measurement and allows an accurately-coloured texture map. Solutions to the uncoded stripe problem are given in [15]. Fig 2a shows a detail from one such video frame, clearly showing the deformed stripes. The advantage of this over stereo vision methods is that the stripe pattern provides an explicitly connected mesh of vertices (Fig 2c), so that the polyhedral surface can be rendered without the need for surface reconstruction algorithms. ...
Article
Full-text available
3D face recognition is an open field. This paper presents a method for 3D facial recognition based on Principal Components Analysis. The method uses a relatively large number of facial measurements and ratios and yields reliable recognition. We also highlight an approach to sensor development for fast 3D model acquisition and automatic facial feature extraction.
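A hedged sketch of the PCA-plus-nearest-neighbour recognition scheme the abstract describes, assuming each face is summarised by a fixed-length vector of measurements and ratios; the actual features and matching rule of the paper are not reproduced.

    import numpy as np

    def fit_pca(features, n_components):
        """features: (n_faces, n_measurements) gallery matrix."""
        mean = features.mean(axis=0)
        centred = features - mean
        # principal axes = right singular vectors of the centred data
        _, _, vt = np.linalg.svd(centred, full_matrices=False)
        axes = vt[:n_components]
        return mean, axes, centred @ axes.T          # projected gallery

    def recognise(probe, mean, axes, gallery_proj):
        """Return the index of the gallery face closest to the probe in PCA space."""
        p = (probe - mean) @ axes.T
        return int(np.argmin(np.linalg.norm(gallery_proj - p, axis=1)))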
Chapter
Assistive Robotics has been shown to be an important tool in the patient rehabilitation process. One of the first steps in this process is to capture the movements performed by the patient in order to analyze the movement restrictions presented. This work presents a brief review of the state of the art as well as the development of a Range of Motion (ROM) measurement system for the upper limbs, based on the position of the joints in three-dimensional space as captured by the Kinect sensor. In addition, preliminary tests to capture compensatory movements of the trunk are presented, aiming to investigate the feasibility of using such a system as a tool for detecting compensatory movements. Therefore, a methodology is proposed that uses the Kinect sensor to capture the range of motion and compensatory movements in order to assist in the physiotherapeutic process. The results obtained showed the feasibility of using the proposed system for the detection and capture of both the range of motion and the compensatory movement of the trunk.
Keywords: Assistive robotics, Rehabilitation, Physiotherapy, Compensatory movements
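A range-of-motion angle can be derived from three 3D joint positions as the angle between the two limb-segment vectors, which is a minimal sketch of the kind of measurement the chapter describes; the joint names and the example coordinates are illustrative assumptions, not the chapter's protocol.

    import numpy as np

    def joint_angle_deg(proximal, joint, distal):
        """Angle (degrees) at `joint` between the segments joint->proximal and joint->distal.

        Each argument is a 3D position, e.g. shoulder, elbow and wrist
        coordinates from a Kinect skeleton stream.
        """
        u = np.asarray(proximal, dtype=float) - np.asarray(joint, dtype=float)
        v = np.asarray(distal, dtype=float) - np.asarray(joint, dtype=float)
        cos_a = np.dot(u, v) / (np.linalg.norm(u) * np.linalg.norm(v))
        return float(np.degrees(np.arccos(np.clip(cos_a, -1.0, 1.0))))

    # Example: a fully extended elbow gives ~180 degrees
    print(joint_angle_deg([0, 0, 0], [0.3, 0, 0], [0.6, 0, 0]))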