Fig. 5
A conceptual description of an HOC-based signal decoding process, including (a) the signal of pixel values at the i-th position, (b) signal separation by multiplication with an orthogonal matrix, (c) selection of a set of probable codes, and (d) decoding of the correct address. The selection of the most confident address depends on the specific algorithm.

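The decoding idea in the caption can be sketched with a toy orthogonal-code example. The Hadamard matrix, signal length, and thresholding below are illustrative assumptions, not the authors' implementation; the sketch only shows how multiplying a mixed pixel signal by an orthogonal code matrix separates it into per-address coefficients from which a candidate set and a final address can be chosen.

```python
# Minimal sketch of the caption's decoding steps, assuming a Hadamard-style
# orthogonal code matrix; names and thresholds are illustrative only.
import numpy as np
from scipy.linalg import hadamard

N = 16                              # number of projected code patterns (assumed)
H = hadamard(N)                     # orthogonal code matrix, rows are address codes

def decode_pixel(signal, top_k=3):
    """Recover the most likely projector address from one pixel's time signal.

    signal : length-N array of pixel values observed over the N patterns,
             possibly a mixture of several codes plus noise.
    """
    # (b) separate the mixture by projecting onto the orthogonal codes
    coeffs = H @ signal / N
    # (c) keep a small set of probable codes (largest correlations)
    candidates = np.argsort(np.abs(coeffs))[::-1][:top_k]
    # (d) pick the most confident address; a real system would apply a more
    #     elaborate selection rule here, as the caption notes
    return candidates[np.argmax(np.abs(coeffs[candidates]))]

# toy usage: a mixture of two codes, the stronger one should win
mixed = 1.0 * H[5] + 0.4 * H[9] + 0.05 * np.random.randn(N)
print(decode_pixel(mixed))          # expected: 5
```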

Source publication
Article
This paper introduces a hardware platform for structured-light processing based on depth imaging to perform 3D modeling of a cluttered workspace for home service robots. We have discovered that the degradation of precision and robustness comes mainly from the overlapping of multiple codes in the signal received at a camera pixel. Considering t...

Context in source publication

Context 1
... the decoding process is divided into two parts. In the first part, the address encoding process is concerned with signal separation of the mixture signal, as shown in Fig. ...

Citations

... In this paper, we briefly present the process of determining the disparity maps associated with a specific scene, described by a pair of previously rectified images. This is a very important problem with many applications, such as autonomous navigation systems in mobile robotics [2][3][4][5], classification systems [6], mobile surface reconstruction [7], and face recognition [8], among others. ...
Conference Paper
In this paper, we propose the development of a system for generating disparity maps using rectified images, a technique that is booming in depth-perception systems in computer vision. To this end, the project gives a brief explanation of the matching techniques associated with stereo vision. We explore correlation algorithms such as the Sum of Absolute Differences and the Sum of Hamming Distances, combining the latter with the Census Transform. Additionally, we present the behavior of the disparity maps for images that have been affected by impulsive noise. Initially, rectified images were considered, taken from the image bank provided by Middlebury College, which has worked extensively on the stereo vision problem. In this case, we used the well-known Tsukuba stereo pair.
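As a rough illustration of the block-matching approach described above, the following sketch computes a dense disparity map with the Sum of Absolute Differences over a rectified pair; the window size, disparity range, and winner-take-all rule are assumptions, not the authors' exact configuration.

```python
# SAD block matching over a rectified grayscale stereo pair (illustrative only).
import numpy as np

def sad_disparity(left, right, max_disp=16, win=5):
    """Dense disparity via Sum of Absolute Differences, winner-take-all."""
    h, w = left.shape
    half = win // 2
    disp = np.zeros((h, w), dtype=np.int32)
    for y in range(half, h - half):
        for x in range(half + max_disp, w - half):
            patch = left[y - half:y + half + 1, x - half:x + half + 1].astype(np.int32)
            costs = [
                np.abs(patch - right[y - half:y + half + 1,
                                     x - d - half:x - d + half + 1].astype(np.int32)).sum()
                for d in range(max_disp)
            ]
            disp[y, x] = int(np.argmin(costs))   # keep the lowest-cost disparity
    return disp

# usage (e.g. with the Tsukuba pair loaded as 2-D uint8 arrays):
# disp = sad_disparity(left_img, right_img, max_disp=16, win=9)
```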
Article
This paper describes a new sensor system for 3D environment perception using stereo structured infrared light sources and a camera. Environment and obstacle sensing is the key issue for mobile robot localization and navigation. Laser scanners and infrared scanners cover 180° and are accurate, but they are too expensive, and because they use rotating light beams, their range measurements are constrained to a plane. 3D measurements are much more useful in many ways for obstacle detection, map building, and localization. Stereo vision is a very common way of getting the depth information of a 3D environment; however, it requires that the correspondence be clearly identified, and it also depends heavily on the lighting conditions of the environment. Instead of a stereo camera, a monocular camera and two projected infrared light sources are used in order to reduce the effects of ambient light while obtaining a 3D depth map. Modeling of the projected light pattern enables precise estimation of the range. Two successive captures of the image, with left and then right infrared light projection, provide several benefits, including a wider area of depth measurement, higher spatial resolution, and visibility perception.
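The range estimation rests on projector-camera triangulation; the sketch below shows only the generic z = f·b/d relation with assumed focal-length and baseline values, whereas the paper's own model of the projected light pattern is more elaborate.

```python
# Generic projector-camera triangulation sketch; all numbers are assumptions.

def depth_from_pattern_shift(shift_px, focal_px, baseline_m):
    """Range along the optical axis from the observed pixel shift of a
    projected feature relative to its reference position."""
    if shift_px <= 0:
        raise ValueError("shift must be positive for a point in front of the sensor")
    return focal_px * baseline_m / shift_px   # classic z = f * b / d relation

# usage: a 12-pixel shift with a 600 px focal length and 10 cm baseline
print(depth_from_pattern_shift(12.0, 600.0, 0.10))  # -> 5.0 metres
```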
Article
This paper describes a new sensor system for 3D range measurement using structured infrared light. Environment and obstacle sensing is the key issue for mobile robot localization and navigation. Laser scanners and infrared scanners cover 180° and are accurate, but they are too expensive, and because they use rotating light beams, their range measurements are constrained to a plane. 3D measurements are much more useful in many ways for obstacle detection, map building, and localization. Stereo vision is a very common way of getting the depth information of a 3D environment; however, it requires that the correspondence be clearly identified, and it also depends heavily on the lighting conditions of the environment. Instead of a stereo camera, a monocular camera and projected infrared light are used in order to reduce the effects of ambient light while obtaining a 3D depth map. Modeling of the projected light pattern enables precise estimation of the range. Identification of the cells in the pattern is the key issue in the proposed method. Several methods for correctly identifying the cells are discussed and verified with experiments.
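One simple, generic way to isolate candidate cells of a projected pattern is to threshold the IR image and label connected components, as sketched below; this is an illustrative baseline only, not one of the identification methods compared in the paper.

```python
# Baseline cell isolation: threshold the IR image, label connected components.
import numpy as np
from scipy import ndimage

def label_pattern_cells(ir_image, thresh=128, min_area=20):
    """Return centroids of bright pattern cells in a grayscale IR image."""
    mask = ir_image > thresh                       # keep the projected light
    labels, n = ndimage.label(mask)                # connected-component labeling
    centroids = []
    for i in range(1, n + 1):
        ys, xs = np.nonzero(labels == i)
        if ys.size >= min_area:                    # drop speckle-sized blobs
            centroids.append((xs.mean(), ys.mean()))
    return centroids
```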
Conference Paper
This paper proposes a robust, invisible near-infrared (NIR) structured-light 3D sensor to identify the location of a picked-and-placed object for a mobile manipulator. The NIR 3D sensor consists of a commercial gobo projector and two off-the-shelf monochrome cameras with NIR pass filters. By taking advantage of NIR light, the negative influences of both object texture and ambient light are significantly reduced. The geometric feature-based pattern uses a customized chessboard-corner design as the primitive of the one-shot pattern. The designed pattern can measure the target distance regardless of target texture and occlusion. A prototype of the proposed sensor was implemented and installed on the manipulator, and experiments were conducted to evaluate the robustness and accuracy of the proposed 3D sensor using the proposed one-shot pattern. The results demonstrate the efficiency of the developed NIR 3D sensor for grasping manipulation.
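The pattern primitives are chessboard-like corners, so a standard corner detector gives a rough idea of the extraction step. The sketch below uses OpenCV's generic corner detection with assumed parameters; the sensor in the paper uses its own customized corner design and decoder.

```python
# Generic corner extraction on a NIR pattern image (illustrative parameters).
import cv2
import numpy as np

def detect_corner_primitives(nir_image, max_corners=500):
    """Locate strong corner responses in a grayscale (uint8) NIR pattern image."""
    corners = cv2.goodFeaturesToTrack(
        nir_image,
        maxCorners=max_corners,
        qualityLevel=0.05,      # relative threshold on the corner response
        minDistance=8,          # assumed minimum spacing between primitives
    )
    return corners.reshape(-1, 2) if corners is not None else np.empty((0, 2))
```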
Conference Paper
This research features a novel approach that efficiently detects depth edges in real-world scenes. Depth edges play a very important role in many computer vision problems because they represent object contours. We strategically project structured light and exploit the distortion of the light pattern in the structured-light image along depth discontinuities to reliably detect depth edges. The distortion along depth discontinuities may not occur, or may not be large enough to detect, depending on the distance from the camera or projector. For practical application of the proposed approach, we present methods that guarantee the occurrence of the distortion along depth discontinuities for a continuous range of object locations. Experimental results show that the proposed method accurately detects the depth edges of human hand and body shapes as well as general objects.
Conference Paper
This research describes a novel approach that accurately detects depth edges while effectively ignoring cluttered inner texture edges. We strategically project structured light and exploit the distortion of the light pattern in the structured-light image along depth discontinuities to reliably detect depth edges. In practice, the distortion along depth discontinuities may not occur, or may not be large enough to detect, depending on the distance from the camera or projector. We present methods that guarantee the occurrence of the distortion along depth discontinuities for a continuous range of object locations. Experimental results show that the proposed method accurately detects the depth edges of human hand and body shapes as well as general objects.
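The core idea, detecting where the projected pattern is distorted at depth discontinuities, can be reduced to a small sketch: with horizontal stripes projected, a depth edge shows up as an abrupt vertical shift of a stripe between neighbouring columns. The stripe-tracking scheme and threshold below are assumptions, not the authors' detector.

```python
# Illustrative reduction: flag columns where a tracked stripe jumps vertically.
import numpy as np

def depth_edge_columns(stripe_image, row_band, jump_thresh=3):
    """Return column indices where the tracked stripe shifts abruptly.

    stripe_image : 2-D array of the scene with the stripe pattern projected
    row_band     : (top, bottom) rows bracketing one stripe of interest
    """
    top, bottom = row_band
    band = stripe_image[top:bottom, :].astype(np.float64)
    # per-column vertical position of the stripe (intensity-weighted centroid)
    rows = np.arange(top, bottom)[:, None]
    pos = (band * rows).sum(axis=0) / (band.sum(axis=0) + 1e-9)
    # large jumps between adjacent columns indicate depth discontinuities
    jumps = np.abs(np.diff(pos))
    return np.nonzero(jumps > jump_thresh)[0]
```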