Conference Paper (Full Text PDF Available)

Current Work in the .enpeda.. Project

Authors: Morris, J.; Haeusler, R.; Ruyi Jiang; Jawed, K.; et al. (.enpeda.. Project, Univ. of Auckland, Auckland, New Zealand)

Abstract

The environment perception and driver assistance (.enpeda..) project develops solutions for vision-based driver assistance systems (DAS), which are now entering cars as active safety components (e.g., lane departure warning, blind-spot supervision). We review current projects in .enpeda.. in the international context of developments in this field.
IEEE Xplore Abstract (References): Current work in the .enpeda.. project (retrieved 14/11/14 12:27 pm)
http://ieeexplore.ieee.org/xpl/abstractReferences.jsp?tp=&arnum…%2Fiel5%2F5371762%2F5378349%2F05378425.pdf%3Farnumber%3D5378425
Published in: Proc. Image and Vision Computing New Zealand (IVCNZ)
References
1. Aujol, J. F., Gilboa, G., Chan, T., and Osher, S.: Structure-texture image decomposition -
modeling, algorithms, and parameter selection. Int. J. Computer Vision, 67:111-136 (2006)
2. Badino, H.: A robust approach for ego-motion estimation using a mobile stereo platform. In Proc.
Int. Workshop Complex Motion, pages 198-208 (2007)
3. Badino, H., Franke, U., and Mester, R.: Free space computation using stochastic occupancy
grids and dynamic programming. In Proc. Workshop on Dynamical Vision, IEEE Int. Conf.
Computer Vision (2007)
4. Badino, H., Mester, R., Vaudrey, T., and Franke, U.: Stereo-based free space computation in
complex traffic scenarios. In Proc. IEEE Southwest Symp. Image Analysis Interpretation, pages
189-192 (2008)
5. Baker, S., Scharstein, D., Lewis, J. P., Roth, S., Black, M. J., and Szeliski, R.: A database and
evaluation methodology for optical flow. In Proc. IEEE Int. Conf. Computer Vision, pages 1-8
(2007)
6. Brox, T., Bruhn, A., Papenberg, N., and Weickert, J.: High accuracy optical flow estimation
based on a theory for warping. In Proc. European Conf. Computer Vision (ECCV), pages 25-36
(2004)
7. Bundler: Structure from motion for unordered image collections.
http://phototour.cs.washington.edu/bundler/ (2009)
8. Chen, Y., and Chen, C.: A cascade of feed-forward classifiers for fast pedestrian detection. In
Proc. Asian Conf. Computer Vision, pages 905-914 (2007)
9. Cui, X., Liu, Y., Shan, S., Chen, X., and Gao, W.: 3D Haar-like features for pedestrian detection.
In Proc. IEEE Int. Conf. Multimedia Expo, pages 1263-1266 (2007)
10. Domke, J., and Aloimonos, Y.: A probabilistic notion of correspondence and the epipolar
constraint. In Proc. Int. Symp. 3D Data Processing Visualization Transmission, pages 41-48
(2006)
11. EISATS (enpeda. Image Sequence Analysis Test Site), see
http://www.mi.auckland.ac.nz/EISATS (2009)
12. Felzenszwalb, P. F., and Huttenlocher, D.P.: Efficient belief propagation for early vision. Int. J.
Computer Vision, 70:41-54 (2006)
13. Franke, U., Rabe, C., Badino, H., and Gehrig, S.: 6D-vision: fusion of stereo and motion for
robust environment perception. In Proc. Pattern Recognition - DAGM, pages 216-223 (2005)
14. Gavrila, D.: Daimler pedestrian benchmark data set. Follow 'Looking at people' on
http://www.gavrila.net/Research/research.html (2009)
15. Gavrila, D. M., and Munder, S.: Multi-cue pedestrian detection and tracking from a moving
vehicle. Int. J. Computer Vision, 73:41-59 (2007)
16. Gimel'farb, G. L.: Probabilistic regularisation and symmetry in binocular dynamic programming
stereo. Pattern Recognition Letters, 23:431-442 (2002)
17. Hirschmüller, H.: Stereo processing by semiglobal matching and mutual information. IEEE
Trans. Pattern Analysis Machine Intelligence, 30:328-341 (2008)
18. Hirschmüller, H., and Scharstein, D.: Evaluation of stereo matching costs on images with
radiometric differences. IEEE Trans. Pattern Analysis Machine Intelligence, to appear (2009)
19. Horn, B.K.P., and Weldon, Jr., E. J.: Direct methods for recovering motion. Int. J. Computer
Vision, 2:51-76 (1988)
20. Huguet, F., and Devernay, F.: A variational method for scene flow estimation from stereo
sequences. In Proc. IEEE Int. Conf. Computer Vision, pages 1-7 (2007)
21. Jawed, K., Morris, J., Khan, T., and Gimel'farb, G.: Real time rectification for stereo
correspondence. In Proc. IEEE/IFIP Int. Conf. Embedded Ubiquitous Computing, to appear
(2009)
22. Jiang, R., Klette, R., Wang, S., and Vaudrey, T.: New lane model and distance transform for lane
detection and tracking. In Proc. Int. Conf. Computer Analysis Images Patterns, to appear (2009)
23. Jiang, R., Klette, R., Wang, S., and Vaudrey, T.: Ego-vehicle corridors for vision-based driver
assistance. In Proc. Int. Workshop Combinatorial Image Analysis, to appear (2009)
24. Jones, M.J., and Snow, D.: Pedestrian detection using boosted features over many frames. In
Proc. Int. Conf. Pattern Recognition, pages 1-4 (2008)
25. Kajiya, J.T.: The rendering equation. In Proc. SIGGRAPH, pages 143-150 (1986)
26. Khan, T., Morris, J., and Jawed, K.: Intelligent vision for mobile agents: Contour maps in real
time. Submitted paper (2009)
27. Klette, R., Jiang, R., Morales, S., and Vaudrey, T.: Discrete driver assistance. In Proc. ISMM
(M.H.F. Wilkinson and J.B.T.M. Roerdink, eds.), LNCS 5720, pages 1-12, Springer, Berlin
(2009)
28. Lafortune, E.: Mathematical models and Monte Carlo algorithms for physically based rendering.
PhD thesis, Dept. Computer Science, Faculty of Engineering, Katholieke Universiteit Leuven (1996)
29. Leibe, B., Schindler, K., Cornelis, N., and Van Gool, L.: Coupled object detection and tracking
from static cameras and moving vehicles. IEEE Trans. Pattern Analysis Machine Intelligence,
30:1683-1698 (2008)
30. Lowe, D.: Object recognition from local scale-invariant features. In Proc. IEEE Int. Conf.
Computer Vision, pages 1150-1157 (1999)
31. Lowe, D.: Distinctive image features from scale-invariant keypoints. Int. J. Computer Vision,
60:91-110 (2004)
32. LuxRender. See http://www.luxrender.net/ (2009)
33. Manoharan, S.: On GPS tracking of mobile devices. In Proc. IEEE Int. Conf. Networking
Services, pages 415-418 (2009)
34. McCall, J.C., and Trivedi, M.M.: Video-based lane estimation and tracking for driver assistance:
survey, system, and evaluation. IEEE Trans. Intelligent Transportation Systems, 7:20-37 (2006)
35. Morales, S., and Klette, R.: A third eye for performance evaluation in stereo sequence analysis.
In Proc. Int. Conf. Computer Analysis Images Patterns, to appear (2009)
36. Morales, S., Woo, Y. W., Klette, R., and Vaudrey, T.: A study on stereo and motion data accuracy
for a moving platform. In Proc. FIRA World Congress, to appear (2009)
37. Morris, J., Jawed, K., and Gimel'farb, G.: Intelligent vision: A first step - real time stereo vision. In
Proc. ACIVS, to appear (2009)
38. Negahdaripour, S., and Horn, B.K.P.: Direct passive navigation. IEEE Trans. Pattern Analysis
Machine Intelligence, 9:168-176 (1987)
39. Ke, Q., and Kanade, T.: Transforming camera geometry to a virtual downward-looking camera: robust ego-
motion estimation and ground-layer detection. In Proc. IEEE Conf. Computer Vision Pattern
Recognition, pages 390-397 (2003)
40. Park, S., and Jeong, H.: A high-speed parallel architecture for stereo matching. In Proc. ISVC,
LNCS 4291, pages 334-342 (2006)
41. Patras, I., Hendriks, E., and Tziritas, G.: A joint motion/disparity estimation method for the
construction of stereo interpolated images in stereoscopic image sequences. In Proc. Annual
Conf. Advanced School Computing Imaging (1997)
42. Scharstein, D.: Prediction error as a quality metric for motion and stereo. In Proc. IEEE Int.
Conf. Computer Vision, pages 781-788 (1999)
43. Scharstein, D., and Szeliski, R.: A taxonomy and evaluation of dense two-frame stereo
correspondence algorithms. Int. J. Computer Vision, 47:7-42 (2002)
44. Shashua, A., Gdalyahu, Y., and Hayon, G.: Pedestrian detection for driving assistance systems:
Single-frame classification and system level performance. In Proc. IEEE Intelligent Vehicles
Symp., pages 1-6 (2004)
45. Shieh, J.Y., Zhuang, H., and Sudhakar, R.: Motion estimation from a sequence of stereo images:
a direct method. IEEE Trans. Systems, Man, and Cybernetics, 24:1044-1053 (1994)
46. Stein, G.P.: Geometric and photometric constraints: motion and structure from three views. PhD-
Thesis. Mass. Inst. Technology, Dept. Electrical Engineering Computer Science (1998)
47. Stein, G.P., Mano, O., and Shashua, A.: A robust method for computing vehicle ego-motion. In
Proc. IEEE Intelligent Vehicle Symp., pages 362-368 (2000)
48. Stein, G.P., and Shashua, A.: Model based brightness constraints: on direct estimation of
structure and motion. In Proc. IEEE Conf. Computer Vision Pattern Recognition, pages 992-
1015 (1997)
49. Thrun, S., Montemerlo, M., and Aron, A.: Probabilistic terrain analysis for high-speed desert
driving. In Proc. Robotics Science Systems Conf. (2006)
50. Shi, J., and Tomasi, C.: Good features to track. In Proc. IEEE Conf. Computer Vision Pattern
Recognition, pages 593-600 (1994)
51. Tsao, A.T., Fuh, C.S., Hung, Y.P., and Chen, Y.S.: Ego-motion estimation using optical flow fields
observed from multiple cameras. In Proc. Computer Vision Pattern Recognition, pages 457-462
(1997)
52. Vaudrey, T., and Klette, R.: Residual images remove illumination artifacts for correspondence
algorithms! In Proc. Pattern Recognition - DAGM, pages 472-481 (2009)
53. Vaudrey, T., Wedel, A., Chen, C.-Y., and Klette, R.: Improving optical flow using residual images.
In Proc. Int. Conf. Arts Information Technology, to appear (2009)
54. Villamizar, M., Sanfeliu, A., and Andrade-Cetto, J.: Local boosted features for pedestrian
detection. In Proc. Iberian Conf. Pattern Recognition Image Analysis, pages 128-135 (2009)
55. Viola, P., and Jones, M.: Rapid object detection using a boosted cascade of simple features. In
Proc. Conf. Computer Vision Pattern Recognition, pages 511-518 (2001)
56. Viola, P., Jones, M., and Snow, D.: Detecting pedestrians using patterns of motion and
appearance. In Proc. IEEE Int. Conf. Computer Vision, volume 2, pages 734-741 (2003)
57. Wedel, A., Pock, T., Zach, C., Bischof, H., and Cremers, D.: An improved algorithm for TV-L1
optical flow. In Statistical and Geometrical Approaches to Visual Motion Analysis, LNCS 5604,
pages 23-45, Springer (2009)
58. Wedel, A., Rabe, C., Vaudrey, T., Brox, T., Franke, U., and Cremers, D.: Efficient dense scene
flow from sparse or dense stereo data. In Proc. European Conf. Computer Vision, pages 739-
751 (2008)
59. Weng, J., and Huang, T. S.: Complete structure and motion from two monocular sequences
without stereo correspondence. In Proc. Int. Conf. Pattern Recognition, pages 651-654 (1992)
60. Zhang, Y., and Kambhamettu, C.: On 3d scene flow and structure estimation. In Proc. IEEE
Conf. Computer Vision Pattern Recognition, pages 778-785 (2001)
Cited By
... Accurate recovery of depth information of real-world scenes has been a fundamental, yet highly active research topic in the field of computer vision. The acquisition of reliable 3D information on street sides plays an important role in many advanced technologies such as driver-assistance systems ( Morris et al., 2009), collision avoidance (Nedevschi, Bota & Tomiuc, 2009), autonomous vehicles ( Thrun, 2010), urban planning (Hu, You & Neumann, 2003), or geosciences (Westoby, Brasington, Glasser, Hambrey & Reynolds, ...
Thesis
Full-text available
Digitisation of a 3D scene has been a fundamental yet highly active topic in the field of computer science. The acquisition of detailed 3D information on street sides is essential to many applications such as driver assistance, autonomous driving, or urban planning. Over decades, many techniques including active scanning and passive reconstruction have been developed and applied to achieve this goal. One of the state-of-the-art passive techniques uses a moving stereo camera to record a video sequence on a street which is later analysed to recover the scene structure and the sensor's egomotion, which together contribute to a 3D scene reconstruction in a consistent coordinate system. As a single reconstruction may be incomplete, the scene needs to be scanned multiple times, possibly with different types of sensors, to fill in the missing data. This thesis studies the egomotion estimation problem in a wider perspective and proposes a framework that unifies multiple alignment models which are generally considered individually by existing methods. Integrated models lead to an energy minimisation-based egomotion estimation algorithm which is applicable to a wider range of sensor configurations including monocular cameras, stereo cameras, or LiDAR-engaged vision systems. This thesis also studies the integration of 3D street-side models reconstructed from multiple video sequences based on the proposed framework. A keyframe-based sequence bag-of-words matching pipeline is proposed. For integrating depth data from different sequences, an alignment is initially found from established cross-sequence landmark-feature observations, based on the aforementioned outlier-aware pose estimation algorithm. The solution is then optimised using an improved bundle adjustment technique. Aligned point clouds are finally integrated into a 3D mesh of the scanned street scene.
... Traffic sign recognition is one of the key components of Driver Assistance Systems (DAS); see, for example, [5], [8], [13] for general tasks in the design of vision-based DAS. With traffic sign information available, a DAS may judge the given driving context and alert the driver about occurring mismatches, which altogether helps create active car safety systems that are able to predict critical conditions. ...
Conference Paper
Full-text available
Traffic sign recognition is a technology which allows us to recognize signs in real time, typically in videos, or sometimes just (off-line) in photos. It is used for Driver Assistance Systems (DAS), road surveys, or the management of road assets (to improve road safety). In this paper, we propose a method for general traffic sign recognition (tested on New Zealand road signs) which combines previously designed steps, but with an overall adaptation towards general traffic sign recognition (i.e., not just speed or stop signs). First, color input images or frames are converted from RGB color space into HSV color space. Second, special shapes that are potential signs (circles, triangles, squares) are detected using the Hough transform. Third, potential signs are compared with the template signs given in the database using feature matching methods (SIFT or SURF features). Finally, we recognize the traffic sign in an image, aiming at real-time DAS. Experiments show that the proposed method is robust for the selected test data, with over 95 percent success rate on average. On a single frame of size 1024 × 768, the system uses on average 80 ms for preprocessing, and 100 ms for matching a traffic sign candidate.
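The first step of the pipeline above (RGB-to-HSV conversion followed by colour segmentation) can be sketched in pure Python with the standard-library `colorsys` module. The hue/saturation/value thresholds below are illustrative assumptions, not values taken from the paper:

```python
import colorsys

def red_sign_mask(pixels):
    """Flag pixels whose hue falls in the red band typical of stop/prohibition signs.

    `pixels` is a list of (r, g, b) tuples with channels in 0..255.
    The hue/saturation/value thresholds are illustrative assumptions.
    """
    mask = []
    for r, g, b in pixels:
        h, s, v = colorsys.rgb_to_hsv(r / 255.0, g / 255.0, b / 255.0)
        # red wraps around hue 0: accept h < 0.05 or h > 0.95,
        # and require enough saturation/brightness to reject greys
        is_red = (h < 0.05 or h > 0.95) and s > 0.5 and v > 0.3
        mask.append(is_red)
    return mask
```

A real system would apply one such band per sign family (red, blue, yellow) and pass the resulting candidate regions on to the Hough-transform shape detector.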
Article
Full-text available
This report informs about current activities and results in the .enpeda.. (short for 'environment perception and driver assistance') project and related performance evaluation studies, in panoramic visualization, in environmental surveillance based on scanned footprints of small species, in artistic filters, and in the design of efficient geometric algorithms for areas related to 2D or 3D imaging or robotics. The report summarizes some of the current work in multimedia imaging at Tamaki campus; see [52] for a previous report and further areas of research.
Conference Paper
Full-text available
Prediction errors are commonly used when analyzing the performance of a multi-camera stereo system using at least three cameras. This paper discusses this methodology for performance evaluation for the first time on long stereo sequences (in the context of vision-based driver assistance systems). Three cameras are calibrated in an ego-vehicle, and prediction error analysis is performed on recorded stereo sequences. They are evaluated using various common stereo matching algorithms, such as belief propagation, dynamic programming, semi-global matching, or graph cut. Performance is evaluated on both synthetic and real data.
Conference Paper
Full-text available
Particle filtering of boundary points is a robust way to estimate lanes. This paper introduces a new lane model corresponding to this particle-filter-based approach, which is flexible enough to detect lanes of various kinds. A modified version of the Euclidean distance transform is applied to an edge map of a road image, taken from a bird's-eye view, to provide information for boundary point detection. An efficient lane tracking method is also discussed. The use of this distance transform exploits useful information in lane detection situations, and greatly facilitates the initialization of the particle filter as well as lane tracking. Finally, the paper validates the algorithm with experimental evidence for lane detection and tracking.
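The distance transform at the core of this approach can be sketched with the classic two-pass chamfer approximation. The 3-4 mask weights are a standard textbook choice, not necessarily the modified transform the paper describes:

```python
def distance_transform(edge, big=10**6):
    """Two-pass chamfer approximation of the Euclidean distance transform.

    `edge` is a 2-D list of 0/1 values (1 = lane-boundary edge pixel).
    Returns, for every pixel, an approximate distance to the nearest edge.
    The chamfer 3-4 mask (cost 3 for axial, 4 for diagonal steps) yields
    roughly 3x the Euclidean distance.
    """
    h, w = len(edge), len(edge[0])
    d = [[0 if edge[y][x] else big for x in range(w)] for y in range(h)]
    # forward pass: propagate from upper/left neighbours
    for y in range(h):
        for x in range(w):
            if x > 0:
                d[y][x] = min(d[y][x], d[y][x - 1] + 3)
            if y > 0:
                d[y][x] = min(d[y][x], d[y - 1][x] + 3)
            if x > 0 and y > 0:
                d[y][x] = min(d[y][x], d[y - 1][x - 1] + 4)
            if x < w - 1 and y > 0:
                d[y][x] = min(d[y][x], d[y - 1][x + 1] + 4)
    # backward pass: propagate from lower/right neighbours
    for y in range(h - 1, -1, -1):
        for x in range(w - 1, -1, -1):
            if x < w - 1:
                d[y][x] = min(d[y][x], d[y][x + 1] + 3)
            if y < h - 1:
                d[y][x] = min(d[y][x], d[y + 1][x] + 3)
            if x < w - 1 and y < h - 1:
                d[y][x] = min(d[y][x], d[y + 1][x + 1] + 4)
            if x > 0 and y < h - 1:
                d[y][x] = min(d[y][x], d[y + 1][x - 1] + 4)
    return d
```

The resulting distance map gives every particle an immediate measure of how far it sits from the nearest detected boundary point, which is what makes initialization and tracking cheap.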
Article
Full-text available
This work [9] aims at determining dense motion and disparity fields given a stereoscopic sequence of images for the construction of stereo interpolated images. At each time instant the two dense motion fields, for the left and the right sequences, and the disparity field of the next stereoscopic pair are jointly estimated. The disparity field of the current stereoscopic pair is considered as known. The disparity field of the first stereoscopic pair is estimated separately. For both problems multi-scale iterative relaxation algorithms are used. Stereo occlusions and motion occlusions/disclosures are detected using error confidence measures. For the reconstruction of intermediate views a disparity compensated linear interpolation algorithm is used. Results are given for real stereoscopic data.
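The disparity-compensated linear interpolation of an intermediate view can be illustrated on a single scanline. This is a simplified sketch that omits the occlusion handling described in the abstract:

```python
def interpolate_view(left, right, disparity, alpha=0.5):
    """Disparity-compensated linear interpolation of an intermediate view (1-D sketch).

    `left` and `right` are 1-D scanlines; `disparity[x]` maps left pixel x
    to right pixel x - disparity[x]. The virtual view at position `alpha`
    (0 = left camera, 1 = right camera) samples both images along the
    disparity vector and blends them linearly.
    """
    n = len(left)
    out = []
    for x in range(n):
        d = disparity[x]
        xl = x + round(alpha * d)          # sample position in the left image
        xr = x - round((1 - alpha) * d)    # sample position in the right image
        if 0 <= xl < n and 0 <= xr < n:
            out.append((1 - alpha) * left[xl] + alpha * right[xr])
        else:
            out.append(left[x])  # fall back near the image border
    return out
```

With consistent disparities, the blended view reproduces the scene as it would appear from a camera halfway between the two.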
Chapter
Full-text available
A look at the Middlebury optical flow benchmark [5] reveals that nowadays variational methods yield the most accurate optical flow fields between two image frames. In this work we propose an improved variant of the original duality-based TV-L1 optical flow algorithm in [31] and provide implementation details. This formulation can preserve discontinuities in the flow field by employing total variation (TV) regularization. Furthermore, it offers robustness against outliers by applying the robust L1 norm in the data fidelity term. Our contributions are as follows. First, we propose to perform a structure-texture decomposition of the input images to get rid of violations of the optical flow constraint due to illumination changes. Second, we propose to integrate a median filter into the numerical scheme to further increase the robustness to sampling artefacts in the image data. We experimentally show that very precise and robust estimation of optical flow can be achieved with a variational approach in real time. The numerical scheme and the implementation are described in a detailed way, which enables reimplementation of this high-end method.
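The median-filtering step integrated into the numerical scheme can be sketched for a single flow component. A generic 3x3 spatial median is assumed here for illustration; it suppresses flow outliers while keeping motion discontinuities sharp:

```python
import statistics

def median_filter_flow(u, radius=1):
    """Spatial median filtering of one optical-flow component.

    `u` is a 2-D list holding one component (e.g. horizontal) of the
    flow field. With radius=1 this is a 3x3 median, truncated at the
    image border. A pure-Python sketch of this single step, not the
    full TV-L1 algorithm.
    """
    h, w = len(u), len(u[0])
    out = [[0.0] * w for _ in range(h)]
    for y in range(h):
        for x in range(w):
            # gather the (border-clipped) neighbourhood and take its median
            window = [u[yy][xx]
                      for yy in range(max(0, y - radius), min(h, y + radius + 1))
                      for xx in range(max(0, x - radius), min(w, x + radius + 1))]
            out[y][x] = statistics.median(window)
    return out
```

In the full algorithm both flow components are filtered after each warping iteration, which is where the robustness to sampling artefacts comes from.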
Article
Full-text available
The quantitative evaluation of optical flow algorithms by Barron et al. (1994) led to significant advances in performance. The challenges for optical flow algorithms today go beyond the datasets and evaluation methods proposed in that paper. Instead, they center on problems associated with complex natural scenes, including nonrigid motion, real sensor noise, and motion discontinuities. We propose a new set of benchmarks and evaluation methods for the next generation of optical flow algorithms. To that end, we contribute four types of data to test different aspects of optical flow algorithms: (1) sequences with nonrigid motion where the ground-truth flow is determined by tracking hidden fluorescent texture, (2) realistic synthetic sequences, (3) high frame-rate video used to study interpolation error, and (4) modified stereo sequences of static scenes. In addition to the average angular error used by Barron et al., we compute the absolute flow endpoint error, measures for frame interpolation error, improved statistics, and results at motion discontinuities and in textureless regions. In October 2007, we published the performance of several well-known methods on a preliminary version of our data to establish the current state of the art. We also made the data freely available on the web at http://vision.middlebury.edu/flow/. Subsequently a number of researchers have uploaded their results to our website and published papers using the data. A significant improvement in performance has already been achieved. In this paper we analyze the results obtained to date and draw a large number of conclusions from them. Keywords: Optical flow, Survey, Algorithms, Database, Benchmarks, Evaluation, Metrics
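The two headline measures, average endpoint error (EPE) and average angular error (AAE), can be computed as follows. Lifting the 2-D flow vectors to 3-D directions (u, v, 1) follows the convention of Barron et al., which keeps the angular measure defined even for zero flow:

```python
import math

def endpoint_error(u, v, gt_u, gt_v):
    """Average endpoint error (EPE): mean Euclidean distance between
    estimated and ground-truth flow vectors, given as parallel lists."""
    errs = [math.hypot(a - c, b - d) for a, b, c, d in zip(u, v, gt_u, gt_v)]
    return sum(errs) / len(errs)

def angular_error(u, v, gt_u, gt_v):
    """Average angular error (AAE, degrees) in the style of Barron et al.

    Each flow vector (u, v) is lifted to the 3-D direction (u, v, 1);
    the error is the angle between estimated and ground-truth directions.
    """
    total = 0.0
    for a, b, c, d in zip(u, v, gt_u, gt_v):
        dot = a * c + b * d + 1.0
        n1 = math.sqrt(a * a + b * b + 1.0)
        n2 = math.sqrt(c * c + d * d + 1.0)
        # clamp for numerical safety before acos
        cosang = max(-1.0, min(1.0, dot / (n1 * n2)))
        total += math.degrees(math.acos(cosang))
    return total / len(u)
```

A perfect estimate yields 0 for both measures; a flow vector of (3, 4) against ground truth (0, 0) contributes an endpoint error of 5 pixels.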
Article
Stereo matching is one of the most active research areas in computer vision. While a large number of algorithms for stereo correspondence have been developed, relatively little work has been done on characterizing their performance. In this paper, we present a taxonomy of dense, two-frame stereo methods. Our taxonomy is designed to assess the different components and design decisions made in individual stereo algorithms. Using this taxonomy, we compare existing stereo methods and present experiments evaluating the performance of many different variants. In order to establish a common software platform and a collection of data sets for easy evaluation, we have designed a stand-alone, flexible C++ implementation that enables the evaluation of individual components and that can easily be extended to include new algorithms. We have also produced several new multi-frame stereo data sets with ground truth and are making both the code and data sets available on the Web. Finally, we include a comparative evaluation of a large set of today's best-performing stereo algorithms.