Figure 8 - uploaded by Andreas Birk
Components of the camera housing.


Source publication
Article
Full-text available
A stereo vision system for deep-sea operations is presented. The system consists of cameras in pressure bottles, which are daisy-chained to a computer bottle. The system has substantial computation power for on-board stereo processing as well as for further computer vision methods to support autonomous intelligent functions, e.g., object recognition...

Contexts in source publication

Context 1
... shown in detail in [22], the model is not only much more convenient to use than state-of-the-art methods that require in-situ calibration, it is also more accurate and leads to the high-quality rectification required for, e.g., stereo processing. The camera bottle is sealed with a standard double O-ring design for the rear cap and for the front cap with the window mount (Fig. 8). Furthermore, all contact surfaces, e.g., the flanges of the caps and the ends of the pressure cylinder, are machined to a glossy finish (Ra 0.4). ...
Context 2
... the plastic shim in front of the camera (Fig. 8) is only used to prevent the aluminium clamp from scratching the glass; it has no sealing effect. When submerged, the water presses the window against the front cap and its O-ring. ...
Context 3
... in the following, the main modeling steps and the boundary conditions are briefly discussed. The main components of the camera housing are shown in Fig. 8. The main cylinder has a length of 265 mm and an inner diameter of 95 mm, which are determined by the dimensions of the camera, its lens, and the cabling from the bulkhead connectors at the rear cap. ...
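The excerpt gives the cylinder's inner diameter but not its wall thickness or depth rating, which the paper determines through detailed modeling. Purely as a back-of-the-envelope companion (the 10 mm wall thickness and 6000 m depth below are illustrative assumptions, not values from the paper), the thin-wall hoop-stress formula sigma = p * r / t can be evaluated:

```python
# Back-of-the-envelope hoop stress for a cylindrical pressure housing.
# The 95 mm inner diameter is from the excerpt; the 10 mm wall thickness
# and 6000 m depth are ASSUMPTIONS for illustration, not values from the paper.
RHO_SEAWATER = 1025.0   # seawater density, kg/m^3
G = 9.81                # gravitational acceleration, m/s^2

def hoop_stress_mpa(depth_m: float, inner_diameter_m: float, wall_m: float) -> float:
    """Thin-wall approximation: sigma = p * r / t (pressure magnitude)."""
    pressure_pa = RHO_SEAWATER * G * depth_m       # hydrostatic pressure
    radius_m = inner_diameter_m / 2.0
    return pressure_pa * radius_m / wall_m / 1e6   # Pa -> MPa

sigma = hoop_stress_mpa(6000.0, 0.095, 0.010)
print(f"approx. hoop stress: {sigma:.0f} MPa")
```

A real external-pressure design additionally requires thick-wall (Lame) equations and buckling analysis, which is why the paper relies on finite-element modeling rather than such hand estimates.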

Citations

... Luczynski et al. [52] proposed a method for improving stereo-imaging hardware for deep-sea operations. The system provides computation power for onboard stereo-vision processing and also for computer-vision tasks such as inspection, object recognition, mapping, navigation, and intervention. ...
Preprint
Full-text available
In recent years, underwater exploration for deep-sea resource utilization and development has attracted considerable interest. In an underwater environment, the obtained images and videos undergo several types of quality degradation resulting from light absorption and scattering, low contrast, color deviation, blurred details, and nonuniform illumination. Therefore, the restoration and enhancement of degraded images and videos are critical. Numerous techniques of image processing, pattern recognition and computer vision have been proposed for image restoration and enhancement, but many challenges remain. This survey presents a comparison of the most prominent approaches in underwater image processing and analysis. It also discusses an overview of the underwater environment with a broad classification into enhancement and restoration techniques and introduces the main underwater image degradation reasons in addition to the underwater image model. The existing underwater image analysis techniques, methods, datasets, and evaluation metrics are presented in detail. Furthermore, the existing limitations are analyzed, which are classified into image-related and environment-related categories. In addition, the performance is validated on images from the UIEB dataset for qualitative, quantitative, and computational time assessment. Areas in which underwater images have recently been applied are briefly discussed. Finally, recommendations for future research are provided and the conclusion is presented.
... Hardware-based underwater imaging methods reduce scattering and obtain clear underwater images mainly by improving imaging sensors or systems. The methods of this kind include the use of polarizers [8][9][10][11][12][13], lasers [14], deep-sea underwater camera [15], stereo camera [16][17][18][19][20], etc. In what follows, we introduce the representative hardware-based methods. ...
... In addition, histogram equalization is used to enhance the corner regions of the image. To improve stereo-imaging hardware in the deep sea, Luczynski et al. proposed a stereo imaging system that serves as the primary sensor for deep-sea operations [20]. Numerous factors are carefully considered in the stereo vision system. ...
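The citation above mentions histogram equalization as an enhancement step. As a minimal illustration of the general technique (not the cited authors' exact pipeline), global histogram equalization of an 8-bit grayscale image can be sketched in a few lines of NumPy:

```python
import numpy as np

def equalize_hist(img: np.ndarray) -> np.ndarray:
    """Global histogram equalization for an 8-bit grayscale image."""
    hist = np.bincount(img.ravel(), minlength=256)
    cdf = hist.cumsum()
    cdf_min = cdf[cdf > 0][0]                     # first nonzero CDF value
    # Stretch the CDF to span [0, 255]; clip guards the unused low bins.
    lut = np.clip(np.round((cdf - cdf_min) * 255.0 / (cdf[-1] - cdf_min)),
                  0, 255).astype(np.uint8)
    return lut[img]

# Low-contrast example: intensities crowded around 110 get spread out.
rng = np.random.default_rng(0)
img = np.clip(rng.normal(110, 5, (64, 64)), 0, 255).astype(np.uint8)
out = equalize_hist(img)
print(img.min(), img.max(), "->", out.min(), out.max())
```

Underwater pipelines more often use local variants such as CLAHE, which adapt the mapping per image tile instead of globally.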
Article
Full-text available
Cameras are integrated with various underwater vision systems for underwater object detection and marine biological monitoring. However, underwater images captured by cameras rarely achieve the desired visual quality, which may affect their further applications. Various underwater vision enhancement technologies have been proposed to improve the visual quality of underwater images in the past few decades, which is the focus of this paper. Specifically, we review the theory of underwater image degradations and the underwater image formation models. Meanwhile, this review summarizes various underwater vision enhancement technologies and reports the existing underwater image datasets. Further, we conduct extensive and systematic experiments to explore the limitations and superiority of various underwater vision enhancement methods. Finally, the recent trends and challenges of underwater vision enhancement are discussed. We hope this paper can serve as a reference source for future study and promote the development of this research field.
... Reference [41] presented a stereo vision system for deep-sea operations. The system comprises cameras in pressure bottles that are daisy-chained to a computer bottle. ...
Article
Full-text available
The inspection-class Remotely Operated Vehicles (ROVs) are crucial in underwater inspections. Their prime function is to replace humans during risky subaquatic operations. These vehicles gather videos from underwater scenes that are sent online to a human operator who provides control. Furthermore, these videos are used for analysis. This demands an RGB camera operating at a close distance from the observed objects. Thus, to obtain a detailed depiction, the vehicle should move with a constant speed and at a measured distance from the bottom. As very few inspection-class ROVs possess navigation systems that facilitate these requirements, this study had the objective of designing a vision-based control method to compensate for this limitation. To this end, a stereo vision system and image-feature matching and tracking techniques were employed. As these tasks are challenging in the underwater environment, we carried out analyses aimed at finding fast and reliable image-processing techniques. The analyses, through a sequence of experiments designed to test effectiveness, were carried out in a swimming pool using a VideoRay Pro 4 vehicle. The results indicate that the method under consideration enables automatic control of the vehicle, given that the image features are present in stereo-pair images as well as in consecutive frames captured by the left camera.
... When designing a stereo camera system, there are a number of factors to consider. Parameters like sensor type, lens, interface, and baseline selection can be modelled to match the requirements specific to the application [17]. However, many factors may change in the development process, so two design principles are highlighted in the paragraphs below. ...
Preprint
Full-text available
Autonomous Underwater Vehicles (AUVs) are becoming increasingly important for different types of industrial applications. The generally high cost of AUVs restricts access to them and therefore advances in research and technological development. However, recent advances have led to lower-cost commercially available Remotely Operated Vehicles (ROVs), which present a platform that can be enhanced to enable a high degree of autonomy, similar to that of a high-end AUV. In this article, we present how a low-cost commercial-off-the-shelf ROV can be used as a foundation for developing versatile and affordable AUVs. We introduce the required hardware modifications to obtain a system capable of autonomous operations as well as the necessary software modules. Additionally, we present a set of use cases exhibiting the versatility of the developed platform for intervention and mapping tasks.
... Stereo matching is the process of obtaining depth information from stereo image pairs of the same scene, which is essential for Autonomous Driving [1], 3D Reconstruction and Mapping [2], Human-Computer Interaction [3], Marine Science and Systems [4], Planetary Exploration [5], Unmanned Aerial Vehicles (UAV) [6], or Person Re-identification [7,8]. Compared with expensive lidar equipment, stereo matching is convenient and highly efficient. ...
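The passage above defines stereo matching as recovering depth from an image pair. A toy winner-take-all block matcher, assuming a rectified grayscale pair and using sum-of-absolute-differences (SAD) costs, illustrates the basic idea (the cited neural networks are far more sophisticated, but optimize the same disparity search):

```python
import numpy as np

def box_sum(a: np.ndarray, win: int) -> np.ndarray:
    """Sum each value over its win x win neighbourhood (zero-padded borders)."""
    p = win // 2
    ap = np.pad(a, p).astype(np.float64)
    h, w = a.shape
    out = np.zeros((h, w))
    for dy in range(win):
        for dx in range(win):
            out += ap[dy:dy + h, dx:dx + w]
    return out

def sad_disparity(left: np.ndarray, right: np.ndarray,
                  max_disp: int = 8, win: int = 3) -> np.ndarray:
    """Winner-take-all disparity: for each pixel pick the shift d that
    minimises the windowed SAD between left[x] and right[x - d]."""
    h, w = left.shape
    cost = np.full((max_disp + 1, h, w), np.inf)
    L, R = left.astype(np.float64), right.astype(np.float64)
    for d in range(max_disp + 1):
        ad = np.abs(L[:, d:] - R[:, :w - d])   # per-pixel absolute difference
        cost[d, :, d:] = box_sum(ad, win)      # aggregate over a win x win window
    return cost.argmin(axis=0)                 # lowest-cost disparity per pixel

# Synthetic rectified pair: the right view is the left view shifted by 4 px.
rng = np.random.default_rng(1)
left = rng.integers(0, 256, (20, 40)).astype(np.float64)
true_d = 4
right = np.zeros_like(left)
right[:, :-true_d] = left[:, true_d:]
disp = sad_disparity(left, right)
print("median interior disparity:", np.median(disp[5:-5, 10:-10]))
```

Depth then follows from disparity as z = f * b / d with focal length f and baseline b, which is why baseline selection matters in the camera-design discussion above.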
Article
Full-text available
Stereo matching is an important research field of computer vision. Due to the dimensionality of cost aggregation, current neural-network-based stereo methods find it difficult to trade off speed and accuracy. To this end, we integrate fast 2D stereo methods with accurate 3D networks to improve performance and reduce running time. We leverage a 2D encoder-decoder network to generate a rough disparity map and construct a disparity range to guide the 3D aggregation network, which significantly improves accuracy and reduces the computational cost. We use a stacked hourglass structure to refine the disparity from coarse to fine. We evaluated our method on three public datasets. According to the official KITTI website results, our network can generate an accurate result in 80 ms on a modern GPU. Compared to other 2D stereo networks (AANet, DeepPruner, FADNet, etc.), our network achieves a large improvement in accuracy. Meanwhile, it is significantly faster than other 3D stereo networks (5× faster than PSMNet, 7.5× than CSN, and 22.5× than GANet), demonstrating the effectiveness of our method.
... The object used for the experiments here (Figure 10) is a test structure in the form of a mock-up panel for trials in the context of deep-sea oil- and gas-production (OGP) [18], which was used in the EU project "Effective Dexterous ROV Operations in Presence of Communications Latencies (DexROV)". In DexROV, the number of robot operators required offshore (Mediterranean Sea, offshore of Marseille, France) was reduced, thus lowering cost and inconvenience, by facilitating offshore OGP operations from an onshore control center (in Brussels, Belgium) via a satellite communication link and by reducing the gap between low-level tele-operation and full autonomy, among others by enabling machine perception on board the Remotely Operated Vehicle (ROV) itself [43][44][45][46]. In the following experiments, the model of the test structure is shown in a top-down view, which corresponds to the scenario when the ROV is in the initial approach phase, i.e., when sonar is used to localize the target structure from above. ...
Article
Full-text available
Sonars are essential for underwater sensing as they can operate over extended ranges and in poor visibility conditions. The use of a synthetic aperture is a popular approach to increase the resolution of sonars, i.e., the sonar with its N transducers is positioned at k places to generate a virtual sensor with kN transducers. The state of the art for synthetic aperture sonar (SAS) is strongly coupled to constraints, especially with respect to the trajectory of the placements and the need for good navigation data. In this article, we introduce an approach to SAS using registration of scans from single arrays, i.e., at individual poses of arbitrary trajectories, hence avoiding the need for the navigation data of conventional SAS systems. The approach is introduced here for the near field using the coherent phase information of sonar scans. A Delay and Sum (D&S) beamformer (BF) is used, which operates directly in pixel/voxel form on a Cartesian grid supporting the registration. It is shown that this pixel/voxel-based registration and the coherent processing of several scans forming a synthetic aperture yield substantial improvements of the image resolution. The experimental evaluation is done with an advanced simulation tool generating realistic 2D sonar array data, i.e., with simulations of a linear 1D antenna reconstructing 2D images. For the image registration of the raw sonar scans, a robust implementation of a spectral method is presented. Furthermore, analyses with respect to the trajectories of the sensor locations are provided to remedy possible grating lobes due to gaps between the positions of the transmitter devices.
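The abstract describes a Delay and Sum (D&S) beamformer that operates directly on a Cartesian pixel grid. A minimal near-field D&S sketch (a single simulated point scatterer, one transmitter, and Gaussian pulse envelopes; all parameter values are illustrative assumptions, not taken from the article) shows the core idea of shifting each channel by the expected round-trip delay for a pixel and summing:

```python
import numpy as np

C = 1500.0    # speed of sound in water (m/s), assumed
FS = 200e3    # sampling rate (Hz), assumed

# Linear array of 8 receivers along x, transmitter at the origin (assumed geometry).
elems = np.stack([np.linspace(-0.14, 0.14, 8), np.zeros(8)], axis=1)
tx = np.array([0.0, 0.0])
target = np.array([0.3, 2.0])          # simulated point scatterer (x, z), metres

# Simulate echoes: each channel receives a Gaussian pulse at its round-trip delay.
n = 1200
t = np.arange(n) / FS
sig = np.zeros((len(elems), n))
for i, e in enumerate(elems):
    tau = (np.linalg.norm(target - tx) + np.linalg.norm(target - e)) / C
    sig[i] = np.exp(-0.5 * ((t - tau) / 20e-6) ** 2)

# Delay and Sum on a Cartesian (x, z) grid: shift every channel by the
# expected round-trip delay for the pixel, then sum.
xs = np.linspace(-0.5, 0.5, 41)
zs = np.linspace(1.5, 2.5, 41)
img = np.zeros((len(zs), len(xs)))
for iz, z in enumerate(zs):
    for ix, x in enumerate(xs):
        p = np.array([x, z])
        acc = 0.0
        for i, e in enumerate(elems):
            tau = (np.linalg.norm(p - tx) + np.linalg.norm(p - e)) / C
            k = int(round(tau * FS))
            if 0 <= k < n:
                acc += sig[i, k]
        img[iz, ix] = abs(acc)

iz, ix = np.unravel_index(img.argmax(), img.shape)
print(f"image peak at x={xs[ix]:.3f} m, z={zs[iz]:.3f} m")
```

The image peaks at the grid pixel closest to the simulated scatterer; processing carrier-modulated signals with their coherent phase, as in the article, sharpens the lateral resolution well beyond this envelope-only toy.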
Article
Single underwater image enhancement remains a challenging ill-posed problem, even with advanced deep learning methods, due to the significant information degeneration and various irrelevant contents. Current deep-learning-based underwater image enhancement methods only consider using a single clear image as a positive feature for guiding the training of the enhancement network. However, the limited amount of helpful information constrains the network performance, and irrelevant contents consume many bits. Therefore, it is crucial to efficiently utilize cross-view neighboring features and provide corresponding relevant information for underwater enhancement. To address the challenges of degraded underwater images, we propose a novel cross-domain enhancement network (CVE-Net) that uses high-efficiency feature alignment to better utilize neighboring features. We employ a self-built database to optimize the helpful information and develop a feature alignment module (FAM) to adapt the temporal features. The dual-branch attention block is designed to handle different types of information and give more weight to essential features. Experiments demonstrate that CVE-Net outperforms state-of-the-art (SOTA) underwater vision enhancement methods in terms of both qualitative and quantitative results and significantly boosts underwater image quality, achieving a PSNR of 28.28 dB, which is 25% higher than Ucolor on the multi-view dataset. CVE-Net improves image quality while maintaining a good complexity-performance trade-off.
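The abstract quantifies image quality with PSNR (28.28 dB). PSNR is the standard fidelity metric 10·log10(MAX²/MSE); a small self-contained sketch (the toy images below are illustrative, not from the paper's datasets):

```python
import numpy as np

def psnr_db(ref: np.ndarray, test: np.ndarray, max_val: float = 255.0) -> float:
    """Peak signal-to-noise ratio (dB) between a reference and a test image."""
    mse = np.mean((ref.astype(np.float64) - test.astype(np.float64)) ** 2)
    if mse == 0.0:
        return float("inf")            # identical images
    return 10.0 * np.log10(max_val ** 2 / mse)

# Toy check: unit-variance additive noise on an 8-bit image gives roughly 48 dB.
rng = np.random.default_rng(0)
ref = rng.integers(0, 256, (64, 64)).astype(np.float64)
noisy = ref + rng.normal(0.0, 1.0, ref.shape)
print(f"{psnr_db(ref, noisy):.1f} dB")
```

Enhancement papers typically pair PSNR with perceptual metrics, since a high PSNR alone does not guarantee visually pleasing underwater colors.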
Article
Camber and toe-in are important alignment parameters of vehicle wheels, which determine operational stability and safety during vehicle driving. An automatic and accurate measurement method is proposed for camber and toe-in alignment without vision-system calibration. Different from the previous target-based vision measurement (TVM) based on maximum likelihood estimation (MLE), the RVM-DBSC model is proposed to achieve automatic measurement of camber and toe-in alignment. Furthermore, the camber and toe-in measurement model is established by the maximum a posteriori (MAP) estimate and the alternating direction method of multipliers (ADMM), rather than the traditional MLE. Comparative experiments are performed with a verification instrument, indicating that the proposed method achieves effective and highly accurate camber and toe-in measurement.