Fig 1 - uploaded by Christian W. Frey
Side-scan sonar beams (schematically).  

Source publication
Article
Full-text available
Navigating an autonomous underwater vehicle (AUV) is a difficult task. Dead-reckoning navigation is subject to unbounded error due to sensor inaccuracies and is inapplicable for mission durations longer than a few minutes. To limit the estimation errors a global referencing method has to be used. SLAM (Simultaneous Localization And Mapping) is such...

Contexts in source publication

Context 1
... Side-scan Sonars: Side-scan (or side-looking) sonars obtain an image of the seafloor by sending out a sonar pulse (a so-called chirp) and recording the echo intensity over time. They usually have a very narrow beam (≈ 1°) in the horizontal plane perpendicular to the traveling direction of the AUV and a wide beam (≈ 50°) in the vertical plane (see Figure 1). The result is a one-dimensional scan line. ...
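The scan-line formation described in this excerpt can be sketched with a short toy model. The code below is only illustrative and not from the source publication: it assumes a flat seafloor, Lambertian backscatter and simple spherical spreading loss, and all names and numbers are assumptions. Recording intensity per slant-range bin yields the one-dimensional scan line, with zeros until the first bottom return at the vehicle altitude.

```python
import math

def scanline(altitude, max_range, n_bins, reflectivity=1.0):
    """Simulate one idealized side-scan ping over a flat seafloor.

    Each bin holds the echo intensity for one slant-range interval.
    Bins closer than the vehicle altitude stay zero (water column).
    Backscatter is modeled as Lambertian (proportional to the squared
    sine of the grazing angle) with 1/r^2 spreading loss; this is a
    deliberately crude model for illustration only.
    """
    bin_size = max_range / n_bins
    line = []
    for i in range(n_bins):
        r = (i + 0.5) * bin_size            # slant range at bin center
        if r < altitude:
            line.append(0.0)                # beam has not hit the seafloor yet
        else:
            sin_grazing = altitude / r      # flat floor: sin(grazing) = h / r
            line.append(reflectivity * sin_grazing**2 / r**2)
    return line

# 10 m altitude, 50 m slant-range window, 100 range bins (hypothetical values)
line = scanline(altitude=10.0, max_range=50.0, n_bins=100)
```

The first bins stay at zero (pure water column), and the intensity decays monotonically past the nadir return, which is the characteristic shape of a scan line over featureless seabed.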
Context 2
... is another property of side-scan echoes that describes the effect that one may observe overlapping echoes from multiple parts of the surface that add up to a higher intensity value. This principle is shown in Figures 11 and 12. ...
Context 3
... an example, Figure 13 illustrates the solution space of possible surface patches that all explain a given backscatter intensity. Shown is a detail of a set of surface patches (blue) that may have produced the observed echo intensity (green), assuming a sonar signal with perfect time-varying gain correction, isovelocity sound wave propagation and purely Lambertian surface scattering from a homogeneously sedimented seafloor, as well as a knife-shaped sonar beam. ...
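The ambiguity that Figure 13 illustrates can be reproduced numerically. Under the same assumptions (perfect time-varying gain, isovelocity propagation, purely Lambertian scattering), the observed intensity pins down only the local grazing angle, so a whole one-parameter family of (depth, slope) patches along the slant-range arc explains the same echo. The sketch below is hypothetical code, not taken from the source:

```python
import math

def candidate_patches(observed, slant_range, n=50):
    """Enumerate (depth, slope) pairs that all explain one echo sample.

    Assumes perfect time-varying-gain correction, isovelocity sound
    propagation and purely Lambertian backscatter I = sin(phi)^2,
    where phi is the grazing angle between beam and surface patch.
    The intensity fixes phi, but not where on the slant-range arc the
    patch lies: any depth works if the patch is tilted so the local
    grazing angle equals phi.
    """
    phi = math.asin(math.sqrt(observed))        # grazing angle forced by I
    patches = []
    for i in range(1, n):
        depth = slant_range * i / n             # candidate depth on the arc
        beam_elev = math.asin(depth / slant_range)  # beam depression angle
        slope = phi - beam_elev                 # patch tilt needed to reach phi
        patches.append((depth, slope))
    return patches

# every patch below reproduces the same observed intensity of 0.25
patches = candidate_patches(observed=0.25, slant_range=30.0)
```

Each returned pair yields the same backscatter intensity, which is exactly why a single side-scan sample cannot determine the seafloor shape on its own.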
Context 4
... SLAM will be used to support navigation and to facilitate exploration. The concept is outlined in Figure 14. The proposed SLAM concept is based on an extensible fusion architecture which allows the integration of additional sensors such as multi-beam sonar or gated viewing cameras. ...

Similar publications

Conference Paper
Full-text available
Inverse-depth parameterization can successfully deal with the feature initialization problem in monocular simultaneous localization and mapping applications. However, it is redundant, and when multiple landmarks are initialized from the same image, it fails to enforce the “common origin” constraint. The authors propose two new variants that...
Conference Paper
Full-text available
This paper presents an approach to perform Simultaneous Localization and Mapping (SLAM) in underwater environments using a Mechanically Scanned Imaging Sonar (MSIS) not relying on the existence of features in the environment. The proposal has to deal with the particularities of the MSIS in order to obtain range scans while correcting the motion ind...
Conference Paper
Full-text available
SLAM algorithms do not produce consistent maps for large areas, mainly due to uncertainties that become prohibitive as the scenario grows larger and to the increase in computational cost. The use of local maps has been demonstrated to be well suited for mapping large environments, reducing computational cost and improving map consistency....
Article
Full-text available
In this paper we present work integrating the robust sequence-based recognition capabilities of the RatSLAM system with the accurate 3D metric properties of a multicamera visual odometry system. The RatSLAM system provides scene sequence recognition capabilities that are not feature dependent, while the multicamera visual odometry system provides a...
Conference Paper
Full-text available
In this work we present a convergence analysis of the pose graph optimization problem that arises in the context of mobile robot localization and mapping. The analysis is performed under some simplifying assumptions on the structure of the measurement covariance matrix and provides non-trivial results on the aspects affecting convergence in nonli...

Citations

... In the underwater domain, some autonomous underwater vehicles using terrain-aided navigation techniques have also achieved high positioning accuracy. In a 50 km underwater trial completed in the open sea between the Norwegian coast and Bering Island, the HUGIN AUV developed by the Norwegian Defence Research Establishment achieved a position estimate with 4 meters of error at the end of the trial by using underwater terrain matching as the positioning update [6], which reveals the effectiveness of using terrain information as a navigation reference. Some underwater SLAM approaches using side-scan sonar [7], forward-looking sonar and multi-beam sonar [8] have been reported [9]. In ground environments, cameras, LiDAR and lasers can capture rich enough features or make accurate range and bearing measurements to operate SLAM with the aid of inertial navigation sensors [10][11]. ...
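The error-bounding effect of a positioning update, as in the HUGIN trial above, can be illustrated with a toy drift model. All numbers and names below are hypothetical: a constant sensor bias makes the dead-reckoning error grow without bound, while a periodic global fix (terrain matching or SLAM) keeps it bounded.

```python
def worst_error(steps, bias, fix_every=None):
    """Worst-case dead-reckoning error under a constant sensor bias.

    Without a global reference the error grows without bound (linearly
    here; like sqrt(t) for zero-mean noise).  A position update every
    `fix_every` steps caps it near bias * fix_every, which is the
    motivation for terrain matching or SLAM as a positioning update.
    """
    err, worst = 0.0, 0.0
    for t in range(1, steps + 1):
        err += bias                        # each step accumulates sensor bias
        worst = max(worst, abs(err))       # record error before any fix
        if fix_every and t % fix_every == 0:
            err = 0.0                      # global fix absorbs the drift
    return worst

unaided = worst_error(1000, bias=0.01)               # grows with mission length
aided = worst_error(1000, bias=0.01, fix_every=100)  # stays bounded
```

With these illustrative numbers the unaided error reaches ten times the aided one, mirroring the qualitative argument that dead reckoning alone is unusable beyond short missions.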
Chapter
A navigation approach with high positioning accuracy and extensive applicability is needed by many underwater and indoor tasks. This paper presents a regional navigation approach using regularly distributed artificial magnetic beacons. By placing magnetic beacons in the target area according to a certain rule, a repetitive regional magnetic field can be generated. When an autonomous vehicle navigates in this region, the magnetic matching approach can be used to get its real-time position relative to the surrounding magnetic beacons. These magnetic beacons can then be treated as landmarks for the vehicle to perform the simultaneous localization and mapping (SLAM) algorithm. To achieve the highest possible matching accuracy, a coalition game is utilized to determine the optimal beacon-distribution scheme. As the distribution of the magnetic field is almost unaffected by nonmagnetic structures, this approach can be easily used in many indoor and underwater environments. Using a real regional magnetic survey dataset as the background field, a simulation experiment is conducted that verifies the performance of the proposed approach in detail.
... Note that for each bin of sidescan, since we only know the slant range but not the grazing angle, we are essentially summing returns from all the angles (within the vertical beam width) within that range interval, where most of the returns are zero because they correspond to returns from the water column. However, in certain circumstances, there can be multiple surfaces at the same distance from the sonar whose returns add up to a higher intensity, which is sometimes referred to as layover [29]. In this case, there will be multiple non-zero returns from multiple angles for one sidescan bin. ...
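The binning behaviour described in this excerpt (most bins are empty water column; layover sums coincident returns) can be sketched as follows. This is a hypothetical toy model, not code from the cited work:

```python
import math

def bin_returns(scatterers, sonar, max_range, n_bins):
    """Accumulate echo intensity into slant-range bins.

    Each scatterer is (y, z, intensity); the sonar measures only the
    slant range, so two scatterers at the same distance, e.g. one on a
    ridge and one on the flat seabed, fall into the same bin and their
    returns add up (layover).  Bins with no scatterer at that range
    stay zero: they correspond to empty water column.
    """
    bins = [0.0] * n_bins
    bin_size = max_range / n_bins
    sy, sz = sonar
    for y, z, intensity in scatterers:
        r = math.hypot(y - sy, z - sz)      # slant range to the scatterer
        if r < max_range:
            bins[int(r / bin_size)] += intensity
    return bins

# two scatterers at the same 5 m slant range from a sonar at (0, 0):
# their unit intensities sum in a single bin
bins = bin_returns([(3.0, 4.0, 1.0), (5.0, 0.0, 1.0)], (0.0, 0.0), 10.0, 20)
```

Only one bin is non-zero and it holds the summed intensity, which is exactly the layover effect: the scan line alone cannot tell the two surfaces apart.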
Preprint
Recent advances in differentiable rendering, which allow calculating the gradients of 2D pixel values with respect to 3D object models, can be applied to estimation of the model parameters by gradient-based optimization with only 2D supervision. It is easy to incorporate deep neural networks into such an optimization pipeline, allowing the leveraging of deep learning techniques. This also largely reduces the requirement for collecting and annotating 3D data, which is very difficult for applications, for example when constructing geometry from 2D sensors. In this work, we propose a differentiable renderer for sidescan sonar imagery. We further demonstrate its ability to solve the inverse problem of directly reconstructing a 3D seafloor mesh from only 2D sidescan sonar data.
... Ultimately, all sensors including camera-lighting systems need to be synchronized within a unique time reference, and each image must be georeferenced (position and orientation) by fusing global positioning data with measurements of the DR sensors. Since image matching can provide accurate relative pose estimation, utilizing image matching techniques for post-processing can refine the coarse UV localization information (Elibol et al. 2011; Eustice et al. 2008; Negahdaripour and Xu 2002; Woock and Frey 2010). Furthermore, additional constraints such as loop detection and geophysical map-based correction (Gracias et al. 2013) can also be applied during post-processing to further improve the localization data. ...
Article
Full-text available
Visual systems are receiving increasing attention in underwater applications. While the photogrammetric and computer vision literature so far has largely targeted shallow water applications, recently also deep sea mapping research has come into focus. The majority of the seafloor, and of Earth’s surface, is located in the deep ocean below 200 m depth, and is still largely uncharted. Here, on top of general image quality degradation caused by water absorption and scattering, additional artificial illumination of the survey areas is mandatory that otherwise reside in permanent darkness as no sunlight reaches so deep. This creates unintended non-uniform lighting patterns in the images and non-isotropic scattering effects close to the camera. If not compensated properly, such effects dominate seafloor mosaics and can obscure the actual seafloor structures. Moreover, cameras must be protected from the high water pressure, e.g. by housings with thick glass ports, which can lead to refractive distortions in images. Additionally, no satellite navigation is available to support localization. All these issues render deep sea visual mapping a challenging task and most of the developed methods and strategies cannot be directly transferred to the seafloor in several kilometers depth. In this survey we provide a state of the art review of deep ocean mapping, starting from existing systems and challenges, discussing shallow and deep water models and corresponding solutions. Finally, we identify open issues for future lines of research.
... Woock and Frey [5] summarize the challenges of extracting depth information from SSS, which requires knowledge of sediment characteristics, surface and volume scattering properties, sound absorption and dispersion, water currents, variations in sound speed and the sonar transducer beam pattern. Assumptions must be made to simplify the methods, such as an isospeed sound velocity profile (SVP). ...
Article
Full-text available
In this article, we propose a novel data-driven approach for high-resolution bathymetric reconstruction from sidescan. Sidescan sonar intensities as a function of range do contain some information about the slope of the seabed. However, that information must be inferred. In addition, the navigation system provides the estimated trajectory, and normally, the altitude along this trajectory is also available. From these, we obtain a very coarse seabed bathymetry as an input. This is then combined with the indirect but high-resolution seabed slope information from the sidescan to estimate the full bathymetry. This sparse depth could be acquired by single-beam echo sounder, Doppler velocity log, and other bottom tracking sensors or bottom tracking algorithm from sidescan itself. In our work, a fully convolutional network is used to estimate the depth contour and its aleatoric uncertainty from the sidescan images and sparse depth in an end-to-end fashion. The estimated depth is then used together with the range to calculate the point's three-dimensional location on the seafloor. A high-quality bathymetric map can be reconstructed after fusing the depth predictions and the corresponding confidence measures from the neural networks. We show the improvement of the bathymetric map gained by using sparse depths with sidescan over estimates with sidescan alone. We also show the benefit of confidence weighting when fusing multiple bathymetric estimates into a single map.
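The confidence weighting mentioned at the end of the abstract above is not specified there; a standard choice consistent with it is inverse-variance weighting of per-cell depth estimates. The sketch below is an assumption for illustration, not the authors' actual fusion method:

```python
def fuse(estimates):
    """Inverse-variance weighted fusion of per-cell depth estimates.

    `estimates` is a list of (depth, variance) pairs for one map cell,
    e.g. predictions from several sidescan lines.  Confident (low
    variance) predictions dominate the fused depth, and the fused
    variance shrinks as estimates accumulate.
    """
    w_sum = sum(1.0 / var for _, var in estimates)
    depth = sum(d / var for d, var in estimates) / w_sum
    return depth, 1.0 / w_sum

# a confident 10 m estimate outweighs a less confident 12 m estimate
d, v = fuse([(10.0, 1.0), (12.0, 4.0)])
```

The fused depth lands closer to the low-variance estimate, and the fused variance is smaller than either input's, which is the behaviour one wants when merging overlapping bathymetric estimates into a single map.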
... In short, when sonar observes the same target from different viewpoints, there are evident differences in the images obtained. Some scholars call this the anisotropy of acoustic imaging, a term used to describe the sensitivity of intensity to viewpoint [1][2][3][4]. The non-linear intensity difference of sonar images described above seriously hinders the development of sonar image matching technology. ...
Preprint
Full-text available
In the field of deep-sea exploration, sonar is presently the only efficient long-distance sensing device. The complicated underwater environment, such as noise interference, low target intensity or background dynamics, has brought many negative effects on sonar imaging. Among them, the problem of nonlinear intensity is extremely prevalent. It is also known as the anisotropy of acoustic sensor imaging, that is, when autonomous underwater vehicles (AUVs) carry sonar to detect the same target from different angles, the intensity variation between image pairs is sometimes very large, which makes traditional matching algorithms almost ineffective. However, image matching is the basis of comprehensive tasks such as navigation, positioning, and mapping. Therefore, it is very valuable to obtain robust and accurate matching results. This paper proposes a combined matching method based on phase information and deep convolution features. It has two outstanding advantages: one is that the deep convolution features could be used to measure the similarity of the local and global positions of the sonar image; the other is that local feature matching could be performed at the key target position of the sonar image. This method does not need complex manual design, and completes the matching task of nonlinear intensity sonar images in a nearly end-to-end manner. Feature matching experiments are carried out on deep-sea sonar images captured by AUVs, and the results show that our proposal has excellent matching accuracy and robustness.
... This is due to the fact that electromagnetic wave transmissions from satellites attenuate very rapidly in the water column. This particular limitation motivates the development of other techniques for underwater navigation [3]. At present, most underwater mapping and navigation algorithms rely on acoustic sensors due to their ability to propagate efficiently in water. ...
Article
Full-text available
Underwater vision-based mapping (VbM) constructs a three-dimensional (3D) map and the robot position simultaneously out of a quasi-continuous structure-from-motion (SfM) method. This is the so-called simultaneous localization and mapping (SLAM), which might be beneficial for mapping shallow seabed features as it is free from the unnecessary parasitic returns found in sonar surveys. This paper presents a discussion resulting from a small-scale test of a 3D underwater positioning task. We analyse the settings and performance of a standard web camera used for such a task while fully submerged underwater. SLAM estimates the robot (i.e. camera) position from the constructed 3D map by reprojecting the detected features (points) into the camera scene. A marker-based camera calibration is used to eliminate refraction effects due to light propagation in the water column. To analyse the positioning accuracy, a fiducial-marker-based system – with millimetre accuracy of reprojection error – is used as the trajectory's true value (ground truth). A controlled experiment with a standard web camera running at 30 fps (frames per second) shows that such a system is capable of robustly performing an underwater navigation task. Sub-metre accuracy is achieved utilizing at least one pose per second (1 Hz).
... A multi-vehicle swarm of autonomous underwater vehicles (AUVs) and USVs has been studied by Brink et al. [38]. Localization of an AUV is a challenging task because the vehicle is not capable of receiving GPS signals; instead, vehicle navigation and localization can be done using a long baseline (LBL) acoustic positioning system with sonar sensors [39,40]. ...
Conference Paper
Swarm robotics is a field of multi-robotics in which the robots' behavior is inspired by nature. With rapid development in the field of multi-robotics and the lack of efficacy of traditional centralized control methods, decentralized nature-inspired swarm algorithms were introduced to control swarm behavior. Unmanned surface vehicles (USVs) are marine craft that can operate autonomously. Due to their potential for operating in different areas, these vehicles have been used for a variety of reasons, including patrolling, border protection, environmental monitoring and oil-spill confrontation. This paper provides a review of swarms of USVs, their applications, simulation environments and the algorithms that have been used in past and current projects.
... The problems that must be overcome in order to extract 3D information from SSS are summarised by Woock and Frey [8]. These include the need to know properties such as sediment characteristics, surface and volume scattering, absorption and dispersion, water currents, variations in sound speed, and the sonar transducer beam pattern. ...
Article
Full-text available
Sidescan sonar images are 2D representations of the seabed. The pixel location encodes distance from the sonar and along track coordinate. Thus one dimension is lacking for generating bathymetric maps from sidescan. The intensities of the return signals do, however, contain some information about this missing dimension. Just as shading gives clues to depth in camera images, these intensities can be used to estimate bathymetric profiles. The authors investigate the feasibility of using data driven methods to do this estimation. They include quantitative evaluations of two pixel‐to‐pixel convolutional neural networks trained as standard regression networks and using conditional generative adversarial network loss functions. Some interesting conclusions are presented as to when to use each training method.
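Once a depth estimate for a pixel is available (supplying the missing dimension discussed in the abstract above), placing the sample in 3D is plain right-triangle geometry. The sketch below assumes a flat local frame, and all names and parameters are illustrative, not from the cited work:

```python
import math

def ground_point(ping_pos, heading, slant_range, depth, port=True):
    """Place one sidescan sample on the seafloor once depth is known.

    A sidescan pixel encodes only slant range and along-track position;
    given an estimated depth below the sonar, the across-track offset
    follows from the right triangle sqrt(r^2 - d^2).  `heading` is the
    vehicle course in radians and `port` selects the side of the swath.
    """
    if slant_range < depth:
        return None                          # no seafloor point can match
    across = math.sqrt(slant_range**2 - depth**2)
    x, y = ping_pos
    nx, ny = -math.sin(heading), math.cos(heading)  # unit vector to port
    side = 1.0 if port else -1.0
    return (x + side * across * nx, y + side * across * ny, -depth)

# vehicle at the origin heading along +x: a 5 m slant range with 3 m
# estimated depth lands 4 m to port
p = ground_point((0.0, 0.0), 0.0, 5.0, 3.0)
```

The quality of the resulting bathymetric point is therefore entirely determined by the quality of the depth estimate, which is why the intensity-based estimation discussed in the abstract matters.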