Article (PDF available)

A Variational Framework for Underwater Image Dehazing and Deblurring


Abstract

Images captured underwater are usually degraded by low contrast, haze, and blur due to absorption and scattering, which limits their analysis and application. To address these problems, a red channel prior guided variational framework is proposed based on the complete underwater image formation model (UIFM). Unlike most existing methods, which consider only the direct transmission and backscattering components, we additionally include the forward scattering component in the UIFM. In the proposed variational framework, we incorporate the normalized total variation term together with sparse prior knowledge of the blur kernel. In addition, we estimate the blur kernel by varying the image resolution in a coarse-to-fine manner to avoid local minima. Moreover, to solve the resulting non-smooth optimization problem, we employ the alternating direction method of multipliers (ADMM) to accelerate the whole process. Experimental results demonstrate that the proposed method performs well on dehazing and deblurring. Extensive qualitative and quantitative comparisons further validate its superiority over other state-of-the-art algorithms. The code is available online at: https://github.com/Hou-Guojia.
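As a rough illustration of the red channel prior (RCP) guidance described above, the sketch below computes an RCP map and a coarse transmission estimate. The patch size, `omega`, and the normalization by the background light are common choices in RCP/DCP-style dehazing, not values taken from the paper:

```python
import numpy as np

def red_channel_prior(img, patch=15):
    """Red channel prior: like the dark channel prior, but the red channel
    is inverted because red light attenuates fastest underwater.
    img: float array in [0, 1] of shape (H, W, 3), channels ordered R, G, B.
    """
    inverted = img.copy()
    inverted[..., 0] = 1.0 - inverted[..., 0]   # invert the red channel
    per_pixel_min = inverted.min(axis=2)        # minimum over channels
    # Minimum filter over a patch x patch neighborhood
    H, W = per_pixel_min.shape
    pad = patch // 2
    padded = np.pad(per_pixel_min, pad, mode="edge")
    rcp = np.empty_like(per_pixel_min)
    for i in range(H):
        for j in range(W):
            rcp[i, j] = padded[i:i + patch, j:j + patch].min()
    return rcp

def estimate_transmission(img, background_light, omega=0.95, patch=15):
    """Coarse transmission map t(x) ~ 1 - omega * RCP of the image
    normalized by the background light (a common RCP-based estimate)."""
    norm = np.clip(img / np.maximum(background_light, 1e-6), 0.0, 1.0)
    return 1.0 - omega * red_channel_prior(norm, patch)
```

In a full pipeline, this coarse map would then be refined and fed into the variational restoration step.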
... Subsequently, the image is restored using the IFM, equipped with the TM and atmospheric light. In [8], a red channel prior (RCP)-guided variational framework is introduced to enhance the TM, and the image is restored utilizing the IFM. Generally, these methods heavily depend on hand-crafted priors and excel in dehazing outdoor images. ...
... In order to rigorously assess the efficacy of our proposed approach, we contrasted it against leading-edge methods in the domain. These encompass non-learning techniques such as NLD [7], RLP [31], MMLE [10], and UNTV [8], as well as learning-driven paradigms including ACT [32], FGAN [33], DNet [12], AOD [14], and SCNet [34]. First, input images are restored through the above-mentioned methods and the proposed method, and then restoration accuracy is compared using the widely accepted quantitative metrics, Root Mean Square Error (RMSE) and Peak Signal-to-Noise Ratio (PSNR). ...
Preprint
Full-text available
Underwater imaging presents unique challenges, notably color distortions and reduced contrast due to light attenuation and scattering. Most underwater image enhancement methods first use linear transformations for color compensation and then enhance the image. We observed that a linear transformation for color compensation is not suitable for certain images; for such images, a non-linear mapping is a better choice. Our paper introduces a unique underwater image restoration approach leveraging a streamlined convolutional neural network (CNN) for dynamic weight learning for linear and non-linear mapping. In the first phase, a classifier is applied that classifies the input images into Type-I or Type-II. In the second phase, depending on the classification, the Deep Line Model (DLM) is applied to Type-I images or the Deep Curve Model (DCM) to Type-II images. For mapping an input image to an output image, DLM creatively combines color compensation and contrast adjustment in a single step and uses deep lines for transformation, whereas DCM employs higher-order curves. Both models utilize lightweight neural networks that learn per-pixel dynamic weights based on the input image’s characteristics. Comprehensive evaluations on benchmark datasets using metrics like peak signal-to-noise ratio (PSNR) and root mean square error (RMSE) affirm our method’s effectiveness in accurately restoring underwater images, outperforming existing techniques.
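The abstract does not spell out the exact parameterization of the Deep Line Model and Deep Curve Model; purely as an illustration, the sketch below applies a per-pixel line and an iterated quadratic curve of the kind used in curve-based enhancers (e.g. Zero-DCE). The weight maps `a`, `b`, and `alpha` stand in for what the lightweight networks would predict:

```python
import numpy as np

def apply_line(img, a, b):
    """Per-pixel linear mapping out = a * img + b (Type-I images).
    a and b are per-pixel weight maps a network would predict; here they
    are plain arrays. Illustrative form only, not the paper's exact model."""
    return np.clip(a * img + b, 0.0, 1.0)

def apply_curve(img, alpha, iterations=4):
    """Iterated quadratic curve out = x + alpha * x * (1 - x), a common
    higher-order per-pixel mapping; an assumed form for the DCM (Type-II)."""
    x = img
    for _ in range(iterations):
        x = x + alpha * x * (1.0 - x)   # each step bends the tone curve
    return np.clip(x, 0.0, 1.0)
```

With `alpha > 0` the curve brightens dark pixels more than bright ones, which is why such mappings suit non-uniformly degraded images.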
... Researchers have recently proposed many underwater image enhancement methods to solve these challenges. Generally speaking, these methods can be roughly categorized as image restoration [7][8][9][10][11][12][13][14][15][16][17][18][19], image enhancement [20][21][22][23][24][25][26][27][28][29][30][31], and deep learning methods [32][33][34][35][36][37][38][39]. Specifically, image restoration requires many prior assumptions, which makes model optimization difficult. ...
... Image restoration methods restore a clear image by deducing and estimating the raw image information and inverting the degradation process by the underwater imaging model, which mainly contains underwater optical imaging [7][8][9], polarization property [10,11], prior knowledge [12-15, 19, 41], and variational model [16][17][18]. For example, the Jaffe-McGlamery imaging model [8] and the dark channel prior (DCP) [12] are implemented to improve underwater vision. ...
Article
Underwater imaging systems have evolved into essential hardware for developing and utilizing marine resources. However, the complex underwater physical environment often leads to severe quality degradation of underwater visual perception. To address these issues, we design a principal component fusion method of foreground and background, named PCFB, to enhance underwater images. Specifically, we present a color balance-guided color correction strategy that removes color distortion by equalizing the pixel values of the a and b channels of the CIELab color model. Subsequently, we apply a percentile maximum-based contrast enhancement strategy and a multilayer transmission map estimated dehazing strategy to the color-corrected image to yield the contrast-enhanced foreground and dehazed background sub-images. Finally, we employ a principal component analysis fusion method to reconstruct a high-visibility underwater image by integrating the advantages of the foreground contrast-enhanced sub-image and the background dehazed sub-image. Comprehensive experiments on three datasets demonstrate that our PCFB surpasses state-of-the-art methods both qualitatively and quantitatively. Moreover, our PCFB exhibits outstanding generalization capabilities for addressing haze and low-light images. The code is publicly available at: https://www.researchgate.net/publication/381259520 2024-PCFB.
... In recent years, various enhancement methods have been developed to address the problem of decreasing RGB image quality captured by traditional cameras in underwater environments (Jian et al., 2021), but they all have their limitations. Physical model-based methods (Akkaynak and Treibitz, 2019;Xie et al., 2021) use estimated underwater imaging model parameters to reverse the degradation process and obtain more explicit images. However, these methods heavily rely on the model's assumptions, lack generalizability, and may produce unstable results when enhancing multi-degraded images. ...
... Marques and Albu (2020) created two lighting models for detail recovery and dark removal, respectively, followed by combining these outputs through a multi-scale fusion strategy. Similarly, Xie et al. (2021) introduced a novel red channel prior guided underwater normalized total variation model to deal with underwater image haze and blur. Even though physical model-based methods can improve color bias and visibility of underwater images in some cases, they are sensitive to prior assumptions. ...
Article
Full-text available
Underwater images often suffer from various degradations, such as color distortion, reduced visibility, and uneven illumination, caused by light absorption, scattering, and artificial lighting. However, most existing methods have focused on addressing singular or dual degradation aspects, lacking a comprehensive solution to underwater image degradation. This limitation hinders the application of vision technology in underwater scenarios. In this paper, we propose a framework for enhancing the quality of multi-degraded underwater images. This framework is distinctive in its ability to concurrently address color degradation, hazy blur, and non-uniform illumination by fusing RGB and Event signals. Specifically, an adaptive underwater color compensation algorithm is first proposed, informed by an analysis of the color degradation characteristics prevalent in underwater images. This compensation algorithm is subsequently integrated with a white balance algorithm to achieve color correction. Then, a dehazing method is developed, leveraging the fusion of sharpened images and gamma-corrected images to restore blurry details in RGB images and event reconstruction images. Finally, an illumination map is extracted from the RGB image, and a multi-scale fusion strategy is employed to merge the illumination map with the event reconstruction image, effectively enhancing the details in dark and bright areas. The proposed method successfully restores color fidelity, enhances image contrast and sharpness, and simultaneously preserves details of the original scene. Extensive experiments on the public dataset DAVIS-NUIUIED and our dataset DAVIS-MDUIED demonstrate that the proposed method outperforms state-of-the-art methods in enhancing multi-degraded underwater images.
... According to the Jaffe-McGlamery model, the energy captured by the camera is a linear superposition of three components: direct attenuation, forward scattering, and backward scattering [55][56][57], as shown in Fig. 1. To avoid an ill-posed problem, the forward scattering component, which has little effect, is removed. ...
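The simplified model that results from dropping the forward scattering term can be sketched as follows; the exponential transmission t(x) = exp(-beta * d(x)) and all parameter values are illustrative assumptions, not calibrated quantities:

```python
import numpy as np

def simulate_underwater(J, beta, depth, B):
    """Simplified Jaffe-McGlamery model with forward scattering dropped:
        I(x) = J(x) * t(x) + B * (1 - t(x)),   t(x) = exp(-beta * d(x))
    J: clean image (H, W, 3) in [0, 1]; beta: per-channel attenuation (3,);
    depth: scene depth map (H, W); B: background light (3,)."""
    t = np.exp(-depth[..., None] * beta[None, None, :])  # transmission map
    return J * t + B * (1.0 - t)   # direct attenuation + backscattering
```

At zero depth the transmission is 1 and the clean scene passes through unchanged; as depth grows the observed color converges to the background light, which is exactly the haze the restoration methods above try to invert.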
Article
Full-text available
Underwater images typically present poor visibility, color distortion, and noise, which limits their application in several high-level image analysis tasks. To address these corruptions, a novel method is proposed to reconstruct high-quality underwater images, designed by integrating an imaging model with noise and a variational framework. Specifically, an improved underwater imaging model is first introduced by separating noise from the real underwater scene. Subsequently, the hazy curves of degraded colors are decomposed to estimate the transmission map, and a color loss prior is employed to correct the transmission map. Moreover, a first-order gradient guided filter is proposed to refine the transmission map. An evaluation formula combining illumination, contrast, and color deviation priors is designed to accurately locate the background region. Finally, a variational model is established to restore underwater images and suppress noise based on the improved imaging model and image priors. Experimental results validate that the proposed method surpasses several outstanding approaches, demonstrating its effectiveness in improving contrast, correcting color, and suppressing noise.
... An extending primal-dual hybrid gradient (E-PDHG) method combined with TV regularization was proposed for Prestack Seismic image deblurring [35]. By combining the normalized TV term with sparse prior knowledge of the blur kernel, Xie et al. [36] proposed a variational framework for underwater image dehazing and deblurring. In addition, TV-based model deblurring was also developed for hyperspectral image (HSI). ...
Article
Full-text available
Blurring and noise degrade the performance of image processing. To mitigate this effect, various regularization-based deblurring methods have been proposed. Total variation regularization is widely used owing to its excellent ability in preserving the salient edges, but it also tends to smooth the image details. In this paper, we propose a local extremum-constrained total variation (LECTV) framework for image deblurring. In the developed deblurring framework, we integrate prior knowledge of the dark channel with the structural features of the image into a single regularization term. Furthermore, unlike most existing methods that focus on the overall sparsity of the dark channel, the defined regularization term allows for a pixel-wise adaptive description of the image to restore its inherent spatial texture structure. Finally, a majorization-minimization-based method is designed to solve the developed LECTV framework. Experimental results on natural and hyperspectral images show that the designed framework exhibits excellent performance in removing multiple types and degrees of blurring. Extensive evaluations also further show its superiority compared to other advanced methods.
Conference Paper
Full-text available
Images captured underwater often suffer from sub-optimal illumination settings that can hide important visual features, reducing their quality. We present a novel single-image low-light underwater image enhancer, L^2UWE, that builds on our observation that an efficient model of atmospheric lighting can be derived from local contrast information. We create two distinct models and generate two enhanced images from them: one that highlights finer details, the other focused on darkness removal. A multi-scale fusion process is employed to combine these images while emphasizing regions of higher luminance, saliency and local contrast. We demonstrate the performance of L^2UWE by using seven metrics to test it against seven state-of-the-art enhancement methods specific to underwater and low-light scenes. Code available at https://github.com/tunai/l2uwe.
Article
Full-text available
Processing images captured underwater is often challenged by haze, noise, and low contrast, caused by absorption and scattering of light during propagation. In this paper, we aim to establish a novel total variation and curvature based approach that properly deals with these problems to achieve dehazing and denoising simultaneously. Integration with the underwater image formation model is realized by formulating the global background light and the transmission map, derived from the improved dark channel prior and the underwater red channel prior respectively, into our variational framework. Moreover, the generated non-smooth optimization problem is solved by the alternating direction method of multipliers (ADMM). Extensive experiments, including real underwater image application tests and convergence curves, demonstrate the significant gains of the proposed variational curvature model and the developed ADMM algorithm. Qualitative and quantitative comparisons with several state-of-the-art methods, using four evaluation metrics, are further conducted to quantify the improvements of our fusion approach.
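The ADMM splitting used for such non-smooth TV problems can be illustrated on a toy 1D total-variation denoising problem. This is a generic TV/ADMM sketch under standard splitting, not the paper's full dehazing-denoising model:

```python
import numpy as np

def soft_threshold(v, k):
    """Proximal operator of the l1 norm (shrinkage)."""
    return np.sign(v) * np.maximum(np.abs(v) - k, 0.0)

def tv_denoise_admm(y, lam=1.0, rho=1.0, iters=200):
    """1D TV denoising by ADMM:
        min_x 0.5*||x - y||^2 + lam*||D x||_1,  with the split z = D x.
    D is the forward-difference operator; for a real image problem the
    dense solve below would be replaced by an FFT- or banded-solver step."""
    n = len(y)
    D = np.diff(np.eye(n), axis=0)      # (n-1, n) difference operator
    A = np.eye(n) + rho * D.T @ D       # x-update system matrix
    z = np.zeros(n - 1)
    u = np.zeros(n - 1)                 # scaled dual variable
    x = y.copy()
    for _ in range(iters):
        x = np.linalg.solve(A, y + rho * D.T @ (z - u))   # quadratic step
        z = soft_threshold(D @ x + u, lam / rho)          # shrinkage step
        u = u + D @ x - z                                 # dual update
    return x
```

Each iteration alternates a smooth least-squares solve with a cheap closed-form shrinkage, which is what makes ADMM attractive for the non-smooth TV terms in these variational models.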
Article
The accurate and real-time detection of moving ships has become an essential component in maritime video surveillance, leading to enhanced traffic safety and security. With the rapid development of artificial intelligence, it becomes feasible to develop intelligent techniques to promote ship detection results in maritime applications. In this work, we propose to develop an enhanced convolutional neural network (CNN) to improve ship detection under different weather conditions. To be specific, the learning and representation capacities of our network are promoted by redesigning the sizes of anchor boxes, predicting the localization uncertainties of bounding boxes, introducing the soft non-maximum suppression, and reconstructing a mixed loss function. In addition, a flexible data augmentation strategy with generating synthetically-degraded images is presented to enlarge the volume and diversity of original dataset to train learning-based ship detection methods. This strategy is capable of making our CNN-based detection results more reliable and robust under adverse weather conditions, e.g., rain, haze, and low illumination. Experimental results under different monitoring conditions demonstrate that our method significantly outperforms other competing methods (e.g., SSD, Faster R-CNN, YOLOv2 and YOLOv3) in terms of detection accuracy, robustness and efficiency. The ship detection results under poor imaging conditions have also been implemented to demonstrate the superior performance of our learning method.
Article
Degraded visibility and geometrical distortion typically make the underwater vision more intractable than open air vision, which impedes the development of underwater-related machine vision and robotic perception. Therefore, this paper addresses the problem of joint underwater depth estimation and color correction from monocular underwater images, which aims at enjoying the mutual benefits between these two related tasks from a multi-task perspective. Our core ideas lie in our new deep learning architecture. Due to the lack of effective underwater training data, and the weak generalization to the real-world underwater images trained on synthetic data, we consider the problem from a novel perspective of style-level and feature-level adaptation, and propose an unsupervised adaptation network to deal with the joint learning problem. Specifically, a style adaptation network (SAN) is first proposed to learn a style-level transformation to adapt in-air images to the style of underwater domain. Then, we formulate a task network (TN) to jointly estimate the scene depth and correct the color from a single underwater image by learning domain-invariant representations. The whole framework can be trained end-to-end in an adversarial learning manner. Extensive experiments are conducted under air-to-water domain adaptation settings. We show that the proposed method performs favorably against state-of-the-art methods in both depth estimation and color correction tasks.
Conference Paper
Multi-scale approaches representing image objects at various levels of detail have been applied to many computer vision tasks. Existing image classification approaches place more emphasis on multi-scale convolution kernels and overlook multi-scale feature maps; as a result, some of the shallower information in the network is not fully utilized. In this paper, we propose the Multi-Scale Residual (MSR) module that integrates multi-scale feature maps of the underlying information into the last layer of a Convolutional Neural Network. Our proposed method significantly enhances the feature information used in the final classification. Extensive experiments conducted on CIFAR100, Tiny-ImageNet and the large-scale CalTech-256 dataset demonstrate the effectiveness of our method compared with the Res-Family.
Article
Underwater image enhancement is such an important low-level vision task with many applications that numerous algorithms have been proposed in recent years. These algorithms, developed upon various assumptions, demonstrate success in various respects using different data sets and different metrics. In this work, we set up an undersea image capturing system and construct a large-scale Real-world Underwater Image Enhancement (RUIE) data set divided into three subsets. The three subsets target three challenging aspects of enhancement, i.e., image visibility quality, color casts, and higher-level detection/classification, respectively. We conduct extensive and systematic experiments on RUIE to evaluate the effectiveness and limitations of various algorithms for enhancing visibility and correcting color casts on images with hierarchical categories of degradation. Moreover, underwater image enhancement in practice usually serves as a preprocessing step for mid-level and high-level vision tasks. We thus exploit the object detection performance on enhanced images as a brand new task-specific evaluation criterion. The findings from these evaluations not only confirm what is commonly believed, but also suggest promising solutions and new directions for visibility enhancement, color correction, and object detection on real-world underwater images. The benchmark is available at: https://github.com/dlut-dimt/Realworld-Underwater-Image-Enhancement-RUIE-Benchmark .
Article
Underwater images play an essential role in acquiring and understanding underwater information, and high-quality underwater images can guarantee the reliability of underwater intelligent systems. Unfortunately, underwater images are characterized by low contrast, color casts, blurring, low light, and uneven illumination, which severely affect the perception and processing of underwater information. To improve the quality of acquired underwater images, numerous methods have been proposed, particularly with the emergence of deep learning technologies. However, the performance of underwater image enhancement methods is still unsatisfactory due to the lack of sufficient training data and effective network structures. In this paper, we address this problem with a conditional generative adversarial network (cGAN), where the clear underwater image is produced by a multi-scale generator. Besides, we employ a dual discriminator to capture local and global semantic information, which enforces that the results generated by the multi-scale generator are realistic and natural. Experiments on real-world and synthetic underwater images demonstrate that the proposed method performs favorably against state-of-the-art underwater image enhancement methods.
Article
Underwater image enhancement has been attracting much attention due to its significance in marine engineering and aquatic robotics, and numerous underwater image enhancement algorithms have been proposed in the last few years. However, these algorithms are mainly evaluated using either synthetic datasets or a few selected real-world images. It is thus unclear how these algorithms would perform on images acquired in the wild and how we could gauge progress in the field. To bridge this gap, we present the first comprehensive perceptual study and analysis of underwater image enhancement using large-scale real-world images. In this paper, we construct an Underwater Image Enhancement Benchmark (UIEB) including 950 real-world underwater images, 890 of which have corresponding reference images. We treat the remaining 60 underwater images, for which satisfactory reference images could not be obtained, as challenging data. Using this dataset, we conduct a comprehensive qualitative and quantitative study of the state-of-the-art underwater image enhancement algorithms. In addition, we propose an underwater image enhancement network (called Water-Net) trained on this benchmark as a baseline, which indicates the suitability of the proposed UIEB for training Convolutional Neural Networks (CNNs). The benchmark evaluations and the proposed Water-Net demonstrate the performance and limitations of state-of-the-art algorithms, shedding light on future research in underwater image enhancement. The dataset and code are available at https://li-chongyi.github.io/proj_benchmark.html .