Underwater Image Enhancement Using Deep Residual Framework

Author(s):  
Prof. Anuja Phapale ◽  
Atal Deshmukh ◽  
Keshav Katkar ◽  
Onkar Karale ◽  
Puja Kasture

Underwater images suffer from distorted colors, low contrast, and blurred details caused by absorption, refraction, and the scattering of light by particles suspended in water. Traditional approaches pre-process the image with a descattering algorithm and then apply a super-resolution (SR) method, but this has the limitation that much of the high-frequency information is lost during descattering. This paper proposes a solution for underwater image enhancement using a deep residual framework. First, synthetic underwater images are generated with cycle-consistent adversarial networks (CycleGAN) and used as training data for convolutional neural network models. Second, the very-deep super-resolution reconstruction model is introduced to underwater resolution applications, and the Underwater ResNet model is proposed as a residual learning model for underwater image enhancement. Furthermore, the training mode and loss function are improved: a multi-term loss function is formed from the proposed edge difference loss and mean squared error loss, and an asynchronous training mode is proposed to improve the performance of the multi-term loss function. Finally, the impact of batch normalization is discussed. Comparative analysis of the enhanced underwater images shows that the detail-enhancement and color-correction performance of the proposed methods is superior to that of traditional methods and previous deep learning models.
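As a rough, hypothetical sketch of the multi-term loss described above, the snippet below combines a mean squared error term with an edge-difference term computed from image gradients; the gradient operator and the weight `lam` are illustrative assumptions, not the paper's exact formulation.

```python
import numpy as np

def edge_difference_loss(pred, target):
    """Mean squared difference of image gradients (a simple edge term)."""
    gy_p, gx_p = np.gradient(pred)
    gy_t, gx_t = np.gradient(target)
    return np.mean((gy_p - gy_t) ** 2 + (gx_p - gx_t) ** 2)

def multi_term_loss(pred, target, lam=0.1):
    """MSE plus a weighted edge-difference term."""
    mse = np.mean((pred - target) ** 2)
    return mse + lam * edge_difference_loss(pred, target)
```

A constant brightness shift leaves gradients unchanged, so only the MSE term penalizes it, while edge blurring is penalized by the second term.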

2021 ◽  
Vol 9 (2) ◽  
pp. 225
Author(s):  
Farong Gao ◽  
Kai Wang ◽  
Zhangyi Yang ◽  
Yejian Wang ◽  
Qizhong Zhang

In this study, an underwater image enhancement method based on local contrast correction (LCC) and multi-scale fusion is proposed to address the low contrast and color distortion of underwater images. First, the red channel of the original image is compensated, and white balancing is applied to the compensated image. Second, LCC and image sharpening are carried out to generate two different image versions. Finally, the local-contrast-corrected image is fused with the sharpened image by multi-scale fusion. The results show that the proposed method can be applied to degraded underwater images in different environments without resorting to an image formation model, and that it effectively addresses the color distortion, low contrast, and weak detail of underwater images.
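The red-channel compensation and white-balance steps above can be sketched as follows; the compensation formula (a common choice in fusion-based underwater enhancement) and the simple gray-world white balance are assumptions standing in for the authors' exact procedure.

```python
import numpy as np

def compensate_red(img, alpha=1.0):
    """Boost the attenuated red channel using the better-preserved green
    channel. img: HxWx3 float array in [0, 1], RGB order."""
    r, g = img[..., 0], img[..., 1]
    r_comp = r + alpha * (g.mean() - r.mean()) * (1 - r) * g
    out = img.copy()
    out[..., 0] = np.clip(r_comp, 0, 1)
    return out

def gray_world(img):
    """Gray-world white balance: scale channels so their means agree."""
    means = img.reshape(-1, 3).mean(axis=0)
    gains = means.mean() / (means + 1e-8)
    return np.clip(img * gains, 0, 1)
```

Compensating red before white balancing avoids the over-amplification that gray-world gains alone would apply to a nearly empty red channel.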


2020 ◽  
Author(s):  
Paul Francoeur ◽  
Tomohide Masuda ◽  
David R. Koes

One of the main challenges in drug discovery is predicting protein-ligand binding affinity. Recently, machine learning approaches have made substantial progress on this task. However, current methods of model evaluation are overly optimistic in measuring generalization to new targets, and there does not exist a standard dataset of sufficient size to compare performance between models. We present a new dataset for structure-based machine learning, the CrossDocked2020 set, with 22.5 million poses of ligands docked into multiple similar binding pockets across the Protein Data Bank and perform a comprehensive evaluation of grid-based convolutional neural network models on this dataset. We also demonstrate how the partitioning of the training data and test data can impact the results of models trained with the PDBbind dataset, how performance improves by adding more, lower-quality training data, and how training with docked poses imparts pose sensitivity to the predicted affinity of a complex. Our best performing model, an ensemble of 5 densely connected convolutional networks, achieves a root mean squared error of 1.42 and Pearson R of 0.612 on the affinity prediction task, an AUC of 0.956 at binding pose classification, and a 68.4% accuracy at pose selection on the CrossDocked2020 set. By providing data splits for clustered cross-validation and the raw data for the CrossDocked2020 set, we establish the first standardized dataset for training machine learning models to recognize ligands in non-cognate target structures while also greatly expanding the number of poses available for training. To facilitate community adoption of this dataset for benchmarking protein-ligand binding affinity prediction, we provide our models, weights, and the CrossDocked2020 set at https://github.com/gnina/models.


2020 ◽  
Vol 17 (5) ◽  
pp. 172988142096164
Author(s):  
Yue Zhang ◽  
Fuchun Yang ◽  
Weikai He

Due to the absorption and scattering of light traveling in water, underwater images exhibit serious degradation such as color deviation, low contrast, and blurred details. Traditional algorithms have limitations on images with varying degrees of blur and color deviation. To address these problems, a new approach for single underwater image enhancement based on fusion technology is proposed in this article. First, the original image is preprocessed by white balance and dark channel prior dehazing, respectively; two input images are then obtained by color correction and contrast enhancement; finally, the enhanced image is obtained with a multiscale fusion strategy based on weight maps constructed by combining the features of global contrast, local contrast, saliency, and exposedness. Qualitative results reveal that the proposed approach significantly removes haze, corrects color deviation, and preserves image naturalness. Quantitatively, a test with 400 underwater images shows that the proposed approach produces a lower average mean square error and a higher average peak signal-to-noise ratio than the compared method. Moreover, the enhanced results obtain the highest average underwater image quality measures among the compared methods, illustrating that our approach achieves superior performance on different levels of distorted and hazy images.
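A minimal, single-scale sketch of weight-map construction and fusion (omitting the multiscale pyramid the paper uses) might look like this; the exposedness and contrast weights below follow common definitions and are assumptions, not the authors' exact maps.

```python
import numpy as np

def exposedness_weight(img, sigma=0.25):
    """Gaussian weight favouring well-exposed (mid-range) pixels."""
    return np.exp(-((img - 0.5) ** 2) / (2 * sigma ** 2))

def contrast_weight(img):
    """Gradient-magnitude weight favouring locally contrasted pixels."""
    gy, gx = np.gradient(img)
    return np.sqrt(gy ** 2 + gx ** 2)

def fuse(inputs, weights, eps=1e-8):
    """Weighted fusion of grayscale inputs with normalised weight maps."""
    w = np.stack(weights) + eps
    w = w / w.sum(axis=0)
    return np.sum(w * np.stack(inputs), axis=0)
```

The full method would blend Laplacian pyramids of the inputs with Gaussian pyramids of the weights to avoid halo artifacts at weight-map boundaries.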


Author(s):  
Yang Wang ◽  
Yang Cao ◽  
Jing Zhang ◽  
Feng Wu ◽  
Zheng-Jun Zha

Underwater imaging often suffers from color cast and contrast degradation due to range-dependent medium absorption and light scattering. Introducing image statistics as a prior has proved to be an effective solution for underwater image enhancement. However, given the great diversity of light propagation and underwater scenery, existing methods are limited in representing the inherent statistics of underwater images, resulting in color artifacts and haze residuals. To address this problem, this article proposes a convolutional neural network (CNN)-based framework to learn hierarchical statistical features related to color cast and contrast degradation and to leverage them for underwater image enhancement. Specifically, a pixel disruption strategy is first proposed to suppress the influence of intrinsic colors and facilitate modeling a unified statistical representation of underwater images. Then, considering the local variation of depth of field, two parallel sub-networks, a Color Correction Network (CC-Net) and a Contrast Enhancement Network (CE-Net), are presented; they generate a pixel-wise color cast map and transmission map, achieving spatially varying color correction and contrast enhancement. Moreover, to address the issue of insufficient training data, an imaging-model-based synthesis method incorporating the pixel disruption strategy is presented to generate underwater patches with global degradation consistency. Quantitative and subjective evaluations demonstrate that the proposed method achieves state-of-the-art performance.
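One plausible reading of the pixel disruption strategy is a random permutation of pixel locations that destroys scene structure while preserving the image's global colour statistics; the sketch below is an illustrative assumption, not the authors' implementation.

```python
import numpy as np

def pixel_disruption(img, seed=0):
    """Randomly permute pixel locations while keeping each pixel's colour,
    suppressing scene content but preserving global colour statistics."""
    h, w, c = img.shape
    flat = img.reshape(-1, c).copy()
    np.random.default_rng(seed).shuffle(flat)  # shuffles rows (pixels)
    return flat.reshape(h, w, c)
```

Because only locations change, per-channel histograms and means are exactly preserved, which is what would let a network model the degradation statistics independently of scene content.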


2021 ◽  
Vol 2066 (1) ◽  
pp. 012050
Author(s):  
Hao Chen ◽  
Hongsen He ◽  
Xinghua Feng

To address the color distortion and low contrast of underwater images, an underwater image enhancement method based on color correction and dark channel prior is proposed. To correct the color bias, a standard ratio is first calculated from the blue channel, and the red and green channels of the underwater image are compensated to remove the blue-green background color. To address the low contrast, a super-pixel-based dark channel prior (DCP) method is used to enhance the color-corrected image. Finally, the method is tested on images from an underwater object detection dataset and compared in quality with six state-of-the-art underwater image enhancement methods. The experimental results show that the proposed algorithm earns the highest score on the underwater image quality measure (UIQM) among the compared algorithms.
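The dark channel at the heart of the DCP method can be sketched as follows; this is the plain per-patch minimum rather than the paper's super-pixel variant, which is an assumed simplification.

```python
import numpy as np

def dark_channel(img, patch=3):
    """Per-pixel minimum over RGB, then a local minimum over a patch.
    img: HxWx3 float array in [0, 1]."""
    ch_min = img.min(axis=2)
    pad = patch // 2
    padded = np.pad(ch_min, pad, mode='edge')
    h, w = ch_min.shape
    out = np.empty_like(ch_min)
    for i in range(h):
        for j in range(w):
            out[i, j] = padded[i:i + patch, j:j + patch].min()
    return out
```

In haze-free regions the dark channel is close to zero, so larger values indicate backscatter whose estimate drives the contrast restoration.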


2018 ◽  
Vol 7 (2.24) ◽  
pp. 296
Author(s):  
M Suganthy ◽  
S Lakshmi ◽  
S Palanivel

Effectively analyzing underwater images and identifying objects under water has become a difficult task. The factors affecting underwater images generally include uneven lighting, low contrast, dull colors, and object characteristics governed by the absorption and scattering of light. The proposed technique applies white balancing and contrast enhancement to the original image. A combination of filters, namely homomorphic filtering, wavelet denoising, bilateral filtering, and adaptive filtering, is applied sequentially to the degraded underwater images. The results show that the proposed algorithm works well in refining underwater image attributes. Peak signal-to-noise ratio (PSNR) and mean squared error (MSE) are used to evaluate the performance of the algorithm.
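The two evaluation metrics named above are standard and can be computed directly; the sketch assumes images normalised to [0, 1] (so the peak signal is 1.0).

```python
import numpy as np

def mse(ref, img):
    """Mean squared error between a reference and a test image."""
    return float(np.mean((ref - img) ** 2))

def psnr(ref, img, peak=1.0):
    """Peak signal-to-noise ratio in dB; infinite for identical images."""
    m = mse(ref, img)
    return float('inf') if m == 0 else 10.0 * np.log10(peak ** 2 / m)
```

Lower MSE and higher PSNR indicate an enhanced image closer to the reference.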


Sensors ◽  
2021 ◽  
Vol 21 (9) ◽  
pp. 3289
Author(s):  
Yanling Han ◽  
Lihua Huang ◽  
Zhonghua Hong ◽  
Shouqi Cao ◽  
Yun Zhang ◽  
...  

Underwater images are important carriers and forms of underwater information, playing a vital role in exploring and utilizing marine resources. However, underwater images have characteristics of low contrast and blurred details because of the absorption and scattering of light. In recent years, deep learning has been widely used in underwater image enhancement and restoration because of its powerful feature learning capabilities, but there are still shortcomings in detail enhancement. To address the problem, this paper proposes a deep supervised residual dense network (DS_RD_Net), which is used to better learn the mapping relationship between clear in-air images and synthetic underwater degraded images. DS_RD_Net first uses residual dense blocks to extract features to enhance feature utilization; then, it adds residual path blocks between the encoder and decoder to reduce the semantic differences between the low-level features and high-level features; finally, it employs a deep supervision mechanism to guide network training to improve gradient propagation. Experimental results (PSNR of 36.2, SSIM of 96.5%, and UCIQE of 0.53) demonstrate that the proposed method can fully retain the local details of the image while performing color restoration and defogging compared with other image enhancement methods, achieving good qualitative and quantitative effects.


Sensors ◽  
2019 ◽  
Vol 19 (24) ◽  
pp. 5567 ◽  
Author(s):  
Yidan Liu ◽  
Huiping Xu ◽  
Dinghui Shang ◽  
Chen Li ◽  
Xiangqian Quan

In the shallow-water environment, underwater images often present problems like color deviation and low contrast due to light absorption and scattering in the water body, but deep-sea images can additionally exhibit uneven brightness and regional color shift, due to the use of chromatic and inhomogeneous artificial lighting devices. Since the latter situation is rarely studied in the field of underwater image enhancement, we propose a new model to include it in the analysis of underwater image degradation. Based on the theoretical study of the new model, a comprehensive method for enhancing underwater images under different illumination conditions is proposed in this paper. The proposed method is composed of two modules: color-tone correction and fusion-based descattering. In the first module, the regional or full-extent color deviation caused by different types of incident light is corrected via frequency-based color-tone estimation. In the second module, the residual low contrast and pixel-wise color shift problems are handled by combining the descattering results under the assumption of different states of the image. The proposed method is evaluated on laboratory and open-water images of different depths and illumination states. Qualitative and quantitative evaluation results demonstrate that the proposed method outperforms many other methods in enhancing the quality of different types of underwater images, and is especially effective in improving the color accuracy and information content in badly-illuminated regions of underwater images with non-uniform illumination, such as deep-sea images.
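A minimal stand-in for regional colour-tone correction is to apply gray-world gains independently per block, so spatially varying casts (e.g. from artificial lighting) are corrected locally; the block partition and gray-world assumption are illustrative, not the authors' frequency-based estimator.

```python
import numpy as np

def regional_gray_world(img, blocks=2):
    """Equalise channel means per block to correct regional colour casts.
    img: HxWx3 float array in [0, 1]."""
    h, w, _ = img.shape
    out = img.astype(float).copy()
    for bi in range(blocks):
        for bj in range(blocks):
            ys = slice(bi * h // blocks, (bi + 1) * h // blocks)
            xs = slice(bj * w // blocks, (bj + 1) * w // blocks)
            region = out[ys, xs]
            m = region.reshape(-1, 3).mean(axis=0)
            out[ys, xs] = np.clip(region * (m.mean() / (m + 1e-8)), 0, 1)
    return out
```

A global gray-world correction would average opposing casts together and leave both regions tinted, which is why a regional (or, in the paper, frequency-based) estimate is needed for non-uniform lighting.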

