Influence of Image Quality and Light Consistency on the Performance of Convolutional Neural Networks for Weed Mapping

2021 ◽  
Vol 13 (11) ◽  
pp. 2140
Author(s):  
Chengsong Hu ◽  
Bishwa B. Sapkota ◽  
J. Alex Thomasson ◽  
Muthukumar V. Bagavathiannan

Recent computer vision techniques based on convolutional neural networks (CNNs) are considered state-of-the-art tools in weed mapping. However, their performance has been shown to be sensitive to image quality degradation. Variation in lighting conditions adds another level of complexity to weed mapping. We focus on determining the influence of image quality and light consistency on the performance of CNNs in weed mapping by simulating the image formation pipeline. Faster Region-based CNN (R-CNN) and Mask R-CNN were used as CNN examples for object detection and instance segmentation, respectively, while semantic segmentation was represented by Deeplab-v3. The degradations simulated in this study included resolution reduction, overexposure, Gaussian blur, motion blur, and noise. The results showed that CNN performance was most impacted by resolution, regardless of plant size. When the training and testing images had the same quality, Faster R-CNN and Mask R-CNN were moderately tolerant to low levels of overexposure, Gaussian blur, motion blur, and noise. Deeplab-v3, on the other hand, tolerated overexposure, motion blur, and noise at all tested levels. In most cases, quality inconsistency between the training and testing images reduced CNN performance. However, CNN models trained on low-quality images were more tolerant of quality inconsistency than those trained on high-quality images. Light inconsistency also reduced CNN performance. Increasing the diversity of lighting conditions in the training images may alleviate the performance reduction, whereas simply adding more images taken under the same lighting condition does not provide the same benefit. These results provide insights into the impact of image quality and light consistency on CNN performance. The quality threshold established in this study can be used to guide the selection of camera parameters in future weed mapping applications.
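The degradations named in the abstract (resolution reduction, overexposure, Gaussian blur, motion blur, noise) can be simulated directly on image arrays. The sketch below is illustrative, not the authors' pipeline; all function names and parameter choices are assumptions, and it operates on a single-channel image normalised to [0, 1].

```python
import numpy as np

def downsample(img, factor):
    """Resolution reduction: keep every `factor`-th pixel (nearest-neighbour)."""
    return img[::factor, ::factor]

def overexpose(img, stops):
    """Overexposure: scale intensities by 2**stops and clip to [0, 1]."""
    return np.clip(img * (2.0 ** stops), 0.0, 1.0)

def gaussian_blur(img, sigma):
    """Separable Gaussian blur: convolve rows, then columns, with a 1-D kernel."""
    radius = max(1, int(3 * sigma))
    x = np.arange(-radius, radius + 1)
    kernel = np.exp(-x ** 2 / (2 * sigma ** 2))
    kernel /= kernel.sum()
    rows = np.apply_along_axis(lambda r: np.convolve(r, kernel, mode="same"), 1, img)
    return np.apply_along_axis(lambda c: np.convolve(c, kernel, mode="same"), 0, rows)

def motion_blur(img, length):
    """Horizontal motion blur: box kernel of `length` pixels along each row."""
    kernel = np.ones(length) / length
    return np.apply_along_axis(lambda r: np.convolve(r, kernel, mode="same"), 1, img)

def add_gaussian_noise(img, sigma, rng):
    """Additive Gaussian noise, clipped back to the valid intensity range."""
    return np.clip(img + rng.normal(0.0, sigma, img.shape), 0.0, 1.0)
```

Applying the same degradation at matched levels to both training and testing sets, versus only one of them, is what lets the quality-consistency effect described above be isolated.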

2020 ◽  
Vol 11 (5) ◽  
pp. 37-60
Author(s):  
Chiman Kwan ◽  
Jude Larkin

In modern digital cameras, the Bayer color filter array (CFA) has been widely used; it is also widely known as CFA 1.0. However, the Bayer pattern is inferior to the red-green-blue-white (RGBW) pattern, also known as CFA 2.0, in low lighting conditions in which Poisson noise is present. It is well known that demosaicing algorithms cannot effectively deal with Poisson noise, and additional denoising is needed in order to improve the image quality. In this paper, we propose to evaluate various conventional and deep-learning-based denoising algorithms for CFA 2.0 in low lighting conditions. We also investigate the impact of the location of denoising, which refers to whether denoising is performed before or after the critical step of demosaicing. Extensive experiments show that some denoising algorithms can indeed improve the image quality in low lighting conditions. We also noticed that the location of denoising plays an important role in the overall demosaicing performance.
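The Poisson (shot) noise central to this abstract scales with the light level: the fewer photons captured, the noisier the mosaic. The sketch below is a minimal, assumed simulation of that effect; it samples a Bayer RGGB mosaic (CFA 1.0) for brevity rather than the RGBW CFA 2.0 pattern studied in the paper, and both function names are illustrative.

```python
import numpy as np

def bayer_mosaic(rgb):
    """Sample an RGB image into a Bayer RGGB (CFA 1.0) mosaic."""
    h, w, _ = rgb.shape
    mosaic = np.zeros((h, w))
    mosaic[0::2, 0::2] = rgb[0::2, 0::2, 0]  # R at even rows, even cols
    mosaic[0::2, 1::2] = rgb[0::2, 1::2, 1]  # G at even rows, odd cols
    mosaic[1::2, 0::2] = rgb[1::2, 0::2, 1]  # G at odd rows, even cols
    mosaic[1::2, 1::2] = rgb[1::2, 1::2, 2]  # B at odd rows, odd cols
    return mosaic

def simulate_low_light(mosaic, photons_per_unit, rng):
    """Simulate shot (Poisson) noise on a CFA mosaic.

    `mosaic` holds normalised intensities in [0, 1]; `photons_per_unit`
    sets the light level -- fewer photons means relatively stronger noise.
    """
    expected_counts = mosaic * photons_per_unit
    noisy_counts = rng.poisson(expected_counts)
    return noisy_counts / photons_per_unit
```

The "location of denoising" question then amounts to whether a denoiser is applied to this noisy mosaic directly, or to the full-colour image produced after demosaicing it.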


2019 ◽  
Vol 128 (8-9) ◽  
pp. 2126-2145 ◽  
Author(s):  
Zhen-Hua Feng ◽  
Josef Kittler ◽  
Muhammad Awais ◽  
Xiao-Jun Wu

Abstract: Efficient and robust facial landmark localisation is crucial for the deployment of real-time face analysis systems. This paper presents a new loss function, namely Rectified Wing (RWing) loss, for regression-based facial landmark localisation with Convolutional Neural Networks (CNNs). We first systematically analyse different loss functions, including L2, L1 and smooth L1. The analysis suggests that the training of a network should pay more attention to small-medium errors. Motivated by this finding, we design a piece-wise loss that amplifies the impact of the samples with small-medium errors. In addition, we rectify the loss function for very small errors to mitigate the impact of inaccuracy of manual annotation. The use of our RWing loss boosts the performance significantly for regression-based CNNs in facial landmarking, especially for lightweight network architectures. To address the problem of under-representation of samples with large pose variations, we propose a simple but effective boosting strategy, referred to as pose-based data balancing. In particular, we deal with the data imbalance problem by duplicating the minority training samples and perturbing them by injecting random image rotation, bounding box translation and other data augmentation strategies. Last, the proposed approach is extended to create a coarse-to-fine framework for robust and efficient landmark localisation. Moreover, the proposed coarse-to-fine framework is able to deal with the small sample size problem effectively. The experimental results obtained on several well-known benchmarking datasets demonstrate the merits of our RWing loss and prove the superiority of the proposed method over the state-of-the-art approaches.
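A piece-wise loss with the properties the abstract describes (zero response to very small errors, an amplified logarithmic region for small-medium errors, linear behaviour for large errors) can be sketched as below. This is an assumed form in the spirit of RWing, not the paper's exact definition; the constants r, w and epsilon are illustrative.

```python
import numpy as np

def rwing_loss(errors, r=0.5, w=10.0, epsilon=2.0):
    """Piece-wise loss in the spirit of Rectified Wing (RWing).

    Assumed form:
      |x| <  r      : 0, so tiny errors from annotation noise are ignored
      r <= |x| < w  : logarithmic region amplifying small-medium errors
      |x| >= w      : linear (L1-like) region for large errors
    C is chosen so the log and linear pieces join continuously at |x| = w.
    """
    x = np.abs(errors)
    C = w - w * np.log(1.0 + (w - r) / epsilon)
    log_part = w * np.log(1.0 + (x - r) / epsilon)
    linear_part = x - C
    return np.where(x < r, 0.0, np.where(x < w, log_part, linear_part))
```

Compared with plain L1 or L2, the steep logarithmic region gives small-medium errors a larger gradient contribution during training, which is the behaviour the analysis in the abstract motivates.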


2020 ◽  
Vol 10 (2) ◽  
pp. 391-400 ◽  
Author(s):  
Ying Chen ◽  
Xiaomin Qin ◽  
Jingyu Xiong ◽  
Shugong Xu ◽  
Jun Shi ◽  
...  

This study aimed to propose a deep transfer learning framework for histopathological image analysis by using convolutional neural networks (CNNs) with visualization schemes, and to evaluate its usage for automated and interpretable diagnosis of cervical cancer. First, in order to examine the potential of the transfer learning for classifying cervix histopathological images, we pre-trained three state-of-the-art CNN architectures on large-size natural image datasets and then fine-tuned them on small-size histopathological datasets. Second, we investigated the impact of three learning strategies on classification accuracy. Third, we visualized both the multiple-layer convolutional kernels of CNNs and the regions of interest so as to increase the clinical interpretability of the networks. Our method was evaluated on a database of 4993 cervical histological images (2503 benign and 2490 malignant). The experimental results demonstrated that our method achieved 95.88% sensitivity, 98.93% specificity, 97.42% accuracy, 94.81% Youden's index and 99.71% area under the receiver operating characteristic curve. Our method can reduce the cognitive burden on pathologists for cervical disease classification and improve their diagnostic efficiency and accuracy. It may be potentially used in clinical routine for histopathological diagnosis of cervical cancer.
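The evaluation metrics quoted above (sensitivity, specificity, accuracy, Youden's index) all derive from the binary confusion matrix of benign vs. malignant predictions. The helper below shows those standard definitions; the function name is illustrative.

```python
def diagnostic_metrics(tp, fn, tn, fp):
    """Standard binary diagnostic metrics from confusion-matrix counts.

    tp/fn: malignant cases correctly/incorrectly classified;
    tn/fp: benign cases correctly/incorrectly classified.
    """
    sensitivity = tp / (tp + fn)              # true positive rate
    specificity = tn / (tn + fp)              # true negative rate
    accuracy = (tp + tn) / (tp + fn + tn + fp)
    youden = sensitivity + specificity - 1.0  # Youden's index J
    return sensitivity, specificity, accuracy, youden
```

A high Youden's index (94.81% in the study) indicates the classifier performs well on both classes simultaneously, rather than trading one off against the other.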

