Convolutional Neural Networks for Phytoplankton Identification and Classification

2018 ◽  
Vol 2 ◽  
pp. e25762 ◽  
Author(s):  
Lara Lloret ◽  
Ignacio Heredia ◽  
Fernando Aguilar ◽  
Elisabeth Debusschere ◽  
Klaas Deneudt ◽  
...  

Phytoplankton form the basis of the marine food web and are an indicator of the overall status of the marine ecosystem. Changes in this community may impact a wide range of species (Capuzzo et al. 2018), ranging from zooplankton and fish to seabirds and marine mammals. Efficient monitoring of the phytoplankton community is therefore essential (Edwards et al. 2002). Traditional monitoring techniques are highly time-intensive and involve taxonomists identifying and counting numerous specimens under the light microscope. With the recent development of automated sampling devices, image analysis technologies and learning algorithms, the rate of counting and identification of phytoplankton can be increased significantly (Thyssen et al. 2015). The FlowCAM (Álvarez et al. 2013) is an imaging particle analysis system for the identification and classification of phytoplankton. Within the Belgian Lifewatch observatory, monthly phytoplankton samples are taken at nine stations in the Belgian part of the North Sea. These samples are run through the FlowCAM and each particle is photographed. Next, the particles are identified based on their morphology (and fluorescence) using state-of-the-art Convolutional Neural Networks (CNNs) for computer vision. This procedure requires training sets of expert-validated images. The CNNs are specifically designed to take advantage of the two-dimensional structure of these images by finding local patterns, making them easier to train and giving them many fewer parameters than a fully connected network with the same number of hidden units. In this work we present our approach to the use of CNNs for the identification and classification of phytoplankton, testing it on several benchmarks and comparing it with previous classification techniques. The network architecture used is ResNet50 (He et al. 2016). The framework is fully written in Python using the TensorFlow (Abadi et al. 2016) module for Deep Learning.
Deployment and exploitation of the current framework is supported by the recently started European Union Horizon 2020 programme funded project DEEP-Hybrid-Datacloud (Grant Agreement number 777435), which supports the expensive training of the system needed to develop the application and provides the necessary computational resources to the users.
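The abstract's point that convolutional layers have many fewer parameters than fully connected layers can be made concrete with a small parameter count. This is an illustrative sketch, not the authors' code; the layer sizes below are assumptions chosen for the example.

```python
# Parameter-count comparison: a convolutional layer reuses one small
# kernel across the whole image, while a fully connected layer needs
# one weight per input-output pair.

def conv_params(in_ch, out_ch, k):
    """Weights + biases of a k x k convolutional layer."""
    return out_ch * (in_ch * k * k + 1)

def dense_params(in_units, out_units):
    """Weights + biases of a fully connected layer."""
    return out_units * (in_units + 1)

# A 224x224 RGB image (typical CNN input size), mapped to 64 channels.
h = w = 224
conv = conv_params(3, 64, 3)                   # 3x3 kernels
dense = dense_params(h * w * 3, h * w * 64)    # same input/output sizes

print(conv)     # 1792 parameters
print(dense)    # ~4.8e11 parameters
```

The convolutional layer achieves the same output resolution with roughly eight orders of magnitude fewer parameters, which is what makes architectures like ResNet50 trainable in practice.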

Aerospace ◽  
2020 ◽  
Vol 7 (12) ◽  
pp. 171
Author(s):  
Anil Doğru ◽  
Soufiane Bouarfa ◽  
Ridwan Arizar ◽  
Reyhan Aydoğan

Convolutional Neural Networks combined with autonomous drones are increasingly seen as enablers of partially automating the aircraft maintenance visual inspection process. Such an innovative concept can have a significant impact on aircraft operations. By supporting aircraft maintenance engineers in detecting and classifying a wide range of defects, the time spent on inspection can be significantly reduced. Examples of defects that can be automatically detected include aircraft dents, paint defects, cracks and holes, and lightning strike damage. Additionally, this concept could also increase the accuracy of damage detection and reduce the number of aircraft inspection incidents related to human factors like fatigue and time pressure. In our previous work, we applied a recent Convolutional Neural Network architecture known as Mask R-CNN to detect aircraft dents. Mask R-CNN was chosen because it enables the detection of multiple objects in an image while simultaneously generating a segmentation mask for each instance. The previously obtained F1 and F2 scores were 62.67% and 59.35%, respectively. This paper extends the previous work by applying different techniques to improve and evaluate prediction performance experimentally. The approaches used include (1) balancing the original dataset by adding images without dents; (2) increasing data homogeneity by focusing on wing images only; (3) exploring the potential of three augmentation techniques, namely flipping, rotating, and blurring, in improving model performance; and (4) using a pre-classifier in combination with Mask R-CNN. The results show that a hybrid approach combining Mask R-CNN and augmentation techniques leads to improved performance, with an F1 score of 67.50% and an F2 score of 66.37%.
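The three augmentation techniques mentioned above (flipping, rotating, and blurring) can be sketched with NumPy. This is a minimal illustration, not the paper's implementation; the 3x3 box blur is an assumed stand-in for whatever blur the authors used.

```python
import numpy as np

def flip(img):
    """Horizontal flip."""
    return img[:, ::-1]

def rotate90(img):
    """90-degree counter-clockwise rotation."""
    return np.rot90(img)

def blur(img):
    """Simple 3x3 box blur; border pixels are left unblurred."""
    a = img.astype(float)
    out = a.copy()
    out[1:-1, 1:-1] = sum(
        a[1 + dy:a.shape[0] - 1 + dy, 1 + dx:a.shape[1] - 1 + dx]
        for dy in (-1, 0, 1) for dx in (-1, 0, 1)
    ) / 9.0
    return out

def augment(images):
    """Expand a dataset with flipped, rotated, and blurred copies."""
    out = []
    for img in images:
        out.extend([img, flip(img), rotate90(img), blur(img)])
    return out
```

Applied to a dataset of single-channel inspection images, `augment` quadruples the number of training examples without new labelling effort.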


Author(s):  
Héctor A. Sánchez-Hevia ◽  
Roberto Gil-Pita ◽  
Manuel Utrilla-Manso ◽  
Manuel Rosa-Zurera

This paper analyses the performance of different types of Deep Neural Networks that jointly estimate age and identify gender from speech, for use in the Interactive Voice Response systems available in call centres. Deep Neural Networks are used because they have recently demonstrated discriminative and representation capabilities in a wide range of applications, including speech processing problems based on feature extraction and selection. Networks of different sizes are analysed to determine how performance depends on the network architecture and the number of free parameters. The speech corpus used for the experiments is Mozilla’s Common Voice dataset, an open and crowdsourced speech corpus. Gender classification results are very good independently of the type of neural network, and improve with network size. Regarding classification by age group, the combination of convolutional and temporal neural networks seems to be the best of the analysed options, and again, the larger the network, the better the results. The results are promising for use in IVR systems, with the best systems achieving a gender identification error of less than 2% and an age-group classification error of less than 20%.
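The convolutional-plus-temporal pipeline described above can be sketched schematically: per-frame feature extraction from a spectrogram, temporal pooling, then a softmax over classes. All weights below are random stand-ins, and the dense per-frame projection is a simplified substitute for actual convolutional and recurrent layers.

```python
import numpy as np

rng = np.random.default_rng(0)

def frame_features(frames, kernels):
    """Per-frame feature stage (stand-in for the convolutional layers):
    project each spectrogram frame and apply ReLU."""
    return np.maximum(frames @ kernels, 0.0)

def temporal_mean(feats):
    """Minimal temporal stage: average features over time. Real systems
    use recurrent or temporal-convolution layers instead."""
    return feats.mean(axis=0)

def softmax(z):
    e = np.exp(z - z.max())
    return e / e.sum()

# 100 time frames x 40 mel bands, projected to 16 features, then 2 genders.
spec = rng.standard_normal((100, 40))
w_feat = rng.standard_normal((40, 16))
w_out = rng.standard_normal((16, 2))

probs = softmax(temporal_mean(frame_features(spec, w_feat)) @ w_out)
```

The same head with more output units would cover the age-group task; the paper's point is that making each stage larger improves both.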




2020 ◽  
Vol 2020 (10) ◽  
pp. 28-1-28-7 ◽  
Author(s):  
Kazuki Endo ◽  
Masayuki Tanaka ◽  
Masatoshi Okutomi

Classification of degraded images is very important in practice because images are usually degraded by compression, noise, blurring, etc. Nevertheless, most research in image classification focuses only on clean images without any degradation. Some papers have already proposed deep convolutional neural networks composed of an image restoration network and a classification network to classify degraded images. This paper proposes an alternative approach in which we use a degraded image and an additional degradation parameter for classification. The proposed classification network has two inputs: the degraded image and the degradation parameter. An estimation network for the degradation parameter is also incorporated for cases where the degradation parameters of the degraded images are unknown. The experimental results show that the proposed method outperforms a straightforward approach in which the classification network is trained with degraded images only.
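The two-input idea, feeding the classifier both the degraded image and its degradation parameter, can be sketched as follows. The toy encoder and random weights are assumptions for illustration, not the paper's network; the parameter is simply concatenated with the image features before the classification head.

```python
import numpy as np

rng = np.random.default_rng(1)

def classify(image, degradation_param, w_img, w_head):
    """Two-input classifier sketch: image features are concatenated with
    the known degradation parameter before the final linear head."""
    feat = np.maximum(image.ravel() @ w_img, 0.0)        # toy image encoder
    joint = np.concatenate([feat, [degradation_param]])  # inject the parameter
    logits = joint @ w_head
    return int(np.argmax(logits))

img = rng.standard_normal((8, 8))       # a degraded 8x8 image
w_img = rng.standard_normal((64, 10))
w_head = rng.standard_normal((11, 3))   # 10 features + 1 parameter -> 3 classes
label = classify(img, degradation_param=0.5, w_img=w_img, w_head=w_head)
```

When the degradation parameter is unknown, the paper's estimation network would supply the value passed as `degradation_param` here.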


2021 ◽  
Vol 11 (1) ◽  
Author(s):  
Adam Goodwin ◽  
Sanket Padmanabhan ◽  
Sanchit Hira ◽  
Margaret Glancey ◽  
Monet Slinowsky ◽  
...  

With over 3500 mosquito species described, accurate species identification of the few implicated in disease transmission is critical to mosquito-borne disease mitigation. Yet this task is hindered by limited global taxonomic expertise and by specimen damage consistent across common capture methods. Convolutional neural networks (CNNs) are promising with limited sets of species, but image database requirements restrict practical implementation. Using an image database of 2696 specimens from 67 mosquito species, we address the practical open-set problem with a detection algorithm for novel species. Closed-set classification of 16 known species achieved 97.04 ± 0.87% accuracy independently, and 89.07 ± 5.58% when cascaded with novelty detection. Closed-set classification of 39 species produced a macro F1-score of 86.07 ± 1.81%. This demonstrates an accurate, scalable, and practical computer vision solution for identifying wild-caught mosquitoes for implementation in biosurveillance and targeted vector control programs, without the need for extensive image database development for each new target region.
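One simple way to cascade closed-set classification with novelty detection is to reject low-confidence predictions as novel species via a softmax confidence threshold. The paper's actual novelty detector may differ; the species names and threshold below are illustrative assumptions.

```python
import numpy as np

def softmax(z):
    e = np.exp(z - z.max())
    return e / e.sum()

def cascade_predict(logits, known_species, threshold=0.9):
    """Cascaded open-set prediction: if the classifier's top softmax score
    falls below the confidence threshold, flag the specimen as a novel
    species instead of forcing a closed-set label."""
    p = softmax(np.asarray(logits, dtype=float))
    top = int(np.argmax(p))
    if p[top] < threshold:
        return "novel species"
    return known_species[top]

species = ["Aedes aegypti", "Anopheles gambiae", "Culex pipiens"]
print(cascade_predict([8.0, 0.1, 0.2], species))   # confident -> named species
print(cascade_predict([1.0, 0.9, 1.1], species))   # uncertain -> novel species
```

Raising the threshold catches more novel species at the cost of rejecting more correctly classified known specimens, which mirrors the accuracy drop the abstract reports when novelty detection is cascaded in.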

