Bat Detective - Deep Learning Tools for Bat Acoustic Signal Detection

2017 ◽  
Author(s):  
Oisin Mac Aodha ◽  
Rory Gibb ◽  
Kate E. Barlow ◽  
Ella Browning ◽  
Michael Firman ◽  
...  

Summary
Passive acoustic sensing has emerged as a powerful tool for quantifying anthropogenic impacts on biodiversity, especially for echolocating bat species. To better assess bat population trends, there is a critical need for accurate, reliable, and open-source tools that allow the detection and classification of bat calls in large collections of audio recordings. The majority of existing tools are commercial or have focused on the species classification task, neglecting the important problem of first localizing echolocation calls in audio, which is particularly difficult in noisy recordings.

We developed a convolutional neural network (CNN) based open-source pipeline, BatDetect, for detecting ultrasonic, full-spectrum, search-phase calls produced by echolocating bats. Our deep learning algorithms (CNNFULL and CNNFAST) were trained on full-spectrum ultrasonic audio collected along road transects across Romania and Bulgaria by citizen scientists as part of the iBats programme and labelled by users of www.batdetective.org. We compared the performance of our system to other algorithms and commercial systems on expert-verified test datasets recorded with different sensors and in different countries. As an example application, we ran our detection pipeline on iBats monitoring data collected over five years from Jersey (UK) and compared the results to those of a widely used commercial system.

We show that both the CNNFULL and CNNFAST deep learning algorithms achieve higher detection performance (average precision and recall) on search-phase echolocation calls in our test sets than the other algorithms and commercial systems tested. Precision scores for commercial systems were reasonably good across all test datasets (>0.7), but this came at the expense of recall. In particular, our deep learning approaches were better at detecting calls in road-transect data, which contained noisier recordings. CNNFAST performed slightly worse than CNNFULL, reflecting a trade-off between speed and accuracy. Our example monitoring application demonstrated that the open-source, fully automatic BatDetect CNNFAST pipeline performs as well as or better than a commercial system with manual verification previously used to analyse the monitoring data.

We show that it is possible to accurately and automatically detect bat search-phase echolocation calls, even in noisy audio recordings. Our detection pipeline enables automatic detection and monitoring of bat populations and further facilitates their use as indicator species at large scales, particularly when combined with automatic species identification. We release our system and datasets to encourage future progress and transparency.
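As a rough illustration of the detection (rather than classification) step described above, the sketch below slides a scoring window across a spectrogram and keeps local maxima above a threshold. This is not the published BatDetect code; `score_window` is a hypothetical stand-in for the trained CNN, here replaced by a simple energy score.

```python
import numpy as np

def score_window(window):
    # Placeholder for the trained CNN: maps mean window energy to [0, 1).
    return float(np.tanh(window.mean()))

def detect_calls(spectrogram, win=32, hop=8, threshold=0.5):
    """Slide a fixed-width window over a spectrogram (rows = frequency,
    columns = time) and return the start columns of detected call events,
    keeping only local score maxima above the threshold."""
    starts = list(range(0, spectrogram.shape[1] - win + 1, hop))
    scores = [score_window(spectrogram[:, s:s + win]) for s in starts]
    events = []
    for i, p in enumerate(scores):
        # simple non-maximum suppression over neighbouring windows
        if p >= threshold and p == max(scores[max(0, i - 1):i + 2]):
            events.append(starts[i])
    return events
```

A real detector would replace the energy score with CNN inference on each window (or a fully convolutional pass over the whole spectrogram), but the localisation logic is the same.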

2018 ◽  
Author(s):  
A. J. Fairbrass ◽  
M. Firman ◽  
C. Williams ◽  
G. J. Brostow ◽  
H. Titheridge ◽  
...  

SUMMARY
Cities support unique and valuable ecological communities, but understanding of urban wildlife is limited by the difficulties of assessing biodiversity. Ecoacoustic surveying is a useful way of assessing habitats, in which biotic sound measured from audio recordings is used as a proxy for biodiversity. However, existing algorithms for measuring biotic sound have been shown to be biased by the non-biotic sounds, typical of urban environments, present in recordings.

We developed CityNet, a deep learning system using convolutional neural networks (CNNs), to measure audible biotic (CityBioNet) and anthropogenic (CityAnthroNet) acoustic activity in cities. The CNNs were trained on a large dataset of annotated audio recordings collected across Greater London, UK. Using a held-out test dataset, we compared the precision and recall of CityBioNet and CityAnthroNet separately against the best available alternative algorithms: four acoustic indices (AIs) (the Acoustic Complexity Index, Acoustic Diversity Index, Bioacoustic Index, and Normalised Difference Soundscape Index) and a state-of-the-art bird call detection CNN (bulbul). We also compared the effect of non-biotic sounds on the predictions of CityBioNet and bulbul. Finally, we applied CityNet to describe acoustic patterns of the urban soundscape at two sites along an urbanisation gradient.

CityBioNet was the best-performing algorithm for measuring biotic activity in terms of precision and recall, followed by bulbul, while the AIs performed worst. CityAnthroNet outperformed the Normalised Difference Soundscape Index, but by a smaller margin than CityBioNet achieved against its competing algorithms. The CityBioNet predictions were affected by mechanical sounds, whereas air traffic and wind sounds influenced the bulbul predictions. Across an urbanisation gradient, we show that CityNet produced realistic daily patterns of biotic and anthropogenic acoustic activity from real-world urban audio data.

Using CityNet, it is possible to automatically measure biotic and anthropogenic acoustic activity in cities from audio recordings. If embedded within an autonomous sensing system, CityNet could produce environmental data for cities at large scales and facilitate investigation of the impacts of anthropogenic activities on wildlife. The algorithms, code, and pre-trained models are made freely available, together with two expert-annotated urban audio datasets, to facilitate automated environmental surveillance in cities.
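Of the acoustic indices compared above, the Acoustic Complexity Index (ACI) is straightforward to compute from a spectrogram. The sketch below is a simplified single-chunk version (published implementations usually compute the index over temporal sub-chunks and sum the results):

```python
import numpy as np

def acoustic_complexity_index(spectrogram):
    """Simplified Acoustic Complexity Index over a magnitude spectrogram
    (rows = frequency bins, columns = time frames): for each frequency bin,
    sum the absolute intensity changes between adjacent frames, normalise
    by the bin's total intensity, and sum over bins."""
    diffs = np.abs(np.diff(spectrogram, axis=1)).sum(axis=1)
    totals = spectrogram.sum(axis=1)
    totals[totals == 0] = 1.0  # avoid division by zero in silent bins
    return float((diffs / totals).sum())
```

A steady broadband hum (e.g., traffic) yields a low ACI, while amplitude-modulated sound (e.g., birdsong) yields a high one; since fluctuating non-biotic noise also scores high, this is one route by which such indices become biased in urban recordings.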


2020 ◽  
Vol 107 ◽  
pp. 224-255 ◽  
Author(s):  
Zhibin Zhao ◽  
Tianfu Li ◽  
Jingyao Wu ◽  
Chuang Sun ◽  
Shibin Wang ◽  
...  

2021 ◽  
Vol 3 (9) ◽  
Author(s):  
Sajjad Mardanirad ◽  
David A. Wood ◽  
Hassan Zakeri

Abstract
In this paper, we show how deep learning algorithms can distinguish lost-circulation severities in oil drilling operations. Lost circulation is one of the costliest downhole problems encountered during oil and gas well construction. Applying artificial intelligence can forewarn drilling teams of pending lost-circulation events and thereby help mitigate their consequences. Data-driven methods are traditionally employed for fluid-loss quantification but cannot achieve reliable predictions for field cases with large quantities of data. This paper investigates the performance of deep learning (DL) approaches in classifying the types of fluid loss from a very large field dataset. Three DL classification models are evaluated: convolutional neural network (CNN), gated recurrent unit (GRU), and long short-term memory (LSTM). Five fluid-loss classes are considered: No Loss, Seepage, Partial, Severe, and Complete Loss. Twenty wells drilled into the giant Azadegan oil field (Iran), providing 65,376 data records, are used to predict the fluid-loss classes. The results, based on multiple statistical performance measures, identify the CNN model as achieving superior performance (98% accuracy) compared to the LSTM and GRU models (94% accuracy). Confusion matrices provide further insight into the prediction accuracies achieved. All three DL models were able to classify the different types of lost-circulation events with reasonable prediction accuracy. Future work is required to evaluate the performance of the proposed DL approach on additional large datasets. The proposed method helps drilling teams deal with lost-circulation events efficiently.

Article Highlights
Three deep learning models classify fluid-loss severity in an oil field carbonate reservoir.
Deep learning algorithms advance machine learning on a large resource dataset of 65,376 records.
The convolutional neural network outperformed the other deep learning methods.
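The abstract reports accuracy and uses confusion matrices over the five fluid-loss classes. A minimal sketch of that evaluation step (the class-index encoding 0-4 and the matrix layout, rows = true class and columns = predicted class, are assumptions for illustration):

```python
import numpy as np

CLASSES = ["No Loss", "Seepage", "Partial", "Severe", "Complete Loss"]

def confusion_matrix(y_true, y_pred, n_classes=len(CLASSES)):
    """Count (true, predicted) pairs: rows = true class, cols = predicted."""
    cm = np.zeros((n_classes, n_classes), dtype=int)
    for t, p in zip(y_true, y_pred):
        cm[t, p] += 1
    return cm

def accuracy(cm):
    """Fraction of records on the diagonal, i.e. correctly classified."""
    return cm.trace() / cm.sum()
```

From such a matrix, per-class precision and recall follow from column and row sums, which is what makes confusion between adjacent severities (e.g., Severe predicted as Complete Loss) visible rather than hidden inside a single accuracy number.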


2022 ◽  
Author(s):  
Nils Koerber

In recent years, the amount of data generated by imaging techniques has grown rapidly, along with increasing computational power and the development of deep learning algorithms. To address the need for powerful automated image analysis tools for a broad range of applications in the biomedical sciences, we present the Microscopic Image Analyzer (MIA). MIA combines a graphical user interface that obviates the need for programming skills with state-of-the-art deep learning algorithms for segmentation, object detection, and classification. It runs as a standalone, platform-independent application and is compatible with commonly used open-source software packages. The software provides a unified interface for easy image labeling, model training, and inference. Furthermore, the software was evaluated in a public competition and performed among the top three for all tested datasets. The source code is available at https://github.com/MIAnalyzer/MIA.


Author(s):  
Migran N. Gevorkyan ◽  
Anastasia V. Demidova ◽  
Dmitry S. Kulyabov

The history of using machine learning algorithms to analyze statistical models is quite long. The development of computer technology has given these algorithms a new breath. Nowadays, deep learning is the mainstream and most popular area of machine learning. However, the authors believe that many researchers try to use deep learning methods beyond their applicability. This happens because of the widespread availability of software systems that implement deep learning algorithms and the apparent simplicity of the research. All this motivates the authors to compare deep learning algorithms with classical machine learning algorithms. The Large Hadron Collider experiment is chosen for this task because the authors are familiar with this scientific field and because the experiment data are openly available. The article compares various machine learning algorithms on the problem of recognizing a particle decay reaction at the Large Hadron Collider. The authors use open-source implementations of the machine learning algorithms and compare them with each other based on computed metrics. As a result of the research, we conclude that all the considered machine learning methods are quite comparable with one another (given the selected metrics), while different methods have different areas of applicability.
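A metric-based comparison like the one described above typically reduces, for each algorithm, to precision, recall, and F1 on a binary signal-versus-background labelling of events. A minimal sketch of those metrics (the function name and the encoding 1 = signal are assumptions, not from the article):

```python
import numpy as np

def precision_recall_f1(y_true, y_pred):
    """Binary classification metrics; labels are 0 (background) / 1 (signal)."""
    y_true, y_pred = np.asarray(y_true), np.asarray(y_pred)
    tp = np.sum((y_true == 1) & (y_pred == 1))  # true positives
    fp = np.sum((y_true == 0) & (y_pred == 1))  # false positives
    fn = np.sum((y_true == 1) & (y_pred == 0))  # false negatives
    precision = tp / (tp + fp) if tp + fp else 0.0
    recall = tp / (tp + fn) if tp + fn else 0.0
    f1 = 2 * precision * recall / (precision + recall) if precision + recall else 0.0
    return precision, recall, f1
```

Computing the same triple for every candidate algorithm on a common held-out set is what makes the "comparable, but with different areas of applicability" conclusion quantifiable.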


2020 ◽  
Vol 2 ◽  
pp. 58-61 ◽  
Author(s):  
Syed Junaid ◽  
Asad Saeed ◽  
Zeili Yang ◽  
Thomas Micic ◽  
Rajesh Botchu

The advances in deep learning algorithms, exponential growth in computing power, and unprecedented availability of digital patient data have led to a wave of interest and investment in artificial intelligence in health care. No radiology conference is complete without a substantial session dedicated to AI. Many radiology departments are keen to get involved but are unsure of where and how to begin. This short article provides a simple road map to help departments get involved with the technology, demystifies key concepts, and aims to pique interest in the field. We have broken the journey down into seven steps: problem, team, data, kit, neural network, validation, and governance.

