Automating the analysis of fish abundance using object detection: optimising animal ecology with deep learning

2019 ◽  
Author(s):  
Ellen M. Ditria ◽  
Sebastian Lopez-Marcano ◽  
Michael K. Sievers ◽  
Eric L. Jinks ◽  
Christopher J. Brown ◽  
...  

Abstract Aquatic ecologists routinely count animals to provide critical information for conservation and management. Increased accessibility to underwater recording equipment such as cameras and unmanned underwater devices has allowed footage to be captured efficiently and safely. It has, however, led to immense volumes of data that require manual processing, and thus significant time, labour and money. The use of deep learning to automate image processing has substantial benefits, but it has rarely been adopted within the field of aquatic ecology. To test its efficacy and utility, we compared the accuracy and speed of deep learning techniques against human counterparts for quantifying fish abundance in underwater images and video footage. We collected footage of fish assemblages in seagrass meadows in Queensland, Australia. We produced three models using a Mask R-CNN object detection framework to detect the target species, an ecologically important fish, luderick (Girella tricuspidata). Our models were trained on three randomised 80:20 training:validation splits drawn from a total of 6,080 annotations. The models determined abundance from videos with high accuracy, both on unseen footage from the same estuary as the training data (F1 = 92.4%, mAP50 = 92.5%) and on novel footage collected from a different estuary (F1 = 92.3%, mAP50 = 93.4%). The models' performance in determining MaxN was 7.1% better than that of human marine experts and 13.4% better than that of citizen scientists on single-image test data, and 1.5% and 7.8% higher, respectively, on video data. We show that deep learning is more accurate than humans at determining abundance, and that results are consistent and transferable across survey locations. Deep learning methods provide a faster, cheaper and more accurate alternative to the manual data analysis methods currently used to monitor and assess animal abundance. Deep learning techniques have much to offer the field of aquatic ecology.
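As a minimal illustration (not the authors' code), the two measures reported above, MaxN (the maximum count in any single frame) and the F1 score, can be computed from per-frame detection counts and a detection confusion matrix:

```python
def max_n(counts_per_frame):
    """MaxN: the maximum number of individuals detected in any single frame."""
    return max(counts_per_frame, default=0)

def f1_score(tp, fp, fn):
    """F1 from true-positive, false-positive, and false-negative detections."""
    precision = tp / (tp + fp) if (tp + fp) else 0.0
    recall = tp / (tp + fn) if (tp + fn) else 0.0
    if precision + recall == 0:
        return 0.0
    return 2 * precision * recall / (precision + recall)

# Hypothetical detector counts of luderick in five consecutive video frames.
counts = [2, 3, 5, 4, 1]
print(max_n(counts))                           # 5
print(round(f1_score(tp=90, fp=10, fn=5), 3))  # 0.923
```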

2021 ◽  
Vol 13 (2) ◽  
pp. 164
Author(s):  
Chuyao Luo ◽  
Xutao Li ◽  
Yongliang Wen ◽  
Yunming Ye ◽  
Xiaofeng Zhang

The task of precipitation nowcasting is significant in operational weather forecasting, and radar echo map extrapolation plays a vital role in it. Recently, deep learning techniques such as Convolutional Recurrent Neural Network (ConvRNN) models have been designed to solve the task. These models, albeit performing much better than conventional optical-flow-based approaches, suffer from a common problem of underestimating the high-echo-value regions. This drawback is critical for precipitation nowcasting, as those regions often correspond to heavy rains that may cause natural disasters. In this paper, we propose a novel interaction dual attention long short-term memory (IDA-LSTM) model to address the drawback. In this method, an interaction framework is developed for the ConvRNN unit to fully exploit short-term context information by constructing a series of coupled convolutions on the input and hidden states. Moreover, a dual attention mechanism on channels and positions is developed to recall information forgotten over the long term. Comprehensive experiments have been conducted on the CIKM AnalytiCup 2017 data sets, and the results show the effectiveness of IDA-LSTM in addressing the underestimation drawback. The extrapolation performance of IDA-LSTM is superior to that of state-of-the-art methods.
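The dual attention idea can be sketched in a highly simplified form. This is not the paper's IDA-LSTM cell, just a toy NumPy illustration of weighting channels and spatial positions with softmax attention over a feature map:

```python
import numpy as np

def channel_attention(x):
    """Weight each channel by a softmax over its global average activation.
    x: feature map of shape (channels, height, width)."""
    scores = x.mean(axis=(1, 2))                 # one score per channel
    weights = np.exp(scores) / np.exp(scores).sum()
    return x * weights[:, None, None]

def position_attention(x):
    """Weight each spatial position by a softmax over its cross-channel mean."""
    scores = x.mean(axis=0)                      # shape (height, width)
    weights = np.exp(scores) / np.exp(scores).sum()
    return x * weights[None, :, :]

x = np.random.rand(4, 8, 8)                      # toy radar-echo feature map
out = position_attention(channel_attention(x))
print(out.shape)  # (4, 8, 8)
```

The real model applies attention inside the recurrent unit across time steps; this sketch only shows the reweighting operation itself.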


2022 ◽  
pp. 27-50
Author(s):  
Rajalaxmi Prabhu B. ◽  
Seema S.

A lot of user-generated data is available these days from large platforms, blogs, websites, and review sites. These data are usually unstructured, and analyzing their sentiment automatically is considered an important challenge. Several machine learning algorithms have been implemented to extract opinions from large data sets, and a lot of research has gone into understanding machine learning approaches to sentiment analysis. Machine learning depends chiefly on the data used for model building, and hence suitable feature extraction techniques also need to be applied. In this chapter, several deep learning approaches, their challenges, and future issues are addressed. Deep learning techniques are considered important in predicting the sentiments of users. This chapter aims to analyze deep learning techniques for predicting sentiments and to understand the importance of several approaches for mining opinions and determining sentiment polarity.
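As a small illustration of the feature-extraction step the chapter emphasises, a minimal bag-of-words vectorizer (a generic sketch, not the chapter's specific method) turns unstructured review text into numeric vectors a model can consume:

```python
def bag_of_words(docs):
    """Turn raw review texts into per-document word-count vectors."""
    vocab = sorted({word for doc in docs for word in doc.lower().split()})
    index = {word: i for i, word in enumerate(vocab)}
    vectors = []
    for doc in docs:
        vec = [0] * len(vocab)
        for word in doc.lower().split():
            vec[index[word]] += 1
        vectors.append(vec)
    return vocab, vectors

vocab, vectors = bag_of_words(["great product", "terrible product"])
print(vocab)    # ['great', 'product', 'terrible']
print(vectors)  # [[1, 1, 0], [0, 1, 1]]
```

Deep learning approaches typically replace such sparse counts with learned dense embeddings, but the pipeline shape (text in, vectors out) is the same.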


2019 ◽  
Vol 128 (2) ◽  
pp. 261-318 ◽  
Author(s):  
Li Liu ◽  
Wanli Ouyang ◽  
Xiaogang Wang ◽  
Paul Fieguth ◽  
Jie Chen ◽  
...  

Abstract Object detection, one of the most fundamental and challenging problems in computer vision, seeks to locate object instances from a large number of predefined categories in natural images. Deep learning techniques have emerged as a powerful strategy for learning feature representations directly from data and have led to remarkable breakthroughs in the field of generic object detection. Given this period of rapid evolution, the goal of this paper is to provide a comprehensive survey of the recent achievements in this field brought about by deep learning techniques. More than 300 research contributions are included in this survey, covering many aspects of generic object detection: detection frameworks, object feature representation, object proposal generation, context modeling, training strategies, and evaluation metrics. We finish the survey by identifying promising directions for future research.
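One of the evaluation metrics the survey covers, intersection over union (IoU), is compact enough to sketch directly; boxes are given as (x1, y1, x2, y2) corners:

```python
def iou(box_a, box_b):
    """Intersection over union of two axis-aligned boxes (x1, y1, x2, y2)."""
    x1 = max(box_a[0], box_b[0])
    y1 = max(box_a[1], box_b[1])
    x2 = min(box_a[2], box_b[2])
    y2 = min(box_a[3], box_b[3])
    inter = max(0, x2 - x1) * max(0, y2 - y1)
    area_a = (box_a[2] - box_a[0]) * (box_a[3] - box_a[1])
    area_b = (box_b[2] - box_b[0]) * (box_b[3] - box_b[1])
    return inter / (area_a + area_b - inter)

print(iou((0, 0, 10, 10), (5, 5, 15, 15)))  # 25/175, about 0.143
```

A detection is usually counted as correct when its IoU with a ground-truth box exceeds a threshold such as 0.5, which is what the "50" in mAP50 refers to.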


2019 ◽  
Vol 7 (2) ◽  
pp. 418-429 ◽  
Author(s):  
Ye Yuan ◽  
Guijun Ma ◽  
Cheng Cheng ◽  
Beitong Zhou ◽  
Huan Zhao ◽  
...  

Abstract The manufacturing sector is envisioned to be heavily influenced by artificial-intelligence-based technologies, given the extraordinary increases in computational power and data volumes. A central challenge in the manufacturing sector lies in the need for a general framework that ensures satisfactory diagnosis and monitoring performance across different manufacturing applications. Here, we propose a general data-driven, end-to-end framework for the monitoring of manufacturing systems. This framework, derived from deep-learning techniques, evaluates fused sensory measurements to detect and even predict faults and wearing conditions. This work exploits the predictive power of deep learning to automatically extract hidden degradation features from noisy, time-course data. We have evaluated the proposed framework on 10 representative data sets drawn from a wide variety of manufacturing applications. The results reveal that the framework performs well on the examined benchmark applications and can be applied in diverse contexts, indicating its potential use as a critical cornerstone in smart manufacturing.
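A hedged sketch of one plausible preprocessing step for such a pipeline (an assumption for illustration, not the paper's exact method): summarising a noisy sensor time series into per-window statistics before handing it to a deep model:

```python
import numpy as np

def window_features(signal, window, step):
    """Summarise a noisy 1-D sensor series into per-window statistics:
    mean, standard deviation, and peak-to-peak range."""
    feats = []
    for start in range(0, len(signal) - window + 1, step):
        seg = signal[start:start + window]
        feats.append([seg.mean(), seg.std(), seg.max() - seg.min()])
    return np.array(feats)

# A toy vibration-like signal: sinusoid plus sensor noise.
signal = np.sin(np.linspace(0, 10, 200)) + 0.1 * np.random.randn(200)
feats = window_features(signal, window=50, step=25)
print(feats.shape)  # (7, 3)
```

End-to-end frameworks such as the one proposed learn these features automatically, but the windowing of time-course data is a common ingredient either way.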


2021 ◽  
Author(s):  
Jian Wang ◽  
Nikolay V Dokholyan

In recent years, numerous structure-free deep-learning-based neural networks have emerged that aim to predict compound-protein interactions for virtual drug screening. Although these methods show high prediction accuracy in their own tests, we find that they do not generalize to interactions between unknown proteins and unknown small molecules, hindering the utilization of state-of-the-art deep learning techniques in the field of virtual screening. In our work, we develop a compound-protein interaction predictor, YueL, which can predict compound-protein interactions with high generalizability. In comprehensive tests on various data sets, we find that YueL has the ability to predict interactions between unknown compounds and unknown proteins. We anticipate that our work can motivate broad application of deep learning techniques for virtual drug screening, superseding traditional docking and cheminformatics methods.
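The generalizability test described above can be illustrated with a hypothetical split in which no test compound or protein appears anywhere in training (a sketch of the evaluation protocol, not YueL's actual code):

```python
def unseen_split(pairs, train_compounds, train_proteins):
    """Split (compound, protein, label) pairs so a pair trains only if both
    entities were seen, and tests only if neither was seen."""
    train = [(c, p, y) for c, p, y in pairs
             if c in train_compounds and p in train_proteins]
    test = [(c, p, y) for c, p, y in pairs
            if c not in train_compounds and p not in train_proteins]
    return train, test

# Hypothetical compound/protein identifiers for illustration.
pairs = [("c1", "p1", 1), ("c1", "p2", 0), ("c2", "p2", 1), ("c3", "p3", 0)]
train, test = unseen_split(pairs, {"c1"}, {"p1", "p2"})
print(train)  # [('c1', 'p1', 1), ('c1', 'p2', 0)]
print(test)   # [('c3', 'p3', 0)]
```

Pairs mixing a seen and an unseen entity (here ("c2", "p2", 1)) fall into neither set, which keeps the test set strictly out-of-distribution.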


Author(s):  
N. Lakshmi Prasanna ◽  
Sk. Sohal Rehman ◽  
V. Naga Phani ◽  
S. Koteswara Rao ◽  
T. Ram Santosh

Automatic colorization hallucinates what an input grayscale image would look like in color, making it look and feel better than grayscale. One of the most important technologies used in machine learning is deep learning: training the computer with algorithms that imitate the working of the human brain. It is used in areas such as medicine, industrial automation, and electronics. The main objective of this project is coloring grayscale images. We combined convolutional neural networks with the OpenCV library in Python to construct our desired model, and fabricated a user interface with PIL to accept personalized inputs. Traditionally, the user had to give details about boundaries, what colors to put, and so on; such colorization requires considerable user intervention and remains a tedious, time-consuming, and expensive task. So, in this paper we build a model to colorize grayscale images automatically using modern deep learning techniques. In the colorization task, the model needs to find characteristics that map grayscale images to colored ones.
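A common design in CNN colorization, and a plausible sketch of this kind of model's output step (assumed here for illustration, not taken from the paper), is to predict the a/b chrominance channels and recombine them with the input L (lightness) channel to form a Lab image:

```python
import numpy as np

def combine_lab(lightness, predicted_ab):
    """Stack the input grayscale L channel with model-predicted a and b
    channels to form an (H, W, 3) Lab image."""
    assert lightness.shape == predicted_ab.shape[:2]
    return np.dstack([lightness, predicted_ab[..., 0], predicted_ab[..., 1]])

L = np.full((4, 4), 50.0)   # grayscale input, L in [0, 100]
ab = np.zeros((4, 4, 2))    # a neutral (gray) prediction for illustration
lab = combine_lab(L, ab)
print(lab.shape)  # (4, 4, 3)
```

Predicting only the two chrominance channels means the network never has to reproduce the luminance it was given, which is one reason the Lab space is popular for this task; OpenCV can then convert the Lab result back to BGR for display.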


2021 ◽  
Vol 5 (4) ◽  
pp. 544
Author(s):  
Antonius Angga Kurniawan ◽  
Metty Mustikasari

This research aims to implement deep learning techniques to distinguish factual from fake news in the Indonesian language. The methods used are Convolutional Neural Network (CNN) and Long Short-Term Memory (LSTM). The stages of the research consist of collecting data, labeling data, preprocessing data, word embedding, splitting data, building the CNN and LSTM models, evaluating them, testing new input data, and comparing the evaluations of the two models. The data were collected from TurnbackHoax.id, a valid provider of factual and fake news. A total of 1,786 news items were used in this study: 802 factual and 984 fake. The results indicate that the CNN and LSTM methods were successfully applied to identify factual and fake news in the Indonesian language. The CNN model has a test accuracy, precision, and recall of 0.88, while the LSTM model has a test accuracy and precision of 0.84 and a recall of 0.83. On the new input data, all of the predictions obtained by CNN are correct, while the predictions obtained by LSTM contain one error. Based on the evaluation results and the tests on new input data, the model produced by the CNN method is better than the model produced by the LSTM method.
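The reported metrics can be reproduced from a binary confusion matrix; the counts below are illustrative only, not the study's actual confusion matrix:

```python
def classification_report(tp, fp, tn, fn):
    """Accuracy, precision, and recall from binary confusion-matrix counts."""
    accuracy = (tp + tn) / (tp + fp + tn + fn)
    precision = tp / (tp + fp) if (tp + fp) else 0.0
    recall = tp / (tp + fn) if (tp + fn) else 0.0
    return accuracy, precision, recall

# Hypothetical counts chosen so all three metrics come out to 0.88.
acc, prec, rec = classification_report(tp=88, fp=12, tn=88, fn=12)
print(round(acc, 2), round(prec, 2), round(rec, 2))  # 0.88 0.88 0.88
```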


2021 ◽  
Vol 163 (1) ◽  
pp. 23
Author(s):  
Kaiming Cui ◽  
Junjie Liu ◽  
Fabo Feng ◽  
Jifeng Liu

Abstract Deep learning techniques have been well explored in the transiting-exoplanet field; however, previous work mainly focuses on classification and inspection. In this work, we develop a novel detection algorithm based on a well-proven object detection framework from the computer vision field. By training the network on the light curves of confirmed Kepler exoplanets, our model yields about 90% precision and recall for identifying transits with a signal-to-noise ratio higher than 6 (with the confidence threshold set to 0.6). Given a slightly lower confidence threshold, recall can reach higher than 95%. We also transfer the trained model to TESS data and obtain similar performance. The results of our algorithm match the intuition of human visual perception, making it useful for finding single-transit candidates. Moreover, the parameters of the output bounding boxes can also help to find multiplanet systems. Our network and detection functions are implemented in the Deep-Transit toolkit, an open-source Python package hosted on GitHub and PyPI.
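A sketch (not Deep-Transit's actual API) of the post-processing step this approach implies: turning bounding-box detections on a light curve into candidate transit mid-times, using the 0.6 confidence threshold mentioned above:

```python
def transit_midpoints(boxes, threshold=0.6):
    """Convert detected boxes on a light curve, each given as
    (t_start, t_end, confidence), into candidate transit mid-times."""
    return [(t0 + t1) / 2 for t0, t1, conf in boxes if conf >= threshold]

# Hypothetical detections: time spans in days with confidence scores.
boxes = [(1.0, 1.2, 0.95), (3.4, 3.6, 0.72), (5.8, 6.0, 0.41)]
print(transit_midpoints(boxes))  # [1.1, 3.5]
```

Lowering the threshold admits the weak third detection, which is the precision/recall trade-off the abstract describes; spacings between the surviving mid-times are what would hint at additional planets.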

