A Web Service for Assessing Insect Abundances for Meadow Birds by Image Analysis

2019 ◽  
Author(s):  
Ricardo Michels ◽  
Rutger Vos

Abstract
Meadow birds are a group of species native to the Netherlands, characterized by breeding in meadows, that has been in decline over the last several decades despite widespread conservation efforts. Agricultural intensification is thought to be one of the main causes of this decline, but no yearly data exist on the surrounding ecology of these birds. Recent efforts have tried to assess the food supply of meadow birds by setting sticky traps and counting the number of insects caught on them. However, this approach cannot be applied on a large scale, since counting the insects is very labour-intensive and unappealing to the volunteers who contribute to this research. To assess the food supply at a larger scale, we present a system that automates the counting of insects on sticky traps. The system processes uploaded images and metadata using computer vision techniques to determine the number of insects found in photographs of the sticky traps.
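The counting step of such a system can be sketched in a hedged way: assuming insects appear as dark connected regions after thresholding against the yellow trap background, counting them reduces to labelling connected components in a binary mask. The `count_blobs` function and the toy mask below are illustrative only; a production system would use a computer-vision library such as OpenCV on real photographs.

```python
from collections import deque

def count_blobs(grid):
    """Count connected dark regions (candidate insects) in a binary mask.

    grid: list of lists of 0/1, where 1 marks a pixel classified as
    'insect' (e.g. by colour thresholding against the yellow trap).
    Uses 4-connectivity breadth-first flood fill.
    """
    rows, cols = len(grid), len(grid[0])
    seen = [[False] * cols for _ in range(rows)]
    blobs = 0
    for r in range(rows):
        for c in range(cols):
            if grid[r][c] and not seen[r][c]:
                blobs += 1                      # new component found
                queue = deque([(r, c)])
                seen[r][c] = True
                while queue:
                    y, x = queue.popleft()
                    for dy, dx in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                        ny, nx = y + dy, x + dx
                        if 0 <= ny < rows and 0 <= nx < cols \
                                and grid[ny][nx] and not seen[ny][nx]:
                            seen[ny][nx] = True
                            queue.append((ny, nx))
    return blobs

# Two separate dark regions on a tiny 4x5 mask:
mask = [
    [1, 1, 0, 0, 0],
    [1, 0, 0, 0, 1],
    [0, 0, 0, 1, 1],
    [0, 0, 0, 0, 1],
]
print(count_blobs(mask))  # 2
```

On real trap photographs the thresholding step, not the labelling, is where most of the difficulty lies (glare, debris, overlapping insects).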

2019 ◽  
Author(s):  
Charlotte Kaffa ◽  
Rutger Vos

Abstract
As of 1990, 27 bird species have been designated as meadow birds by the Dutch equivalent of the Farmland Bird Indicator (FBI). These birds share one characteristic that classifies them as meadow birds: they prefer to breed in meadows. Since 1960, the overall number of meadow birds has been declining rapidly, and recently only five species have shown increases. Meadow birds are not alone in this: the same rate of decline is seen in many vertebrate, insect, and plant species throughout Europe. Increasing agriculture and urbanisation are considered the main causes of these alarming declines, and agri-environment schemes have shown insufficient effect. Not only the decreased reproduction rate of meadow birds but also the decreased survival rate of meadow bird chicks may play an important role in the dropping numbers. Most meadow birds eat insects, and it is therefore hypothesized that their food supply is too low. The Louis Bolk Institute and ANV Water, Land & Dijken have been setting sticky traps in several meadows and counting the number of trapped insects on each trap to assess whether the food supply of meadow birds is sufficient. However, counting the insects is very time-consuming, unappealing, and error-prone. Therefore, a system that uses image analysis to automatically count the insects was improved and deployed as a web application and a command-line application. This system analyses photographs of sticky traps and counts the insects found on the traps that were set in May 2018. These results were compared to the insect counts from sticky traps set in May 2017, testing whether the difference was significant and whether there was a correlation with the usage of certain management packages. The accuracy of the automated system was also tested by determining whether the automatically counted results differed significantly from hand-counted results.
The results showed that the accuracy of the system was improved but still unable to provide very reliable results, most likely due to the use of low-quality photographs from 2017. The number of insects counted on the sticky traps set in 2017 was significantly lower than in 2018, and no correlation could be found between the number of insects and the management packages. It is possible that insect populations have grown this much; however, the difference in insect numbers could also have been caused by differences in temperature when the sticky traps were placed, or by the traps being less sticky. It is also very likely that the 2017 counts are lower because of the poor quality of the photographs, so that fewer insects could be detected. If the insect populations have grown as significantly as the results indicate, then the food supply of meadow birds is more sufficient than in 2017, and an increase in meadow birds has probably occurred or will occur in the near future. Further research should use high-quality standardized photographs and be carried out over multiple years to gain plentiful, reliable data.
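The abstract does not name the significance test used. As one hedged illustration, a simple permutation test on per-trap counts can check whether a year-on-year difference in mean insect numbers is larger than chance would allow; the counts below are made up, not the study's data.

```python
import random

def permutation_test(a, b, n_iter=10000, seed=0):
    """Two-sided permutation test for a difference in mean counts
    between two groups. Returns an approximate p-value: the fraction
    of random relabellings whose mean difference is at least as
    extreme as the observed one.
    """
    rng = random.Random(seed)
    observed = abs(sum(a) / len(a) - sum(b) / len(b))
    pooled = list(a) + list(b)
    hits = 0
    for _ in range(n_iter):
        rng.shuffle(pooled)
        x, y = pooled[:len(a)], pooled[len(a):]
        diff = abs(sum(x) / len(x) - sum(y) / len(y))
        if diff >= observed:
            hits += 1
    return hits / n_iter

counts_2017 = [12, 15, 9, 11, 14, 10]   # hypothetical per-trap counts
counts_2018 = [25, 30, 22, 28, 31, 27]
p = permutation_test(counts_2017, counts_2018)
print(p < 0.05)  # the year-on-year difference is significant at the 5% level
```

A permutation test makes no normality assumption, which suits small, skewed trap counts; a t-test or Mann-Whitney U test would be common alternatives.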


2019 ◽  
Author(s):  
Alan Bauer ◽  
Aaron George Bostrom ◽  
Joshua Ball ◽  
Christopher Applegate ◽  
Tao Cheng ◽  
...  

Abstract
Aerial imagery is regularly used by farmers and growers to monitor crops during the growing season. To extract meaningful phenotypic information from large-scale aerial images collected regularly from the field, high-throughput analytic solutions are required, which not only produce high-quality measures of key crop traits but also support agricultural practitioners in making reliable management decisions about their crops. Here, we report AirSurf-Lettuce, an automated and open-source aerial image analysis platform that combines modern computer vision, up-to-date machine learning, and modular software engineering to measure yield-related phenotypes of millions of lettuces across the field. Utilising ultra-large normalized difference vegetation index (NDVI) images acquired by fixed-wing light aircraft together with a deep-learning classifier trained on over 100,000 labelled lettuce signals, the platform is capable of scoring and categorising iceberg lettuces with high accuracy (>98%). Furthermore, novel analysis functions have been developed to map the lettuce size distribution in the field, from which global positioning system (GPS)-tagged harvest regions can be derived to enable growers and farmers to plan precise harvest strategies and estimate marketability before the harvest.
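The NDVI signal the platform builds on is a standard per-pixel quantity, (NIR − Red) / (NIR + Red); a minimal sketch with made-up reflectance values:

```python
def ndvi(nir, red):
    """Normalised difference vegetation index for one pixel.
    NDVI = (NIR - Red) / (NIR + Red), ranging from -1 to 1;
    dense green vegetation such as a healthy lettuce head scores high,
    while bare soil sits near zero.
    """
    if nir + red == 0:
        return 0.0  # avoid division by zero on dark pixels
    return (nir - red) / (nir + red)

# Hypothetical reflectance values on a 0-1 scale:
print(round(ndvi(0.6, 0.1), 3))   # 0.714 -> vigorous vegetation
print(round(ndvi(0.3, 0.25), 3))  # 0.091 -> bare soil / weak signal
```

In the platform described above this index is computed over ultra-large orthomosaics, with the CNN classifier then scoring candidate lettuce signals in the NDVI image.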


Plant Methods ◽  
2021 ◽  
Vol 17 (1) ◽  
Author(s):  
Shuo Zhou ◽  
Xiujuan Chai ◽  
Zixuan Yang ◽  
Hongwu Wang ◽  
Chenxue Yang ◽  
...  

Abstract
Background
Maize (Zea mays L.) is one of the most important food sources in the world and has been one of the main targets of plant genetics and phenotypic research for centuries. Observation and analysis of various morphological phenotypic traits during maize growth are essential for genetic and breeding studies. The generally huge number of samples produces an enormous amount of high-resolution image data. While high-throughput plant phenotyping platforms are increasingly used in maize breeding trials, there is a clear need for software tools that can automatically identify visual phenotypic features of maize plants and implement batch processing on image datasets.
Results
On the boundary between computer vision and plant science, we utilize advanced deep learning methods based on convolutional neural networks to empower the workflow of maize phenotyping analysis. This paper presents Maize-IAS (Maize Image Analysis Software), an integrated application supporting one-click analysis of maize phenotypes and embedding multiple functions: (I) Projection, (II) Color Analysis, (III) Internode Length, (IV) Height, (V) Stem Diameter, and (VI) Leaves Counting. Taking an RGB image of maize as input, the software provides a user-friendly graphical interface and rapid calculation of multiple important phenotypic characteristics, including leaf sheath point detection and leaf segmentation. For the Leaves Counting function, the mean and standard deviation of the difference between prediction and ground truth are 1.60 and 1.625, respectively.
Conclusion
Maize-IAS is easy to use and demands professional knowledge of neither computer vision nor deep learning. All functions for batch processing are incorporated, enabling automated and labor-reduced recording, measurement, and quantitative analysis of maize growth traits on a large dataset.
We demonstrate the efficiency and potential of our techniques and software for image-based plant research, which also shows the feasibility of AI technology implemented in agriculture and plant science.
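The reported leaf-counting accuracy is an error statistic over prediction/ground-truth pairs. A small sketch of how such a mean and standard deviation of differences could be computed; the counts below are hypothetical, not the paper's data (which reports a mean of 1.60 and standard deviation of 1.625).

```python
def count_error_stats(predicted, ground_truth):
    """Mean and (population) standard deviation of the per-plant
    difference (prediction - ground truth), the kind of evaluation
    used for a leaf-counting function."""
    diffs = [p - g for p, g in zip(predicted, ground_truth)]
    mean = sum(diffs) / len(diffs)
    var = sum((d - mean) ** 2 for d in diffs) / len(diffs)
    return mean, var ** 0.5

pred = [10, 12, 9, 14, 11]   # leaves counted by the software (made up)
truth = [9, 11, 8, 12, 10]   # manually counted leaves (made up)
mean, sd = count_error_stats(pred, truth)
print(mean, round(sd, 3))    # 1.2 0.4
```

A signed difference (rather than an absolute one) also reveals systematic over- or under-counting, which matters when leaves occlude one another.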


Technologies ◽  
2020 ◽  
Vol 9 (1) ◽  
pp. 2
Author(s):  
Ashish Jaiswal ◽  
Ashwin Ramesh Babu ◽  
Mohammad Zaki Zadeh ◽  
Debapriya Banerjee ◽  
Fillia Makedon

Self-supervised learning has gained popularity because of its ability to avoid the cost of annotating large-scale datasets. It is capable of adopting self-defined pseudolabels as supervision and using the learned representations for several downstream tasks. Specifically, contrastive learning has recently become a dominant component in self-supervised learning for computer vision, natural language processing (NLP), and other domains. It aims at embedding augmented versions of the same sample close to each other while trying to push away embeddings from different samples. This paper provides an extensive review of self-supervised methods that follow the contrastive approach. The work explains commonly used pretext tasks in a contrastive learning setup, followed by the different architectures that have been proposed so far. Next, we present a performance comparison of different methods on multiple downstream tasks such as image classification, object detection, and action recognition. Finally, we conclude with the limitations of the current methods and the need for further techniques and future directions to make meaningful progress.
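The "pull positives together, push negatives apart" objective described above is typically formalised as an InfoNCE-style loss over cosine similarities. A toy pure-Python sketch on tiny embeddings; the vectors and temperature are illustrative, not taken from any particular reviewed method.

```python
import math

def info_nce(anchor, positive, negatives, temperature=0.5):
    """Toy InfoNCE-style contrastive loss. Treats the similarity of the
    anchor to its positive (an augmented view of the same sample) and to
    each negative (other samples) as logits, and returns the
    cross-entropy of picking the positive."""
    def cos(u, v):
        dot = sum(a * b for a, b in zip(u, v))
        nu = math.sqrt(sum(a * a for a in u))
        nv = math.sqrt(sum(b * b for b in v))
        return dot / (nu * nv)

    logits = [cos(anchor, positive) / temperature] + \
             [cos(anchor, n) / temperature for n in negatives]
    # Softmax cross-entropy with the positive at index 0:
    denom = sum(math.exp(l) for l in logits)
    return -math.log(math.exp(logits[0]) / denom)

anchor = [1.0, 0.0]
positive = [0.9, 0.1]                     # near-identical view: low loss
negatives = [[-1.0, 0.0], [0.0, 1.0]]     # other samples: pushed away
print(round(info_nce(anchor, positive, negatives), 3))
```

Minimising this loss raises the anchor-positive similarity relative to all anchor-negative similarities, which is exactly the geometric behaviour the paragraph above describes; the temperature controls how sharply hard negatives are weighted.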


Diagnostics ◽  
2021 ◽  
Vol 11 (8) ◽  
pp. 1384
Author(s):  
Yin Dai ◽  
Yifan Gao ◽  
Fayu Liu

Over the past decade, convolutional neural networks (CNN) have shown very competitive performance in medical image analysis tasks, such as disease classification, tumor segmentation, and lesion detection. CNNs have great advantages in extracting local features of images. However, due to the locality of the convolution operation, they cannot handle long-range relationships well. Recently, transformers have been applied to computer vision and achieved remarkable success on large-scale datasets. Compared with natural images, multi-modal medical images have explicit and important long-range dependencies, and effective multi-modal fusion strategies can greatly improve the performance of deep models. This prompts us to study transformer-based structures and apply them to multi-modal medical images. Existing transformer-based network architectures require large-scale datasets to achieve good performance. However, medical imaging datasets are relatively small, which makes it difficult to apply pure transformers to medical image analysis. Therefore, we propose TransMed for multi-modal medical image classification. TransMed combines the advantages of CNNs and transformers to efficiently extract low-level features of images and establish long-range dependencies between modalities. We evaluated our model on two datasets: parotid gland tumor classification and knee injury classification. Combining our contributions, we achieve improvements of 10.1% and 1.9% in average accuracy, respectively, outperforming other state-of-the-art CNN-based models. The results of the proposed method are promising and have tremendous potential to be applied to a large number of medical image analysis tasks. To the best of our knowledge, this is the first work to apply transformers to multi-modal medical image classification.
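The long-range dependency modelling that transformers add to a CNN backbone rests on scaled dot-product self-attention, in which every token (here, an image-patch feature) attends to every other. A minimal sketch over a few hypothetical patch embeddings; this is the generic mechanism, not TransMed's actual architecture.

```python
import math

def attention(queries, keys, values):
    """Scaled dot-product attention. For each query, computes softmax-
    normalised similarity weights over all keys, then returns the
    weighted average of the value vectors. Unlike a convolution's local
    window, every position can influence every other in one step.
    All inputs are lists of equal-length vectors."""
    d = len(keys[0])
    out = []
    for q in queries:
        scores = [sum(a * b for a, b in zip(q, k)) / math.sqrt(d)
                  for k in keys]
        m = max(scores)                       # for numerical stability
        exps = [math.exp(s - m) for s in scores]
        total = sum(exps)
        weights = [e / total for e in exps]
        out.append([sum(w * v[i] for w, v in zip(weights, values))
                    for i in range(len(values[0]))])
    return out

# Three hypothetical patch embeddings (e.g. produced by a CNN backbone);
# self-attention uses the same vectors as queries, keys, and values:
patches = [[1.0, 0.0], [0.0, 1.0], [1.0, 1.0]]
mixed = attention(patches, patches, patches)
print([[round(x, 3) for x in row] for row in mixed])
```

In a hybrid design like the one described above, the CNN supplies the patch embeddings and attention layers then fuse information across patches and modalities.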


2019 ◽  
Vol 11 (10) ◽  
pp. 1181 ◽  
Author(s):  
Norman Kerle ◽  
Markus Gerke ◽  
Sébastien Lefèvre

The 6th biennial conference on object-based image analysis—GEOBIA 2016—took place in September 2016 at the University of Twente in Enschede, The Netherlands (see www [...]


2013 ◽  
Vol 17 (2) ◽  
pp. 261-272
Author(s):  
Silvia B. Matiacevich ◽  
Olivia C. Henríquez ◽  
Domingo Mery ◽  
Franco Pedreschi
