Individual Tree-Crown Detection in RGB Imagery Using Semi-Supervised Deep Learning Neural Networks

2019 ◽  
Vol 11 (11) ◽  
pp. 1309 ◽  
Author(s):  
Ben G. Weinstein ◽  
Sergio Marconi ◽  
Stephanie Bohlman ◽  
Alina Zare ◽  
Ethan White

Remote sensing can transform the speed, scale, and cost of biodiversity and forestry surveys. Data acquisition currently outpaces the ability to identify individual organisms in high-resolution imagery. We outline an approach for identifying tree crowns in RGB imagery using a semi-supervised deep learning detection network. Individual crown delineation has been a long-standing challenge in remote sensing, and available algorithms produce mixed results. We show that deep learning models can leverage existing Light Detection and Ranging (LIDAR)-based unsupervised delineation to generate tree crowns for training an initial RGB crown detection model. Despite limitations in the original unsupervised detection approach, this noisy training data may contain information from which the neural network can learn initial tree features. We then refine the initial model using a small number of higher-quality hand-annotated RGB images. We validate our proposed approach using an open-canopy site in the National Ecological Observatory Network (NEON). Our results show that a model using 434,551 self-generated trees, with the addition of 2848 hand-annotated trees, yields accurate predictions in natural landscapes. Using an intersection-over-union threshold of 0.5, the full model had an average tree-crown recall of 0.69, with a precision of 0.61 for the visually annotated data. The model had an average tree detection rate of 0.82 for the field-collected stems. The addition of a small number of hand-annotated trees improved the performance over the initial self-supervised model. This semi-supervised deep learning approach demonstrates that remote sensing can overcome a lack of labeled training data by generating noisy data for initial training using unsupervised methods and retraining the resulting models with high-quality labeled data.
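The recall and precision figures above depend on matching predicted crowns to annotated crowns at an intersection-over-union (IoU) threshold of 0.5. A minimal sketch of that evaluation, with boxes as (xmin, ymin, xmax, ymax) tuples and a greedy one-to-one matcher (an illustrative simplification, not the authors' evaluation code):

```python
def iou(box_a, box_b):
    # Intersection-over-union of two axis-aligned boxes
    # given as (xmin, ymin, xmax, ymax).
    ix1, iy1 = max(box_a[0], box_b[0]), max(box_a[1], box_b[1])
    ix2, iy2 = min(box_a[2], box_b[2]), min(box_a[3], box_b[3])
    inter = max(0, ix2 - ix1) * max(0, iy2 - iy1)
    area_a = (box_a[2] - box_a[0]) * (box_a[3] - box_a[1])
    area_b = (box_b[2] - box_b[0]) * (box_b[3] - box_b[1])
    union = area_a + area_b - inter
    return inter / union if union else 0.0

def precision_recall(preds, truths, threshold=0.5):
    # Greedily match each predicted crown to the best unmatched
    # annotation with IoU at or above the threshold.
    matched = set()
    tp = 0
    for p in preds:
        best, best_iou = None, threshold
        for i, t in enumerate(truths):
            if i in matched:
                continue
            score = iou(p, t)
            if score >= best_iou:
                best, best_iou = i, score
        if best is not None:
            matched.add(best)
            tp += 1
    precision = tp / len(preds) if preds else 0.0
    recall = tp / len(truths) if truths else 0.0
    return precision, recall
```

A prediction shifted by a pixel can still clear the 0.5 threshold, while a spurious box counts against precision only.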

2019 ◽  
Author(s):  
Ben. G. Weinstein ◽  
Sergio Marconi ◽  
Stephanie Bohlman ◽  
Alina Zare ◽  
Ethan White

Abstract. Remote sensing can transform the speed, scale, and cost of biodiversity and forestry surveys. Data acquisition currently outpaces the ability to identify individual organisms in high-resolution imagery. We outline an approach for identifying tree crowns in true color, or red-green-blue (RGB), imagery using a deep learning detection network. Individual crown delineation is a persistent challenge in studies of forested ecosystems and has primarily been addressed using three-dimensional LIDAR. We show that deep learning models can leverage existing LIDAR-based unsupervised delineation approaches to initially train an RGB crown detection model, which is then refined using a small number of hand-annotated RGB images. We validate our proposed approach using an open-canopy site in the National Ecological Observatory Network (NEON). Our results show that combining LIDAR and RGB methods in a self-supervised model improves predictions of trees in natural landscapes. The addition of a small number of hand-annotated trees improved performance over the initial self-supervised model. While undercounting of individual trees in complex canopies remains an area of development, deep learning can increase the performance of remotely sensed tree surveys.


Author(s):  
S. Kuikel ◽  
B. Upadhyay ◽  
D. Aryal ◽  
S. Bista ◽  
B. Awasthi ◽  
...  

Abstract. Individual Tree Crown (ITC) delineation from aerial imagery plays an important role in forestry management and precision farming. Several conventional as well as machine learning and deep learning algorithms have recently been used for ITC detection. In this paper, we present a Convolutional Neural Network (CNN) and a Support Vector Machine (SVM) as the deep learning and machine learning algorithms, along with conventional classification methods such as Object-Based Image Analysis (OBIA) and Nearest Neighbor (NN) classification, for banana tree delineation. The comparison considered two cases. First, each classifier was fed imagery with height information to assess the effect of height on banana tree delineation. Second, the individual classifiers were compared quantitatively and qualitatively on five metrics, i.e., Overall Accuracy, Recall, Precision, F-score, and Intersection over Union (IoU), and the best classifier was determined. The results show no significant differences in the metrics when height information was added, as the banana trees in the farm were of nearly uniform height. As discussed in the quantitative and qualitative analysis, the CNN algorithm outperformed the SVM, OBIA, and NN techniques for crown delineation in terms of the performance measures.
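The five comparison measures can all be derived from detection counts. A small illustrative helper (not code from the study; the IoU here is the count-based Jaccard form TP / (TP + FP + FN), which may differ from a pixel-wise computation):

```python
def classification_metrics(tp, fp, fn, tn):
    # Overall Accuracy, Precision, Recall, F-score, and IoU
    # from true/false positive and negative counts.
    total = tp + fp + fn + tn
    overall_accuracy = (tp + tn) / total
    precision = tp / (tp + fp)
    recall = tp / (tp + fn)
    f_score = 2 * precision * recall / (precision + recall)
    iou = tp / (tp + fp + fn)
    return overall_accuracy, precision, recall, f_score, iou
```

Note that Overall Accuracy rewards true negatives (background correctly ignored), while IoU does not, which is why the two can rank classifiers differently.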


Author(s):  
Ben. G. Weinstein ◽  
Sergio Marconi ◽  
Mélaine Aubry-Kientz ◽  
Gregoire Vincent ◽  
Henry Senyondo ◽  
...  

Abstract. Remote sensing of forested landscapes can transform the speed, scale, and cost of forest research. The delineation of individual trees in remote sensing images is an essential task in forest analysis. Here we introduce a new Python package, DeepForest, that detects individual trees in high resolution RGB imagery using deep learning. While deep learning has proven highly effective in a range of computer vision tasks, it requires large amounts of training data that are typically difficult to obtain in ecological studies. DeepForest overcomes this limitation by including a model pre-trained on over 30 million algorithmically generated crowns from 22 forests and fine-tuned using 10,000 hand-labeled crowns from 6 forests. The package supports the application of this general model to new data, fine-tuning the model to new datasets with user-labeled crowns, training new models, and evaluating model predictions. This simplifies the process of using and retraining deep learning models for a range of forests, sensors, and spatial resolutions. We illustrate the workflow of DeepForest using data from the National Ecological Observatory Network, a tropical forest in French Guiana, and street trees from Portland, Oregon.


2020 ◽  
Vol 12 (15) ◽  
pp. 2426
Author(s):  
Alin-Ionuț Pleșoianu ◽  
Mihai-Sorin Stupariu ◽  
Ionuț Șandric ◽  
Ileana Pătru-Stupariu ◽  
Lucian Drăguț

Traditional methods for individual tree-crown (ITC) detection (image classification, segmentation, template matching, etc.) applied to very high-resolution remote sensing imagery have been shown to struggle in disparate landscape types or image resolutions due to scale problems and information complexity. Deep learning promised to overcome these shortcomings due to its superior performance and versatility, proven with reported detection rates of ~90%. However, such models still find their limits in transferability across study areas, because of different tree conditions (e.g., isolated trees vs. compact forests) and/or resolutions of the input data. This study introduces a highly replicable deep learning ensemble design for ITC detection and species classification based on the established single shot detector (SSD) model. The ensemble design is based on varying the input data for the SSD models, coupled with a voting strategy for the output predictions. Very high-resolution unmanned aerial vehicle (UAV) imagery, aerial remote sensing imagery, and elevation data are used in different combinations to test the performance of the ensemble models in three study sites with highly contrasting spatial patterns. The results show that ensemble models perform better than any single SSD model, regardless of the local tree conditions or image resolution. The detection performance and accuracy rates improved by 3–18% with as few as two participating models, regardless of the study site. However, when more than two models were included, the performance of the ensemble models improved only slightly or even dropped.
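The voting over per-model predictions can be sketched as box-level consensus: a detection survives only if enough ensemble members predict an overlapping box. This is a hypothetical simplification; the paper's actual voting scheme for SSD outputs may differ:

```python
def box_iou(a, b):
    # Intersection-over-union of two (xmin, ymin, xmax, ymax) boxes.
    ix1, iy1 = max(a[0], b[0]), max(a[1], b[1])
    ix2, iy2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0, ix2 - ix1) * max(0, iy2 - iy1)
    area_a = (a[2] - a[0]) * (a[3] - a[1])
    area_b = (b[2] - b[0]) * (b[3] - b[1])
    union = area_a + area_b - inter
    return inter / union if union else 0.0

def ensemble_vote(model_preds, min_votes=2, iou_thr=0.5):
    # model_preds: one list of boxes per ensemble member.
    # A detection from the first model is kept only if enough of the
    # other models predict a sufficiently overlapping box.
    kept = []
    for box in model_preds[0]:
        votes = 1 + sum(
            any(box_iou(box, other_box) >= iou_thr for other_box in other)
            for other in model_preds[1:]
        )
        if votes >= min_votes:
            kept.append(box)
    return kept
```

With `min_votes=2`, a two-model ensemble already filters boxes that only one model produced, which is consistent with the reported gains from as few as two participants.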


2021 ◽  
Vol 13 (14) ◽  
pp. 2819
Author(s):  
Sudong Zang ◽  
Lingli Mu ◽  
Lina Xian ◽  
Wei Zhang

Lunar craters are very important for estimating the geological age of the Moon, studying the evolution of the Moon, and for landing site selection. Due to a lack of labeled samples, the long processing times required by high-resolution imagery, the small number of suitable detection models, and the influence of solar illumination, Crater Detection Algorithms (CDAs) based on Digital Orthophoto Maps (DOMs) have not yet been well-developed. In this paper, a large number of training data are labeled manually in the Highland and Maria regions, using the Chang’E-2 (CE-2) DOM; however, the labeled data cannot cover all kinds of crater types. To solve the problem of small crater detection, a new crater detection model (Crater R-CNN) is proposed, which can effectively extract the spatial and semantic information of craters from DOM data. As incomplete labeled samples are not conducive to model training, the Two-Teachers Self-training with Noise (TTSN) method is used to train the Crater R-CNN model, thus constructing a new model, called Crater R-CNN with TTSN, which can achieve state-of-the-art performance. To evaluate the accuracy of the model, three other detection models (Mask R-CNN, no-Mask R-CNN, and Crater R-CNN) based on semi-supervised deep learning were used to detect craters in the Highland and Maria regions. The results indicate that Crater R-CNN with TTSN achieved the highest precision (91.4% and 88.5%, respectively) in the Highland and Maria regions, as well as the highest recall and F1 score. Compared with Mask R-CNN, no-Mask R-CNN, and Crater R-CNN, Crater R-CNN with TTSN had strong robustness and better generalization ability for the detection of craters within 1 km in diameter across different terrains, making it possible to detect small craters with high accuracy when using DOM data.
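The two-teacher self-training idea can be illustrated with a minimal consensus pseudo-labeling sketch. This is a hypothetical simplification: the actual TTSN method also injects noise during student training and operates on detection outputs rather than scalar scores:

```python
def consensus_pseudo_labels(teacher_a, teacher_b, unlabeled):
    # Keep only the samples on which the two teacher models agree;
    # the shared prediction becomes the pseudo-label used to train
    # the student model on otherwise unlabeled data.
    pseudo = []
    for x in unlabeled:
        ya, yb = teacher_a(x), teacher_b(x)
        if ya == yb:
            pseudo.append((x, ya))
    return pseudo
```

Disagreement between teachers flags ambiguous samples (e.g., degraded or illumination-distorted craters), which are simply excluded from the pseudo-labeled set rather than risk reinforcing an error.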


2021 ◽  
Vol 13 (3) ◽  
pp. 479
Author(s):  
Shijie Yan ◽  
Linhai Jing ◽  
Huan Wang

Tree species surveys are crucial to forest resource management and can provide references for forest protection policy making. The traditional tree species survey in the field is labor-intensive and time-consuming, supporting the practical significance of remote sensing. The availability of high-resolution satellite remote sensing data enables individual tree species (ITS) recognition at low cost. In this study, the potential of the combination of such images and a convolutional neural network (CNN) to recognize ITS was explored. First, individual tree crowns were delineated from a high-spatial-resolution WorldView-3 (WV3) image and manually labeled as different tree species. Next, a dataset of the image subsets of the labeled individual tree crowns was built, and several CNN models were trained on the dataset for ITS recognition. The models were then applied to the WV3 image. The results show that the distribution maps of six ITS offered an overall accuracy of 82.7% and a kappa coefficient of 0.79 based on the modified GoogLeNet, which used multi-scale convolution kernels to extract features of the tree crown samples and was modified for small-scale samples. The ITS recognition method proposed in this study, with multi-scale individual tree crown delineation, avoids artificial tree crown delineation. Compared with the random forest (RF) and support vector machine (SVM) approaches, this method can automatically extract features and outperforms RF and SVM in the classification of six tree species.
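The reported kappa coefficient measures classification agreement beyond chance. For reference, it can be computed from a multi-class confusion matrix with the standard formula (this is not code from the study):

```python
def cohens_kappa(confusion):
    # confusion[i][j]: number of samples of true class i predicted as j.
    k = len(confusion)
    n = sum(sum(row) for row in confusion)
    observed = sum(confusion[i][i] for i in range(k)) / n
    row_totals = [sum(row) for row in confusion]
    col_totals = [sum(confusion[i][j] for i in range(k)) for j in range(k)]
    # Expected agreement if predictions were drawn by chance with the
    # same marginal class frequencies.
    expected = sum(row_totals[i] * col_totals[i] for i in range(k)) / (n * n)
    return (observed - expected) / (1 - expected)
```

A kappa of 0.79 alongside 82.7% overall accuracy indicates the six-species map performs far better than chance agreement with the class distribution would allow.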


Author(s):  
Shawn Taylor

This paper describes the methods used in the submission for team Shawn for the data science competition “Airborne Remote Sensing to Ecological Information”. I used canopy height rasters as well as NDVI rasters of the study area. I first filtered out pixels using a minimum NDVI threshold, then derived individual tree crowns using a watershed algorithm. I imposed limits on tree crown size and number using a minimum distance between two crowns and a maximum crown radius. All parameters were derived by minimizing the Jaccard coefficient. The final Jaccard coefficient on the training data was 0.117. All methods were implemented in Python and are available in code repositories.
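The Jaccard coefficient used for parameter tuning compares predicted and reference crown pixels. A minimal sketch, representing masks as sets of pixel coordinates (an assumption for illustration; the competition may have scored crown polygons instead):

```python
def jaccard(mask_a, mask_b):
    # Masks as sets of (row, col) pixels belonging to delineated crowns.
    # Jaccard = |intersection| / |union|, ranging from 0 (disjoint)
    # to 1 (identical).
    union = mask_a | mask_b
    if not union:
        return 0.0
    return len(mask_a & mask_b) / len(union)
```

Because the score penalizes both missed crown pixels and over-segmentation, it is a natural single objective for tuning the NDVI threshold and the watershed size limits together.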


Author(s):  
B. Hu ◽  
W. Jung

Abstract. The objective of this study was to explore the utilization of deep learning networks in individual tree crown (ITC) delineation, a very important step in individual tree analysis. Even though many traditional machine learning methods have been developed for ITC delineation, the accuracy remains low, especially for dense forests where branches, crowns, and clusters of trees usually have similar characteristics and boundaries of tree crowns are not distinct. Advances in deep learning provide a good opportunity to improve ITC delineation. In this study, U-net, Residual U-net, and attention U-net were implemented for the first time in ITC delineation. In order to ensure that the boundaries of tree crowns were classified correctly, a weight map was generated to give more weight to boundary pixels between two close crowns in the loss function. These three networks were trained and tested using optical imagery obtained over a study site within the Great Lakes-St. Lawrence forest region, Ontario, Canada. Based on two test sites dominated by open mixed forest and closed deciduous forest, respectively, the overall accuracies were 0.94 and 0.90 for U-net, 0.89 and 0.62 for Residual U-net, and 0.96 and 0.83 for attention U-net.
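The boundary weight map described above follows the idea introduced with the original U-Net, where pixels squeezed between two nearby objects receive larger loss weights. A sketch of that weighting; the study's exact formula and constants are assumptions here:

```python
import math

def boundary_weight(d1, d2, w0=10.0, sigma=5.0):
    # d1, d2: distances (in pixels) from a pixel to its two nearest
    # crown boundaries. Pixels between two adjacent crowns have small
    # d1 + d2 and therefore large weights, forcing the network to
    # learn the thin separation between touching crowns.
    return w0 * math.exp(-((d1 + d2) ** 2) / (2 * sigma ** 2))
```

This weight is typically added to per-class balancing weights and multiplied into the pixel-wise cross-entropy loss.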



