Automated Detection of Conifer Seedlings in Drone Imagery Using Convolutional Neural Networks

2019 ◽  
Vol 11 (21) ◽  
pp. 2585 ◽  
Author(s):  
Michael Fromm ◽  
Matthias Schubert ◽  
Guillermo Castilla ◽  
Julia Linke ◽  
Greg McDermid

Monitoring tree regeneration in forest areas disturbed by resource extraction is a requirement for sustainably managing the boreal forest of Alberta, Canada. Small remotely piloted aircraft systems (sRPAS, a.k.a. drones) have the potential to decrease the cost of field surveys drastically, but produce large quantities of data that require specialized processing techniques. In this study, we explored the possibility of using convolutional neural networks (CNNs) on these data for automatically detecting conifer seedlings along recovering seismic lines: a common legacy footprint from oil and gas exploration. We assessed three different CNN architectures, of which Faster R-CNN performed best (mean average precision 81%). Furthermore, we evaluated the effects of training-set size, season, seedling size, and spatial resolution on detection performance. Our results indicate that drone imagery analyzed by artificial intelligence can be used to detect conifer seedlings in regenerating sites with high accuracy, which increases with the size in pixels of the seedlings. By using a pre-trained network, the size of the training dataset can be reduced to a couple hundred seedlings without any significant loss of accuracy. Furthermore, we show that combining data from different seasons yields the best results. The proposed method is a first step towards automated monitoring of forest restoration/regeneration.
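The study reports detection quality as mean average precision, which rests on matching predicted boxes to ground truth by intersection-over-union (IoU). A minimal, pure-Python sketch of that matching step (the `(x1, y1, x2, y2)` box format and the 0.5 IoU threshold are common conventions, not details taken from the paper):

```python
def iou(box_a, box_b):
    """Intersection-over-union of two axis-aligned boxes (x1, y1, x2, y2)."""
    x1, y1 = max(box_a[0], box_b[0]), max(box_a[1], box_b[1])
    x2, y2 = min(box_a[2], box_b[2]), min(box_a[3], box_b[3])
    inter = max(0, x2 - x1) * max(0, y2 - y1)
    area_a = (box_a[2] - box_a[0]) * (box_a[3] - box_a[1])
    area_b = (box_b[2] - box_b[0]) * (box_b[3] - box_b[1])
    union = area_a + area_b - inter
    return inter / union if union > 0 else 0.0

def count_true_positives(predictions, ground_truth, iou_threshold=0.5):
    """Greedily match predictions (box, confidence) to ground-truth boxes,
    highest confidence first; each ground-truth box may match only once."""
    unmatched = list(ground_truth)
    tp = 0
    for box, _score in sorted(predictions, key=lambda p: -p[1]):
        best = max(unmatched, key=lambda g: iou(box, g), default=None)
        if best is not None and iou(box, best) >= iou_threshold:
            tp += 1
            unmatched.remove(best)
    return tp
```

Precision and recall at each confidence cutoff, and hence average precision, follow directly from counts like these.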

2021 ◽  
Vol 2 (3) ◽  
Author(s):  
Gustaf Halvardsson ◽  
Johanna Peterson ◽  
César Soto-Valero ◽  
Benoit Baudry

Abstract The automatic interpretation of sign languages is a challenging task, as it requires high-level vision and high-level motion processing systems to provide accurate image perception. In this paper, we use Convolutional Neural Networks (CNNs) and transfer learning to enable computers to interpret signs of the Swedish Sign Language (SSL) hand alphabet. Our model is built on a pre-trained InceptionV3 network and uses the mini-batch gradient descent optimization algorithm. We rely on transfer learning during the pre-training of the model. The final accuracy of the model, based on 8 study subjects and 9400 images, is 85%. Our results indicate that CNNs are a promising approach to interpreting sign languages, and that transfer learning can be used to achieve high testing accuracy despite a small training dataset. Furthermore, we describe the implementation details of our model as a user-friendly web application for interpreting signs.
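The abstract names mini-batch gradient descent as the optimizer. A self-contained numpy sketch of the update rule on a linear least-squares model (the toy model, batch size, and learning rate are illustrative assumptions; the paper applies the optimizer to an InceptionV3 network):

```python
import numpy as np

def minibatch_gd(X, y, batch_size=32, lr=0.1, epochs=100, seed=0):
    """Mini-batch gradient descent for least squares: find w minimizing ||Xw - y||^2.
    Each epoch shuffles the data and steps once per mini-batch."""
    rng = np.random.default_rng(seed)
    n, d = X.shape
    w = np.zeros(d)
    for _ in range(epochs):
        order = rng.permutation(n)
        for start in range(0, n, batch_size):
            idx = order[start:start + batch_size]
            # Gradient of the mean squared error over the current mini-batch.
            grad = 2 * X[idx].T @ (X[idx] @ w - y[idx]) / len(idx)
            w -= lr * grad
    return w
```

The same loop structure, with the gradient supplied by backpropagation, is what a framework runs when training a CNN with this optimizer.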


2021 ◽  
Vol 13 (13) ◽  
pp. 2627
Author(s):  
Marks Melo Moura ◽  
Luiz Eduardo Soares de Oliveira ◽  
Carlos Roberto Sanquetta ◽  
Alexis Bastos ◽  
Midhun Mohan ◽  
...  

Precise assessments of forest species composition help analyze biodiversity patterns, estimate wood stocks, and improve carbon stock estimates. The objective of this work was therefore to evaluate the use of high-resolution images obtained from an Unmanned Aerial Vehicle (UAV) for identifying forest species in areas of forest regeneration in the Amazon. For this purpose, convolutional neural networks (CNNs) were trained using the Keras–TensorFlow package with the faster_rcnn_inception_v2_pets model. Samples of six forest species were used to train the CNN. We then experimented with the detection threshold, the cutoff value applied to the network's confidence output: scores below the threshold are treated as 0 (no identification), and scores above it are treated as 1, i.e., an identified species. The results showed that lowering the threshold decreases identification accuracy and increases the overlap between species-identification polygons. However, in comparison with data collected in the field, we observed a high correlation between the trees identified by the CNN and those observed in the plots. The statistical metrics used to validate the classification results showed that the CNNs are able to identify species with accuracy above 90%. Based on these results, which demonstrate good accuracy and precision in species identification, we conclude that convolutional neural networks are an effective tool for classifying objects in UAV images.
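The thresholding described above, where a detection confidence is binarized against a cutoff, can be sketched in a few lines (the detection format and the labels are purely illustrative, not the paper's data):

```python
def filter_by_threshold(detections, threshold):
    """Keep detections whose confidence meets the cutoff: scores below it are
    treated as 0 (discarded), scores at or above it as identified species."""
    return [(label, score) for label, score in detections if score >= threshold]

# Hypothetical detector output: (species label, confidence score) pairs.
detections = [("species_a", 0.91), ("species_b", 0.62), ("species_b", 0.35)]
```

Lowering the threshold admits more detections, consistent with the reported increase in overlapping identification polygons at lower cutoffs.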


Vehicle classification and license plate detection are important tasks in intelligent security and transportation systems. However, existing methods of vehicle classification and detection are highly complex and provide coarse-grained outcomes because of underfitting or overfitting of the model. Owing to the advanced accomplishments of deep learning, it has been efficiently applied to image classification and object detection. This paper proposes a new approach that makes use of convolutional neural networks. It consists of two steps: i) vehicle classification and ii) vehicle license plate recognition. Numerous classic neural network modules were used in training and testing the vehicle classification and license plate detection model, such as CNNs (convolutional neural networks), TensorFlow, and Tesseract-OCR. The suggested technique can determine the vehicle type, number plate, and other associated data effectively. This model provides security and log details regarding vehicles by using AI surveillance, guiding surveillance operators and assisting human resources. With the help of the original dataset (training) and an enriched dataset (testing), this customized model (algorithm) achieves a standard accuracy of around 97.32% in the classification and detection of vehicles. By enlarging the training dataset, the loss function and mislearning rate decline progressively. Therefore, the proposed deep learning model has better performance and flexibility. When compared to leading techniques on standard image datasets, this deep learning model achieves highly competitive outcomes. Finally, the proposed system suggests methods for advancing the customized model and forecasts the progressive growth of deep learning performance in the exploration of artificial intelligence (AI) and machine learning (ML) techniques.
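The two-step structure described above can be sketched as a pipeline whose classifier, plate detector, and OCR engine are injected components; all three callables below are placeholders, not the authors' models (the paper uses CNNs and Tesseract-OCR):

```python
def vehicle_pipeline(image, classify, locate_plate, ocr):
    """Two-stage pipeline: (i) classify the vehicle type, then (ii) locate
    the license plate region and read its text. The three callables stand in
    for trained models (e.g. a CNN classifier and a Tesseract-OCR wrapper)."""
    vehicle_type = classify(image)
    plate_region = locate_plate(image)
    plate_text = ocr(plate_region) if plate_region is not None else None
    return {"type": vehicle_type, "plate": plate_text}

# Trivial stand-ins, just to exercise the control flow.
result = vehicle_pipeline(
    image=None,
    classify=lambda img: "truck",
    locate_plate=lambda img: "plate-crop",
    ocr=lambda crop: "KA01AB1234",
)
```

Separating the stages this way lets each model be trained and swapped independently, which matches the paper's split between classification and plate recognition.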


2019 ◽  
Vol 10 (1) ◽  
pp. 41
Author(s):  
Yuchen Xin ◽  
Hon-Cheng Wong ◽  
Sio-Long Lo ◽  
Junliang Li

Anime-style comics are popular worldwide and an important industry in Asia. However, the output quantity and quality control of art workers have become the biggest obstacle to industrialization, and it is time-consuming to produce new manga without the help of an intelligent assistive tool. As deep learning techniques have achieved great success in different areas, it is worth exploring them to develop algorithms and systems for computational manga. Extracting line drawings from finished illustrations is one of the main tasks in drawing a manuscript and also a crucial step in the common painting process. However, traditional filters such as Sobel, Laplace, and Canny cannot produce good results and require manual adjustment of their parameters. To address these problems, in this paper we propose progressive full data convolutional neural networks for extracting lines from anime-style illustrations. Experimental results show that our progressive full data convolutional neural networks not only learn as much as processing methods for detailed regions, but also accomplish the target extraction task with only a small training dataset.
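As a reference point for the traditional filters mentioned above, a minimal numpy implementation of the Sobel gradient magnitude (one of the baselines the paper argues against) might look like this; the naive loop is for clarity, not speed:

```python
import numpy as np

def sobel_magnitude(img):
    """Gradient magnitude of a 2D grayscale image via 3x3 Sobel kernels.
    Returns an array two pixels smaller in each dimension (valid convolution)."""
    kx = np.array([[-1, 0, 1], [-2, 0, 2], [-1, 0, 1]], dtype=float)
    ky = kx.T  # vertical-gradient kernel is the transpose
    h, w = img.shape
    gx = np.zeros((h - 2, w - 2))
    gy = np.zeros_like(gx)
    for i in range(h - 2):
        for j in range(w - 2):
            patch = img[i:i + 3, j:j + 3]
            gx[i, j] = np.sum(patch * kx)
            gy[i, j] = np.sum(patch * ky)
    return np.hypot(gx, gy)
```

The output still needs a hand-tuned threshold to become a line drawing, which is exactly the parameter sensitivity the learned approach is meant to avoid.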


Author(s):  
Glen Williams ◽  
Nicholas A. Meisel ◽  
Timothy W. Simpson ◽  
Christopher McComb

Abstract The widespread growth of additive manufacturing (AM), a field with a complex informatic “digital thread”, has helped fuel the creation of design repositories, where multiple users can upload, distribute, and download a variety of candidate designs for a variety of situations. Additionally, advancements in additive manufacturing process development, design frameworks, and simulation are increasing what is possible to fabricate with AM, further growing the richness of such repositories. Machine learning offers new opportunities to combine these design repositories’ rich geometric data with their associated process and performance data to train predictive models capable of automatically assessing build metrics related to AM part manufacturability. Although design repositories that can be used to train these machine learning constructs are expanding, our understanding of what makes a particular design repository useful as a machine learning training dataset is minimal. In this study we use a metamodel to predict the extent to which individual design repositories can train accurate convolutional neural networks. To facilitate the creation and refinement of this metamodel, we constructed a large artificial design repository and subsequently split it into sub-repositories. We then analyzed metadata regarding the size, complexity, and diversity of the sub-repositories for use as independent variables predicting accuracy and the computational effort required to train convolutional neural networks. The networks each predict one of three additive manufacturing build metrics: (1) part mass, (2) support material mass, and (3) build time. Our results suggest that metamodels predicting the convolutional neural network coefficient of determination, as opposed to computational effort, were most accurate.
Moreover, the size of a design repository, the average complexity of its constituent designs, and the average and spread of design spatial diversity were the best predictors of convolutional neural network accuracy.
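A metamodel of the kind described can be sketched as an ordinary least-squares fit from repository metadata (size, mean complexity, diversity statistics) to the CNN's coefficient of determination; the linear form and feature set here are assumptions for illustration, not the paper's actual metamodel:

```python
import numpy as np

def fit_metamodel(metadata, r2_scores):
    """Least-squares linear metamodel: repository metadata -> CNN R^2.
    metadata: (n_repositories, n_features) array; an intercept column is added."""
    X = np.hstack([metadata, np.ones((metadata.shape[0], 1))])
    coef, *_ = np.linalg.lstsq(X, r2_scores, rcond=None)
    return coef

def predict_r2(coef, metadata):
    """Apply a fitted metamodel to new repositories' metadata."""
    X = np.hstack([metadata, np.ones((metadata.shape[0], 1))])
    return X @ coef
```

The fitted coefficients then indicate which metadata features (e.g. repository size or complexity spread) drive predicted network accuracy.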


2021 ◽  
Author(s):  
Blessy Babu ◽  
Hari V Sreeniva

Abstract This paper summarizes the intelligent detection of the modulation scheme of an incoming signal, built on a convolutional neural network (CNN). It describes the creation of the training dataset, the realization of the CNN, and testing and validation. The raw modulated signals are converted into 2D representations and fed to the network for training. The resulting prototype is adopted for detection. The results signify that the intended approach gives better predictions for identifying the modulated signal without the need for any selective feature extraction. The system's performance under noise is also evaluated and modelled.
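Converting a raw 1D modulated signal into a 2D network input can be as simple as framing consecutive sample windows into rows; a minimal numpy sketch (the frame length, and indeed the exact 2D representation the paper uses, are assumptions):

```python
import numpy as np

def signal_to_2d(signal, frame_len=64):
    """Frame a raw 1D signal into a 2D (n_frames x frame_len) array,
    truncating trailing samples that do not fill a complete frame.
    The result can be fed to a CNN like a single-channel image."""
    signal = np.asarray(signal)
    n_frames = len(signal) // frame_len
    return signal[:n_frames * frame_len].reshape(n_frames, frame_len)
```

Other common choices for the 2D representation include spectrograms or stacked I/Q channels; the paper does not specify, so this framing is only one possibility.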


2019 ◽  
Vol 24 (3-4) ◽  
pp. 107-113
Author(s):  
Kondratiuk S.S. ◽  

The technology, implemented with cross-platform tools, is proposed for modeling gesture units of sign language and animating between states of gesture units in combinations of gestures (words). The implemented technology simulates sequences of gestures using a virtual spatial hand model and recognizes dactyl items from camera input using a convolutional neural network trained on a collected training dataset. By using cross-platform means, the technology achieves the ability to run on multiple platforms without being re-implemented for each platform.


2019 ◽  
Vol 24 (1-2) ◽  
pp. 94-100
Author(s):  
Kondratiuk S.S. ◽  

The technology, implemented with cross-platform tools, is proposed for modeling gesture units of sign language and animating between states of gesture units in combinations of gestures (words). The implemented technology simulates sequences of gestures using a virtual spatial hand model and recognizes dactyl items from camera input using a convolutional neural network trained on a collected training dataset, based on the MobileNetV3 architecture with an optimal configuration of layers and network parameters. On the collected test dataset, accuracy of over 98% is achieved.


PLoS ONE ◽  
2020 ◽  
Vol 15 (11) ◽  
pp. e0242013
Author(s):  
Hongyu Wang ◽  
Hong Gu ◽  
Pan Qin ◽  
Jia Wang

Background Pneumothorax can lead to a life-threatening emergency. Experienced radiologists can offer a precise diagnosis from chest radiographs. Localization of pneumothorax lesions helps speed up diagnosis, which will benefit patients in underdeveloped areas that lack experienced radiologists. In recent years, with the development of large neural network architectures and medical imaging datasets, deep learning methods have become a methodology of choice for analyzing medical images. The objective of this study was to construct convolutional neural networks to localize pneumothorax lesions in chest radiographs. Methods and findings We developed a convolutional neural network, called CheXLocNet, for the segmentation of pneumothorax lesions. The SIIM-ACR Pneumothorax Segmentation dataset was used to train and validate CheXLocNets. The training dataset contained 2079 radiographs with annotated lesion areas. We trained six CheXLocNets with various hyperparameters. Another 300 annotated radiographs were used as the validation set to select the parameters of these CheXLocNets. We determined the optimal parameters by AP50 (average precision at an intersection over union (IoU) of 0.50), a segmentation evaluation metric used by several well-known competitions. CheXLocNets were then evaluated on a test set (1082 normal radiographs and 290 disease radiographs) using the classification metrics area under the receiver operating characteristic curve (AUC), sensitivity, specificity, and positive predictive value (PPV), and the segmentation metrics IoU and Dice score. For the classification, the CheXLocNet with the best sensitivity produced an AUC of 0.87, sensitivity of 0.78 (95% CI 0.73-0.83), and specificity of 0.78 (95% CI 0.76-0.81). The CheXLocNet with the best specificity produced an AUC of 0.79, sensitivity of 0.46 (95% CI 0.40-0.52), and specificity of 0.92 (95% CI 0.90-0.94).
For the segmentation, the CheXLocNet with the best sensitivity produced an IoU of 0.69 and a Dice score of 0.72. The CheXLocNet with the best specificity produced an IoU of 0.77 and a Dice score of 0.79. We combined them to form an ensemble CheXLocNet, which produced an IoU of 0.81 and a Dice score of 0.82. Our CheXLocNet succeeded in automatically detecting pneumothorax lesions without any human guidance. Conclusions In this study, we proposed a deep learning network, called CheXLocNet, for the automatic segmentation of chest radiographs to detect pneumothorax. Our CheXLocNets generated accurate classification results and high-quality segmentation masks for pneumothorax at the same time. This technology has the potential to improve healthcare delivery and increase access to chest radiograph expertise for the detection of diseases. Furthermore, the segmentation results offer comprehensive geometric information about lesions, which can benefit monitoring the sequential development of lesions with high accuracy. Thus, CheXLocNets can be further extended into a reliable clinical decision support tool. Although we used transfer learning in training CheXLocNet, the number of parameters of CheXLocNet was still large for the radiograph dataset. Further work is necessary to prune CheXLocNet to a size suitable for the radiograph dataset.
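The IoU and Dice scores reported above are straightforward to compute from binary segmentation masks; a minimal numpy sketch (the convention of scoring 1.0 when both masks are empty is an assumption, not taken from the paper):

```python
import numpy as np

def iou_score(pred, target):
    """Intersection-over-union of two boolean segmentation masks."""
    inter = np.logical_and(pred, target).sum()
    union = np.logical_or(pred, target).sum()
    return inter / union if union > 0 else 1.0

def dice_score(pred, target):
    """Dice coefficient: 2|A ∩ B| / (|A| + |B|) for boolean masks."""
    inter = np.logical_and(pred, target).sum()
    total = pred.sum() + target.sum()
    return 2 * inter / total if total > 0 else 1.0
```

Dice weights the intersection more heavily than IoU, so for the same pair of masks the Dice score is always at least as large, matching the pattern in the reported figures.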

