Use of Unmanned Aerial Vehicle for Multi-temporal Monitoring of Soybean Vegetation Fraction

2016 ◽  
Vol 41 (2) ◽  
pp. 126-137 ◽  
Author(s):  
Hee Sup Yun ◽  
Soo Hyun Park ◽  
Hak-Jin Kim ◽  
Wonsuk Daniel Lee ◽  
Kyung Do Lee ◽  
...  
2016 ◽  
Vol 8 (5) ◽  
pp. 416 ◽  
Author(s):  
Shenghui Fang ◽  
Wenchao Tang ◽  
Yi Peng ◽  
Yan Gong ◽  
Can Dai ◽  
...  

Agriculture ◽  
2018 ◽  
Vol 8 (5) ◽  
pp. 65 ◽  
Author(s):  
Robin Mink ◽  
Avishek Dutta ◽  
Gerassimos Peteinatos ◽  
Markus Sökefeld ◽  
Johannes Engels ◽  
...  

2015 ◽  
Vol 132 ◽  
pp. 19-27 ◽  
Author(s):  
Francisco Agüera Vega ◽  
Fernando Carvajal Ramírez ◽  
Mónica Pérez Saiz ◽  
Francisco Orgaz Rosúa

2020 ◽  
Vol 12 (10) ◽  
pp. 1668 ◽  
Author(s):  
Quanlong Feng ◽  
Jianyu Yang ◽  
Yiming Liu ◽  
Cong Ou ◽  
Dehai Zhu ◽  
...  

Vegetable mapping from remote sensing imagery is important for precision agricultural activities such as automated pesticide spraying. Multi-temporal unmanned aerial vehicle (UAV) data combines very high spatial resolution with useful phenological information, showing great potential for accurate vegetable classification, especially in complex and fragmented agricultural landscapes. In this study, an attention-based recurrent convolutional neural network (ARCNN) is proposed for accurate vegetable mapping from multi-temporal UAV red-green-blue (RGB) imagery. The proposed model first uses a multi-scale deformable CNN to learn and extract rich spatial features from UAV data. The extracted features are then fed into an attention-based recurrent neural network (RNN), which establishes the sequential dependency between multi-temporal features. Finally, the aggregated spatial-temporal features are used to predict the vegetable category. Experimental results show that the proposed ARCNN yields a high performance with an overall accuracy of 92.80%. Compared with mono-temporal classification, incorporating multi-temporal UAV imagery boosts accuracy by 24.49% on average, which supports the hypothesis that the low spectral resolution of RGB imagery can be compensated by including multi-temporal observations. In addition, the attention-based RNN in this study outperforms other feature fusion methods such as feature stacking, and the deformable convolution operation yields higher classification accuracy than a standard convolution unit. Results demonstrate that the ARCNN provides an effective way to extract and aggregate discriminative spatial-temporal features for vegetable mapping from multi-temporal UAV RGB imagery.
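The core fusion step the abstract describes — weighting per-date features with attention before aggregating them — can be sketched in a few lines. This is a minimal illustration, not the paper's architecture: the scoring function here is a simple mean activation standing in for the learned attention network, and the feature vectors are hypothetical placeholders for the CNN outputs.

```python
import math

def attention_fuse(features):
    """Fuse per-date feature vectors with softmax attention weights.

    `features` is a list of T vectors, one per UAV acquisition date.
    Each date gets a scalar score (here: its mean activation, a
    stand-in for a learned scoring network); softmax turns the scores
    into weights, and the weighted sum is the aggregated
    spatial-temporal feature used for classification.
    """
    scores = [sum(f) / len(f) for f in features]
    m = max(scores)
    exps = [math.exp(s - m) for s in scores]   # numerically stable softmax
    total = sum(exps)
    weights = [e / total for e in exps]
    dim = len(features[0])
    return [sum(w * f[i] for w, f in zip(weights, features))
            for i in range(dim)]

# Three acquisition dates, 4-D features: the third date has the
# strongest activations, so it receives the largest attention weight.
fused = attention_fuse([[1.0, 0.0, 0.0, 0.0],
                        [0.0, 1.0, 0.0, 0.0],
                        [0.0, 0.0, 1.0, 1.0]])
```

The fused vector keeps the feature dimensionality while collapsing the time axis, which is what lets a single classifier head consume an arbitrary number of acquisition dates.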


Author(s):  
Du Mengmeng ◽  
◽  
Noguchi Noboru ◽  
Itoh Atsushi ◽  
Shibuya Yukinori ◽  
...  

2019 ◽  
pp. 1-17 ◽  
Author(s):  
Jordan A. Carey ◽  
Nicholas Pinter ◽  
Alexandra J. Pickering ◽  
Carol S. Prentice ◽  
Stephen B. Delong

Abstract The combination of unmanned aerial vehicle (UAV) photography with structure-from-motion (SfM) digital photogrammetry provides a quickly deployable and cost-effective method for monitoring geomorphic change, particularly for hazards such as landslides. The Scenic Drive landslide is a deep-seated slope failure in La Honda, CA, with episodic activity in 1998 and 2005–06. Heavy rainfall during 2016–17 initiated movement of a new and separate landslide directly upslope of the existing Scenic Drive landslide, damaging three residences. We acquired imagery of the Upper Scenic Drive landslide beginning 2 days after initial motion using a global positioning system–enabled UAV. We used this imagery to generate seven digital elevation models (DEMs) between January and May 2017, with spatial resolutions of ∼3–10 cm/pixel. We compared these DEMs with each other and with available light detection and ranging (LiDAR) data to assess landslide kinematics, including horizontal displacement vectors, rates of motion, and total mass redistribution, and to test the accuracy and applicability of UAV/SfM-derived measurements. We estimated that the maximum horizontal displacement of the slide was at least 5 m during the monitoring period and calculated that ∼3,000 m3 of material was displaced by the landslide. Comparing the UAV-derived topography with synchronous terrestrial LiDAR scanning showed that the accuracies of the two techniques are comparable, generally within 0.05 m horizontally and within 0.20 m vertically in unvegetated areas. This study demonstrates the capability of combining UAV and SfM to map and monitor active geomorphic processes in emergent situations where high-resolution digital topography is needed in near-real time.
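The mass-redistribution estimate in the abstract (∼3,000 m³ displaced) rests on DEM-of-difference arithmetic: subtract two co-registered elevation grids and multiply each cell's elevation change by the cell area. A minimal sketch of that calculation, with made-up toy grids and no georeferencing or co-registration details:

```python
def volume_change(dem_before, dem_after, cell_size):
    """Estimate displaced volume from two co-registered DEM grids.

    Each DEM is a 2-D grid of elevations (m) on the same footprint;
    `cell_size` is the pixel edge length (m). Returns (gain, loss):
    total deposition and erosion volumes in cubic metres.
    """
    cell_area = cell_size * cell_size
    gain = loss = 0.0
    for row_before, row_after in zip(dem_before, dem_after):
        for z_before, z_after in zip(row_before, row_after):
            dz = z_after - z_before
            if dz > 0:
                gain += dz * cell_area      # material deposited here
            else:
                loss += -dz * cell_area     # material removed here
    return gain, loss

# Toy 2x2 grids with 2 m pixels: one cell rises 0.5 m, one drops 1.0 m.
before = [[10.0, 10.0], [10.0, 10.0]]
after  = [[10.5, 10.0], [9.0, 10.0]]
gain, loss = volume_change(before, after, cell_size=2.0)
# gain = 0.5 m x 4 m^2 = 2.0 m^3; loss = 1.0 m x 4 m^2 = 4.0 m^3
```

In practice the vertical uncertainty quoted in the study (∼0.20 m) would be applied as a threshold before summing, so that cells with |dz| below the detection limit do not contribute spurious volume.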

