Estimation of Corn Emergence Date Using UAV Imagery

2021 ◽  
Vol 64 (4) ◽  
pp. 1173-1183
Author(s):  
Chin Nee Vong ◽  
Stirling A. Stewart ◽  
Jianfeng Zhou ◽  
Newell R. Kitchen ◽  
Kenneth A. Sudduth

Highlights:
- UAV imagery can be used to characterize newly-emerged corn plants.
- Size and shape features used in a random forest model are able to predict days after emergence within a 3-day window.
- Diameter and area were important size features for predicting DAE during the first, second, and third weeks after emergence.

Abstract. Assessing corn (Zea mays L.) emergence uniformity soon after planting is important because it relates to grain production and informs replanting decisions. Unmanned aerial vehicle (UAV) imagery has been used to determine corn densities at vegetative growth stage 2 (V2) and later, but not as a tool for quantifying emergence date. The objective of this study was to estimate days after corn emergence (DAE) using UAV imagery and a machine learning method. A field experiment was designed with four planting depths to obtain a range of corn emergence dates. UAV imagery was collected during the first, second, and third weeks after emergence. Acquisition height was approximately 5 m above ground level, which resulted in a ground sampling distance of 1.5 mm pixel^-1. Seedling size and shape features derived from the UAV imagery were used for DAE classification with a random forest machine learning model. Results showed that 1-day DAE could be distinguished from image features within the first week after initial corn emergence, with a moderate overall classification accuracy of 0.49. For the second week and beyond, however, the overall classification accuracy diminished (0.20 to 0.35). When estimating DAE within a 3-day window (-1 to +1 day), the overall 3-day classification accuracies ranged from 0.54 to 0.88. Diameter, area, and the ratio of major axis length to area were important image features for predicting corn DAE. The findings demonstrate that UAV imagery can detect newly-emerged corn plants and estimate their emergence date to assist in assessing emergence uniformity.
Additional studies are needed to fine-tune the image collection procedures and image feature identification to improve accuracy.

Keywords: Corn emergence, Image features, Random forest, Unmanned aerial vehicle.
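As a rough illustration of the modeling step this abstract describes, the sketch below trains a random forest classifier on synthetic seedling features (diameter, area, and the major-axis-length-to-area ratio the authors report as important) and scores both exact-day and 3-day-window accuracy. All data values and generating relationships are invented for illustration; this is not the study's dataset or pipeline.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

# Synthetic seedling measurements standing in for the image-derived
# features: diameter and area grow with days after emergence (DAE),
# while the axis-length-to-area ratio shrinks. All made up.
rng = np.random.default_rng(0)
n = 600
dae = rng.integers(1, 8, size=n)                     # DAE within week one
diameter = 2.0 + 1.5 * dae + rng.normal(0, 2.0, n)
area = 5.0 + 4.0 * dae + rng.normal(0, 5.0, n)
axis_ratio = 0.8 / (1 + 0.1 * dae) + rng.normal(0, 0.05, n)
X = np.column_stack([diameter, area, axis_ratio])

X_tr, X_te, y_tr, y_te = train_test_split(X, dae, random_state=0)
model = RandomForestClassifier(n_estimators=200, random_state=0).fit(X_tr, y_tr)
pred = model.predict(X_te)

exact_acc = np.mean(pred == y_te)                    # exact-day accuracy
window_acc = np.mean(np.abs(pred - y_te) <= 1)       # -1 to +1 day window
print(f"exact-day accuracy: {exact_acc:.2f}, 3-day window: {window_acc:.2f}")
```

The window accuracy is always at least the exact-day accuracy, which mirrors the abstract's finding that tolerating a one-day error on either side raises overall accuracy substantially.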

Author(s):  
Serkan Biçici ◽  
Mustafa Zeybek

The accuracy of random forest (RF) classification depends on several inputs. In this study, two primary inputs, the training sample and the features, are evaluated for road classification from an unmanned aerial vehicle-based point cloud. Training sample selection is a challenging step because the learning stage of RF classification depends heavily on it: an imbalanced training sample can dramatically decrease classification accuracy. Various criteria are defined to generate different types of training samples and evaluate their effectiveness. Several point features can be used in RF classification under different circumstances. More features may increase classification accuracy, but they also increase processing time. Point features such as RGB (red/green/blue), surface normals, curvature, omnivariance, planarity, linearity, surface variance, anisotropy, verticality, and ground/non-ground class are investigated in this study. Different training samples and sets of features are used in the RF model to extract the road surface. The experiment is conducted on a local road without a raised curb, located on a relatively steep hill. Accuracy is assessed by comparing the model's classification results with a manually extracted road surface point cloud. Accuracy improved by roughly 4%-13%, and 95% overall accuracy was obtained when suitable training samples and features were used.
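A minimal sketch of the training-sample comparison described above: it builds synthetic "road vs. other" points from a few geometric features (the names follow the abstract; the distributions are invented) and fits an RF classifier with a balanced and a heavily imbalanced training sample. On this easy synthetic data the imbalance effect may be mild; on real point clouds it is typically much larger.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier

# Synthetic "road vs. other" points with three geometric features
# (planarity, verticality, height). Feature names follow the abstract,
# but all values and distributions are made up for illustration.
rng = np.random.default_rng(1)

def make_points(n_road, n_other):
    road = np.column_stack([rng.normal(0.9, 0.05, n_road),    # planarity
                            rng.normal(0.1, 0.05, n_road),    # verticality
                            rng.normal(0.0, 0.10, n_road)])   # height
    other = np.column_stack([rng.normal(0.5, 0.20, n_other),
                             rng.normal(0.5, 0.20, n_other),
                             rng.normal(1.0, 0.50, n_other)])
    return np.vstack([road, other]), np.array([1] * n_road + [0] * n_other)

X_te, y_te = make_points(500, 500)

accs = {}
for n_road, n_other in [(300, 300), (20, 580)]:   # balanced vs. imbalanced
    X_tr, y_tr = make_points(n_road, n_other)
    clf = RandomForestClassifier(n_estimators=100, random_state=0).fit(X_tr, y_tr)
    accs[(n_road, n_other)] = clf.score(X_te, y_te)
    print(f"road={n_road}, other={n_other}: accuracy={accs[(n_road, n_other)]:.3f}")
```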


2021 ◽  
Vol 11 (13) ◽  
pp. 6237
Author(s):  
Azharul Islam ◽  
KyungHi Chang

Unstructured data from the internet constitute a large source of information that needs to be formatted in a user-friendly way. This research develops a model that classifies unstructured data from data mining into labeled data and builds an informational and decision-making support system (DMSS). Information is often collected by mining data from various sources, and the key challenge is extracting the valuable parts. We observe substantial classification accuracy enhancement for our datasets with both machine learning and deep learning algorithms. The highest classification accuracy (99% in training, 96% in testing) was achieved on a COVID corpus processed using a long short-term memory (LSTM) network. Furthermore, tests on large datasets from the Disaster corpus yielded an LSTM classification accuracy of 98%. In addition, random forest (RF), a machine learning algorithm, provides a reasonable 84% accuracy. The main objective of this research is to increase the application's robustness by integrating intelligence into the developed DMSS, which provides insight into the user's intent despite dealing with a noisy dataset. The designed model compares the F1 scores of the random forest and stochastic gradient descent (SGD) algorithms; the RF method outperforms, improving accuracy by 2% (from 81% to 83%) compared with a conventional method.
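The RF-versus-SGD comparison can be sketched with a toy text pipeline: TF-IDF features over two invented topics that share some vocabulary ("report"), mimicking a noisy dataset. The words, labels, and scores here are illustrative only; the study's corpora are far larger and messier.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import SGDClassifier
from sklearn.metrics import f1_score
from sklearn.model_selection import train_test_split

# Two toy topics with one shared word, standing in for the COVID and
# Disaster corpora. Everything here is invented for illustration.
rng = np.random.default_rng(2)
covid_words = ["virus", "vaccine", "mask", "quarantine", "symptom", "report"]
disaster_words = ["flood", "earthquake", "rescue", "damage", "storm", "report"]

docs = [" ".join(rng.choice(covid_words, size=8)) for _ in range(200)] + \
       [" ".join(rng.choice(disaster_words, size=8)) for _ in range(200)]
labels = [0] * 200 + [1] * 200

X = TfidfVectorizer().fit_transform(docs)
X_tr, X_te, y_tr, y_te = train_test_split(X, labels, random_state=0)

# Compare the two classifiers by F1 score, as the abstract describes.
scores = {}
for name, clf in [("RF", RandomForestClassifier(random_state=0)),
                  ("SGD", SGDClassifier(random_state=0))]:
    clf.fit(X_tr, y_tr)
    scores[name] = f1_score(y_te, clf.predict(X_te))
    print(f"{name} F1: {scores[name]:.3f}")
```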


2021 ◽  
Vol 87 (10) ◽  
pp. 747-758
Author(s):  
Toshihiro Sakamoto

An early-season crop classification method is required in a near-real-time crop-yield prediction system, especially for upland crops. This study proposes methods to estimate the mixed-pixel ratio of corn, soybean, and other classes within a low-resolution MODIS pixel by coupling MODIS-derived crop phenology information with the past Cropland Data Layer in a random-forest regression algorithm. Classification accuracy was verified for the Midwestern United States. The following conclusions are drawn: the random-forest algorithm is effective in estimating the mixed-pixel ratio, which leads to stable classification accuracy; fusing historical data with MODIS-derived crop phenology information provides much better crop classification accuracy than using either individually; and inputting a longer MODIS data period can improve classification accuracy, especially after day of year 279, because of improved estimation accuracy for the soybean emergence date.
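The regression setup above can be sketched as follows: a random-forest regressor predicts the corn fraction of a coarse pixel from a phenology metric and last year's Cropland Data Layer fraction. The feature names and the generating relationship are assumptions made up for this sketch, not the study's actual inputs.

```python
import numpy as np
from sklearn.ensemble import RandomForestRegressor
from sklearn.metrics import r2_score
from sklearn.model_selection import train_test_split

# Synthetic MODIS-scale pixels: a green-up timing metric plus last
# year's corn fraction. The true relationship below is invented.
rng = np.random.default_rng(3)
n = 1000
cdl_corn_prev = rng.uniform(0, 1, n)          # historical corn fraction
greenup_doy = rng.normal(140, 10, n)          # phenology metric (day of year)
corn_frac = np.clip(0.8 * cdl_corn_prev
                    - 0.004 * (greenup_doy - 140)
                    + rng.normal(0, 0.05, n), 0, 1)

X = np.column_stack([cdl_corn_prev, greenup_doy])
X_tr, X_te, y_tr, y_te = train_test_split(X, corn_frac, random_state=0)
rfr = RandomForestRegressor(n_estimators=200, random_state=0).fit(X_tr, y_tr)
r2 = r2_score(y_te, rfr.predict(X_te))
print(f"R^2 on held-out pixels: {r2:.2f}")
```

Regressing a continuous fraction, rather than assigning each pixel a single class, is what lets the method handle mixed pixels at MODIS resolution.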


Author(s):  
MUHAMMAD EFAN ABDULFATTAH ◽  
LEDYA NOVAMIZANTI ◽  
SYAMSUL RIZAL

Abstract. Disasters in Indonesia are dominated by hydrometeorological disasters, which cause large-scale damage. Through mapping, comprehensive handling can be carried out to support analysis and subsequent action. An unmanned aerial vehicle (UAV) can be used as an aerial mapping tool. However, when the camera or image-processing hardware does not meet specifications, the results are less informative. This research proposes super resolution for aerial imagery based on a convolutional neural network (CNN) using the DCSCN model. The model consists of a Feature Extraction Network for extracting image features and a Reconstruction Network for reconstructing the image. DCSCN's performance is compared with the Super Resolution CNN (SRCNN). Experiments were carried out on the Set5 dataset with scale factors of 2, 3, and 4, for which SRCNN produced PSNR/SSIM values of 36.66 dB / 0.9542, 32.75 dB / 0.9090, and 30.49 dB / 0.8628, respectively. DCSCN improved these to 37.61 dB / 0.9588, 33.86 dB / 0.9225, and 31.48 dB / 0.8851.

Keywords: aerial imagery, deep learning, super resolution
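The PSNR figures quoted above are computed from the mean squared error between the reference and reconstructed images. A minimal implementation, evaluated here on a synthetic noisy image rather than actual super-resolution output:

```python
import numpy as np

def psnr(reference, reconstructed, peak=255.0):
    """Peak signal-to-noise ratio in dB, for 8-bit images by default."""
    mse = np.mean((reference.astype(np.float64)
                   - reconstructed.astype(np.float64)) ** 2)
    return 10 * np.log10(peak ** 2 / mse)

# Synthetic example: a random 8-bit image degraded with Gaussian noise.
rng = np.random.default_rng(4)
ref = rng.integers(0, 256, size=(64, 64)).astype(np.float64)
noisy = np.clip(ref + rng.normal(0, 5, ref.shape), 0, 255)
val = psnr(ref, noisy)
print(f"PSNR: {val:.2f} dB")
```

Higher PSNR means a closer match to the reference, which is why DCSCN's gain of roughly 1 dB over SRCNN at each scale factor indicates better reconstruction. SSIM is a separate, structure-aware metric and is not reimplemented here.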


2020 ◽  
Vol 12 (9) ◽  
pp. 1357 ◽  
Author(s):  
Maitiniyazi Maimaitijiang ◽  
Vasit Sagan ◽  
Paheding Sidike ◽  
Ahmad M. Daloye ◽  
Hasanjan Erkbol ◽  
...  

Non-destructive crop monitoring over large areas with high efficiency is of great significance in precision agriculture and plant phenotyping, as well as in decision making regarding grain policy and food security. The goal of this research was to assess the potential of combining canopy spectral information with canopy structure features for crop monitoring using satellite/unmanned aerial vehicle (UAV) data fusion and machine learning. Worldview-2/3 satellite data acquisition was tasked to synchronize with high-resolution RGB image collection by an inexpensive UAV over a heterogeneous soybean (Glycine max (L.) Merr.) field. Canopy spectral information (i.e., vegetation indices) was extracted from the Worldview-2/3 data, and canopy structure information (i.e., canopy height and canopy cover) was derived from the UAV RGB imagery. Canopy spectral and structure information, and their combination, were used to predict soybean leaf area index (LAI), aboveground biomass (AGB), and leaf nitrogen concentration (N) using partial least squares regression (PLSR), random forest regression (RFR), support vector regression (SVR), and extreme learning regression (ELR) with a newly proposed activation function.
The results revealed that: (1) UAV imagery-derived high-resolution canopy structure features, canopy height and canopy coverage, were significant indicators for crop growth monitoring; (2) integrating rich satellite-based canopy spectral information with UAV-derived canopy structural features through machine learning improved soybean AGB, LAI, and leaf N estimation compared with using satellite or UAV data alone; (3) adding canopy structure information to the spectral features reduced the background soil effect and the asymptotic saturation issue to some extent and led to better model performance; and (4) the ELR model with the newly proposed activation function slightly outperformed PLSR, RFR, and SVR in predicting AGB and LAI, while RFR provided the best result for N estimation. This study highlights the opportunities and limitations of satellite/UAV data fusion using machine learning in the context of crop monitoring.
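The saturation argument in finding (3) can be demonstrated with a toy fusion experiment: a vegetation index that flattens out at high biomass is augmented with a structural feature (canopy height) that keeps growing, and a random forest regressor is scored with and without the structural feature. All values and functional forms are invented; RFR stands in for the study's several regressors.

```python
import numpy as np
from sklearn.ensemble import RandomForestRegressor
from sklearn.metrics import r2_score
from sklearn.model_selection import train_test_split

# Synthetic plots: the vegetation index saturates as biomass grows,
# while canopy height stays roughly linear, so the structural feature
# carries the extra signal at high biomass. All values are illustrative.
rng = np.random.default_rng(5)
n = 800
biomass = rng.uniform(0.5, 10.0, n)                  # AGB, arbitrary units
vi = 0.9 * (1 - np.exp(-0.6 * biomass)) + rng.normal(0, 0.02, n)
height = 0.1 * biomass + rng.normal(0, 0.05, n)

scores = {}
for name, X in [("spectral only", vi.reshape(-1, 1)),
                ("spectral + structure", np.column_stack([vi, height]))]:
    X_tr, X_te, y_tr, y_te = train_test_split(X, biomass, random_state=0)
    m = RandomForestRegressor(n_estimators=200, random_state=0).fit(X_tr, y_tr)
    scores[name] = r2_score(y_te, m.predict(X_te))
    print(f"{name}: R^2 = {scores[name]:.2f}")
```

Because the index is nearly flat above moderate biomass, spectral-only predictions degrade there, and the fused model recovers most of the lost accuracy, echoing the abstract's conclusion.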


2020 ◽  
Vol 12 (2) ◽  
pp. 215 ◽  
Author(s):  
Hainie Zha ◽  
Yuxin Miao ◽  
Tiantian Wang ◽  
Yue Li ◽  
Jing Zhang ◽  
...  

Optimizing nitrogen (N) management in rice is crucial for China’s food security and sustainable agricultural development. Non-destructive crop growth monitoring based on remote sensing technologies can accurately assess crop N status, which may be used to guide in-season site-specific N recommendations. Fixed-wing unmanned aerial vehicle (UAV)-based remote sensing is a low-cost, easy-to-operate technology for collecting spectral reflectance imagery, an important data source for precision N management. The relationships between many vegetation indices (VIs) derived from spectral reflectance data and crop parameters are known to be nonlinear. As a result, nonlinear machine learning methods have the potential to improve estimation accuracy. The objective of this study was to evaluate five different approaches for estimating rice (Oryza sativa L.) aboveground biomass (AGB), plant N uptake (PNU), and N nutrition index (NNI) at the stem elongation (SE) and heading (HD) stages in Northeast China: (1) single VI (SVI); (2) stepwise multiple linear regression (SMLR); (3) random forest (RF); (4) support vector machine (SVM); and (5) artificial neural network (ANN) regression. The results indicated that the machine learning methods improved NNI estimation compared with the SVI and SMLR methods. The RF algorithm performed best for estimating NNI (R2 = 0.94 (SE) and 0.96 (HD) for calibration, and 0.61 (SE) and 0.79 (HD) for validation). The root mean square errors (RMSEs) were 0.09, and the relative errors were <10% in all the models. It is concluded that RF machine learning regression can significantly improve the estimation of rice N status using UAV remote sensing. The application of machine learning methods offers a new opportunity to better use remote sensing data for monitoring crop growth conditions and guiding precision crop management.
More studies are needed to further improve these machine learning-based models by combining remote sensing data with related soil, weather, and management information for applications in precision N and crop management.
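The RMSE and relative-error figures reported above are standard regression diagnostics. A short sketch of how they are computed, using invented NNI values (an NNI near 1 typically indicates optimal N status), not the study's data:

```python
import numpy as np

def rmse(obs, pred):
    """Root mean square error between observed and predicted values."""
    return float(np.sqrt(np.mean((np.asarray(obs) - np.asarray(pred)) ** 2)))

def relative_error(obs, pred):
    """RMSE expressed as a percentage of the observed mean."""
    return 100.0 * rmse(obs, pred) / float(np.mean(obs))

# Illustrative NNI values only; not from the study.
observed  = np.array([0.85, 0.95, 1.05, 1.10, 0.90, 1.00])
predicted = np.array([0.88, 0.93, 1.00, 1.15, 0.92, 0.98])

print(f"RMSE = {rmse(observed, predicted):.3f}")
print(f"relative error = {relative_error(observed, predicted):.1f}%")
```

An RMSE of 0.09 on an index centered near 1 corresponds to a relative error of roughly 9%, which is how the abstract's two error figures fit together.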

