Super resolution-assisted deep aerial vehicle detection

Author(s):  
Syeda Nyma Ferdous ◽  
Moktari Mostofa ◽  
Nasser Nasrabadi


Author(s):  
MUHAMMAD EFAN ABDULFATTAH ◽  
LEDYA NOVAMIZANTI ◽  
SYAMSUL RIZAL

ABSTRACT
Disasters in Indonesia are dominated by hydrometeorological disasters, which cause large-scale damage. Through mapping, comprehensive handling can be carried out to support analysis and subsequent action. An Unmanned Aerial Vehicle (UAV) can be used as an aerial mapping tool. However, because the camera and the image-processing hardware often do not meet specifications, the results are less informative. This research proposes Super Resolution for aerial imagery based on a Convolutional Neural Network (CNN) with the DCSCN model. The model consists of a Feature Extraction Network for extracting image features and a Reconstruction Network for reconstructing the image. DCSCN's performance is compared with the Super Resolution CNN (SRCNN). Experiments were carried out on the Set5 dataset with scale factors of 2, 3, and 4. At these scale factors, SRCNN produced PSNR/SSIM values of 36.66 dB / 0.9542, 32.75 dB / 0.9090, and 30.49 dB / 0.8628, respectively. DCSCN improved these to 37.614 dB / 0.9588, 33.86 dB / 0.9225, and 31.48 dB / 0.8851.
Keywords: aerial imagery, deep learning, super resolution
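PSNR, the fidelity metric reported in the abstract above, is just a log-scaled mean squared error between the reference and reconstructed images. A minimal pure-Python sketch (the 2×2 toy patch and 8-bit intensity range are illustrative assumptions, not the paper's data):

```python
import math

def psnr(ref, test, max_val=255.0):
    """Peak signal-to-noise ratio between two equally sized images,
    given as flat lists of pixel intensities (8-bit range assumed)."""
    mse = sum((r - t) ** 2 for r, t in zip(ref, test)) / len(ref)
    if mse == 0:
        return float("inf")  # identical images: PSNR is unbounded
    return 10 * math.log10(max_val ** 2 / mse)

# Toy example: a 2x2 reference patch and a slightly distorted version.
ref = [52, 55, 61, 59]
sr = [53, 55, 60, 59]
print(round(psnr(ref, sr), 2))  # → 51.14
```

Higher PSNR means the super-resolved image is closer to the ground-truth high-resolution image, which is how the DCSCN vs. SRCNN comparison is scored.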


2020 ◽  
pp. 1351010X2091785
Author(s):  
Gino Iannace ◽  
Giuseppe Ciaburro ◽  
Amelia Trematerra

In this study, data obtained from acoustic measurements were used to train a model based on logistic regression to detect a quadrotor vehicle in an indoor environment. To simulate a real environment, sound recordings were made in a shopping center. Sounds for two scenarios were recorded: anthropic noise only, and anthropic noise with background music. These sounds were later reproduced in an indoor environment of the same size and characteristics as the shopping center. During the simulation test, a drone placed at different distances from the sound level meter was operated at different speeds to identify its presence in complex acoustic scenarios. These measurements were then used to implement a logistic-regression model for automatic detection of the unmanned aerial vehicle. Logistic regression is widely used in pattern recognition with a binary dependent variable. The model achieves a high accuracy (0.994), indicating a high number of correct detections. The results obtained in this study suggest this tool is suitable for unmanned aerial vehicle detection applications.
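The detector described above can be sketched as a one-feature logistic regression fitted by gradient descent. Everything below is hypothetical: the sound-level feature, the training values, and the decision threshold are stand-ins for the study's actual acoustic measurements.

```python
import math

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

def train_logistic(xs, ys, lr=0.1, epochs=2000):
    """Fit w, b for P(drone | x) = sigmoid(w*x + b) by batch gradient
    descent on the cross-entropy loss."""
    w, b = 0.0, 0.0
    n = len(xs)
    for _ in range(epochs):
        gw = gb = 0.0
        for x, y in zip(xs, ys):
            err = sigmoid(w * x + b) - y  # prediction error drives both gradients
            gw += err * x
            gb += err
        w -= lr * gw / n
        b -= lr * gb / n
    return w, b

# Hypothetical 1-D feature: measured sound level (dB) in a noisy hall;
# label 1 means the drone was switched on, 0 means background only.
xs = [62, 64, 65, 66, 68, 71, 73, 74, 76, 78]
ys = [0, 0, 0, 0, 0, 1, 1, 1, 1, 1]
# Centre the feature so gradient descent converges quickly.
mean = sum(xs) / len(xs)
xc = [x - mean for x in xs]
w, b = train_logistic(xc, ys)
preds = [1 if sigmoid(w * x + b) >= 0.5 else 0 for x in xc]
accuracy = sum(p == y for p, y in zip(preds, ys)) / len(ys)
print(accuracy)  # separable toy data → 1.0
```

In practice the study would feed richer acoustic features than a single level value, but the binary decision rule (probability thresholded at 0.5) is the same.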


2019 ◽  
Vol 11 (14) ◽  
pp. 1708 ◽  
Author(s):  
Shuang Cao ◽  
Yongtao Yu ◽  
Haiyan Guan ◽  
Daifeng Peng ◽  
Wanqian Yan

Vehicle detection from remote sensing images plays a significant role in transportation-related applications. However, scale variations, orientation variations, illumination variations, and partial occlusions of vehicles, as well as varying image quality, pose great challenges to accurate vehicle detection. In this paper, we present an affine-function transformation-based object matching framework for vehicle detection from unmanned aerial vehicle (UAV) images. First, meaningful, non-redundant patches are generated through a superpixel segmentation strategy. Then, the affine-function transformation-based object matching framework is applied to a vehicle template and each patch to estimate vehicle existence. Finally, vehicles are detected and located after matching-cost thresholding, vehicle location estimation, and multiple-response elimination. Quantitative evaluations on two UAV image datasets show that the proposed method achieves an average completeness, correctness, quality, and F1-measure of 0.909, 0.969, 0.883, and 0.938, respectively. Comparative studies also demonstrate that the proposed method performs comparably to Faster R-CNN and outperforms the other eight existing methods in accurately detecting vehicles under various conditions.
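The four scores reported above are standard detection metrics derived from true-positive (TP), false-positive (FP), and false-negative (FN) counts: completeness is recall, correctness is precision, and quality penalizes both error types at once. A sketch with toy counts chosen only for illustration (not the paper's actual detection tallies):

```python
def detection_scores(tp, fp, fn):
    """Detection quality measures of the kind reported in the paper:
    completeness = recall, correctness = precision,
    quality = TP / (TP + FP + FN), F1 = harmonic mean of the first two."""
    completeness = tp / (tp + fn)
    correctness = tp / (tp + fp)
    quality = tp / (tp + fp + fn)
    f1 = 2 * completeness * correctness / (completeness + correctness)
    return completeness, correctness, quality, f1

# Toy counts: 90 correctly detected vehicles, 3 spurious detections,
# 9 missed vehicles.
c, r, q, f = detection_scores(90, 3, 9)
print(round(c, 3), round(r, 3), round(q, 3), round(f, 4))
```

Note that quality is always the strictest of the four, since it counts both false positives and misses against the detector in a single ratio.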

