Research on Airport Target Recognition under Low-Visibility Condition Based on Transfer Learning

2021 ◽  
Vol 2021 ◽  
pp. 1-13
Author(s):  
Jiajun Li ◽  
Yongzhong Wang ◽  
Yuexin Qian ◽  
Tianyi Xu ◽  
Kaiwen Wang ◽  
...  

Operational safety at the airport is a central concern of the aviation industry. Target recognition under low visibility plays an essential role in managing the circulation of objects on the airfield, identifying unexpected obstacles in time, and monitoring aviation operations to ensure their safety and efficiency. From the perspective of transfer learning, this paper explores the identification of airfield targets (mainly aircraft, humans, ground vehicles, hangars, and birds) under low-visibility conditions caused by bad weather such as fog, rain, and snow. First, a variety of deep transfer learning networks are used to identify targets under good visibility. The experimental results show that GoogLeNet is the most effective, with a recognition rate above 90.84%. However, its recognition rates drop sharply under low visibility, in some cases to below 10%. Therefore, the low-visibility images are preprocessed with 11 different defogging and visibility-enhancement algorithms before being classified by the GoogLeNet deep neural network, which raises the target recognition rate to more than 60%. According to the results, the dark channel algorithm gives the best defogging enhancement, and the GoogLeNet deep neural network achieves the highest target recognition rate.
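The dark channel prior, which the abstract reports as the most effective defogging step, can be sketched roughly as follows in NumPy. The patch size and the `omega`/`t0` constants are common defaults in the dehazing literature, not values taken from the paper:

```python
import numpy as np

def dark_channel(image, patch=15):
    """Dark channel prior: per-pixel minimum over RGB, then a local
    minimum filter over a square window.

    image: H x W x 3 float array in [0, 1]; patch: window side length.
    """
    min_rgb = image.min(axis=2)                    # per-pixel channel minimum
    h, w = min_rgb.shape
    pad = patch // 2
    padded = np.pad(min_rgb, pad, mode="edge")
    dark = np.empty_like(min_rgb)
    for i in range(h):                             # local minimum filter
        for j in range(w):
            dark[i, j] = padded[i:i + patch, j:j + patch].min()
    return dark

def estimate_atmospheric_light(image, dark, top=0.001):
    """Atmospheric light A: mean color of the brightest dark-channel pixels."""
    n = max(1, int(dark.size * top))
    idx = np.argsort(dark.ravel())[-n:]
    return image.reshape(-1, 3)[idx].mean(axis=0)

def dehaze(image, omega=0.95, t0=0.1):
    """Recover scene radiance J from the haze model I = J*t + A*(1 - t)."""
    A = estimate_atmospheric_light(image, dark_channel(image))
    # transmission estimated from the dark channel of the A-normalized image
    t = 1.0 - omega * dark_channel(image / np.maximum(A, 1e-6))
    t = np.clip(t, t0, 1.0)[..., None]
    return np.clip((image - A) / t + A, 0.0, 1.0)
```

The dehazed output would then be fed to the GoogLeNet classifier in place of the raw low-visibility frame.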

2020 ◽  
Vol 152 ◽  
pp. S146-S147
Author(s):  
J. Perez-Alija ◽  
P. Gallego ◽  
M. Lizondo ◽  
J. Nuria ◽  
A. Latorre-Musoll ◽  
...  

2021 ◽  
Vol 10 (9) ◽  
pp. 25394-25398
Author(s):  
Chitra Desai

Deep learning models have demonstrated improved efficacy in image classification since the ImageNet Large Scale Visual Recognition Challenge began in 2010. Image classification in computer vision has been further advanced by the advent of transfer learning. Training a model on a huge dataset demands large computational resources and adds considerable cost to learning. Transfer learning reduces this cost and helps avoid reinventing the wheel. Several pretrained models, such as VGG16, VGG19, ResNet50, InceptionV3, and EfficientNet, are widely used. This paper demonstrates image classification using the pretrained deep neural network model VGG16, which is trained on images from the ImageNet dataset. After obtaining the convolutional base model, a new deep neural network classifier based on fully connected layers is built on top of it. This classifier uses the features extracted by the convolutional base.
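A minimal Keras sketch of this setup: a frozen VGG16 convolutional base with a new fully connected head on top. The class count and head sizes are illustrative, not from the paper; `weights=None` merely keeps the sketch runnable offline, whereas `weights="imagenet"` would load the pretrained base that transfer learning actually relies on:

```python
import numpy as np
import tensorflow as tf

NUM_CLASSES = 5  # hypothetical number of target classes

# Convolutional base; pass weights="imagenet" in practice to reuse the
# pretrained ImageNet filters (weights=None keeps this sketch offline).
base = tf.keras.applications.VGG16(
    weights=None, include_top=False, input_shape=(224, 224, 3))
base.trainable = False  # freeze the base: only the new head is trained

model = tf.keras.Sequential([
    base,
    tf.keras.layers.Flatten(),
    tf.keras.layers.Dense(256, activation="relu"),
    tf.keras.layers.Dense(NUM_CLASSES, activation="softmax"),
])
model.compile(optimizer="adam", loss="categorical_crossentropy",
              metrics=["accuracy"])
```

With the base frozen, `model.fit` updates only the two dense layers, which is what keeps training on a small dataset cheap.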


2021 ◽  
Vol 2021 ◽  
pp. 1-9
Author(s):  
Xiaoping Guo

Traditional text-annotation-based video retrieval labels videos with text by hand, which is inefficient, highly subjective, and generally unable to describe the meaning of a video accurately. Traditional content-based video retrieval uses convolutional neural networks to extract the underlying feature information of images to build indexes, and achieves similarity retrieval over video feature vectors according to a chosen similarity measure. In this paper, by studying the characteristics of sports videos, we propose a histogram difference method based on transfer learning for detecting abrupt shot transitions (cuts) and a four-step method based on block matching for detecting fades. Regions with large interframe differences are marked as candidate shot regions by adaptive thresholding, and the shot boundaries are then determined by the cut detection algorithm. Combined with the characteristics of sports video, this paper also proposes a key-frame extraction method based on clustering and optical flow analysis and compares it experimentally with the traditional clustering method. The algorithm effectively removes redundant frames, and the extracted key frames are more representative.
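Histogram-difference cut detection with an adaptive threshold can be sketched as follows. The bin count and the mean-plus-k-standard-deviations threshold form are common choices assumed here, not details taken from the paper:

```python
import numpy as np

def frame_histograms(frames, bins=32):
    """Per-frame grayscale histograms, normalized to sum to 1."""
    hists = []
    for f in frames:
        h, _ = np.histogram(f, bins=bins, range=(0.0, 1.0))
        hists.append(h / max(1, f.size))
    return np.array(hists)

def detect_cuts(frames, bins=32, k=3.0):
    """Histogram-difference cut detection with an adaptive threshold.

    A boundary is declared where the interframe histogram difference
    exceeds mean + k * std of all differences.  Returns indices i such
    that a cut occurs between frame i-1 and frame i.
    """
    hists = frame_histograms(frames, bins)
    diffs = np.abs(np.diff(hists, axis=0)).sum(axis=1)  # L1 histogram distance
    threshold = diffs.mean() + k * diffs.std()
    return [i + 1 for i, d in enumerate(diffs) if d > threshold]
```

Because the threshold is derived from the statistics of the clip itself, the same code adapts to both quiet and fast-moving sports footage.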
Through extensive experiments, the fuzzy keyword finding algorithm proposed in this paper, based on an improved deep neural network and ontology-based semantic expansion, shows desirable retrieval performance, and it is feasible to use this method for extracting, annotating, and searching the underlying features of video. A notable strength of the algorithm is that it can quickly and effectively retrieve the desired video from a large pool of Internet video resources, reducing the false detection and miss rates while improving fidelity, which largely meets everyday needs.
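The paper does not spell out its fuzzy finding algorithm, so the following is only a minimal stand-in: edit-distance similarity (via `difflib`) over a toy ontology used for semantic expansion. The ontology entries and the cutoff value are invented for illustration:

```python
from difflib import SequenceMatcher

# Hypothetical ontology: each keyword maps to semantically related terms.
ONTOLOGY = {
    "football": ["soccer", "match"],
    "dunk": ["basketball", "slam"],
}

def expand(query):
    """Ontology-based semantic expansion of a query keyword."""
    return [query] + ONTOLOGY.get(query, [])

def fuzzy_find(query, tags, cutoff=0.8):
    """Return video tags whose similarity to any expanded query term
    reaches the cutoff (SequenceMatcher's ratio stands in for the
    paper's learned similarity measure)."""
    terms = expand(query)
    hits = []
    for tag in tags:
        score = max(SequenceMatcher(None, t, tag).ratio() for t in terms)
        if score >= cutoff:
            hits.append((tag, round(score, 2)))
    return sorted(hits, key=lambda x: -x[1])
```

For example, a query for "football" would match both the misspelled tag "footbal" (edit distance) and the tag "soccer" (ontology expansion).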


2021 ◽  
Author(s):  
Liangrui Pan ◽  
Boya Ji ◽  
Xiaoqi Wang ◽  
Shaoliang Peng

The use of chest X-ray images (CXI) to detect Severe Acute Respiratory Syndrome Coronavirus 2 (SARS-CoV-2), the cause of Coronavirus Disease 2019 (COVID-19), is vitally important for both patients and doctors. This research proposes a multi-channel feature deep neural network (MFDNN) algorithm to screen people infected with COVID-19. The algorithm integrates data oversampling and a multi-channel feature deep neural network model to carry out training in an end-to-end manner. In the experiment, we used a publicly available CXI database with 10,192 Normal, 6012 Lung Opacity (non-COVID lung infection), and 1345 Viral Pneumonia images. Compared with traditional deep learning models (DenseNet201, ResNet50, VGG19, GoogLeNet), the MFDNN model obtains an average test accuracy of 93.19% on all data. Furthermore, for each screening class, the precision, recall, and F1 score of the MFDNN model are also better than those of the traditional deep learning networks. Compared with the recent CoroDet model, the MFDNN algorithm is 1.91% more accurate in the four-category COVID-19 screening experiment. Finally, our experimental code will be placed at https://github.com/panliangrui/covid19.
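The abstract does not specify which oversampling scheme MFDNN integrates; random oversampling with replacement, one common form, can be sketched as:

```python
import numpy as np

def random_oversample(X, y, seed=0):
    """Balance classes by resampling minority classes with replacement
    until every class matches the majority-class count."""
    rng = np.random.default_rng(seed)
    classes, counts = np.unique(y, return_counts=True)
    target = counts.max()
    X_parts, y_parts = [], []
    for c in classes:
        idx = np.flatnonzero(y == c)
        extra = rng.choice(idx, size=target - idx.size, replace=True)
        keep = np.concatenate([idx, extra])
        X_parts.append(X[keep])
        y_parts.append(y[keep])
    order = rng.permutation(target * classes.size)  # shuffle the result
    return np.concatenate(X_parts)[order], np.concatenate(y_parts)[order]
```

On a class distribution as skewed as 10,192 Normal versus 1345 Viral Pneumonia, some such balancing step is what keeps the minority classes from being ignored during training.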


Author(s):  
Parvathi R. ◽  
Pattabiraman V.

This chapter proposes a hybrid method for object classification based on a deep neural network and a similarity-based search algorithm. The objects are pre-processed under external conditions. After pre-processing and training different deep learning networks on the object dataset, the authors compare the results to find the best model, improving accuracy using object-image features extracted from the feature-vector layer of a neural network. An RPForest (random projection forest) model is used to predict the approximate nearest images. ResNet50, InceptionV3, InceptionV4, and DenseNet169 models are trained on this dataset. The chapter also proposes adaptive fine-tuning of the deep learning models, with the RPForest model used to determine how many layers to fine-tune; this experiment is conducted with the Xception model.
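A single random-projection hash table, a simplified stand-in for the chapter's RPForest (which grows many randomized trees and merges their candidates), can be sketched as:

```python
import numpy as np

class RPIndex:
    """Minimal random-projection hash index for approximate nearest images.

    Feature vectors are bucketed by the sign pattern of a few random
    projections; a query does an exact search only within its own bucket.
    """

    def __init__(self, vectors, n_planes=4, seed=0):
        rng = np.random.default_rng(seed)
        self.vectors = np.asarray(vectors, dtype=float)
        self.planes = rng.normal(size=(n_planes, self.vectors.shape[1]))
        self.buckets = {}
        for i, key in enumerate(self._keys(self.vectors)):
            self.buckets.setdefault(key, []).append(i)

    def _keys(self, X):
        bits = (X @ self.planes.T > 0).astype(int)  # side of each hyperplane
        return [tuple(row) for row in bits]

    def query(self, v, k=1):
        """Exact search within the query's bucket (falls back to all)."""
        cand = self.buckets.get(self._keys(np.atleast_2d(v))[0])
        cand = list(cand) if cand else list(range(len(self.vectors)))
        d = np.linalg.norm(self.vectors[cand] - v, axis=1)
        return [cand[i] for i in np.argsort(d)[:k]]
```

In the chapter's setting, `vectors` would hold the feature-vector-layer embeddings of the object images, and a real RPForest would query several such tables to reduce the chance of missing a true neighbor.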

