Deep Complex-valued Convolutional Neural Network for Drone Recognition Based on RF Fingerprinting

Author(s):  
Hao Gu ◽  
Guangwei Qing ◽  
Yu Wang ◽  
Sheng Hong ◽  
Guan Gui ◽  
...  

<div>Drone-aided ubiquitous applications play increasingly important roles in our daily life. Accurate recognition of drones is required in aviation management because of their potential risks and even disasters.</div><div>Radio frequency (RF) fingerprinting-based recognition using deep learning is considered one of the most effective approaches for extracting hidden abstract features from the RF data of drones. Existing deep learning-based methods suffer from either a high computational burden or low accuracy.</div><div>In this paper, we propose a deep complex-valued convolutional neural network (DC-CNN) method based on RF fingerprinting for recognizing different drones.</div><div>Compared with existing recognition methods, the DC-CNN method has the advantages of high recognition accuracy, fast running time, and small network complexity.</div><div>Nine algorithm models and two datasets are used to demonstrate the superior performance of our system.</div><div>Experimental results show that the proposed DC-CNN achieves recognition accuracies of 99.5% and 74.1% on the 4-class and 8-class RF drone datasets, respectively.</div>
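A complex-valued CNN operates directly on the I/Q (in-phase/quadrature) components of the RF signal. As a rough illustration of the idea (not the paper's implementation), a complex convolution with kernel W = A + iB applied to input x + iy decomposes into four real convolutions; a minimal NumPy sketch with illustrative names:

```python
import numpy as np

def complex_conv1d(x_re, x_im, w_re, w_im):
    """Complex convolution (A + iB) * (x + iy) realised as four real
    convolutions: real part A*x - B*y, imaginary part B*x + A*y."""
    real = np.convolve(x_re, w_re, mode="valid") - np.convolve(x_im, w_im, mode="valid")
    imag = np.convolve(x_re, w_im, mode="valid") + np.convolve(x_im, w_re, mode="valid")
    return real, imag

# Toy RF samples: in-phase (real) and quadrature (imaginary) components
x_re, x_im = np.array([1.0, 2.0, 3.0]), np.array([0.0, 1.0, 0.0])
w_re, w_im = np.array([1.0, 1.0]), np.array([0.0, 1.0])
real, imag = complex_conv1d(x_re, x_im, w_re, w_im)
```

Stacking such layers (with complex-aware activations and batch normalisation) is the general recipe behind deep complex-valued networks.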

2020 ◽  


Author(s):  
Xiru Wu ◽  
Xingyu Ling ◽  
Jinxia Liu

In this paper, a deep convolutional neural network (DCNN) is applied to automatically locating and recognizing complex workpieces for a vision-based sorting robot in the industrial production process. First, to obtain the locations of workpieces, a pixel projection algorithm (PPA), consisting of pre-processing and a pixel projection operation, is presented to eliminate uneven illumination and to locate and segment the workpiece images. Then, the objective information is obtained and the object identified by training the DCNN, which recognizes the rotation degree and type of workpieces at high speed. Finally, experimental results prove the validity of the location-recognition algorithms for the vision-based sorting robot. The location error and recognition accuracy are significantly improved in the experimental environment.
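The pixel projection idea can be illustrated with a toy sketch (assumed behavior, not the paper's exact PPA): sum the binary foreground pixels along rows and columns, and take the non-zero spans of the two projections as the workpiece's bounding box.

```python
import numpy as np

def locate_by_projection(binary_img):
    """Project foreground pixels onto the row and column axes; the spans
    where the projections are non-zero give the object's bounding box."""
    rows = np.flatnonzero(binary_img.sum(axis=1))
    cols = np.flatnonzero(binary_img.sum(axis=0))
    if rows.size == 0 or cols.size == 0:
        return None  # no foreground found
    return rows[0], rows[-1], cols[0], cols[-1]  # top, bottom, left, right

img = np.zeros((10, 10), dtype=int)
img[2:5, 3:7] = 1  # a synthetic "workpiece"
box = locate_by_projection(img)
```

The cropped box can then be fed to the recognition network; in practice a thresholding step (after illumination correction) would produce `binary_img`.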


2021 ◽  
Vol 9 ◽  
Author(s):  
Bibo Dai ◽  
Yunmin Wang ◽  
Chunyang Ye ◽  
Qihang Li ◽  
Canming Yuan ◽  
...  

This paper proposes an improved U-Net fully convolutional neural network to automatically extract the deformation information of a single landslide over a time series, based on physical model experiments. The method extracts time-series information for three different landslide deformation ranges. Compared with U-Net and the mainstream superpixel method, the evaluation indicators DSC, VOE, and RVD verify the high recognition accuracy and strong robustness of our method.
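The three evaluation indicators named above have standard definitions for binary segmentation masks; a NumPy sketch of those formulas (not code from the paper):

```python
import numpy as np

def seg_metrics(pred, gt):
    """DSC (Dice similarity coefficient), VOE (volumetric overlap error)
    and RVD (relative volume difference) for two binary masks."""
    pred, gt = pred.astype(bool), gt.astype(bool)
    inter = np.logical_and(pred, gt).sum()
    union = np.logical_or(pred, gt).sum()
    dsc = 2.0 * inter / (pred.sum() + gt.sum())
    voe = 1.0 - inter / union
    rvd = (pred.sum() - gt.sum()) / gt.sum()
    return dsc, voe, rvd
```

Higher DSC and lower VOE/|RVD| indicate better agreement between the predicted deformation region and the ground truth.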


10.2196/14806 ◽  
2019 ◽  
Vol 7 (4) ◽  
pp. e14806 ◽  
Author(s):  
Arne Peine ◽  
Ahmed Hallawa ◽  
Oliver Schöffski ◽  
Guido Dartmann ◽  
Lejla Begic Fazlic ◽  
...  

Background High numbers of consumable medical materials (eg, sterile needles and swabs) are used during the daily routine of intensive care units (ICUs) worldwide. Although medical consumables largely contribute to total ICU hospital expenditure, many hospitals do not track the use of individual materials. Current tracking solutions that meet the specific requirements of the medical environment, such as barcodes or radio frequency identification, require specialized material preparation and high infrastructure investment. This impedes the accurate prediction of consumption, leads to high storage maintenance costs caused by large inventories, and hinders scientific work due to inaccurate documentation. Thus, new cost-effective and contactless methods for object detection are urgently needed. Objective The goal of this work was to develop and evaluate a contactless visual recognition system for tracking medical consumable materials in ICUs using a deep learning approach on a distributed client-server architecture. Methods We developed Consumabot, a novel client-server optical recognition system for medical consumables, based on the convolutional neural network model MobileNet implemented in TensorFlow. The software was designed to run on single-board computer platforms as a detection unit. The system was trained to recognize 20 different materials in the ICU, with 100 sample images provided for each consumable material. We assessed the top-1 recognition rates in the context of different real-world ICU settings: materials presented to the system without visual obstruction, 50% covered materials, and scenarios with multiple items. We further performed an analysis of variance with repeated measures to quantify the effect of adverse real-world circumstances. Results Consumabot reached >99% recognition reliability after about 60 steps of training and 150 steps of validation. A desirably low cross entropy of <0.03 was reached for the training set after about 100 iteration steps, and after 170 steps for the validation set. The system showed a high top-1 mean recognition accuracy in a real-world scenario of 0.85 (SD 0.11) for objects presented without visual obstruction. Recognition accuracy was lower, but still acceptable, in scenarios where the objects were 50% covered (P<.001; mean recognition accuracy 0.71; SD 0.13) or where multiple objects of the target group were present (P=.01; mean recognition accuracy 0.78; SD 0.11), compared to an unobstructed view. The approach met the criterion of requiring no explicit labeling (eg, barcodes, radio frequency labeling) while maintaining a high standard of quality and hygiene with minimal consumption of resources (eg, cost, time, training, and computational power). Conclusions Using a convolutional neural network architecture, Consumabot consistently achieved good results in the classification of consumables and is thus a feasible way to recognize and register medical consumables directly to a hospital's electronic health record. The system shows limitations when the materials are partially covered, since the identifying characteristics of the consumables are then not visible to the system. Further assessment under different medical circumstances is needed.
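The two quantities reported above, top-1 recognition rate and cross entropy, have standard definitions; an illustrative sketch (not the Consumabot evaluation code):

```python
import numpy as np

def top1_accuracy(scores, labels):
    """Top-1 recognition rate: fraction of samples whose highest-scoring
    class equals the ground-truth label."""
    return float((scores.argmax(axis=1) == labels).mean())

def mean_cross_entropy(probs, labels):
    """Mean cross-entropy of predicted class probabilities against the
    true labels (the training/validation loss quoted in the abstract)."""
    return float(-np.log(probs[np.arange(len(labels)), labels]).mean())
```

In a MobileNet-style classifier, `scores` would be the softmax outputs over the 20 consumable classes.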


2019 ◽  
Vol 8 (4) ◽  
pp. 4826-4828

Handwriting is a learned skill that has been an excellent means of communication and documentation for thousands of years. The simplest ways to communicate with computers are through either speech or handwriting. Speech has some limitations; hence, input through handwriting is recommended. Inputting data to computers is difficult for Indian-language scripts because of their complex character sets. This paper focuses on exploring deep learning-based convolutional neural networks (CNNs) for the recognition of handwritten script. The proposed method achieved 99% recognition accuracy for handwritten English numerals and promising recognition accuracy for Kannada numerals.
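The building blocks of such a CNN (valid convolution, ReLU, 2x2 max pooling) can be sketched in NumPy; this is an illustrative forward pass, not the paper's network:

```python
import numpy as np

def conv2d_valid(img, kernel):
    """'Valid' 2D convolution (cross-correlation, as in most CNN
    libraries) of a single-channel image with one kernel."""
    kh, kw = kernel.shape
    h, w = img.shape
    out = np.empty((h - kh + 1, w - kw + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            out[i, j] = (img[i:i + kh, j:j + kw] * kernel).sum()
    return out

def relu(x):
    return np.maximum(x, 0.0)

def max_pool2(x):
    """Non-overlapping 2x2 max pooling (odd edges trimmed)."""
    h, w = x.shape
    x = x[:h - h % 2, :w - w % 2]
    return x.reshape(x.shape[0] // 2, 2, x.shape[1] // 2, 2).max(axis=(1, 3))

img = np.arange(16, dtype=float).reshape(4, 4)  # stand-in for a digit image
feat = max_pool2(relu(conv2d_valid(img, np.ones((2, 2)))))
```

A real digit recognizer would stack several such conv/pool stages, flatten the result, and finish with fully connected layers and a softmax over the numeral classes.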


2020 ◽  
Vol 11 ◽  
Author(s):  
Zhuo Wang ◽  
Zhezhou Yu ◽  
Yao Wang ◽  
Huimao Zhang ◽  
Yishan Luo ◽  
...  

Background Magnetic resonance imaging (MRI) has a wide range of applications in medical imaging. Recently, studies based on deep learning algorithms have demonstrated powerful processing capabilities for medical imaging data. Previous studies have mostly focused on common diseases, which usually have large datasets and lesions centralized in the brain. In this paper, we used deep learning models to process MRI images to automatically differentiate the rare neuromyelitis optica spectrum disorder (NMOSD) from multiple sclerosis (MS), both of which are characterized by scattered and overlapping lesions. Methods We proposed a novel model structure to capture the essential information of 3D MRI images and convert it into lower dimensions. To empirically prove the efficiency of our model, we first used a conventional 3-dimensional (3D) model to classify T2-weighted fluid-attenuated inversion recovery (T2-FLAIR) images and showed that traditional 3D convolutional neural network (CNN) models lack the learning capacity to distinguish between NMOSD and MS. We then compressed the 3D T2-FLAIR images with a two-view compression block so as to apply 2D models of two different depths (18 and 34 layers) for disease diagnosis, and also applied transfer learning by pre-training our model on the ImageNet dataset. Results We found that our models achieved superior performance when pre-trained on the ImageNet dataset: the average accuracies of the 34-layer and 18-layer models were 0.75 and 0.725, sensitivities were 0.707 and 0.708, and specificities were 0.759 and 0.719, respectively. Meanwhile, the traditional 3D CNN models lacked the learning capacity to distinguish between NMOSD and MS. Conclusion The novel CNN model we propose can automatically differentiate the rare NMOSD from MS and, in particular, showed better performance than traditional 3D CNN models. This indicates that our 3D compressed CNN models are applicable to diseases with small-scale datasets and overlapping, scattered lesions.
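The paper's two-view compression block is a learned component; purely to illustrate the idea of collapsing a 3D volume into two 2D views that standard 2D CNNs can consume, here is a fixed-projection sketch (an assumption for illustration, not the authors' block):

```python
import numpy as np

def two_view_compress(volume):
    """Collapse a 3D volume (slices x height x width) into two orthogonal
    2D projections by mean intensity, each usable by a 2D CNN."""
    view_a = volume.mean(axis=0)  # across slices -> height x width
    view_b = volume.mean(axis=2)  # across width  -> slices x height
    return view_a, view_b

volume = np.arange(60, dtype=float).reshape(3, 4, 5)  # toy 3D scan
view_a, view_b = two_view_compress(volume)
```

The benefit of such compression is that the downstream 2D backbone (eg, an 18- or 34-layer ResNet-style model) can reuse ImageNet pre-trained weights, which is exactly where the abstract reports the accuracy gains.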


2019 ◽  
Author(s):  
Seoin Back ◽  
Junwoong Yoon ◽  
Nianhan Tian ◽  
Wen Zhong ◽  
Kevin Tran ◽  
...  

We present an application of a deep-learning convolutional neural network to atomic surface structures, using atomic and Voronoi polyhedra-based neighbor information to predict adsorbate binding energies for applications in catalysis.
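Voronoi-based connectivity requires a full tessellation of the atomic positions; as a loose stand-in for the idea of neighbor-derived atomic features, a toy cutoff-based coordination count (illustrative only, not the paper's featurization):

```python
import numpy as np

def neighbor_counts(coords, cutoff):
    """Count, for each atom, the neighbours lying within a distance
    cutoff -- a crude stand-in for Voronoi-derived connectivity."""
    diff = coords[:, None, :] - coords[None, :, :]
    dist = np.linalg.norm(diff, axis=-1)
    return ((dist > 0) & (dist < cutoff)).sum(axis=1)

# Four atoms on a unit square: each sees 3 neighbours within 1.5
square = np.array([[0.0, 0.0], [1.0, 0.0], [0.0, 1.0], [1.0, 1.0]])
counts = neighbor_counts(square, cutoff=1.5)
```

Per-atom features like these (coordination, neighbor identities, shared Voronoi facet areas) are what a CNN or graph-style model would consume to regress binding energies.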


2020 ◽  
Vol 10 (1) ◽  
Author(s):  
Young-Gon Kim ◽  
Sungchul Kim ◽  
Cristina Eunbee Cho ◽  
In Hye Song ◽  
Hee Jin Lee ◽  
...  

Abstract Fast and accurate confirmation of metastasis on the frozen tissue section of an intraoperative sentinel lymph node biopsy is an essential tool for critical surgical decisions. However, accurate diagnosis by pathologists is difficult within the time limitations. Training a robust and accurate deep learning model is also difficult owing to the limited number of frozen datasets with high-quality labels. To overcome these issues, we validated the effectiveness of transfer learning from CAMELYON16 in improving the performance of a convolutional neural network (CNN)-based classification model on our frozen dataset (N = 297) from Asan Medical Center (AMC). Among the 297 whole slide images (WSIs), 157 and 40 WSIs were used to train deep learning models with different training dataset ratios of 2, 4, 8, 20, 40, and 100%. The remaining 100 WSIs were used to validate model performance in terms of patch- and slide-level classification. An additional 228 WSIs from Seoul National University Bundang Hospital (SNUBH) were used as an external validation. Three initial weights, i.e., scratch-based (random initialization), ImageNet-based, and CAMELYON16-based models, were used to validate their effectiveness in external validation. In the patch-level classification results on the AMC dataset, CAMELYON16-based models trained with a small dataset (up to 40%, i.e., 62 WSIs) showed a significantly higher area under the curve (AUC) of 0.929 than the scratch- and ImageNet-based models at 0.897 and 0.919, respectively, while CAMELYON16-based and ImageNet-based models trained with 100% of the training dataset showed comparable AUCs of 0.944 and 0.943, respectively. For the external validation, CAMELYON16-based models showed higher AUCs than the scratch- and ImageNet-based models. These results validate the feasibility of transfer learning to enhance model performance on frozen section datasets with limited numbers of slides.
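The AUC figures above can be computed for any set of scores and labels via the rank (Mann-Whitney) formulation of the area under the ROC curve; a sketch of that standard definition (not the paper's evaluation code):

```python
import numpy as np

def roc_auc(scores, labels):
    """AUC via the rank (Mann-Whitney) formulation: the probability that
    a randomly chosen positive scores above a randomly chosen negative,
    counting ties as one half."""
    pos = scores[labels == 1]
    neg = scores[labels == 0]
    greater = (pos[:, None] > neg[None, :]).sum()
    ties = (pos[:, None] == neg[None, :]).sum()
    return (greater + 0.5 * ties) / (pos.size * neg.size)
```

Here the positives would be tumor patches (or slides) and the scores the model's metastasis probabilities; 0.5 is chance level and 1.0 perfect separation.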


2021 ◽  
Vol 13 (2) ◽  
pp. 274
Author(s):  
Guobiao Yao ◽  
Alper Yilmaz ◽  
Li Zhang ◽  
Fei Meng ◽  
Haibin Ai ◽  
...  

Available stereo matching algorithms produce a large number of false-positive matches, or only a few true positives, across oblique stereo images with a large baseline. This undesired result arises from the complex perspective deformation and radiometric distortion across the images. To address this problem, we propose a novel affine invariant feature matching algorithm with subpixel accuracy based on an end-to-end convolutional neural network (CNN). In our method, we adopt and modify a Hessian affine network, which we refer to as IHesAffNet, to obtain affine invariant Hessian regions within a deep learning framework. To improve the correlation between corresponding features, we introduce an empirical weighted loss function (EWLF) based on negative samples obtained using K nearest neighbors, and then generate highly discriminative deep learning-based descriptors with our multiple hard network structure (MTHardNets). Following this step, conjugate features are produced by using the Euclidean distance ratio as the matching metric, and the accuracy of matches is optimized through deep learning transform-based least square matching (DLT-LSM). Finally, experiments on large-baseline oblique stereo images acquired at ground close range and by unmanned aerial vehicle (UAV) verify the effectiveness of the proposed approach, and comprehensive comparisons demonstrate that our matching algorithm outperforms state-of-the-art methods in terms of accuracy, distribution, and correct ratio. The main contributions of this article are: (i) the proposed MTHardNets generate high-quality descriptors; and (ii) the IHesAffNet produces substantial affine invariant corresponding features with reliable transform parameters.
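The Euclidean distance-ratio metric mentioned above is commonly implemented as a Lowe-style nearest-neighbor ratio test over descriptor distances; a sketch under that assumption (illustrative, not the authors' code):

```python
import numpy as np

def ratio_match(desc_a, desc_b, ratio=0.8):
    """Nearest-neighbour matching with the Euclidean distance-ratio test:
    keep a match only when the best distance is clearly smaller than the
    second-best, which suppresses ambiguous correspondences."""
    matches = []
    for i, d in enumerate(desc_a):
        dists = np.linalg.norm(desc_b - d, axis=1)
        best, second = np.argsort(dists)[:2]
        if dists[best] < ratio * dists[second]:
            matches.append((i, int(best)))
    return matches

desc_a = np.array([[0.0, 0.0]])
desc_b = np.array([[0.1, 0.0], [5.0, 5.0], [3.0, 3.0]])
matches = ratio_match(desc_a, desc_b)
```

With learned descriptors such as those from MTHardNets, the ratio threshold trades correct ratio against match density; the accepted matches would then be refined by a least-squares matching step.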

