Airborne Visual Detection and Tracking of Cooperative UAVs Exploiting Deep Learning

Sensors ◽  
2019 ◽  
Vol 19 (19) ◽  
pp. 4332 ◽  
Author(s):  
Roberto Opromolla ◽  
Giuseppe Inchingolo ◽  
Giancarmine Fasano

The performance achievable by using Unmanned Aerial Vehicles (UAVs) for a large variety of civil and military applications, as well as the extent of applicable mission scenarios, can significantly benefit from the exploitation of formations of vehicles able to fly in a coordinated manner (swarms). In this respect, visual cameras represent a key instrument to enable coordination by giving each UAV the capability to visually monitor the other members of the formation. Hence, a related technological challenge is the development of robust solutions to detect and track cooperative targets through a sequence of frames. In this framework, this paper proposes an innovative approach to carry out this task based on deep learning. Specifically, the You Only Look Once (YOLO) object detection system is integrated within an original processing architecture in which the machine-vision algorithms are aided by navigation hints available thanks to the cooperative nature of the formation. An experimental flight test campaign, involving formations of two multirotor UAVs, is conducted to collect a database of images suitable to assess the performance of the proposed approach. Results demonstrate high accuracy and robustness against challenging conditions in terms of illumination, background, and target-range variability.
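The abstract describes aiding the detector with navigation hints from the cooperative formation. One way such a hint can be used is to gate the detector to a search window around the target's expected pixel position; the sketch below illustrates that gating step only. All names and the square-window model are assumptions for illustration, not the paper's actual projection pipeline.

```python
import numpy as np

def search_window(expected_uv, uncertainty_px, frame_shape):
    """Clamp a square search window around the expected target pixel.

    expected_uv: (u, v) pixel position predicted from cooperative
    navigation data (a hypothetical input; the paper's projection
    model is not given in the abstract).
    """
    u, v = expected_uv
    h, w = frame_shape[:2]
    half = int(uncertainty_px)
    u0, u1 = max(0, int(u) - half), min(w, int(u) + half)
    v0, v1 = max(0, int(v) - half), min(h, int(v) + half)
    return u0, v0, u1, v1

frame = np.zeros((480, 640, 3), dtype=np.uint8)  # stand-in camera frame
roi = search_window((600, 30), 100, frame.shape)
print(roi)  # (500, 0, 640, 130) -- window clipped at the image border
```

Running the detector only on the cropped window both speeds up processing and suppresses detections far from the plausible target location.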

Author(s):  
Kalirajan K. ◽  
Seethalakshmi V. ◽  
Venugopal D. ◽  
Balaji K.

Moving object detection and tracking is the process of identifying and locating objects of interest, such as people, vehicles, toys, and human faces, in video sequences precisely and without background disturbances. It is the first and foremost step in any kind of video analytics application, and it greatly influences high-level tasks such as classification and tracking. Traditional methods are easily affected by background disturbances and achieve poor results. With the advent of deep learning, it is possible to improve the results with high-level features. A deep learning model helps to extract more useful insights about events in the real world. This chapter introduces the deep convolutional neural network and reviews the deep learning models used for moving object detection. It also discusses the parameters involved and the metrics used to assess the performance of moving object detection in deep learning models. Finally, the chapter concludes with recommendations for the benefit of the research community.
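Among the evaluation metrics the chapter refers to, intersection over union (IoU) is one commonly reported measure of how well a predicted bounding box matches the ground truth. A minimal sketch, assuming axis-aligned `(x0, y0, x1, y1)` boxes:

```python
def iou(box_a, box_b):
    """Intersection-over-union of two axis-aligned boxes (x0, y0, x1, y1)."""
    xa0, ya0, xa1, ya1 = box_a
    xb0, yb0, xb1, yb1 = box_b
    # Overlap rectangle (empty if the boxes are disjoint).
    ix0, iy0 = max(xa0, xb0), max(ya0, yb0)
    ix1, iy1 = min(xa1, xb1), min(ya1, yb1)
    inter = max(0, ix1 - ix0) * max(0, iy1 - iy0)
    area_a = (xa1 - xa0) * (ya1 - ya0)
    area_b = (xb1 - xb0) * (yb1 - yb0)
    return inter / (area_a + area_b - inter)

print(iou((0, 0, 10, 10), (5, 0, 15, 10)))  # 0.333... (half of each box overlaps)
```

A detection is typically counted as correct when its IoU with a ground-truth box exceeds a threshold such as 0.5.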


Author(s):  
Mohamed Khedir Noraldain Alamin

In recent years, the use of flying drones and modern unmanned aerial vehicles (UAVs), equipped with the latest techniques and capabilities, has grown steadily across a large scope of both civilian and military applications. Drones can fly autonomously in many environments and locations and can perform various missions, so providing a system for UAV detection and tracking is of crucial importance. This paper discusses the design of a detection and tracking method, based on deep learning algorithms, as part of an Aero-vehicle Defense System (ADS) for UAVs. The small radar cross-section (RCS) footprint makes it difficult for traditional methods and aero-vehicle defense systems to distinguish among birds, stealth fighters, and UAVs of comparable size and RCS characteristics. Detecting low-RCS targets is challenging because the probability of detection is very low, particularly in the presence of interference and clutter, which rapidly degrade the detection process.


Author(s):  
Rawaa Ismael Farhan ◽  
Abeer Tariq Maolood ◽  
Nidaa Flaih Hassan

<p>The emergence of the Internet of Things (IoT), driven by the development of communication systems, has made the study of cyber security more important. Attacks evolve day after day, and new attacks emerge; hence, network anomaly-based intrusion detection systems, which play an important role in protecting the network through early detection of attacks, have become essential. Advances in machine learning, and the emergence of the deep learning field with its ability to extract high-level features with high accuracy, make it possible to apply such systems to real network traffic. The CSE-CIC-IDS2018 dataset, which covers a wide range of intrusions and normal behavior, is an ideal benchmark for testing and evaluation. In this paper, we test and evaluate our deep neural network (DNN) model, which achieves a detection accuracy of about 90%.</p>
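The abstract does not give the DNN's architecture, so the following is only a shape-level sketch of a small feed-forward classifier over flow features: CSE-CIC-IDS2018 records carry roughly 80 numeric features per flow, and the layer widths and random weights here are illustrative stand-ins, not the authors' trained model.

```python
import numpy as np

rng = np.random.default_rng(0)

def relu(x):
    return np.maximum(0.0, x)

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

# Illustrative layer sizes: 80 flow features -> 32 hidden units -> 1 output.
W1, b1 = 0.05 * rng.normal(size=(80, 32)), np.zeros(32)
W2, b2 = 0.05 * rng.normal(size=(32, 1)), np.zeros(1)

def predict(x):
    h = relu(x @ W1 + b1)        # hidden representation of the flow
    return sigmoid(h @ W2 + b2)  # probability that the flow is an attack

batch = rng.normal(size=(4, 80))  # four stand-in flow feature vectors
print(predict(batch).shape)       # (4, 1)
```

In a real pipeline the weights would of course be learned from labeled benign/attack flows rather than drawn at random.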


ACTA IMEKO ◽  
2021 ◽  
Vol 10 (2) ◽  
pp. 62
Author(s):  
Jakub Svatos ◽  
Jan Holub ◽  
Jan Belak

<p class="Abstract">Currently, acoustic detection techniques for gunshots (gunshot detection and classification) are increasingly being used not only for military applications but also for civilian purposes. Detection, localisation, and classification of a dangerous event such as a gunshot by acoustic means is a promising alternative to the commonly used visual detection. In some situations, an automatic acoustic detection system that can detect and localise the source of a gunshot and classify the calibre may be preferable. This paper presents a system for acoustic detection that can detect, localise, and classify acoustic events such as gunshots. The system has been tested in open and closed shooting ranges; the tested firearms were a 9 mm handgun, a 6.35 mm handgun, a .22 handgun, and a .22 rifle with various subsonic and supersonic ammunition. As 'false alarms', sets of different impulsive acoustic events such as door slams and breaking glass have been used. Localisation and classification algorithms are also introduced. To classify the tested acoustic signals, Continuous Wavelet and Mel Frequency Transformation methods have been used for signal processing, and a fully connected two-layer neural network has been implemented. The results show that the acoustic detector can be used for reliable gunshot detection, localisation, and classification.</p>
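The abstract mentions localisation but not the array geometry. A standard building block for acoustic localisation is estimating the angle of arrival from the time difference of arrival (TDOA) between two microphones; the far-field sketch below shows that step only, with the baseline and delay values chosen for illustration.

```python
import numpy as np

SPEED_OF_SOUND = 343.0  # m/s in air at ~20 degrees C

def bearing_from_tdoa(delay_s, mic_spacing_m):
    """Far-field angle of arrival (radians, measured from broadside)
    for a two-microphone pair, given the inter-microphone delay.
    The actual array geometry of the paper is not specified."""
    s = SPEED_OF_SOUND * delay_s / mic_spacing_m
    return np.arcsin(np.clip(s, -1.0, 1.0))  # clip guards against noisy delays

# A 1 ms delay measured across a 0.5 m baseline:
theta = bearing_from_tdoa(1e-3, 0.5)
print(round(np.degrees(theta), 1))  # 43.3
```

With three or more non-collinear microphones, several such pairwise bearings (or the hyperbolae they define) can be intersected to localise the source in the plane.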


2020 ◽  
Author(s):  
Pedro V. A. de Freitas ◽  
Antonio J. G. Busson ◽  
Álan L. V. Guedes ◽  
Sérgio Colcher

A large number of videos are uploaded on educational platforms every minute. Those platforms are responsible for any sensitive media uploaded by their users. An automated detection system to identify pornographic content could assist human workers by pre-selecting suspicious videos. In this paper, we propose a multimodal approach to adult content detection. We use two Deep Convolutional Neural Networks to extract high-level features from both image and audio sources of a video. Then, we concatenate those features and evaluate the performance of classifiers on a set of mixed educational and pornographic videos. We achieve an F1-score of 95.67% on the educational and adult videos set and an F1-score of 94% on our test subset for the pornographic class.
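Two ingredients of the approach above can be sketched concretely: late fusion by concatenating per-modality embeddings, and the F1-score used to report results. The embedding sizes (2048 for image, 128 for audio) are assumptions for illustration, not the paper's stated dimensions.

```python
import numpy as np

def f1_score(y_true, y_pred):
    """Binary F1 = harmonic mean of precision and recall."""
    y_true, y_pred = np.asarray(y_true), np.asarray(y_pred)
    tp = np.sum((y_pred == 1) & (y_true == 1))
    fp = np.sum((y_pred == 1) & (y_true == 0))
    fn = np.sum((y_pred == 0) & (y_true == 1))
    precision = tp / (tp + fp)
    recall = tp / (tp + fn)
    return 2 * precision * recall / (precision + recall)

# Late fusion: concatenate the two modality embeddings into one vector.
image_feat = np.ones(2048)  # stand-in CNN image embedding (size assumed)
audio_feat = np.ones(128)   # stand-in CNN audio embedding (size assumed)
fused = np.concatenate([image_feat, audio_feat])
print(fused.shape)  # (2176,)

print(f1_score([1, 1, 0, 0, 1], [1, 0, 0, 1, 1]))  # 0.666...
```

The fused vector is what the downstream classifier sees, so each modality can compensate for content the other misses (e.g. audio cues when the frames are ambiguous).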


2021 ◽  
pp. 1-12
Author(s):  
Tina Babu ◽  
Tripty Singh ◽  
Deepa Gupta ◽  
Shahin Hameed

Colon cancer has one of the highest mortality rates among cancer diagnoses worldwide. However, histopathological analysis that relies on the expertise of pathologists is a demanding and time-consuming process. Automated diagnosis of colon cancer from biopsy examination plays an important role in patient treatment and prognosis. As conventional handcrafted feature extraction requires specialized experience to select realistic features, deep learning approaches have been chosen instead, since abstract high-level features can be extracted automatically. This paper presents a colon cancer detection system that uses transfer learning architectures to automatically extract high-level features from colon biopsy images for automated diagnosis and prognosis. In this study, image features are extracted from a pre-trained convolutional neural network (CNN) and used to train a Bayesian-optimized Support Vector Machine classifier. Moreover, the AlexNet, VGG-16, and Inception-V3 pre-trained networks were compared to determine the best network for colon cancer detection. Furthermore, the proposed framework is evaluated on four datasets: two collected from Indian hospitals (at magnifications of 4X, 10X, 20X, and 40X) and two public colon image datasets. Compared with existing classifiers and methods on the public datasets, the test results show that the Inception-V3 network, with accuracies ranging from 96.5% to 99%, is best suited for the proposed framework.
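The pipeline above classifies fixed-length CNN feature vectors rather than raw pixels. The sketch below illustrates that separation of concerns with random vectors standing in for pre-trained CNN features and a nearest-centroid rule standing in for the paper's Bayesian-optimized SVM (a deliberate simplification, not the authors' classifier).

```python
import numpy as np

rng = np.random.default_rng(1)

# Stand-ins for CNN feature vectors of two tissue classes; in the real
# framework these would come from, e.g., an Inception-V3 pooling layer.
benign = rng.normal(0.0, 1.0, size=(20, 8))
malignant = rng.normal(3.0, 1.0, size=(20, 8))

def nearest_centroid(x, centroids):
    """Assign x to the class whose mean feature vector is closest."""
    d = np.linalg.norm(centroids - x, axis=1)
    return int(np.argmin(d))

centroids = np.stack([benign.mean(axis=0), malignant.mean(axis=0)])
print(nearest_centroid(malignant[0], centroids))  # 1 (malignant class)
```

Because the feature extractor is frozen, only this lightweight classifier needs training, which is what makes transfer learning practical on small medical datasets.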


2020 ◽  
Vol 71 (7) ◽  
pp. 868-880
Author(s):  
Nguyen Hong-Quan ◽  
Nguyen Thuy-Binh ◽  
Tran Duc-Long ◽  
Le Thi-Lan

Along with the strong development of camera networks, video analysis systems have become more and more popular and have been applied in various practical applications. In this paper, we focus on the person re-identification (person ReID) task, which is a crucial step in video analysis systems. The purpose of person ReID is to associate multiple images of a given person moving through a non-overlapping camera network. Many efforts have been devoted to person ReID. However, most studies deal only with well-aligned bounding boxes, which are detected manually and considered perfect inputs for person ReID. In fact, when building a fully automated person ReID system, the quality of the two preceding steps, person detection and tracking, may strongly affect person ReID performance. The contributions of this paper are two-fold. First, a unified framework for person ReID based on deep learning models is proposed. In this framework, a deep neural network for person detection is coupled with a deep-learning-based tracking method. In addition, features extracted from an improved ResNet architecture are proposed for person representation to achieve higher ReID accuracy. Second, our self-built dataset is introduced and employed to evaluate all three steps of the fully automated person ReID framework.
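The final step of such a framework ranks gallery images by similarity to a query embedding. A minimal sketch of that ranking step, using cosine similarity over toy 2-D embeddings (real ReID embeddings from a ResNet-style backbone would have hundreds or thousands of dimensions):

```python
import numpy as np

def cosine_sim(a, b):
    """Cosine similarity between two embedding vectors."""
    return a @ b / (np.linalg.norm(a) * np.linalg.norm(b))

def match(query, gallery):
    """Index of the gallery embedding most similar to the query
    (the ranking step of a ReID pipeline)."""
    return int(np.argmax([cosine_sim(query, g) for g in gallery]))

gallery = [np.array([1.0, 0.0]), np.array([0.0, 1.0]), np.array([1.0, 1.0])]
print(match(np.array([0.9, 1.1]), gallery))  # 2
```

Because detection and tracking produce the crops these embeddings are computed from, misaligned boxes upstream directly degrade this matching step, which is exactly the effect the paper sets out to measure.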


Author(s):  
Sagar Chhetri ◽  
Abeer Alsadoon ◽  
Thair Al‐Dala'in ◽  
P. W. C. Prasad ◽  
Tarik A. Rashid ◽  
...  
