MultEYE: Monitoring System for Real-Time Vehicle Detection, Tracking and Speed Estimation from UAV Imagery on Edge-Computing Platforms

2021 ◽  
Vol 13 (4) ◽  
pp. 573
Author(s):  
Navaneeth Balamuralidhar ◽  
Sofia Tilon ◽  
Francesco Nex

We present MultEYE, a traffic monitoring system that can detect, track, and estimate the velocity of vehicles in a sequence of aerial images. The presented solution has been optimized to execute these tasks in real time on an embedded computer installed on an Unmanned Aerial Vehicle (UAV). To overcome the accuracy and computational-overhead limitations of existing object detection architectures, a multi-task learning methodology was employed by adding a segmentation head to an object detector backbone, resulting in the MultEYE object detection architecture. On a custom dataset, it achieved a 4.8% higher mean Average Precision (mAP) score while being 91.4% faster than the state-of-the-art model, and it generalized to different real-world traffic scenes. Dedicated object tracking and speed estimation algorithms were then optimized to reliably track objects from a UAV with limited computational effort. Different strategies for combining object detection, tracking, and speed estimation are also discussed. In our experiments, the optimized detector runs at an average frame rate of up to 29 frames per second (FPS) at a frame resolution of 512 × 320 on an Nvidia Xavier NX board, while the optimally combined detector, tracker, and speed estimator pipeline achieves up to 33 FPS on images of resolution 3072 × 1728. To our knowledge, the MultEYE system is one of the first traffic monitoring systems specifically designed and optimized for a UAV platform under real-world constraints.
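The shared-backbone, two-head idea behind the multi-task design described above can be sketched in a toy NumPy model. This is not the MultEYE architecture itself; all layer sizes, the loss weight, and the plain linear heads are illustrative assumptions, meant only to show how a segmentation head can share features with a detection head and contribute an auxiliary loss term.

```python
import numpy as np

rng = np.random.default_rng(0)

# Shared backbone feeding two task heads (sizes are illustrative).
W_backbone = rng.normal(size=(64, 32))   # input features -> shared features
W_detect   = rng.normal(size=(32, 5))    # shared -> box (4) + objectness (1)
W_segment  = rng.normal(size=(32, 16))   # shared -> coarse segmentation logits

def forward(x):
    shared = np.tanh(x @ W_backbone)     # representation shared by both tasks
    return shared @ W_detect, shared @ W_segment

def multitask_loss(det_out, det_tgt, seg_out, seg_tgt, w_seg=0.5):
    # Weighted sum of per-task losses: the segmentation head acts as an
    # auxiliary task that regularizes the shared backbone during training.
    det_loss = np.mean((det_out - det_tgt) ** 2)
    seg_loss = np.mean((seg_out - seg_tgt) ** 2)
    return det_loss + w_seg * seg_loss

x = rng.normal(size=(4, 64))             # a batch of 4 feature vectors
det, seg = forward(x)
print(det.shape, seg.shape)              # (4, 5) (4, 16)
```

Because both heads backpropagate into `W_backbone`, the segmentation objective pushes the shared features toward road/vehicle structure, which is the usual motivation for adding such a head to a detector.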

2021 ◽  
Vol 2021 ◽  
pp. 1-10
Author(s):  
Cheng-Jian Lin ◽  
Shiou-Yun Jeng ◽  
Hong-Wei Lioa

In recent years, vehicle detection and classification have become essential tasks of intelligent transportation systems, and real-time, accurate vehicle detection from image and video data for traffic monitoring remains challenging. The most noteworthy challenges are operating in real time to accurately locate and classify vehicles in traffic flows, and working around total occlusions that hinder vehicle tracking. We present a traffic monitoring approach that overcomes these challenges by employing convolutional neural networks based on You Only Look Once (YOLO). Real-time traffic monitoring systems have attracted significant attention from traffic management departments, and digitally processing and analyzing their videos in real time is crucial for extracting reliable traffic-flow data. This study therefore presents a real-time traffic monitoring system based on a virtual detection zone, a Gaussian mixture model (GMM), and YOLO to improve vehicle counting and classification efficiency. The GMM and the virtual detection zone are used for vehicle counting, and YOLO is used to classify vehicles. Moreover, the distance and time traveled by a vehicle are used to estimate its speed. The Montevideo Audio and Video Dataset (MAVD), the GARM Road-Traffic Monitoring data set (GRAM-RTM), and our own collected data sets are used to verify the proposed method. Experimental results indicate that the proposed method with YOLOv4 achieved the highest classification accuracies of 98.91% and 99.5% on the MAVD and GRAM-RTM data sets, respectively. The proposed method with YOLOv4 also achieves the highest classification accuracies of 99.1%, 98.6%, and 98% under daytime, nighttime, and rainy conditions, respectively. In addition, the average absolute percentage error of vehicle speed estimation with the proposed method is approximately 7.6%.
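The speed-estimation step described above (speed from distance traveled and elapsed time) reduces to a short calculation once pixel displacement can be converted to metres. The calibration constant below is a hypothetical value; in a real deployment it would come from camera geometry or a reference object of known length.

```python
# Toy speed estimation from two tracked centroid positions.
METRES_PER_PIXEL = 0.05   # assumed ground resolution (hypothetical)

def estimate_speed_kmh(p0, p1, dt_seconds):
    """Speed from two centroid positions (pixels) and elapsed time (s)."""
    dx, dy = p1[0] - p0[0], p1[1] - p0[1]
    pixel_dist = (dx * dx + dy * dy) ** 0.5
    metres = pixel_dist * METRES_PER_PIXEL
    return metres / dt_seconds * 3.6   # m/s -> km/h

# A vehicle moving 400 px in 2 s at 0.05 m/px -> 10 m/s -> 36 km/h.
print(estimate_speed_kmh((100, 200), (500, 200), 2.0))  # 36.0
```

In practice the two positions would be taken as the vehicle enters and leaves the virtual detection zone, so `dt_seconds` is the inter-frame time multiplied by the number of frames spent inside the zone.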


Author(s):  
M. Baskar ◽  
J. Ramkumar ◽  
C. Karthikeyan ◽  
V. Anbarasu ◽  
A. Balaji ◽  
...  

Author(s):  
Chi-Yat Lau ◽  
Man-Ching Yuen ◽  
Ka-Ho Yueng ◽  
Cheuk-Pan Fan ◽  
On-Yi Ko ◽  
...  

2020 ◽  
Vol 12 (21) ◽  
pp. 9177
Author(s):  
Vishal Mandal ◽  
Abdul Rashid Mussah ◽  
Peng Jin ◽  
Yaw Adu-Gyamfi

Manual traffic surveillance can be a daunting task, as Traffic Management Centers operate a myriad of cameras installed over a network. Injecting some level of automation could lighten the workload of human operators performing manual surveillance and facilitate proactive decisions that would reduce the impact of incidents and recurring congestion on roadways. This article presents a novel approach to automatically monitoring real-time traffic footage using deep convolutional neural networks and a stand-alone graphical user interface. The authors describe results obtained while developing models that serve as an integrated framework for an artificial-intelligence-enabled traffic monitoring system. The proposed system deploys several state-of-the-art deep learning algorithms to automate different traffic monitoring needs. Taking advantage of a large database of annotated video surveillance data, deep learning models are trained to detect queues, track stationary vehicles, and tabulate vehicle counts. A pixel-level segmentation approach is applied to detect traffic queues and predict their severity. Real-time object detection algorithms coupled with different tracking systems are deployed to automatically detect stranded vehicles and perform vehicular counts. At each stage of development, experimental results are presented to demonstrate the effectiveness of the proposed system. Overall, the results demonstrate that the proposed framework performs satisfactorily under varied conditions without being severely impacted by environmental hazards such as blurry camera views, low illumination, rain, or snow.
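The vehicle-counting step that couples detection with tracking can be illustrated with a minimal line-crossing counter: a vehicle is counted once when its tracked centroid crosses a virtual line. The track IDs, positions, and line location below are made-up values; in the system described above they would come from the detector and tracker.

```python
# Minimal line-crossing counter over per-vehicle centroid histories.
COUNT_LINE_Y = 300   # virtual counting line (hypothetical position)

def count_crossings(tracks):
    """tracks: {track_id: [(x, y), ...]} centroid history per vehicle."""
    count = 0
    for history in tracks.values():
        for (x0, y0), (x1, y1) in zip(history, history[1:]):
            if y0 < COUNT_LINE_Y <= y1:   # crossed the line moving downward
                count += 1
                break                      # count each vehicle once
    return count

tracks = {
    1: [(50, 280), (52, 295), (55, 310)],   # crosses the line
    2: [(90, 100), (91, 120), (92, 140)],   # never reaches it
}
print(count_crossings(tracks))  # 1
```

Tying the count to a persistent track ID rather than to raw detections is what prevents a slow or stopped vehicle from being counted on every frame.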


Sensors ◽  
2020 ◽  
Vol 20 (18) ◽  
pp. 5280
Author(s):  
Balakrishnan Ramalingam ◽  
Rajesh Elara Mohan ◽  
Sathian Pookkuttath ◽  
Braulio Félix Gómez ◽  
Charan Satya Chandra Sairam Borusu ◽  
...  

Insect detection and control at an early stage are essential to the built environment (human-made physical spaces such as homes, hotels, camps, hospitals, parks, pavement, food industries, etc.) and to agricultural fields. Currently, such insect control measures are manual, tedious, unsafe, and time-consuming labor-dependent tasks. With recent advancements in Artificial Intelligence (AI) and the Internet of Things (IoT), several maintenance tasks can be automated, which significantly improves productivity and safety. This work proposes a real-time remote insect trap monitoring system and insect detection method using IoT and Deep Learning (DL) frameworks. The remote trap monitoring framework is constructed using IoT and the Faster R-CNN (Region-based Convolutional Neural Network) unified object detection framework with a ResNet50 (50-layer Residual Neural Network) backbone. The Faster R-CNN ResNet50 detector was trained on built-environment and farm-field insect images and deployed on the IoT platform. The proposed system was tested in real time on a four-layer IoT architecture using built-environment insect images captured through sticky trap sheets; farm-field insects were tested using a separate insect image database. The experimental results show that the proposed system can automatically identify built-environment and farm-field insects with an average accuracy of 94%.
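Downstream of the detector, the remote-monitoring logic amounts to a simple decision rule over detections. The sketch below is an assumption about how such a rule might look, not the paper's implementation; the detection tuples, confidence threshold, and alert count are all hypothetical.

```python
# Sketch of the decision step after detection: flag a trap for attention
# when enough confident insect detections appear in its latest image.
CONF_THRESHOLD = 0.7    # keep only confident detections (hypothetical)
ALERT_COUNT = 5         # flag the trap at this many insects (hypothetical)

def should_alert(detections):
    """detections: list of (class_name, confidence) from the detector."""
    confident = [d for d in detections if d[1] >= CONF_THRESHOLD]
    return len(confident) >= ALERT_COUNT

dets = [("moth", 0.9), ("fly", 0.8), ("fly", 0.75),
        ("ant", 0.72), ("moth", 0.71), ("fly", 0.3)]
print(should_alert(dets))  # True
```

In an IoT deployment, a rule like this would run close to the camera so that only alerts and summary counts, not full images, travel over the network.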


Author(s):  
De Rosal Ignatius Moses Setiadi ◽  
Rizki Ramadhan Fratama ◽  
Nurul Diyah Ayu Partiningsih ◽  
Eko Hari Rachmawanto ◽  
Christy Atika Sari ◽  
...  

Author(s):  
Jaesun Park ◽  
Sang Boem Lim ◽  
KiHo Hong ◽  
Mu Wook Pyeon ◽  
Jin You Lin

2021 ◽  
Vol 19 (10) ◽  
pp. 51-60
Author(s):  
Yuta Ukon ◽  
Shuhei Yoshida ◽  
Shoko Ohteru ◽  
Namiko Ikeda

2020 ◽  
Vol 2 ◽  
pp. 230-245
Author(s):  
Mohammed Sarrab ◽  
Supriya Pulparambil ◽  
Medhat Awadalla
