Real-Time Road Hazard Information System

2020 ◽  
Vol 5 (9) ◽  
pp. 75
Author(s):  
Carlos Pena-Caballero ◽  
Dongchul Kim ◽  
Adolfo Gonzalez ◽  
Osvaldo Castellanos ◽  
Angel Cantu ◽  
...  

Infrastructure is a significant factor in economic growth for systems of government. Maintaining infrastructure quality is essential for increasing economic productivity, and roads are one of its key elements: they help local and national economies be more productive. Furthermore, road damage such as potholes, debris, or cracks causes many on-road accidents and has cost many drivers their lives. In this paper, we propose a system that uses Convolutional Neural Networks to detect road degradation without data pre-processing, built on the state-of-the-art YOLO object detector. First, we developed a basic system covering data collection, pre-processing, and classification. Second, we improved classification performance, achieving 97.98% accuracy in overall model testing, and then applied pixel-level classification and detection via semantic segmentation. With this method we were able to detect and classify four classes (Manhole, Pothole, Blurred Crosswalk, Blurred Street Line); the trained segmentation model identifies and classifies all four classes in an image effectively and correctly. Although the detectors achieve excellent accuracy, their network size makes them perform poorly on embedded systems. We therefore opted for a smaller, less accurate detector that runs in real time on an inexpensive embedded system, such as the Google Coral Dev Board, without requiring a powerful and expensive GPU.
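As a rough illustration of the post-processing such a detector performs, the sketch below filters raw YOLO-style detections by a confidence threshold and attaches the paper's four class names. The tuple layout and the 0.5 threshold are assumptions for illustration, not details taken from the paper.

```python
# The paper's four road-hazard classes; index order is an assumption.
CLASSES = ["Manhole", "Pothole", "Blurred Crosswalk", "Blurred Street Line"]

def filter_detections(raw, conf_threshold=0.5):
    """Keep detections above the confidence threshold and attach class names.

    raw: list of (class_id, confidence, (x, y, w, h)) tuples, as a
    YOLO-style detector might emit after non-max suppression.
    """
    results = []
    for class_id, conf, box in raw:
        if conf >= conf_threshold:
            results.append({"label": CLASSES[class_id],
                            "confidence": conf,
                            "box": box})
    return results
```

On an embedded target like the Coral Dev Board, a loop like this would run per frame on the quantized detector's output.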

2021 ◽  
Vol 3 (5) ◽  
Author(s):  
João Gaspar Ramôa ◽  
Vasco Lopes ◽  
Luís A. Alexandre ◽  
S. Mogo

Abstract In this paper, we propose three methods for door state classification with the goal of improving robot navigation in indoor spaces. These methods were also designed to be usable in other areas and applications, since they are not limited to door detection as other related works are. Our methods work offline, on low-powered computers such as the Jetson Nano, in real time, and can differentiate between open, closed and semi-open doors. We use the 3D object classification network PointNet; real-time semantic segmentation algorithms such as FastFCN, FC-HarDNet, SegNet and BiSeNet; the object detection algorithm DetectNet; and the 2D object classification networks AlexNet and GoogleNet. We built a 3D and RGB door dataset with images from several indoor environments using a RealSense D435 3D camera. This dataset is freely available online. All methods are analysed with respect to their accuracy and speed on a low-powered computer. We conclude that it is possible to run a door classification algorithm in real time on a low-power device.
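The three-way open/closed/semi-open decision can be sketched as thresholding how much of the door frame reads as free space in a depth image. The opening-ratio feature and both thresholds are hypothetical illustrations, not the authors' actual method.

```python
def classify_door_state(opening_ratio, closed_max=0.1, open_min=0.6):
    """Classify a door from the fraction of its frame that reads as
    free space in a depth image (hypothetical thresholds).

    opening_ratio: value in [0, 1], e.g. the share of frame pixels
    whose depth exceeds the door-plane depth.
    """
    if opening_ratio <= closed_max:
        return "closed"
    if opening_ratio >= open_min:
        return "open"
    return "semi-open"
```

A rule this cheap would run comfortably in real time on a Jetson Nano alongside the segmentation network that produces the ratio.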


Sensors ◽  
2021 ◽  
Vol 21 (2) ◽  
pp. 367
Author(s):  
Thang Bui Quy ◽  
Jong-Myon Kim

This paper introduces a technique using a k-nearest neighbor (k-NN) classifier and hybrid features extracted from acoustic emission (AE) signals for detecting leakages in a gas pipeline. The whole algorithm is embedded in a microcontroller unit (MCU) to detect leaks in real time. The embedded system continuously receives signals from a sensor mounted on the surface of a gas pipeline to diagnose any leak. To construct the system, AE signals are first recorded from a gas pipeline testbed under various conditions and used to synthesize the leak detection algorithm via offline signal analysis. The current work explores different features of the normal and leaking states from the corresponding datasets and eliminates redundant and outlier features to improve performance and guarantee the real-time character of the leak detection program. For robust leak detection, the paper normalizes the features and adapts the trained k-NN classifier to the specific environment where the system is installed. Aside from using a classifier to categorize the normal and leaking states of a pipeline, the system monitors the accumulative leaking event occurrence rate (ALEOR) against a defined threshold to conclude the state of the pipeline. The entire proposed system is implemented on the 32F746G-DISCOVERY board, and to verify it, numerous real AE signals stored on a hard drive are transferred to the board. The experimental results show that the proposed system executes the leak detection algorithm in a period shorter than the total input data time, thus guaranteeing its real-time character. Furthermore, the system consistently yields high average classification accuracy (ACA) even when white noise is added to the input signal, and false alarms do not occur with a reasonable ALEOR threshold.
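A minimal pure-Python sketch of two ingredients the abstract names, min-max feature normalization and k-NN majority voting. The feature layout, distance metric and k are assumptions; the MCU implementation would use fixed-point C, not Python.

```python
import math

def normalize(samples, mins, maxs):
    """Min-max scale each feature to [0, 1] using per-feature bounds
    learned offline from the training set."""
    return [[(v - lo) / (hi - lo) if hi > lo else 0.0
             for v, lo, hi in zip(s, mins, maxs)] for s in samples]

def knn_predict(train, labels, query, k=3):
    """Majority vote among the k Euclidean-nearest training samples."""
    order = sorted(range(len(train)),
                   key=lambda i: math.dist(train[i], query))
    votes = [labels[i] for i in order[:k]]
    return max(set(votes), key=votes.count)
```

In the deployed system each incoming AE window would be reduced to a feature vector, normalized with the stored bounds, then classified; the ALEOR counter would accumulate "leak" verdicts over time.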


2018 ◽  
Vol 33 (9) ◽  
pp. 787-792
Author(s):  
马永杰 MA Yong-jie ◽  
宋晓凤 SONG Xiao-feng ◽  
李雪燕 LI Xue-yan ◽  
刘姣姣 LIU Jiao-jiao

Sensors ◽  
2020 ◽  
Vol 20 (18) ◽  
pp. 5080
Author(s):  
Baohua Qiang ◽  
Ruidong Chen ◽  
Mingliang Zhou ◽  
Yuanchao Pang ◽  
Yijie Zhai ◽  
...  

In recent years, ever more image data has come from various sensors, and object detection plays a vital role in image understanding. For object detection in complex scenes, more detailed information must be extracted from the image to improve detection accuracy. In this paper, we propose an object detection algorithm for images that jointly performs semantic segmentation (SSOD). First, we construct a feature extraction network that integrates an hourglass-structure network with an attention-mechanism layer to extract and fuse multi-scale features, generating high-level features with rich semantic information. Second, the semantic segmentation task is used as an auxiliary task so that the algorithm performs multi-task learning. Finally, the multi-scale features are used to predict the location and category of each object. The experimental results show that our algorithm substantially enhances object detection performance, consistently outperforms the three comparison algorithms, and reaches real-time detection speed.
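The auxiliary-task setup can be illustrated with two small helpers: a weighted multi-task loss combining the detection and segmentation objectives, and a mean-IoU metric for evaluating the segmentation head. The 0.5 weight is an assumed hyperparameter, not a value from the paper.

```python
def multitask_loss(det_loss, seg_loss, seg_weight=0.5):
    """Total training loss: detection loss plus a down-weighted
    auxiliary segmentation loss (seg_weight is an assumption)."""
    return det_loss + seg_weight * seg_loss

def mean_iou(pred, target, num_classes):
    """Mean intersection-over-union over classes, for flat label lists."""
    ious = []
    for c in range(num_classes):
        inter = sum(1 for p, t in zip(pred, target) if p == c and t == c)
        union = sum(1 for p, t in zip(pred, target) if p == c or t == c)
        if union:  # skip classes absent from both prediction and target
            ious.append(inter / union)
    return sum(ious) / len(ious)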


2019 ◽  
Vol 9 (16) ◽  
pp. 3225 ◽  
Author(s):  
He ◽  
Huang ◽  
Wei ◽  
Li ◽  
Guo

In recent years, significant advances have been made in visual detection, and an abundance of outstanding models has been proposed. However, state-of-the-art object detection networks are inefficient at detecting small targets, and they commonly fail to run on portable devices or embedded systems due to their high complexity. In this work, a real-time object detection model, termed Tiny Fast You Only Look Once (TF-YOLO), is developed for implementation on an embedded system. First, the k-means++ algorithm is applied to cluster the dataset, which yields better prior boxes for the targets. Second, inspired by the multi-scale prediction idea of the Feature Pyramid Network (FPN) algorithm, the YOLOv3 framework is improved and optimized to detect the extracted features at three scales. In this way, the modified network is sensitive to small targets. Experimental results demonstrate that the proposed TF-YOLO is a smaller, faster and more efficient network model that improves end-to-end training and real-time object detection on a variety of devices.
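Anchor clustering with k-means++ and an IoU-based distance can be sketched as follows. Using d = 1 − IoU on (width, height) pairs aligned at a common corner is the standard YOLO anchor-clustering trick; the paper's exact procedure may differ, and the box values below are illustrative.

```python
import random

def iou_wh(a, b):
    """IoU of two (w, h) boxes aligned at a common corner."""
    inter = min(a[0], b[0]) * min(a[1], b[1])
    union = a[0] * a[1] + b[0] * b[1] - inter
    return inter / union

def kmeanspp_seeds(boxes, k, rng=random.Random(0)):
    """k-means++ seeding with d = 1 - IoU as the distance: each new seed
    is drawn with probability proportional to its squared distance from
    the nearest existing seed."""
    seeds = [rng.choice(boxes)]
    while len(seeds) < k:
        d2 = [min((1 - iou_wh(b, s)) ** 2 for s in seeds) for b in boxes]
        r = rng.uniform(0, sum(d2))
        acc = 0.0
        for b, w in zip(boxes, d2):
            acc += w
            if acc >= r:
                seeds.append(b)
                break
    return seeds
```

The seeds then initialize ordinary k-means iterations; the resulting cluster centres become the prior (anchor) boxes for the detector.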


2016 ◽  
Vol 2016 ◽  
pp. 1-16 ◽  
Author(s):  
Jia Wei Tang ◽  
Nasir Shaikh-Husin ◽  
Usman Ullah Sheikh ◽  
M. N. Marsono

Moving target detection is the most common task for an Unmanned Aerial Vehicle (UAV): finding and tracking objects of interest from a bird's-eye view in mobile aerial surveillance for civilian applications such as search-and-rescue operations. The complex detection algorithm can be implemented in a real-time embedded system using a Field Programmable Gate Array (FPGA). This paper presents the development of a real-time moving target detection System-on-Chip (SoC) using an FPGA for deployment on a UAV. The detection algorithm uses an area-based image registration technique comprising motion estimation and object segmentation. The moving target detection system has been prototyped on a low-cost Terasic DE2-115 board mounted with a TRDB-D5M camera. The system consists of a Nios II processor and stream-oriented dedicated hardware accelerators running at a 100 MHz clock rate, achieving 30 frames per second for 640 × 480-pixel greyscale video.
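The registration-then-segmentation pipeline can be illustrated in software: compensate the estimated camera motion, then threshold the frame difference. Assuming a pure translation and a fixed threshold is a simplification for illustration; the actual SoC implements this in dedicated hardware accelerators.

```python
def shift(frame, dx, dy, fill=0):
    """Translate a 2D greyscale frame (list of rows) by (dx, dy),
    padding uncovered pixels with `fill`."""
    h, w = len(frame), len(frame[0])
    return [[frame[y - dy][x - dx] if 0 <= y - dy < h and 0 <= x - dx < w
             else fill for x in range(w)] for y in range(h)]

def moving_mask(prev, curr, dx, dy, thresh=30):
    """Register the previous frame onto the current one using the
    estimated camera translation (dx, dy), then threshold the absolute
    difference to segment moving objects."""
    reg = shift(prev, dx, dy)
    return [[1 if abs(c - p) > thresh else 0
             for c, p in zip(crow, prow)]
            for crow, prow in zip(curr, reg)]
```

In the FPGA design, the motion-estimation block would supply (dx, dy) per frame and the differencing/thresholding would be a streaming pipeline stage.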


2021 ◽  
Author(s):  
Asma Jamesh

Every year, 1.5 lakh (150,000) people die in road mishaps in India, and 40% of these accidents are due to drowsy or sleep-deprived driving. According to several statistics, almost all commercial private drivers drive continuously for 10 hours a day. Nearly all road accidents caused by lack of sleep and drowsiness are highly hazardous and fatal. The drowsy-driver detection algorithm acquires real-time video and captures snapshots using an external webcam. Using the Viola-Jones algorithm, the face and eyes of the driver are detected. The original RGB eye image is converted to a greyscale image and then to a binary image. Two techniques, Maximally Stable Extremal Regions (MSER) feature detection and binarization, are deployed to determine the driver's state. This research paper focuses on the development of a MATLAB algorithm that alerts the driver or co-passenger in time by plotting the MSER features and thresholding the acquired real-time images.
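The binarization step can be sketched as follows: count dark pixels in the detected eye region and flag drowsiness after several consecutive "closed" frames (a closed eye shows skin rather than the dark pupil/iris). The thresholds and frame count are illustrative assumptions, and the original work is in MATLAB; this is a Python sketch.

```python
def binarize(gray, threshold=128):
    """Threshold a greyscale eye patch: 1 = dark pixel (pupil/iris)."""
    return [[1 if px < threshold else 0 for px in row] for row in gray]

def is_drowsy(eye_frames, dark_ratio_min=0.15, closed_frames=3):
    """Flag drowsiness when the dark-pixel ratio stays below the
    open-eye level for several consecutive frames (assumed thresholds)."""
    run = 0
    for gray in eye_frames:
        binary = binarize(gray)
        dark = sum(map(sum, binary))
        total = len(binary) * len(binary[0])
        run = run + 1 if dark / total < dark_ratio_min else 0
        if run >= closed_frames:
            return True
    return False
```

In the full system, the Viola-Jones stage would supply the eye patches per frame, and a True result would trigger the audible alert.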


2021 ◽  
pp. 1-26
Author(s):  
E. Çetin ◽  
C. Barrado ◽  
E. Pastor

Abstract The number of unmanned aerial vehicles (UAVs, also known as drones) in the airspace worldwide has increased dramatically for tasks such as surveillance, reconnaissance, shipping and delivery. However, a small number of them, acting maliciously, can raise many security risks. Recent Artificial Intelligence (AI) capabilities for object detection can be very useful for identifying and classifying drones flying in the airspace and, in particular, are a good solution against malicious drones. A number of counter-drone solutions are being developed, but the cost of ground-based drone detection systems can be very high, depending on the number of sensors deployed and the fusion algorithms required. We propose a low-cost counter-drone solution composed solely of a guard-drone that should be able to detect, locate and eliminate any malicious drone. In this paper, a state-of-the-art object detection algorithm is used to train the system to detect drones. Three existing object detection models are improved by transfer learning and tested for real-time drone detection. Training is done with a new dataset of drone images, constructed automatically from a very realistic flight simulator. While flying, the guard-drone captures random images of the area while a malicious drone is also flying. The drone images are auto-labelled using the location and attitude information available in the simulator for both drones; the world coordinates of the malicious drone's position are projected into image pixel coordinates. The training and test results show a minimum accuracy improvement of 22% with respect to state-of-the-art object detection models, a promising result that enables a step towards a fully autonomous counter-drone system.
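The world-to-pixel projection used for auto-labelling can be sketched with a pinhole camera model. The focal length, principal point, axis convention and the assumption of an unrotated camera are all simplifications for illustration; a real implementation would apply the guard-drone's full attitude (rotation) from the simulator.

```python
def world_to_pixel(point, cam_pos, f=800.0, cx=320.0, cy=240.0):
    """Project a world point into pixel coordinates for a camera at
    cam_pos looking along +X with no rotation (simplifying assumption).
    World frame: x forward, y right, z up. Image: u right, v down."""
    zc = point[0] - cam_pos[0]       # depth along the viewing axis
    xc = point[1] - cam_pos[1]       # lateral offset -> image u
    yc = -(point[2] - cam_pos[2])    # world up maps to image -v
    if zc <= 0:
        return None                  # behind the camera: no projection
    u = cx + f * xc / zc
    v = cy + f * yc / zc
    return (u, v)
```

Given both drones' simulator poses, the malicious drone's projected pixel position becomes the centre of its auto-generated bounding-box label.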


Impact ◽  
2020 ◽  
Vol 2020 (2) ◽  
pp. 9-11
Author(s):  
Tomohiro Fukuda

Mixed reality (MR) is rapidly becoming a vital tool, not just in gaming, but also in education, medicine, construction and environmental management. The term refers to systems in which computer-generated content is superimposed over objects in a real-world environment across one or more sensory modalities. Although most of us have heard of the use of MR in computer games, it also has applications in military and aviation training, as well as tourism, healthcare and more. In addition, it has potential uses in architecture and design, where buildings can be superimposed on existing locations to render 3D visualisations of plans. However, one major challenge that remains in MR development is the issue of real-time occlusion: hiding 3D virtual objects behind real objects. Dr Tomohiro Fukuda, who is based at the Division of Sustainable Energy and Environmental Engineering, Graduate School of Engineering at Osaka University in Japan, is an expert in this field. Researchers led by Dr Fukuda are tackling the issue of occlusion in MR. They are currently developing an MR system that achieves real-time occlusion by harnessing deep learning, using a semantic segmentation technique for outdoor landscape design simulation. This methodology can be used to automatically estimate the visual environment before and after construction projects.
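The occlusion step can be sketched as mask-guided compositing: virtual pixels are drawn only where the segmentation mask does not mark a real foreground occluder. The mask encoding and the use of `None` for "no virtual content here" are assumptions for this sketch, not details of the actual system.

```python
def composite(real, virtual, seg_mask, occluder_class=1):
    """Overlay virtual pixels onto the real frame, but keep the real
    pixel wherever the per-pixel segmentation mask marks a foreground
    occluder (e.g. a tree or person in front of the virtual building)."""
    out = []
    for r_row, v_row, m_row in zip(real, virtual, seg_mask):
        out.append([r if (v is None or m == occluder_class) else v
                    for r, v, m in zip(r_row, v_row, m_row)])
    return out
```

Running the segmentation network per frame keeps the occlusion mask synchronised with camera motion, which is what makes the simulation real-time.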

