Camera-LiDAR Multi-Level Sensor Fusion for Target Detection at the Network Edge

Sensors ◽  
2021 ◽  
Vol 21 (12) ◽  
pp. 3992
Author(s):  
Javier Mendez ◽  
Miguel Molina ◽  
Noel Rodriguez ◽  
Manuel P. Cuellar ◽  
Diego P. Morales

There have been significant advances in target detection in the autonomous-vehicle context. To develop more robust systems that can overcome weather hazards as well as sensor problems, the sensor fusion approach is taking the lead in this context. Light Detection and Ranging (LiDAR) and camera sensors are two of the most used sensors for this task, since they can accurately provide important features such as a target's depth and shape. However, most current state-of-the-art target detection algorithms for autonomous cars do not take into consideration the hardware limitations of the vehicle, such as its reduced computing power in comparison with cloud servers and its strict latency requirements. In this work, we propose Edge Computing Tensor Processing Unit (TPU) devices as hardware support, owing to their computing capabilities for machine learning algorithms and their reduced power consumption. We developed an accurate and compact target detection model for these devices. Our proposed Multi-Level Sensor Fusion model has been optimized for the network edge, specifically for the Google Coral TPU. As a result, high accuracy is obtained on the challenging KITTI dataset while reducing the memory consumption and latency of the system.
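As one illustration of the geometric alignment step that camera-LiDAR fusion on KITTI relies on, the sketch below projects LiDAR points into the camera image plane. The matrix names (`P2`, `R0_rect`, `Tr_velo_to_cam`) follow KITTI calibration-file conventions; this is an illustrative sketch, not the paper's own fusion code.

```python
import numpy as np

def project_lidar_to_image(points_velo, P2, R0_rect, Tr_velo_to_cam):
    """Project Nx3 LiDAR points (velodyne frame) to Nx2 pixel coordinates
    using KITTI-style calibration matrices (illustrative sketch)."""
    n = points_velo.shape[0]
    pts_h = np.hstack([points_velo, np.ones((n, 1))])   # homogeneous, Nx4
    cam = R0_rect @ (Tr_velo_to_cam @ pts_h.T)          # 3xN, rectified camera frame
    img = P2 @ np.vstack([cam, np.ones((1, n))])        # 3xN, projective coords
    img = img[:2] / img[2]                              # divide by depth
    return img.T                                        # Nx2 (u, v) pixels
```

Points that project inside the image can then be paired with per-pixel camera features, which is the usual entry point for multi-level fusion schemes.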


Universe ◽  
2021 ◽  
Vol 7 (7) ◽  
pp. 211
Author(s):  
Xingzhu Wang ◽  
Jiyu Wei ◽  
Yang Liu ◽  
Jinhao Li ◽  
Zhen Zhang ◽  
...  

Recently, astronomy has witnessed great advancements in detectors and telescopes. Imaging data collected by these instruments are organized into very large datasets that form data-oriented astronomy. The imaging data contain many radio galaxies (RGs) that are of interest to astronomers. However, given the extreme scale of astronomical databases in the information age, a manual search for these galaxies is impractical. Therefore, the ability to detect specific types of galaxies largely depends on computer algorithms. Applying machine learning algorithms to large astronomical datasets can detect galaxies from photometric images more effectively. Astronomers are therefore motivated to develop tools that automatically analyze massive imaging data, including automatic morphological detection of specified radio sources. Galaxy Zoo projects have generated great interest in visually classifying galaxy samples using CNNs. Banfield et al. studied radio morphologies and host galaxies derived from visual inspection in the Radio Galaxy Zoo project. However, galaxy classification has received considerably more study than galaxy detection. We develop a galaxy detection model that localizes and classifies Fanaroff–Riley class I (FR I) and Fanaroff–Riley class II (FR II) galaxies. The field of target detection has also developed rapidly since the convolutional neural network was proposed. You Only Look Once: Unified, Real-Time Object Detection (YOLO) is a neural-network-based target detection model proposed by Redmon et al. We made several improvements to the detection of dense galaxies based on the original YOLOv5, mainly the following. (1) We use Varifocal loss, which weighs positive and negative samples asymmetrically and highlights the principal positive samples during training.
(2) Our model adds an attention mechanism over the convolution kernels so that the feature-extraction network can adjust the size of its receptive field dynamically in deep convolutional neural networks. In this way, the model adapts well to galaxies of different sizes in an image. (3) We use empirical practices suited to small-target detection, such as image segmentation and reducing the stride of the convolutional layers. Beyond these three contributions, this work also combines different data sources, i.e., radio images and optical images, for better classification performance and more accurate positioning. We used optical image data from SDSS, radio image data from FIRST, and label data from FR I and FR II catalogs to create a dataset of FR Is and FR IIs. We then used this dataset to train our improved YOLOv5 model and realize automatic classification and detection of FR Is and FR IIs. Experimental results show that our improved method achieves better performance: the mAP@0.5 of our model reaches 82.3%, and the locations (RA and Dec) of the galaxies are identified more accurately. The model has practical astronomical value. For example, it can help astronomers find FR I and FR II galaxies to build larger galaxy catalogs, and the detection method can be extended to other types of RGs. Astronomers can thus locate galaxies of a specific type in considerably less time with minimal human intervention, or combine the detections with other observational data (spectra and redshifts) to explore further properties of the galaxies.


2020 ◽  
Vol 8 ◽  
pp. 14-21
Author(s):  
Surya Man Koju ◽  
Nikil Thapa

This paper presents an economical and reconfigurable RF-based wireless communication system at 2.4 GHz between two vehicles. It is implemented as digital VLSI on two Spartan-3E FPGAs, where each vehicle receives the other vehicle's information and shares its own, supporting autonomous-vehicle technology. The information includes the vehicle's speed, location, heading, and operation, such as braking status and turning status. In this work, the FPGA is used as the central signal-processing unit and is interfaced with two microcontrollers (ATmega328P). Microcontroller-1 is interfaced with a compass module, a GPS module, a DF Player Mini, and an nRF24L01 module; it determines the relative position and relative heading as seen from one vehicle to the other. Microcontroller-2 measures the speed of the vehicle digitally. The resulting data from these microcontrollers are transmitted separately and serially through a UART interface to the FPGA. On the FPGA, signal-processing tasks such as speed comparison, turn comparison, distance-range measurement, and vehicle-operation processing are carried out to generate voice-announcement commands, warning signals, and event signals; these outputs warn drivers about potential accidents and help prevent crashes before they happen.
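The relative-position computation assigned to Microcontroller-1 amounts to a great-circle distance and an initial bearing between two GPS fixes. A minimal sketch, in Python rather than the original AVR firmware, using the standard haversine and forward-azimuth formulas:

```python
import math

def distance_and_bearing(lat1, lon1, lat2, lon2):
    """Great-circle distance (metres) and initial bearing (degrees,
    clockwise from north) from GPS fix 1 to GPS fix 2.
    Illustrative sketch of the relative-position step, not the
    paper's microcontroller code."""
    R = 6371000.0  # mean Earth radius in metres
    phi1, phi2 = math.radians(lat1), math.radians(lat2)
    dphi = math.radians(lat2 - lat1)
    dlmb = math.radians(lon2 - lon1)

    # haversine distance
    a = math.sin(dphi / 2) ** 2 + \
        math.cos(phi1) * math.cos(phi2) * math.sin(dlmb / 2) ** 2
    dist = 2 * R * math.asin(math.sqrt(a))

    # initial bearing (forward azimuth), normalized to [0, 360)
    y = math.sin(dlmb) * math.cos(phi2)
    x = math.cos(phi1) * math.sin(phi2) - \
        math.sin(phi1) * math.cos(phi2) * math.cos(dlmb)
    bearing = (math.degrees(math.atan2(y, x)) + 360.0) % 360.0
    return dist, bearing
```

Subtracting the receiving vehicle's own compass heading from this bearing then gives the relative heading used for the warning logic.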

