AI-Based Sensor Information Fusion for Supporting Deep Supervised Learning

Sensors ◽  
2019 ◽  
Vol 19 (6) ◽  
pp. 1345 ◽  
Author(s):  
Carson Leung ◽  
Peter Braun ◽  
Alfredo Cuzzocrea

In recent years, artificial intelligence (AI) and its subarea of deep learning have drawn the attention of many researchers. At the same time, advances in technologies enable the generation or collection of large amounts of valuable data (e.g., sensor data) from various sources in different applications, such as those for the Internet of Things (IoT), which in turn aims towards the development of smart cities. With the availability of sensor data from various sources, sensor information fusion is in demand for effective integration of big data. In this article, we present an AI-based sensor information fusion system for supporting deep supervised learning of transportation data generated and collected from various types of sensors, including remotely sensed imagery for the geographic information system (GIS), accelerometers, as well as sensors for the global navigation satellite system (GNSS) and global positioning system (GPS). The discovered knowledge and information returned from our system provide analysts with a clearer understanding of trajectories or mobility of citizens, which in turn helps to develop better transportation models to achieve the ultimate goal of smarter cities. Evaluation results show the effectiveness and practicality of our AI-based sensor information fusion system for supporting deep supervised learning of big transportation data.
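
As a rough illustration of the feature-level fusion such a system performs, the sketch below (a hypothetical `fuse_features` helper, not the authors' implementation) normalizes each sensor's feature vector before concatenating them into a single input for a supervised model:

```python
import numpy as np

def fuse_features(gnss_feat, accel_feat, gis_feat):
    """Feature-level fusion: min-max normalize each sensor's feature
    vector, then concatenate into one input for a supervised model."""
    def norm(v):
        v = np.asarray(v, dtype=float)
        rng = v.max() - v.min()
        return (v - v.min()) / rng if rng > 0 else np.zeros_like(v)
    return np.concatenate([norm(gnss_feat), norm(accel_feat), norm(gis_feat)])

# Toy features from three sensor sources (values are illustrative only).
fused = fuse_features([3.0, 1.0], [0.2, 0.8, 0.5], [10.0])
```

Normalizing per sensor before concatenation keeps one sensor's scale from dominating the fused representation.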

Author(s):  
S. Maier ◽  
T. Gostner ◽  
F. van de Camp ◽  
A. H. Hoppe

Abstract. In many fields today, a team must carry out operational planning for a precise geographical location. Examples include staff work, the preparation of surveillance tasks at major events or state visits, and sensor deployment planning for military and civil reconnaissance. For these purposes, Fraunhofer IOSB is developing the Digital Map Table (DigLT). When making important decisions, it is often helpful or even necessary to assess a situation on site, and an augmented reality (AR) solution could be useful for this assessment. To visualize markers at specific geographical coordinates in augmented reality, a smartphone has to be aware of its position relative to the world. It uses the sensor data of the camera and inertial measurement unit (IMU) for AR while determining its absolute location and direction with the Global Navigation Satellite System (GNSS) and its magnetic compass. To validate the positional accuracy of AR markers, we investigated the current state of the art and existing solutions. A prototype application was developed and connected to the DigLT. With this application, it is possible to place markers at geographical coordinates that show up at the correct location in augmented reality anywhere in the world. Additionally, a function was implemented that lets the user select a point from the environment in augmented reality, whose geographical coordinates are sent to the DigLT. The accuracy and practicality of marker placement were examined using geodetic reference points. We conclude that it is possible to mark larger objects such as a car or a house, but the accuracy depends mainly on the internal compass, which causes a rotational error that increases with the distance to the target.
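
The reported distance-dependent rotational error follows from basic trigonometry; the hypothetical `lateral_marker_error` helper below (an illustration, not the paper's code) shows how a compass heading error translates into a lateral offset of an AR marker:

```python
import math

def lateral_marker_error(distance_m, heading_error_deg):
    """Lateral offset of an AR marker caused by a compass heading error.
    Uses the chord length 2*d*sin(theta/2); for small angles the offset
    grows roughly linearly with distance."""
    return 2.0 * distance_m * math.sin(math.radians(heading_error_deg) / 2.0)

# A 5-degree compass error shifts a marker 50 m away by roughly 4.4 m,
# which explains why only larger objects can be marked reliably.
offset = lateral_marker_error(50.0, 5.0)
```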


2020 ◽  
Vol 12 (3) ◽  
pp. 411 ◽  
Author(s):  
Sangeetha Shankar ◽  
Michael Roth ◽  
Lucas Andreas Schubert ◽  
Judith Anne Verstegen

Up-to-date geodatasets on railway infrastructure are valuable resources for the field of transportation. This paper investigates three methods for mapping the center lines of railway tracks using heterogeneous sensor data: (i) conditional selection of satellite navigation (GNSS) data, (ii) a combination of inertial measurements (IMU data) and GNSS data in a Kalman filtering and smoothing framework and (iii) extraction of center lines from laser scanner data. Several combinations of the methods are compared with a focus on mapping in tree-covered areas. The center lines of the railway tracks are extracted by applying these methods to a test dataset collected by a road-rail vehicle. The guard rails in the test area were also extracted during the center line detection process. The combination of methods (i) and (ii) gave the best result for the track on which the measurement vehicle had moved, mapping almost 100% of the track. The combination of methods (ii) and (iii) and the combination of all three methods gave the best result for the other parallel tracks, mapping between 25% and 80%. The mean perpendicular distance of the mapped center lines from the reference data was 1.49 meters.
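
Method (ii) can be illustrated with a minimal one-dimensional Kalman filter. The `kalman_1d` sketch below (function name and noise parameters are assumptions, not the paper's implementation) predicts position from IMU acceleration and corrects the prediction with noisy GNSS fixes:

```python
import numpy as np

def kalman_1d(gnss_positions, accel, dt=1.0, r=4.0, q=0.1):
    """Minimal 1-D Kalman filter: predict position/velocity from IMU
    acceleration, correct with noisy GNSS position fixes."""
    x = np.array([gnss_positions[0], 0.0])   # state: [position, velocity]
    P = np.eye(2)                            # state covariance
    F = np.array([[1.0, dt], [0.0, 1.0]])    # constant-velocity model
    B = np.array([0.5 * dt**2, dt])          # acceleration input mapping
    H = np.array([[1.0, 0.0]])               # GNSS observes position only
    Q = q * np.eye(2)                        # process noise
    out = []
    for z, a in zip(gnss_positions, accel):
        x = F @ x + B * a                    # predict with IMU input
        P = F @ P @ F.T + Q
        S = H @ P @ H.T + r                  # innovation covariance
        K = (P @ H.T) / S                    # Kalman gain
        x = x + (K * (z - H @ x)).ravel()    # correct with GNSS fix
        P = (np.eye(2) - K @ H) @ P
        out.append(x[0])
    return out

# A vehicle moving at constant speed with zero measured acceleration:
est = kalman_1d([0.0, 1.0, 2.0, 3.0, 4.0], [0.0] * 5)
```

A full implementation would run this in two dimensions and add the backward smoothing pass the paper mentions.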


2011 ◽  
Vol 460-461 ◽  
pp. 404-408
Author(s):  
Yue Shun He ◽  
Jun Zhang ◽  
Jie He

This paper analyzes the principles of multi-source spatial data fusion and describes a distributed model structure for it. Taking the characteristics of a distributed multi-sensor information fusion system into account, a performance evaluation model suited to such systems is established. The model can assess the system's precision, track quality, filtering quality, and the correlation between navigation paths. Extensive experiments on data generated by a simulated test environment show that the evaluation model is valid.
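
As a hedged sketch of what such an evaluation model might compute (the abstract does not give the authors' exact formulas), the hypothetical `track_metrics` helper below scores a fused track against ground truth using positional RMSE as a precision measure and an in-gate fraction as a track-quality measure:

```python
import math

def track_metrics(estimated, truth, gate_m=5.0):
    """Evaluate one fused track against ground truth: positional RMSE
    (precision) and the fraction of points within a gate (track quality)."""
    errs = [math.dist(e, t) for e, t in zip(estimated, truth)]
    rmse = math.sqrt(sum(e * e for e in errs) / len(errs))
    quality = sum(e <= gate_m for e in errs) / len(errs)
    return rmse, quality

# Three time steps of an estimated 2-D track vs. the true track:
rmse, quality = track_metrics([(0, 0), (1, 1), (2, 2)],
                              [(0, 1), (1, 1), (2, 8)])
```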


2014 ◽  
Vol 8 (3) ◽  
pp. 3297-3333 ◽  
Author(s):  
Y. Bühler ◽  
M. Marty ◽  
L. Egli ◽  
J. Veitinger ◽  
T. Jonas ◽  
...  

Abstract. Information on snow depth and its spatial distribution is crucial for many applications in snow and avalanche research as well as in hydrology and ecology. Today snow depth distributions are usually estimated using point measurements performed by automated weather stations and observers in the field combined with interpolation algorithms. However, these methodologies are not able to capture the high spatial variability of the snow depth distribution present in alpine terrain. Continuous and accurate snow depth mapping has been done using laser scanning but this method can only cover limited areas and is expensive. We use the airborne ADS80 opto-electronic scanner with 0.25 m spatial resolution to derive digital surface models (DSMs) of winter and summer terrains in the neighborhood of Davos, Switzerland. The DSMs are generated using photogrammetric image correlation techniques based on the multispectral nadir and backward looking sensor data. We compare these products with the following independent datasets acquired simultaneously: (a) manually measured snow depth plots (b) differential Global Navigation Satellite System (dGNSS) points (c) Terrestrial Laser Scanning (TLS) and (d) Ground Penetrating Radar (GPR) datasets, to assess the accuracy of the photogrammetric products. The results of this investigation demonstrate the potential of optical scanners for wide-area, continuous and high spatial resolution snow-depth mapping over alpine catchments above tree line.
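
The core DSM-differencing idea can be sketched in a few lines; the hypothetical `snow_depth` helper below (an illustration, not the authors' processing chain) subtracts the summer DSM from the winter DSM per cell and clips negative differences, which are measurement noise:

```python
import numpy as np

def snow_depth(dsm_winter, dsm_summer, min_depth=0.0):
    """Per-cell snow depth as the difference between winter and summer
    digital surface models; negative differences (noise) are clipped."""
    depth = np.asarray(dsm_winter, float) - np.asarray(dsm_summer, float)
    return np.clip(depth, min_depth, None)

# Toy 2x2 elevation grids in metres (0.25 m cells in the actual product):
winter = np.array([[1602.3, 1605.1], [1598.9, 1601.0]])
summer = np.array([[1601.0, 1603.0], [1599.2, 1600.1]])
depths = snow_depth(winter, summer)
```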


Sensors ◽  
2019 ◽  
Vol 19 (6) ◽  
pp. 1440
Author(s):  
Jianping Wu ◽  
Bin Jiang ◽  
Hongtian Chen ◽  
Jianwei Liu

Electrical drive systems play an increasingly important role in high-speed trains. The whole system is equipped with sensors that support complicated information fusion, which means the performance of the system ought to be monitored, especially during incipient changes. In such situations, it is crucial to distinguish a faulty state from the observed normal state because of the dire consequences closed-loop faults might bring. In this research, an optimal neighborhood preserving embedding (NPE) method called multi-manifold regularization NPE (MMRNPE) is proposed to detect various faults in an electrical drive sensor information fusion system. By taking locality preserving embedding into account, the proposed methodology extends the use of the Euclidean distance of both designated points and paired points, which guarantees access to both local and global sensor information. Meanwhile, this structure fuses several manifolds to extract their respective features. In addition, parameters are allocated across the manifolds to seek an optimal combination, and the entropy of the information associated with the parameters is used to avoid overweighting any single manifold. Moreover, an experimental platform was built to validate the MMRNPE approach and demonstrate the effectiveness of the fault detection. Results and observations show that the proposed MMRNPE offers a better fault detection representation in comparison with NPE.


Sensors ◽  
2019 ◽  
Vol 19 (22) ◽  
pp. 4837 ◽  
Author(s):  
Stamatios Samaras ◽  
Eleni Diamantidou ◽  
Dimitrios Ataloglou ◽  
Nikos Sakellariou ◽  
Anastasios Vafeiadis ◽  
...  

Usage of Unmanned Aerial Vehicles (UAVs) is growing rapidly in a wide range of consumer applications, as they prove to be both autonomous and flexible in a variety of environments and tasks. However, this versatility and ease of use also bring a rapid evolution of threats by malicious actors that can use UAVs for criminal activities, converting them into passive or active threats. The need to protect critical infrastructures and important events from such threats has driven advances in counter-UAV (c-UAV) applications. Nowadays, c-UAV applications offer systems that comprise a multi-sensory arsenal, often including electro-optical, thermal, acoustic, radar, and radio frequency sensors, whose information can be fused to increase the confidence of threat identification. Real-time surveillance is a cumbersome process, yet it is essential for promptly detecting adverse events or conditions. To that end, many challenging tasks arise, such as object detection, classification, multi-object tracking, and multi-sensor information fusion. In recent years, researchers have utilized deep learning based methodologies to tackle these tasks for generic objects and made noteworthy progress, yet applying deep learning to UAV detection and classification is considered a novel concept. Therefore, the need has emerged for a complete overview of deep learning technologies applied to c-UAV related tasks on multi-sensor data. The aim of this paper is to describe deep learning advances on c-UAV related tasks when applied to data originating from many different sensors, as well as multi-sensor information fusion. This survey may help in making recommendations and improvements to c-UAV applications for the future.


Sensors ◽  
2021 ◽  
Vol 21 (18) ◽  
pp. 6137
Author(s):  
Max Jwo Lem Lee ◽  
Li-Ta Hsu ◽  
Hoi-Fung Ng

Accurate smartphone-based outdoor localization systems in deep urban canyons are increasingly needed for various IoT applications. As smart cities have developed, building information modeling (BIM) has become widely available. This article, for the first time, presents a semantic Visual Positioning System (VPS) for accurate and robust position estimation in urban canyons where the global navigation satellite system (GNSS) tends to fail. In the offline stage, a material-segmented BIM is used to generate segmented images. In the online stage, an image is taken with a smartphone camera, providing textural information about the surrounding environment. The approach utilizes computer vision algorithms to segment the different types of material class identified in the smartphone image. A semantic VPS method is then used to match the generated segmented images with the segmented smartphone image. Each generated image contains position information in terms of latitude, longitude, altitude, yaw, pitch, and roll. The candidate with the maximum likelihood is regarded as the precise position of the user. The positioning result achieved an accuracy of 2.0 m among high-rise buildings on a street, 5.5 m in a dense foliage environment, and 15.7 m in an alleyway. This represents a 45% improvement in positioning compared to the current state-of-the-art method. The estimation of yaw achieved an accuracy of 2.3°, an eight-fold improvement compared to the smartphone IMU.
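
The maximum-likelihood matching step can be sketched as follows. The hypothetical `best_pose` helper below scores each BIM-generated segmentation against the segmented smartphone image by pixel-wise label agreement and returns the best-scoring pose; this is a simplification of the paper's semantic VPS, with all names and data invented for illustration:

```python
import numpy as np

def best_pose(query_seg, candidates):
    """Pick the candidate pose whose BIM-rendered segmentation best
    matches the segmented smartphone image, using pixel-wise material
    label agreement as a likelihood proxy."""
    scores = {pose: float(np.mean(query_seg == seg))
              for pose, seg in candidates.items()}
    return max(scores, key=scores.get), scores

# Tiny 2x2 material-label images; each candidate pose would carry
# latitude, longitude, altitude, yaw, pitch, and roll in practice.
query = np.array([[1, 1], [2, 0]])
candidates = {
    ("lat_a", "lon_a", 90.0): np.array([[1, 1], [2, 0]]),  # perfect match
    ("lat_b", "lon_b", 95.0): np.array([[1, 0], [0, 0]]),  # partial match
}
pose, scores = best_pose(query, candidates)
```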

