Multisensor Degradation Data Fusion and Remaining Life Prediction

Author(s):  
Changxi Wang ◽  
E. A. Elsayed ◽  
Kang Li ◽  
Javier Cabrera

Multiple sensors are commonly used for degradation monitoring. Since different sensors may be sensitive at different stages of the degradation process, and each sensor's data contain only partial information about the degraded unit, data fusion approaches that integrate degradation data from multiple sensors can effectively improve degradation modeling and life prediction accuracy. We present a non-parametric approach that assigns weights to each sensor based on dynamic clustering of the sensors' observations. A case study involving a fatigue-crack-growth dataset is implemented in order to evaluate the prognostic performance of the proposed approach. Results show that the fused path obtained with the proposed approach outperforms any individual sensor's data, as well as paths obtained with an adaptive threshold clustering algorithm, in terms of life prediction accuracy.
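As a hedged illustration of the weighting idea (not the paper's actual algorithm, which derives weights from dynamic clustering of observations), the following sketch weights each sensor's degradation path by its closeness to a robust consensus path before fusing; all names and data are invented:

```python
import numpy as np

def fuse_paths(paths):
    """Fuse degradation paths; paths is a (n_sensors, n_times) array.

    Illustrative scheme only: sensors closer to the median consensus
    path receive larger weights in the fused path.
    """
    consensus = np.median(paths, axis=0)               # robust reference path
    dev = np.linalg.norm(paths - consensus, axis=1)    # per-sensor deviation
    w = 1.0 / (dev + 1e-9)                             # closer sensors weigh more
    w /= w.sum()                                       # normalize weights
    return w @ paths                                   # weighted fused path

# synthetic crack-growth-like paths from three sensors
rng = np.random.default_rng(0)
t = np.linspace(0, 1, 50)
paths = np.vstack([t**2 + 0.01 * rng.standard_normal(50) for _ in range(3)])
fused = fuse_paths(paths)
print(fused.shape)  # → (50,)
```

A remaining-life estimate would then be read off the fused path against a failure threshold, rather than off any single sensor's path.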

Sensors ◽  
2019 ◽  
Vol 20 (1) ◽  
pp. 238 ◽  
Author(s):  
Jiande Fan ◽  
Weixin Xie ◽  
Haocui Du

In this paper, a novel multi-sensor clustering algorithm, based on the density peaks clustering (DPC) algorithm, is proposed to address the multi-sensor data fusion (MSDF) problem. The MSDF problem arises in the multi-sensor target detection (MSTD) context and corresponds to clustering the observations of multiple sensors without prior information on clutter. The clustering process is subject to three constraints: data points from the same sensor cannot be grouped into the same cluster (the cannot-link, or CL, constraint); the size of each cluster must lie within a certain range; and overlapping clusters (if any) must be divided into multiple clusters that satisfy the CL constraint. The simulation results confirm the validity and reliability of the proposed algorithm.
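For context, a minimal sketch of the two quantities at the heart of standard DPC: the local density rho of each point and its distance delta to the nearest higher-density point. Cluster centers are points where both are large; the CL and cluster-size constraints described above would be enforced on top of this basic step. The data and cutoff value here are invented:

```python
import numpy as np

def dpc_scores(X, dc):
    """Local density (cutoff kernel) and distance-to-higher-density for DPC."""
    D = np.linalg.norm(X[:, None, :] - X[None, :, :], axis=2)  # pairwise distances
    rho = (D < dc).sum(axis=1) - 1            # neighbours within cutoff dc
    delta = np.zeros(len(X))
    for i in range(len(X)):
        higher = np.where(rho > rho[i])[0]    # points denser than i
        delta[i] = D[i].max() if higher.size == 0 else D[i, higher].min()
    return rho, delta

# two well-separated synthetic clusters
rng = np.random.default_rng(1)
X = np.vstack([rng.normal(0, 0.1, (20, 2)), rng.normal(1, 0.1, (20, 2))])
rho, delta = dpc_scores(X, dc=0.3)
centers = np.argsort(rho * delta)[-2:]        # candidate density peaks
print(rho.shape, delta.shape)
```

Points with large `rho * delta` are the density peaks; remaining points are then assigned to the same cluster as their nearest higher-density neighbour.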


2021 ◽  
Author(s):  
Shuang Wu ◽  
Lei Deng ◽  
Lijie Guo ◽  
Yanjie Wu

Abstract Background: Leaf Area Index (LAI) is defined as half the total leaf area per unit horizontal ground surface area. Consequently, accurate vegetation extraction in remote sensing imagery is critical for LAI estimation. However, most studies do not fully exploit the advantages of Unmanned Aerial Vehicle (UAV) imagery with high spatial resolution, for example by not removing the background (soil, shadow, etc.). Furthermore, the advancement of multi-sensor synchronous observation and integration technology allows the simultaneous collection of canopy spectral, structural, and thermal data, making data fusion possible.

Methods: To investigate the potential of high-resolution UAV imagery combined with multi-sensor data fusion for LAI estimation, high-resolution UAV imagery was obtained with a multi-sensor integrated MicaSense Altum camera to extract the wheat canopy's spectral, structural, and thermal features. After removing the soil background, all features were fused, and LAI was estimated using Random Forest and Support Vector Machine Regression.

Result: The results show that: (1) the soil background reduced the accuracy of LAI prediction, and it could be effectively removed by taking advantage of high-resolution UAV imagery. After removing the soil background, the LAI prediction accuracy improved significantly: R2 rose by about 0.27, and RMSE fell by about 0.476. (2) The fusion of multi-sensor synchronous observation data improved LAI prediction accuracy and achieved the best accuracy (R2 = 0.815 and RMSE = 1.023). (3) Compared to other variables, CHM, NRCT, NDRE, and BLUE are crucial for LAI estimation. Even a simple Multiple Linear Regression model could achieve high prediction accuracy (R2 = 0.679 and RMSE = 1.231), suggesting the potential for rapid and efficient LAI prediction.

Conclusions: The method of this study can be transferred to other sites with larger areas or similar agricultural structures, which will facilitate agricultural production and management.
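A hedged sketch of the fuse-then-regress step: concatenate spectral, structural (e.g. CHM), and thermal features per sample, then fit the simple multiple linear regression mentioned above via least squares. The feature names and synthetic data are placeholders, not the study's dataset:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 100
spectral   = rng.normal(size=(n, 3))   # e.g. NDRE, BLUE, NRCT (placeholders)
structural = rng.normal(size=(n, 1))   # e.g. canopy height model (CHM)
thermal    = rng.normal(size=(n, 1))   # canopy temperature
X = np.hstack([spectral, structural, thermal])   # fused feature matrix

# synthetic LAI with a known linear relationship plus noise
lai = X @ np.array([0.5, -0.2, 0.3, 0.8, -0.4]) + rng.normal(0, 0.1, n)

A = np.hstack([X, np.ones((n, 1))])              # add intercept column
coef, *_ = np.linalg.lstsq(A, lai, rcond=None)   # multiple linear regression
pred = A @ coef
rmse = np.sqrt(np.mean((lai - pred) ** 2))
r2 = 1 - np.sum((lai - pred) ** 2) / np.sum((lai - lai.mean()) ** 2)
print(round(r2, 3), round(rmse, 3))
```

The study's Random Forest and Support Vector Machine regressors would replace the least-squares fit, with the same fused feature matrix as input.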


2021 ◽  
Vol 208 ◽  
pp. 107249
Author(s):  
Naipeng Li ◽  
Nagi Gebraeel ◽  
Yaguo Lei ◽  
Xiaolei Fang ◽  
Xiao Cai ◽  
...  

Sensors ◽  
2020 ◽  
Vol 20 (8) ◽  
pp. 2180 ◽  
Author(s):  
Prasanna Kolar ◽  
Patrick Benavidez ◽  
Mo Jamshidi

This paper focuses on data fusion, which is fundamental to perception, one of the most important modules in any autonomous system. Over the past decade, there has been a surge in the use of smart/autonomous mobility systems. Such systems can be used in various areas of life, such as safe mobility for the disabled and senior citizens, and depend on accurate sensor information in order to function optimally. This information may come from a single sensor or from a suite of sensors with the same or different modalities. We review various types of sensors, their data, and the need to fuse the data to produce the best data for the task at hand, which in this case is autonomous navigation. To obtain such accurate data, we need optimal technology to read the sensor data, process the data, eliminate or at least reduce noise, and then use the data for the required tasks. We present a survey of current data processing techniques that implement data fusion using different sensors: LiDAR, which uses light-scan technology, and stereo/depth, monocular Red Green Blue (RGB), and Time-of-Flight (TOF) cameras, which use optical technology. We review the efficiency of using fused data from multiple sensors, rather than a single sensor, in autonomous navigation tasks such as mapping, obstacle detection and avoidance, and localization. This survey will provide sensor information to researchers who intend to accomplish the task of motion control of a robot, and details the use of LiDAR and cameras to accomplish robot navigation.
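As a small illustration of why fused data beats a single sensor (not a technique from the survey itself), the classic inverse-variance weighting rule combines range estimates of the same obstacle from two sensors, e.g. a LiDAR and a stereo camera; the sensor variances here are invented calibration values:

```python
def fuse(z1, var1, z2, var2):
    """Inverse-variance fusion of two independent estimates of one quantity."""
    w1, w2 = 1.0 / var1, 1.0 / var2
    z = (w1 * z1 + w2 * z2) / (w1 + w2)   # fused estimate, weighted toward
                                          # the lower-variance sensor
    var = 1.0 / (w1 + w2)                 # fused variance: smaller than either
    return z, var

# hypothetical readings: precise LiDAR (var 0.01) vs noisy stereo (var 0.25)
z, var = fuse(10.2, 0.01, 10.8, 0.25)
print(round(z, 3), round(var, 4))         # → 10.223 0.0096
```

The fused variance is always below the smaller of the two input variances, which is the basic argument for multi-sensor fusion in mapping and obstacle detection.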


2020 ◽  
Author(s):  
Huihui Pan ◽  
Weichao Sun ◽  
Qiming Sun ◽  
Huijun Gao

Abstract Environmental perception is one of the key technologies for realizing autonomous vehicles. Autonomous vehicles are often equipped with multiple sensors that form a multi-source environmental perception system. These sensors are very sensitive to light or background conditions, which introduces a variety of global and local fault signals that pose great safety risks to the autonomous driving system during long-term operation. In this paper, a real-time data fusion network with fault diagnosis and a fault tolerance mechanism is designed. By introducing prior features to make the backbone network lightweight, the features of the input data can be extracted accurately in real time. Through the temporal and spatial correlation between sensor data, sensor redundancy is exploited to diagnose the local and global confidence of sensor data in real time, eliminate faulty data, and ensure the accuracy and reliability of data fusion. Experiments show that the network achieves state-of-the-art results in speed and accuracy, and can accurately detect the location of the target even when some sensors are out of focus or malfunctioning.
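A minimal sketch of the redundancy idea above (not the paper's network): when several sensors observe the same quantity, a reading that deviates strongly from the robust consensus is flagged as faulty and excluded before fusion. The threshold rule and sensor values are illustrative only:

```python
import numpy as np

def fuse_with_fault_rejection(readings, k=3.0):
    """Reject readings far from the robust consensus, then average the rest."""
    readings = np.asarray(readings, dtype=float)
    med = np.median(readings)                        # robust consensus
    mad = np.median(np.abs(readings - med)) + 1e-9   # robust spread (MAD)
    ok = np.abs(readings - med) <= k * mad           # per-sensor confidence mask
    return readings[ok].mean(), ok

# four redundant sensors; the last one has drifted badly
value, ok = fuse_with_fault_rejection([4.9, 5.1, 5.0, 12.7])
print(round(value, 2), ok.tolist())  # → 5.0 [True, True, True, False]
```

The paper's approach additionally exploits temporal correlation (consistency of each sensor with its own recent history), whereas this sketch uses only cross-sensor spatial redundancy.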


Author(s):  
M. Schmitt ◽  
L. H. Hughes ◽  
X. X. Zhu

<p><strong>Abstract.</strong> While deep learning techniques have an increasing impact on many technical fields, gathering sufficient amounts of training data is a challenging problem in remote sensing. In particular, this holds for applications involving data from multiple sensors with heterogeneous characteristics. One example is the fusion of synthetic aperture radar (SAR) data and optical imagery. With this paper, we publish the <i>SEN1-2</i> dataset to foster deep learning research in SAR-optical data fusion. <i>SEN1-2</i> comprises 282,384 pairs of corresponding image patches, collected from across the globe and throughout all meteorological seasons. Besides a detailed description of the dataset, we show exemplary results for several possible applications, such as SAR image colorization, SAR-optical image matching, and the creation of artificial optical images from SAR input data. Since <i>SEN1-2</i> is the first large open dataset of this kind, we believe it will support further developments in the field of deep learning for remote sensing as well as multi-sensor data fusion.</p>
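A hedged sketch of how paired patches from such a dataset might be enumerated for training, assuming mirrored SAR and optical directory trees with matching file names (an assumption about layout for illustration, not the documented SEN1-2 structure; the demo runs on a throwaway directory tree):

```python
import tempfile
from pathlib import Path

def paired_patches(root):
    """Yield (sar_path, optical_path) pairs from mirrored s1/ and s2/ trees."""
    root = Path(root)
    for sar in sorted((root / "s1").rglob("*.png")):
        opt = root / "s2" / sar.relative_to(root / "s1")  # mirrored file name
        if opt.exists():
            yield sar, opt

# demo on a temporary tree with one matching patch pair
with tempfile.TemporaryDirectory() as d:
    for sub in ("s1", "s2"):
        (Path(d) / sub).mkdir()
        (Path(d) / sub / "ROIs_0001_p1.png").touch()   # hypothetical file name
    pairs = list(paired_patches(d))
    print(len(pairs))  # → 1
```

Each yielded pair would then feed a SAR-optical fusion model, e.g. for colorization or matching as described above.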


Author(s):  
Jan Klečka ◽  
Karel Horák ◽  
Ondřej Boštík

<p>This paper addresses the problem of Simultaneous Localization and Mapping (SLAM) algorithms, focusing specifically on concurrent processing of data from a heterogeneous set of sensors. The sensors are considered to differ in the physical quantity they measure, and so the problem of effective data fusion is discussed. A special extension of the standard probabilistic approach to SLAM algorithms is presented. This extension is composed of two parts: first, a general perspective on multiple-sensor-based SLAM is presented, and then three archetypal special cases are discussed. One archetype, provisionally designated "partially collective mapping," is also analyzed from a practical perspective because it implies promising options for implicit map-level data fusion.</p>
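An illustrative sketch of map-level probabilistic fusion in the spirit of the standard approach (not the paper's specific extension): a landmark position estimate (mean and covariance) is refined by successive Kalman measurement updates from two heterogeneous sensors with different measurement models. The models `H` and noise levels `R` are invented for this example:

```python
import numpy as np

def kalman_update(mu, P, z, H, R):
    """One Kalman measurement update of state (mu, P) with measurement z."""
    S = H @ P @ H.T + R                      # innovation covariance
    K = P @ H.T @ np.linalg.inv(S)           # Kalman gain
    mu = mu + K @ (z - H @ mu)               # corrected mean
    P = (np.eye(len(mu)) - K @ H) @ P        # corrected covariance
    return mu, P

mu, P = np.zeros(2), np.eye(2) * 10.0        # vague prior on landmark (x, y)

# sensor A observes both coordinates; sensor B observes only x
mu, P = kalman_update(mu, P, np.array([2.1, 3.0]), np.eye(2), np.eye(2) * 0.5)
mu, P = kalman_update(mu, P, np.array([1.9]), np.array([[1.0, 0.0]]),
                      np.array([[0.1]]))
print(mu.round(2))  # → [1.92 2.86]
```

Because each sensor contributes through its own measurement model, fusion happens implicitly in the shared state, which is the essence of map-level data fusion.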

