Data Fusion: Cumulative Effects of Discrete Fusion on Target Detection Probability

Author(s): S. Churchill, C. Randell, D. Power, E. Gill
IEEE Access, 2020, Vol 8, pp. 59511–59523
Author(s): Ke Zhang, Zeyang Wang, Lele Guo, Yuanyuan Peng, Zhi Zheng
Sensors, 2018, Vol 18 (10), pp. 3193

Author(s): Xueli Sheng, Yang Chen, Longxiang Guo, Jingwei Yin, Xiao Han

Multitarget tracking algorithms based on sonar usually must contend with detection uncertainty, complex channels, and heavy clutter, which cause low detection probability, failure of a single sonar sensor to obtain measurements when the target is in an acoustic shadow zone, and computational bottlenecks. This paper proposes a novel tracking algorithm based on multisensor data fusion to solve these problems. Firstly, under heavy clutter and low detection probability, a Gaussian Mixture Probability Hypothesis Density (GMPHD) filter, which has computational advantages, was used to obtain local estimates. Secondly, this paper provides a maximum-detection-capability multitarget track fusion algorithm to deal with the problems caused by low detection probability and targets in acoustic shadow zones. Lastly, a novel feedback algorithm was proposed to improve the GMPHD filter's tracking performance by feeding back the global estimates as a random finite set (RFS). Finally, the statistical characteristics of the OSPA metric were used as evaluation criteria in Monte Carlo simulations, which demonstrated this algorithm's performance on those sonar tracking problems. When the detection probability is 0.7, compared with the GMPHD filter, the OSPA mean of two-sensor and three-sensor fusion decreased by almost 40% and 55%, respectively. Moreover, this algorithm successfully tracks targets in acoustic shadow zones.
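The OSPA metric used as the evaluation criterion above can be sketched in a few lines. The following is a minimal illustration, not the paper's implementation: it handles 1-D target states, finds the optimal assignment by brute force (practical only for small target counts), and all function and parameter names are ours.

```python
# Minimal sketch of the OSPA (Optimal Sub-Pattern Assignment) distance
# between two finite sets of 1-D target states, with cutoff c and order p.
import itertools


def ospa(truth, estimate, c=10.0, p=2):
    """OSPA distance: localisation error plus a cardinality penalty of c per miss."""
    m, n = len(truth), len(estimate)
    if m == 0 and n == 0:
        return 0.0
    if m > n:  # convention: make the first set the smaller one
        truth, estimate = estimate, truth
        m, n = n, m
    # Optimal assignment of the m truths to m of the n estimates (brute force).
    best = min(
        sum(min(abs(t - e), c) ** p for t, e in zip(truth, perm))
        for perm in itertools.permutations(estimate, m)
    )
    return ((best + (c ** p) * (n - m)) / n) ** (1.0 / p)


# A perfect estimate gives distance 0; a missed target is penalised by the cutoff c.
print(ospa([1.0, 5.0], [1.0, 5.0]))   # 0.0
print(ospa([1.0, 5.0], [1.0]))        # dominated by the cutoff penalty
```

Averaging this distance over Monte Carlo runs gives the OSPA mean reported in the abstract.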


Author(s): C. Theoharatos, A. Makedonas, N. Fragoulis, V. Tsagaris, S. Costicoglou

Data fusion has lately received a lot of attention as an effective technique for several target detection and classification applications in different remote sensing areas. In this work, a novel data fusion scheme for improving the detection accuracy of ship targets in polarimetric data is proposed, based on the two-dimensional principal component analysis (2D-PCA) technique. By constructing a fused image from different polarization channels, increased ship target detection performance is achieved, with higher true positive and lower false positive detection rates compared to single-channel detection. In addition, 2D-PCA makes it possible to discriminate and classify objects and regions in the resulting image representation more effectively, with the additional advantage of being more computationally efficient and requiring less time to determine the corresponding eigenvectors than, e.g., conventional PCA. Throughout our analysis, a constant false alarm rate (CFAR) detection model is applied to characterize the background clutter and discriminate ship targets, based on the Weibull distribution and the calculation of local statistical moments for estimating the order statistics of the background clutter. Appropriate pre-processing and post-processing techniques are also introduced into the processing chain in order to boost ship discrimination and suppress false alarms caused by range-focusing artifacts. Experimental results on a set of Envisat and RadarSat-2 images (dual- and quad-polarized, respectively) demonstrate the advantage of the proposed data fusion scheme in terms of detection accuracy, as opposed to single-data ship detection and conventional PCA, in various sea conditions and resolutions. Further investigation of other data fusion techniques is currently in progress.
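The CFAR detection model described above can be illustrated with a simple cell-averaging variant. This is a sketch under our own assumptions: it uses a mean-level threshold on a 1-D signal rather than the paper's Weibull-based order statistics, and all names and parameter values are illustrative.

```python
# Minimal sketch of a 1-D cell-averaging CFAR detector: the cell under test
# (CUT) is compared against a scaled estimate of the local clutter level,
# computed from training cells on both sides, excluding guard cells.
def ca_cfar(signal, num_train=8, num_guard=2, scale=3.0):
    """Return indices where the CUT exceeds scale * local clutter mean."""
    detections = []
    half = num_train // 2 + num_guard
    for i in range(half, len(signal) - half):
        left = signal[i - half : i - num_guard]
        right = signal[i + num_guard + 1 : i + half + 1]
        clutter = (sum(left) + sum(right)) / (len(left) + len(right))
        if signal[i] > scale * clutter:
            detections.append(i)
    return detections


# A single strong return embedded in unit-level clutter is flagged.
echo = [1.0] * 20
echo[10] = 12.0
print(ca_cfar(echo))   # [10]
```

Replacing the mean-level threshold with one derived from fitted Weibull order statistics, as in the paper, changes the threshold computation but not the sliding-window structure.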


2020, Vol 12 (20), pp. 3274
Author(s): Keke Geng, Ge Dong, Guodong Yin, Jingyu Hu

Recent advancements in environmental perception for autonomous vehicles have been driven by deep learning-based approaches. However, effective traffic target detection in complex environments remains a challenging task. This paper presents a novel dual-modal instance segmentation deep neural network (DM-ISDNN) that merges camera and LIDAR data, which can efficiently deal with the problem of target detection in complex environments based on multi-sensor data fusion. Due to the sparseness of the LIDAR point cloud data, we propose a weight assignment function that assigns different weight coefficients to different feature pyramid convolutional layers of the LIDAR sub-network. We compare and analyze early-, middle-, and late-stage fusion architectures in depth. By comprehensively considering detection accuracy and detection speed, the middle-stage fusion architecture with a weight assignment mechanism, which has the best performance, is selected. This work has great significance for exploring the best feature fusion scheme for a multi-modal neural network. In addition, we apply a mask distribution function to improve the quality of the predicted masks. A dual-modal traffic object instance segmentation dataset is established using 7481 camera and LIDAR data pairs from the KITTI dataset, with 79,118 manually annotated instance masks. To the best of our knowledge, there is no existing instance annotation for the KITTI dataset with such quality and volume. A novel dual-modal dataset, composed of 14,652 camera and LIDAR data pairs, is collected using our own developed autonomous vehicle under different environmental conditions in real driving scenarios, for which a total of 62,579 instance masks are obtained using a semi-automatic annotation method. This dataset can be used to validate the detection performance of instance segmentation networks under complex environmental conditions. Experimental results on the dual-modal KITTI Benchmark demonstrate that DM-ISDNN with middle-stage data fusion and the weight assignment mechanism has better detection performance than single- and dual-modal networks with other data fusion strategies, which validates the robustness and effectiveness of the proposed method. Meanwhile, compared to state-of-the-art instance segmentation networks, our method shows much better detection performance, in terms of AP and F1 score, on the dual-modal dataset collected under complex environmental conditions, which further validates the superiority of our method.
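The abstract does not specify the exact form of the weight assignment function for the sparse LIDAR branch; a plausible shape is a normalised set of per-level coefficients that shrink with pyramid depth, since deeper levels see fewer LIDAR points. The sketch below assumes a geometric decay followed by softmax normalisation, which is our assumption, not the paper's rule.

```python
# Illustrative per-level weight assignment for a feature pyramid: levels
# are weighted by a geometrically decaying score, then softmax-normalised
# so the fusion weights sum to 1.
import math


def pyramid_weights(num_levels, decay=0.5):
    """Softmax-normalised weights that decrease with pyramid depth."""
    raw = [decay ** level for level in range(num_levels)]
    total = sum(math.exp(r) for r in raw)
    return [math.exp(r) / total for r in raw]


weights = pyramid_weights(4)
print([round(w, 3) for w in weights])   # sums to 1, largest at level 0
```

In a fusion network, such weights would scale each LIDAR pyramid level's feature map before it is merged with the corresponding camera features.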


Author(s): Linh

The article presents a method to evaluate the target detection efficiency of laser fuzes operating in foggy conditions. The evaluation model is built from the range equation of the laser system, the attenuation of the beam over two-way propagation, and the disturbances affecting the system; the resulting signal-to-noise ratio (SNR) determines the detection probability of the receiver. The model was evaluated at wavelengths of 850 nm, 1000 nm, and 1550 nm propagating in three different adverse weather conditions. The results show that target detection is most effective at a wavelength of 1550 nm in haze and mist conditions (visibility V > 500 m). In fog conditions (visibility V < 500 m), the three wavelengths provide the same detection efficiency. The article provides a method and guidance for choosing the wavelength of the laser fuze.
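The evaluation chain described above (range equation, two-way attenuation, SNR, detection probability) can be sketched as follows. The Beer-Lambert attenuation model, the Gaussian-noise threshold receiver, and all parameter values here are illustrative assumptions, not the article's model.

```python
# Minimal sketch: two-way atmospheric attenuation reduces the return SNR,
# which in turn sets the detection probability of a threshold receiver.
import math


def received_snr(snr_clear, alpha, distance_m):
    """SNR after two-way propagation with extinction coefficient alpha (1/m)."""
    return snr_clear * math.exp(-2.0 * alpha * distance_m)


def detection_probability(snr, threshold=3.0):
    """P(detect) in unit-variance Gaussian noise: Q(threshold - sqrt(SNR))."""
    return 0.5 * math.erfc((threshold - math.sqrt(snr)) / math.sqrt(2.0))


# Denser fog (larger alpha) lowers the SNR and hence the detection probability.
for alpha in (0.001, 0.01, 0.05):
    snr = received_snr(100.0, alpha, distance_m=30.0)
    print(f"alpha={alpha:.3f}  Pd={detection_probability(snr):.3f}")
```

Repeating the sweep with wavelength-dependent extinction coefficients reproduces the kind of comparison the article makes between 850 nm, 1000 nm, and 1550 nm.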

