Tracking Multiple Targets from Multistatic Doppler Radar with Unknown Probability of Detection

Sensors, 2019, Vol 19 (7), pp. 1672
Author(s): Cong-Thanh Do, Hoa Nguyen

The measurements from multistatic radar systems are typically subject to complicated data association, noise corruption, missed detections, and false alarms. Moreover, most current multistatic Doppler radar approaches to multitarget tracking assume a known detection probability. This assumption can lead to biased or even completely corrupted estimation results. This paper proposes a method for tracking multiple targets from multistatic Doppler radar with unknown detection probability. A closed-form labeled multitarget Bayes filter is used to track an unknown and time-varying number of targets with unknown probability of detection in the presence of clutter, misdetection, and association uncertainty. The efficiency of the proposed algorithm is illustrated via numerical simulation examples.
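One common way to handle an unknown detection probability, used in the beta-Gaussian variants of labeled multi-Bernoulli filters, is to carry a Beta distribution over p_D per track and update it conjugately on each detection or miss. A minimal sketch of that update idea (the prior and the detection sequence below are illustrative, not from the paper):

```python
def update_beta(s, t, detected):
    """Conjugate update of a Beta(s, t) belief over the detection
    probability p_D after one scan: a detection raises s, a miss raises t."""
    return (s + 1.0, t) if detected else (s, t + 1.0)

# Uninformative prior Beta(1, 1), i.e. E[p_D] = 0.5
s, t = 1.0, 1.0
for hit in [True, True, False, True, True, True, False, True]:
    s, t = update_beta(s, t, hit)

p_d_est = s / (s + t)  # posterior-mean estimate of p_D
```

In the full filter this Beta state is attached to each labeled track and propagated alongside the kinematic density, so p_D adapts per target rather than being a fixed global constant.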

Sensors, 2019, Vol 19 (22), pp. 5025
Author(s): Cong-Thanh Do, Tran Thien Dat Nguyen, Weifeng Liu

In multitarget tracking, knowledge of the background plays a crucial role in the accuracy of the tracker. Clutter rate and detection probability are two essential background parameters that are usually assumed to be known constants, although they are in fact unknown and time varying. Incorrect values of these parameters lead to degraded or biased performance of the tracking algorithms. This paper proposes a method for online tracking of multiple targets using multiple sensors that jointly adapts to the unknown clutter rate and probability of detection. An effective filter is developed by estimating these parameters in parallel and feeding them into the state-of-the-art generalized labeled multi-Bernoulli filter. Provided that these unknown background parameters fluctuate slowly relative to the measurement-update rate, the validity of the proposed method is demonstrated via a numerical study using multistatic Doppler data.
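The parallel background estimation can be illustrated for the clutter rate alone: if the number of clutter points per scan is Poisson with unknown rate, a Gamma prior updates in closed form from per-scan counts. A hedged sketch of just that conjugate-update idea (the paper's actual estimator is embedded in the GLMB recursion; the counts below are synthetic):

```python
def update_gamma(alpha, beta, k):
    """Conjugate update of a Gamma(alpha, beta) belief over a Poisson
    clutter rate after observing k clutter-origin measurements in a scan."""
    return alpha + k, beta + 1.0

alpha, beta = 1.0, 1.0  # vague prior
for k in [3, 5, 4, 4, 6, 2]:  # clutter counts per scan (synthetic data)
    alpha, beta = update_gamma(alpha, beta, k)

lambda_est = alpha / beta  # posterior-mean clutter rate
```

The slowly-varying assumption in the abstract is what justifies this kind of recursive estimate: the posterior tracks the true rate as long as it drifts slower than the measurement-update cadence.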


Author(s): S. M. Kostromitsky, V. M. Artemiev, D. S. Nefedov

The problem of radar detection of small-sized targets using traditional methods of selecting signals embedded in background noise is considered. It is shown that for a false alarm rate of 10⁻⁵, which corresponds to 1–2 false alarms within the entire coverage of a modern 3D radar, the probability of detecting a small-sized target becomes unacceptably low. Substantially decreasing the threshold can provide an acceptable detection probability at ultra-low signal-to-noise ratio (SNR) values; at the same time, decreasing the threshold results in an unacceptable increase in the false alarm rate. A new target detection procedure using the "track before detect" (TBD) method is proposed. In the TBD procedure, the target is considered detected when two conditions are met: the signal exceeds a specified threshold at least once, and the target is detected within a strictly defined observation area (acquisition or tracking gate). For low SNR values in the range of 3–8 dB and an equal false alarm rate, the detection probability increases by 20–50 % compared to the traditional detection method. The simulation results showed a strong dependence of the efficacy of the TBD algorithm on the threshold value and the decision rule. The possibility of adaptive threshold control using the detection results from preceding scan cycles is noted, as well as the introduction of matrix radar surveillance control not only over the target coordinates and parameters but also over the detection threshold, decision rules, etc. Examination of these issues is the subject of further research.
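The benefit of the two-condition rule can be sketched with a back-of-envelope false-alarm calculation: lowering the threshold inflates the single-scan false alarm probability, but requiring a second crossing inside a small gate suppresses most false starts. A sketch assuming a unit-variance Rayleigh noise envelope and independent resolution cells (an illustration, not the paper's signal model):

```python
import math

def pfa_single(threshold):
    """Single-cell false-alarm probability for a linear envelope
    detector in unit-variance Rayleigh noise: exp(-T^2 / 2)."""
    return math.exp(-threshold ** 2 / 2.0)

def pfa_confirmed(threshold, gate_cells):
    """Approximate false-track probability under the TBD-style rule:
    one crossing, then at least one more crossing inside a gate of
    `gate_cells` independent resolution cells on the next scan."""
    p = pfa_single(threshold)
    return p * (1.0 - (1.0 - p) ** gate_cells)

p1 = pfa_single(3.0)         # lowered threshold, single-scan rule
p2 = pfa_confirmed(3.0, 16)  # same threshold with gate confirmation
```

Because a real target keeps reappearing inside its gate while noise rarely does, the confirmation step trades little detection probability for a large reduction in false tracks.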


2014, Vol 35 (4), pp. 901-907
Author(s): Jun-kun Yan, Feng-zhou Dai, Tong Qin, Hong-wei Liu, Zheng Bao

2021, Vol 11 (5), pp. 2198
Author(s): Junwoo Jung, Jaesung Lim, Sungyeol Park, Haengik Kang, Seungbok Kwon

Frequency hopping orthogonal frequency division multiple access (FH-OFDMA) can provide low probability of detection (LPD) and anti-jamming capabilities to users against adversary detectors. To obtain an extreme LPD capability that cannot be provided by the basic symbol-by-symbol (SBS)-based FH pattern, we previously proposed two FH patterns for FH-OFDMA systems, based on the chaotic standard map (CSM) and the cat map. In that work, through an analysis of the complexity of regenerating the transmitted symbol sequence at an adversary detector, we found that the CSM had a lower probability of intercept than the cat map and SBS. However, that analysis holds only when the detector already knows the symbol and frame structures and has synchronized to the FH-OFDMA system. Unlike the previous work, here we analyze whether the CSM provides greater LPD capability than the cat map and SBS in terms of detection probability, using a spectrum sensing technique. We analyze the detection probability of the CSM and provide the detection probabilities of the cat map and SBS relative to the CSM. Based on our analysis of the detection probability and numerical results, it is evident that the CSM provides greater LPD capability than both the cat map and SBS-based FH-OFDMA systems.
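Reading "chaotic standard map" as the Chirikov standard map, a hop pattern can be drawn from its orbit by quantizing the angle coordinate onto the subcarrier grid. A sketch under that assumption (the initial state, stochasticity parameter K, and quantization rule below are illustrative, not the paper's exact construction):

```python
import math

def csm_hop_sequence(theta0, p0, K, n_subcarriers, n_hops):
    """Generate hop indices from the Chirikov standard map:
    p' = p + K*sin(theta) (mod 2*pi), theta' = theta + p' (mod 2*pi);
    each theta is quantized onto the subcarrier grid."""
    theta, p = theta0, p0
    hops = []
    for _ in range(n_hops):
        p = (p + K * math.sin(theta)) % (2.0 * math.pi)
        theta = (theta + p) % (2.0 * math.pi)
        hops.append(int(theta / (2.0 * math.pi) * n_subcarriers))
    return hops

seq = csm_hop_sequence(0.7, 1.3, 6.0, 64, 12)
```

For large K the orbit is strongly mixing, which is what makes the resulting hop sequence hard for a non-synchronized detector to predict or regenerate.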


Author(s): Evan S. Bentley, Richard L. Thompson, Barry R. Bowers, Justin G. Gibbs, Steven E. Nelson

Abstract. Previous work has considered tornado occurrence with respect to radar data, both WSR-88D and mobile research radars, and a few studies have examined techniques to potentially improve tornado warning performance. To date, though, there has been little work focusing on systematic, large-sample evaluation of National Weather Service (NWS) tornado warnings with respect to radar-observable quantities and the near-storm environment. In this work, three full years (2016–2018) of NWS tornado warnings across the contiguous United States were examined, in conjunction with supporting data in the few minutes preceding warning issuance, or tornado formation in the case of missed events. The investigation herein examines WSR-88D and Storm Prediction Center (SPC) mesoanalysis data associated with these tornado warnings, with comparisons made to the current Warning Decision Training Division (WDTD) guidance.

Combining low-level rotational velocity and the significant tornado parameter (STP), as used in prior work, shows promise as a means to estimate tornado warning performance, as well as relative changes in performance as criteria thresholds vary. For example, low-level rotational velocity peaking in excess of 30 kt (15 m s−1), in a near-storm environment that is not prohibitive for tornadoes (STP > 0), results in an increased probability of detection and reduced false alarms compared to observed NWS tornado warning metrics. Tornado warning false alarms can also be reduced by limiting warnings with weak (<30 kt), broad (>1 nm) circulations in a poor (STP = 0) environment, by careful elimination of velocity data artifacts like sidelobe contamination, and through greater scrutiny of human-based tornado reports in otherwise questionable scenarios.
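The combined criterion quoted in the abstract reduces to a simple decision rule. The thresholds below (30 kt, STP > 0) are the ones the abstract states, but the function itself is only an illustrative reduction, not the authors' operational warning logic:

```python
def warn(v_rot_kt, stp, circulation_broad=False):
    """Illustrative tornado-warning criterion: warn on strong low-level
    rotation (> 30 kt) in a non-prohibitive environment (STP > 0), and
    suppress weak, broad circulations in a poor (STP = 0) environment."""
    if v_rot_kt <= 30 and circulation_broad and stp == 0:
        return False  # the false-alarm case the abstract suggests limiting
    return v_rot_kt > 30 and stp > 0

decisions = [warn(35, 1.5), warn(25, 2.0), warn(40, 0.0)]
```

Sweeping the two thresholds over archived cases is what lets the authors trace how probability of detection and false alarm ratio trade off against each other.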


2018, Vol 33 (6), pp. 1501-1511
Author(s): Harold E. Brooks, James Correia

Abstract. Tornado warnings are one of the flagship products of the National Weather Service. We update the time series of various metrics of performance in order to provide baselines over the 1986–2016 period for lead time, probability of detection, false alarm ratio, and warning duration. We have used metrics (mean lead time for tornadoes warned in advance, fraction of tornadoes warned in advance) that work in a consistent way across the official changes in policy for warning issuance, as well as across points in time when unofficial changes took place. The mean lead time for tornadoes warned in advance was relatively constant from 1986 to 2011, while the fraction of tornadoes warned in advance increased through about 2006, and the false alarm ratio slowly decreased. The largest changes in performance take place in 2012, when the default warning duration decreased and there was an apparent increased emphasis on reducing false alarms. As a result, the lead time, probability of detection, and false alarm ratio all decrease in 2012. Our analysis is based, in large part, on signal detection theory, which separates the quality of the warning system from the threshold for issuing warnings. Threshold changes lead to trade-offs between false alarms and missed detections. Such changes provide further evidence for changes in what the warning system as a whole considers important, as well as highlighting the limitations of measuring performance by looking at metrics independently.
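The verification metrics named here come straight from the standard 2x2 warning contingency table. A minimal sketch of the two definitions (the toy counts are stand-ins, not values from the study):

```python
def pod(hits, misses):
    """Probability of detection: fraction of observed events
    that were warned in advance."""
    return hits / (hits + misses)

def far(hits, false_alarms):
    """False alarm ratio: fraction of issued warnings that
    verified with no event."""
    return false_alarms / (hits + false_alarms)

# Toy counts: 70 warned tornadoes, 30 missed, 105 unverified warnings
p = pod(70, 30)
f = far(70, 105)
```

The signal-detection framing in the abstract is precisely about these two quantities: moving the warning threshold slides the system along a curve of (POD, FAR) pairs without changing the underlying quality of the system.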


2014, Vol 8 (4), pp. 396-405
Author(s): Pietro Stinco, Maria S. Greco, Fulvio Gini, Mario La Manna

2017, Vol 14, pp. 187-194
Author(s): Stefano Federico, Marco Petracca, Giulia Panegrossi, Claudio Transerici, Stefano Dietrich

Abstract. This study investigates the impact of the assimilation of total lightning data on the precipitation forecast of a numerical weather prediction (NWP) model. The impact of the lightning data assimilation, which uses water vapour substitution, is investigated at different forecast time ranges, namely 3, 6, 12, and 24 h, to determine how long and to what extent the assimilation affects the precipitation forecast of long-lasting rainfall events (> 24 h). The methodology developed in a previous study is slightly modified here and is applied to twenty case studies that occurred over Italy, using a mesoscale model run at convection-permitting horizontal resolution (4 km). The performance is quantified by dichotomous statistical scores computed using a dense rain gauge network over Italy. Results show the important impact of the lightning assimilation on the precipitation forecast, especially for the 3 and 6 h forecasts. The probability of detection (POD), for example, increases by 10 % for the 3 h forecast with the assimilation of lightning data compared to the simulation without it, for all precipitation thresholds considered. The Equitable Threat Score (ETS) is also improved by the lightning assimilation, especially for thresholds below 40 mm day−1. Results show that the forecast time range is very important, because the performance decreases steadily and substantially with forecast time. The POD, for example, is improved by only 1–2 % for the 24 h forecast with lightning data assimilation, compared to 10 % for the 3 h forecast. The impact of false alarms on the model performance is also evidenced by this study.
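The Equitable Threat Score used in the dichotomous verification penalizes hits expected by chance, unlike the plain threat score. A minimal sketch of the standard formula (the toy contingency-table counts are not from the study):

```python
def ets(hits, misses, false_alarms, correct_negatives):
    """Equitable Threat Score (Gilbert skill score) from a 2x2
    contingency table: the threat score with randomly expected
    hits removed from both numerator and denominator."""
    n = hits + misses + false_alarms + correct_negatives
    hits_random = (hits + misses) * (hits + false_alarms) / n
    return (hits - hits_random) / (hits + misses + false_alarms - hits_random)

score = ets(50, 20, 30, 900)
```

ETS is 1 for a perfect forecast and at most 0 for a forecast no better than chance, which is why it is a common choice for threshold-by-threshold rainfall verification against gauge networks.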


Author(s): V. M. Artemiev, S. M. Kostromitsky, A. O. Naumov

To increase the efficiency of detecting moving objects in radiolocation, additional features associated with the characteristics of trajectories are used. The authors assume that trajectories are correlated, which allows extrapolation of the coordinate values taking into account their increments over the scanning period. The detection procedure consists of two stages. At the first stage, detection is carried out by the classical threshold method with a low threshold level, which provides a high probability of detection at the cost of a high probability of false alarm. At the same time, uncertainty arises in selecting the object trajectory from among the false trajectories. Because the coordinates of the false trajectories are statistically independent, in contrast to the correlated coordinates of the object, the average duration of the false trajectories is shorter than that of the object trajectory. This difference is used to solve the detection problem at the second stage, based on a time-selection method. The obtained results allow estimation of the gain in detection probability when using the proposed method.
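The duration argument behind the second stage can be illustrated with a toy simulation: a track whose coordinates are correlated survives an extrapolation gate far longer than one assembled from independent false alarms. A sketch with illustrative motion and gate parameters (not the authors' model):

```python
import random

def track_duration(correlated, gate=2.0, steps=30, seed=1):
    """Count scans until a trajectory point falls outside the gate
    around the extrapolated position x + v. Correlated (true) tracks
    drift with small increments; false tracks jump independently."""
    rng = random.Random(seed)
    x, v = 0.0, 1.0
    duration = 0
    for _ in range(steps):
        predicted = x + v
        if correlated:
            x = x + v + rng.gauss(0.0, 0.3)   # small correlated increments
        else:
            x = rng.uniform(-50.0, 50.0)      # independent false alarms
        if abs(x - predicted) > gate:
            break
        duration += 1
    return duration

d_true, d_false = track_duration(True), track_duration(False)
```

Thresholding on track duration therefore separates the object trajectory from the short-lived false ones, which is the essence of the time-selection stage.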

