Automatic censoring CFAR detector based on ordered data variability for nonhomogeneous environments

2005 ◽  
Vol 152 (1) ◽  
pp. 43 ◽  
Author(s):  
A. Farrouki ◽  
M. Barkat


2017 ◽  
Vol 2 (1) ◽  
pp. 31-39
Author(s):  
B. ZATTOUTA ◽  
L. MESSIKH

Performance comparisons of automatic censoring CA-based CFAR processors contribute to the development of more efficient censoring detectors. In this paper, the authors analyse the performance of four detection schemes in heterogeneous environments: the ACCA-ODV (Automatic Censored Cell Averaging based on Ordered Data Variability), ADCCA (Automatic Dual-Censoring Cell Averaging), ACGCA (Automatic Censoring Greatest Cell Averaging), and GGDC (Goodness-of-fit Generalized likelihood test with Dual Censoring) CFAR detectors. Three situations are considered: a homogeneous environment, the presence of interfering targets, and the presence of clutter edges. The results, obtained under the assumption of Gaussian clutter and monopulse processing, show that each of the studied detectors performs well only under specific conditions, and that further development is needed to meet the performance requirements of recent target-detection applications.
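The censoring idea behind these CA-CFAR variants can be illustrated with a minimal sketch. The window size, scale factor, and the simple "retain the k smallest reference cells" rule below are illustrative stand-ins for the ordered-data-variability test, not the papers' actual algorithms.

```python
def ca_cfar_censored(cells, cut, k, alpha):
    """Return True if the cell under test (cut) exceeds the adaptive threshold.

    cells : reference-window samples surrounding the cell under test
    k     : number of smallest (presumed homogeneous) cells retained
    alpha : scale factor controlling the false-alarm rate
    """
    ranked = sorted(cells)            # order the reference samples
    noise = sum(ranked[:k]) / k       # censor the largest cells (interferers)
    return cut > alpha * noise

# Usage: the strong interferer (12.0) is censored out of the noise estimate,
# so it does not mask the cell under test.
window = [1.1, 0.9, 1.3, 0.8, 12.0, 1.0, 1.2, 0.95]
print(ca_cfar_censored(window, cut=6.0, k=6, alpha=4.0))  # True (detection)
```

Without censoring, the interferer would inflate the noise estimate and raise the threshold above the target return, which is exactly the masking effect these detectors are designed to avoid.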


2010 ◽  
Vol 10 (5) ◽  
pp. 710-720 ◽  
Author(s):  
J. L. Solanas ◽  
M. R. Cussó

Multivariate Consumption Profiling (MCP) is a methodology for analysing the readings made by Intelligent Meter (IM) systems. Even in advanced water companies with well-supported IM, full statistical analyses are not performed, since no efficient methods are available to deal with all the data items. Multivariate analysis has been proposed as a convenient way to synthesise all IM information. MCP uses Factor Analysis, Cluster Analysis, and Discriminant Analysis to analyse data variability by categories and levels in a cyclical improvement process. MCP obtains a conceptual schema of a reference population as a set of classifying tables, one per category. These tables provide quantitative concepts for evaluating consumption, meter sizing, leakage, and undermetering for populations, groupings, and individual cases, and they supply structuring items that enhance "traditional" statistics. All the relevant data from each new meter reading can be matched against the classifying tables; a set of indexes is computed, and thresholds are used to select the cases with the desired profiles. The paper gives an example of an MCP conceptual schema for five categories, three variables, and five levels, obtains its classifying tables, and shows how case profiles are used to implement actions in accordance with the operative objectives.
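The "compute indexes, then select cases by threshold" step can be sketched as follows. The standardised z-score index and the meter names are hypothetical simplifications; MCP's actual classifying tables are multivariate, not a single reference mean.

```python
from statistics import mean, stdev

def consumption_indexes(readings, reference):
    """Standardised index of each meter's consumption against a
    reference population (a one-variable stand-in for MCP indexes)."""
    mu, sigma = mean(reference), stdev(reference)
    return {meter: (value - mu) / sigma for meter, value in readings.items()}

def select_profiles(indexes, threshold):
    """Select cases whose index magnitude exceeds the chosen threshold."""
    return [m for m, z in indexes.items() if abs(z) > threshold]

# Usage: flag meters that deviate strongly from the reference population,
# e.g. possible leakage (high) or undermetering (low).
reference = [10.2, 11.0, 9.8, 10.5, 10.1, 9.9, 10.4]
readings = {"meter_A": 10.3, "meter_B": 25.0, "meter_C": 2.1}
idx = consumption_indexes(readings, reference)
print(select_profiles(idx, threshold=3.0))  # ['meter_B', 'meter_C']
```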


Plants ◽  
2021 ◽  
Vol 10 (5) ◽  
pp. 835
Author(s):  
Jose Alvarez ◽  
Elvira Martinez ◽  
Belén Diezma

Hyperspectral imaging is an appropriate method for thoroughly investigating the microscopic structure of internally heterogeneous agro-food products. With hyperspectral technology, stress symptoms associated with salinity can be identified before they are visible to a human observer, which has obvious benefits. The objective of this paper was to prove the suitability of this technique for the analysis of Triticale seeds subjected to magneto-priming as well as to drought and salt stress, in terms of the image differences obtained among treatments. It is known that, on the one hand, drought and salt stress have negative effects on the seeds of almost all species and, on the other hand, that magneto-priming enhances seed germination parameters. This study therefore aimed to relate hyperspectral imaging values, which are neither positive nor negative in themselves, to the effects mentioned above. Two main conclusions were reached. Firstly, hyperspectral imaging is a feasible method for exploring the Triticale seed structure and for distinguishing between drought and salt stress treatments, in line with the data variability obtained. Secondly, the lower spectral reflectance observed for some treatments in the 400–1000 nm range results from the larger number of chemical compounds in the seed, which could be related to magneto-priming.
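The treatment comparison reduces to averaging per-pixel spectra within each group and comparing the mean spectra band by band. The sketch below uses tiny hypothetical three-band spectra in place of full 400–1000 nm hypercubes; the group names and values are invented for illustration.

```python
def mean_spectrum(cube):
    """Average a list of per-pixel spectra into one mean spectrum."""
    n = len(cube)
    return [sum(px[b] for px in cube) / n for b in range(len(cube[0]))]

# Hypothetical per-pixel reflectance spectra (3 bands sampled from 400-1000 nm)
control = [[0.42, 0.55, 0.61], [0.40, 0.53, 0.60]]
salt_stress = [[0.35, 0.47, 0.52], [0.33, 0.45, 0.50]]

# Band-wise difference of the group means: positive values mean the
# stressed seeds reflect less than the controls in that band.
diff = [c - s for c, s in zip(mean_spectrum(control), mean_spectrum(salt_stress))]
print(diff)  # positive differences across all bands
```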


Author(s):  
Yogananda Patnaik ◽  
Dipti Patra

Video coding is an essential part of modern communication systems, with vital roles in video streaming, multimedia, video conferencing, and related fields. Scalable Video Coding (SVC) is an active research area owing to its wide application in multimedia devices and strong public demand. The proposed coding technique is capable of removing the spatio-temporal redundancy of a video sequence. In the Discrete Bandelet Transform (DBT), directions are modelled by a three-directional vector field known as the structural flow; regularity follows this flow where the data entropy is low. The wavelet vector decomposition of geometrically ordered data yields fewer significant coefficients. The directions of geometrical regularity are represented by a two-dimensional vector, and these directions are approximated with spline functions. This paper presents a novel SVC technique that exploits the DBT. The bandelet coefficients are further encoded with a Set Partitioning in Hierarchical Trees (SPIHT) encoder, followed by a global thresholding mechanism. The proposed method is verified on several benchmark datasets using standard performance measures, and the experimental results demonstrate its superiority over the state of the art.
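The global thresholding step mentioned above can be sketched in isolation: coefficients whose magnitude falls below a fraction of the largest coefficient are zeroed before entropy coding. The ratio and the toy coefficient list are assumptions for illustration; the paper's actual thresholding operates on bandelet coefficients inside the SPIHT pipeline.

```python
def global_threshold(coeffs, ratio):
    """Zero out transform coefficients below ratio * max|coefficient|,
    a stand-in for the global thresholding step before encoding."""
    t = ratio * max(abs(c) for c in coeffs)
    return [c if abs(c) >= t else 0.0 for c in coeffs]

# Usage: with ratio=0.25 the threshold is 2.0, so only the three
# significant coefficients survive.
coeffs = [8.0, -0.3, 0.1, 4.5, -0.05, 2.2, 0.4]
print(global_threshold(coeffs, ratio=0.25))  # [8.0, 0.0, 0.0, 4.5, 0.0, 2.2, 0.0]
```

Discarding insignificant coefficients this way is what makes the "fewer significant coefficients" property of geometrically ordered data pay off in compression.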


Author(s):  
Xiaoyang Jia ◽  
Mark Woods ◽  
Hongren Gong ◽  
Di Zhu ◽  
Wei Hu ◽  
...  

The use of pavement condition data to support maintenance and resurfacing strategies and to justify budget needs becomes more crucial as state highway agencies (SHAs) adopt data-driven approaches. It is therefore important to understand and evaluate the influence of data variability on pavement management activities. However, owing to the huge amount of data collected annually, it is challenging for SHAs to evaluate the influence of data-collection variability on network-level pavement evaluation. In this paper, network-level parallel tests were employed to evaluate data-collection variability. Based on the data sets from the parallel tests, classification models were constructed to identify segments subject to inconsistent ratings resulting from data-collection variability. These models were then used to evaluate the influence of data variability on pavement evaluation. The results indicated that the variability of longitudinal cracking was influenced by longitudinal lane joints, lateral wandering, and lane measurement zones. The influence of data variability on condition evaluation was more significant for state routes than for interstates. However, high variability in individual metrics does not necessarily lead to high variability in combined metrics.
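The core of a parallel-test comparison is flagging segments whose scores from the two collection runs disagree by more than some tolerance. The segment names, scores, and tolerance below are hypothetical; the paper's classification models use richer features than a single score difference.

```python
def inconsistent_segments(run_a, run_b, tolerance):
    """Flag segments whose condition scores from two parallel
    data-collection runs differ by more than `tolerance`."""
    return [seg for seg in run_a
            if abs(run_a[seg] - run_b[seg]) > tolerance]

# Usage: two parallel runs over the same three segments; only seg2
# shows a disagreement large enough to be called inconsistent.
run_a = {"seg1": 88.0, "seg2": 74.5, "seg3": 91.2}
run_b = {"seg1": 87.2, "seg2": 66.0, "seg3": 90.8}
print(inconsistent_segments(run_a, run_b, tolerance=5.0))  # ['seg2']
```

Segments flagged this way are the candidates whose ratings are driven by collection variability (lane joints, lateral wandering) rather than by actual pavement condition.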

