The Use of Correlated Binomial Distribution in Estimating Error Rates for Firearm Evidence Identification

Author(s):  
Nien Fan Zhang

In the branch of forensic science known as firearm evidence identification, estimating error rates is a fundamental challenge. Recently, a new quantitative approach known as the congruent matching cells (CMC) method was developed to improve the accuracy of ballistic identifications and provide a basis for estimating error rates. The key to estimating error rates is finding an appropriate probability distribution for the relative frequency of observed CMCs on a relevant measured firearm surface, such as the breech face of a cartridge case. Several probability models based on the assumption of independence between cell pair comparisons have been proposed, but that independence assumption may not hold for the cell pair comparisons produced by the CMC method. This article proposes statistical models based on dependent Bernoulli trials, along with the corresponding methodology for parameter estimation. To demonstrate the potential improvement from the dependent Bernoulli trial model, the methodology is applied to an actual data set of fired cartridge cases.
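To make the independence issue concrete, here is a minimal Python sketch (not the paper's model) contrasting the independent-trials binomial with one standard correlated-binomial family, the beta-binomial, in which a shared latent probability induces dependence among cell pair outcomes. The cell count, match probability, and correlation below are assumed values for illustration.

```python
# A minimal sketch, assuming a beta-binomial form of dependence (one
# standard correlated-binomial family), not the paper's exact model.
import numpy as np

rng = np.random.default_rng(0)
n_cells = 30   # cell pairs compared per cartridge case (assumed value)
p = 0.1        # marginal probability a cell pair is declared a CMC (assumed)
rho = 0.2      # intra-case correlation among cell pair outcomes (assumed)

# Independent model: CMC count ~ Binomial(n_cells, p)
indep_counts = rng.binomial(n_cells, p, size=100_000)

# Dependent model: draw a case-specific probability from a Beta distribution
# parameterized so that E[p_i] = p and the pairwise correlation equals rho,
# then count CMCs given that shared probability.
a = p * (1 - rho) / rho
b = (1 - p) * (1 - rho) / rho
p_case = rng.beta(a, b, size=100_000)
dep_counts = rng.binomial(n_cells, p_case)

# Dependence inflates the variance (overdispersion) relative to the binomial,
# which changes tail probabilities and hence the implied error rates.
print(indep_counts.var(), dep_counts.var())
```

The variance comparison is the point: an independence-based model understates the spread of CMC counts when comparisons are positively correlated, which distorts tail-based error rate estimates.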

Author(s):  
Paul Caster ◽  
Randal J. Elder ◽  
Diane J. Janvrin

This exploratory study examines automation of the bank confirmation process using a longitudinal data set from the largest third-party U.S. confirmation service provider, supplemented with informal interviews with practitioners. We find a significant increase in electronic confirmation use in the U.S. and internationally. Errors requiring reconfirmation were less than two percent of all electronic confirmations. Errors made by auditors were almost five times more likely than errors made by bank employees. Most auditor errors involved use of an invalid account number, although invalid client contact, invalid request, and invalid company name errors have increased recently. Big 4 auditors made significantly more confirmation errors than did auditors at non-Big 4 national firms. Error rates and error types do not vary between confirmations initiated in the U.S. and those initiated internationally. Three themes emerged for future research: authentication of evidence, global differences in technology use, and technology adoption across firms of different sizes.
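The kind of significance comparison reported above (e.g., Big 4 versus non-Big 4 error rates) is commonly carried out with a two-proportion z-test; the sketch below uses made-up counts, not the study's data.

```python
# Illustrative only: two-proportion z-test with hypothetical counts.
from math import sqrt
from scipy.stats import norm

err_a, n_a = 450, 20_000   # hypothetical errors / confirmations, group A
err_b, n_b = 180, 15_000   # hypothetical errors / confirmations, group B

p_a, p_b = err_a / n_a, err_b / n_b
p_pool = (err_a + err_b) / (n_a + n_b)            # pooled rate under H0
se = sqrt(p_pool * (1 - p_pool) * (1 / n_a + 1 / n_b))
z = (p_a - p_b) / se
p_value = 2 * norm.sf(abs(z))                     # two-sided p-value
print(f"rates: {p_a:.3%} vs {p_b:.3%}, z = {z:.2f}, p = {p_value:.2g}")
```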


2019 ◽  
Vol 21 (3) ◽  
pp. 851-862 ◽  
Author(s):  
Charalampos Papachristou ◽  
Swati Biswas

Dissecting the genetic mechanism underlying a complex disease hinges on discovering gene–environment interactions (GXE). However, detecting GXE is a challenging problem, especially when the genetic variants under study are rare. Haplotype-based tests have several advantages over the so-called collapsing tests for detecting rare variants, as highlighted in recent literature. Thus, it is of practical interest to compare haplotype-based tests for detecting GXE, including recent ones developed specifically for rare haplotypes. We compare the following methods: haplo.glm, hapassoc, HapReg, Bayesian hierarchical generalized linear model (BhGLM), and logistic Bayesian LASSO (LBL). We simulate data under different types of association scenarios and levels of gene–environment dependence. We find that when the type I error rates are controlled to be the same for all methods, LBL is the most powerful method for detecting GXE. We applied the methods to a lung cancer data set, focusing on region 15q25.1, as the literature suggests that it interacts with smoking to affect lung cancer susceptibility and that it is associated with smoking behavior. LBL and BhGLM were able to detect a rare haplotype–smoking interaction in this region. We also analyzed sequence data from the Dallas Heart Study, a population-based multi-ethnic study. Specifically, we considered haplotype blocks in the gene ANGPTL4 for association with the trait serum triglyceride, using ethnicity as a covariate. Only LBL found interactions of haplotypes with race (Hispanic). Thus, in general, LBL seems to be the best method for detecting GXE among the ones studied here, although it requires the most computation time.
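For orientation, a minimal sketch of the generic haplotype GXE formulation follows: a logistic model with haplotype, environment, and interaction terms, in the spirit of haplo.glm (LBL additionally places Bayesian LASSO priors to stabilize rare-haplotype effects). All data below are simulated, not from the cited studies.

```python
# A minimal sketch of a haplotype-by-environment logistic model.
# Simulated data; not the cited methods' actual implementations.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(1)
n = 2000
hap = rng.binomial(2, 0.05, n)          # copies of a rare target haplotype
env = rng.binomial(1, 0.4, n)           # binary exposure, e.g., smoking
logit = -2 + 0.1 * hap + 0.2 * env + 0.9 * hap * env
y = rng.binomial(1, 1 / (1 + np.exp(-logit)))

df = pd.DataFrame({"y": y, "hap": hap, "env": env})
fit = smf.logit("y ~ hap * env", data=df).fit(disp=0)
print(fit.summary().tables[1])          # the hap:env row is the GXE test
```

With rare haplotypes the interaction cell is sparse, which is exactly why unpenalized fits like this become unstable and regularized approaches such as LBL gain power.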


2016 ◽  
Vol 5 (5) ◽  
pp. 16 ◽  
Author(s):  
Guolong Zhao

To evaluate a drug, statistical significance alone is insufficient; clinical significance is also necessary. This paper explains how to analyze clinical data while considering both statistical and clinical significance. The analysis combines a confidence interval under the null hypothesis with one under a non-null hypothesis. The combination yields one of four possible results: (i) both significant, (ii) significant only in the former, (iii) significant only in the latter, or (iv) neither significant. The four results constitute a quadripartite procedure. Corresponding tests are described for characterizing Type I error rates and power, and the empirical coverage is exhibited by Monte Carlo simulations. In superiority trials, the four results are interpreted as clinical superiority, statistical superiority, non-superiority, and indeterminate, respectively; the interpretation is reversed in inferiority trials. The combination entails a deflated Type I error rate, decreased power, and an increased sample size. The four results may be helpful for a meticulous evaluation of drugs. Of these, non-superiority is another profile of equivalence, so it can also be used to interpret equivalence. This approach may offer a convenient way to interpret discordant cases, although a larger data set is usually needed. An example is taken from a real trial in naturally acquired influenza.
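A rough sketch of the quadripartite logic in a superiority trial is given below; the margin delta, the interval construction, and the outcome mapping are assumptions for illustration and may differ from the paper's exact procedure.

```python
# A sketch of mapping one confidence interval, compared against both the
# null value 0 and a clinical margin delta, to the four outcomes.
from scipy.stats import norm

def classify(diff, se, delta, alpha=0.05):
    """Assign one of the four quadripartite outcomes in a superiority trial
    (illustrative construction; the paper's details may differ)."""
    z = norm.ppf(1 - alpha / 2)
    lo, hi = diff - z * se, diff + z * se   # two-sided (1 - alpha) CI
    above_null = lo > 0        # CI excludes 0: statistically superior
    above_margin = lo > delta  # CI excludes the margin: clinically superior
    below_margin = hi < delta  # CI lies below the margin: non-superior
    if above_null and above_margin:
        return "clinical superiority"       # (i) both significant
    if above_null:
        return "statistical superiority"    # (ii) significant only vs null
    if below_margin:
        return "non-superiority"            # (iii) significant only vs margin
    return "indeterminate"                  # (iv) neither significant

print(classify(diff=1.2, se=0.3, delta=0.5))  # -> clinical superiority
```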


Genome ◽  
2018 ◽  
Vol 61 (1) ◽  
pp. 21-31 ◽  
Author(s):  
Jason Gibbs

There is an ongoing campaign to DNA barcode the world’s >20 000 bee species. Recent revisions of Lasioglossum (Dialictus) (Hymenoptera: Halictidae) for Canada and the eastern United States were completed using integrative taxonomy. DNA barcode data from 110 species of L. (Dialictus) are examined for their value in specimen identification and in discovering additional taxonomic diversity. Specimen identification success was estimated using the best close match method; error rates were 20% relative to current taxonomic understanding. Barcode Index Numbers (BINs) assigned using Refined Single Linkage Analysis (RESL) and barcode gaps identified using the Automatic Barcode Gap Discovery (ABGD) method were also assessed. RESL was incongruent with current taxonomy for 44.5% of species, although some cryptic diversity may exist. Forty-three of 110 species were part of merged BINs containing multiple species. The barcode gap is non-existent for the data set as a whole, and ABGD showed levels of discordance similar to RESL. The viridatum species-group is particularly problematic, such that DNA barcodes alone would be misleading for species delimitation and specimen identification. Character-based methods using fixed nucleotide substitutions could improve specimen identification success in some cases. The use of DNA barcoding for species discovery in standard taxonomic practice, in the absence of a well-defined barcode gap, is discussed.
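For illustration, the barcode gap criterion can be checked by comparing the largest intraspecific distance with the smallest interspecific distance. The sketch below uses toy sequences and uncorrected p-distances; real analyses use COI barcodes, model-corrected distances, and tools such as ABGD.

```python
# A toy barcode-gap check with made-up sequences.
from itertools import combinations

seqs = {
    ("sp_A", 1): "ACGTACGTAC",
    ("sp_A", 2): "ACGTACGTAA",
    ("sp_B", 1): "ACGTTCGTCC",
    ("sp_B", 2): "ACGTTCGTCA",
}

def p_dist(a, b):
    """Proportion of differing sites between equal-length sequences."""
    return sum(x != y for x, y in zip(a, b)) / len(a)

intra, inter = [], []
for (k1, s1), (k2, s2) in combinations(seqs.items(), 2):
    (intra if k1[0] == k2[0] else inter).append(p_dist(s1, s2))

# A barcode gap exists only if the closest pair of different species is more
# distant than the most divergent pair within a species.
print(max(intra), min(inter), min(inter) > max(intra))
```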


1980 ◽  
Vol 5 (2) ◽  
pp. 129-156 ◽  
Author(s):  
George B. Macready ◽  
C. Mitchell Dayton

A variety of latent class models presented during the last 10 years are restricted forms of a more general class of probability models. Each of these models involves an a priori dependency structure among a set of dichotomously scored tasks that defines latent class response patterns across the tasks. In turn, the probabilities related to these latent class patterns, along with a set of “omission” and “intrusion” error rates for each task, are the parameters used in defining models within this general class. One problem in using these models is that the defining parameters for a specific model may not be identifiable. To deal with this problem, researchers have considered curtailing the form of the model of interest by placing restrictions on the defining parameters. The purpose of this paper is to describe a two-stage conditional estimation procedure that yields reasonable estimates of specific models even though they may be nonidentifiable. The procedure involves two stages: (a) establishment of initial parameter estimates and (b) step-wise maximum likelihood solutions for latent class probabilities and classification errors, iterated until stable parameter estimates are obtained across successive iterations.
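For orientation, the sketch below shows the iterative core of such estimation for a two-class model: alternate between posterior class membership and per-task response rates until the parameters stabilize. This is a plain EM illustration under assumed simulation settings, not the paper's two-stage conditional procedure (which additionally fixes initial estimates to cope with nonidentifiability).

```python
# A compact EM sketch for a two-class latent class model with per-task
# error rates. Simulated data; illustrative only.
import numpy as np

rng = np.random.default_rng(2)

# 500 examinees, 4 dichotomous tasks, two latent classes: "masters" err
# through omissions, "nonmasters" succeed through intrusions.
n, tasks = 500, 4
true_class = rng.binomial(1, 0.6, n)                    # 1 = master
p_correct = np.where(true_class[:, None] == 1, 0.9, 0.2)
X = rng.binomial(1, np.broadcast_to(p_correct, (n, tasks)))

pi = 0.5                                  # P(master), initial guess
theta = np.array([[0.8] * tasks,          # P(correct | master)
                  [0.3] * tasks])         # P(correct | nonmaster)
for _ in range(200):
    # E-step: posterior probability of mastery for each examinee
    like1 = np.prod(theta[0] ** X * (1 - theta[0]) ** (1 - X), axis=1)
    like0 = np.prod(theta[1] ** X * (1 - theta[1]) ** (1 - X), axis=1)
    post = pi * like1 / (pi * like1 + (1 - pi) * like0)
    # M-step: update class probability and per-task correctness rates
    pi_new = post.mean()
    theta_new = np.vstack([
        (post[:, None] * X).sum(0) / post.sum(),
        ((1 - post)[:, None] * X).sum(0) / (1 - post).sum(),
    ])
    if abs(pi_new - pi) < 1e-8 and np.abs(theta_new - theta).max() < 1e-8:
        break                             # stable across successive iterations
    pi, theta = pi_new, theta_new

print(f"P(master) ~ {pi:.2f}")
print("omission error rates:", np.round(1 - theta[0], 2))
print("intrusion rates:     ", np.round(theta[1], 2))
```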


2018 ◽  
Vol 28 (8) ◽  
pp. 2418-2438
Author(s):  
Xi Shen ◽  
Chang-Xing Ma ◽  
Kam C Yuen ◽  
Guo-Liang Tian

Bilateral correlated data are often encountered in medical research, such as ophthalmologic (or otolaryngologic) studies, in which each unit contributes information from paired organs to the data analysis, and the measurements from such paired organs are generally highly correlated. Various statistical methods have been developed to account for intra-class correlation in bilateral correlated data analysis. In practice, it is also important to adjust for confounders in statistical inference, since ignoring either the intra-class correlation or the confounding effect may lead to biased results. In this article, we propose three approaches for testing the common risk difference for stratified bilateral correlated data under the assumption of equal correlation. Five confidence intervals for the common difference of two proportions are derived. The performance of the proposed test methods and confidence interval estimators is evaluated by Monte Carlo simulations. The simulation results show that the score test statistic outperforms the other statistics, in the sense that it has robust type I error rates and high power. The score confidence interval induced from the score test statistic performs satisfactorily in terms of coverage probabilities with reasonable interval widths. A real data set from an otolaryngologic study is used to illustrate the proposed methodologies.
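To illustrate the data structure, the following sketch generates paired-organ binary outcomes with a common intra-class correlation and computes a naive risk difference. The parameters are made up, and the paper's score test is not reproduced here.

```python
# A sketch of bilateral binary data under equal intra-class correlation.
import numpy as np

rng = np.random.default_rng(3)

def bilateral(n, p, rho):
    """Paired outcomes with P(success) = p and corr(left, right) = rho,
    built by copying the left outcome to the right with probability rho."""
    left = rng.binomial(1, p, n)
    copy = rng.binomial(1, rho, n)
    right = np.where(copy == 1, left, rng.binomial(1, p, n))
    return np.column_stack([left, right])

treat = bilateral(300, 0.35, 0.4)
ctrl = bilateral(300, 0.25, 0.4)
delta_hat = treat.mean() - ctrl.mean()    # naive risk difference

# Ignoring rho understates the variance of delta_hat: each person contributes
# two correlated, not independent, observations.
print(f"estimated risk difference: {delta_hat:.3f}")
```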


2016 ◽  
Vol 77 (1) ◽  
pp. 54-81 ◽  
Author(s):  
Sandip Sinharay ◽  
Matthew S. Johnson

In a pioneering research article, Wollack and colleagues suggested the “erasure detection index” (EDI) to detect test tampering. The EDI can be used with or without a continuity correction and is assumed to follow the standard normal distribution under the null hypothesis of no test tampering. When used without a continuity correction, the EDI often has inflated Type I error rates. When used with a continuity correction, the EDI has satisfactory Type I error rates but lower power. This article suggests three methods for detecting test tampering that do not rely on the assumption of a standard normal distribution under the null hypothesis. A detailed simulation study demonstrates that each suggested method performs slightly better than the EDI. The EDI and the suggested methods were applied to a real data set. The suggested methods, although more computation-intensive than the EDI, seem promising for detecting test tampering.
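The exact EDI formula is not reproduced in this summary, but the general pattern it follows can be sketched: a count of suspicious wrong-to-right erasures is standardized against its null mean and standard deviation, with or without a 0.5 continuity correction, and referred to the standard normal. All inputs below are hypothetical.

```python
# A sketch of a continuity-corrected normal approximation for a count;
# hypothetical values, not the actual EDI definition.
from math import sqrt
from scipy.stats import norm

def z_statistic(observed, n, p, continuity=True):
    """Normal-approximation z for a count ~ Binomial(n, p) under H0."""
    mu, sd = n * p, sqrt(n * p * (1 - p))
    cc = 0.5 if continuity else 0.0
    return (observed - mu - cc) / sd

obs, n, p = 12, 40, 0.15        # hypothetical erasure data for one examinee
for cc in (False, True):
    z = z_statistic(obs, n, p, continuity=cc)
    print(f"continuity={cc}: z = {z:.2f}, p = {norm.sf(z):.4f}")
```

The correction shrinks the statistic, which is why it tames the Type I error rate at the cost of power.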


2016 ◽  
Author(s):  
Aleksey V. Zimin ◽  
Daniela Puiu ◽  
Ming-Cheng Luo ◽  
Tingting Zhu ◽  
Sergey Koren ◽  
...  

Long sequencing reads generated by single-molecule sequencing technology offer the possibility of dramatically improving the contiguity of genome assemblies. The biggest challenge today is that long reads have relatively high error rates, currently around 15%. These high error rates make it difficult to use the data alone, particularly with highly repetitive plant genomes. Errors in the raw data can lead to insertion or deletion errors (indels) in the consensus genome sequence, which in turn create significant problems for downstream analysis; for example, a single indel may shift the reading frame and incorrectly truncate a protein sequence. Here we describe an algorithm that addresses the high error rate by combining long, high-error reads with shorter but much more accurate Illumina sequencing reads, whose error rates average <1%. Our hybrid assembly algorithm combines these two types of reads to construct mega-reads, which are both long and accurate, and then assembles the mega-reads using the CABOG assembler, which was designed for long reads. We apply this technique to a large data set of Illumina and PacBio sequences from Aegilops tauschii, a species with a large and highly repetitive genome that has resisted previous attempts at assembly. We show that the resulting assembled contigs are far larger than in any previous assembly, with an N50 contig size of 486,807. We compare the contigs to independently produced optical maps to evaluate their large-scale accuracy, and to a set of high-quality bacterial artificial chromosome (BAC)-based assemblies to evaluate base-level accuracy.
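The frame-shift problem mentioned above is easy to demonstrate: the toy sketch below, with a hypothetical sequence and a tiny codon table, shows a single-base deletion truncating the translated protein.

```python
# A toy demonstration of an indel shifting the reading frame and causing a
# premature stop codon. Standard-code subset; made-up sequence.
CODONS = {"ATG": "M", "AAA": "K", "CTG": "L", "ACG": "T",
          "AAC": "N", "TAA": "*", "TGA": "*"}

def translate(dna):
    prot = []
    for i in range(0, len(dna) - 2, 3):
        aa = CODONS.get(dna[i:i + 3], "X")   # X = codon outside our table
        if aa == "*":
            break                            # stop codon ends translation
        prot.append(aa)
    return "".join(prot)

correct = "ATGAAACTGACGTAA"          # ATG AAA CTG ACG TAA -> "MKLT"
with_indel = correct[:5] + correct[6:]   # one base deleted: frame shifts
print(translate(correct))            # MKLT
print(translate(with_indel))         # MN  (premature stop: truncated protein)
```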


2019 ◽  
pp. 480-489
Author(s):  
P. Hiverts

The increasing number of homemade sub-machine guns in caliber 9 mm Parabellum received for examination, as well as cartridge cases discharged from such weapons, has made it necessary to single out the marks that can be used for group identification. The article presents the results of generalizing and systematizing the marks observed during examinations made in the laboratory. This work singles out the marks and traces that can be observed on the cartridge case surface and used to identify the type and model of the firearm. The construction features of homemade sub-machine guns were investigated. Among these features are a construction based on an open-bolt mechanical scheme, the use of a static firing pin, and methods of assembling the barrel and fixing it into the body of the weapon, which can lead to the appearance of a hole in the chamber, etc. The article also shows how tool processing leaves distinctive marks and traces on the breech face; these marks can likewise be used for group identification. Based on the results of the research, the article distinguishes between the main signs that can be used for group identification, the signs similar to those known in factory-made weapons, and the signs typical of homemade firearms. The first group consists of the marks of the ejector and extractor cutouts and the firing pin mark; for these signs, the article describes special characteristics that make it possible to distinguish them from the marks commonly observed on factory-made examples. The marks typical of homemade sub-machine guns include breech face marks, cartridge case deformation caused by differences between the sizes of the chamber and the cartridge, cartridge case deformation during firing when the cartridge case is not supported by the chamber, perforation of the sidewall of the cartridge case, etc. The article also discusses cartridge case comparison and individual identification. A great variety of traces and marks resulting from low-quality tool processing was revealed, which can make comparison more difficult. However, the large number of individual marks observed on the cartridge cases makes it possible to reach well-grounded conclusions. Key words: cartridge cases, sub-machine guns, type and kind of weapon, expert practice.

