Online Signature Verification on MOBISIG Finger-Drawn Signature Corpus

2018 ◽  
Vol 2018 ◽  
pp. 1-15 ◽  
Author(s):  
Margit Antal ◽  
László Zsolt Szabó ◽  
Tünde Tordai

We present MOBISIG, a pseudosignature dataset containing finger-drawn signatures from 83 users captured with a capacitive touchscreen-based mobile device. The database was captured in three sessions, resulting in 45 genuine signatures and 20 skilled forgeries for each user. The database was evaluated with two state-of-the-art methods: a function-based system using local features and a feature-based system using global features. Two types of equal error rate computation were performed: one using a global threshold and the other using user-specific thresholds. The lowest equal error rates were 0.01% against random forgeries and 5.81% against skilled forgeries, using user-specific thresholds computed a posteriori. However, these equal error rates rose significantly, to 1.68% (random forgeries) and 14.31% (skilled forgeries), when global thresholds were used. The same evaluation protocol was applied to the publicly available DooDB dataset. Besides the verification performance evaluations conducted on the two finger-drawn datasets, we evaluated the quality of the samples and the users of both datasets using basic quality measures. The results show that finger-drawn signatures can be used by biometric systems with reasonable accuracy.
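As an illustrative aside (not taken from the paper), the equal error rate computation described above can be sketched as follows; the score arrays and function names are hypothetical, and the paper's actual verification scores are not reproduced here:

```python
import numpy as np

def equal_error_rate(genuine_scores, forgery_scores):
    """Return the approximate EER, assuming higher scores mean 'genuine'."""
    thresholds = np.sort(np.concatenate([genuine_scores, forgery_scores]))
    best_gap, eer = np.inf, None
    for t in thresholds:
        frr = np.mean(genuine_scores < t)    # genuine samples rejected
        far = np.mean(forgery_scores >= t)   # forgeries accepted
        if abs(far - frr) < best_gap:
            best_gap, eer = abs(far - frr), (far + frr) / 2
    return eer

# Global threshold: pool all users' scores and compute one EER.
# User-specific thresholds: compute equal_error_rate() per user, then
# average -- typically lower, consistent with the gap reported above.
```

A user-specific threshold adapts to each user's score distribution, which is why the pooled (global) threshold produces noticeably higher error rates.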

1979 ◽  
Vol 73 (10) ◽  
pp. 389-399
Author(s):  
Gregory L. Goodrich ◽  
Richard R. Bennett ◽  
William R. De L'aune ◽  
Harvey Lauer ◽  
Leonard Mowinski

This study was designed to assess the Kurzweil Reading Machine's ability to read three different type styles produced by five different means. The results indicate that the Kurzweil Reading Machines tested have different error rates depending upon the means of producing the copy and upon the type style used; there was a significant interaction between copy method and type style. The interaction indicates that some type styles are better read when the copy is made by one means rather than another. Error rates varied between less than one percent and more than twenty percent. In general, the user will find that high quality printed materials will be read with a relatively high level of accuracy, but as the quality of the material decreases, the number of errors made by the machine also increases. As this error rate increases, the user will find it increasingly difficult to understand the spoken output.


Author(s):  
Hans Van Halteren

This paper demonstrates how an author recognition system could be benchmarked, as a prerequisite for admission in court. The system used in the demonstration is the FEDERALES system, and the experimental data used were taken from the British National Corpus. The system was given several tasks, namely attributing a text sample to a specific text, verifying that a text sample was taken from a specific text, and verifying that a text sample was produced by a specific author. For the former two tasks, 1,099 texts with at least 10,000 words were used; for the latter, 1,366 texts with known authors were used, verified against models for the 28 known authors for whom there were three or more texts. The experimental tasks were performed with different sampling methods (sequential samples or samples of concatenated random sentences), different sample sizes (1,000, 500, 250 or 125 words), varying amounts of training material (between 2 and 20 samples) and varying amounts of test material (1 or 3 samples). Under the best conditions, the system performed very well: with 7 training and 3 test samples of 1,000 words of randomly selected sentences, text attribution had an equal error rate of 0.06% and text verification an equal error rate of 1.3%; with 20 training and 3 test samples of 1,000 words of randomly selected sentences, author verification had an equal error rate of 7.5%. Under the worst conditions, with 2 training and 1 test sample of 125 words of sequential text, equal error rates for text attribution and text verification were 26.6% and 42.2%, and author verification did not perform better than chance. Furthermore, the quality degradation curves with slowly worsening conditions were not smooth, but contained steep drops.
All in all, the results show the importance of having a benchmark which is as similar as possible to the actual court material for which the system is to be used, since the measured system quality differed greatly between evaluation scenarios and system degradation could not be predicted easily on the basis of the chosen scenario parameters.


Signature recognition is among the most fundamental biometric recognition techniques, is a key part of current business practice, and is considered a noninvasive and nonthreatening process. Numerous methods for online signature recognition have been presented previously; however, the accuracy of recognition systems still needs to be improved, and the equal error rate further reduced. To address these issues, a novel classification method is needed. In this paper, Kernel Based k-Nearest Neighbor (K-kNN) is presented for online signature recognition. For experimental analysis, two datasets are utilized: the ICDAR Deutsche dataset and the ACT college dataset. Simulation results show that the proposed recognition technique outperforms existing techniques in terms of accuracy and equal error rate. Keywords: online signature recognition, Kernel Based k-Nearest Neighbor (K-kNN), accuracy, equal error rate.
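For illustration only, a kernel-based k-nearest-neighbor classifier of the general kind named in this abstract can be sketched as below. The abstract does not specify the features, kernel, or parameters used, so the RBF kernel, the kernel-induced distance, and all names here are assumptions:

```python
import numpy as np
from collections import Counter

def rbf_kernel(x, z, gamma=1.0):
    # Gaussian (RBF) kernel -- an assumed choice, not specified in the paper.
    return np.exp(-gamma * np.sum((x - z) ** 2))

def kernel_knn_predict(X_train, y_train, x, k=3, kernel=rbf_kernel):
    # Kernel-induced squared distance: K(x,x) - 2*K(x,z) + K(z,z),
    # i.e. Euclidean distance in the kernel's feature space.
    d2 = [kernel(x, x) - 2 * kernel(x, z) + kernel(z, z) for z in X_train]
    nearest = np.argsort(d2)[:k]
    votes = Counter(y_train[i] for i in nearest)
    return votes.most_common(1)[0][0]
```

Replacing the plain Euclidean distance of standard kNN with a kernel-induced distance lets the classifier separate classes that are not linearly separable in the raw feature space.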


2002 ◽  
Vol 126 (7) ◽  
pp. 809-815 ◽  
Author(s):  
Peter J. Howanitz ◽  
Stephen W. Renner ◽  
Molly K. Walsh

Abstract Context.—Identification of patients is one of the first steps in ensuring the accuracy of laboratory results. In the United States, hospitalized patients wear wristbands to aid in their identification, but wristband errors are frequently found. Objective.—To investigate if continuous monitoring of wristband errors by participants of the College of American Pathologists (CAP) Q-Tracks program results in lower wristband error rates. Setting.—A total of 217 institutions voluntarily participating in the CAP Q-Tracks interlaboratory quality improvement program in 1999 and 2000. Design.—Participants completed a demographic form, answered a questionnaire, collected wristband data, and at the end of the year, best and most improved performers answered another questionnaire seeking suggestions for improvement. Each institution's phlebotomists inspected wristbands for errors before performing phlebotomy and recorded the number of patients with wristband errors. On a monthly basis, participants submitted data to the CAP for data processing, and at the end of each quarter, participants received summarized comparisons. At the end of each year, participants also received a critique of the results along with suggestions for improvement. Main Outcome Measures.—The percentage of wristband errors by quarter, types of wristband errors, and suggestions for improvement. Results.—During 2 years, 1,757,730 wristbands were examined, and 45,197 wristband errors were found. The participants' mean wristband error rate for the first quarter in 1999 was 7.40%; by the eighth quarter, the mean wristband error rate had fallen to 3.05% (P < .001). Continuous improvement occurred in each quarter for participants in the 1999 and 2000 program and in 7 of 8 quarters for those who participated in both 1999 and 2000. Missing wristbands accounted for 71.6% of wristband errors, and best performers usually had wristband error rates under 1.0%.
The suggestion for improvement provided by the largest number of best and most improved performers was that phlebotomists should refuse to perform phlebotomy on a patient when a wristband error is detected. Conclusions.—The wristband error rate decreased markedly when this rate was monitored continuously using the CAP Q-Tracks program. The Q-Tracks program provides a useful tool for improving the quality of services in anatomic pathology and laboratory medicine.


Sensors ◽  
2019 ◽  
Vol 19 (11) ◽  
pp. 2491
Author(s):  
Xinnian Wang ◽  
Yanjun Wu ◽  
Tao Zhang

As a kind of forensic evidence, shoeprints are treated as being as important as fingerprint and DNA evidence in forensic investigations. Shoeprint verification is used to determine whether two shoeprints could, or could not, have been made by the same shoe. Successful shoeprint verification has tremendous evidentiary value, and the result can link a suspect to a crime, or even link crime scenes to each other. In forensic practice, shoeprint verification is performed manually by forensic experts; however, it is too dependent on experts' experience. This is a meaningful and challenging problem, and there are few attempts to tackle it in the literature. In this paper, we propose a multi-layer feature-based method to conduct shoeprint verification automatically. First, we extracted multi-layer features; then we conducted multi-layer feature matching and calculated the total similarity score. Finally, we drew a verification conclusion according to the total similarity score. We conducted extensive experiments to evaluate the effectiveness of the proposed method on two shoeprint datasets. Experimental results showed that the proposed method achieved good performance with an equal error rate (EER) of 3.2% on the MUES-SV1KR2R dataset and an EER of 10.9% on the MUES-SV2HS2S dataset.


2019 ◽  
Vol 28 (4) ◽  
pp. 1411-1431 ◽  
Author(s):  
Lauren Bislick ◽  
William D. Hula

Purpose This retrospective analysis examined group differences in error rate across 4 contextual variables (clusters vs. singletons, syllable position, number of syllables, and articulatory phonetic features) in adults with apraxia of speech (AOS) and adults with aphasia only. Group differences in the distribution of error type across contextual variables were also examined. Method Ten individuals with acquired AOS and aphasia and 11 individuals with aphasia participated in this study. In the context of a 2-group experimental design, the influence of 4 contextual variables on error rate and error type distribution was examined via repetition of 29 multisyllabic words. Error rates were analyzed using Bayesian methods, whereas distribution of error type was examined via descriptive statistics. Results There were 4 findings of robust differences between the 2 groups. These differences were found for syllable position, number of syllables, manner of articulation, and voicing. Group differences were less robust for clusters versus singletons and place of articulation. Results of error type distribution show a high proportion of distortion and substitution errors in speakers with AOS and a high proportion of substitution and omission errors in speakers with aphasia. Conclusion Findings add to the continued effort to improve the understanding and assessment of AOS and aphasia. Several contextual variables more consistently influenced breakdown in participants with AOS compared to participants with aphasia and should be considered during the diagnostic process. Supplemental Material https://doi.org/10.23641/asha.9701690


Author(s):  
Masrukin Masrukin ◽  
Hermanto Hermanto

Customer satisfaction is influenced by service quality factors. This study aims to determine and analyze how much service quality influences the satisfaction felt by customers who use the Poor Rice (Raskin) service at the office of the Public Logistics Agency (Perum BULOG) in Sampit, East Kotawaringin Regency. The research methods used in this study were observation, questionnaires, and documentation using a Likert scale; the sample size was determined using a 5% error rate, yielding 213 samples. The hypothesis was tested statistically with the Pearson product-moment formula. The results showed a very strong correlation between service quality and customer satisfaction at the office of the Public Logistics Agency (Perum BULOG) in East Kotawaringin Regency: 0.9968514278, based on the calculated Pearson product-moment value.
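As a hedged illustration (not part of the study, and using made-up numbers rather than the study's survey data), the Pearson product-moment coefficient used for the hypothesis test can be computed as:

```python
import math

def pearson_r(x, y):
    """Pearson product-moment correlation of two equal-length sequences."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))   # covariance term
    sx = math.sqrt(sum((a - mx) ** 2 for a in x))          # spread of x
    sy = math.sqrt(sum((b - my) ** 2 for b in y))          # spread of y
    return cov / (sx * sy)
```

Values near +1, such as the 0.9968514278 reported above, indicate a very strong positive linear relationship between the two variables.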


2014 ◽  
Vol 53 (05) ◽  
pp. 343-343

We have to report marginal changes in the empirical type I error rates for the cut-offs 2/3 and 4/7 of Table 4, Table 5 and Table 6 of the paper “Influence of Selection Bias on the Test Decision – A Simulation Study” by M. Tamm, E. Cramer, L. N. Kennes, N. Heussen (Methods Inf Med 2012; 51: 138–143). In a small number of cases the kind of representation of numeric values in SAS has resulted in wrong categorization due to a numeric representation error of differences. We corrected the simulation by using the round function of SAS in the calculation process with the same seeds as before. For Table 4 the value for the cut-off 2/3 changes from 0.180323 to 0.153494. For Table 5 the value for the cut-off 4/7 changes from 0.144729 to 0.139626 and the value for the cut-off 2/3 changes from 0.114885 to 0.101773. For Table 6 the value for the cut-off 4/7 changes from 0.125528 to 0.122144 and the value for the cut-off 2/3 changes from 0.099488 to 0.090828. The sentence on p. 141 “E.g. for block size 4 and q = 2/3 the type I error rate is 18% (Table 4).” has to be replaced by “E.g. for block size 4 and q = 2/3 the type I error rate is 15.3% (Table 4).”. There were only minor changes smaller than 0.03. These changes do not affect the interpretation of the results or our recommendations.
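The underlying pitfall can be reproduced in any IEEE-754 floating-point environment; the example below uses Python rather than SAS and is only an illustration of the representation error described, not the erratum's actual calculation:

```python
# 2/3 has no exact binary representation, so a difference that is
# mathematically equal to the cut-off can land on either side of it.
q = 2 / 3                  # cut-off, stored inexactly as a double
value = 1 - 1 / 3          # mathematically also 2/3
print(value == q)          # False: the two doubles differ in the last bit
print(value > q)           # True: spuriously categorized above the cut-off
print(round(value, 10) == round(q, 10))  # True once both are rounded
```

Rounding both sides before comparing, as the authors did with the SAS round function, removes the spurious categorization without changing mathematically distinct values.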


2019 ◽  
Vol 152 (Supplement_1) ◽  
pp. S131-S132
Author(s):  
Kathryn Hogan ◽  
Beena Umar ◽  
Mohamed Alhamar ◽  
Kathleen Callahan ◽  
Linoj Samuel

Abstract Objectives There are few papers that characterize types of errors in microbiology laboratories and scant research demonstrating the effects of interventions on microbiology lab errors. This study aims to categorize types of culture reporting errors found in microbiology labs and to document the error rates before and after interventions designed to reduce errors and improve overall laboratory quality. Methods To improve documentation of error incidence, a self-reporting system was changed to an automatic reporting system. Errors were categorized into five types: Gram stain (misinterpretations), identification (incorrect analysis), setup labeling (incorrect patient labels), procedures (not followed), and miscellaneous. Error rates were tracked according to technologist, and technologists were given real-time feedback by a manager. Error rates were also monitored in the daily quality meeting, and frequently detected errors were discussed at staff meetings. Technologists attended a year-end review with a manager to improve their performance. To maintain these changes, policies were developed to monitor technologist error rate and to define corrective measures. If a certain number of errors per month was reached, technologists were required to undergo retraining by a manager. If a technologist failed to correct any error according to protocol, they were also potentially subject to corrective measures. Results In 2013, we recorded 0.5 errors per 1,000 tests. By 2018, we recorded only 0.1 errors per 1,000 tests, an 80% decrease. The yearly culture volume from 2013 to 2018 increased by 32%, while the yearly error rate went from 0.05% per year to 0.01% per year, a statistically significant decrease (P = .0007). Conclusion This study supports the effectiveness of the changes implemented to decrease errors in culture reporting.
By tracking errors in real time and using a standardized process that involved timely follow-up, technologists were educated on error prevention. This practice increased safety awareness in our micro lab.

