Analysis Of Data Stratification In A Multi-Sensor Fingerprint Dataset Using Match Score Statistics

2016 ◽  
Author(s):  
Loukhya Kakumanu
2021 ◽  
pp. 1-12
Author(s):  
Lv YE ◽  
Yue Yang ◽  
Jian-Xu Zeng

Existing recommender systems provide personalized recommendation services for users in online shopping, entertainment, and other activities. To improve the probability that users accept the system's recommendations, an interpretable recommender system, unlike a traditional one, presents recommendation reasons alongside its results. In this paper, an interpretable recommendation model based on XGBoost trees is proposed to obtain comprehensible and effective cross features from side information. These cross features are input into an attention-based embedding model to capture the latent interactions among user IDs, item IDs, and cross features. The captured interactions are used to predict the match score between the user and the recommended item, and the cross-feature attention scores are used to generate different recommendation reasons for different user-item pairs. Experimental results show that the proposed algorithm maintains recommendation quality while improving the transparency and readability of the recommendation process by providing reference reasons. This method can help users better understand the system's recommendation behavior and offers insight toward making recommender systems more personalized and intelligent.
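The cross-feature extraction step described above can be illustrated with a generic GBDT leaf-index encoding: each leaf a sample lands in encodes a conjunction of feature thresholds, i.e. a cross feature. This is a minimal sketch on synthetic data, with scikit-learn's `GradientBoostingClassifier` standing in for XGBoost; it is not the paper's actual model.

```python
# Hedged sketch: deriving cross features from tree leaf indices.
# Synthetic data and model choices are illustrative assumptions.
import numpy as np
from sklearn.ensemble import GradientBoostingClassifier

rng = np.random.default_rng(0)
X = rng.normal(size=(500, 6))                # side information (user/item attributes)
y = (X[:, 0] * X[:, 1] > 0).astype(int)      # synthetic interaction label

gbdt = GradientBoostingClassifier(n_estimators=20, max_depth=3, random_state=0)
gbdt.fit(X, y)

# Each sample's path through each tree ends in a leaf; the leaf index
# encodes a conjunction of feature thresholds, i.e. a cross feature.
leaves = gbdt.apply(X)[:, :, 0].astype(int)  # shape (n_samples, n_trees)
print(leaves.shape)
```

The leaf indices could then be one-hot encoded and fed to a downstream embedding model, as the abstract describes.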


2021 ◽  
pp. 1-12
Author(s):  
Matthew van Bommel ◽  
Luke Bornn ◽  
Peter Chow-White ◽  
Chuancong Gao

Box score statistics are the baseline measures of performance for National Collegiate Athletic Association (NCAA) basketball. Between the 2011-2012 and 2015-2016 seasons, NCAA teams performed better at home than on the road in nearly all box score statistics, across both genders and all three divisions. Using box score data from over 100,000 games spanning the three divisions for both women and men, we examine the factors underlying this discrepancy. The prevalence of neutral-location games in the NCAA provides an additional angle through which to examine the gaps in box score performance, one we believe has been underutilized in the existing literature. We also estimate a regression model to quantify the home court advantage in box score statistics after controlling for other factors such as the number of possessions and team strength. Additionally, we examine the biases of scorekeepers and referees. We present evidence that scorekeepers tend to show greater home team bias when observing men compared to women, higher divisions compared to lower divisions, and stronger teams compared to weaker teams. Finally, we present statistically significant results indicating that referee decisions are affected by attendance, with larger crowds producing greater bias in favor of the home team.
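The kind of regression described above can be sketched with ordinary least squares on synthetic game data; the variables and the assumed 2-point home advantage are illustrative assumptions, not the paper's estimates.

```python
# Hedged sketch: regressing a box score outcome on a home indicator
# while controlling for possessions and team strength (synthetic data).
import numpy as np

rng = np.random.default_rng(1)
n = 2000
home = rng.integers(0, 2, n)          # 1 = home game
poss = rng.normal(70, 5, n)           # possessions per game
strength = rng.normal(0, 1, n)        # team strength rating
# Assumed data-generating process with a 2-point home advantage:
points = 1.0 * poss + 5.0 * strength + 2.0 * home + rng.normal(0, 3, n)

X = np.column_stack([np.ones(n), home, poss, strength])
beta, *_ = np.linalg.lstsq(X, points, rcond=None)
print(round(beta[1], 1))              # estimated home advantage, near 2.0 by construction
```

The coefficient on `home` isolates the home-court effect after the controls, which is the logic of the abstract's model.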


2020 ◽  
Vol 16 (4) ◽  
pp. 325-341
Author(s):  
Nicholas Clark ◽  
Brian Macdonald ◽  
Ian Kloo

Abstract: Analytics and professional sports have become linked over the past several years, but little attention has been paid to the growing field of esports within the sports analytics community. We apply an Adjusted Plus-Minus (APM) model, an accepted analytic approach used in traditional sports like hockey and basketball, to one particular esports game: Defense of the Ancients 2 (Dota 2). As with traditional sports, we show how APM metrics developed with Bayesian hierarchical regression can be used to quantify individual player contributions to their teams and, ultimately, use this player-level information to predict game outcomes. In particular, we first provide evidence that gold can be used as a continuous proxy for wins to evaluate a team's performance, and then use a Bayesian APM model to estimate how players contribute to their team's gold differential. We demonstrate that this APM model outperforms models based on common team-level statistics (often referred to as "box score statistics"). Beyond the specifics of our modeling approach, this paper serves as an example of the potential utility of applying analytical methodologies from traditional sports analytics to esports.
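A Bayesian APM with independent normal priors on player effects has a ridge-regression analogue, which is enough to illustrate the idea: a signed player-indicator design matrix regressed on the gold differential. The simulated draft and gold numbers below are assumptions for illustration, not Dota 2 data or the paper's hierarchical model.

```python
# Hedged sketch: ridge-regression APM on simulated games.
import numpy as np

rng = np.random.default_rng(2)
n_players, n_games = 50, 400
true_apm = rng.normal(0, 300, n_players)   # assumed per-game gold contribution

X = np.zeros((n_games, n_players))
y = np.zeros(n_games)
for g in range(n_games):
    picks = rng.choice(n_players, 10, replace=False)
    X[g, picks[:5]] = 1.0                  # team A's five players
    X[g, picks[5:]] = -1.0                 # team B's five players
    y[g] = X[g] @ true_apm + rng.normal(0, 500)   # gold differential + noise

lam = 10.0  # ridge penalty, playing the role of a normal prior on effects
apm = np.linalg.solve(X.T @ X + lam * np.eye(n_players), X.T @ y)
print(np.corrcoef(apm, true_apm)[0, 1])    # recovered ratings track the truth
```

The ridge penalty shrinks ratings of rarely-seen players toward zero, which is the practical effect of the prior in the Bayesian formulation.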


2008 ◽  
Vol 72 (4) ◽  
pp. 566-574 ◽  
Author(s):  
Y. J. Yoo ◽  
N. R. Mendell

2015 ◽  
Vol 77 (18) ◽  
Author(s):  
Chiung Ching Ho ◽  
Mufaddal Ali Hussin ◽  
Hu Ng

In recent years, attacks on password databases have been carried out at an increasing rate, with significant success. Thus, a new approach is needed to prove one's claim to identity instead of relying on a password. In this paper, we investigate the use of biometric match scores for the purpose of verification. Our work was performed using the BSSR1 multimodal biometric dataset, which contains match scores from face and fingerprint biometric systems. We investigated the use of match scores as a feature vector, and performed Simple Sum and Product Rule fusion of the match scores. Our results demonstrate that match scores can be used for verification with high accuracy.
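Simple Sum and Product Rule fusion can be sketched in a few lines; the min-max normalization step and the toy score values below are illustrative assumptions, not the BSSR1 data.

```python
# Hedged sketch: score-level fusion of two biometric modalities.
import numpy as np

def min_max_normalize(scores):
    """Map raw match scores to [0, 1] so modalities are comparable."""
    s = np.asarray(scores, dtype=float)
    return (s - s.min()) / (s.max() - s.min())

def sum_rule(face, finger):
    """Simple Sum fusion: average of the normalized scores."""
    return (face + finger) / 2.0

def product_rule(face, finger):
    """Product Rule fusion: elementwise product of the normalized scores."""
    return face * finger

face = min_max_normalize([0.2, 0.9, 0.5])   # toy face match scores
finger = min_max_normalize([10, 80, 45])    # toy fingerprint match scores
print(sum_rule(face, finger))
print(product_rule(face, finger))
```

A threshold on the fused score then yields the accept/reject verification decision.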


2012 ◽  
Vol 66 (2) ◽  
pp. 92-98
Author(s):  
C. A. Field ◽  
Zhen Pang ◽  
A. H. Welsh

2016 ◽  
Vol 2 (6) ◽  
Author(s):  
PANKAJ

Multimodal biometric technology based on fingerprint and finger knuckle prints has attracted attention among researchers in recent years. Although a unimodal system offers many advantages, it has inherent weaknesses that limit its appeal. Unimodal fingerprint biometric systems perform individual recognition based on a single source of biometric information, but their match score values must be improved when working with low-quality, small-foreground-area biometric images. Verification based on Finger Knuckle Print (FKP) images suffers from higher relative variation: the distortions between FKP images of the same finger are of greater magnitude. The unimodal biometric verification system is often affected even after achieving a higher match score value, and a bimodal verification system does not achieve a higher security level, which leads to a lower fusion score value. To reduce relative variation in a multimodal biometric system, a Non-Fracture based Fingerprint and Finger-Knuckle print Biometric Score Fusion (NFF-BSF) mechanism is proposed in this paper. First, the match score is measured using a multimodal fitting coarse-grained distribution function, which handles low-quality, small-foreground biometric images and achieves a high fitting score on the test and training images. Next, non-fracture deformation handling is carried out in the NFF-BSF mechanism to reduce changes in object shape by using curve length on biometric image surfaces. Finally, a matching technique in the NFF-BSF mechanism is used to reduce the relative variations, thereby increasing the match score fusion value of the multimodal biometric system.
The approach is evaluated on factors such as genuine acceptance rate, matching score fusion level, and error rate in multimodal matching.
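The paper's NFF-BSF formulas are not given in the abstract, but normalizing each modality's scores before fusing them is the standard way to reduce relative variation between modalities; the z-score normalization, weights, and toy scores below are generic illustrative assumptions, not the proposed mechanism.

```python
# Hedged sketch: normalization-then-fusion of fingerprint and FKP scores.
import numpy as np

def z_normalize(scores):
    """Z-score normalization removes scale differences between modalities."""
    s = np.asarray(scores, dtype=float)
    return (s - s.mean()) / s.std()

def weighted_fusion(fp_scores, fkp_scores, w_fp=0.6, w_fkp=0.4):
    """Weighted sum of normalized fingerprint and FKP match scores."""
    return w_fp * z_normalize(fp_scores) + w_fkp * z_normalize(fkp_scores)

fused = weighted_fusion([32, 70, 55, 41], [0.2, 0.8, 0.9, 0.3])
print(fused.round(2))   # second candidate scores highest in both modalities
```

Because both inputs are on a common scale after normalization, the fused score is no longer dominated by whichever modality happens to have the larger raw range.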


Author(s):  
Erin D. Bigler

All traditional neuropsychological assessment techniques emerged in an era prior to modern neuroimaging. In fact, the question-answer/paper-and-pencil test origins that gained traction with Alfred Binet in 1905 remain the same core techniques today. Indeed, Binet’s efforts began the era of standardized human metrics designed to assess a broad spectrum of cognitive, emotional, and behavioral functions and abilities. During the early part of the 20th century, the concept of an intellectual quotient expressed as a standard score with a mean of 100 and a standard deviation of 15 also initiated the era of quantitative descriptions of mental and emotional functioning (Anastasi, 1968; Stern, 1912). Other descriptive statistical metrics were applied to human measurement, including scaled, percentile, T-score, and z-score statistics. Statistical measures became part of the assessment lexicon, and each possessed strengths as well as weaknesses for descriptive purposes, but together they proved immensely effective for communicating test findings and inferring average, above-the-norm, or below-the-norm performances. In turn, descriptive statistical methods became the cornerstone for describing neuropsychological findings, typically reported by domain of functioning (memory, executive, language, etc.; Cipolotti & Warrington, 1995; Lezak, Howieson, Bigler, & Tranel, 2012). As much as psychology and medicine have incorporated descriptive statistics into research and clinical application, a major focus of both disciplines also has been binary classification—normal versus abnormal. This dichotomization recognizes some variability and individual differences within a test score or laboratory procedure, but at some point the clinician makes the binary decision of normal or abnormal. In the beginnings of neuroimaging, which are discussed more thoroughly below, interpretation of computed tomographic (CT) or magnetic resonance imaging (MRI) scans mostly was approached in this manner.
Although a great deal of information was available from CT and MRI images, if nothing obviously abnormal was seen, the radiological conclusion merely stated in the Impression section, “Normal CT (or MRI) of the brain,” with no other qualification (or quantification) of why the findings were deemed normal other than that the image appeared that way. Until recently, quantification of information in an image required hand editing and was excruciatingly time consuming.
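The descriptive metrics mentioned above are all linear rescalings of the z-score; a small sketch of the standard conversions (T-score: mean 50, SD 10; deviation standard score: mean 100, SD 15; scaled score: mean 10, SD 3):

```python
# Converting a z-score into the common clinical score metrics.
def from_z(z):
    """Linear rescalings of a z-score onto standard clinical metrics."""
    return {
        "z": z,
        "T": 50 + 10 * z,           # T-score: mean 50, SD 10
        "standard": 100 + 15 * z,   # deviation-IQ style: mean 100, SD 15
        "scaled": 10 + 3 * z,       # scaled score: mean 10, SD 3
    }

print(from_z(-1.0))  # one SD below the mean on every metric
```

For example, a performance one standard deviation below the mean reads as T = 40, standard score = 85, and scaled score = 7 — the same result expressed in three conventions.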

