Labeling post‐storm coastal imagery for machine learning: measurement of inter‐rater agreement

2021 ◽  
Author(s):  
Evan B. Goldstein ◽  
Daniel Buscombe ◽  
Eli D. Lazarus ◽  
Somya D. Mohanty ◽  
Shah Nafis Rafique ◽  
...  

2020 ◽  
Vol 34 (20) ◽  
pp. 2050196
Author(s):  
Haozhen Situ ◽  
Zhimin He

Machine learning techniques can help to represent and solve quantum systems. Learning the measurement outcome distribution of a quantum ansatz is useful for characterizing near-term quantum computing devices. In this work, we use a popular unsupervised machine learning model, the variational autoencoder (VAE), to reconstruct the measurement outcome distribution of a quantum ansatz. The number of parameters in the VAE is compared with the number of measurement outcomes. The numerical results show that the VAE can efficiently learn the measurement outcome distribution with few parameters. The influence of entanglement on the task is also revealed.
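As an illustration of the setup described above, the following is a minimal sketch of a VAE trained on measurement bitstrings. It assumes PyTorch, a per-qubit Bernoulli decoder, and randomly generated stand-in samples; the paper's actual ansatz, architecture, and training details are not reproduced here.

import torch
import torch.nn as nn
import torch.nn.functional as F

class BitstringVAE(nn.Module):
    """VAE whose decoder outputs per-qubit Bernoulli logits for measurement bitstrings."""
    def __init__(self, n_qubits=4, hidden=16, latent=2):
        super().__init__()
        self.enc = nn.Sequential(nn.Linear(n_qubits, hidden), nn.ReLU())
        self.mu = nn.Linear(hidden, latent)
        self.logvar = nn.Linear(hidden, latent)
        self.dec = nn.Sequential(nn.Linear(latent, hidden), nn.ReLU(),
                                 nn.Linear(hidden, n_qubits))

    def forward(self, x):
        h = self.enc(x)
        mu, logvar = self.mu(h), self.logvar(h)
        z = mu + torch.randn_like(mu) * torch.exp(0.5 * logvar)  # reparameterization trick
        return self.dec(z), mu, logvar

def elbo_loss(logits, x, mu, logvar):
    # Bernoulli reconstruction term plus KL divergence to a standard normal prior
    recon = F.binary_cross_entropy_with_logits(logits, x, reduction="sum")
    kl = -0.5 * torch.sum(1 + logvar - mu.pow(2) - logvar.exp())
    return recon + kl

# Toy stand-in for measurement samples drawn from an ansatz (hypothetical data).
samples = torch.randint(0, 2, (1024, 4)).float()
model = BitstringVAE(n_qubits=4)
opt = torch.optim.Adam(model.parameters(), lr=1e-3)
for _ in range(50):
    logits, mu, logvar = model(samples)
    loss = elbo_loss(logits, samples, mu, logvar)
    opt.zero_grad()
    loss.backward()
    opt.step()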


2021 ◽  
Author(s):  
Louis Tay ◽  
Sang Eun Woo ◽  
Louis Hickman ◽  
Brandon Michael Booth ◽  
Sidney D'Mello

Given significant concerns about fairness and bias in the use of artificial intelligence (AI) and machine learning (ML) for assessing psychological constructs, we provide a conceptual framework for investigating and mitigating machine learning measurement bias (MLMB) from a psychometric perspective. MLMB is defined as differential functioning of the trained ML model between subgroups. MLMB can empirically manifest when a trained ML model produces different predicted score levels for individuals belonging to different subgroups (e.g., race, gender) despite their having the same ground-truth level for the underlying construct of interest (e.g., personality), and/or when the model yields differential predictive accuracies across the subgroups. Because the development of ML models involves both data and algorithms, both data bias and algorithm training bias are potential sources of MLMB. Data bias can occur in the form of nonequivalence between subgroups in the ground truth, platform-based construct, behavioral expression, and/or feature computing. Algorithm training bias can occur when algorithms are developed with nonequivalence in the relation between extracted features and ground truth (i.e., algorithm features are differentially used, weighted, or transformed between subgroups). We explain how these potential sources of bias may manifest during ML model development and share initial ideas on how to mitigate them, recognizing that new statistical and algorithmic procedures will still need to be developed. We also discuss how this framework brings clarity to MLMB but does not reduce the complexity of the issue.
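As a rough illustration of the two empirical manifestations described above (different predicted score levels at the same ground-truth level, and differential predictive accuracy across subgroups), the following NumPy sketch computes per-subgroup mean residuals and mean absolute errors. The data are hypothetical; this is a toy diagnostic, not the authors' procedure.

import numpy as np

def mlmb_signatures(y_true, y_pred, group):
    """Return per-subgroup mean residual (score-level shift) and mean absolute error."""
    out = {}
    for g in np.unique(group):
        m = group == g
        out[g] = {
            "mean_residual": float(np.mean(y_pred[m] - y_true[m])),  # predicted-score shift
            "mae": float(np.mean(np.abs(y_pred[m] - y_true[m]))),    # predictive accuracy
        }
    return out

# Hypothetical example: two subgroups share the same ground-truth distribution,
# but the model adds a constant offset for one subgroup.
rng = np.random.default_rng(0)
y_true = rng.normal(0, 1, 200)
group = np.repeat(["a", "b"], 100)
y_pred = y_true + np.where(group == "a", 0.0, 0.3) + rng.normal(0, 0.1, 200)
print(mlmb_signatures(y_true, y_pred, group))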


2020 ◽  
Vol 43 ◽  
Author(s):  
Myrthe Faber

Gilead et al. state that abstraction supports mental travel, and that mental travel critically relies on abstraction. I propose an important addition to this theoretical framework, namely that mental travel might also support abstraction. Specifically, I argue that spontaneous mental travel (mind wandering), much like data augmentation in machine learning, provides variability in mental content and context necessary for abstraction.
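To make the data-augmentation analogy concrete, here is a small, assumed sketch using torchvision transforms (not part of the commentary itself): a single image is re-presented under varied orientation, framing, and appearance, and it is this variability across instances of the same content that augmentation contributes.

from PIL import Image
import numpy as np
from torchvision import transforms

# An augmentation pipeline that varies orientation, framing, and appearance.
augment = transforms.Compose([
    transforms.RandomRotation(15),
    transforms.RandomResizedCrop(32),
    transforms.ColorJitter(brightness=0.2),
])

# Toy stand-in image; repeated augmentation yields many variants of the same content.
img = Image.fromarray(np.uint8(np.random.rand(64, 64, 3) * 255))
variants = [augment(img) for _ in range(5)]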


2020 ◽  
Author(s):  
Mohammed J. Zaki ◽  
Wagner Meira, Jr

2020 ◽  
Author(s):  
Marc Peter Deisenroth ◽  
A. Aldo Faisal ◽  
Cheng Soon Ong

Author(s):  
Lorenza Saitta ◽  
Attilio Giordana ◽  
Antoine Cornuejols

Author(s):  
Shai Shalev-Shwartz ◽  
Shai Ben-David

2006 ◽  
Vol 11 (1) ◽  
pp. 12-24 ◽  
Author(s):  
Alexander von Eye

At the level of manifest categorical variables, a large number of coefficients and models for the examination of rater agreement have been proposed and used. The most popular of these is Cohen's κ. In this article, a new coefficient, κ_s, is proposed as an alternative measure of rater agreement. Both κ and κ_s allow researchers to determine whether agreement in groups of two or more raters is significantly beyond chance. Stouffer's z is used to test the null hypothesis that κ_s = 0. In addition to evaluating rater agreement in a fashion parallel to κ, the coefficient κ_s allows one to (1) examine subsets of cells in agreement tables, (2) examine cells that indicate disagreement, (3) consider alternative chance models, (4) take covariates into account, and (5) compare independent samples. Results from a simulation study are reported, which suggest that (a) the four measures of rater agreement, Cohen's κ, Brennan and Prediger's κ_n, raw agreement, and κ_s, are sensitive to the same data characteristics when evaluating rater agreement and (b) both the z-statistic for Cohen's κ and Stouffer's z for κ_s are unimodally and symmetrically distributed, but slightly heavy-tailed. Examples use data from verbal processing and applicant selection.
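For reference, a short NumPy sketch of the standard two-rater coefficients mentioned in the abstract (raw agreement, Cohen's κ, and Brennan and Prediger's κ_n). The proposed κ_s and the Stouffer's z test are not reproduced here, because their chance model and pooling details are specific to the article; this is an assumed illustration, not the article's code.

import numpy as np

def agreement_coefficients(r1, r2, categories):
    """Raw agreement, Cohen's kappa, and Brennan & Prediger's kappa_n for two raters."""
    r1, r2 = np.asarray(r1), np.asarray(r2)
    k = len(categories)
    p_o = np.mean(r1 == r2)                                   # raw (observed) agreement
    p_e = sum(np.mean(r1 == c) * np.mean(r2 == c) for c in categories)  # chance agreement
    kappa = (p_o - p_e) / (1 - p_e)                           # Cohen's kappa
    kappa_n = (p_o - 1 / k) / (1 - 1 / k)                     # Brennan & Prediger's kappa_n
    return {"raw": float(p_o), "kappa": float(kappa), "kappa_n": float(kappa_n)}

# Example: two raters assigning 10 items to 3 categories.
print(agreement_coefficients([0, 1, 2, 0, 1, 2, 0, 1, 2, 0],
                             [0, 1, 2, 0, 1, 1, 0, 2, 2, 0],
                             categories=[0, 1, 2]))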

