The Black Box: Turning Raw Data into Human Factors Leading Indicators

Author(s): G. Walker

Mind Cure
2019, pp. 171-186
Author(s): Wakoh Shannon Hickey

This chapter asks whether mindfulness is as broadly effective and powerful as proponents claim and considers methodological and other critiques of clinical research on mindfulness. Neuroscientists have produced vivid images of meditators’ brains, using functional MRI and PET scans, which seem to show clear, positive changes attributed to meditation. Such images are effective rhetorically but are produced in a “black box” of assumptions, technological constraints, and human factors that make them less definitive than they may appear. Other types of studies rely on meditators’ self-reports, which are not always reliable. A major issue in clinical research is that mindfulness is inconsistently defined and may be measured by scientists unfamiliar with the ways that meditation is described in canonical texts and understood by experienced Buddhist teachers and yogis. While research data do suggest that mindfulness can be beneficial, it is not the panacea that some advocates claim it to be.


2020, Vol 8, pp. 61-72
Author(s): Kara Combs, Mary Fendley, Trevor Bihl

Artificial Intelligence and Machine Learning (AI/ML) models are increasingly criticized for their “black-box” nature. Therefore, eXplainable AI (XAI) approaches that extract human-interpretable decision processes from algorithms have been explored. However, XAI research lacks an understanding of algorithmic explainability from a human factors perspective. This paper presents a repeatable human factors heuristic analysis for XAI, with a demonstration on four decision tree classifier algorithms.
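The rule-extraction idea behind decision-tree explainability can be illustrated with a toy sketch (the tree structure, feature names, and thresholds below are hypothetical examples, not the classifiers evaluated in the paper): walk a trained tree and emit each root-to-leaf path as a human-readable if/then rule.

```python
# Toy sketch: turn a decision tree into human-readable if/then rules.
# The tree, feature names, and thresholds are made up for illustration.

def extract_rules(node, conditions=None):
    """Recursively collect one rule per root-to-leaf path."""
    conditions = conditions or []
    if "label" in node:  # leaf: emit the accumulated conditions
        premise = " AND ".join(conditions) or "always"
        return [f"IF {premise} THEN class={node['label']}"]
    feat, thr = node["feature"], node["threshold"]
    rules = []
    rules += extract_rules(node["left"], conditions + [f"{feat} <= {thr}"])
    rules += extract_rules(node["right"], conditions + [f"{feat} > {thr}"])
    return rules

# A tiny hand-built tree: is a loan application approved?
tree = {
    "feature": "income", "threshold": 50,
    "left": {"label": "deny"},
    "right": {
        "feature": "debt_ratio", "threshold": 0.4,
        "left": {"label": "approve"},
        "right": {"label": "deny"},
    },
}

for rule in extract_rules(tree):
    print(rule)
```

Each printed rule is one complete decision path, which is the kind of human-interpretable output a heuristic analysis would evaluate for readability and cognitive load.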


Author(s):  
R. Anderson ◽  
H. Siriwardane ◽  
P. Fraundorf ◽  
T. Stivers

Scanning force microscopes show topography, but alone cannot provide insight into the electrical properties of either a specimen or particular features of a specimen. In this project we will look at a specimen that consists of conductive spheres in a base of insulating material (Fig. 1a). A theoretical capacitance image corresponding to this particular arrangement of conductive spheres is shown in Figure 1b. Figure 2 is an x-derivative image of Figure 1b, displayed with a logarithmic gray scale to show more of the image structure. The mechanism that generates the raw data image is an electronic “black box” (capacitance differentiator), which produces a voltage proportional to the change in capacitance measured from one point on the specimen to the next. We will talk more about the workings of the differentiator below. The voltage from the differentiator, modeled in the derivative image of Figure 2, is recorded to provide a map of the specimen’s electrical properties.
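The differentiator's point-to-point behavior can be mimicked numerically (a minimal sketch with a made-up capacitance map and gain, not the instrument's actual electronics): the output at each pixel is proportional to the capacitance change from the previous pixel along the scan direction, i.e. a finite difference along x.

```python
# Minimal sketch of the capacitance differentiator's behavior:
# output voltage proportional to the change in capacitance between
# successive scan points. All values below are made up for illustration.

GAIN = 10.0  # hypothetical volts per unit of capacitance change

def differentiate_scan(cap_map, gain=GAIN):
    """Finite difference along each scan line (x direction)."""
    return [
        [gain * (row[i] - row[i - 1]) for i in range(1, len(row))]
        for row in cap_map
    ]

# A 3x4 synthetic capacitance map: a conductive sphere raises C mid-row.
cap_map = [
    [1.0, 1.0, 1.0, 1.0],   # insulating background: flat
    [1.0, 1.5, 1.5, 1.0],   # sphere under the middle pixels
    [1.0, 1.0, 1.0, 1.0],
]

v_map = differentiate_scan(cap_map)
# Rising and falling edges of the sphere appear as +/- voltage spikes,
# which is why the x-derivative image highlights feature boundaries.
```

Flat background rows map to zero voltage, while the row crossing the sphere produces a positive spike at its leading edge and a negative spike at its trailing edge, as in the x-derivative image of Figure 2.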

