The Effect of Movement and Cue Complexity on Tactile Change Detection

Author(s):  
Scott M. Betza ◽  
Scott T. Reeves ◽  
James H. Abernathy ◽  
Sara Lu Riggs

There is growing interest in using touch to offload the often overburdened visual channel, as its merit has been demonstrated in various work domains. However, more work is needed to understand the perceptual limitations of the tactile modality, including how it is affected by change blindness (i.e., the failure to detect changes that coincide with transients), as the majority of work on change blindness has been in vision. This study examines how movement and cue complexity affect the ability to detect tactile changes. The findings indicate that change detection is affected by: 1) movement (walking resulted in worse change detection rates than sitting) and 2) cue complexity (high-complexity cues had worse change detection rates than low-complexity cues). Overall, this work adds to the knowledge base on tactile perception and can inform the design of tactile displays for work domains such as anesthesiology.

Author(s):  
Kylie Gomes ◽  
Scott Betza ◽  
Sara Lu Riggs

Objective: To evaluate the effects of movement, cue complexity, and the on-body location of tactile displays on tactile change detection. Background: Tactile displays have been demonstrated as a means to address data overload by offloading the visual and auditory modalities. However, change blindness (the failure to detect changes in a stimulus when they coincide with another event or a disruption in stimulus continuity) has been demonstrated in the tactile modality and may be exacerbated during movement. The complexity of tactile cues and the location of tactile displays on the body may also affect the detection of changes in tactile patterns. These limitations of tactile perception need to be examined. Method: Twenty-four participants performed a tactile change detection task while sitting, standing, and walking. Tactile cues of low, medium, and high complexity were presented to the arm or back. Results: Movement adversely affected tactile change detection: hit rates were highest while sitting, followed by standing and walking. Cue complexity also affected detection: low-complexity cues yielded higher detection rates than medium- and high-complexity cues. The arms exhibited better change detection performance than the back. Conclusion: The design of tactile displays should account for the effect of movement. Cue complexity should be minimized, and decisions about the location of a tactile display should take body movements into account to support tactile perception. Application: The findings provide design guidelines for tactile displays in data-rich, complex domains.


Author(s):  
Sara Lu Riggs ◽  
Nadine Sarter

Objective: The present study examined whether tactile change blindness and crossmodal visual-tactile change blindness occur in the presence of two transient types, and whether their incidence is affected by the addition of a concurrent task. Background: Multimodal and tactile displays have been proposed as a promising means to overcome data overload and support attention management. To ensure the effectiveness of these displays, researchers must examine possible limitations of human information processing, such as tactile and crossmodal change blindness. Method: Twenty participants performed an unmanned aerial vehicle (UAV) monitoring task that included visual and tactile cues. They completed four blocks of 70 trials each, involving either visual or tactile transients. A search task was added to determine whether increased workload leads to a higher risk of change blindness. Results: The findings confirm that tactile change detection suffers in terms of response accuracy, sensitivity, and response bias in the presence of a tactile transient. Crossmodal visual-tactile change blindness was not observed. Change detection was not adversely affected by the addition of the search task, which in fact reduced response bias. Conclusion: Tactile displays can help support multitasking and attention management, but their design needs to account for tactile change blindness. Simultaneous presentation of multiple tactile indications should be avoided, as it adversely affects change detection. Application: The findings from this research will help inform the design of multimodal and tactile interfaces in data-rich domains such as military operations, aviation, and healthcare.
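Sensitivity and response bias in tasks like this are conventionally quantified with signal detection measures; the abstract does not state the authors' exact computation, so the following is a generic sketch of d′ and criterion c computed from hit and false-alarm rates (the rates shown are hypothetical, not values from the study).

```python
from statistics import NormalDist

def dprime_and_bias(hit_rate, fa_rate):
    """Standard signal detection measures:
    sensitivity d' = z(H) - z(FA); criterion c = -(z(H) + z(FA)) / 2."""
    z = NormalDist().inv_cdf  # inverse of the standard normal CDF
    z_h, z_fa = z(hit_rate), z(fa_rate)
    return z_h - z_fa, -(z_h + z_fa) / 2

# Hypothetical hit and false-alarm rates, not data from the study
d, c = dprime_and_bias(0.85, 0.10)
```

Higher d′ indicates better discrimination of changes from no-change trials; c near zero indicates an unbiased response criterion.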


Perception ◽  
10.1068/p3035 ◽  
2000 ◽  
Vol 29 (3) ◽  
pp. 273-286
Author(s):  
Mark W Becker ◽  
Harold Pashler ◽  
Stuart M Anstis

In three experiments, subjects attempted to detect the change of a single item in a visually presented array of items. Subjects' ability to detect a change was greatly reduced if a blank interstimulus interval (ISI) was inserted between the original array and an array in which one item had changed (‘change blindness’). However, change detection improved when the location of the change was cued during the blank ISI, suggesting that people represent more information about a scene than change blindness alone would imply. We tested two hypotheses for why, in the absence of a cue, this representation fails to produce good change detection. The first holds that the intervening events used to create change blindness produce multiple neural transients that co-occur with the to-be-detected change; detection rates are poor because a serial search of all transient locations is required to find the change, during which time the representation of the original scene fades. The second holds that the second frame overwrites the representation of the first frame unless that information is protected against overwriting by attention. The results support the second hypothesis. We conclude that people may have a fairly rich visual representation of a scene while the scene is present, but fail to detect changes because they cannot maintain two complete visual representations simultaneously.


Sensors ◽  
2021 ◽  
Vol 21 (6) ◽  
pp. 1994
Author(s):  
Qian Ma ◽  
Wenting Han ◽  
Shenjin Huang ◽  
Shide Dong ◽  
Guang Li ◽  
...  

This study explores the classification potential of a multispectral classification model for farmland with planting structures of differing complexity. Unmanned aerial vehicle (UAV) remote sensing is used to obtain multispectral images of three study areas with low-, medium-, and high-complexity planting structures, containing three, five, and eight crop types, respectively. Feature subsets for the three study areas are selected by recursive feature elimination (RFE). Object-oriented random forest (OB-RF) and object-oriented support vector machine (OB-SVM) classification models are established for the three study areas. After training the models on the feature subsets, the classification results are evaluated using a confusion matrix. The OB-RF and OB-SVM models' classification accuracies are 97.09% and 99.13%, respectively, for the low-complexity planting structure; 92.61% and 99.08% for the medium-complexity structure; and 88.99% and 97.21% for the high-complexity structure. For farmland with fragmentary plots, both models' overall accuracy decreased as planting structure complexity increased from low to high: the OB-RF model's accuracy fell by 8.1%, while the OB-SVM model's fell by only 1.92%. OB-SVM achieves an overall classification accuracy of 97.21% and a single-crop extraction accuracy of at least 85.65%. Therefore, UAV multispectral remote sensing can be used for classification in highly complex planting structures.
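A pipeline of this shape (RFE feature selection, then random forest and SVM classifiers evaluated with a confusion matrix) can be sketched with scikit-learn. This is an illustrative sample-level stand-in on synthetic data, not the authors' object-oriented (OB) implementation; the feature counts, class count of eight (matching the high-complexity area), and model settings are arbitrary.

```python
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.feature_selection import RFE
from sklearn.metrics import accuracy_score, confusion_matrix
from sklearn.model_selection import train_test_split
from sklearn.svm import SVC

# Synthetic stand-in for per-object spectral/texture features, 8 crop classes.
X, y = make_classification(n_samples=600, n_features=20, n_informative=10,
                           n_classes=8, n_clusters_per_class=1, random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

# Recursive feature elimination selects a feature subset, as in the study.
selector = RFE(RandomForestClassifier(random_state=0),
               n_features_to_select=10).fit(X_tr, y_tr)
X_tr_s, X_te_s = selector.transform(X_tr), selector.transform(X_te)

for name, model in [("RF", RandomForestClassifier(random_state=0)),
                    ("SVM", SVC())]:
    model.fit(X_tr_s, y_tr)
    pred = model.predict(X_te_s)
    print(name, accuracy_score(y_te, pred))
    # Per-class evaluation via the confusion matrix, as in the abstract.
    cm = confusion_matrix(y_te, pred, labels=list(range(8)))
```

In the study itself the units of classification are image objects from a segmentation step, which this sketch omits.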


1976 ◽  
Vol 43 (3_suppl) ◽  
pp. 1299-1302
Author(s):  
Virginia Brabender ◽  
Christopher Clay

The present experiment tested the hypothesis that nominal processing increases as stimulus complexity increases. Subjects indicated whether two 4- or 12-sided forms, separated by an interval of .5 or 4.0 sec., were the same or different. “Same” responses corresponded to matches for physical or nominal identity. Longer RTs for high complexity than low complexity forms suggest that complexity affects the efficiency of visual processing rather than the occurrence of nominal processing. An interaction between type of match and interval, due to the longer RTs for matches of nominally identical forms at only the .5-sec. interval, indicates that at this interval, matches for physical and nominal identity are made with visual and nominal representations respectively.


2021 ◽  
Vol 11 ◽  
Author(s):  
Wang Xiang

To investigate whether implicit detection occurs uniformly during change blindness with single- or combination-feature stimuli, and whether it is affected by exposure duration and delay, two one-shot change detection experiments were designed. The implicit detection effect is measured by comparing the reaction times (RTs) of baseline trials, in which the stimulus exhibits no change and participants report “same,” and change blindness trials, in which the stimulus exhibits a change but participants still report “same.” If the RTs of blindness trials are longer than those of baseline trials, implicit detection has occurred. The strength of the effect is measured by the RT difference between baseline and change blindness trials: the larger the difference, the stronger the implicit detection effect. In both Experiments 1 and 2, and at every set size (4, 6, or 8), the RTs of change blindness trials were significantly longer than those of baseline trials. In Experiment 1, the RT difference between baseline and change blindness trials was significantly larger for single features than for combination features; in Experiment 2, it was significantly smaller for single features than for combination features. In Experiment 1a, shorter exposure durations produced smaller RT differences between baseline and change blindness trials; in Experiment 2, longer delays produced larger differences. These results suggest that implicit detection occurs uniformly during change blindness regardless of whether the change involves a single feature or a combination of features, and regardless of exposure duration or delay. Moreover, longer exposure durations and delays strengthen the implicit detection effect, whereas set size has no significant impact on it.
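As defined in the abstract, the implicit detection effect is the mean RT difference between change blindness trials and baseline trials on which participants responded “same.” A minimal sketch, with hypothetical reaction times rather than data from the study:

```python
def implicit_detection_effect(baseline_rts, blindness_rts):
    """Mean RT difference between change blindness and baseline trials;
    a positive value indicates implicit detection (blindness-trial
    "same" responses are slower than baseline "same" responses)."""
    mean = lambda xs: sum(xs) / len(xs)
    return mean(blindness_rts) - mean(baseline_rts)

# Hypothetical RTs in milliseconds, not data from the study
effect = implicit_detection_effect([520, 540, 530], [600, 590, 610])  # → 70.0
```

A larger value corresponds to a stronger implicit detection effect in the abstract's terms.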


2011 ◽  
pp. 295-316
Author(s):  
Markus Kampmann ◽  
Liang Zhang

This chapter introduces a complete framework for automatically adapting a 3D face model to a human face for visual communication applications such as video conferencing or video telephony. First, facial features are estimated in a facial image; then, the 3D face model is adapted using the estimated features. The framework is scalable with respect to complexity, offering two modes: a low-complexity mode and a high-complexity mode. In the low-complexity mode, only eye and mouth features are estimated, and the low-complexity face model Candide is adapted. In the high-complexity mode, a more detailed face model is adapted using eye and mouth features, eyebrow and nose features, and chin and cheek contours. Experimental results with natural videophone sequences show that the framework enables automatic 3D face model adaptation with high accuracy.


1971 ◽  
Vol 1 (2) ◽  
pp. 99-112 ◽  
Author(s):  
J. K. Jeglum ◽  
C. F. Wehrhahn ◽  
J. M. A. Swan

Data from a survey of lowland, mainly peatland, vegetation were subjected to an environmental ordination based on measurements of water level and water conductivity, and to a vegetational ordination derived from principal component analysis (P.C.A.). The analysis covered the total data set ("all types"), half sets ("nonwoody" and "woody" types), and quarter sets (stands of "marshes," "meadows," "shrub fens," and "other woody types"); the number of distinct physiognomic groups in a set, and presumably the amount of contained heterogeneity, decreased at each segmentation. The effectiveness of the ordination models was tested by correlating measured distances in two-dimensional ordination models with 2W/(A + B) indices of vegetational similarity for randomly selected pairs of types or stands. As physiognomic complexity decreased, the effectiveness of the P.C.A. vegetational ordination increased, whereas that of the environmental ordination decreased. The environmental ordination seemed most appropriate to data of high complexity (the total data set), while the P.C.A. vegetational ordination seemed most appropriate to data of low complexity (the quarter sets).
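The 2W/(A + B) index referenced here is the abundance-based Sørensen (Czekanowski) similarity coefficient: W sums the smaller of the two abundances for each species, and A and B are the total abundances in the two stands. A minimal sketch, with hypothetical species-abundance data rather than values from the survey:

```python
def sorensen_similarity(stand_a, stand_b):
    """2W / (A + B): W sums the smaller of the two abundances for each
    species; A and B are the total abundances of the two stands.
    Returns 1.0 for identical stands, 0.0 for stands sharing no species."""
    species = set(stand_a) | set(stand_b)
    w = sum(min(stand_a.get(s, 0), stand_b.get(s, 0)) for s in species)
    a, b = sum(stand_a.values()), sum(stand_b.values())
    return 2 * w / (a + b)

# Hypothetical abundance dictionaries, not data from the study
sim = sorensen_similarity({"carex": 4, "sphagnum": 6},
                          {"carex": 2, "alnus": 8})  # → 0.2
```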


2020 ◽  
Vol 32 (2) ◽  
pp. 281-329 ◽  
Author(s):  
Sidney R. Lehky ◽  
Anh Huy Phan ◽  
Andrzej Cichocki ◽  
Keiji Tanaka

Neurons selective for faces exist in humans and monkeys. However, the characteristics of face cell receptive fields are poorly understood. In this theoretical study, we explore the effects of complexity, defined as algorithmic information (Kolmogorov complexity) and logical depth, on possible ways that face cells may be organized. We use tensor decompositions to decompose faces into a set of components, called tensorfaces, and their associated weights, which can be interpreted as model face cells and their firing rates. These tensorfaces form a high-dimensional representation space in which each tensorface is an axis of the space. A distinctive feature of the decomposition algorithm is the ability to specify tensorface complexity. We found that low-complexity tensorfaces have blob-like appearances that crudely approximate faces, while high-complexity tensorfaces appear clearly face-like. Low-complexity tensorfaces require a larger population to reach a criterion face reconstruction error than medium- or high-complexity tensorfaces, and are thus inefficient by that criterion. Low-complexity tensorfaces, however, generalize better when representing statistically novel faces, that is, faces falling beyond the distribution of face description parameters found in the tensorface training set. The degree to which face representations are parts-based or global forms a continuum as a function of tensorface complexity, with low- and medium-complexity tensorfaces being more parts-based. Given the computational load imposed in creating high-complexity face cells (in the form of algorithmic information and logical depth), and in the absence of a compelling advantage to using high-complexity cells, we suggest face representations consist of a mixture of low- and medium-complexity face cells.
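The general idea of decomposing faces into components ("model face cells") and weights ("firing rates"), and of measuring reconstruction error as a function of population size, can be illustrated with a much simpler stand-in: a truncated SVD of vectorized faces on synthetic data. This is not the authors' tensor decomposition algorithm, and it has no complexity control; it only shows the population-size/reconstruction-error trade-off the abstract describes.

```python
import numpy as np

rng = np.random.default_rng(0)
faces = rng.normal(size=(100, 64))  # 100 synthetic "faces", 64 pixels each
centered = faces - faces.mean(axis=0)

# SVD stands in for the paper's tensor decomposition: rows of Vt act as
# components (model face cells); U * S gives per-face weights (firing rates).
U, S, Vt = np.linalg.svd(centered, full_matrices=False)

def reconstruction_error(k):
    """Relative error when only the first k components are kept."""
    approx = (U[:, :k] * S[:k]) @ Vt[:k]
    return np.linalg.norm(centered - approx) / np.linalg.norm(centered)

# Error shrinks as the "population" of components grows.
errors = [reconstruction_error(k) for k in (5, 20, 60)]
```

In the paper, the interesting comparison is between decompositions of different per-component complexity at a fixed criterion error, which this rank-based sketch does not capture.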

