modality combinations
Recently Published Documents

TOTAL DOCUMENTS: 16 (five years: 2)
H-INDEX: 5 (five years: 0)

2021 ◽  
Vol 15 ◽  
Author(s):  
Irene Togoli ◽  
Roberto Arrighi

Humans and other species share a perceptual mechanism dedicated to the representation of approximate quantities that allows the numerosity of a set of objects to be estimated rapidly and reliably: the Approximate Number System (ANS). Numerosity perception shows a characteristic shared by all primary visual features: it is susceptible to adaptation. As a consequence of prolonged exposure to a large or small quantity (the "adaptor"), the apparent numerosity of a subsequent "test" stimulus is distorted, yielding a robust under- or overestimation, respectively. Although numerosity adaptation has been reported across several sensory modalities (vision, audition, and touch), suggesting a central, amodal numerosity-processing system, evidence for cross-modal effects is limited to vision and audition, two modalities known to preferentially encode sensory stimuli in an external coordinate system. Here we test whether numerosity adaptation for visual and auditory stimuli also distorts the perceived numerosity of tactile stimuli (and vice versa), despite touch being a modality primarily coded in an internal (body-centered) reference frame. We measured numerosity discrimination of stimuli presented sequentially after adaptation to series of either few (around 2 Hz; low adaptation) or numerous (around 8 Hz; high adaptation) impulses, for all possible combinations of visual, auditory, or tactile adapting and test stimuli. In all cases, adapting to few impulses yielded a significant overestimation of the test numerosity, with the opposite occurring after adaptation to numerous stimuli. The overall magnitude of adaptation was robust (around 30%) and rather similar across all sensory modality combinations. Overall, these findings support the idea of a truly generalized and amodal mechanism for numerosity representation that processes numerical information independently of the sensory modality of the incoming signals.
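Adaptation magnitudes of this kind are commonly quantified as a shift of the point of subjective equality (PSE) of the psychometric function relative to a no-adaptation baseline. The sketch below illustrates that computation on hypothetical discrimination data; the numerosity levels, response proportions, and the resulting percentage are illustrative assumptions, not the study's actual measurements or analysis code.

```python
import numpy as np

# Hypothetical 2AFC data: proportion of trials on which the test stimulus was
# judged MORE numerous than a fixed reference, for several test numerosities.
test_numerosity = np.array([8, 10, 12, 14, 16, 18, 20])

# Illustrative response curves (assumed values, not data from the paper):
p_baseline = np.array([0.05, 0.15, 0.35, 0.50, 0.70, 0.88, 0.97])    # no adaptation
p_high_adapt = np.array([0.02, 0.06, 0.15, 0.30, 0.50, 0.75, 0.92])  # after high (8 Hz) adaptation

def pse(x, p):
    """Point of subjective equality: the test numerosity judged equal to the
    reference on 50% of trials, obtained by linear interpolation."""
    return np.interp(0.5, p, x)

pse_base = pse(test_numerosity, p_baseline)
pse_adapt = pse(test_numerosity, p_high_adapt)

# Adaptation magnitude expressed as a percentage shift of the PSE from baseline.
magnitude = 100 * (pse_adapt - pse_base) / pse_base
print(f"baseline PSE: {pse_base:.1f}, adapted PSE: {pse_adapt:.1f}, "
      f"adaptation magnitude: {magnitude:.0f}%")
```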


2021 ◽  
Vol 104 (2) ◽  
pp. 003685042110211
Author(s):  
Lin Yang ◽  
Wu Yan ◽  
Hongmin Wu

Human-Robot Collaboration (HRC) has been widely used in daily life and industry to combine the respective advantages of humans and robots. However, robotic systems are still affected by internal modeling errors and external perturbations such as human collisions and environmental changes. Multimodal anomaly detection, which detects unexpected anomalies from multimodal signals, therefore plays an increasingly important role in HRC applications. Owing to the complex temporal dependence and stochasticity of such signals, it remains difficult to choose a common model applicable to all collaborative tasks, and there is a lack of comparative analysis of existing methods and of verification on specific application cases. In this paper, six representative deep learning-based methods are evaluated and compared on metrics including detection accuracy, multi-modality combinations, and anomaly time bias. For a fair comparison, each detector models multimodal signals from non-anomalous samples and then flags an anomaly using a predefined threshold. We evaluate the detectors with force, torque, velocity, tactile, and kinematic sensing during a human-robot kitting experiment consisting of six individual skills; the results indicate that the LSTM-DAGMM-based detector outperformed the others, yielding higher accuracy and efficiency. The metrics are measured with ROC curves and the corresponding AUC while varying the multi-modality combinations and anomaly time biases, with the aim of obtaining the best multimodal anomaly detection performance.
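As a rough illustration of the detection scheme described above (model non-anomalous multimodal signals, then flag samples whose score exceeds a predefined threshold, and evaluate with ROC/AUC), the sketch below substitutes a simple multivariate-Gaussian score for the deep models compared in the paper. The feature dimensions, threshold percentile, and synthetic data are assumptions for demonstration only, not the paper's LSTM-DAGMM implementation.

```python
import numpy as np

rng = np.random.default_rng(0)

# Assumed multimodal feature vectors (e.g., force/torque/velocity/tactile features
# concatenated per time window); in practice these come from the robot's sensors.
normal_train = rng.normal(0.0, 1.0, size=(500, 12))   # non-anomalous training samples
normal_test = rng.normal(0.0, 1.0, size=(200, 12))
anomalous_test = rng.normal(2.5, 1.5, size=(50, 12))   # simulated anomalies

# Model the non-anomalous data with a multivariate Gaussian.
mu = normal_train.mean(axis=0)
cov = np.cov(normal_train, rowvar=False) + 1e-6 * np.eye(normal_train.shape[1])
cov_inv = np.linalg.inv(cov)

def anomaly_score(x):
    """Squared Mahalanobis distance to the non-anomalous training distribution."""
    d = x - mu
    return np.einsum("ij,jk,ik->i", d, cov_inv, d)

# Predefined threshold: the 99th percentile of training scores (an assumed choice).
threshold = np.percentile(anomaly_score(normal_train), 99)

scores = np.concatenate([anomaly_score(normal_test), anomaly_score(anomalous_test)])
labels = np.concatenate([np.zeros(len(normal_test)), np.ones(len(anomalous_test))])
flagged = scores > threshold
accuracy = np.mean(flagged == labels)

# ROC/AUC by sweeping the threshold over all observed scores.
order = np.argsort(-scores)
tpr = np.cumsum(labels[order]) / labels.sum()
fpr = np.cumsum(1 - labels[order]) / (1 - labels).sum()
auc = np.sum(np.diff(fpr, prepend=0.0) * tpr)  # step-wise approximation of the ROC area
print(f"accuracy at fixed threshold: {accuracy:.2f}, AUC: {auc:.2f}")
```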


2020 ◽  
Vol 14 (2) ◽  
pp. 193-206 ◽  
Author(s):  
Michael J. M. Harrap ◽  
Natalie Hempel de Ibarra ◽  
Heather M. Whitney ◽  
Sean A. Rands

Abstract. Floral guides are signal patterns that lead pollinators to floral rewards after they have located the flower, increasing foraging efficiency and pollen transfer. Patterns in several floral signalling modalities, particularly colour patterns, have been identified as being able to function as floral guides. Floral temperature frequently shows patterns that bumblebees can use to locate and recognise the flower, but whether these temperature patterns can function as a floral guide has not been explored. Furthermore, how combined patterns (using multiple signalling modalities) affect floral guide function has been investigated for only a few modality combinations. We assessed how artificial flowers induce behaviours in bumblebees when rewards are indicated by unimodal temperature patterns, unimodal colour patterns, or multimodal combinations of the two. Bees visiting flowers with unimodal temperature patterns showed an increased probability of finding rewards and increased learning of reward location compared to bees visiting flowers without patterns. However, bees visiting flowers with contrasting unimodal colour patterns showed further guide-related behavioural changes beyond these, such as reduced reward search times and attraction to the rewarding feeder without learning. This shows that temperature patterns alone can function as a floral guide, but with reduced efficiency. When temperature patterns were added to colour patterns, bees showed similar improvements in learning the reward location and reductions in failed visits, in addition to the responses seen to colour patterns alone. This demonstrates that temperature pattern guides can have beneficial effects on flower handling both when presented alone and when combined with colour patterns.


Author(s):  
Mareike A. Hoffmann ◽  
Melanie Westermann ◽  
Aleks Pieczykolan ◽  
Lynn Huestegge

Abstract. Doing two things at once (vs. one in isolation) usually yields performance costs. Such decrements are often distributed asymmetrically between the two actions involved, reflecting different processing priorities. A previous study (Huestegge & Koch, 2013) demonstrated that the particular effector systems associated with the two actions can determine the pattern of processing priorities: Vocal responses were prioritized over manual responses, as indicated by smaller performance costs (associated with dual-action demands) for the former. However, this previous study only involved auditory stimulation (for both actions). Given that previous research on input–output modality compatibility in dual tasks suggested that pairing auditory input with vocal output represents a particularly advantageous mapping, the question arises whether the observed vocal-over-manual prioritization was merely a consequence of auditory stimulation. To resolve this issue, we conducted a manual–vocal dual task study using either only auditory or only visual stimuli for both responses. We observed vocal-over-manual prioritization in both stimulus modality conditions. This suggests that input–output modality mappings can (to some extent) attenuate, but not abolish/reverse effector-based prioritization. Taken together, effector system pairings appear to have a more substantial impact on capacity allocation policies in dual-task control than input–output modality combinations.
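The asymmetry described here is typically quantified by comparing dual-task costs per effector system. A minimal worked example of that arithmetic is sketched below; the response times are made-up illustrative values, not the study's data.

```python
# Hypothetical mean response times in ms (illustrative values, not the study's data).
rt_single = {"vocal": 520, "manual": 480}
rt_dual = {"vocal": 560, "manual": 610}

# Dual-task cost per effector: slowing relative to performing the action in isolation.
costs = {eff: rt_dual[eff] - rt_single[eff] for eff in rt_single}
print(costs)  # {'vocal': 40, 'manual': 130}

# A smaller cost for vocal than manual responses indicates vocal-over-manual prioritization.
asymmetry = costs["manual"] - costs["vocal"]
print(f"prioritization asymmetry: {asymmetry} ms in favor of vocal responses")
```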


2019 ◽  
Vol 30 (10) ◽  
pp. 1473-1482 ◽  
Author(s):  
Suddha Sourav ◽  
Ramesh Kekunnaya ◽  
Idris Shareef ◽  
Seema Banerjee ◽  
Davide Bottari ◽  
...  

Humans preferentially match arbitrary words containing higher- and lower-frequency phonemes to angular and smooth shapes, respectively. Here, we investigated the role of visual experience in the development of audiovisual and audiohaptic sound–shape associations (SSAs) using a unique set of five groups: individuals who had suffered a transient period of congenital blindness through congenital bilateral dense cataracts before undergoing cataract-reversal surgeries (CC group), individuals with a history of developmental cataracts (DC group), individuals with congenital permanent blindness (CB group), individuals with late permanent blindness (LB group), and controls with typical sight (TS group). Whereas the TS and LB groups showed highly robust SSAs, the CB, CC, and DC groups did not—in any of the modality combinations tested. These results provide evidence for a protracted sensitive period during which aberrant vision prevents SSA acquisition. Moreover, the finding of a systematic SSA in the LB group demonstrates that representations acquired during the sensitive period are resilient to loss despite dramatically changed experience.


Author(s):  
Kylie M. Gomes ◽  
Sara L. Riggs

A limited number of multimodal studies conduct crossmodal matching, a step to ensure that cues are perceived to be of equal intensity across sensory modalities. The majority of work on crossmodal matching was conducted by Stevens in the 1950s and 1960s, and there has been limited work since on developing a reliable crossmodal matching method for use in more recent multimodal studies. A few studies have contributed to this goal; however, little consideration has been given to identifying which parameters map between modalities and whether age contributes significantly to between-subject variability. The goal of the current study is to investigate how auditory pitch and age affect crossmodal matching and the variability in the matches made. The findings revealed that when auditory pitch is varied, there is a significant effect of age, especially when participants were able to control the intensity of the auditory modality. Additionally, there was significant variability between different modality combinations across both age groups. The findings demonstrate the importance of considering the appropriate parameters to be used across different sensory modalities and the effect age has on crossmodal matching.
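Crossmodal matching in the Stevens tradition is usually framed in terms of the power law, where perceived magnitude grows as a power function of stimulus intensity. The sketch below shows how a matching intensity in a second modality can be derived so that the two predicted perceived magnitudes are equal; the exponents and constants are illustrative textbook-style assumptions, not values reported or used in this study.

```python
# Stevens' power law: perceived magnitude psi = k * I**a for each modality.
# Illustrative (assumed) parameters; actual exponents depend on the stimulus
# dimension (e.g., roughly 0.67 for loudness vs. sound pressure).
k_audio, a_audio = 1.0, 0.67   # reference modality (auditory intensity)
k_vibro, a_vibro = 1.0, 0.95   # modality to be matched (vibrotactile amplitude)

def matching_intensity(i_ref):
    """Intensity in the second modality whose predicted perceived magnitude
    equals that of the reference intensity i_ref."""
    psi = k_audio * i_ref ** a_audio           # perceived magnitude of the reference
    return (psi / k_vibro) ** (1.0 / a_vibro)  # invert the second power function

for i_ref in (1.0, 2.0, 4.0, 8.0):
    print(f"reference intensity {i_ref:4.1f} -> matched intensity {matching_intensity(i_ref):5.2f}")
```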


Author(s):  
Kylie Gomes ◽  
Sara L. Riggs

Multimodal interfaces that distribute information across vision, audition, and touch have been demonstrated to improve performance in various complex domains. However, many multimodal studies to date fail to conduct crossmodal matching, a critical step to ensure that cues across different sensory channels are perceived to be of equal intensity. The present study compared two different methods of crossmodal matching based on previous work conducted by Stevens: the methods of bracketing and adjustment. Each participant completed the crossmodal matching task with two different interfaces, using either the method of bracketing or the method of adjustment, for all modality combinations across vision, audition, and touch. The results showed a significant effect of interface type, and subject variability depended on the modality used as the reference. Overall, the findings show the viability of the new method, but also support the need for a reliable crossmodal matching technique that reduces within-subject variability.
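For illustration, the sketch below contrasts the two procedures in simplified form: in the method of adjustment the participant freely nudges the comparison intensity until it feels equal to the reference, whereas in bracketing the comparison alternates between clearly-too-strong and clearly-too-weak values while the interval is narrowed. The simulated observer, step sizes, and stopping rules are assumptions for demonstration, not the procedures as implemented by the authors.

```python
import random

TRUE_MATCH = 5.0  # comparison intensity that would feel equal to the reference (unknown to the procedures)

def feels_stronger(intensity):
    """Simulated participant judgment with perceptual noise (an assumed observer model)."""
    return intensity + random.gauss(0.0, 0.3) > TRUE_MATCH

def method_of_adjustment(start=8.0, step=0.5, max_steps=40):
    """Participant repeatedly nudges the comparison up or down, with progressively
    finer adjustments, until the trial budget runs out."""
    intensity = start
    for _ in range(max_steps):
        intensity += -step if feels_stronger(intensity) else step
        step = max(step * 0.9, 0.05)
    return intensity

def method_of_bracketing(low=0.0, high=10.0, trials=12):
    """Alternate clearly-strong and clearly-weak comparisons, narrowing the bracket
    around the point of apparent equality."""
    for t in range(trials):
        probe = high if t % 2 == 0 else low      # approach from above, then from below
        if feels_stronger(probe):
            high = probe - 0.25 * (high - low)   # judged stronger: lower the upper probe
        else:
            low = probe + 0.25 * (high - low)    # judged weaker: raise the lower probe
    return (low + high) / 2

print(f"adjustment match: {method_of_adjustment():.2f}")
print(f"bracketing match: {method_of_bracketing():.2f}")
```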

