The Effects of Timbre on Neural Responses to Musical Emotion

2019 ◽  
Vol 37 (2) ◽  
pp. 134-146
Author(s):  
Weixia Zhang ◽  
Fang Liu ◽  
Linshu Zhou ◽  
Wanqi Wang ◽  
Hanyuan Jiang ◽  
...  

Timbre is an important factor that affects the perception of emotion in music. To date, little is known about the effects of timbre on neural responses to musical emotion. To address this issue, we used ERPs to investigate whether there are different neural responses to musical emotion when the same melodies are presented in different timbres. With a cross-modal affective priming paradigm, target faces were primed by affectively congruent or incongruent melodies without lyrics presented in the violin, flute, and voice. Results showed a larger P3 and a larger left anterior distributed LPC in response to affectively incongruent versus congruent trials in the voice version. For the flute version, however, only the LPC effect was found, which was distributed over centro-parietal electrodes. Unlike the voice and flute versions, an N400 effect was observed in the violin version. These findings revealed different patterns of neural responses to musical emotion when the same melodies were presented in different timbres, and provide evidence for the hypothesis that there are specialized neural responses to the human voice.

2021 ◽  
Vol 11 (5) ◽  
pp. 553
Author(s):  
Chenggang Wu ◽  
Juan Zhang ◽  
Zhen Yuan

To explore the affective priming effect of emotion-label words and emotion-laden words, the current study used unmasked (Experiment 1) and masked (Experiment 2) priming paradigms, with emotion-label words (e.g., sadness, anger) and emotion-laden words (e.g., death, gift) as primes, and examined how the two kinds of words acted upon the processing of the target words (all emotion-laden words). Participants were instructed to decide the valence of target words while their electroencephalogram was recorded. The behavioral and event-related potential (ERP) results showed that positive words produced a priming effect, whereas negative words inhibited target word processing (Experiment 1). In Experiment 2, the inhibitory effect of negative emotion-label words on emotion word recognition was found in both behavioral and ERP results, suggesting that the modulation of emotion word type on emotion word processing can be observed even in the masked priming paradigm. The two experiments further support the necessity of defining emotion words from an emotion word type perspective. The implications of the findings are proffered. Specifically, a clear understanding of emotion-label words and emotion-laden words can improve the effectiveness of emotional communication in clinical settings. Theoretically, the emotion word type perspective awaits further exploration and is still in its infancy.


2018 ◽  
Vol 57 (6) ◽  
pp. 1534-1548 ◽  
Author(s):  
Scotty D. Craig ◽  
Noah L. Schroeder

Technology advances quickly in today’s society. This is particularly true in regard to instructional multimedia. One increasingly important aspect of instructional multimedia design is determining the type of voice that will provide the narration; however, research in the area is dated and limited in scope. Using a randomized pretest–posttest design, we examined the efficacy of learning from an instructional animation where narration was provided by an older text-to-speech engine, a modern text-to-speech engine, or a recorded human voice. In most respects, those who learned from the modern text-to-speech engine were not statistically different in regard to their perceptions, learning outcomes, or cognitive efficiency measures compared with those who learned from the recorded human voice. Our results imply that software technologies may have reached a point where they can credibly and effectively deliver the narration for multimedia learning environments.


The aim of this project is to develop a wheelchair that can be controlled by the user's voice. It is based on a speech recognition model and focuses on controlling the wheelchair by human voice alone, with a smartphone serving as the interface between human and machine. The system will be particularly valuable to disabled and elderly people, allowing them to move freely without the assistance of others and giving them the moral support to live independently. The hardware used includes an Arduino kit, a microcontroller, a wheelchair, and DC motors. The DC motors drive the movement of the wheelchair, and an ultrasonic sensor detects obstacles in the wheelchair's path.
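The control loop described above can be sketched as a simple dispatch from a recognized voice command to left/right motor directions, with the ultrasonic reading gating forward motion. This is a minimal illustrative sketch, not the project's actual firmware; the command names, the `motor_command` function, and the 30 cm threshold are all assumptions.

```python
# Hypothetical command-dispatch logic for a voice-controlled wheelchair:
# a recognized word maps to (left motor, right motor) directions, where
# 1 = forward, -1 = reverse, 0 = stop. All names here are illustrative.

COMMANDS = {
    "forward": (1, 1),
    "back":    (-1, -1),
    "left":    (0, 1),   # turn left by driving only the right motor
    "right":   (1, 0),   # turn right by driving only the left motor
    "stop":    (0, 0),
}

OBSTACLE_THRESHOLD_CM = 30  # assumed minimum safe distance


def motor_command(spoken_word, obstacle_distance_cm):
    """Translate a recognized word into motor directions, halting if blocked."""
    directions = COMMANDS.get(spoken_word, (0, 0))  # unknown words mean stop
    # Refuse to drive forward into an obstacle reported by the ultrasonic sensor.
    if spoken_word == "forward" and obstacle_distance_cm < OBSTACLE_THRESHOLD_CM:
        return (0, 0)
    return directions
```

On real hardware these direction pairs would be written to the motor driver pins by the microcontroller; here the function only returns them so the logic can be inspected.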


2021 ◽  
pp. 194084472110428
Author(s):  
Grace O'Grady

One year after beginning a large-scale research inquiry into how young people construct their identities, I became ill and subsequently underwent abdominal surgery, which triggered an early menopause. The process, which was experienced as creatively bruising, called to be written as "Artful Autoethnography," using visual images and poetry to tell a "vulnerable, evocative and therapeutic" story of illness, menopause, and their subject positions in intersecting relations of power. The process, which was experienced as disempowering, called to be performed as an act of resistance and activism. This performance ethnography is in line with the call for qualitative inquirers to move beyond strict methodological boundaries. In particular, the voice of activism in this performance is in the space between data (human voice and visual art pieces) and theory. To this end, and in resisting stratifying institutional/medical discourse, the performance attempts to create a space for a merger of ethnography and activism in public/private life.


Author(s):  
Robert C. Ehle

This chapter offers the author's theory of the origins of music in ancient primates a million years ago and considers what that music would have sounded like. The origins of nasal and tone languages and the anatomy of the larynx are discussed, and a hypothesis is presented that these creatures would have fashioned a tone language. They had absolute pitch, which allowed them to recognize other voices, to read each other's emotions from the sounds they made with their voices, and to convey specific information over long distances about strategies, meeting places, and so on. Having an acute sense of pitch, they would have sung, essentially using tonal language for aesthetic and subjective purposes; thus, they would have invented music. The physicality of the human (or hominid) voice is then discussed, along with the way absolute pitch can be acquired, since musicality still lies in the vocalisms the voice expresses. The reason for this is that music is contained in the way the brain works, and the ear and the voice are parts of this system. The final part discusses the origins of musical emotion, making the case for imprinting in the perinatal period.


2020 ◽  
Vol 117 (21) ◽  
pp. 11364-11367 ◽  
Author(s):  
Wim Pouw ◽  
Alexandra Paxton ◽  
Steven J. Harrison ◽  
James A. Dixon

We show that the human voice has complex acoustic qualities that are directly coupled to peripheral musculoskeletal tensioning of the body, such as subtle wrist movements. In this study, human vocalizers produced a steady-state vocalization while rhythmically moving the wrist or the arm at different tempos. Although listeners could only hear and not see the vocalizer, they were able to completely synchronize their own rhythmic wrist or arm movement with the movement of the vocalizer which they perceived in the voice acoustics. This study corroborates recent evidence suggesting that the human voice is constrained by bodily tensioning affecting the respiratory–vocal system. The current results show that the human voice contains a bodily imprint that is directly informative for the interpersonal perception of another’s dynamic physical states.


1973 ◽  
Vol 56 (4) ◽  
pp. 944-946
Author(s):  
Ernest W Nash

Abstract The human voice, as an instrument of crime, is used more often than weapons and automobiles combined. Some crimes are committed by the voice alone; therefore, being able to identify a speaker by his voice is a highly desirable goal in the fight against crime. This goal, however, has been somewhat hindered by the lack of technology and instrumentation. The use of spectrograms (voiceprints) to assist the expert in making an objective evaluation of the voices in question is discussed. The scientific basis for accepting the identification of a speaker's voice is the uniqueness of the individual: if a unique person uses unique physiological structures to produce the sounds of speech, it logically follows that the sound will also be unique. By visual examination of the spectrographic analysis, a trained expert is able to compare these unique features.
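The spectrograms discussed in this abstract plot signal energy by frequency over time. A minimal sketch of how such a magnitude spectrogram is computed today, via a short-time Fourier transform, is shown below; the function name and the Hann-window/frame-length choices are illustrative assumptions, not the 1973 instrumentation.

```python
import numpy as np

def spectrogram(signal, frame_len=256, hop=128):
    """Magnitude spectrogram via a short-time Fourier transform (Hann window).

    Returns an array with frequency bins along the rows and analysis
    frames (time) along the columns, as a voiceprint would be displayed.
    """
    window = np.hanning(frame_len)
    # Slice the signal into overlapping windowed frames.
    frames = [signal[i:i + frame_len] * window
              for i in range(0, len(signal) - frame_len + 1, hop)]
    # One column of |FFT| magnitudes per frame.
    return np.abs(np.fft.rfft(np.stack(frames), axis=1)).T
```

For a frame length of 256 samples, each column holds 129 frequency bins (`rfft` of a real frame), and a pure tone concentrates its energy in the bin nearest its frequency.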


2018 ◽  
Vol 42 (1) ◽  
pp. 37-59 ◽  
Author(s):  
Stefano Fasciani ◽  
Lonce Wyse

In this article we describe a user-driven adaptive method to control the sonic response of digital musical instruments using information extracted from the timbre of the human voice. The mapping between heterogeneous attributes of the input and output timbres is determined from data collected through machine-listening techniques and then processed by unsupervised machine-learning algorithms. This approach is based on a minimum-loss mapping that hides any synthesizer-specific parameters and that maps the vocal interaction directly to perceptual characteristics of the generated sound. The mapping adapts to the dynamics detected in the voice and maximizes the timbral space covered by the sound synthesizer. The strategies for mapping vocal control to perceptual timbral features and for automating the customization of vocal interfaces for different users and synthesizers, in general, are evaluated through a variety of qualitative and quantitative methods.
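One way to picture the kind of unsupervised mapping the article describes is a learned projection from vocal timbre features into a low-dimensional control space, rescaled so the performer's vocal range spans the synthesizer's full range. The sketch below is an assumption-laden stand-in, using principal components via SVD for the unsupervised learning step; the function names and the min-max rescaling are illustrative, not the authors' algorithm.

```python
import numpy as np

def fit_timbre_mapping(voice_features, n_dims=2):
    """Learn an unsupervised projection of vocal timbre features.

    voice_features: (n_frames, n_features) array collected during a training
    phase (e.g. frame-wise spectral descriptors). Returns a function mapping a
    new feature vector into [0, 1]^n_dims, rescaled so the training data spans
    the full range -- one simple way to maximize the covered timbral space.
    """
    mean = voice_features.mean(axis=0)
    # Principal components (via SVD) stand in for the unsupervised learning step.
    _, _, vt = np.linalg.svd(voice_features - mean, full_matrices=False)
    basis = vt[:n_dims].T
    projected = (voice_features - mean) @ basis
    lo, hi = projected.min(axis=0), projected.max(axis=0)

    def map_frame(frame):
        # Project a live frame and normalize into [0, 1] control values,
        # which would then drive perceptual parameters of the synthesizer.
        z = (np.asarray(frame) - mean) @ basis
        return np.clip((z - lo) / (hi - lo), 0.0, 1.0)

    return map_frame
```

The returned closure hides the synthesizer-specific parameters behind a normalized perceptual control space, mirroring the article's goal of a mapping that adapts to the dynamics detected in the individual voice.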


Nutrients ◽  
2019 ◽  
Vol 11 (11) ◽  
pp. 2832 ◽  
Author(s):  
Wataru Sato ◽  
Krystyna Rymarczyk ◽  
Kazusa Minemoto ◽  
Jakub Wojciechowski ◽  
Sylwia Hyniewska

Previous psychological studies have shown that images of food elicit hedonic responses, either consciously or unconsciously, and that participants’ cultural experiences moderate conscious hedonic ratings of food. However, whether cultural factors moderate unconscious hedonic responses to food remains unknown. We investigated this issue in Polish and Japanese participants using the subliminal affective priming paradigm. Images of international fast food and domestic Japanese food were presented subliminally as prime stimuli. Participants rated their preferences for the subsequently presented target ideographs. Participants also rated their preferences for supraliminally presented food images. In the subliminal rating task, Polish participants showed higher preference ratings for fast food primes than for Japanese food primes, whereas Japanese participants showed comparable preference ratings across these two conditions. In the supraliminal rating task, both Polish and Japanese participants reported comparable preferences for fast and Japanese food stimuli. These results suggest that cultural experiences moderate unconscious hedonic responses to food, which may not be detected based on explicit ratings.


2019 ◽  
Vol 34 (1) ◽  
pp. 28-47 ◽  
Author(s):  
Emna Chérif ◽  
Jean-François Lemoine

Virtual assistants are increasingly common on commercial websites. In view of the benefits they offer to businesses for improving navigation and interaction with the consumers, researchers and practitioners agree on the value of providing them with anthropomorphic characteristics. This study focuses on the effect of the voice of the virtual assistant. Although there are some studies of human–computer interaction in this field, there is no work that addresses the topic from a marketing perspective and compares the effect of a human voice versus a synthetic voice. Our findings show that consumers who interact with a virtual assistant with a human voice have a stronger impression of social presence than those interacting with a virtual assistant with a synthetic voice. The human voice also builds trust in the virtual assistant and generates stronger behavioural intentions.

