Music training and rate of presentation as mediators of text and song recall

2000 ◽ Vol 28 (5) ◽ pp. 700-710 ◽ Author(s): Andrea R. Kilgour, Lorna S. Jakobson, Lola L. Cuddy
2013 ◽ Author(s): Annalise A. D'Souza, Galit Blumenthal, Linda Moradzadeh, Melody Wiseheart

2018 ◽ Vol 44 (6) ◽ pp. 992-999 ◽ Author(s): Swathi Swaminathan, E. Glenn Schellenberg, Kirthika Venkatesan

2020 ◽ pp. 102986492097214 ◽ Author(s): Aurélien Bertiaux, François Gabrielli, Mathieu Giraud, Florence Levé

Learning to write music in the staff notation used in Western classical music is part of a musician’s training. However, writing music by hand is rarely taught formally, and many musicians are unaware of the characteristics of their own musical handwriting. As with any symbolic expression, musical handwriting is related to the underlying cognition of the musical structures being depicted. Trained musicians read, think, and play music with high-level structures in mind, so it seems natural that they would also write music by hand with these structures in mind. Moreover, improving our understanding of handwriting may help to improve both optical music recognition and interfaces for music notation and composition. We investigated associations between music training and experience and the way people write music by hand. We made video recordings of participants’ hands while they were copying or freely writing music, and analysed the sequence in which they wrote the elements contained in the musical score. The results confirmed that experienced musicians wrote faster than beginners and were more likely to write chords from bottom to top; they also tended to write the note heads first, in a flowing fashion, and only afterwards to add stems and beams to emphasize grouping and to insert expressive markings.


1993 ◽ Vol 21 (2) ◽ pp. 114-126 ◽ Author(s): Louise M. Buttsworth, Gerard J. Fogarty, Peter C. Rorke

2016 ◽ Vol 45 (5) ◽ pp. 752-760 ◽ Author(s): Paulo E. Andrade, Patrícia Vanzella, Olga V. C. A. Andrade, E. Glenn Schellenberg

Brazilian listeners (N = 303) were asked to identify emotions conveyed in 1-min instrumental excerpts from Wagner’s operas. Participants included musically untrained 7- to 10-year-olds and university students in music (musicians) or science (nonmusicians). After hearing each of eight different excerpts, listeners made a forced-choice judgment about which of eight emotions best matched the excerpt. The excerpts and emotions were chosen so that two were in each of four quadrants of the two-dimensional space defined by arousal and valence. Listeners of all ages performed at above-chance levels, which means that complex, unfamiliar musical materials from a different century and culture are nevertheless meaningful for young children. In fact, children performed similarly to adult nonmusicians. There was age-related improvement among children, however, and adult musicians performed best of all. As in previous research that used simpler musical excerpts, effects due to age and music training were due primarily to improvements in selecting the appropriate valence. That is, even 10-year-olds with no music training were as likely as adult musicians to match a high- or low-arousal excerpt with a high- or low-arousal emotion, respectively. Performance was independent of general cognitive ability as measured by academic achievement but correlated positively with basic pitch-perception skills.


2011 ◽ Vol 8 (5) ◽ pp. 608-623 ◽ Author(s): Franziska Degé, Sina Wehrum, Rudolf Stark, Gudrun Schwarzer

2016 ◽ Vol 33 (4) ◽ pp. 472-492 ◽ Author(s): Yading Song, Simon Dixon, Marcus T. Pearce, Andrea R. Halpern

Music both conveys and evokes emotions, and although both phenomena are widely studied, the difference between them is often neglected. The purpose of this study was to examine the difference between perceived and induced emotion for Western popular music using both categorical and dimensional models of emotion, and to examine the influence of individual listener differences on emotion judgments. A total of 80 musical excerpts were randomly selected from an established dataset of 2,904 popular songs tagged with one of the four words “happy,” “sad,” “angry,” or “relaxed” on the Last.FM website. Participants listened to the excerpts and rated perceived and induced emotion on both the categorical and the dimensional model, and the reliability of the emotion tags was evaluated according to participants’ agreement with the corresponding labels. In addition, the Goldsmiths Musical Sophistication Index (Gold-MSI) was used to assess participants’ musical expertise and engagement. As expected, regardless of the emotion model used, music evoked emotions similar to the emotional quality perceived in the music. Moreover, emotion tags predicted music emotion judgments. However, age, gender, and three Gold-MSI factors (importance, emotion, and music training) predicted neither listeners’ responses nor their agreement with the tags.

