MUSIC PERCEPTION OF THE BRAIN AND ITS DEVELOPMENTAL EFFECTS

2020 ◽  
Vol 6 (35) ◽  
pp. 1622-1628
Author(s):  
Zeynep Ebru AYATA


Author(s):  
Diana Deutsch

In this groundbreaking synthesis of art and science, Diana Deutsch, one of the world’s leading experts on the psychology of music, shows how illusions of music and speech, many of which she discovered, have fundamentally altered thinking about the brain. These astonishing illusions show that people can differ strikingly in how they hear musical patterns, differences that reflect both variations in brain organization and influences of language on music perception. They lead Deutsch to examine questions such as: When an orchestra performs a symphony, what is the ‘real’ music? Is it in the mind of the composer, the conductor, or different members of the audience? Deutsch also explores extremes of musical ability and other rare responses to music and speech. Why is perfect pitch so rare? Why are some people unable to recognize simple tunes? Why do some people hallucinate music or speech? Why do we hear phantom words and phrases? Why are most people subject to stuck tunes, or ‘earworms’? Why do we hear a spoken phrase as sung simply because it is presented repeatedly? In evaluating these questions, she also shows how music and speech are intertwined and argues that they stem from an early form of communication that had elements of both. Many of the illusions described here are so striking and paradoxical that you need to hear them to believe them, so the book enables you to listen to the sounds that are described while reading about them.


2019 ◽  
Vol 11 (2) ◽  
pp. 98
Author(s):  
Artur Jaschke

Music activates a wide array of brain areas involved in different functions such as the perception, processing and execution of music. Understanding musical processes in the brain has multiple implications for the neuro- and health sciences. Challenging the brain with a multisensory stimulus such as music activates responses beyond the auditory cortex of the temporal lobe. Other areas involved include the frontal lobes, parietal lobes, areas of the limbic system such as the amygdala, hippocampus and thalamus, the cerebellum and the brainstem. Nonetheless, there has been no attempt to summarize all brain areas involved in music into one overall encompassing map. This may well be because no thorough theory has been introduced that would provide an initial point of departure for creating such a map. Therefore, a systematic review was conducted to identify all reported neural connections involved in the perception, processing and execution of music. Communication between the thalamic nuclei is the initial step in multisensory integration, which lies at the base of the neural networks proposed in this paper. Against this background, this manuscript introduces what is, to our knowledge, the first map of all brain regions involved in the perception, processing and execution of music. Consequently, placing thalamic multisensory integration at the core of this atlas allowed us to create a preliminary theory to explain the complexity of music-induced brain activation.
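Such a thalamus-centered region map is, in effect, a graph. As a purely illustrative sketch (the region names come from the abstract, but the specific edges are hypothetical placeholders, not the connectivity the review reports), it could be represented and traversed like this:

```python
# Illustrative only: a thalamus-centered map of music-related brain regions
# represented as an adjacency list. Edges are hypothetical placeholders.
music_map = {
    "thalamus": ["auditory_cortex", "amygdala", "hippocampus", "cerebellum"],
    "auditory_cortex": ["frontal_lobe", "parietal_lobe"],
    "amygdala": ["hippocampus"],
    "brainstem": ["thalamus", "cerebellum"],
}

def regions_reachable_from(graph, start):
    """Breadth-first traversal: every region reachable from `start`."""
    seen, frontier = {start}, [start]
    while frontier:
        node = frontier.pop(0)
        for neighbour in graph.get(node, []):
            if neighbour not in seen:
                seen.add(neighbour)
                frontier.append(neighbour)
    return seen

print(regions_reachable_from(music_map, "thalamus"))
```

Placing the thalamus at the hub, as the authors propose, makes every other region reachable through multisensory integration; the traversal above simply makes that structural claim explicit.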


2021 ◽  
pp. 161-165
Author(s):  
Daniel J. Levitin ◽  
Lindsay A. Fleming

Although much is known about the brain mechanisms underlying music perception and cognition, there is much work to be done in understanding aesthetic responses to music: Why does music make us feel the way we do? Why does it make us feel anything? In the article under discussion, the authors suggest that the brain’s own endogenous opioids mediate musical emotion, testing this hypothesis through naltrexone-induced musical anhedonia. They conclude that endogenous opioids are critical to experiencing both positive and negative emotions in music and that music uses the same reward pathways as food, drugs, and sexual pleasure. Their findings add to the growing body of evidence for the evolutionary biological substrates of music.


1984 ◽  
Vol 1 (3) ◽  
pp. 350-356 ◽  
Author(s):  
Juan G. Roederer

A most basic issue in the study of music perception is the question of why humans are motivated to pay attention to, or create, musical messages, and why they respond emotionally to them, when such messages seem to convey no real-time relevant biological information as do speech, animal utterances, and environmental sounds. Expanding on previous work (Roederer, 1979, 1982), three possibly concurrent factors will be examined: (1) the inborn motivation to train language-handling networks of the brain in the processing of simple, organized sound patterns as a prelude to the acquisition of language; (2) the need to extract the information contained in the "musical" components of speech; (3) the value of music as a means of transmitting information on emotional states and its effect in congregating and behaviorally equalizing masses of people. In the discussion, special attention will be paid to the role of motivation and emotion in auditory perception, to the fact that in humans limbic system functions can be activated by internally evoked images in complete detachment from the current state of the environment and organism, and to the existence of two distinct strategies of cerebral information processing: short-term time sequencing, as required in speech communication and thinking, and holistic pattern recognition, as required in music perception.


Complexity ◽  
2021 ◽  
Vol 2021 ◽  
pp. 1-11
Author(s):  
Nan Zhao

In the treatment of children with autism spectrum disorder (ASD) through music perception, both the effect of the intervention and the course of the disorder are reflected mainly in fluctuations of the electroencephalogram (EEG), which makes the EEG clinically informative. Judging the condition from the EEG alone, however, is imprecise, and deep learning offers clear advantages in signal feature extraction and classification. Building on the theory of the Deep Belief Network (DBN), this paper proposes a method that combines an optimized Restricted Boltzmann Machine (RBM) feature-extraction model with a softmax classification algorithm. EEG recordings of children with autism who received different music perception treatments are tracked and analyzed to improve classification accuracy and thereby judge the condition accurately. Through continuous adjustment and optimization of the weight matrices in the model, a stable recognition model is obtained. Simulation results show that the optimization algorithm effectively improves the recognition performance of the DBN, reaching an accuracy of 94% in a particular setting and outperforming traditional classification methods.
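The paper’s optimized DBN itself is not reproduced here; as a minimal sketch of the general pipeline it describes (RBM feature extraction feeding a softmax classifier), scikit-learn’s BernoulliRBM chained to a multinomial logistic regression illustrates the structure. The EEG features, labels, and layer sizes below are placeholders, not values from the study:

```python
# Minimal sketch: RBM feature extraction followed by softmax classification,
# using scikit-learn stand-ins for the paper's optimized RBM/DBN.
import numpy as np
from sklearn.neural_network import BernoulliRBM
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import Pipeline
from sklearn.preprocessing import MinMaxScaler

rng = np.random.default_rng(0)
X = rng.random((200, 64))       # 200 EEG epochs x 64 features (placeholder)
y = rng.integers(0, 2, 200)     # binary condition labels (placeholder)

model = Pipeline([
    ("scale", MinMaxScaler()),  # BernoulliRBM expects inputs in [0, 1]
    ("rbm", BernoulliRBM(n_components=32, learning_rate=0.05, n_iter=20)),
    ("softmax", LogisticRegression(max_iter=1000)),
])
model.fit(X[:150], y[:150])
print("held-out accuracy:", model.score(X[150:], y[150:]))
```

In a full DBN, several such RBM layers would be stacked and pre-trained greedily before fine-tuning the whole network against the classifier’s loss; the single-layer pipeline above only shows the interface between unsupervised feature extraction and the softmax stage.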


Discourse ◽  
2020 ◽  
Vol 6 (1) ◽  
pp. 96-105
Author(s):  
I. V. Pavlov ◽  
V. M. Tsaplev

Introduction. A radical tendency in modern approaches to understanding the mechanisms of the brain is the inclination of some scientists to regard the brain as a receptor capable of capturing thoughts, while leaving the origin of the thoughts themselves unexplained. However, speech expressing thoughts is undoubtedly the result of the work of the brain, so studies of the frequency structure of speech can be a basis for considering the material structure of the brain as a kind of “antenna”. In this approach, the problem of noise immunity, given the undeniable frequency similarity of speech and music, appears in a somewhat different light. This study raises the question of how essential the overall pitch of the musical system is to the perception of music (whether there are musical systems that are harmful or beneficial in their effects on the psyche). This question is also relevant to speech perception.

Methodology and sources. The main sources in which the work of the brain and the essence of consciousness are considered from the positions indicated above were the works of American and British neurophysiologists and psychiatrists (Sam Parnia, Peter Fenwick). These scientists study the phenomena that accompany clinical death and argue that at these moments the brain functions most fully as a receiving “antenna”. Assuming that any antenna must be tuned, we try to identify possible ways to “tune” the brain. To do this, we propose to study the frequency characteristics of speech (in the simplest case, sung vowels in a calm state) for their belonging to a particular musical system, as well as the peculiarities of music perception depending on the musical system (on the pitch of the note “la”). Varying the frequency characteristics of speech within a particular musical system can be considered, in our opinion, the main way to “tune” the brain. The methodology is based on frequency analysis of sound and the basic provisions of elementary music theory.

Results and discussion. The main conclusion drawn by Western psychiatrists is that the brain is not an organ of thought, that consciousness exists independently of it, and that the work of consciousness cannot be explained by the functioning of the brain; this claim requires instrumental verification. If the neural network is an “antenna” that captures thoughts, and its “tuning” at the physical level can be (and is) carried out through sensory systems (including the organ of hearing), then the study of the frequency structure of speech will answer a number of important questions, including questions related to higher brain functions (insight, creativity). Our experiments (Saint Petersburg Electrotechnical University, FIBS, department of EUT) showed that the influence of a “raised” or “lowered” musical-speech system on brain activity is insignificant. The study revealed that the frequency structure of speech is equiprobable. Since the brain lacks a characteristic set of frequencies (elements of an equal-tempered system), there is no need to speak of a harmful (or any other noise-like) effect of “raised” and “lowered” systems due to deviation from an “internal standard”.

Conclusion. In response to the assumptions made by Western experts, we propose a frequency interpretation of the processes occurring in the brain, which may explain in more detail such phenomena as inspiration and discovery, which occur with minimal activity of consciousness. Despite the limited instrumental methods for studying the factors that influence brain activity and largely determine its higher functions (for example, creativity), the results of the brain’s work in relation to music (both its creation and our reaction to it) are quite amenable to analysis, as this study has shown. The “musicality” of speech is vividly represented in its frequency structure and reveals, to one degree or another, features of the brain.
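The core measurement the authors describe, frequency analysis of sung vowels relative to a tuning standard for the note “la”, can be illustrated briefly. Everything below (the synthetic signal, the FFT-peak pitch estimate, the choice of A4 = 440 Hz vs. 432 Hz as the two standards) is an assumed, simplified reading of their method, not their experimental code:

```python
# Minimal sketch: estimate the fundamental of a (synthetic) sung vowel via
# FFT and map it to the nearest equal-tempered note under two tuning
# standards for "la" (A4). All values are illustrative placeholders.
import numpy as np

fs, dur = 44100, 1.0
t = np.arange(int(fs * dur)) / fs
f0 = 261.0                                   # placeholder vowel fundamental
signal = np.sin(2 * np.pi * f0 * t) + 0.3 * np.sin(2 * np.pi * 2 * f0 * t)

spectrum = np.abs(np.fft.rfft(signal * np.hanning(len(signal))))
freqs = np.fft.rfftfreq(len(signal), 1 / fs)
peak = freqs[np.argmax(spectrum)]            # crude fundamental estimate

NAMES = ["C", "C#", "D", "D#", "E", "F", "F#", "G", "G#", "A", "A#", "B"]

def nearest_note(freq_hz, a4=440.0):
    """Nearest equal-tempered note and deviation in cents for a given A4."""
    semitones = 12 * np.log2(freq_hz / a4)
    n = round(semitones)
    cents = 100 * (semitones - n)
    return NAMES[(n + 9) % 12], cents        # A sits 9 semitones above C

for a4 in (440.0, 432.0):
    name, cents = nearest_note(peak, a4)
    print(f"A4={a4:g} Hz: peak {peak:.1f} Hz -> {name} ({cents:+.1f} cents)")
```

For real recordings, the FFT-peak estimate would be replaced by a proper fundamental-frequency estimator, but the mapping from measured frequency to a note within a given musical system works the same way.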


2020 ◽  
Author(s):  
Marthe Tibo ◽  
Simon Geirnaert ◽  
Alexander Bertrand

When listening to music, the brain generates a neural response that follows the amplitude envelope of the musical sound. Previous studies have shown that it is possible to decode this envelope-following response from electroencephalography (EEG) data during music perception. However, a successful decoding and recognition of imagined music, without the physical presentation of a music stimulus, has not been established to date. During music imagination, the human brain internally replays a musical sound, which naturally leads to the hypothesis that a similar envelope-following response might be generated. In this study, we demonstrate that this response is indeed present during music imagination and that it can be decoded from EEG data. Furthermore, we show that the decoded envelope allows for classification of imagined music in a song recognition task containing tracks with lyrics as well as purely instrumental tracks. A two-song classifier achieves a median accuracy of 95%, while a 12-song classifier achieves a median accuracy of 66.7%. The results of this study demonstrate the feasibility of decoding imagined music, thereby setting the stage for new neuroscientific experiments in this area as well as for new types of brain-computer interfaces based on music imagination.
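The standard backward-modelling approach this abstract builds on can be sketched briefly: a ridge-regression decoder maps time-lagged EEG to the stimulus envelope, and song recognition then amounts to correlating the decoded envelope with each candidate song’s envelope. The data, lag count, and ridge penalty below are illustrative placeholders; the paper’s actual decoder design is not specified here:

```python
# Minimal sketch: ridge-regression envelope decoding from time-lagged EEG,
# followed by correlation-based song classification. Data are placeholders.
import numpy as np

rng = np.random.default_rng(1)
n_samples, n_channels, n_lags = 2000, 32, 10

envelope = rng.random(n_samples)                    # placeholder song envelope
eeg = rng.standard_normal((n_samples, n_channels))  # placeholder EEG

def lagged(X, n_lags):
    """Stack time-lagged copies of the EEG channels column-wise."""
    return np.hstack([np.roll(X, lag, axis=0) for lag in range(n_lags)])

X = lagged(eeg, n_lags)
lam = 1e2                                           # ridge penalty (assumed)
w = np.linalg.solve(X.T @ X + lam * np.eye(X.shape[1]), X.T @ envelope)
decoded = X @ w                                     # reconstructed envelope

candidates = {"song_a": envelope, "song_b": rng.random(n_samples)}
scores = {name: np.corrcoef(decoded, env)[0, 1] for name, env in candidates.items()}
print(max(scores, key=scores.get), scores)
```

A real pipeline would train the decoder on held-out perception or imagination trials and correlate on unseen data; the sketch only shows the shape of the decode-then-correlate scheme.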


2021 ◽  
Vol 11 (1) ◽  
Author(s):  
Victor Pando-Naude ◽  
Agata Patyczek ◽  
Leonardo Bonetti ◽  
Peter Vuust

A remarkable feature of the human brain is its ability to integrate information from the environment with internally generated content. The integration of top-down and bottom-up processes during complex multi-modal human activities, however, is yet to be fully understood. Music provides an excellent model for understanding this since music listening leads to the urge to move, and music making entails both playing and listening at the same time (i.e., audio-motor coupling). Here, we conducted activation likelihood estimation (ALE) meta-analyses of 130 neuroimaging studies of music perception, production and imagery, with 2660 foci, 139 experiments, and 2516 participants. We found that music perception and production rely on auditory cortices and sensorimotor cortices, while music imagery recruits distinct parietal regions. This indicates that the brain requires different structures to process similar information which is made available either by an interaction with the environment (i.e., bottom-up) or by internally generated content (i.e., top-down).
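Activation likelihood estimation has a compact core that a sketch can make concrete: each reported focus is blurred with a Gaussian kernel, foci within one experiment are combined into a modelled-activation (MA) map by probabilistic union, and MA maps are combined across experiments the same way. The 1-D grid, kernel width, and coordinates below are illustrative; production ALE (e.g., GingerALE or the NiMARE library) additionally uses sample-size-dependent kernels and permutation-based significance testing:

```python
# Minimal sketch of the ALE core on a 1-D stand-in for voxel space.
import numpy as np

grid = np.arange(0, 100)
sigma = 3.0

def ma_map(foci):
    """Modelled-activation map: union of Gaussian blobs over one experiment's foci."""
    per_focus = [np.exp(-((grid - f) ** 2) / (2 * sigma**2)) for f in foci]
    not_active = np.prod([1 - p for p in per_focus], axis=0)
    return 1 - not_active

# Placeholder peak coordinates for three experiments.
experiments = [[20, 22, 60], [21, 59], [58, 61, 80]]
ale = 1 - np.prod([1 - ma_map(f) for f in experiments], axis=0)
print("peak ALE value at voxel", int(np.argmax(ale)))
```

The probabilistic union means convergent foci across experiments (here around voxels 20 and 60) dominate the ALE map, which is exactly the convergence the meta-analysis quantifies.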

