music clip
Recently Published Documents


TOTAL DOCUMENTS: 9 (five years: 1)
H-INDEX: 1 (five years: 0)

2021 · pp. 030573562110033
Author(s): Naomi Ziv, Revital Hollander-Shabtai

During stay-at-home orders in response to COVID-19, individuals had to deal with both health-related fear and anxiety and the difficulties related to social distancing and isolation. The present study, conducted in Israel shortly after the first lockdown was lifted at the end of May 2020, examined individuals' subjective evaluation of differences in their music listening habits and emotional reactions to music compared with normal times. A total of 200 participants completed an online questionnaire focusing on three issues: (1) changes in the amount and situations of music listening, including reference to new music clip types recently created and directly related to COVID-19 and its effects ('Corona Clips'); (2) changes in the intensity of emotions experienced in reaction to music; and (3) changes in general emotions. For most participants, music listening and uses remained similar or increased. Both emotional reactions to music and general negative and socially related emotions were stronger than under normal circumstances. Music use and emotion scales were correlated with socially related emotions. The results support previous findings regarding the use of music for mood regulation and the importance of music as a means of social contact, and provide a demonstration of the subjective evaluation of these functions in real-time coping during a global crisis.


Music and cryptography have been linked to one another since ancient times. The idea of replacing plaintext letters with musical notes and sending the music file to the receiver is not new. But such replacements sometimes result in music clips that are not pleasant to listeners, leading the clip to attract unnecessary extra attention. Most of the work done in this area fails to ensure the generation of a music clip that invariably conforms to any particular form of music; the melody of the music clip is neglected. To address this issue, the current paper proposes a novel approach for sharing a secret message based on concepts of Carnatic Classical Music. The method proposed here converts a message in textual format into a music clip before sending it to the receiver. The receiver can then decrypt the message using knowledge of the range of frequency values associated with each musical note, called a 'swara' in Carnatic Classical Music. Each plaintext character from the English alphabet is replaced by a different combination of swaras. The set of swaras mapped to each plaintext character is chosen so that the final music file produced as the output of encryption always conforms to a melodic form ('Raga') governed by the framework of Carnatic Classical Music. Ten subject-matter experts in the field of Carnatic music gave their opinion on the conformance of these music clips to the specified ragas, and the Mean Opinion Score (MOS) of 25 listeners was tabulated to test and verify the melodic aspect of the clips.
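As a rough illustration of the note-substitution idea, the sketch below assigns each English letter a unique ordered pair of swaras drawn from one hypothetical seven-swara scale, so every encrypted sequence stays within the raga's note set. The swara set and the letter-to-pair assignment are illustrative assumptions, not the mapping or raga grammar used in the paper.

```python
# Hypothetical sketch: map plaintext letters to swara pairs drawn from a
# single raga's allowed notes, so the "ciphertext" stays within the raga.
# The swara list and the assignment below are illustrative assumptions.

RAGA_SWARAS = ["S", "R2", "G3", "M1", "P", "D2", "N3"]  # a sampoorna-style scale

def build_mapping():
    """Assign each letter a-z a unique ordered pair of swaras (7*7 = 49 >= 26)."""
    pairs = [(a, b) for a in RAGA_SWARAS for b in RAGA_SWARAS]
    letters = "abcdefghijklmnopqrstuvwxyz"
    enc = {ch: pairs[i] for i, ch in enumerate(letters)}
    dec = {pair: ch for ch, pair in enc.items()}
    return enc, dec

def encrypt(text, enc):
    # Flatten each letter's swara pair into one swara sequence.
    return [s for ch in text.lower() if ch in enc for s in enc[ch]]

def decrypt(swaras, dec):
    # Re-pair consecutive swaras and look the letters back up.
    it = iter(swaras)
    return "".join(dec[(a, b)] for a, b in zip(it, it))

enc, dec = build_mapping()
clip = encrypt("hello", enc)
assert decrypt(clip, dec) == "hello"
```

In the paper the choice of pairs is further constrained so that the resulting note sequence is melodically valid in the target raga; here only membership in the raga's swara set is enforced.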


2019
Author(s): Sushma Sharma, Arun Sasidharan, Vrinda Marigowda, Mohini Vijay, Sumit Sharma, ...

Several scientific studies using Western classical music, and some using Indian classical music, have reported benefits of listening to musical pieces of a specific genre or 'Raga' in terms of stress reduction and mental well-being. Within the realm of a Raga, the presentation of musical pieces varies in terms of low-level musical components (such as tempo, octave, and timbre), and yet there is hardly any research on their effect. A commonly preferred musical pattern in Carnatic classical music is to have incremental modulations in tempo and octave ('Ragam-Tanam-Pallavi'), and we wanted to examine whether this could have a better anxiolytic effect than music without such modulations.

Accordingly, in the current study, we exposed 21 male undergraduate medical students to a custom-recorded South Indian classical music clip for 1 week (8-minute clip; raga 'Kaapi'; only two instruments, violin and mridangam; listened to thrice daily for 6 days). One set of participants (Varying Music; n = 11) listened to a version that had the incremental variations, whereas the other set (Stable Music; n = 10) listened to a version without such variations. On all 6 days, one of the music listening sessions was conducted in the lab while collecting electroencephalography (EEG; 32 channels) and electrocardiography (ECG; 1 channel) data. Psychological assessment of anxiety (State-Trait Anxiety Inventory, STAI, and Beck Anxiety Inventory, BAI) was conducted before (day 1) and after (day 6) the intervention. The physiological parameters studied included power spectra across the scalp in the delta, alpha, beta, theta, and gamma bands from EEG, and heart rate variability (HRV) from ECG, during the baseline recordings of day 1 and day 6 of the intervention.

Our results show that participants exposed to varying music showed a significant reduction in anxiety, in contrast to the stable music or silence interventions. A global examination of power spectral changes showed a stark contrast between the stable and varying music interventions in comparison to silence: the former showed a greater increase in higher frequencies, whereas the latter showed a prominent decrease especially in lower frequencies, both in bilateral temporo-parieto-occipital regions. A more detailed spectral analysis in the frontal region revealed that both music interventions showed greater left-dominant alpha/beta asymmetry (i.e., greater right-brain activation) and a decrease in overall midline power (i.e., lower default mode network, or DMN, activity) when compared with the silence intervention. Interestingly, stable music resulted in more left asymmetry, whereas varying music showed more midline power reduction. Neither music intervention showed the reduction in HRV parameters that was associated with the silence intervention.

We speculate that the enhancement of the 'mind calming effect' of the Kaapi raga when presented with incremental variations could be brought about by a balanced switching between a heightened mind-wandering state with 'attention to self' during the lower, slower portions, and a reduced mind-wandering state with 'attention to music' during the higher, faster portions of the music. Such a 'dynamic mind wandering' exercise would allow training of one's creative thinking as well as sustained attention during the respective high and low mind-wandering states, both helping to prevent ruminating thoughts. Therefore, musical properties such as tempo and octave have a non-trivial influence on the various neurological and psychological mechanisms underlying stress management. Considering the impact of this finding on the selection of music clips for music therapy, further studies with larger sample sizes are warranted.
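The frontal asymmetry measure referred to above is commonly computed as a log-ratio of right- to left-hemisphere alpha power. A minimal sketch, using illustrative band-power values rather than data from the study:

```python
import math

# Standard frontal alpha asymmetry index: ln(right alpha) - ln(left alpha),
# typically over homologous frontal electrodes such as F3/F4. Because alpha
# power is inversely related to cortical activation, left-dominant alpha
# (a negative index) implies relatively greater right-hemisphere activation.
# The band-power values below are arbitrary illustrative numbers.

def alpha_asymmetry(left_alpha_power, right_alpha_power):
    return math.log(right_alpha_power) - math.log(left_alpha_power)

idx = alpha_asymmetry(left_alpha_power=12.0, right_alpha_power=8.0)
# idx < 0 -> left-dominant alpha -> greater right-brain activation
assert idx < 0
```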


Author(s): Yuan-Shan Lee, Yen-Lin Chiang, Pei-Rung Lin, Chang-Hung Lin, Tzu-Chiang Tai

This work proposes a query-by-singing (QBS) content-based music retrieval (CBMR) system that uses the approximate Karhunen–Loève transform for noise reduction. The proposed QBS-CBMR system uses a music clip as a search key. First, a 51-dimensional feature matrix containing 39 Mel-frequency cepstral coefficient (MFCC) features and 12 chroma features is extracted from an input music clip. Next, adapted symbolic aggregate approximation (adapted SAX) is used to transform each feature dimension into a symbolic sequence. Each symbolic sequence corresponding to a dimension of the MFCCs is then converted into a structure called an advanced fast pattern index (AFPI) tree. The similarity between the query music clip and the songs in the database is evaluated by calculating a partial score for each AFPI tree. The final score is obtained by calculating the weighted sum of all partial scores, where the weight of each partial score is determined by its entropy. Experimental results show that the proposed music retrieval system performs robustly and accurately with the entropy weighting mechanism.
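To make the symbolization step concrete: classic SAX z-normalizes one feature dimension, averages it into fixed-width frames (piecewise aggregate approximation), and maps each frame average to a symbol via Gaussian breakpoints. The paper's adaptation of SAX is not reproduced here, and the 4-symbol alphabet below is an assumption.

```python
import statistics

# Minimal SAX-style symbolization sketch for one feature dimension
# (e.g. one MFCC coefficient over time). The breakpoints split a
# standard normal into four equiprobable regions for alphabet "abcd".

BREAKPOINTS = [-0.67, 0.0, 0.67]  # approximate N(0,1) quartile boundaries
ALPHABET = "abcd"

def sax(series, n_frames):
    # z-normalize the series.
    mu = statistics.fmean(series)
    sd = statistics.pstdev(series) or 1.0
    z = [(x - mu) / sd for x in series]
    # Piecewise aggregate approximation: average fixed-width frames.
    frame = len(z) // n_frames
    symbols = []
    for i in range(n_frames):
        avg = statistics.fmean(z[i * frame:(i + 1) * frame])
        # Count how many breakpoints the frame average exceeds.
        k = sum(avg > b for b in BREAKPOINTS)
        symbols.append(ALPHABET[k])
    return "".join(symbols)

seq = sax([0.1, 0.2, 0.1, 2.5, 2.6, 2.4, -1.0, -1.2, -0.9], 3)
```

In the full system, one such symbolic sequence per MFCC dimension is indexed in an AFPI tree, and the per-tree partial scores are combined with entropy-derived weights.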


2014 · Vol 2014 · pp. 1-7
Author(s): Yu-Hao Chin, Chang-Hong Lin, Ernestasia Siahaan, Jia-Ching Wang

For music emotion detection, this paper presents a music emotion verification system based on hierarchical sparse kernel machines. With the proposed system, we intend to verify whether or not a music clip possesses the happiness emotion. There are two levels in the hierarchical sparse kernel machines. In the first level, a set of acoustical features is extracted, and principal component analysis (PCA) is applied to reduce the dimensionality. The acoustical features are utilized to generate the first-level decision vector, a vector in which each element is the significance value of an emotion. The significance values of eight main emotional classes are utilized in this paper. To calculate the significance value of an emotion, we construct its 2-class SVM with the calm emotion as the global (non-target) side of the SVM. The probability distributions of the adopted acoustical features are calculated, and the probability product kernel is applied in the first-level SVMs to obtain the first-level decision vector. In the second level of the hierarchical system, we construct a single 2-class relevance vector machine (RVM) with happiness as the target side and the other emotions as the background side of the RVM. The first-level decision vector is used as the feature with a conventional radial basis function kernel. The happiness verification threshold is set on the probability value. In the experimental results, the detection error tradeoff (DET) curve shows that the proposed system performs well at verifying whether a music clip conveys the happiness emotion.
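Structurally, the two-level idea can be sketched as follows. The emotion names, scoring functions, weights, and logistic second stage below are placeholders; the paper uses probability-product-kernel SVMs and an RVM, which are not reimplemented here.

```python
import math

# Structural sketch of the hierarchy: level 1 yields one significance
# score per emotion class (emotion vs. calm), forming a decision vector;
# level 2 maps that vector to a happiness-verification probability and
# thresholds it. The emotion list is a hypothetical eight-class set.

EMOTIONS = ["happy", "sad", "angry", "tender", "fear", "surprise", "calm", "neutral"]

def first_level_scores(clip_features, scorers):
    """One 2-class score per emotion -> the first-level decision vector."""
    return [scorers[e](clip_features) for e in EMOTIONS]

def second_level_verify(decision_vector, weights, bias, threshold=0.5):
    """Placeholder stand-in for the RVM: a logistic score over the vector."""
    z = bias + sum(w * x for w, x in zip(weights, decision_vector))
    prob = 1.0 / (1.0 + math.exp(-z))
    return prob >= threshold, prob

# Hypothetical per-emotion scorers for demonstration only:
scorers = {e: (lambda feats, e=e: feats.get(e, 0.0)) for e in EMOTIONS}
vector = first_level_scores({"happy": 0.9, "sad": 0.1}, scorers)
verified, prob = second_level_verify(vector, weights=[3.0, -1.0] + [0.0] * 6, bias=-1.0)
```

The point of the hierarchy is that the second stage never sees raw acoustics; it only sees the eight-dimensional emotion-significance vector produced by the first stage.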


Author(s): Sanghoon Jun, Seungmin Rho, Eenjun Hwang

A typical music clip consists of one or more segments with different moods, and such mood information can be a crucial clue for determining the similarity between music clips. Traditionally, one representative mood has been selected per music clip for retrieval, recommendation, or classification purposes, which often gives unsatisfactory results. In this paper, the authors propose a new music retrieval and recommendation scheme based on the mood sequences of music clips. The authors first divide each music clip into segments through beat structure analysis and then apply the k-medoids clustering algorithm to group all the segments into clusters with similar features. By assigning a unique mood symbol to each cluster, one can transform each music clip into a musical mood sequence. For music retrieval, the authors use the Smith-Waterman (SW) algorithm to measure the similarity between mood sequences. For music recommendation, user preferences are retrieved from a recent music playlist or from user interaction through the interface, and a music recommendation list is generated based on mood sequence similarity. The authors demonstrate that the proposed scheme achieves excellent performance in terms of retrieval accuracy and user satisfaction in music recommendation.
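The retrieval step can be sketched with a textbook Smith-Waterman implementation over mood-symbol strings. The match/mismatch/gap scores and the mood alphabet below are illustrative assumptions, not the paper's parameters.

```python
# Smith-Waterman local alignment over mood symbol sequences: the best
# local-alignment score serves as the similarity between two clips'
# mood sequences, so clips sharing a similar mood subsequence score high.

def smith_waterman(a, b, match=2, mismatch=-1, gap=-1):
    rows, cols = len(a) + 1, len(b) + 1
    H = [[0] * cols for _ in range(rows)]
    best = 0
    for i in range(1, rows):
        for j in range(1, cols):
            diag = H[i - 1][j - 1] + (match if a[i - 1] == b[j - 1] else mismatch)
            # Local alignment: scores never drop below zero.
            H[i][j] = max(0, diag, H[i - 1][j] + gap, H[i][j - 1] + gap)
            best = max(best, H[i][j])
    return best

# Hypothetical mood symbols, e.g. H = happy, S = sad, C = calm:
assert smith_waterman("HHSCC", "HSCC") > smith_waterman("HHSCC", "SSSSS")
```

Because the alignment is local, a short shared mood passage (say, a sad bridge between two calm sections) contributes to the score even when the clips differ elsewhere.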


2011 · Vol 23 (3) · pp. 451-457
Author(s): Eun-Sook Jee, Chong Hui Kim, Hisato Kobayashi, ...

Sound is an important medium for human-robot interaction. A single sound or music clip is not enough to express delicate emotions; in particular, it is almost impossible to represent emotional changes. This paper attempts to express different emotional levels of sounds and their transitions. Happiness, sadness, anger, and surprise are considered as a basic set of robot emotions. Using previously proposed nominal sound clips for the four emotions, this paper proposes a method to reproduce different emotional levels of sounds by modulating the musical parameters 'tempo,' 'pitch,' and 'volume.' Basic experiments on whether human subjects can discern three different emotional intensity levels of the four emotions were carried out. Comparison of the recognition rates shows that the proposed modulation works fairly well and at least demonstrates the possibility of letting humans identify three intensity levels of emotions. Since the modulation can be done by dynamically changing the three musical parameters of a sound clip, the method can be extended to dynamic changing of emotional sounds.
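A minimal sketch of the parameter-modulation idea, assuming hypothetical scaling factors; the paper's actual per-emotion modulation values are not reproduced here.

```python
import math

# Derive three intensity levels of an emotional sound clip by jointly
# modulating tempo, pitch, and volume relative to the nominal clip.
# Directions and magnitudes below are assumptions for a "happiness"
# clip: higher intensity -> faster, higher-pitched, louder.

NOMINAL = {"tempo_bpm": 120.0, "pitch_semitones": 0, "volume_db": 0.0}

LEVELS = {
    "low":    {"tempo": 0.9, "pitch": -2, "volume": -3.0},
    "medium": {"tempo": 1.0, "pitch":  0, "volume":  0.0},
    "high":   {"tempo": 1.1, "pitch": +2, "volume": +3.0},
}

def modulate(nominal, level):
    p = LEVELS[level]
    return {
        "tempo_bpm": nominal["tempo_bpm"] * p["tempo"],       # scale tempo
        "pitch_semitones": nominal["pitch_semitones"] + p["pitch"],  # shift pitch
        "volume_db": nominal["volume_db"] + p["volume"],      # offset gain
    }

high = modulate(NOMINAL, "high")
```

Transitions between emotional levels then amount to interpolating these three parameters over time rather than swapping whole clips.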



