Structuring Music through Markup Language
Published by IGI Global (ISBN 9781466624979, 9781466624986). Total documents: 10.

Author(s):  
Jyri Pakarinen

This chapter discusses the central physical phenomena involved in music. The aim is to explain the relevant issues at an understandable level, without delving unnecessarily deep into the underlying mathematics. The chapter is divided into two main sections: musical sound sources and sound transmission to the observer. The first section starts from the definition of sound as wave motion and then guides the reader through the vibration of strings, bars, membranes, plates, and air columns, that is, the oscillating sources that create the sound of most musical instruments. Resonating structures, such as instrument bodies, are also reviewed, and the section ends with a discussion of potential physical markup parameters for musical sound sources. The second section starts with an introduction to the basics of room acoustics and then explains the acoustic effect that a human observer has on the sound field. The end of the second section discusses which sound transmission parameters could be used in a general music markup language. A concluding section closes the chapter.
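The string-vibration discussion can be anchored by the textbook result for an ideal flexible string: the nth mode frequency is f_n = (n / 2L) · √(T/μ), where L is the string length, T the tension, and μ the mass per unit length. A minimal sketch (the numeric values below are illustrative, not taken from the chapter):

```python
import math

def string_frequency(length_m, tension_n, mass_per_length_kg_m, mode=1):
    """Frequency (Hz) of the nth mode of an ideal flexible string.

    Standard result: f_n = (n / 2L) * sqrt(T / mu). Real strings
    deviate slightly, e.g. stiffness raises the upper partials.
    """
    return mode / (2.0 * length_m) * math.sqrt(tension_n / mass_per_length_kg_m)

# Illustrative values: a 0.65 m string under 70 N tension
# with a linear density of 1.4 g/m.
f1 = string_frequency(0.65, 70.0, 0.0014)       # about 172 Hz
f2 = string_frequency(0.65, 70.0, 0.0014, 2)    # second mode, exactly 2 * f1
```

Note that the ideal-string modes are exact integer multiples of the fundamental, which is why strings sound harmonic; bars, membranes, and plates have inharmonic mode series.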


Author(s):  
Alexander Refsum Jensenius

The chapter starts by discussing the importance of body movement in both music performance and perception, and argues that future research in the field needs solutions for streaming and storing music-related movement data alongside other types of musical information. This is followed by a suggestion for a multilayered approach to structuring movement data, where each layer represents a separate and consistent subset of information. Finally, two prototype implementations are presented: a setup for storing GDIF data in SDIF files, and an example of how GDIF-based OSC streams can allow for more flexible and meaningful mapping from controller to sound engine.
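The layered idea can be sketched as OSC-style messages whose address prefixes name the layer, so a sound engine subscribes only to the abstraction level it needs. The address namespace below is invented for illustration and is not the actual GDIF specification:

```python
# Hypothetical GDIF-style OSC addressing: one address prefix per layer
# (raw sensor data vs. a higher-level "cooked" descriptor), so mapping
# code can listen at whichever abstraction level suits it.

def make_message(device, layer, descriptor, *values):
    """Build an OSC-style (address, arguments) pair."""
    address = f"/gdif/{device}/{layer}/{descriptor}"
    return address, list(values)

# Raw accelerometer samples vs. a derived movement descriptor.
raw = make_message("accelerometer1", "raw", "xyz", 0.02, -0.91, 0.40)
cooked = make_message("accelerometer1", "cooked", "quantity-of-motion", 0.37)
```

Keeping each layer self-contained means a controller-to-sound mapping can be rewired (e.g. from raw axes to quantity of motion) without touching the sensor code.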


Author(s):  
Cory McKay ◽  
Ichiro Fujinaga

This chapter includes a critical review of existing file formats that have been used in music information retrieval (MIR) research. This is followed by a set of design priorities proposed for developing new formats and improving existing ones. The details of the ACE XML specification are then described in this context. Finally, research priorities for the future are discussed, as well as possible uses for ACE XML outside the specific domain of MIR.


Author(s):  
Jacques Steyn

Information architecture is about information structures and their relations within an information space, in this chapter the music information space. To determine what those structures and relationships are, an ontological investigation is undertaken. Ontology has a specific meaning in Information Systems, and is here treated as a methodology that results in a specific information architecture. Ontologies can apply to many levels of investigation and description, and to any contemporary music discipline. Music is here demarcated to a core consisting of pitch-frequency and tempo-time relationships, mapped onto a music space. The roles of PitchSets (“octaves”), scales, and tuning systems within this space are explained and proposed as the core components of the object “music.” The most basic and generic markup language for music should thus start from this core; all other ontologies and markup are secondary to this object.
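The pitch-frequency mapping at the heart of such a core can be made concrete with the standard 12-tone equal temperament formula, the most common modern tuning system: each octave (a doubling of frequency) is divided into 12 equal ratio steps.

```python
def equal_tempered_frequency(midi_note, reference_hz=440.0):
    """Frequency of a MIDI note number in 12-tone equal temperament.

    Each semitone is a ratio of 2**(1/12); MIDI note 69 is A4,
    conventionally tuned to 440 Hz.
    """
    return reference_hz * 2.0 ** ((midi_note - 69) / 12.0)

a4 = equal_tempered_frequency(69)   # 440.0 Hz
a5 = equal_tempered_frequency(81)   # one octave up: 880.0 Hz
c4 = equal_tempered_frequency(60)   # middle C, about 261.63 Hz
```

Other tuning systems (just intonation, meantone, non-Western systems) map the same pitch space to different frequency ratios, which is exactly why the chapter treats tuning as a component of the core rather than a fixed given.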


Author(s):  
Jacques Steyn

The properties of a wide range of musical instruments are considered, from ancient acoustic instruments to modern ones, including the instruments of many music cultures. Based on a logical analysis and synthesis of previous research, rather than on acoustic lab results, a high-level, generic, and universal model of the information architecture of acoustic music instruments is constructed.


Author(s):  
Michael D. Good

MusicXML is a universal interchange and distribution format for common Western music notation. Its design and development began in 2000, with the goal of becoming the MP3 equivalent for digital sheet music. MusicXML was developed by Recordare and can represent music from the 17th century onwards, including guitar tablature and other notations used to notate or transcribe contemporary popular music. It is supported by over 160 applications. This chapter describes the development and history of MusicXML.
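To give a flavor of the format, the sketch below builds a stripped-down MusicXML-style fragment (one part, one measure, one quarter-note middle C). It is reduced for illustration; a valid MusicXML file also needs a DOCTYPE, a part-list, and measure attributes such as divisions:

```python
import xml.etree.ElementTree as ET

# Minimal MusicXML-style fragment: score-partwise > part > measure > note.
score = ET.Element("score-partwise", version="3.1")
part = ET.SubElement(score, "part", id="P1")
measure = ET.SubElement(part, "measure", number="1")

note = ET.SubElement(measure, "note")
pitch = ET.SubElement(note, "pitch")
ET.SubElement(pitch, "step").text = "C"      # note letter name
ET.SubElement(pitch, "octave").text = "4"    # middle C
ET.SubElement(note, "duration").text = "1"   # in divisions of a quarter note
ET.SubElement(note, "type").text = "quarter" # graphical note type

xml_text = ET.tostring(score, encoding="unicode")
```

Note that MusicXML separates sounding duration (`duration`) from graphical appearance (`type`), which is part of what lets it serve both playback and engraving applications.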


Author(s):  
Sergio Canazza ◽  
Giovanni De Poli ◽  
Antonio Rodà ◽  
Alvise Vidolin

During the last decade, in both systematic and cultural musicology, a great deal of research effort (using methods borrowed from music informatics, psychology, and the neurosciences) has been spent connecting two worlds that seemed very distant or even antithetical: machines and emotions. Within the Sound and Music Computing framework of human-computer interaction in particular, interest has grown in finding ways to let machines communicate expressive, emotional content through a nonverbal channel. This interest is justified by the goal of enhanced interaction between humans and machines that exploits communication channels typical of human-human communication, which can therefore be easier and less frustrating for users, particularly non-technically skilled ones (e.g. musicians, teachers, students, and the general public). While research on emotional communication has found its way into more traditional fields of computer science such as Artificial Intelligence, novel fields are also focusing on these issues: examples include studies on Affective Computing in the United States, KANSEI Information Processing in Japan, and Expressive Information Processing in Europe. This chapter presents the state of the art in the computational study of music performance. In addition, analysis methods and synthesis models of expressive content in music performance, carried out by the authors, are presented. Finally, an XML-based encoding system for music performance expressiveness is detailed.


Author(s):  
Gerard Roma ◽  
Perfecto Herrera

In this chapter, the authors discuss an approach to music representation that supports collaborative composition given current practices based on digital audio. A music work is represented as a directed graph that encodes sequences and layers of sound samples. The authors discuss graph grammars as a general framework for this representation. From a grammar perspective, they analyze the use of XML for storing production rules, music structures, and references to audio files. The authors describe an example implementation of this approach.
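The directed-graph idea can be sketched as follows: nodes reference audio files, an edge encodes "play after" (sequence), and multiple edges out of one node encode layers that start together. The element names below are invented for illustration and are not the authors' actual schema:

```python
import xml.etree.ElementTree as ET

# A piece as a directed graph of sound samples (hypothetical schema).
graph = {
    "intro": {"file": "intro.wav", "next": ["verse"]},
    "verse": {"file": "verse.wav", "next": ["drums", "bass"]},  # two layers
    "drums": {"file": "drums.wav", "next": []},
    "bass":  {"file": "bass.wav",  "next": []},
}

root = ET.Element("piece")
for name, node in graph.items():
    sample = ET.SubElement(root, "sample", id=name, src=node["file"])
    for target in node["next"]:
        ET.SubElement(sample, "edge", to=target)

xml_text = ET.tostring(root, encoding="unicode")
```

Separating structure (the graph) from material (the referenced audio files) is what makes the representation friendly to collaboration: contributors can rewire edges or swap samples independently.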


Author(s):  
Wijnand Schepens ◽  
Marc Leman
Keyword(s):  

It is hoped that developers of new XML encodings in different domains can use Chronicle as a powerful base layer. The software can also be useful for researchers who need an easy and flexible way to store XML data.


Author(s):  
Antoine Allombert ◽  
Myriam Desainte-Catherine

While representing musical processes or scores through markup languages is now well established, the authors argue that there is still a need for a format that encodes musical material with which a musician can interact. The lack of such a format is especially acute for contemporary music that involves computational processes. They propose a formal representation for composing musical scores in which some temporal properties can be interactively modified during execution. This allows the creation of scores that can be interpreted by a performer, in the same way a musician can interpret a score of instrumental music. The formal representation comes with an XML format for encoding the scores and for interfacing the representation with other types of markup-language musical description.
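One way to picture such a format: events carry temporal constraints rather than fixed times, so the performer (or an external trigger) decides the exact moment within stated bounds at run time. The element and attribute names below are invented for illustration, not taken from the authors' actual format:

```python
import xml.etree.ElementTree as ET

# Hypothetical interactive score: two events linked by a flexible
# temporal constraint instead of absolute timestamps.
score = ET.Element("interactive-score")
ET.SubElement(score, "event", id="A")
ET.SubElement(score, "event", id="B")

# "B starts between 2 and 5 seconds after A starts"; the exact
# moment is chosen interactively during execution.
ET.SubElement(score, "constraint", {
    "from": "A", "to": "B", "relation": "after",
    "min": "2s", "max": "5s",
})

xml_text = ET.tostring(score, encoding="unicode")
```

Encoding the bounds rather than the choice is what preserves room for interpretation: the same score supports many valid performances, just as a notated instrumental score does.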

