Between the Abstract and the Concrete: A Constraint-Based Approach to Navigating Instrumental Space

2020 ◽  
Vol 43 (1) ◽  
pp. 8-20
Author(s):  
Luc Döbereiner

This article deals with a way that algorithmic composition systems can be informed by material realities of musical performance. After a general discussion of the relation of abstract algorithms to concrete materiality, the article focuses on the idea of an instrument's space of possibilities. It briefly discusses a number of compositional approaches that seek to derive musical structure from bodily movements and from the physical properties of instruments. The last part describes a new open-source JavaScript library called OboeJS and a Web application based on this library. The system is an experimental exploration of the idea of instrumental space and an attempt to bring together abstract algorithmic processing and the concrete possibilities of a musical instrument. The system implements a flexible constraint-based search algorithm for the generation of oboe fingering sequences. This tool is presented as part of a wider approach to algorithmic composition that aims not to map data output of generative procedures to “sound generators” (e.g., performers, instruments, sound synthesis processes). Instead, I propose to derive structure from the space of possibilities of the instrument itself, which in this case is the oboe.
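The article's constraint-based search over fingering sequences can be illustrated with a minimal sketch, written here in Python rather than OboeJS's JavaScript. The fingering chart, key names, and the at-most-k-keys transition constraint are all hypothetical stand-ins for whatever the actual library encodes; the sketch only shows the general backtracking idea of pruning sequences whose consecutive fingerings require too much finger movement.

```python
# Hypothetical fingering chart: pitch name -> candidate key sets.
# These key sets are illustrative, not real oboe fingerings.
FINGERINGS = {
    "C5": [frozenset({"1", "2", "3"}), frozenset({"1", "2", "3", "half"})],
    "D5": [frozenset({"1", "2"})],
    "E5": [frozenset({"1"}), frozenset({"1", "octave"})],
}

def transition_cost(a, b):
    """Number of keys that must change between two fingerings."""
    return len(a ^ b)  # symmetric difference of the key sets

def search(pitches, max_change=2, prefix=None):
    """Backtracking constraint search: consecutive fingerings may
    differ in at most `max_change` keys."""
    prefix = prefix or []
    if len(prefix) == len(pitches):
        return prefix
    for cand in FINGERINGS[pitches[len(prefix)]]:
        if prefix and transition_cost(prefix[-1], cand) > max_change:
            continue  # constraint violated: prune this branch
        result = search(pitches, max_change, prefix + [cand])
        if result:
            return result
    return None  # no fingering sequence satisfies the constraints

seq = search(["C5", "D5", "E5"])
```

Loosening or tightening `max_change` changes which melodic lines are reachable, which is one way an instrument's concrete possibilities can shape abstract generative output.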

1998 ◽  
Vol 3 (3) ◽  
pp. 199-209 ◽  
Author(s):  
Andy Hunt ◽  
Ross Kirk ◽  
Richard Orton ◽  
Benji Merrison

The challenge of composing both sound and moving image within a coherent computer-mediated framework is addressed, and some of the aesthetic issues highlighted. A conceptual model for an audiovisual delivery system is proposed, and this model acts as a guide for detailed discussion of some illustrative examples of audiovisual composition. Options for types of score generated as graphical output of the system are outlined. The need for extensive algorithmic control of compositional decisions within an interactive framework is proposed. The combination of Tabula Vigilans Audio Interactive (TVAI), an algorithmic composition language for electroacoustic music and realtime image generation, with MIDAS, a multiprocessor audiovisual system platform, is shown to have the features desired for the conceptual outline given earlier, and examples are given of work achieved using these resources. It is shown that ultimately delivery of new work may be efficiently distributed via the World Wide Web, with composers' interactive scripts delivered remotely but rendered locally by means of a user's ‘rendering black box’.


2021 ◽  
Vol 9 (3) ◽  
pp. 429
Author(s):  
I Gede Erwin Winata Pratama ◽  
Luh Arida Ayu Rahning Putri

The terompong is a type of gamelan from Bali Province, commonly used in traditional Balinese ceremonies, especially the Dewa Yadnya and Pitra Yadnya. It is a struck instrument played with a wooden mallet, with a two-octave range formed by 10-12 small metal gong blocks. The gong blocks are arranged in parallel, which makes the instrument difficult to carry and means it must stay in one place to be played. Because it is so large and heavy, people find the terompong difficult to learn. This problem can be addressed by replacing the original terompong with a synthetic one, i.e., by sound synthesis. The method used here is frequency modulation (FM) synthesis. In the resulting synthesis, the difference between the fundamental frequency of the original tone and that of the synthesized tone is close to zero. The synthesized sound closely follows the original, although it cannot fully reproduce the character of metal being struck with a wooden mallet.
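The FM technique the abstract relies on can be sketched in a few lines: a carrier sinusoid whose phase is modulated by a second sinusoid, with a decaying envelope to imitate a struck tone. The specific frequencies, modulation index, and decay rate below are illustrative, not the paper's measured terompong parameters.

```python
import math

def fm_tone(f_carrier, f_mod, index, dur=1.0, sr=44100):
    """Two-operator FM: y(t) = env(t) * sin(2*pi*fc*t + I*sin(2*pi*fm*t)).
    An exponentially decaying envelope roughly imitates a struck tone."""
    samples = []
    for n in range(int(dur * sr)):
        t = n / sr
        env = math.exp(-4.0 * t)  # percussive decay
        phase = (2 * math.pi * f_carrier * t
                 + index * math.sin(2 * math.pi * f_mod * t))
        samples.append(env * math.sin(phase))
    return samples

# Hypothetical terompong-like tone: a non-integer carrier:modulator
# ratio yields inharmonic sidebands, suggesting a metallic timbre.
tone = fm_tone(f_carrier=440.0, f_mod=440.0 * 1.4, index=3.0)
```

Matching the fundamental of a recorded gong block then reduces to choosing `f_carrier`, while the ratio and index shape the metallic partials.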


2019 ◽  
Author(s):  
Gabriel Lopes Rocha ◽  
João Teixeira Araújo ◽  
Flávio Luiz Schiavoni

The structure of a digital musical instrument (DMI) can be split into three parts: interface, mapping, and synthesizer. For DMIs in which sound synthesis is done via software, the interaction interface serves to capture the performer's gestures, which can be mapped through various techniques to different sounds. In this work, we use video game controllers as an interface for musical interaction. Due to their strong presence in popular culture and their ease of access, even people who are not in the habit of playing electronic games have likely interacted with this kind of interface at some point. Thus, gestures such as pressing a sequence of buttons, pressing several buttons simultaneously, or sliding the fingers across the controller can be mapped for musical creation. This work aims to elaborate a strategy in which several gestures captured by the interface can influence one or several parameters of the sound synthesis, a mapping denominated many-to-many. Button combinations used to perform actions common in fighting games, such as Street Fighter, were mapped to the synthesizer to create music. Experiments show that this mapping is capable of influencing the musical expression of a DMI, making it closer to an acoustic instrument.
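One common way to realize a many-to-many mapping is a weight matrix that distributes every gesture feature across every synthesis parameter. The gesture names, parameter names, and weights below are illustrative assumptions, not the authors' actual mapping.

```python
# Hypothetical gesture features and synthesis parameters.
GESTURES = ["button_combo", "simultaneous_press", "finger_slide"]
PARAMS = ["pitch", "filter_cutoff", "amplitude"]

# MAPPING[i][j]: weight of gesture i on parameter j. Because every
# row and column can be non-zero, each gesture influences several
# parameters and each parameter is influenced by several gestures.
MAPPING = [
    [0.8, 0.1, 0.3],
    [0.2, 0.7, 0.5],
    [0.0, 0.4, 0.9],
]

def map_gestures(features):
    """features[i] in [0, 1]; returns one value per synthesis parameter."""
    return {
        param: sum(features[i] * MAPPING[i][j] for i in range(len(GESTURES)))
        for j, param in enumerate(PARAMS)
    }

params = map_gestures([1.0, 0.5, 0.0])
```

A one-to-one mapping is the special case where the matrix is diagonal; filling in off-diagonal weights is what gives the coupled, instrument-like feel the abstract describes.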


Author(s):  
Mohammad Jahangir Alam

The Criminal Record Management System project, in the context of Somalia, is a system used to record the activities of criminals and to report criminal incidents. It is mainly useful for law-enforcement agencies in Somalia: the authorities can preserve records of criminals and search for any criminal using the system. It is an online web application with a database in which police keep the records of criminals who have been arrested. We have used HTML, JavaScript, CSS, PHP, MySQL, and Bootstrap to develop this system, and the binary search algorithm to find a criminal in the database. The project's interface is user-friendly and helpful for the authorities.
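The binary search the abstract mentions can be sketched as follows (in Python for illustration, though the actual system is written in PHP). The record schema and field names are hypothetical; the only requirement is that records are kept sorted by the search key.

```python
def find_record(records, target_id):
    """Binary search over records sorted by 'id': O(log n) lookups
    instead of scanning the whole list."""
    lo, hi = 0, len(records) - 1
    while lo <= hi:
        mid = (lo + hi) // 2
        if records[mid]["id"] == target_id:
            return records[mid]
        if records[mid]["id"] < target_id:
            lo = mid + 1  # target lies in the upper half
        else:
            hi = mid - 1  # target lies in the lower half
    return None  # no record with this id

records = [{"id": 3, "name": "A"}, {"id": 7, "name": "B"}, {"id": 12, "name": "C"}]
hit = find_record(records, 7)
```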


Author(s):  
Gustavo Nishihara ◽  
Tiago Fernandes Tavares

A digital musical instrument differs from an acoustic one in that its gesture controllers are decoupled from the sound synthesis. Because of this, it is possible to design the control interface and the sound synthesis independently and then digitally implement the gesture-sound mapping, which allows diverse possibilities for musical expression. One particular kind of digital musical instrument is the musical glove, which captures hand gestures that are then mapped to sounds. This work consisted of building a musical glove using electronic sensors and digital sound synthesis. During development, the gesture-sound mapping and sound possibilities were explored on an embedded system with limited computational resources.


Kursor ◽  
2017 ◽  
Vol 8 (3) ◽  
pp. 151
Author(s):  
I Made Widiartha

Balinese gamelan is an art form highly favored by both domestic and foreign tourists. One popular type of traditional Balinese gamelan is the rindik, an instrument made of bamboo. Nowadays, foreign cultural influences and modern lifestyles have contributed to a declining interest among Balinese in this conventional kind of gamelan; the younger generation is more inclined toward devices played through electronic or software components. To increase public interest in traditional Balinese gamelan, especially the rindik, a breakthrough is needed to digitize the gamelan rindik and present it in the form of a software application. Today's technology makes it possible to digitize a wide range of instruments, including the rindik, for example by using frequency modulation, a sound synthesis technique developed by researchers in the field. In this research, we synthesized the sound of the rindik in digital form using frequency modulation. The best results were obtained with a carrier-to-modulator frequency ratio of 1:7. The outcome of this research is a digitized gamelan rindik presented as a desktop software application.
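The significance of the reported 1:7 carrier:modulator ratio can be made concrete: FM theory places spectral components at |fc + k·fm| for integer k, so the ratio fixes where the partials fall. The fundamental frequency used below is a hypothetical example, not a measured rindik pitch.

```python
def fm_sidebands(fc, ratio, k_max=3):
    """Frequencies of the first FM sidebands: |fc + k*fm| for
    k = -k_max .. k_max, with fm = fc * ratio."""
    fm = fc * ratio
    freqs = {abs(fc + k * fm) for k in range(-k_max, k_max + 1)}
    return sorted(freqs)

# Hypothetical rindik fundamental of 400 Hz with the 1:7 ratio.
bands = fm_sidebands(400.0, 7)
```

With ratio 1:7 the sidebands land at widely spaced, non-consecutive multiples of the fundamental (400, 2400, 3200 Hz, ...), a sparse spectrum of the kind associated with struck bamboo-and-metal timbres.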


2020 ◽  
Author(s):  
Miriam Akkermann

The use of the computer as a sound generator is omnipresent in current music production and ranges from music notation programs playing back samples via MIDI control to specially programmed sound synthesis programs. The term 'computer' is generally understood as a complete set of hardware and software, but a closer look at this set is definitely worthwhile and poses some systematic challenges. In the early days of real-time digital sound synthesis, the hardware was strongly tied to the resulting sound. Control was achieved by means of a programming language or specially designed software, which offered more or fewer possibilities of intervention depending on its stage of development. But do these sound generators actually fulfill the definition of a musical instrument, and what exactly is that definition? What about so-called software instruments, which, being partly hardware-independent, allow users to play music? How can and should interfaces be classified, given that hardware extensions were developed specifically for musical use but still need (special) software and other technical equipment for sound generation and, above all, output? And who actually decides on the sound and handling of the new instrument, since the integration of computers into musical works usually takes place in close cooperation between composers, musicians, engineers, and programmers? In order to discuss these questions, new methodological approaches and cooperation between the disciplines are both unavoidable and rewarding.


2021 ◽  
Author(s):  
Risto Holopainen

Dynamic systems have found their use in sound synthesis as well as score synthesis. These levels can be integrated in monolithic autonomous systems, a novel approach to algorithmic composition that shares certain aesthetic motivations, such as the search for emergence, with other work on autonomous music systems. We discuss various strategies for achieving variation on multiple time-scales by using slow-fast systems, hybrid dynamic systems, and statistical feedback. The ideas are illustrated with a case study.
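The slow-fast strategy can be illustrated with a minimal sketch: a slowly evolving variable modulates the parameter of a fast map, so the fast output (the sound level) inherits variation on a longer time-scale (the form level). The particular maps and parameter values here are illustrative choices, not those of the article's case study.

```python
def slow_fast(steps, eps=0.01):
    """Couple a slow logistic flow (Euler step of size eps) to a fast
    logistic map whose growth rate it modulates."""
    slow, fast = 0.5, 0.5
    out = []
    for _ in range(steps):
        # slow variable: small step size -> evolves on a long time-scale
        slow = slow + eps * (3.7 * slow * (1 - slow) - slow)
        # fast map: growth rate r drifts with the slow variable,
        # so the fast dynamics change character over time
        r = 3.5 + 0.4 * slow
        fast = r * fast * (1 - fast)
        out.append(fast)
    return out

series = slow_fast(1000)
```

Because `r` stays strictly below 4, the fast iterates remain inside (0, 1); sweeping `r` slowly through different dynamical regimes is what produces variation across time-scales.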


1996 ◽  
Vol 1 (3) ◽  
pp. 195-201 ◽  
Author(s):  
ANDREW MARTIN

Reaction–diffusion systems were first proposed by mathematician and computing forerunner Alan Turing in 1952. Originally intended as an explanation of plant phyllotaxis (the structure and arrangement of leaves in plants), reaction–diffusion now forms the basis of an area in biology which is as important as DNA research in the field of biological morphogenesis (Kauffman 1993). Reaction–diffusion systems were successfully utilised within the fields of computer animation and computer graphics to generate visually naturalistic patterning and textures such as animal furs (Turk 1991). More recently, reaction–diffusion systems have been applied to methods of half-tone printing, fingerprint enhancement, and have been proposed for use in sound synthesis (Sherstinsky 1994). The recent publication The Algorithmic Beauty of Seashells (Meinhardt 1995) uses various reaction–diffusion equations to explain patterned pigmentation markings on seashells. This article details an example of the application of reaction–diffusion systems to algorithmic composition within the field of computer music. The patterned data produced by reaction–diffusion systems is used to create a naturalistic soundscape in the piece cicada.
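A minimal one-dimensional Gray-Scott reaction-diffusion sketch shows the kind of patterned data such systems produce; sampling the `u` field as control data for synthesis is in the spirit of what the article describes, though the equations, parameters, and mapping used for cicada are not reproduced here and these values are illustrative.

```python
def gray_scott(n=64, steps=2000, Du=0.16, Dv=0.08, f=0.035, k=0.065):
    """1-D Gray-Scott model on a ring: u is consumed by the reaction
    u + 2v -> 3v and replenished at rate f; v decays at rate f + k."""
    u = [1.0] * n
    v = [0.0] * n
    for i in range(n // 2 - 3, n // 2 + 3):  # seed a local perturbation
        u[i], v[i] = 0.5, 0.25
    for _ in range(steps):
        # discrete Laplacians with periodic boundary conditions
        lap_u = [u[(i - 1) % n] + u[(i + 1) % n] - 2 * u[i] for i in range(n)]
        lap_v = [v[(i - 1) % n] + v[(i + 1) % n] - 2 * v[i] for i in range(n)]
        for i in range(n):
            uvv = u[i] * v[i] * v[i]
            u[i] += Du * lap_u[i] - uvv + f * (1 - u[i])
            v[i] += Dv * lap_v[i] + uvv - (f + k) * v[i]
    return u, v

u, v = gray_scott()
```

The spatial profile of `u` after the run is the "patterned data"; scanning it over time could, for instance, drive pitch or amplitude contours.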


2000 ◽  
Vol 10 ◽  
pp. 49-54 ◽  
Author(s):  
Artemis Moroni ◽  
Jônatas Manzolli ◽  
Fernando Von Zuben ◽  
Ricardo Gudwin

While recent techniques of digital sound synthesis have put numerous new sounds on the musician's desktop, several artificial-intelligence (AI) techniques have also been applied to algorithmic composition. This article introduces Vox Populi, a system based on evolutionary computation techniques for composing music in real time. In Vox Populi, a population of chords codified according to MIDI protocol evolves through the application of genetic algorithms to maximize a fitness criterion based on physical factors relevant to music. Graphical controls allow the user to manipulate fitness and sound attributes.
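The evolutionary core of such a system can be sketched in a few lines. This is not Vox Populi's actual code: its fitness is based on physical factors relevant to music and is user-controllable, whereas the toy fitness below simply rewards matching a target triad's pitch classes.

```python
import random

random.seed(1)
TARGET = {0, 4, 7}  # pitch classes of a C-major triad (toy fitness goal)

def fitness(chord):
    """Count how many of the chord's pitch classes fall in the target."""
    return len({n % 12 for n in chord} & TARGET)

def crossover(a, b):
    cut = random.randrange(1, len(a))
    return a[:cut] + b[cut:]

def mutate(chord, rate=0.2):
    return [random.randint(48, 72) if random.random() < rate else n
            for n in chord]

def evolve(pop_size=30, generations=60, chord_len=3):
    # population of MIDI-note chords
    pop = [[random.randint(48, 72) for _ in range(chord_len)]
           for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=fitness, reverse=True)
        parents = pop[: pop_size // 2]  # truncation selection keeps the best
        children = [mutate(crossover(random.choice(parents),
                                     random.choice(parents)))
                    for _ in range(pop_size - len(parents))]
        pop = parents + children
    return max(pop, key=fitness)

best = evolve()
```

In a real-time setting like Vox Populi's, each generation's fittest chord would be sent out over MIDI while the user reshapes the fitness landscape through graphical controls.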

