A storytelling sound file CALL task used in a tertiary CFL classroom

2016 ◽  
Vol 27 (2) ◽  
pp. 542-554
Author(s):  
Wenying Jiang
Author(s):  
V. J Manzo

In this chapter, we will look at some of the ways that you can play back and record sound files. As you know, Max lets you design the way you control the variables in your patch. We will apply these design concepts to the ways we control the playback of recorded sound. We will also look at some ways to track the pitch of analog audio and convert it into MIDI numbers. By the end of this chapter, you will have written a program that allows you to play back sound files using a computer keyboard as a control interface, as well as a program that tracks the pitch you’re singing into a microphone and automatically harmonizes with it in real time.

We will create a simple patch that plays back some prerecorded files I have prepared. Please locate the 8 “.aif” audio files in the Chapter 13 Examples folder.

1. Copy these 8 audio files to a new folder somewhere on your computer
2. In Max, create a new patch
3. Click File>Save As and save the patch as playing_sounds.maxpat in the same folder where you put these 8 audio files. There should be 9 files total in the folder (8 audio and 1 Max patch)
4. Close the patch playing_sounds.maxpat
5. Re-open playing_sounds.maxpat (the audio files will now be in the search path of the Max patch)

We can play back the prerecorded audio files we just copied using an object called sfplay~. The sfplay~ object takes an argument specifying how many channels of audio you would like the object to handle. For instance, if you are loading a stereo (two-channel) file, you can specify the argument 2. Loading a sound file is easy: simply send the sfplay~ object the message open. Playing back the sound is just as easy: send it a 1 or a 0 from a toggle. Let’s build a patch that plays back these files.
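The pitch-to-MIDI conversion mentioned above relies on the standard equal-tempered mapping between frequency and MIDI note number (the conversion Max performs with its ftom object). A minimal Python sketch of that mapping, independent of any Max patch:

```python
import math

def freq_to_midi(freq_hz: float) -> int:
    """Convert a detected frequency in Hz to the nearest MIDI note number.

    Uses the equal-tempered reference A4 = 440 Hz = MIDI note 69:
    midi = 69 + 12 * log2(f / 440).
    """
    return round(69 + 12 * math.log2(freq_hz / 440.0))

# A 440 Hz tone maps to MIDI 69 (A4); 261.63 Hz maps to 60 (middle C).
```

A pitch tracker feeding this function would let you quantize a sung frequency to the nearest note before harmonizing it.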


2013 ◽  
Vol 18 (1) ◽  
pp. 60-70 ◽  
Author(s):  
Elizabeth Hoffman

Adorno's theory of musical reproduction is unfinished, inconsistent and attuned only to score-based acoustic music – but it has relevance for electroacoustic performance as well. His theory prompts contemplation about what ‘good’ interpretation, and interpretation itself, means for fixed electroacoustic music. A digital sound file is frequently, if not typically, viewed as more rigid and precise than a score. This article uses Adorno's theory to compare ontologies of score and digital file realizations respectively, thus questioning the above assumption. Do electroacoustic works truly exist apart from their performed features, or is a given work only its performances? Different answers imply different work concepts and interpretive strategies. Toward the essay's goals, we examine three features often viewed as nonontological to an electroacoustic work, namely performed spatialisation, equalisation, and amplitude balance. We consider the impacts of these features when they are manipulated in real time, or performance to performance. As Adorno asks how choices of timing or dynamics dictate a notated work's aesthetic ‘clarity’, this paper asks how performed choices contribute to an electroacoustic work's clarity, and to the unique interpretive potential of electroacoustic music. Tape music and acousmatic music, with its diffusion tradition, are central to this paper's thesis; but multi-channel works are circumscribed by it as well.


Author(s):  
V. J Manzo

In this chapter, we will examine some ways to interact with audio processing objects in formal compositions. Examples of traditional instrumentalists interacting with Max patches in concert performances are common. In the interest of copyright availability, we will examine a composition of mine for E♭ clarinet and computer (a Max patch). The remaining example patches in this chapter will deal with audio processing as it relates to hearing and some aspects of perception.

In this composition, discourse, the clarinetist plays from a score while the Max patch “listens” to the performer (using a microphone) and processes the clarinet sound in predetermined ways. The Max patch follows a time-based “score” of its own for performing the effects on the clarinet sound and, thus, processes the audio signal the same way each time the piece is performed. Our purpose in exploring this patch has less to do with the effects that are used, or any aesthetic you get from the piece, than with the implementation of a usable timeline that both the clarinetist and the computer can perform to.

1. Open the file discourse.maxpat from within the folder discourse located in the Chapter 20 Examples folder

When the space bar is pressed, a clocker within the patch begins triggering the events in the Max patch; this is like the score for the computer. These events assume that, since the user has pressed the space bar, the patch can expect to hear the notes of the score played back at tempo to coincide with the different audio processing taking place within the patch. Unless you happen to have your E♭ clarinet handy (a PDF of the score is also available in the discourse folder), we will use a demo sound file of a synthesized clarinet playing this piece in lieu of actually performing it. This will give us a sense of what the piece would sound like if we were to perform the clarinet part live.
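The clocker-driven “score for the computer” amounts to comparing elapsed time against a list of timed events and firing each one once. A hedged Python sketch of that idea (the event names and times here are made up for illustration, not taken from the discourse patch):

```python
# Hypothetical timeline: each entry pairs an elapsed time in milliseconds
# with a named processing event, mirroring how a clocker's elapsed-time
# output can be compared against an internal "score" of cues.
SCORE = [
    (0,     "start_listening"),
    (4000,  "enable_delay"),
    (12000, "enable_pitch_shift"),
    (20000, "fade_out"),
]

def events_due(elapsed_ms, fired):
    """Return score events whose trigger time has passed and that have
    not yet fired; mark them as fired so each triggers exactly once."""
    due = [name for t, name in SCORE if t <= elapsed_ms and name not in fired]
    fired.update(due)
    return due

# e.g. polling at 5000 ms fires the first two cues; polling again at
# 12500 ms fires only the newly due cue.
```

Because the timeline is fixed, the processing unfolds identically in every performance, which is exactly the property the chapter highlights.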


In a country like India, a wide variety of fruits is available. Fruits play an important role in human health, and health naturally improves if the quality of the fruit is good. Grading watermelon quality helps both consumers and vendors. The proposed work classifies watermelons based on their sound. The sound file dataset was created manually by tapping each watermelon and recording the sound. The dataset consists of different types of watermelons; watermelons of different sizes, colours, and shapes were used. Features are extracted from the sound files, and Naïve Bayes, SMO, and Random Tree classifiers are used for classification. The proposed work achieved an average accuracy of 78.8%.
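The pipeline described above (extract feature vectors from tap recordings, then train a classifier) can be illustrated with a deliberately simplified stand-in. The abstract uses Weka’s Naïve Bayes, SMO, and Random Tree; the sketch below swaps in a nearest-centroid classifier over invented 2-D feature vectors, purely to show the classify-by-features step:

```python
import math

# Made-up training data: each class maps to feature vectors extracted from
# tap sounds (e.g. a normalized spectral measure and a decay measure).
# These numbers are illustrative assumptions, not the paper's dataset.
TRAIN = {
    "ripe":   [(0.30, 0.80), (0.35, 0.75), (0.28, 0.82)],
    "unripe": [(0.70, 0.30), (0.65, 0.35), (0.72, 0.28)],
}

def centroid(points):
    """Component-wise mean of a list of equal-length feature vectors."""
    n = len(points)
    return tuple(sum(p[i] for p in points) / n for i in range(len(points[0])))

def classify(sample):
    """Assign a sample to the class whose centroid is nearest (Euclidean)."""
    centroids = {label: centroid(pts) for label, pts in TRAIN.items()}
    return min(centroids, key=lambda label: math.dist(sample, centroids[label]))
```

In the actual study, the feature extraction and the choice of classifier (and hence the reported 78.8% accuracy) are what do the real work; this sketch only mirrors the overall shape of the pipeline.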

