Leonardo ◽  
2009 ◽  
Vol 42 (5) ◽  
pp. 439-442 ◽  
Author(s):  
Eduardo R. Miranda ◽  
John Matthias

Music neurotechnology is a new research area emerging at the crossroads of neurobiology, engineering sciences and music. Examples of ongoing research into this new area include the development of brain-computer interfaces to control music systems and systems for automatic classification of sounds informed by the neurobiology of the human auditory apparatus. The authors introduce neurogranular sampling, a new sound synthesis technique based on spiking neuronal networks (SNN). They have implemented a neurogranular sampler using the SNN model developed by Izhikevich, which reproduces the spiking and bursting behavior of known types of cortical neurons. The neurogranular sampler works by taking short segments (or sound grains) from sound files and triggering them when any of the neurons fire.
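The Izhikevich model the authors build on is compact enough to sketch. The following is a minimal, hypothetical illustration (not the authors' implementation): a single regular-spiking Izhikevich neuron is integrated with forward Euler, and each spike triggers a short grain copied from a source sound buffer, in the spirit of neurogranular sampling. All parameter values and buffer sizes here are assumptions.

```python
import numpy as np

def izhikevich_spikes(I=10.0, T=1000.0, dt=0.5,
                      a=0.02, b=0.2, c=-65.0, d=8.0):
    """Simulate one Izhikevich neuron (regular-spiking parameters)
    with forward-Euler steps; return spike times in milliseconds."""
    v, u = c, b * c
    spikes = []
    for i in range(int(T / dt)):
        v += dt * (0.04 * v * v + 5 * v + 140 - u + I)
        u += dt * a * (b * v - u)
        if v >= 30.0:              # spike: record time, then reset
            spikes.append(i * dt)
            v, u = c, u + d
    return spikes

def neurogranular_trigger(spike_times_ms, sound, sr=44100, grain_ms=30):
    """Each spike triggers a short grain copied from a random position
    in the source sound into an output buffer at the spike time."""
    grain_len = int(sr * grain_ms / 1000)
    out = np.zeros(int(sr * max(spike_times_ms) / 1000) + grain_len)
    for t in spike_times_ms:
        start_out = int(sr * t / 1000)
        start_src = np.random.randint(0, len(sound) - grain_len)
        out[start_out:start_out + grain_len] += sound[start_src:start_src + grain_len]
    return out

spikes = izhikevich_spikes()
sound = np.random.default_rng(1).standard_normal(44100)  # stand-in for a loaded sound file
out = neurogranular_trigger(spikes, sound)
print(f"{len(spikes)} spikes over 1 s of simulated time")
```

In a full neurogranular sampler, a network of such neurons would run in real time and each neuron's firing would gate grain playback; this sketch only shows the spike-to-grain mapping for one neuron.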


Author(s):  
Amer T Saeed ◽  
Zaid Raad Saber ◽  
Ahmed M. Sana ◽  
Musa A. Hameed

Unwanted noise signals in sound files are a major challenge for many users, and it is impossible to reduce or remove them without first identifying their types and ranges. To address this problem in digital and analogue communication, this research proposes an adaptive selection method and noise-removal algorithm. The algorithm works by identifying the types of undesirable signals and their frequency and time ranges, then applying a digital signal processing system in which several types of digital filters are designed according to the types and number of unwanted signals. Four digital filters are used in this research to remove noise signals from the sound file, with the proposed algorithm implemented in MATLAB. Results show that the proposed algorithm succeeded and that all noise signals were removed without any negative consequence in the output sound signal.
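The pipeline the abstract describes (identify the noise frequency, design a filter for it, apply the filter) can be illustrated with a minimal sketch. This example is hypothetical: it uses SciPy's `iirnotch` rather than the authors' MATLAB filters, and removes a single identified 1500 Hz tone from a synthetic signal.

```python
import numpy as np
from scipy.signal import iirnotch, filtfilt

sr = 8000
t = np.arange(0, 1.0, 1 / sr)
wanted = np.sin(2 * np.pi * 300 * t)        # stand-in for the desired audio
noise = 0.8 * np.sin(2 * np.pi * 1500 * t)  # identified interfering tone
x = wanted + noise

# Step 1: the noise frequency has been identified (1500 Hz).
# Step 2: design a narrow notch filter centred on that frequency.
b, a = iirnotch(w0=1500, Q=30, fs=sr)

# Step 3: apply it (zero-phase, so the wanted audio is not delayed).
y = filtfilt(b, a, x)
# The 1500 Hz component is attenuated; the 300 Hz component passes.
```

Removing several noise signals, as in the paper, would repeat steps 1-3 with one filter per identified component (notch, band-stop, low-pass, or high-pass as appropriate).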


Author(s):  
Diauddin Ismail

In everyday life we often hear the holy verses of the Al-Qur'an being recited in mosques before prayer time, and in such moments we may want to know which Surah and which verse is being recited. This interest stems from Muslims' love of the Al-Qur'an, but not all Muslims have memorized its entire contents. Motivated by this limitation and by the strong curiosity about Surah and verse information, the writer developed a computer system that can recognize a recited Surah and verse and report this information. Advances in computer technology do more than make human activities easier; one form of human intelligence that can be implanted in a computer is the ability to recognize verses of Surah Al-Falaq of the Al-Qur'an from voice. The AdaBoost method is one method for voice classification, and using this method the success rate in recognizing verse numbers reaches 72%. The system can only recognize the verse numbers of Surah Al-Falaq of the Al-Qur'an, works on recorded sound files with the .wav file extension, and was built using the Delphi programming language.


Author(s):  
V. J Manzo

In this chapter, we will look at some of the ways that you can play back and record sound files. As you know, Max lets you design the way you control the variables in your patch. We will apply these design concepts to the ways we control the playback of recorded sound. We will also look at some ways to track the pitch of analog audio and convert it into MIDI numbers. By the end of this chapter, you will have written a program that allows you to play back sound files using a computer keyboard as a control interface, as well as a program that tracks the pitch you’re singing into a microphone and automatically harmonizes with it in real time. We will create a simple patch that plays back some prerecorded files I have prepared. Please locate the 8 “.aif” audio files in the Chapter 13 Examples folder.

1. Copy these 8 audio files to a new folder somewhere on your computer.
2. In Max, create a new patch.
3. Click File>Save As and save the patch as playing_sounds.maxpat in the same folder where you put these 8 audio files. There should be 9 files total in the folder (8 audio and 1 Max patch).
4. Close the patch playing_sounds.maxpat.
5. Re-open playing_sounds.maxpat (the audio files will now be in the search path of the Max patch).

We can play back the prerecorded audio files we just copied using an object called sfplay~. The sfplay~ object takes an argument to specify how many channels of audio you would like the object to handle. For instance, if you are loading a stereo (two-channel) file, you can specify the argument 2. Loading a sound file is easy: simply send the sfplay~ object the message open. Playing back the sound is just as easy: send it a 1 or a 0 from a toggle. Let’s build a patch that plays back these files.
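The pitch-to-MIDI conversion mentioned above follows the standard equal-temperament formula, midi = 69 + 12·log2(f/440), with A4 = 440 Hz mapped to MIDI note 69. A small Python sketch of the same arithmetic (Max performs this inside the patch, so this is only an illustration of the math):

```python
import math

def hz_to_midi(freq_hz):
    """Convert a frequency in Hz to the nearest MIDI note number,
    using the A4 = 440 Hz = MIDI 69 reference."""
    return round(69 + 12 * math.log2(freq_hz / 440.0))

print(hz_to_midi(440.0))   # 69 (A4)
print(hz_to_midi(261.63))  # 60 (middle C)
```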


ReCALL ◽  
2009 ◽  
Vol 21 (2) ◽  
pp. 227-240 ◽  
Author(s):  
Barbara Geraghty ◽  
Ann Marcus Quinn

As Japanese uses three writing systems (hiragana, katakana, and the ideograms known as kanji), and as materials in the target language include all three, it is a major challenge to learn to read and write quickly. This paper focuses on interactive multi-media methods of teaching Japanese reading which foster learner autonomy. As little has been published on interactive multi-media methods of teaching Japanese reading, it seems likely that traditional resources are generally used for this activity. The courseware includes sound files demonstrating the pronunciation of each kana as well as simultaneous animation showing how to write each character. This paper investigates whether interactive courseware, used independently of classroom interaction, results in measurably greater recognition of the hiragana syllabary than more traditional methods. After briefly situating the study in the context of research on the teaching of Japanese reading and learner autonomy, the paper presents the courseware as well as an empirical study comparing the results of its use by learners at beginners’ level: one group using the courseware, and the other using paper-based materials. This is followed by an account of learner diaries written by zero-beginner level learners of Japanese using the courseware. The study indicates that acquisition of a recognition-level knowledge of hiragana is approximately twice as fast using the courseware as using paper-based materials. Learners also learned to write the hiragana without explicit instruction.


1995 ◽  
Vol 4 (2) ◽  
pp. 193-202
Author(s):  
Peter Jeffery

Anyone who pays attention to the popular news media will have read or heard a lot lately about the rapidly expanding international network of computer networks known as the Internet. Though the Internet was originally developed by the universities to support international research cooperation and the exchange of scholarly information, the hardware and software have become so cheap and easily available that many commercial firms, non-educational organizations and individuals are now connected to the Internet also. The most explosive growth has been in the multimedia portion of the Internet, known as the World Wide Web, which is able to transmit computerized image, video and sound files as well as text. The lack of any central authority that can regulate or moderate the content of Internet communications has forced governments and citizens to become increasingly embroiled in issues of free speech, fair trade and community responsibility – yet despite the empty chatter, political grandstanding, sales hype and pornography, the Internet remains an unparalleled and unprecedented medium of valuable information, much of which would otherwise be unavailable to many, or available only with extensive travel, inconvenience or expense. The quantity of such information increases daily, and it includes much that is helpful to serious students of medieval music and chant.


2016 ◽  
Vol 31 (4) ◽  
pp. 464-492 ◽  
Author(s):  
Stefan Helmreich

In February 2016, U.S.-based astronomers announced that they had detected gravitational waves, vibrations in the substance of space-time. When they made the detection public, they translated the signal into sound, a “chirp,” a sound wave swooping up in frequency, indexing, scientists said, the collision of two black holes 1.3 billion years ago. Drawing on interviews with gravitational-wave scientists at MIT and interpreting popular representations of this cosmic audio, I ask after these scientists’ acoustemology—that is, what the anthropologist of sound Steven Feld would call their “sonic way of knowing and being.” Some scientists suggest that interpreting gravitational-wave sounds requires them to develop a “vocabulary,” a trained judgment about how to listen to the impress of interstellar vibration on the medium of the detector. Gravitational-wave detection sounds, I argue, are thus articulations of theories with models and of models with instrumental captures of the cosmically nonhuman. Such articulations, based on mathematical and technological formalisms—Einstein’s equations, interferometric observatories, and sound files—operate alongside less fully disciplined collections of acoustic, auditory, and even musical metaphors, which I call informalisms. Those informalisms then bounce back on the original articulations, leading to rhetorical reverb, in which articulations—amplified through analogies, similes, and metaphors—become difficult to fully isolate from the rhetorical reflections they generate. Filtering analysis through a number of accompanying sound files, this article contributes to the anthropology of listening, positing that scientific audition often operates by listening through technologies that have been tuned to render theories and their accompanying formalisms both materially explicit and interpretively resonant.


2015 ◽  
pp. 99
Author(s):  
Zsófia Schön

This paper aims to present the first steps of a corpus-based dialect dictionary of postpositions in several Khanty dialects and subdialects. Based primarily on specifically elicited data from more than fifty informants, this ongoing project focuses not only on the semantic properties of this part of speech in Khanty, but also on the morphology and combinatorics exhibited by (sub)dialectal microvariation. Special attention is paid to two of the Northern dialects – Kazym and Shuryshkary Khanty – and to one of the Eastern dialects – Surgut Khanty. The lexicon entries have been compiled according to TEI P5 guidelines in XML format, while the corpus data is stored in a MySQL database. A web application combining the lexicon with the corpus data, sound files, annotations and metadata is currently under construction. As a multilingual dialect dictionary of Khanty postpositions, this project hopes to fill a gap in current research on Khanty: namely the lack of easily accessible digital dictionaries. It is designed to be a pilot project for forthcoming digital Khanty dictionaries.


2021 ◽  
Vol 9 (2) ◽  
pp. 1-17
Author(s):  
Edward Kelly

This paper presents a family of objects for manipulating polyrhythmic sequences and isorhythmic relationships, in both the signal and event domains. These work together and are tightly synchronised to an audio phase signal, so that relative temporal relationships can be tempo-manipulated in a linear fashion. Many permutations of polyrhythmic sequences can be realised, including incomplete tuplets, scrambled elements, interleaved tuplets and any complex fractional relation. Similarly, these may be driven with controllable isorhythmic generators derived from a single driver, so that sequences of different fractionally related lengths may be combined and synchronised. Directly generated signals can also drive audio playback, so that disparate sound files may be combined into sequences. A set of sequenced parameters is included to facilitate this process.
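The fractional timing relationships described here can be illustrated independently of the authors' objects. A hypothetical sketch, under the assumption that all streams share one master cycle: a stream of n equal divisions fires at k·cycle/n, which is the relationship an n-against-m polyrhythm encodes.

```python
def polyrhythm_onsets(cycle_s, divisions):
    """Onset times (in seconds) within one master cycle: the stream
    with n equal divisions fires at k * cycle_s / n, k = 0..n-1."""
    return {n: [k * cycle_s / n for k in range(n)] for n in divisions}

# A 3-against-4 polyrhythm over a 2-second master cycle.
onsets = polyrhythm_onsets(2.0, [3, 4])
print(onsets)
```

The paper's objects derive such streams from an audio-rate phase signal so they stay sample-locked under tempo changes; this sketch only shows the static fractional relation, not that synchronisation mechanism.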

