TECHNOLOGICAL PROCESSES OF SOUND MAKING IN AUDIOVISUAL WORK

Author(s):  
Lishafai Oleksandr

The relevance of the article. The article explores the technological processes of creating sound accompaniment in audiovisual works and points out the main categories that expose a panoramic picture of these processes. The soundtrack in relation to a motion picture is a flexible, modern structure with a large number of components. There are a number of technical means for reproducing sound accompaniment in audiovisual work. More than a hundred years ago, when it was necessary to make works of fine art more affecting and impressive with the help of sound, creators fell back on acoustic musical instruments, which were part of various types of orchestras. Noise sounds were also widely used. At the same time, the development of audio-visual interaction in motion pictures took place. Eventually, this process began to include technologies based on the use of vinyl records, discs, films and tape recordings. Today, they are replaced by the latest, rapidly updated digital equipment that enhances and expands the possibilities of sound in audiovisual work. The purpose of the article is to reveal and systematize the main components of the technological process of reproducing sound accompaniment in a motion picture, as well as to generalize the results in graphical form, showing a panoramic view of this process. The research methods used to study the technological processes of sound production in audiovisual work are: system-analytical (investigating sources that reveal these processes, their comparative analysis and systematization) and generalizing (investigating the origins of the main categories and exposing a panoramic picture of these processes). Conclusions.
The work summarizes information on the technology of sound field creation; the main categories that make up this process (sound as a unit of musical information and the core of the background composition; engineering equipment and the group of specialists responsible for sound recording); principles of sound source processing (methods of working with sounds and effects); structural categories (holistic background compositions) and methods of implementation; and the effectiveness of technological processes in creating sound design. The article outlines the author's concept of creating a soundtrack in audiovisual work as a scientific and practical process and gives prospects for its development.

Author(s):  
Oleksandr Lishafai

The purpose of the article is to study the main components of the technological process in the formation of the soundtrack of the audiovisual space. The research methodology is based on the use of methods of source search, systematization, and comparative analysis. The scientific novelty lies in the attempt to create a theoretical concept concerning the importance of technological processes in creating the sound design of works. Conclusions. The article presents the concept of technological processes for creating sound in the audiovisual context, as a scientific and practical phenomenon, in the form of a panoramic figure. The research will serve as an incentive to search for new ways of combining the existing elements of the background composition, as well as to create previously unused sound effects. Keywords: sound design skills, background composition, sound field unit, sound recording techniques, sound design.


Author(s):  
Kin’ya Takahashi ◽  
Masataka Miyamoto ◽  
Yasunori Ito ◽  
Toshiya Takami ◽  
Taizo Kobayashi ◽  
...  

The acoustic mechanisms of 2D and 3D edge tones and of a 2D small air-reed instrument have been studied numerically with compressible Large Eddy Simulation (LES). The sound frequencies of the 2D and 3D edge tones obtained numerically change with jet velocity in good agreement with Brown's semi-empirical equation, while that of the 2D air-reed instrument behaves differently and obeys the so-called Cremer-Ising-Coltman semi-empirical theory. We have also calculated the aerodynamic sound sources of the 2D edge tone and the 2D air-reed instrument using Lighthill's acoustic analogy and have discussed similarities and differences between them. The sound source of the air-reed instrument is more localized around the open mouth than that of the edge tone, due to the effect of the strong sound field excited in the resonator.
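Brown's semi-empirical relation for the edge-tone frequency can be sketched numerically. The snippet below is a hedged illustration of one commonly quoted form of Brown's formula (jet speed in cm/s, nozzle-to-edge distance in cm, stage factors j = 1.0, 2.3, 3.8, 5.4); the function name and example values are illustrative and not taken from the paper.

```python
# Hedged sketch: one common statement of Brown's semi-empirical
# edge-tone formula, f = 0.466 * j * (U - 40) * (1/h - 0.07),
# with U in cm/s, h in cm, and stage factor j.
def brown_frequency(U_cm_s, h_cm, stage=1):
    """Edge-tone frequency in Hz for a given hydrodynamic stage.

    Stage factors j = 1.0, 2.3, 3.8, 5.4 (stages I-IV) are the
    values usually quoted from Brown's work.
    """
    j_values = {1: 1.0, 2: 2.3, 3: 3.8, 4: 5.4}
    j = j_values[stage]
    return 0.466 * j * (U_cm_s - 40.0) * (1.0 / h_cm - 0.07)

# Example: a 15 m/s (1500 cm/s) jet striking an edge 1 cm away, stage I.
f1 = brown_frequency(1500.0, 1.0, stage=1)  # roughly 633 Hz
```

The frequency rises roughly linearly with jet velocity, which is the behaviour the simulations above are compared against.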


1999 ◽  
Vol 5 (2) ◽  
pp. 135-140
Author(s):  
Vytautas Stauskis

The paper deals with the differences between the energy created by four different pulsed sound sources, i.e. a sound gun, a start gun, a toy gun, and a hunting gun. A knowledge of the difference between the maximum energy and the minimum energy, or the signal-to-noise ratio, is necessary to correctly calculate the frequency dependence of reverberation time. Investigations have established that the maximum energy excited by the sound gun is within the frequency range of 250 to 2000 Hz. It decreases by about 28 dB at low frequencies. The character of change in the energy created by the hunting gun differs from that of the sound gun. There is no change in the maximum energy within the frequency range of 63–100 Hz, after which it increases with frequency, but only up to 2000 Hz. In the frequency range of 63–500 Hz, the energy excited by the hunting gun is lower by 15–30 dB than that of the sound gun. As frequency increases, the difference is reduced and amounts to 5–10 dB. The maximum energy of the start gun is lower by 4–5 dB than that of the hunting gun in the frequency range up to 1000 Hz, while above that the difference is insignificant. In the frequency range of 125–250 Hz, the maximum energy generated by the sound gun exceeds that generated by the hunting gun by 20 dB, that by the start gun by 25 dB, and that by the toy gun by as much as 35 dB. The maximum energy emitted by the sound gun occupies a wide frequency range of 250 to 2000 Hz. Thus, the sound gun has an advantage over the other three sound sources from the point of view of maximum energy. Up to 500 Hz, the character of change in the direct sound energy is similar for all types of sources. The maximum energy of direct sound is also created by the sound gun and increases with frequency, the maximum values being reached at 500 Hz and 1000 Hz.
The maximum energy of the hunting gun in the frequency range of 125–500 Hz is lower by about 20 dB than that of the sound gun, while the maximum energy of the toy gun is lower by about 25 dB. The maximum of the direct sound energy generated by the hunting gun, the start gun and the toy gun is found at high frequencies, i.e. at 1000 Hz and 2000 Hz, while the sound gun generates the maximum energy at 500 Hz and 1000 Hz. Thus, the best results are obtained when the energy is emitted by the sound gun. When the sound field is generated by the sound gun, the difference between the maximum energy and the noise level is about 35 dB at 63 Hz, while the use of the hunting gun reduces the difference to about 20–22 dB. The start gun emits only small amounts of low-frequency energy and is not suitable for the acoustical analysis of rooms at 63 Hz. At 80 Hz, the difference between the maximum energy and the noise level makes up about 50 dB when the sound field is generated by the sound gun, and about 27 dB when it is generated by the hunting gun. When the start gun is used, the difference between the maximum signal and the noise level is as small as 20 dB, which is not sufficient for a correct reverberation time analysis. At 100 Hz, a difference of about 55 dB between the maximum energy and the noise level is achieved only by the sound gun. The hunting gun, the start gun and the toy gun create a decrease of about 25 dB, which is not sufficient for the calculation of reverberation time. At 125 Hz, a sufficiently large difference in the sound field decay, amounting to about 40 dB, is created by the sound gun, the hunting gun and the start gun, though the character of the decay curve of the latter differs from that of the former two. At 250 Hz, the sound gun produces a field decay difference of almost 60 dB, the hunting gun almost 50 dB, the start gun almost 40 dB, and the toy gun about 45 dB.
At 500 Hz, the sound field decay is sufficient when any of the four sound sources is used. The energy difference created by the sound gun is as large as 70 dB, by the hunting gun 50 dB, by the start gun 52 dB, and by the toy gun 48 dB. Such energy differences are sufficient for the analysis of acoustic indicators. At the high frequencies of 1000 to 4000 Hz, all four sound sources, even the toy gun, produce a good sound field decay range, and in all cases it is possible to analyse the reverberation process over varied intervals of sound level decay.
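The peak-to-noise requirements discussed above can be expressed as a simple feasibility check. The sketch below uses the ISO 3382-style rule of thumb that T20 evaluates the decay from -5 to -25 dB (needing at least a 35 dB peak-to-noise range) and T30 from -5 to -35 dB (needing at least 45 dB); the function and example figures are illustrative, not from the paper.

```python
# Hedged sketch: which standard reverberation-time evaluation
# intervals are feasible for a given peak-to-noise range, using the
# common rule that the decay interval needs ~10 dB of headroom above
# the noise floor (ISO 3382-style).
def usable_decay_ranges(peak_to_noise_db):
    """Return feasibility of the T20 and T30 evaluation intervals."""
    return {
        "T20": peak_to_noise_db >= 35.0,  # -5 to -25 dB interval
        "T30": peak_to_noise_db >= 45.0,  # -5 to -35 dB interval
    }

# Example, mirroring the figures quoted at 100 Hz: the sound gun gives
# about 55 dB of range, the other sources only about 25 dB.
sound_gun = usable_decay_ranges(55.0)  # both intervals feasible
toy_gun = usable_decay_ranges(25.0)    # neither interval feasible
```

This makes explicit why a 25 dB decay range is called insufficient in the abstract: it does not even cover the T20 interval plus noise headroom.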


2021 ◽  
Vol 2 ◽  
Author(s):  
Thirsa Huisman ◽  
Axel Ahrens ◽  
Ewen MacDonald

To reproduce realistic audio-visual scenarios in the laboratory, Ambisonics is often used to reproduce a sound field over loudspeakers, while virtual reality (VR) glasses present the visual information. Both technologies have been shown to be suitable for research. However, the combination of the two, Ambisonics and VR glasses, might affect the spatial cues for auditory localization and thus the localization percept. Here, we investigated how VR glasses affect the localization of virtual sound sources on the horizontal plane produced using either 1st-, 3rd-, 5th- or 11th-order Ambisonics, with and without visual information. Results showed that with 1st-order Ambisonics the localization error is larger than with the higher orders, while the differences across the higher orders were small. The physical presence of the VR glasses without visual information increased the perceived lateralization of the auditory stimuli by on average about 2°, especially in the right hemisphere. Presenting visual information about the environment and potential sound sources reduced this shift induced by the head-mounted display (HMD), but could not fully compensate for it. While localization performance itself was affected by the Ambisonics order, there was no interaction between the Ambisonics order and the effect of the HMD. Thus, the presence of VR glasses can alter acoustic localization when using Ambisonics sound reproduction, but visual information can compensate for most of the effect. As such, most use cases for VR will be unaffected by these shifts in the perceived location of auditory stimuli.
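The effect of Ambisonics order on spatial sharpness can be illustrated with the horizontal-only (2D) circular-harmonic panning function. The sketch below is a minimal illustration, not the study's reproduction chain: it shows the basic sampling-decoder beam pattern, whose width shrinks with order, which is consistent with the larger localization error reported for 1st order.

```python
import math

# Hedged sketch: 2D (horizontal-only) Ambisonics. A plane wave from
# azimuth theta is encoded into circular harmonics cos(m*theta),
# sin(m*theta); a simple sampling decoder for a direction phi then
# reduces to the normalized beam pattern below.
def panning_gain(order, theta, phi):
    """Normalized gain toward direction phi for a source at theta."""
    g = 1.0
    for m in range(1, order + 1):
        g += 2.0 * math.cos(m * (phi - theta))
    return g / (2 * order + 1)

# The beam is maximal (gain 1) in the source direction and narrows
# with order: 30 degrees off-axis, 1st order leaks far more than 5th.
leak_1st = panning_gain(1, 0.0, math.radians(30))
leak_5th = panning_gain(5, 0.0, math.radians(30))
```

The narrower high-order beam spreads source energy over fewer loudspeakers, sharpening interaural cues; this is one common way to explain why localization error drops between 1st and higher orders while saturating beyond them.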


2007 ◽  
Vol 274 (1626) ◽  
pp. 2703-2710 ◽  
Author(s):  
Kenneth K Jensen ◽  
Brenton G Cooper ◽  
Ole N Larsen ◽  
Franz Goller

The principal physical mechanism of sound generation is similar in songbirds and humans, despite large differences in their vocal organs. Whereas vocal fold dynamics in the human larynx are well characterized, the vibratory behaviour of the sound-generating labia in the songbird vocal organ, the syrinx, is unknown. We present the first high-speed video records of the intact syrinx during induced phonation. The syrinx of anaesthetized crows shows a vibration pattern of the labia similar to that of the human vocal fry register. Acoustic pulses result from short opening of the labia, and pulse generation alternates between the left and right sound sources. Spontaneously calling crows can also generate similar pulse characteristics with only one sound generator. Airflow recordings in zebra finches and starlings show that pulse tone sounds can be generated unilaterally, synchronously or by alternating between the two sides. Vocal fry-like dynamics therefore represent a common production mechanism for low-frequency sounds in songbirds. These results also illustrate that complex vibration patterns can emerge from the mechanical properties of the coupled sound generators in the syrinx. The use of vocal fry-like dynamics in the songbird syrinx extends the similarity to this unusual vocal register with mammalian sound production mechanisms.


Perception ◽  
1973 ◽  
Vol 2 (3) ◽  
pp. 337-341 ◽  
Author(s):  
S M Anstis

A subject wore for six days a microphone on each hand, connected to stereo headphones. This effectively placed his ears on his hands. Hand movements, with eyes closed, produced apparent movements of sound sources, and crossing the hands over appeared to reverse the sound field. No perceptual adaptation to this auditory rearrangement was found.


Author(s):  
Michael Edward Edgerton

This chapter presents an overview of new developments in vocal exploration. Beginning with a discussion of the multiple parameters involved in voice production, the chapter identifies the crucial role that non-linear phenomena play in the performance of the extra-normal voice. Two related taxonomies are presented (source production related to degree of voicing; emphases within the acoustic framework of power, source, resonance, and articulation) that may be used as powerful generative tools for the production of multiple sound sources, filtering processes, and aerodynamic effects. The chapter then posits how scaled, multidimensional networks may be used to intelligently explore all elements of the acoustic sound production apparatus, and not solely articulation, as is seen with some proponents of complex networks. It is shown how fully scaling each parameter space yields far-reaching benefits by engaging with little-traversed regions of the total vocal topography.


2001 ◽  
Author(s):  
Arzu Gonenc Sorguc ◽  
Ichiro Hagiwara ◽  
Qinzhong Shi ◽  
Haldun Akagunduz

Abstract In this study, the sound field inside an acoustically-structurally coupled rectangular cavity, excited by structural loading and sound sources, is shaped by optimizing the position of the sound source. In the optimization, Most Probable Optimal Design (MPOD) based on a Holographic Neural Network is employed, and the results are compared with Sequential Quadratic Programming (SQP). It is shown that source position, rather than source strength, is more effective in acoustically controlled modes. The nodal positions of the in-vacuo acoustical normal modes are good candidates for initial starting points.
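The in-vacuo (rigid-wall) normal modes mentioned as starting points follow the standard rectangular-cavity formula. The sketch below is illustrative only; the dimensions and function names are assumptions, not values from the study.

```python
import math

# Hedged sketch: rigid-wall acoustic normal modes of a rectangular
# cavity. Frequencies follow the standard formula
#   f = (c / 2) * sqrt((nx/Lx)^2 + (ny/Ly)^2 + (nz/Lz)^2),
# and pressure nodes of the (n,0,0) axial mode lie where
# cos(n*pi*x/Lx) = 0, i.e. x = (2k+1) * Lx / (2n).
def mode_frequency(n, L, c=343.0):
    """Frequency in Hz of mode (nx, ny, nz) in a cavity of size Lx x Ly x Lz (m)."""
    return (c / 2.0) * math.sqrt(sum((ni / Li) ** 2 for ni, Li in zip(n, L)))

def axial_node_positions(n, L):
    """x-coordinates (m) of the pressure nodes of the (n,0,0) axial mode."""
    return [(2 * k + 1) * L / (2 * n) for k in range(n)]

# Example: first axial mode of a hypothetical 4 m x 3 m x 2.5 m cavity.
f_100 = mode_frequency((1, 0, 0), (4.0, 3.0, 2.5))  # about 42.9 Hz
nodes = axial_node_positions(1, 4.0)                 # single node at mid-length
```

A source placed at a pressure node couples weakly into that mode, which is why such positions are natural initial guesses when shaping the field by moving the source.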


Author(s):  
Jyri Pakarinen

This chapter discusses the central physical phenomena involved in music. The aim is to explain the related issues at an understandable level, without delving unnecessarily deeply into the underlying mathematics. The chapter is divided into two main sections: musical sound sources, and sound transmission to the observer. The first section starts from the definition of sound as wave motion and then guides the reader through the vibration of strings, bars, membranes, plates, and air columns, that is, the oscillating sources that create the sound of most musical instruments. Resonating structures, such as instrument bodies, are also reviewed, and the section ends with a discussion of potential physical markup parameters for musical sound sources. The second section starts with an introduction to the basics of room acoustics and then explains the acoustic effect that a human observer causes in the sound field. The end of the second section discusses which sound transmission parameters could be used in a general music markup language. Finally, a concluding section is presented.
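The vibrating string surveyed in the first section obeys the ideal-string relation f_n = (n / 2L) * sqrt(T / mu). The snippet below is a minimal sketch of that formula; the string parameters are illustrative, not taken from the chapter.

```python
import math

# Hedged sketch: harmonic series of an ideal stretched string,
#   f_n = (n / 2L) * sqrt(T / mu),
# with length L in m, tension T in N, linear density mu in kg/m.
def string_harmonics(length_m, tension_n, lin_density_kg_m, n_harmonics=4):
    """Frequencies (Hz) of the first n harmonics of an ideal string."""
    f1 = math.sqrt(tension_n / lin_density_kg_m) / (2.0 * length_m)
    return [n * f1 for n in range(1, n_harmonics + 1)]

# Example: a 0.65 m string under 60 N tension, 0.4 g/m linear density.
harmonics = string_harmonics(0.65, 60.0, 0.0004)
```

The integer-multiple spacing of the harmonics is what gives strings their clear pitch, in contrast to the inharmonic partials of bars, membranes, and plates also covered in the section.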

