Nonlinear Acoustic Measurements and Rayleigh Waves

Author(s):  
T. O. Mueller
2013 ◽  
Author(s):  
R. W. Martin ◽  
R. D. Mooers ◽  
A. L. Hutson ◽  
S. Sathish ◽  
M. P. Blodgett

2020 ◽  
Vol 63 (12) ◽  
pp. 3991-3999
Author(s):  
Benjamin van der Woerd ◽  
Min Wu ◽  
Vijay Parsa ◽  
Philip C. Doyle ◽  
Kevin Fung

Objectives: This study aimed to evaluate the fidelity and accuracy of a smartphone microphone and recording environment on acoustic measurements of voice. Method: A prospective cohort proof-of-concept study. Two sets of prerecorded samples, (a) sustained vowels (/a/) and (b) the Rainbow Passage sentence, were played for recording via the internal iPhone microphone and the Blue Yeti USB microphone in two recording environments: a sound-treated booth and a quiet office setting. Recordings were presented using a calibrated mannequin speaker at a fixed signal intensity (69 dBA) and a fixed distance (15 in.). Each set of recordings (iPhone in audio booth, Blue Yeti in audio booth, iPhone in office, and Blue Yeti in office) was time-windowed to ensure the same signal was evaluated for each condition. Acoustic measures of voice, including fundamental frequency (f_o), jitter, shimmer, harmonic-to-noise ratio (HNR), and cepstral peak prominence (CPP), were generated using a widely used analysis program (Praat Version 6.0.50). The data were compared using a repeated-measures analysis of variance. Two separate data sets were used. The set of vowel samples included both pathologic (n = 10) and normal (n = 10), male (n = 5) and female (n = 15) speakers. The set of sentence stimuli ranged in perceived voice quality from normal to severely disordered, with an equal number of male (n = 12) and female (n = 12) speakers evaluated. Results: The vowel analyses indicated that jitter, shimmer, HNR, and CPP differed significantly by microphone choice, and that shimmer, HNR, and CPP differed significantly by recording environment. Analysis of sentences revealed a statistically significant impact of recording environment and microphone type on HNR and CPP. Although statistically significant, the differences across the experimental conditions for a subset of the acoustic measures (viz., jitter and CPP) fell within their respective normative ranges. Conclusions: Both microphone and recording setting resulted in significant differences across several acoustic measurements. However, a subset of the acoustic measures that were statistically significant across the recording conditions showed small overall differences that are unlikely to be clinically significant. For these acoustic measures, the present data suggest that, although a sound-treated setting is ideal for voice sample collection, a smartphone microphone can capture recordings acceptable for acoustic signal analysis.
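To make the perturbation measures concrete, the sketch below computes local jitter and local shimmer directly from per-cycle period and peak-amplitude sequences, following their standard definitions (mean absolute consecutive difference divided by the mean, as a percentage). This is a minimal illustration with made-up numbers, not the Praat implementation used in the study:

```python
# Minimal sketch (hypothetical helpers, not Praat's algorithm): local jitter
# and local shimmer from per-cycle measurements of a sustained vowel.

def local_jitter(periods):
    """Mean absolute difference of consecutive periods / mean period, in %."""
    diffs = [abs(a - b) for a, b in zip(periods, periods[1:])]
    return 100.0 * (sum(diffs) / len(diffs)) / (sum(periods) / len(periods))

def local_shimmer(amplitudes):
    """Mean absolute difference of consecutive peak amplitudes / mean amplitude, in %."""
    diffs = [abs(a - b) for a, b in zip(amplitudes, amplitudes[1:])]
    return 100.0 * (sum(diffs) / len(diffs)) / (sum(amplitudes) / len(amplitudes))

# Example: a roughly 200 Hz voice with slight cycle-to-cycle variation.
periods = [0.005, 0.00502, 0.00498, 0.00501, 0.00499]  # seconds per cycle
amps = [1.00, 0.98, 1.01, 0.99, 1.00]                  # peak amplitudes
print(round(local_jitter(periods), 3))   # -> 0.55
print(round(local_shimmer(amps), 3))     # -> 2.008
```

In practice the period and amplitude tracks would come from a pitch-synchronous analysis of the recording; the point here is only the definition of the two ratios.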


2019 ◽  
pp. 40-46 ◽  
Author(s):  
V.V. Savchenko ◽  
A.V. Savchenko

We consider the task of automated quality control of sound recordings containing voice samples of individuals. It is shown that the most acute problem in this task is the small sample size. To overcome this problem, we propose a novel method of acoustic measurement based on the relative stability of the pitch frequency within a voice sample of short duration. An example of its practical implementation using inter-periodic accumulation of a speech signal is considered. An experimental study with specially developed software provides statistical estimates of the effectiveness of the proposed method in noisy environments. It is shown that the method rejects an audio recording as unsuitable for voice biometric identification with a probability of 0.95 or more when the signal-to-noise ratio is below 15 dB. The obtained results are intended for use in developing new, and modifying existing, systems for collecting and automatically controlling the quality of biometric personal data. The article is intended for a wide range of specialists in the field of acoustic measurements and digital processing of speech signals, as well as for practitioners who organize the work of authorized organizations in preparing samples of biometric personal data for registration.
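The decision rule behind such a quality gate can be sketched very simply: accept a short recording for enrollment only if its frame-by-frame pitch estimates are sufficiently stable. The function name, threshold, and F0 values below are our own illustrative assumptions, not the authors' algorithm or parameters:

```python
# Hypothetical sketch of a pitch-stability quality gate: a recording is
# accepted only if the relative spread of its F0 track is below a threshold.

def pitch_is_stable(f0_track, max_rel_spread=0.05):
    """Accept if (max F0 - min F0) / mean F0 over voiced frames <= threshold."""
    voiced = [f for f in f0_track if f > 0]            # unvoiced frames carry F0 = 0
    if len(voiced) < 2:
        return False                                    # too little voiced speech
    mean_f0 = sum(voiced) / len(voiced)
    spread = (max(voiced) - min(voiced)) / mean_f0      # relative pitch spread
    return spread <= max_rel_spread

# Clean sample: F0 hovers near 120 Hz -> accepted.
print(pitch_is_stable([120.0, 121.5, 119.0, 120.5, 0.0, 120.2]))  # True
# Noisy sample: spurious octave/halving errors inflate the spread -> rejected.
print(pitch_is_stable([120.0, 240.0, 119.0, 60.0, 121.0]))        # False
```

A production system would of course estimate the F0 track from the signal itself (the paper's inter-periodic accumulation serves exactly to make that estimate reliable at low SNR); the sketch only shows the accept/reject logic.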


2010 ◽  
Vol 32 (2) ◽  
pp. 107-120
Author(s):  
Pham Chi Vinh ◽  
Trinh Thi Thanh Hue ◽  
Dinh Van Quang ◽  
Nguyen Thi Khanh Linh ◽  
Nguyen Thi Nam

The method of first integrals (MFI), based on the equation of motion for the displacement vector or on the one for the traction vector, was introduced recently in order to find explicit secular equations of Rayleigh waves whose characteristic equations (i.e., the equations determining the attenuation factor) are fully quartic or of higher order (cases in which the classical approach is not applicable). In this paper it is shown that the MFI is applicable not only to Rayleigh waves but also to other waves, by running it on the equations for mixed vectors. In particular: (i) applying the MFI to the equations for the displacement-traction vector yields the explicit dispersion equations of Stoneley waves in twinned crystals; (ii) running the MFI on the equations for the traction-electric induction vector and the traction-electric potential vector provides the explicit dispersion equations of SH waves in piezoelastic materials. The obtained dispersion equations are identical to those previously derived using the method of polarization vectors, but the procedure for deriving them is simpler.
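For readers outside the area, the simplest instance of a "secular equation" is the classical one for Rayleigh waves in an isotropic half-space, quoted here for context only (it is not part of this paper's derivation):

```latex
\left(2 - \frac{c^2}{c_T^2}\right)^{2}
  = 4\,\sqrt{1 - \frac{c^2}{c_L^2}}\;\sqrt{1 - \frac{c^2}{c_T^2}},
```

where c is the Rayleigh wave speed and c_L, c_T are the longitudinal and transverse bulk wave speeds. The MFI targets settings (anisotropy, interfaces, piezoelectric coupling) in which no such compact closed form follows from the classical approach.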


2015 ◽  
Vol 37 (4) ◽  
pp. 303-315 ◽  
Author(s):  
Pham Chi Vinh ◽  
Nguyen Thi Khanh Linh ◽  
Vu Thi Ngoc Anh

This paper presents a technique by which the transfer matrix of an orthotropic layer can be easily obtained in explicit form. This transfer matrix is applicable to both the wave propagation problem and the reflection/transmission problem. The obtained transfer matrix is then employed to derive the explicit secular equation of Rayleigh waves propagating in an orthotropic half-space coated by an orthotropic layer of arbitrary thickness.
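As context for the transfer-matrix approach, the standard state-vector formulation can be sketched as follows (a generic Stroh-type sketch, not the paper's specific explicit matrix):

```latex
% Collect the displacement u and traction t on planes parallel to the layer
% into a state vector \xi = (u, t)^T. For time-harmonic plane waves, the
% equations of elasticity reduce to a first-order system
\frac{\mathrm{d}\xi}{\mathrm{d}x_3} = i k\, N\, \xi ,
% so across a homogeneous layer of thickness h the transfer matrix is
\xi(h) = T\,\xi(0), \qquad T = e^{\, i k h N}.
```

The secular equation of the coated half-space then follows by combining T with the traction-free condition at the top surface and the decay condition in the substrate; the paper's contribution is an explicit closed form for T in the orthotropic case.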

