Perceptual Dimensions Underlying Vowel Lipreading Performance

1976 ◽  
Vol 19 (4) ◽  
pp. 796-812 ◽  
Author(s):  
Pamela L. Jackson ◽  
Allen A. Montgomery ◽  
Carl A. Binnie

This study was concerned with the extraction, description, and verification of visual perceptual features underlying vowel lipreading performance. Ten viewers with normal hearing rated the visual similarity of pairs of 15 vowels and diphthongs presented in an /h_g/ context by four speakers. Multidimensional scaling techniques were used to extract potential perceptual features which were then labeled by the experimenters. The resulting perceptual dimensions were correlated with physical measurements of lip shape to evaluate the adequacy of the feature labels. The results indicated that the traditional extended-rounded vowel feature and a vertical lip separation feature were the characteristics most prominent in judging the stimuli. In addition, a feature related to overall area of maximum lip opening and two features unique to diphthong perception were tentatively identified.
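The scaling step described above can be made concrete with a small sketch. This is classical (Torgerson) multidimensional scaling, one standard MDS variant; the dissimilarity matrix below is invented for illustration and does not reproduce the study's data.

```python
import numpy as np

def classical_mds(dissim, n_dims=2):
    """Classical (Torgerson) MDS: embed a dissimilarity matrix in n_dims."""
    n = dissim.shape[0]
    # Double-center the squared dissimilarities.
    J = np.eye(n) - np.ones((n, n)) / n
    B = -0.5 * J @ (dissim ** 2) @ J
    # Eigendecomposition; keep the top n_dims components (clip negatives).
    vals, vecs = np.linalg.eigh(B)
    order = np.argsort(vals)[::-1][:n_dims]
    return vecs[:, order] * np.sqrt(np.maximum(vals[order], 0.0))

# Toy dissimilarities among four "vowels": two tight pairs, far apart.
D = np.array([[0.0, 1.0, 8.0, 8.0],
              [1.0, 0.0, 8.0, 8.0],
              [8.0, 8.0, 0.0, 1.0],
              [8.0, 8.0, 1.0, 0.0]])
coords = classical_mds(D, n_dims=2)
```

Each row of `coords` is one stimulus in the recovered perceptual space; interpreting the axes (e.g. as extended-rounded or lip separation) is then the experimenter's labeling task described in the abstract.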

1987 ◽  
Vol 65 (3) ◽  
pp. 837-838 ◽ 
Author(s):  
Robert Allen Fox ◽  
Jean Booth

It has been argued that bark-scale transformed formant frequency values more accurately reflect auditory representations of vowels in the perceptual system than do the absolute physical values (in Hertz). In the present study the perceptual features of 15 monophthongal and diphthongal vowels (obtained using multidimensional scaling) were compared with both absolute and bark-scale transformed acoustic vowel measures. Analyses suggest that bark-transformation of the acoustic data does not necessarily produce better predictions of the vowels' perceptual space.
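The abstract does not say which Hertz-to-bark conversion the authors used; a common analytic approximation contemporary with the study is Zwicker and Terhardt's (1980) formula, sketched here with illustrative formant values.

```python
import math

def hz_to_bark(f_hz):
    """Approximate critical-band rate (bark); Zwicker & Terhardt (1980)."""
    return 13.0 * math.atan(0.00076 * f_hz) + 3.5 * math.atan((f_hz / 7500.0) ** 2)

# Illustrative formants for an [i]-like vowel (not values from the study).
f1, f2 = 300.0, 2300.0
b1, b2 = hz_to_bark(f1), hz_to_bark(f2)
```

The transform compresses high frequencies, so equal-Hertz formant differences shrink in bark terms as frequency rises; the study's question is whether that compression actually improves prediction of the perceptual space.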


Author(s):  
James A. Kleiss

Previous research indicates two properties of real-world scenes are important to pilots for visual low-altitude flight: (a) vertical development mediated by presence or absence of hills and ridges, and (b) discrete objects exemplified by large objects or groups of objects. The present investigation sought to determine whether these scene properties can be represented with adequate perceptual fidelity in flight simulator visual scenes. The stimuli were sixteen computer-generated scenes exhibiting variation in both properties described above. Subjects rated the visual similarity of scenes with regard to properties useful for visual low-altitude flight. Ratings were analyzed using multidimensional scaling. A two-dimensional spatial configuration captured orderly variation in both scene properties. Unlike previous results using real-world scenes, discrete objects were relatively more important than vertical development in computer-generated scenes. Also, groups of trees were no more salient than randomly scattered trees in computer-generated scenes. Thus, properties important in real-world scenes can be effectively modeled in computer-generated scenes although some differences remain.
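A standard way to judge whether a two-dimensional configuration like the one reported here adequately captures the rating data is Kruskal's stress-1, sketched below on invented data (a 3-4-5 triangle, which a plane reproduces exactly).

```python
import numpy as np

def stress_1(dissim, coords):
    """Kruskal's stress-1: mismatch between data and configuration distances."""
    n = dissim.shape[0]
    num = den = 0.0
    for i in range(n):
        for j in range(i + 1, n):
            d_hat = np.linalg.norm(coords[i] - coords[j])
            num += (dissim[i, j] - d_hat) ** 2
            den += d_hat ** 2
    return float(np.sqrt(num / den))

# Three points whose pairwise distances (3, 4, 5) are exactly planar,
# so a 2-D configuration fits them with zero stress.
coords = np.array([[0.0, 0.0], [3.0, 0.0], [0.0, 4.0]])
D = np.array([[0.0, 3.0, 4.0],
              [3.0, 0.0, 5.0],
              [4.0, 5.0, 0.0]])
fit = stress_1(D, coords)
```

Lower stress means the spatial configuration reproduces the rated similarities more faithfully; values near zero indicate the chosen dimensionality suffices.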


1986 ◽  
Vol 7 (5) ◽  
pp. 318-322 ◽  
Author(s):  
Jeffrey L. Danhauer ◽  
Caroline Abdala ◽  
Carole Johnson ◽  
Carl Asp

1976 ◽  
Vol 19 (1) ◽  
pp. 68-77 ◽  
Author(s):  
Jeffrey L. Danhauer ◽  
Margret A. Appel

Twenty-four normal-hearing subjects received CV stimuli as uni- or bisensory input through the visual (speechreading), tactile (touch), and visual-tactile (speechreading and touch) modalities. Stimuli were presented via a videotape monitor, a tactile vibrator, or both. The purposes were to investigate the contribution of the tactile modality in uni- and bisensory conditions, and to analyze consonantal substitution errors to find the perceptual features subjects used in their decision-making. The subjects' consonantal perceptions were phonetically transcribed and submitted to INDSCAL analysis, which yielded primarily a four-dimensional solution. These dimensions were interpreted as closed bilabial, easy to see/hard to see, voiced/voiceless, and front/back place. Additional features were retrieved, but occurred less consistently.
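Before a three-way scaling program such as INDSCAL can be run, the transcribed confusions must be converted to symmetric dissimilarities. The recipe below (row-normalize, symmetrize, subtract from 1) is one common convention, not necessarily the study's own; the confusion counts are invented.

```python
import numpy as np

def confusions_to_dissimilarity(conf):
    """conf[i, j] = count of stimulus i transcribed as response j."""
    # Row-normalize to response proportions, then symmetrize.
    p = conf / conf.sum(axis=1, keepdims=True)
    sim = 0.5 * (p + p.T)
    d = 1.0 - sim
    np.fill_diagonal(d, 0.0)
    return d

# Toy matrix for three consonants: /p/ and /b/ often confused, /f/ distinct.
conf = np.array([[80, 15, 5],
                 [18, 77, 5],
                 [4, 6, 90]])
D = confusions_to_dissimilarity(conf)
```

One such matrix per subject (or per modality condition) would then feed INDSCAL, whose subject weights indicate how strongly each viewer relies on each perceptual dimension.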


Author(s):  
J.P. Fallon ◽  
P.J. Gregory ◽  
C.J. Taylor

Quantitative image analysis systems have been used for several years in research and quality-control applications in fields including metallurgy and medicine. The technique has been applied as an extension of subjective microscopy to problems that require quantitative results and are amenable to automatic methods of interpretation.

Feature extraction. In the most general sense, a feature can be defined as a portion of the image which differs in some consistent way from the background. A feature may be characterized by the density difference between itself and the background, by an edge gradient, or by the spatial frequency content (texture) within its boundaries. The task of feature extraction includes recognition of features and encoding of the associated information for quantitative analysis.

Quantitative analysis. Quantitative analysis is the determination of one or more physical measurements of each feature. These measurements may be straightforward ones such as area, length, or perimeter, or more complex stereological measurements such as convex perimeter or Feret's diameter.
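The simpler measurements named above can be made concrete on a binary image. This sketch uses plain NumPy and a brute-force Feret computation; production image-analysis systems use more careful boundary handling, so treat it as illustrative only.

```python
import numpy as np
from itertools import combinations

def measure_feature(mask):
    """Area, 4-connected perimeter, and Feret (max caliper) diameter in pixels."""
    area = int(mask.sum())
    # Perimeter: count exposed 4-neighbour edges of foreground pixels.
    padded = np.pad(mask, 1)  # zero border so edge pixels count as exposed
    perim = sum(
        int((padded[1:-1, 1:-1] & ~np.roll(padded, s, axis=a)[1:-1, 1:-1]).sum())
        for a, s in ((0, 1), (0, -1), (1, 1), (1, -1)))
    # Feret's diameter: largest distance between any two foreground pixels.
    pts = np.argwhere(mask)
    feret = max((np.linalg.norm(p - q) for p, q in combinations(pts, 2)),
                default=0.0)
    return area, perim, float(feret)

# A solid 3x3 square: area 9, perimeter 12, Feret diameter sqrt(8).
square = np.zeros((5, 5), dtype=bool)
square[1:4, 1:4] = True
area, perimeter, feret = measure_feature(square)
```

Convex perimeter and other stereological measures would require a convex-hull step on top of the boundary points extracted here.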

