Intelligibility of Time-Compressed CNC Monosyllables

1972 ◽ Vol 15 (2) ◽ pp. 340-350
Author(s): Daniel S. Beasley, Shelley Schwimmer, William F. Rintelmann

The effects of time-compressed monosyllabic CNCs on the auditory discrimination performance of 96 young adults with normal hearing were studied. Five conditions of time compression, 30% through 70% in 10% steps, plus a 0% control condition were presented at four sensation levels (8, 16, 24, and 32 dB). Ear presentation and list version were counterbalanced with these factors. Results indicated that intelligibility was inversely related to time-compression ratio and directly related to sensation level. Ear and list effects were minimal.
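As a rough illustration of the compression conditions above (not the authors' signal-processing procedure), a time-compression ratio states the fraction of the original playing time removed; pitch is preserved because short segments are discarded and the remainder abutted, rather than the playback being sped up. The function name and the 500 ms example duration are assumptions for illustration:

```python
def compressed_duration(original_ms: float, compression_pct: float) -> float:
    """Duration of a time-compressed stimulus.

    A 60% compression ratio removes 60% of the original duration,
    leaving 40% of the playing time.
    """
    if not 0 <= compression_pct < 100:
        raise ValueError("compression must be in [0, 100)")
    return original_ms * (1 - compression_pct / 100)

# A hypothetical 500 ms monosyllable at the study's six conditions:
for pct in (0, 30, 40, 50, 60, 70):
    print(f"{pct:2d}% -> {compressed_duration(500, pct):.0f} ms")
```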

1980 ◽ Vol 23 (4) ◽ pp. 722-731
Author(s): Daniel S. Beasley, Gene W. Bratt, William F. Rintelmann

Time-compressed monosyllables have been studied relative to the assessment of central auditory disorders. In certain instances, sentential stimuli may be more useful than word lists in central auditory testing, particularly when results may be contaminated by concomitant peripheral hearing losses. Central Institute for the Deaf (CID) and Revised CID sentence lists and a contrived sentential approximation task were presented to 96 normal hearing young adults at time-compression ratios of 0%, 40%, 60%, and 70%, under sensation levels of 24 and 40 dB. The CID and RCID stimuli were more intelligible than the sentential approximations. The results are presented and discussed as they pertain to central auditory testing and are compared to earlier data using consonant-nucleus-consonant monosyllabic stimuli.


1970 ◽ Vol 13 (2) ◽ pp. 347-359
Author(s): J. M. Pickett, J. Mártony

Measurements of vowel formant discrimination were made on 6 listeners with severe-to-profound sensorineural hearing losses and compared with discrimination in 4 normal listeners. The measure of discrimination was the size of the threshold for a frequency change in the formant of a synthetic vowel, located with an adaptive procedure. Results indicated that at the two low formant locations, 205 and 275 Hz, sensorineural discrimination was equal to normal; at 400 and 875 Hz, however, the sensorineural subjects showed poorer discrimination than the normal listeners. For the sensorineural subjects, learning was slow to reach maximum discrimination performance at the two higher formant locations. Formant frequency discrimination appeared to be insensitive to changes in sensation level. Tactual discrimination tests with the vowel stimuli indicated that the performance levels obtained in cases of very poor discrimination may have reflected tactual rather than auditory discrimination.


2004 ◽ Vol 13 (1) ◽ pp. 23-28
Author(s): Andrew Stuart

The equivalency of Lists 1 to 4 of the Northwestern University Auditory Test No. 6 (NU-6; T. W. Tillman & R. Carhart, 1966) was investigated in interrupted broadband noise. Forty-eight young adults with normal hearing participated. All lists were administered at 50 dB sensation level re: listener spondee recognition thresholds, at signal-to-noise ratios (S/Ns) of 10, 5, 0, –5, –10, –15, –20, –25, and –30 dB. Significant differences in listener performance were observed only at S/Ns ranging from 10 to –10 dB, with significant mean list differences ranging from 5.8% to 12.0%. These findings suggest that caution should be exercised when interpreting listener performance differences with NU-6 stimuli presented in a background of interrupted noise.
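The S/N conditions above are amplitude ratios expressed in decibels. A minimal sketch of the conversion, assuming RMS amplitudes as inputs (the helper name is illustrative, not part of the study):

```python
import math

def snr_db(signal_rms: float, noise_rms: float) -> float:
    """Signal-to-noise ratio in dB from RMS amplitudes:
    20 * log10(signal / noise)."""
    return 20 * math.log10(signal_rms / noise_rms)

# Equal signal and noise amplitudes give 0 dB S/N; a signal ten
# times the noise amplitude gives +20 dB:
print(snr_db(1.0, 1.0))    # 0.0
print(snr_db(10.0, 1.0))   # 20.0
```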


2017 ◽ Vol 28 (03) ◽ pp. 222-231
Author(s): Riki Taitelbaum-Swead, Michal Icht, Yaniv Mama

In recent years, the effect of cognitive abilities on the achievements of cochlear implant (CI) users has been evaluated. Some studies have suggested that gaps between CI users and normal-hearing (NH) peers on cognitive tasks are modality specific and occur only in auditory tasks. The present study focused on the effect of learning modality (auditory, visual) and auditory feedback on word memory in young adults who were prelingually deafened and received CIs before the age of 5 yr, and in their NH peers.

A production effect (PE) paradigm was used, in which participants learned familiar study words by vocal production (saying them aloud) or by no-production (silent reading or listening). Words were presented (1) in the visual modality (written) and (2) in the auditory modality (heard). CI users performed the visual condition twice: once with the implant ON and once with it OFF. All conditions were followed by free recall tests.

Twelve young adults, long-term CI users implanted between ages 1.7 and 4.5 yr, who scored ≥50% on a monosyllabic consonant-vowel-consonant open-set test with their implants, were enrolled. A group of 14 age-matched NH young adults served as the comparison group. For each condition, the proportion of study words recalled was calculated. Mixed-measures analyses of variance were carried out with group (NH, CI) as a between-subjects variable and learning condition (aloud or silent reading) as a within-subject variable. Paired-sample t tests were then used to evaluate the PE size (the difference between aloud and silent words) and overall recall ratios (aloud and silent words combined) in each learning condition.

With visual word presentation, young adults with CIs (regardless of implant status, CI-ON or CI-OFF) showed memory performance, and a PE, comparable to their NH peers. With auditory presentation, however, young adults with CIs showed poorer memory for nonproduced words (hence a larger PE) relative to their NH peers. The results support the view that young adults with CIs benefit more from learning via the visual modality (reading) than via the auditory modality (listening). Importantly, vocal production can substantially improve auditory word memory, especially for the CI group.


1978 ◽ Vol 43 (2) ◽ pp. 200-207
Author(s): Grace Haugland Bargstadt, John M. Hutchinson, Michael A. Nerbonne

This investigation provides a preliminary evaluation of the video articulator, a phonemic recognition device for the hearing impaired. The subjects were five young adults with normal hearing and (corrected) vision who were matched with respect to age, sex, dialect, education, and phonological sophistication. Each subject received 150 min of programmed training to learn the video configurations of the eight English fricatives, both in isolation and in consonant-vowel contexts. Following the training period, the subjects were tested, in the absence of auditory cues, on their learning and retention of the video configurations for the training stimuli. The subjects' responses were analyzed using a common covariance measure. The results demonstrated generally low transmission values for consonants in isolation, and identification of consonants in context was less accurate still. As a group, the subjects had greater difficulty recognizing the productions of other subjects than recognizing their own utterances. The clinical implications of these findings are discussed.


Author(s): Lalit B. Damahe, Nileshsingh V. Thakur

Image representation and compression are important areas of computer vision: they reduce image size and support applications such as image restoration and retrieval. Image representation matters for the storage of image information, and it extends to compression, which may be lossy or lossless. Image compression applies to many areas, including medical imaging, traffic monitoring, military use, multimedia transmission, and smart mobile devices, and to virtually any domain that requires low transmission and storage cost, image retrieval processing in particular. This chapter presents various approaches to image representation, compression, and retrieval, and discusses retrieval approaches on personal computers and smart mobile devices. Finally, key issues in image representation, compression, and retrieval are identified on the basis of performance evaluation parameters such as encoding time, decoding time, compression ratio, precision, recall, and elapsed time.
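Two of the evaluation parameters named above, compression ratio and precision/recall, can be sketched minimally as follows; the function names, byte sizes, and ID sets are illustrative assumptions, not the chapter's benchmark:

```python
def compression_ratio(original_bytes: int, compressed_bytes: int) -> float:
    """Uncompressed size divided by compressed size (higher = more compression)."""
    return original_bytes / compressed_bytes

def precision_recall(retrieved: set, relevant: set) -> tuple:
    """Retrieval quality: precision is the relevant fraction of what was
    returned; recall is the returned fraction of what was relevant."""
    hits = len(retrieved & relevant)
    precision = hits / len(retrieved) if retrieved else 0.0
    recall = hits / len(relevant) if relevant else 0.0
    return precision, recall

# A hypothetical 1200 kB image stored in 300 kB:
print(compression_ratio(1200, 300))  # 4.0
# A query returns images {a, b, c} when the relevant set is {b, c, d}:
print(precision_recall({"a", "b", "c"}, {"b", "c", "d"}))
```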


QJM ◽ 2020 ◽ Vol 113 (Supplement_1)
Author(s): W A Elkholy, D M Hassan, N A Shafik, Y E K Eltoukhy

Background: Cortical auditory evoked potentials (CAEPs) are brain responses evoked by sound and processed in or near the auditory cortex. The acoustic change complex (ACC) is a cortical auditory evoked potential (P1-N1-P2) elicited by a change within an ongoing sound stimulus.

Objective: To identify the stimuli that best elicit the ACC and can serve as an objective tool for assessing cortical auditory discrimination in normal-hearing children.

Patients and Methods: The study was designed to standardize the ACC evoked response in 41 children aged 2 to 10 years. The mean age of the study group was 6.2 years, with no significant difference between males and females. Stimuli were specifically designed for AEP equipment capable of uploading short-duration stimuli (500 msec), so they can be used in a regular AEP lab. The ACC was elicited by three groups of stimuli: gap-in-tones stimuli representing temporal change (6, 10, 30, and 50 msec gaps introduced separately into a 1000 Hz tone), frequency-pair stimuli representing frequency change (2%, 4%, 10%, and 25% change from a 1000 Hz base frequency), and vowel-pair stimuli representing spectral change (/i-u/, /u-i/, /i-a/, /a-i/, /u-a/, /a-u/). ACC response parameters elicited by the different stimuli were compared with respect to percent detectability, morphology, latency, and amplitude.

Results: Gap-in-tones at 6 msec and the 4% frequency change elicited an ACC response in 100% of subjects. For spectral change, /u-i/ elicited the ACC most often (78%), followed by /i-u/ (68.2%) and /a-i/ (58.5%). The ACC had the same morphology as the onset response in the majority of subjects, with longer latency and smaller amplitude. ACC amplitude is a better indicator of cortical discrimination than latency because it is consistently affected by the magnitude of change.

Conclusion: The ACC is a good electrophysiological tool for assessing cortical auditory discrimination of temporal, frequency, and spectral change.
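The gap-in-tones condition described above can be sketched as a pure tone with a silent interval inserted into it. This is a minimal illustration only; the sample rate, gap placement at the midpoint, and lack of onset/offset windowing are assumptions, not the study's actual stimulus files:

```python
import math

def gap_in_tone(freq_hz=1000.0, total_ms=500.0, gap_ms=6.0,
                sample_rate=44100):
    """Generate a pure tone with a silent gap at its midpoint.

    Defaults follow the 6 msec gap / 1000 Hz / 500 msec condition
    mentioned above. Returns a list of float samples in [-1, 1].
    """
    n_total = int(sample_rate * total_ms / 1000)
    n_gap = int(sample_rate * gap_ms / 1000)
    gap_start = (n_total - n_gap) // 2
    samples = []
    for n in range(n_total):
        if gap_start <= n < gap_start + n_gap:
            samples.append(0.0)  # silent gap (the temporal change)
        else:
            samples.append(math.sin(2 * math.pi * freq_hz * n / sample_rate))
    return samples

stimulus = gap_in_tone()
print(len(stimulus))  # 22050 samples = 500 msec at 44.1 kHz
```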

