Lack of regularity between letters impacts word recognition performance

2019 ◽  
Author(s):  
Sofie Beier ◽  
Jean-Baptiste Bernard

Abstract
Physical inter-letter dissimilarity has been suggested as a way to increase perceptual differences between letter shapes and hence to improve reading performance. However, the deleterious effects of font tuning suggest that low inter-letter regularity (due to the enhancement of specific letter features to make them more differentiable) may impair word recognition performance. The aim of the present investigation was (1) to validate our hypothesis that reducing inter-letter regularity impairs reading performance, as suggested by font tuning, and (2) to test whether some forms of non-regularity impair visual word recognition more than others. To do so, we designed four new fonts. For each font we induced one type of increased perceptual difference: in the first font, the letters have longer extender lengths; in the second font, the letters have different slants; and in the third font, the letters have different cases. We also designed a fourth font whose letters differ on all three aspects (worst regularity across letters). Word recognition performance was measured for each of the four fonts against a traditional sans serif font (best regularity across letters) through a lexical decision task. Results showed a significant decrease in word recognition performance only for the fonts with mixed-case letters, suggesting that fonts with low regularity, such as mixed-case letters, should be avoided in the definition of new "optimal" fonts. Letter recognition performance, measured for the five fonts through a trigram recognition task, showed that this effect is not consistently due to poor letter identification.

1977 ◽  
Vol 29 (3) ◽  
pp. 515-525 ◽  
Author(s):  
Eleanor M. Saffran ◽  
Oscar S. M. Marin

This study of an aphasic dyslexic supports the view that there are separate visual and phonological pathways in reading. The patient retained a reading vocabulary of at least 16 500 words although she was unable to perform operations that critically depend on grapheme-to-phoneme conversion; these included reading nonsense words, recognizing rhymes and homophones, and accessing lexical entries from homophonic spellings such as “kote”. Typographical variation, such as mixed case presentation, did not interfere with her reading performance, which suggests that it is mediated by letter identification rather than by a wholistic method of word recognition. The total performance pattern strongly suggests that this patient identifies words by matching particular letter-strings with their corresponding meanings.


2005 ◽  
Vol 36 (3) ◽  
pp. 219-229 ◽  
Author(s):  
Peggy Nelson ◽  
Kathryn Kohnert ◽  
Sabina Sabur ◽  
Daniel Shaw

Purpose: Two studies were conducted to investigate the effects of classroom noise on attention and speech perception in native Spanish-speaking second graders learning English as their second language (L2) as compared to English-only-speaking (EO) peers. Method: Study 1 measured children’s on-task behavior during instructional activities with and without soundfield amplification. Study 2 measured the effects of noise (+10 dB signal-to-noise ratio) using an experimental English word recognition task. Results: Findings from Study 1 revealed no significant condition (pre/postamplification) or group differences in observations in on-task performance. Main findings from Study 2 were that word recognition performance declined significantly for both L2 and EO groups in the noise condition; however, the impact was disproportionately greater for the L2 group. Clinical Implications: Children learning in their L2 appear to be at a distinct disadvantage when listening in rooms with typical noise and reverberation. Speech-language pathologists and audiologists should collaborate to inform teachers, help reduce classroom noise, increase signal levels, and improve access to spoken language for L2 learners.


2003 ◽  
Vol 46 (2) ◽  
pp. 390-404 ◽  
Author(s):  
Adam R. Kaiser ◽  
Karen Iler Kirk ◽  
Lorin Lachs ◽  
David B. Pisoni

The present study examined how postlingually deafened adults with cochlear implants combine visual information from lipreading with auditory cues in an open-set word recognition task. Adults with normal hearing served as a comparison group. Word recognition performance was assessed using lexically controlled word lists presented under auditory-only, visual-only, and combined audiovisual presentation formats. Effects of talker variability were studied by manipulating the number of talkers producing the stimulus tokens. Lexical competition was investigated using sets of lexically easy and lexically hard test words. To assess the degree of audiovisual integration, a measure of visual enhancement, Ra, quantified the gain in performance in the audiovisual presentation format relative to the maximum possible gain over the auditory-only format. Results showed that word recognition performance was highest for audiovisual presentation, followed by auditory-only and then visual-only presentation. Performance was better for single-talker lists than for multiple-talker lists, particularly under the audiovisual presentation format. Word recognition performance was better for the lexically easy than for the lexically hard words regardless of presentation format. Visual enhancement scores were higher for single-talker conditions than for multiple-talker conditions and tended to be somewhat better for lexically easy words than for lexically hard words. The pattern of results suggests that information from the auditory and visual modalities is used to access common, multimodal lexical representations in memory. The findings are discussed in terms of the complementary nature of auditory and visual sources of information that specify the same underlying gestures and articulatory events in speech.
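The abstract does not spell out how the visual-enhancement measure Ra is computed. A common definition in the audiovisual speech literature (the Sumby and Pollack style normalization) divides the audiovisual gain by the headroom remaining in the auditory-only condition; the sketch below assumes that form, and the paper's exact formula may differ.

```python
def visual_enhancement(a_only: float, av: float) -> float:
    """Relative visual enhancement: the gain from adding vision,
    normalized by the room for improvement over auditory-only.
    Scores are proportions correct in [0, 1]. Assumes the
    Sumby-Pollack style normalization (not confirmed by the abstract)."""
    if a_only >= 1.0:
        return 0.0  # ceiling performance leaves no headroom
    return (av - a_only) / (1.0 - a_only)

# e.g. auditory-only 60% correct, audiovisual 90% correct
print(round(visual_enhancement(0.60, 0.90), 3))  # 0.75
```

Because the denominator shrinks as auditory-only performance approaches ceiling, this measure credits gains achieved when little room for improvement remains.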


2008 ◽  
Vol 19 (10) ◽  
pp. 998-1006 ◽  
Author(s):  
Janet Hui-wen Hsiao ◽  
Garrison Cottrell

It is well known that there exist preferred landing positions for eye fixations in visual word recognition. However, the existence of preferred landing positions in face recognition is less well established. It is also unknown how many fixations are required to recognize a face. To investigate these questions, we recorded eye movements during face recognition. During an otherwise standard face-recognition task, subjects were allowed a variable number of fixations before the stimulus was masked. We found that optimal recognition performance is achieved with two fixations; performance does not improve with additional fixations. The distribution of the first fixation is just to the left of the center of the nose, and that of the second fixation is around the center of the nose. Thus, these appear to be the preferred landing positions for face recognition. Furthermore, the fixations made during face learning differ in location from those made during face recognition and are also more variable in duration; this suggests that different strategies are used for face learning and face recognition.


2003 ◽  
Vol 14 (09) ◽  
pp. 453-470 ◽  
Author(s):  
Richard H. Wilson

A simple word-recognition task in multitalker babble for clinic use was developed in the course of four experiments involving listeners with normal hearing and listeners with hearing loss. In Experiments 1 and 2, psychometric functions for the individual NU No. 6 words from Lists 2, 3, and 4 were obtained with each word in a unique segment of multitalker babble. The test paradigm that emerged involved ten words at each of seven signal-to-babble ratios (S/B) from 0 to 24 dB. Experiment 3 examined the effect that babble presentation level (70, 80, and 90 dB SPL) had on recognition performance in babble, whereas Experiment 4 studied the effect that monaural and binaural listening had on recognition performance. For listeners with normal hearing, the 90th percentile was 6 dB S/B. In comparison to the listeners with normal hearing, the 50% correct points on the functions for listeners with hearing loss were at 5 to 15 dB higher signal-to-babble ratios.
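The paradigm above (ten words at each of seven signal-to-babble ratios from 0 to 24 dB) yields a sampled psychometric function per listener, from which points such as the 50% correct level can be read off. As an illustration only (the abstract does not describe the study's actual curve-fitting method), a linear interpolation between the two bracketing levels, with hypothetical percent-correct scores:

```python
def sb_at_50(levels, pcts):
    """Interpolate the signal-to-babble ratio (dB) at which recognition
    first reaches 50% correct, assuming performance improves with S/B.
    `levels` and `pcts` are paired; the data here are hypothetical."""
    points = list(zip(levels, pcts))
    for (l0, p0), (l1, p1) in zip(points, points[1:]):
        if p0 < 50 <= p1:
            return l0 + (50 - p0) * (l1 - l0) / (p1 - p0)
    return None  # function never crosses 50% in the sampled range

# seven S/B levels from 0 to 24 dB, hypothetical scores
levels = [0, 4, 8, 12, 16, 20, 24]
pcts = [10, 20, 40, 60, 85, 95, 100]
print(sb_at_50(levels, pcts))  # 10.0
```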


2005 ◽  
Vol 16 (06) ◽  
pp. 367-382 ◽  
Author(s):  
Richard H. Wilson ◽  
Deborah G. Weakley

The purpose of this study was to determine whether performance on a 500 Hz MLD task and a word-recognition task in multitalker babble covaried or varied independently for listeners with normal hearing and for listeners with hearing loss. Young listeners with normal hearing (n = 25) and older listeners (25 per decade from 40–80 years, n = 125) with sensorineural hearing loss were studied. Thresholds at 500 and 1000 Hz were ≤30 dB HL and ≤40 dB HL, respectively, with thresholds above 1000 Hz <100 dB HL. There was no systematic relationship between the 500 Hz MLD and word-recognition performance in multitalker babble. Higher SoNo and SπNo thresholds were observed for the older listeners, but the MLDs were the same for all groups. Word recognition in babble, in terms of signal-to-babble ratio, was on average 6.5 dB (40- to 49-year-old group) to 10.8 dB (80- to 89-year-old group) poorer for the older listeners with hearing loss. Neither pure-tone thresholds nor word-recognition abilities in quiet accurately predicted word-recognition performance in multitalker babble.


1993 ◽  
Vol 14 (3) ◽  
pp. 369-385 ◽  
Author(s):  
Norman S. Segalowitz ◽  
Sidney J. Segalowitz

ABSTRACT
Practice on cognitive tasks, in general, and word recognition tasks, in particular, will usually lead to faster and more stable responding. We present an analysis of the relationship between observed reductions in performance latency and latency variability with respect to whether processing has merely become faster across the board or whether a qualitative change, such as automatization, has taken place. The coefficient of variability (CV) - the standard deviation of response time divided by the mean latency - is shown to be useful for this purpose. A cognitive interpretation of the CV is given that relates it to issues of skill development. Data from second language learners' word recognition performance and from a simple detection task are presented which confirm predictions drawn from this interpretation of the cognitive significance of the CV. Initial improvement in a second language word recognition task was interpreted as involving more efficient controlled processing, which later gave way to automatization. The implications of this index of skill are discussed in relation to second language development and the general issue of automaticity of processing components in cognitive skills.
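The CV defined above is straightforward to compute. In the sketch below the response-time data are hypothetical, but the comparison illustrates the paper's logic: if practice brings automatization rather than mere across-the-board speedup, variability shrinks disproportionately to the mean and the CV falls.

```python
import statistics

def coefficient_of_variability(rts):
    """CV = standard deviation of response times / mean latency.
    A drop in CV with practice (not just a drop in mean RT) is the
    signature the abstract links to automatization."""
    return statistics.stdev(rts) / statistics.mean(rts)

# hypothetical response times (ms) early vs. late in practice
early = [820, 940, 760, 1010, 880]
late = [540, 560, 530, 555, 545]
print(coefficient_of_variability(early) > coefficient_of_variability(late))  # True
```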


2014 ◽  
Vol 17 ◽  
Author(s):  
María Macaya ◽  
Manuel Perea

Abstract
The study of the effects of typographical factors on lexical access has been rather neglected in the literature on visual-word recognition. Indeed, current computational models of visual-word recognition employ an unrefined letter feature level in their coding schemes. In a letter recognition experiment by Pelli, Burns, Farell, and Moore-Page (2006), letters in Bookman boldface yielded greater efficiency (i.e., a higher ratio of thresholds of an ideal observer versus a human observer) than letters in Bookman regular under visual noise. Here we examined whether the effect of bold emphasis generalizes to a common visual-word recognition task (lexical decision: "is the item a word?") under standard viewing conditions. Each stimulus was presented either with or without bold emphasis (e.g., actor in boldface vs. actor in regular type). To help determine the locus of the effect of bold emphasis, word-frequency (low vs. high) was also manipulated. Results revealed that responses to words in boldface were faster than responses to words without emphasis; this advantage was restricted to low-frequency words. Thus, typographical features play a non-negligible role during visual-word recognition and, hence, the letter feature level of current models of visual-word recognition should be amended.


1978 ◽  
Vol 10 (3) ◽  
pp. 303-305 ◽  
Author(s):  
Roberta A. Dollinger ◽  
David N. Walker

A word recognition task was presented to 72 lower social class and 72 upper-middle social class kindergarteners who were randomly assigned to one of three methods (Word, Word Picture, or Word Object) and were asked to learn to recognize eight low-frequency words. There were significant differences between lower and upper socioeconomic levels and among the three methods (p < .01). The Word Method was superior to the Word Picture and Word Object Methods (p < .05) for children from both lower and upper-middle class schools, and students from upper socioeconomic level schools scored higher than those from the lower socioeconomic schools (p < .01).


1978 ◽  
Vol 47 (3_suppl) ◽  
pp. 1231-1238 ◽  
Author(s):  
Robert C. Marshall ◽  
Deanie Kushner ◽  
David Phillips

This study examined the letter-recognition abilities of 44 aphasic and 10 normal subjects. On a 26-item letter recognition task, normal subjects made no errors. Moderately aphasic subjects showed minimal difficulty and did not differ significantly in performance from normals. Severely aphasic subjects exhibited marked impairment and made significantly lower letter-recognition scores than moderately aphasic or normal subjects. Type of aphasia as determined by conversational speech fluency (fluent or nonfluent) seemed to affect the letter-recognition performance of the severely aphasic subjects: fluent severely aphasic subjects made significantly lower scores than all other groups, and nonfluent severely aphasic subjects made significantly lower scores than all groups except the severe fluent group. The types of letter-recognition errors produced by the two severely aphasic groups offer some explanation for their performance differences: errors of the nonfluent group were more likely to be related to the stimulus letter, whereas errors from the fluent group tended to be random. Findings indicate that the intactness of the aphasic subject's semantic associational network is important to the letter-recognition process.

