The maze task: Measuring forced incremental sentence processing time

2009 ◽  
Vol 41 (1) ◽  
pp. 163-171 ◽  
Author(s):  
Kenneth I. Forster ◽  
Christine Guerrera ◽  
Lisa Elliot

2021 ◽  
Author(s):  
Dustin Alfonso Chacón

Processing filler-gap dependencies (‘extraction’) depends on complex top-down predictions. This is observed in comprehenders’ ability to avoid resolving filler-gap dependencies in syntactic island contexts, and in their immediate sensitivity to the plausibility of the resulting interpretation. How complex can these predictions be? In this paper, we examine the processing of extraction from adjunct clauses. Adjunct clauses are argued to be syntactic islands; however, extraction is permitted if the adjunct clause and main clause satisfy specific compositional and conceptual semantic criteria. In an acceptability judgment task, we found that this generalization is robust. Additionally, by comparing adjunct clauses to conjunct VPs, which are similarly argued to permit extraction depending on semantic factors, our results show that this is a property specific to adjunct clauses. However, in an A-Maze task, we found no evidence that this knowledge is deployed in real-time sentence processing. Instead, we found that comprehenders attempted to resolve a filler-gap dependency in an adjunct clause regardless of its island status. We propose that this is because deploying this linguistic constraint depends on a second-order serial search over event schemata, which is likely costly and time-consuming. Thus, comprehenders opt for a riskier strategy and attempt resolution into adjunct clauses categorically.


2020 ◽  
Vol 15 (2) ◽  
pp. 366-383
Author(s):  
Jordan Gallant ◽  
Gary Libben

Abstract The maze task (Forster, Guerrera & Elliot, 2009; Forster, 2010) is designed to measure focal lexical and sentence processing effects in a highly controlled manner. We discuss how this task can be modified and extended to provide a unique opportunity for the investigation of lexical effects in sentence context. We present results that demonstrate how the maze task can be used to examine both facilitation and inhibition effects. Most importantly, it can do this while leaving the target sentence unchanged across conditions. This is an advantage that is not available with other paradigms. We also present new versions of the maze task that allow for the isolation of specific lexical effects and that enhance the measurement of lexical recognition through visual animation. Finally, we discuss how the maze task brings to the foreground the extent to which complex multi-layered priming and inhibition are intrinsic to sentence reading and how the maze task can tap this complexity.


2010 ◽  
Vol 5 (3) ◽  
pp. 347-357 ◽  
Author(s):  
Kenneth I. Forster

A word maze consists of a sequence of frames, each containing two alternatives. Subjects are required to select one of those alternatives according to some criterion defined by the experimenter. This simple technique can be used to investigate a wide range of issues. For example, if one alternative is a word and the other is a nonword, the subject may be required to press a key to indicate where the word is. This provides an interesting variant of the lexical decision task, since the difficulty of the lexical discrimination can be manipulated on a trial-by-trial basis by varying the properties of the nonword alternative. On the other hand, a version of a self-paced reading task is created if each successive frame contains a word that can continue a sentence, and the subject is required to identify which word that is. Once again, by manipulating the properties of the incorrect alternative one may be able to control the mode of processing adopted by the subject. Although this is a highly artificial form of reading, it does allow one to study sentence processing under more tightly controlled conditions.
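The frame-by-frame logic described above can be illustrated with a minimal sketch. This is a hypothetical example, not the original Forster implementation: the frame list, function name, and scoring convention are all assumptions made for illustration. Each frame pairs the correct sentence continuation with a distractor, and the trial typically aborts at the first incorrect choice; per-frame decision times (not modeled here) are the dependent measure.

```python
# Hypothetical maze-task frames: (correct continuation, distractor).
# In a real experiment the left/right position of the correct word
# is randomized; here the correct word is always index 0 for clarity.
FRAMES = [
    ("The", "x-x-x"),   # first frame often pairs the start word with a placeholder
    ("cat", "bloke"),   # correct continuation vs. implausible distractor
    ("sat", "lamp"),
]

def run_trial(frames, responses):
    """Score a sequence of choices (0 = correct word, 1 = distractor).

    Returns the number of frames answered correctly before the first
    error; a trial with no errors scores the full frame count.
    """
    correct = 0
    for (target, distractor), choice in zip(frames, responses):
        if choice == 0:
            correct += 1
        else:
            break  # maze trials conventionally abort on the first error
    return correct
```

For instance, `run_trial(FRAMES, [0, 0, 0])` scores a complete trial, while an error on the second frame ends the trial there, which is what makes the maze a "forced incremental" measure: processing cannot proceed past a frame until the correct word is identified.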


2017 ◽  
Author(s):  
James Nye ◽  
Fernanda Ferreira

The reported studies investigate online processing of taboo words (e.g. shit) and their censored equivalents (e.g. s**t), relative to semantically matched non-taboo words (e.g. junk). Participants’ eyes were tracked as they read sentences which contained one of the critical words. In Experiment 1, participants also encountered censored-neutral words, known as masked (e.g. j**k), but in Experiment 2, participants only encountered the taboo, censored, and neutral conditions, thus manipulating the perceptual certainty of censored words. Taboo and neutral words required similar processing time across all reading measures; liberal post-hoc analyses replicated the null effect. With regard to the censored words, Experiment 1 revealed that early word-recognition requirements were similar between censored, taboo, and neutral words, with censored words requiring additional processing time in later sentence integration measures. However, the results from Experiment 2 revealed no differences in reading time between conditions, suggesting that the masked words in Experiment 1 motivated participants to double-check the censored words due to their orthographic similarity. After reading all of the sentences in Experiment 2, participants’ memory of the sentences was tested. Participants were able to differentiate whether they had encountered a neutral or a profane word (i.e. either taboo or censored), but they were unable to identify the specific profane word that they encountered in the reading task. We argue that the results relating to the taboo words further clarify language’s role within the functional architecture of cognition, while the results relating to censorship inform how statistical regularities of language are used to process lexical-semantic information.


Author(s):  
James C. Long

Over the years, many techniques and products have been developed to reduce the amount of time spent in a darkroom processing electron microscopy negatives and micrographs. One of the latest tools effective in this effort is the Mohr/Pro-8 film and RC paper processor. At the time of writing, a unit has recently been installed in the photographic facilities of the Electron Microscopy Center at Texas A&M University. It is being evaluated for use with TEM sheet film, SEM sheet film, 35mm roll film (B&W), and RC paper. Originally designed for use in the phototypesetting industry, this processor has only recently been introduced to the field of electron microscopy. The unit is a tabletop model, approximately 1.5 × 1.5 × 2.0 ft, and uses a roller-transport method of processing. It has an adjustable processing time of 2 to 6.5 minutes, dry-to-dry. The installed unit has an extended processing switch, enabling processing times of 8 to 14 minutes to be selected.


Author(s):  
Margreet Vogelzang ◽  
Christiane M. Thiel ◽  
Stephanie Rosemann ◽  
Jochem W. Rieger ◽  
Esther Ruigendijk

Purpose Adults with mild-to-moderate age-related hearing loss typically exhibit issues with speech understanding, but their processing of syntactically complex sentences is not well understood. We test the hypothesis that the difficulties listeners with hearing loss have with the comprehension and processing of syntactically complex sentences are due to the processing of degraded input interfering with the successful processing of complex sentences. Method We performed a neuroimaging study with a sentence comprehension task, varying sentence complexity (through subject–object order and verb–arguments order) and cognitive demands (presence or absence of a secondary task) within subjects. Groups of older subjects with hearing loss (n = 20) and age-matched normal-hearing controls (n = 20) were tested. Results The comprehension data show effects of syntactic complexity and hearing ability, with normal-hearing controls outperforming listeners with hearing loss, seemingly more so on syntactically complex sentences. The secondary task did not influence off-line comprehension. The imaging data show effects of group, sentence complexity, and task, with listeners with hearing loss showing decreased activation in typical speech processing areas, such as the inferior frontal gyrus and superior temporal gyrus. No interactions between group, sentence complexity, and task were found in the neuroimaging data. Conclusions The results suggest that listeners with hearing loss process speech differently from their normal-hearing peers, possibly due to the increased demands of processing degraded auditory input. Increased cognitive demands by means of a secondary visual shape processing task influence neural sentence processing, but no evidence was found that this occurs in a different way for listeners with hearing loss and normal-hearing listeners.

