On-line processing of verb-argument constructions: Visual recognition threshold and naming latency

Author(s):  
Nick C. Ellis
2016 ◽  
Vol 14 (1) ◽  
pp. 105-135 ◽  

Ellis, O’Donnell, and Römer (2014) used free-association tasks to investigate knowledge of Verb-Argument Constructions (VACs). They demonstrated that English speakers have independent implicit knowledge of (i) verb frequency in the VAC, (ii) VAC-verb contingency, and (iii) verb prototypicality in terms of centrality within the VAC semantic network. They concluded that VAC processing involves rich associations, tuned by verb type and token frequencies and their contingencies of usage, which interface syntax, lexis, and semantics. However, the tasks they used, where respondents had a minute to think of the verbs that fitted in VAC frames like ‘he __ across the….’, ‘it __ of the….’, etc., were quite conscious and explicit. The current experiments therefore investigate the effects of these factors in on-line processing for recognition and naming. Experiment 1 tested the recognition of VAC exemplars from very brief, masked, visual presentations. Recognition threshold was affected by overall verb frequency in the language, by the frequency with which verbs appear in the VAC, and by VAC-verb contingency (ΔPcw). Experiment 2 had participants successively name VAC arguments as quickly as possible: first the VAC and then the preposition. Preposition naming latency was a function of verb frequency in the VAC. We consider the implications for the representation and processing of VACs.
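The VAC-verb contingency measure ΔPcw is conventionally computed from a 2×2 co-occurrence table of verb-by-construction counts. As an illustrative sketch only (the cell labels and direction of association here are assumptions; the exact counts Ellis and colleagues derived from corpus data may differ):

```python
def delta_p(a, b, c, d):
    """One-way contingency (DeltaP) from a 2x2 co-occurrence table.

    Assumed cell layout (hypothetical labels, for illustration):
    a: tokens of the target verb inside the VAC
    b: tokens of the target verb outside the VAC
    c: tokens of other verbs inside the VAC
    d: tokens of other verbs outside the VAC
    """
    # P(verb | VAC) - P(verb | not VAC)
    return a / (a + c) - b / (b + d)
```

A verb strongly attracted to the construction yields a ΔP near 1; a verb that occurs freely elsewhere but rarely in the construction yields a value near 0 or below.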


1964 ◽  
Vol 19 (1) ◽  
pp. 207-210 ◽  
Author(s):  
James H. Koplin ◽  
Charles D. Spielberger

High and low Verbal Ability Ss responded to 12 trigrams presented in a Gerbrands tachistoscope. The trigrams varied in meaningfulness (low, medium, high) as defined by Noble's a' scale and were homogeneous in pronounceability. All Ss received the trigrams in the same order, with meaningfulness counterbalanced over the first and second half of the series. Contrary to expectation, recognition thresholds tended to vary directly with meaningfulness, but only for words in the second half of the list. These findings were discussed in terms of their implications for previously observed relationships between frequency of usage, meaningfulness, and recognition thresholds for words and nonsense material.


1965 ◽  
Vol 20 (3_suppl) ◽  
pp. 1141-1146 ◽  
Author(s):  
Robert M. Dowling

Recognition thresholds of the words push, pull, and part were measured for 36 Ss under conditions of pushing, pulling, and no specific activity to test the hypotheses (a) that sensorimotor activity would have a threshold-lowering effect on words directionally related to the activity and (b) that it would have a threshold-raising effect for words not so related. Results supporting the first hypothesis were obtained and were discussed as indicating a need to refine the generalized statement, based on previous research, that concurrent activity interferes with perceptual functioning. The failure to find support for the second hypothesis was discussed as suggesting the need to consider, as well, the amount of physical activity involved.


1985 ◽  
Vol 38 (3) ◽  
pp. 217-222 ◽  
Author(s):  
Lorraine K. Tyler ◽  
Jeanine Wessels

2011 ◽  
Vol 51 (17) ◽  
pp. 1966-1971 ◽  
Author(s):  
Clementine Thurgood ◽  
T.W. Allan Whitfield ◽  
John Patterson

Author(s):  
William Krakow

In the past few years, on-line digital television frame-store devices coupled to computers have been employed in attempts to measure the microscope parameters of defocus and astigmatism. The ultimate goal of such tasks is to fully adjust the operating parameters of the microscope and obtain an optimum image for viewing in terms of its information content. The initial approach to this problem, for high-resolution TEM imaging, was to obtain the power spectrum from the Fourier transform of an image, find the contrast transfer function oscillation maxima, and subsequently correct the image. This technique requires a fast computer, a direct memory access device, and even an array processor to accomplish these tasks on limited-size arrays in a few seconds per image. It is not clear that the power spectrum could be used for more than defocus correction, since correcting astigmatism is a formidable pattern-recognition problem.
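The power-spectrum step described above can be sketched with NumPy. This is an illustration under stated assumptions, not the authors' routine: `radial_power_spectrum` and its binning scheme are hypothetical names, and the azimuthal average simply exposes the contrast-transfer-function oscillation maxima as peaks in a 1-D profile.

```python
import numpy as np

def radial_power_spectrum(image, n_bins=64):
    """Azimuthally averaged power spectrum of a 2-D image.

    CTF oscillation maxima (Thon rings) appear as peaks in the
    returned 1-D radial profile.
    """
    f = np.fft.fftshift(np.fft.fft2(image))
    p = np.abs(f) ** 2                      # power spectrum
    cy, cx = np.array(p.shape) // 2
    y, x = np.indices(p.shape)
    r = np.hypot(y - cy, x - cx)            # radius of each pixel
    bins = np.linspace(0.0, r.max(), n_bins + 1)
    idx = np.clip(np.digitize(r.ravel(), bins) - 1, 0, n_bins - 1)
    sums = np.bincount(idx, weights=p.ravel(), minlength=n_bins)
    counts = np.bincount(idx, minlength=n_bins)
    return sums / np.maximum(counts, 1)     # mean power per radial bin
```

Locating the ring maxima in this profile gives an estimate of defocus; astigmatism breaks the rings' circular symmetry, which is why a scalar radial average cannot capture it.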


Author(s):  
A.M.H. Schepman ◽  
J.A.P. van der Voort ◽  
J.E. Mellema

A Scanning Transmission Electron Microscope (STEM) was coupled to a small computer. The system (see Fig. 1) was built using a Philips EM400, equipped with a scanning attachment, and a DEC PDP11/34 computer with 34K memory. The gun (Fig. 2) consists of a continuously renewed tip, of radius 0.2 to 0.4 μm, of a tungsten wire heated just below its melting point by a focussed laser beam (1). On-line operation procedures were developed, aimed at reducing the radiation dose to the specimen area of interest while the various imaging parameters are selected and the information content is registered. Whereas the theoretical limiting spot size is 0.75 nm (2), routine resolution checks showed minimum distances on the order of 1.2 to 1.5 nm between corresponding intensity maxima in successive scans. This value is sufficient for structural studies of regular biological material to test the performance of STEM against high-resolution CTEM.


Author(s):  
Neil Rowlands ◽  
Jeff Price ◽  
Michael Kersker ◽  
Seichi Suzuki ◽  
Steve Young ◽  
...  

Three-dimensional (3D) microstructure visualization on the electron microscope requires that the sample be tilted to different positions to collect a series of projections. This tilting should be performed rapidly for on-line stereo viewing and precisely for off-line tomographic reconstruction. Usually a projection series is collected using mechanical stage tilt alone. The stereo pairs must be viewed off-line, and the 60 to 120 tomographic projections must be aligned with fiduciary markers or digital correlation methods. The delay in viewing stereo pairs and the alignment problems in tomographic reconstruction could be eliminated or improved by tilting the beam, if such tilt could be accomplished without image translation.

A microscope capable of beam tilt with simultaneous image shift to eliminate tilt-induced translation has been investigated for 3D imaging of thick (1 μm) biologic specimens. By tilting the beam above and through the specimen and bringing it back below the specimen, a brightfield image with a projection angle corresponding to the beam tilt angle can be recorded (Fig. 1a).


Author(s):  
G.Y. Fan ◽  
J.M. Cowley

In recent developments, the ASU HB5 has been modified so that the timing, positioning, and scanning of the finely focused electron probe can be entirely controlled by a host computer. This made an asynchronous handshake possible between the HB5 STEM and the image processing system, which consists of a host computer (PDP 11/34), a DeAnza image processor (IP 5000) interfaced with a low-light-level TV camera, an array processor (AP 400), and various peripheral devices. This greatly facilitates the pattern recognition technique initiated by Monosmith and Cowley. Software called NANHB5 is under development which, instead of employing a set of photo-diodes to detect strong spots on a TV screen, uses various software techniques, including on-line fast Fourier transform (FFT), to recognize patterns of greater complexity, taking advantage of the sophistication of our image processing system and the flexibility of computer software.
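One way an on-line FFT can replace photo-diode spot detection is template matching by FFT-based cross-correlation: the correlation peak marks where a known pattern best matches the incoming frame. The sketch below is a generic illustration of that technique (the function name and interface are assumptions, not NANHB5's actual routines):

```python
import numpy as np

def match_position(frame, template):
    """Locate a template in a frame via FFT-based cross-correlation.

    Returns the (row, col) offset at which the template best
    matches the frame (circular correlation, so offsets wrap).
    """
    F = np.fft.fft2(frame)
    # Zero-pad the template to the frame size before transforming
    T = np.fft.fft2(template, s=frame.shape)
    cc = np.fft.ifft2(F * np.conj(T)).real  # cross-correlation surface
    return np.unravel_index(np.argmax(cc), cc.shape)
```

Because the correlation of an N×N frame costs only a few FFTs, this runs fast enough for on-line use on array-processor hardware of the kind described above.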

