The effects of intelligence test preparation

1995 ◽  
Vol 9 (1) ◽  
pp. 43-56 ◽  
Author(s):  
Henk T. van der Molen ◽  
Jan Te Nijenhuis ◽  
Gert Keen

The first goal of this study was to investigate the effects of reading a book about intelligence tests, and of a specific test-training programme, on numerical and verbal intelligence tests. The second goal was to investigate to what extent the acquisition of test-specific problem-solving strategies affects the ability to solve items on different but comparable tests (transfer). The experimental design included two factors, practice (pretest or no pretest) and level of preparation (none, book, or training), yielding six conditions of about 26 subjects each, with subjects randomly assigned to conditions. The results showed a strong effect of preparation, especially for the numerical intelligence test and to a lesser degree for the verbal intelligence test. No practice (pretest) effects were found. Positive transfer was demonstrated for the numerical test; the results for the verbal test were less clear. The implications for predictive and construct validity are discussed.

Assessment ◽  
2018 ◽  
Vol 27 (6) ◽  
pp. 1198-1212 ◽  
Author(s):  
Gilles E. Gignac ◽  
Ka Ki Wong

The purpose of this investigation was to examine single-anagram, double-anagram, and multi-anagram versions of the Anagram Persistence Task (APT) for factorial validity, reliability, and convergent validity. Additionally, a battery of intelligence tests was administered to examine convergent validity. Based on an unrestricted factor analysis, two factors were uncovered from the 14 anagram (seven very difficult and seven very easy) response times: test-taking persistence and verbal processing speed. The internal consistency reliabilities for the single-anagram, double-anagram, and multi-anagram (seven difficult anagrams) measures were .42, .85, and .86, respectively. Furthermore, all three versions of the APT correlated positively with intelligence test performance (r ≈ .22). However, the double-anagram and multi-anagram versions also evidenced negative, nonlinear effects with intelligence test performance (r ≈ −.15), which suggested the possibility of testee adaptation. Taking psychometric quality and administration time into consideration simultaneously, the double-anagram version of the APT may be regarded as the preferred version.
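The internal consistency figures reported above are standardly computed as Cronbach's alpha. The following is a minimal NumPy sketch of that computation, not the authors' analysis; the `anagram_times` matrix is synthetic placeholder data standing in for a subjects-by-anagrams response-time matrix.

```python
import numpy as np

def cronbachs_alpha(scores):
    """Cronbach's alpha for an (n_subjects, n_items) score matrix.

    alpha = k/(k-1) * (1 - sum of item variances / variance of totals)
    """
    scores = np.asarray(scores, dtype=float)
    k = scores.shape[1]
    item_var = scores.var(axis=0, ddof=1).sum()   # per-item variances, summed
    total_var = scores.sum(axis=1).var(ddof=1)    # variance of subject totals
    return (k / (k - 1)) * (1 - item_var / total_var)

# Hypothetical data: 100 subjects x 7 difficult anagrams; a shared
# persistence component plus item-specific noise yields correlated items.
rng = np.random.default_rng(0)
anagram_times = rng.normal(60, 15, size=(100, 1)) + rng.normal(0, 10, size=(100, 7))
print(round(cronbachs_alpha(anagram_times), 2))
```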


1966 ◽  
Vol 19 (3) ◽  
pp. 987-990 ◽  
Author(s):  
Raymond H. Holden ◽  
Martin A. Mendelson ◽  
Spencer De Vault

In the present study, a short, self-administered intelligence test (SRA Non-verbal Test, Form AH) was correlated with the WAIS for an unselected sample of 29 mothers in the Providence Child Development Study. The correlation of .81 is significant and similar to the r of .82 between the WAIS Verbal Scale and the WAIS Performance Scale. It is concluded that the 10-min. SRA Non-verbal Test is a satisfactory and valid intelligence test instrument when dealing with specific groups of Ss, but that WAIS IQ category labels should not be applied indiscriminately to SRA IQs.


1975 ◽  
Vol 12 (4) ◽  
pp. 469-477 ◽  
Author(s):  
Kenneth D. Hopkins ◽  
Glenn H. Bracht

Intelligence tests continue to be the most widely used measures of cognitive aptitude, and performance on such measures is usually expressed as an IQ score. Contrary to popular opinion, relatively little is known about the long-term stability of IQ scores from group verbal and non-verbal intelligence tests, especially the latter. This study shows that, below ten years of age, the stability of IQ scores from group verbal tests is considerably below that of the Stanford-Binet. Non-verbal IQ scores were found to have substantially less stability than verbal IQ scores.


1973 ◽  
Vol 33 (1) ◽  
pp. 127-130 ◽  
Author(s):  
Louis S. Dickstein ◽  
Jayne Ayers

Previous research has indicated that manipulating motivating conditions through an examiner's explicit expectancy of good performance can significantly improve performance on intelligence tests. In the present study, the manipulation of incentive was used to improve performance: college women were told that the five best scorers would receive monetary rewards. The group receiving incentive instructions scored significantly higher than the control group on the WAIS Performance Scale and on the Object Assembly subtest. No difference between the groups was obtained for the Advanced Progressive Matrices.


1983 ◽  
Vol 52 (3) ◽  
pp. 747-750 ◽  
Author(s):  
Albert N. Katz

One hundred participants were administered three forms of the Quick Test, both the Verbal and Figural Torrance Tests of creativity, the Remote Associates Test of creativity, and the Revised Art Scale. In addition, two non-creativity tests were administered: the Hidden Words Test, a test of closure, and the Deciphering Language Test, a test of logical reasoning. The Quick Test correlated significantly, at a low to moderate level, with all scales of Torrance's Verbal test, the elaboration subscale of Torrance's Figural test, the Remote Associates Test, and the Deciphering Language Test. These results indicate that the most commonly employed verbal tests of creativity have a substantial verbal-intelligence component.


2020 ◽  
Vol 117 (47) ◽  
pp. 29390-29397 ◽  
Author(s):  
Maithilee Kunda

Observations abound about the power of visual imagery in human intelligence, from how Nobel prize-winning physicists make their discoveries to how children understand bedtime stories. These observations raise an important question for cognitive science: what computations take place in someone’s mind when they use visual imagery? Answering this question is not easy and will require much continued research across the multiple disciplines of cognitive science. Here, we focus on a related and more circumscribed question from the perspective of artificial intelligence (AI): if an intelligent agent uses visual imagery-based knowledge representations and reasoning operations, what kinds of problem solving might be possible, and how would such problem solving work? We highlight recent progress in AI toward answering these questions in the domain of visuospatial reasoning, looking at a case study of how imagery-based artificial agents can solve visuospatial intelligence tests. In particular, we first examine several variations of imagery-based knowledge representations and problem-solving strategies that are sufficient for solving problems from the Raven’s Progressive Matrices intelligence test. We then look at how artificial agents, instead of being designed manually by AI researchers, might learn portions of their own knowledge and reasoning procedures from experience, including learning visuospatial domain knowledge, learning and generalizing problem-solving strategies, and learning the actual definition of the task in the first place.
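To make the imagery-based approach concrete, here is a minimal sketch, not the agents described in the paper, of one pixel-level strategy for a simplified 2×2 Raven's-style item: find the transformation that best maps cell A onto cell B, apply it to cell C, and select the most similar answer choice. The transformation set and the `solve_2x2` helper are illustrative assumptions, and cells are assumed to be equal-sized square binary images.

```python
import numpy as np

# Candidate image transformations: a small subset of the transformations
# explored by imagery-based models of matrix reasoning.
TRANSFORMS = {
    "identity": lambda im: im,
    "rotate_90": lambda im: np.rot90(im, 1),
    "rotate_180": lambda im: np.rot90(im, 2),
    "rotate_270": lambda im: np.rot90(im, 3),
    "flip_horizontal": np.fliplr,
    "flip_vertical": np.flipud,
}

def similarity(a, b):
    """Fraction of pixels on which two equal-sized binary images agree."""
    return np.mean(a == b)

def solve_2x2(cell_a, cell_b, cell_c, answer_choices):
    """Complete the analogy A : B :: C : ? by visual imagery.

    Finds the transformation that best maps A onto B, applies it to C,
    and returns (index of best answer choice, name of that transformation).
    """
    best_name = max(TRANSFORMS,
                    key=lambda n: similarity(TRANSFORMS[n](cell_a), cell_b))
    predicted = TRANSFORMS[best_name](cell_c)
    scores = [similarity(predicted, choice) for choice in answer_choices]
    return int(np.argmax(scores)), best_name

# Tiny worked example: B is A mirrored left-right, so the agent should
# mirror C and select the matching answer choice.
A = np.array([[1, 1, 0],
              [1, 0, 0],
              [1, 0, 0]])
B = np.fliplr(A)
C = np.array([[1, 0, 0],
              [1, 1, 0],
              [1, 0, 0]])
choices = [C, np.fliplr(C), np.rot90(C)]
print(solve_2x2(A, B, C, choices))  # -> (1, 'flip_horizontal')
```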

