Psychophysical Scaling Reveals a Unified Theory of Visual Memory Strength

2018
Author(s):
Mark W. Schurgin
John T. Wixted
Timothy F. Brady

Almost all models of visual memory implicitly assume that errors in mnemonic representations are linearly related to distance in stimulus space. Here, we show that neither memory nor perception is appropriately scaled in stimulus space; instead, both are based on a transformed similarity representation that is non-linearly related to stimulus space. This result calls into question a foundational assumption of extant models of visual working memory. Once psychophysical similarity is taken into account, aspects of memory that have been thought to demonstrate a fixed working memory capacity of ~3-4 items, and to require fundamentally different representations across different stimuli, tasks, and types of memory, can be parsimoniously explained with a unitary signal detection framework. These results have significant implications for the study of visual memory and lead to a substantial reinterpretation of the relationship between perception, working memory, and long-term memory.
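The non-linear similarity claim and the signal detection framework can be made concrete in a few lines. Below is a minimal sketch, assuming an exponential generalization gradient and a max-rule signal-detection decision over a 360-value color wheel; the `similarity` function and all parameter values are illustrative assumptions, not the authors' fitted model.

```python
# Sketch of a signal-detection account over a nonlinear psychophysical
# similarity function. The exponential gradient and parameter values are
# assumptions for illustration, not the paper's fitted parameters.
import numpy as np

rng = np.random.default_rng(0)

def similarity(delta_deg, tau=20.0):
    """Psychophysical similarity falls off non-linearly (here,
    exponentially) with distance in stimulus space (degrees)."""
    return np.exp(-np.abs(delta_deg) / tau)

def simulate_report(d_prime, n_trials=10_000):
    """One continuous-report trial: each of 360 color-wheel values gets a
    familiarity signal proportional to its similarity to the target, plus
    unit-variance noise; the maximum-familiarity value is reported."""
    deltas = np.arange(-180, 180)               # distance from target (deg)
    mean_signal = d_prime * similarity(deltas)  # expected familiarity
    noise = rng.standard_normal((n_trials, deltas.size))
    return deltas[np.argmax(mean_signal + noise, axis=1)]  # report errors

# A single strength parameter (d') controls the whole error distribution:
# stronger memories give tighter errors with no separate capacity parameter.
for d in (0.5, 1.5, 3.0):
    err = simulate_report(d)
    print(f"d' = {d:.1f}: mean |error| = {np.abs(err).mean():.1f} deg")
```

Raising d' in this sketch simultaneously sharpens near-target responses and reduces far-from-target responses, which is the sense in which a unitary strength account can mimic what fixed-capacity models attribute to separate mechanisms.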

Author(s):
Stoo Sepp
Steven J. Howard
Sharon Tindall-Ford
Shirley Agostinho
Fred Paas

In 1956, Miller first reported a capacity limitation in the amount of information the human brain can process, thought to be seven plus or minus two items. The system of memory used to process information for immediate use was coined "working memory" by Miller, Galanter, and Pribram in 1960. In 1968, Atkinson and Shiffrin proposed their multistore model of memory, which theorized that the memory system was separated into short-term memory, long-term memory, and the sensory register, the latter of which temporarily holds and forwards information from sensory inputs to short-term memory for processing. Baddeley and Hitch built upon the concept of multiple stores, leading to the development of the multicomponent model of working memory in 1974, which described two stores devoted to the processing of visuospatial and auditory information, both coordinated by a central executive system. Cowan's subsequent theorizing focused on attentional factors in the effortful and effortless activation and maintenance of information in working memory; in 1988, Cowan published his model, the scope and control of attention model. In contrast, since the early 2000s, Engle has investigated working memory capacity through the lens of his individual differences model, which does not seek to quantify capacity in the same way as Miller or Cowan. Instead, this model describes working memory capacity as the interplay between primary memory (working memory), the control of attention, and secondary memory (long-term memory). This affords the opportunity to focus on individual differences in working memory capacity and to extend theorizing beyond storage to the manipulation of complex information. These models and advancements have made significant contributions to our understanding of learning and cognition, informing educational research and practice in particular. Emerging areas of inquiry include investigating the use of gestures to support working memory processing, leveraging working memory measures to target instructional strategies for individual learners, and working memory training. Given that working memory is still debated and not yet fully understood, researchers continue to investigate its nature, its role in learning and development, and its implications for educational curricula, pedagogy, and practice.


2016
Vol 12 (4)
pp. 567-583
Author(s):
Hamdollah Manzari Tavakoli

The relationship between children's accuracy during numerical magnitude comparisons and arithmetic ability has been investigated by many researchers. These studies have reported contradictory results, owing to the use of many different tasks and indices to determine the accuracy of numerical magnitude comparisons. In light of this inconsistency among measurement techniques, the present study investigated this relationship among Iranian second-grade children (n = 113) using a pre-established test (the Numeracy Screener) to measure numerical magnitude comparison accuracy. The results revealed that both the symbolic and non-symbolic items of the Numeracy Screener correlated significantly with arithmetic ability. However, after controlling for the effects of working memory, processing speed, and long-term memory, only performance on symbolic items accounted for unique variance in children's arithmetic ability. Furthermore, while working memory uniquely contributed to arithmetic ability in both one- and two-digit arithmetic problem solving, processing speed uniquely explained variance only in single-digit arithmetic skills, and long-term memory did not contribute significant additional variance for either one-digit or two-digit arithmetic problem solving.
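The "unique variance after controls" logic described here is a hierarchical regression. Below is a minimal sketch of that analysis pattern; the simulated data, variable names, and effect sizes are assumptions for illustration only, not the study's data or code.

```python
# Hierarchical-regression sketch: does comparison accuracy explain variance
# in arithmetic beyond working memory, processing speed, and long-term
# memory? Simulated data; all effect sizes are illustrative assumptions.
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(1)
n = 113  # matches the study's sample size

wm, speed, ltm = rng.standard_normal((3, n))       # control measures
symbolic = 0.5 * wm + rng.standard_normal(n)       # comparison accuracy
nonsymbolic = 0.3 * symbolic + rng.standard_normal(n)
arithmetic = (0.4 * wm + 0.2 * speed + 0.5 * symbolic
              + rng.standard_normal(n))

# Step 1: controls only.  Step 2: add the comparison measures.
controls = sm.add_constant(np.column_stack([wm, speed, ltm]))
base = sm.OLS(arithmetic, controls).fit()
full = sm.OLS(arithmetic,
              np.column_stack([controls, symbolic, nonsymbolic])).fit()

# The R^2 increment is the unique variance beyond the control measures.
print(f"R^2, controls only:            {base.rsquared:.3f}")
print(f"R^2, with comparison accuracy: {full.rsquared:.3f}")
print(f"Delta R^2:                     {full.rsquared - base.rsquared:.3f}")
```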


2019
Author(s):
Annalise Miner
Mark Schurgin
Timothy F. Brady

Long-term memory is often considered easily corruptible, imprecise, and inaccurate, especially in comparison to working memory. However, most research used to support these conclusions relies on weak long-term memories: those formed after only one brief exposure to an item. Here we investigated the fidelity of visual long-term memory in a more naturalistic setting, with repeated exposures, and asked how it compares to the fidelity of visual working memory. Using psychophysical methods designed to precisely measure the fidelity of visual memory, we demonstrate that long-term memory for the color of frequently seen objects is as accurate as working memory for the color of a single item seen 1 second ago. In particular, we show that repetition greatly improves long-term memory, including the ability to discriminate an item from a very similar item ('fidelity'), in both a lab setting (Exps. 1-3) and a naturalistic setting (brand logos, Exp. 4). Overall, our results demonstrate that the fidelity of visual long-term memory is impressive, and even higher than previously indicated when items are encountered repeatedly. Furthermore, our results suggest that there is no distinction between the fidelity of visual working memory and that of visual long-term memory; instead, both systems are capable of storing similarly high-fidelity memories under the right circumstances. Our results also provide further evidence that there is no fundamental distinction between the 'precision' of memory and the 'likelihood of retrieving a memory', instead suggesting that a single continuous measure of memory strength best accounts for both working and long-term memory.
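The fidelity measure invoked here, discriminating a studied item from a very similar foil, is naturally expressed as signal-detection sensitivity. Below is a minimal sketch of computing d' from hit and false-alarm rates; the rates and the repetition counts are hypothetical numbers chosen only to illustrate how repetition would raise d', not data from the experiments.

```python
# Minimal signal-detection sketch of the fidelity measure: sensitivity (d')
# for calling a studied color "old" vs. a very similar foil. The hit and
# false-alarm rates below are hypothetical, for illustration only.
from scipy.stats import norm

def d_prime(hit_rate, false_alarm_rate):
    """Equal-variance Gaussian signal-detection sensitivity."""
    return norm.ppf(hit_rate) - norm.ppf(false_alarm_rate)

# Illustrative: one brief exposure vs. many repetitions of the same object.
print(f"1 exposure:      d' = {d_prime(0.65, 0.35):.2f}")
print(f"16 repetitions:  d' = {d_prime(0.90, 0.10):.2f}")
```

Because d' is a single continuous strength measure, it captures both how precise a memory is and how likely it is to be retrieved at all, which is the sense in which the abstract argues the two are not fundamentally distinct.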


2019
Vol 34 (2)
pp. 268-281
Author(s):
Lea M. Bartsch
Vanessa M. Loaiza
Klaus Oberauer

2003
Vol 26 (6)
pp. 742-742
Author(s):
Janice M. Keenan
Jukka Hyönä
Johanna K. Kaakinen

Ruchkin et al.'s view of working memory as activated long-term memory is more compatible with language processing than models such as Baddeley's, but it raises questions about individual differences in working memory and the validity of domain-general capacity estimates. Does it make sense to refer to someone as having low working memory capacity if capacity depends on particular knowledge structures tapped by the task?

