Polyhedral Object Recognition with Sparse Data in SIMD Processing Mode

1988 ◽  Author(s): D. Holder ◽  H. Buxton

2020 ◽  Author(s): Joshua S. Rule ◽  Maximilian Riesenhuber

Abstract

Humans quickly learn new visual concepts from sparse data, sometimes from just a single example. Decades of prior work have established the hierarchical organization of the ventral visual stream as key to this ability. Computational work has shown that networks which hierarchically pool afferents across scales and positions can achieve human-like object recognition performance and predict human neural activity. Prior computational work has also reused previously acquired features to efficiently learn novel recognition tasks. These approaches, however, require orders of magnitude more examples than human learners and only reuse intermediate features at the object level or below. None has attempted to reuse extremely high-level visual features capturing entire visual concepts. We used a benchmark deep learning model of object recognition to show that leveraging prior learning at the concept level leads to vastly improved abilities to learn from few examples. These results suggest computational techniques for learning even more efficiently, as well as neuroscientific experiments to better understand how the brain learns from sparse data. Most importantly, however, the model architecture provides a biologically plausible way to learn new visual concepts from a small number of examples, and makes several novel predictions regarding the neural bases of concept representations in the brain.

Author summary

We are motivated by the observation that people regularly learn new visual concepts from as little as one or two examples, far better than, e.g., current machine vision architectures. To understand the human visual system's superior visual concept learning abilities, we used an approach inspired by computational models of object recognition which: 1) use deep neural networks to achieve human-like performance and predict human brain activity; and 2) reuse previous learning to efficiently master new visual concepts. 
These models, however, require many times more examples than human learners and, critically, reuse only low-level and intermediate information. None has attempted to reuse extremely high-level visual features (i.e., entire visual concepts). We used a neural network model of object recognition to show that reusing concept-level features leads to vastly improved abilities to learn from few examples. Our findings suggest techniques for future software models that could learn even more efficiently, as well as neuroscience experiments to better understand how people learn so quickly. Most importantly, however, our model provides a biologically plausible way to learn new visual concepts from a small number of examples.
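The core idea described above, learning a new concept by reusing a frozen, previously learned high-level feature space rather than retraining the encoder, can be illustrated with a minimal sketch. Everything below is a hypothetical toy model, not the paper's actual architecture: the "frozen network" is simulated by noisy draws around fixed feature vectors, and the few-shot readout is a simple prototype average in that reused feature space.

```python
import numpy as np

rng = np.random.default_rng(0)
n_features = 64  # dimensionality of the (frozen) high-level feature space

# Fixed "concept-level" feature directions, standing in for features
# already learned from prior visual experience.
prototypes = rng.standard_normal((10, n_features))

def encode(concept_ids, noise=0.1):
    """Stand-in for a frozen pretrained network: maps each example
    (identified here only by its true concept id) to a noisy vector
    in the reused high-level feature space."""
    samples = prototypes[concept_ids]
    return samples + noise * rng.standard_normal(samples.shape)

# Few-shot learning: estimate a new concept from k examples by averaging
# their frozen features -- only the readout is learned, not the encoder.
k = 2
support = encode(np.array([3] * k))   # k examples of a novel concept
new_concept = support.mean(axis=0)    # the learned concept prototype

# Classify a fresh example by nearest prototype in the shared feature space.
query = encode(np.array([3]))[0]
candidates = np.vstack([prototypes[7], new_concept])
pred = int(np.argmin(np.linalg.norm(candidates - query, axis=1)))
print("query assigned to the new concept:", pred == 1)
```

Because the reused feature space already separates concepts well, a readout fit from only k = 2 examples suffices here; that is the intuition behind the claimed gains from concept-level reuse, though the real model is a deep network rather than this prototype toy.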


GeroPsych ◽  
2010 ◽  
Vol 23 (3) ◽  
pp. 169-175 ◽  
Author(s):  
Adrian Schwaninger ◽  
Diana Hardmeier ◽  
Judith Riegelnig ◽  
Mike Martin

In recent years, research on cognitive aging has increasingly focused on cognitive development across middle adulthood. However, little is still known about the long-term effects of intensive job-specific training on fluid intellectual abilities. In this study we examined the effects of age- and job-specific practice of cognitive abilities on detection performance in airport security x-ray screening. In Experiment 1 (N = 308; 24–65 years), we examined performance in the X-ray Object Recognition Test (ORT), a speeded visual object recognition task in which participants have to find dangerous items in x-ray images of passenger bags; in Experiment 2 (N = 155; 20–61 years), we examined performance in an on-the-job object recognition test frequently used in baggage screening. Results from both experiments show high performance in older adults, but significant negative age correlations that cannot be overcome by more years of job-specific experience. We discuss the implications of our findings for theories of lifespan cognitive development and for training concepts.

