View-based dynamic object recognition based on human perception

Author(s): H.H. Bulthoff ◽ C. Wallraven ◽ A. Graf

2010 ◽ Vol 50 (2) ◽ pp. 202-210
Author(s): Alinda Friedman ◽ Quoc C. Vuong ◽ Marcia Spetch

2013 ◽ Vol 48 (1) ◽ pp. 33-45
Author(s): Jinwook Oh ◽ Gyeonghoon Kim ◽ Junyoung Park ◽ Injoon Hong ◽ Seungjin Lee ◽ ...

Author(s): Napoleon H. Reyes ◽ Elmer P. Dadios

This paper presents a novel Logit-Logistic Fuzzy Color Constancy (LLFCC) algorithm and its variants for dynamic color object recognition. In contrast to existing color constancy algorithms, the proposed scheme focuses on manipulating a color locus that depicts the colors of an object, rather than stabilizing the appearance of the whole image. A new set of adaptive contrast manipulation operators is introduced and used in conjunction with a fuzzy inference system. Moreover, a new perspective on extracting an object's color descriptors from the rg-chromaticity space is presented. These descriptors reduce the effects of brightness and darkness while adhering to human perception of color. The proposed scheme substantially reduces processing time by simultaneously compensating for the many factors that degrade the traversed scene, eliminating the need for separate image pre-processing steps. Experimental results attest to its robustness in scenes with multiple white light sources, spatially varying illumination intensities, varying object positions, and the presence of highlights.
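The brightness invariance the abstract attributes to the rg-chromaticity space can be illustrated directly: normalizing each channel by the total intensity R+G+B removes overall brightness, so a color and a dimmer version of it map to the same (r, g) point. This is a minimal sketch of that standard projection only; the paper's full fuzzy descriptors and contrast operators are not reproduced here.

```python
import numpy as np

def rg_chromaticity(rgb):
    """Project RGB values onto the rg-chromaticity plane.

    Dividing each channel by R+G+B factors out overall intensity,
    which is why rg-chromaticity descriptors are insensitive to
    uniform brightness/darkness changes.
    """
    rgb = np.asarray(rgb, dtype=float)
    total = rgb.sum(axis=-1, keepdims=True)
    total[total == 0] = 1.0  # avoid division by zero for black pixels
    r = rgb[..., 0] / total[..., 0]
    g = rgb[..., 1] / total[..., 0]
    return np.stack([r, g], axis=-1)

# A color and a half-brightness version of it share one chromaticity point:
bright = rg_chromaticity([200, 100, 50])
dim = rg_chromaticity([100, 50, 25])
assert np.allclose(bright, dim)
```

The same function applies unchanged to a whole H×W×3 image array thanks to the axis-wise normalization.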


2006 ◽ Vol 34 (3) ◽ pp. 215-228
Author(s): Marcia L. Spetch ◽ Alinda Friedman ◽ Quoc C. Vuong

2021 ◽ Vol 12 (1)
Author(s): Georgin Jacob ◽ R. T. Pramod ◽ Harish Katti ◽ S. P. Arun

Deep neural networks have revolutionized computer vision, and their object representations across layers coarsely match visual cortical areas in the brain. However, whether these representations exhibit qualitative patterns seen in human perception or brain representations remains unresolved. Here, we recast well-known perceptual and neural phenomena in terms of distance comparisons, and ask whether they are present in feedforward deep neural networks trained for object recognition. Some phenomena were present in randomly initialized networks, such as the global advantage effect, sparseness, and relative size. Many others were present after object recognition training, such as the Thatcher effect, mirror confusion, Weber's law, relative size, multiple object normalization, and correlated sparseness. Yet other phenomena were absent in trained networks, such as 3D shape processing, surface invariance, occlusion, natural parts, and the global advantage. These findings indicate sufficient conditions for the emergence of these phenomena in brains and deep networks, and offer clues to the properties that could be incorporated to improve deep networks.
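The abstract's central move, recasting a perceptual phenomenon as a distance comparison on feature vectors, can be sketched for one example: mirror confusion predicts that a representation places an image closer to its left-right mirror than to its upside-down version, i.e. d(x, mirror(x)) < d(x, inverted(x)). The toy feature vectors below are hypothetical stand-ins for a trained network's layer activations; the paper's actual stimuli and metrics differ.

```python
import numpy as np

def mirror_confusion_index(feat, feat_mirror, feat_inverted):
    """Signed index for a distance comparison of the kind described above.

    Positive when the mirrored version is the closer neighbour
    (d_inverted > d_mirror), zero when the two distances are equal,
    negative otherwise. Uses plain Euclidean distance between
    feature vectors.
    """
    d_mirror = np.linalg.norm(feat - feat_mirror)
    d_invert = np.linalg.norm(feat - feat_inverted)
    return (d_invert - d_mirror) / (d_invert + d_mirror)

# Toy vectors: small perturbation plays the mirror, large one the inversion.
rng = np.random.default_rng(0)
x = rng.normal(size=64)
idx = mirror_confusion_index(x,
                             x + 0.1 * rng.normal(size=64),
                             x + 1.0 * rng.normal(size=64))
assert idx > 0  # the "mirror" sits closer, as the effect predicts
```

The normalized form keeps the index in [-1, 1], which makes it comparable across layers and networks regardless of activation scale.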

