Object Affordances
Recently Published Documents

Total documents: 71 (last five years: 7)
H-index: 17 (last five years: 0)

2021, Vol. 92, pp. 103133
Author(s): Matthias G. Arend, Jochen Müsseler

2021
Author(s): Gwendolyn L Rehrig, Madison Barker, Candace Elise Peacock, Taylor R. Hayes, John M. Henderson, ...

As we act on the world around us, our eyes seek out the objects we plan to interact with. A growing body of evidence suggests that overt visual attention selects objects in the environment that could be interacted with, even when the task precludes physical interaction. Our previous work showed that objects affording grasping interactions influenced attention when static scenes depicted reachable spaces, and that attention was otherwise better explained by general meaning (Rehrig, Peacock, et al., 2021). Because grasping is only one of many object interactions, our previous work may have downplayed the influence of object affordances on attention. The current study investigated the relationship between overt visual attention and object affordances, versus broadly construed semantic information, in scenes as speakers described possible actions. In addition to meaning and grasp maps, which capture informativeness and grasping affordances in scenes, respectively, we introduce interact maps, which capture affordances more broadly. In a mixed-effects analysis of three eye-tracking experiments, interact map values predicted fixated regions in all experiments, whereas there was no main effect of meaning, and grasp maps marginally predicted fixated locations only for scenes that depicted reachable spaces. Our findings suggest that speakers consistently allocate attention to scene regions that could readily be interacted with when describing the possible actions in a scene, while the other variants of semantic information tested (graspability and general meaning) have a compensatory or additive influence on attention. The current study clarifies the importance of object affordances in guiding visual attention in scenes.
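To make the analysis concrete, here is a minimal sketch, not the authors' code, of the kind of mixed-effects analysis the abstract describes: predicting fixation density in scene regions from meaning, grasp, and interact map values, with a random intercept per participant. The file name and column names (`fixations.csv`, `subject`, `fixation_density`, etc.) are hypothetical, and a linear model is assumed for simplicity.

```python
# A minimal sketch (not the authors' code) of a mixed-effects analysis
# relating fixation density to meaning, grasp, and interact map values.
import pandas as pd
import statsmodels.formula.api as smf

# Hypothetical long-format data: one row per scene region per participant,
# with columns: subject, scene, region, fixation_density, meaning, grasp, interact.
df = pd.read_csv("fixations.csv")

# Fixed effects for the three map types; random intercept per subject.
model = smf.mixedlm(
    "fixation_density ~ meaning + grasp + interact",
    data=df,
    groups=df["subject"],
)
result = model.fit()
print(result.summary())  # inspect fixed-effect estimates, e.g. for interact
```

A significant coefficient on `interact` alongside a null effect of `meaning` would correspond to the pattern of results the abstract reports.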


2021, Vol. 54 (3), pp. 1-35
Author(s): Mohammed Hassanin, Salman Khan, Murat Tahtali

Robots increasingly dominate the manufacturing, entertainment, and healthcare industries. Robot vision aims to equip robots with the capabilities to discover information, understand it, and interact with the environment, which require an agent to understand object affordances and functions in complex visual domains. This literature survey first focuses on "visual affordances," summarizing current state-of-the-art approaches to the relevant problems as well as open problems and research gaps. It then discusses specific sub-problems, such as affordance detection, categorization, segmentation, and high-level affordance reasoning. It further covers functional scene understanding and the descriptors prevalent in the literature. The survey also provides the necessary background to the problem, sheds light on its significance, and highlights the existing challenges for affordance and functionality learning.


Author(s): Thomas D. Ferguson, Daniel N. Bub, Michael E. J. Masson, Olave E. Krigolson

Sensors, 2021, Vol. 21 (3), pp. 816
Author(s): Ander Iriondo, Elena Lazkano, Ander Ansuategi

Grasping-point detection has traditionally been a core problem in robotics and computer vision. In recent years, deep-learning-based methods have been widely used to predict grasping points and have shown strong generalization capabilities under uncertainty. In particular, approaches that predict object affordances without relying on object identity have obtained promising results in random bin-picking applications. However, most of them rely on RGB/RGB-D images, and it is unclear to what extent 3D spatial information is used. Graph Convolutional Networks (GCNs) have been used successfully for object classification and scene segmentation in point clouds, and also to predict grasping points in simple laboratory experiments. In the present proposal, we adapted the Deep Graph Convolutional Network model with the intuition that learning from n-dimensional point clouds would boost performance in predicting object affordances. To the best of our knowledge, this is the first time GCNs have been applied to predict affordances for suction and gripper end effectors in an industrial bin-picking environment. Additionally, we designed a bin-picking-oriented data preprocessing pipeline that eases the learning process and yields a flexible solution for any bin-picking application. To train our models, we created a highly accurate RGB-D/3D dataset that is openly available on demand. Finally, we benchmarked our method against a 2D Fully Convolutional Network based method, improving the top-1 precision score by 1.8% and 1.7% for suction and gripper grasping, respectively.
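As an illustration of the approach, the sketch below shows a minimal per-point affordance predictor built on a dynamic graph convolution (EdgeConv over a k-NN graph recomputed in feature space), using PyTorch Geometric's DynamicEdgeConv. This is not the paper's implementation: the class name, layer sizes, and single suction-style logit per point are all illustrative assumptions, and running it requires the torch-cluster dependency for k-NN.

```python
# A minimal sketch (assumptions noted inline; not the paper's implementation)
# of per-point affordance prediction with a dynamic graph CNN (EdgeConv).
import torch
from torch import nn
from torch_geometric.nn import DynamicEdgeConv  # needs torch-cluster for k-NN


def mlp(in_dim: int, out_dim: int) -> nn.Sequential:
    return nn.Sequential(nn.Linear(in_dim, out_dim), nn.ReLU())


class AffordanceDGCNN(nn.Module):
    """Predicts a per-point affordance score (e.g., suction graspability)."""

    def __init__(self, in_dim: int = 3, k: int = 20):
        super().__init__()
        # EdgeConv concatenates [x_i, x_j - x_i], hence the 2 * dim inputs.
        self.conv1 = DynamicEdgeConv(mlp(2 * in_dim, 64), k=k)
        self.conv2 = DynamicEdgeConv(mlp(2 * 64, 128), k=k)
        self.head = nn.Sequential(nn.Linear(64 + 128, 64), nn.ReLU(),
                                  nn.Linear(64, 1))  # one logit per point

    def forward(self, pos: torch.Tensor, batch: torch.Tensor) -> torch.Tensor:
        x1 = self.conv1(pos, batch)   # local geometric features
        x2 = self.conv2(x1, batch)    # higher-level features on a new graph
        return self.head(torch.cat([x1, x2], dim=-1)).squeeze(-1)


# Hypothetical usage: one cloud of 2048 points (xyz only).
points = torch.rand(2048, 3)
batch = torch.zeros(2048, dtype=torch.long)  # all points in one cloud
logits = AffordanceDGCNN()(points, batch)    # shape: (2048,)
```

Thresholding or ranking the per-point logits would then yield candidate suction (or, with a second head, gripper) grasping points for a bin-picking scene.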


2020, Vol. 146, pp. 105639
Author(s): Paula J. Rowe, Corinna Haenschel, Nareg Khachatoorian, Kielan Yarrow

2019, Vol. 135, pp. 103582
Author(s): Giovanni Federico, Maria A. Brandimonte
