Perceptual and Semantic Processing in Cognitive Robots

Electronics, 2021, Vol 10 (18), pp. 2216
Author(s):  
Syed Tanweer Shah Bukhari ◽  
Wajahat Mahmood Qazi

The challenge in human–robot interaction is to build an agent that can act upon implicit human statements, that is, execute tasks without an explicit utterance of instruction. Understanding what to do in such scenarios requires the agent to be capable of object grounding and affordance learning from acquired knowledge. Affordance has been the driving force for agents to construct relationships between objects, their effects, and actions, whereas grounding is effective for understanding the spatial maps of objects present in the environment. The main contribution of this paper is a methodology for extending object affordance and grounding, a Bloom-based cognitive cycle, and a formulation of perceptual semantics for context-based human–robot interaction. In this study, we implemented YOLOv3 for visual perception and an LSTM to identify the level of the cognitive cycle, within which cognitive processes are synchronized. In addition, we used semantic networks and conceptual graphs to represent knowledge in the various dimensions related to the cognitive cycle. The visual perception achieved an average precision of 0.78, an average recall of 0.87, and an average F1 score of 0.80, indicating an improvement in the generation of semantic networks and conceptual graphs. The similarity index used for the lingual–visual association showed promising results and improved the overall experience of human–robot interaction.
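The abstract does not specify which similarity index was used for the lingual–visual association. A common choice, shown here purely as an illustrative assumption (the vectors and function name are hypothetical, not from the paper), is cosine similarity between a word embedding and a visual-feature embedding:

```python
import math

def cosine_similarity(a, b):
    """Cosine similarity between two equal-length embedding vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(y * y for y in b))
    return dot / (norm_a * norm_b)

# Hypothetical embeddings for the word "cup" and a detected cup region.
lingual = [0.8, 0.1, 0.3]
visual = [0.7, 0.2, 0.4]
score = cosine_similarity(lingual, visual)
print(f"lingual-visual similarity: {score:.3f}")
```

Higher values indicate a stronger lingual–visual match; the paper's actual index and embedding dimensions may differ.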

2020, Vol 14 (2), pp. 2937-2948
Author(s):  
Emrah Benli ◽  
Yuichi Motai ◽  
John Rogers

2021
Author(s):  
Nicolas Spatola ◽  
Thierry Chaminade

Human-human and human-robot interaction are often compared, with the overarching question being which cognitive processes each engages and what explains the differences. However, research addressing this topic, especially in neuroimaging, typically uses extremely artificial interaction settings and neglects a crucial feature of human social cognition: interaction is an adaptive (rather than fixed) process. Building upon the first fMRI paradigm requiring participants to interact online with both a human and a robot in a dyadic setting, we investigate the differences and changes in brain activity during the two types of interaction in a whole-brain analysis. Our results show that, starting from a common baseline, activity in specific neural regions associated with social cognition (e.g., the posterior cingulate cortex) increases during human-human interaction (HHI) while remaining stable during human-robot interaction (HRI). We discuss these results in terms of the iterative process of deepening social engagement with humans but not with robots.


Author(s):  
William Leslie Brown-Acquaye ◽  
Ezer Osei Yeboah-Boateng ◽  
Forgor Lempogo

Cognitive robots, which exhibit cognitive characteristics, synthesize knowledge to perform tasks, and interact with humans in both industrial and social settings, have become a significant part of modern societies. In this chapter, the authors review the processes and approaches to knowledge management in cognitive robot agents for effective human-robot interaction. They present the current state of the art in robotics technology and human-robot interaction, state the current requirements of cognitive robot agents in human-robot interaction, and examine the role of knowledge in human-robot interaction. Finally, they propose a knowledge management framework for cognitive robots that consists of three main stages: knowledge acquisition and grounding; knowledge representation and integration; and instantiation into robot architectures.
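The chapter describes the three-stage framework only at a high level; the following Python sketch, with entirely hypothetical class and method names (not from the chapter), illustrates how the stages could chain together:

```python
# Minimal sketch of a three-stage knowledge management flow:
# acquisition/grounding -> representation/integration -> instantiation.
class KnowledgePipeline:
    def __init__(self):
        self.knowledge_base = {}

    def acquire_and_ground(self, percept, label):
        """Stage 1: tie a sensed percept to a symbolic label (grounding)."""
        self.knowledge_base[label] = {"percept": percept}

    def represent_and_integrate(self, label, relations):
        """Stage 2: attach structured relations (e.g. affordances)."""
        self.knowledge_base[label]["relations"] = relations

    def instantiate(self, label):
        """Stage 3: hand the grounded, integrated entry to the robot architecture."""
        return self.knowledge_base.get(label)

pipeline = KnowledgePipeline()
pipeline.acquire_and_ground(percept=[0.2, 0.9], label="cup")
pipeline.represent_and_integrate("cup", {"affords": ["grasp", "pour"]})
print(pipeline.instantiate("cup"))
```

A real implementation would back each stage with perception models and a knowledge representation (e.g. an ontology) rather than a plain dictionary.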


2008, Vol 5 (4), pp. 235-241
Author(s):  
Rajesh Elara Mohan ◽  
Carlos Antonio Acosta Calderon ◽  
Changjiu Zhou ◽  
Pik Kong Yue

In the field of human-computer interaction, the Natural Goals, Operators, Methods, and Selection rules Language (NGOMSL) model is one of the most popular methods for modelling knowledge and cognitive processes for rapid usability evaluation. An NGOMSL model describes the knowledge a user must possess to operate a system, represented as elementary actions, for effective usability evaluation. In recent years, mobile robots have established a stronger presence in commercial markets, yet very little work has applied NGOMSL modelling to usability evaluation in the human-robot interaction discipline. This paper extends the NGOMSL model to the usability evaluation of human-humanoid robot interaction in the soccer robotics domain. The NGOMSL-modelled human-humanoid interaction design of Robo-Erectus Junior was evaluated; in the experiments, the interaction design was able to find faults in an average time of 23.84 s and detected the fault within 60 s in 100% of cases. The evaluated interaction design was adopted by our Robo-Erectus Junior humanoid robots in the RoboCup 2007 humanoid soccer league.
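As a worked illustration of the reported summary statistics, the snippet below computes the mean fault-detection time and the share of trials under 60 s. The per-trial times are hypothetical (the abstract gives only the aggregates); they are chosen so the mean matches the reported 23.84 s:

```python
# Hypothetical fault-detection times (seconds) from usability trials.
times = [18.2, 25.1, 30.4, 21.7, 23.8]

mean_time = sum(times) / len(times)                       # average detection time
within_60s = sum(t <= 60 for t in times) / len(times) * 100  # % detected within 60 s

print(f"mean: {mean_time:.2f} s, within 60 s: {within_60s:.0f}%")
```

With these illustrative values the summary matches the abstract: a 23.84 s average and 100% of faults detected within 60 s.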


Author(s):  
Abdulaziz Abubshait ◽  
Eva Wiese

When we interact with others, we use nonverbal behavior such as changes in gaze direction to make inferences about what people think or what they want to do next, a process called mentalizing. Previous studies have shown that how we react to others' gaze signals depends on how much "mind" we ascribe to the gazer, and that this process of mind perception is related to activation in brain areas that process social information (i.e., the social brain). Although brain stimulation studies have identified prefrontal structures like the ventromedial prefrontal cortex (vmPFC) as the potential neural substrate through which mind perception modulates social-cognitive processes like attentional orienting to gaze cues (i.e., gaze following), little is known about whether and how individual differences in preferences for human versus robot agents modulate this relationship. To address this question, the current study examines how transcranial direct current stimulation (tDCS) of left prefrontal versus left temporo-parietal areas affects attentional orienting to gaze signals as a function of participants' preferences at baseline (prior to brain stimulation) for human (Human Gaze Followers, HGFs) versus robot (Robot Gaze Followers, RGFs) agents. Results show that prefrontal (but not temporo-parietal) stimulation positively affected attentional orienting to gaze signals for HGFs for the human but not the robot gazer; RGFs showed no effect of brain stimulation in either stimulation condition. These findings inform how preferences for human versus nonhuman agent types can influence subsequent interactions and communications in human-robot interaction.


Author(s):  
Jacquelyn L. Schreck ◽  
Olivia B. Newton ◽  
Jihye Song ◽  
Stephen M. Fiore

This study examined how human-robot interaction is influenced by individual differences in theory of mind ability. Participants engaged in a hallway navigation task with a robot over a number of trials. The display on the robot and its proxemics behavior were manipulated, and participants made mental state attributions across trials. Participants' theory of mind ability was also assessed. Results show that proxemics behavior and robotic display characteristics differentially influence how individuals perceive the robot when making mental state attributions about self or other. Additionally, theory of mind ability interacted with proxemics and display characteristics. The findings illustrate the importance of understanding individual differences in higher-level cognition. As robots become more social, the need to understand social cognitive processes in human-robot interactions increases. Results are discussed in the context of how individual differences and social signals theory inform research in human-robot interaction.


Author(s):  
Samantha F. Warta ◽  
Katelynn A. Kapalo ◽  
Andrew Best ◽  
Stephen M. Fiore

Robotic teammates are becoming prevalent in increasingly complex and dynamic operational and social settings. For this reason, the perception of robots operating in such environments has shifted from robots as tools that extend human capabilities to robots as teammates that collaborate with humans and display complex social cognitive processes. The goal of this paper is to introduce a discussion of an integrated set of robotic design elements, and to support the idea that human-robot interaction requires a clearer understanding of social cognitive constructs to optimize human-robot collaboration. We develop a set of research questions addressing these constructs with the goal of improving the engineering of artificial cognitive systems reliant on natural human-robot interaction.

