A new localization system for autonomous robots

Author(s): S. Hernandez, C.A. Morales, J.M. Torres, L. Acosta (2006)

Author(s): Paul A. Boxer

Autonomous robots remain largely unsuccessful at operating in complex, unconstrained environments: they lack the ability to learn about the physical behavior of different objects through vision. We combine Bayesian networks and qualitative spatial representation to learn general physical behavior by visual observation. We input training scenarios that allow the system to observe and learn normal physical behavior. The position and velocity of the visible objects are represented as qualitative states, and transitions between these states over time are entered as evidence into a Bayesian network. The network provides probabilities of future transitions, producing predictions of future physical behavior. We use test scenarios to determine how well the approach discriminates between normal and abnormal physical behavior and actively predicts future behavior. We examine the ability of the system to learn three naive physical concepts: "no action at a distance", "solidity", and "movement on continuous paths". We conclude that the combination of qualitative spatial representations and Bayesian network techniques is capable of learning these three rules of naive physics.
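The learning loop described in the abstract can be sketched as a minimal transition model: observed qualitative states are entered as evidence, transition counts stand in for the conditional probability table P(next state | current state) of a two-node Bayesian network, and low-probability transitions are flagged as abnormal physical behavior. The state names, smoothing, and threshold below are illustrative assumptions, not the paper's actual representation.

```python
from collections import defaultdict

class TransitionModel:
    """Sketch of learning qualitative state transitions by observation.

    Each state is a qualitative label (e.g. a spatial relation between
    two objects). Counts approximate P(next | current), the conditional
    probability table of a simple two-node Bayesian network.
    """

    def __init__(self):
        # counts[current][next] = number of observed transitions
        self.counts = defaultdict(lambda: defaultdict(int))

    def observe(self, sequence):
        # Enter each consecutive state pair as evidence.
        for cur, nxt in zip(sequence, sequence[1:]):
            self.counts[cur][nxt] += 1

    def prob(self, cur, nxt):
        # Laplace-smoothed P(next | current); the +1 reserves
        # probability mass for transitions never seen in training.
        nexts = self.counts[cur]
        total = sum(nexts.values())
        k = len(nexts) + 1
        return (nexts.get(nxt, 0) + 1) / (total + k)

    def predict(self, cur):
        # Most probable next qualitative state, or None if unseen.
        nexts = self.counts[cur]
        return max(nexts, key=nexts.get) if nexts else None

    def is_abnormal(self, cur, nxt, threshold=0.1):
        # Transitions far below the learned probability mass are
        # flagged as violations of normal physical behavior, e.g.
        # an object passing through a solid obstacle ("solidity").
        return self.prob(cur, nxt) < threshold
```

For example, after repeatedly observing a ball approach, touch, and recede from a wall, the model predicts "touching" after "approaching" and flags a direct "approaching" to "overlapping" transition (the ball passing through the wall) as abnormal.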


