A novel 1-dimensional object location estimation algorithm using leaky coaxial cable

2013 ◽  
Vol 5 (3-4) ◽  
pp. 159-167
Author(s):  
Chen Peng ◽  
Wu Nan ◽  
Wang Pan-yang ◽  
Liu Xiao-long
2015 ◽  
Vol 42 (2) ◽  
pp. 264-271
Author(s):  
Enkhzaya Myagmar ◽  
Soonryang Kwon ◽  
Dong Myung Lee

2011 ◽  
Vol 59 (6) ◽  
pp. 2396-2403 ◽  
Author(s):  
Kenji Inomata ◽  
Yoshio Yamaguchi ◽  
Hiroyoshi Yamada ◽  
Wataru Tsujita ◽  
Masahiro Shikai ◽  
...  

2018 ◽  
Vol 10 (7) ◽  
pp. 2621-2632 ◽  
Author(s):  
Darshak Sundar ◽  
Siddharth Sendil ◽  
Vasanth Subramanian ◽  
Vidhya Balasubramanian

Author(s):  
Myeong In Seo ◽  
Woo Jin Jang ◽  
Junhwan Ha ◽  
Kyongtae Park ◽  
Dong Hwan Kim

This study introduces a control method for a duct-cleaning robot that enables real-time position tracking and self-driving through L-shaped and T-shaped duct sections. The developed robot has three legs and is designed to adapt flexibly to different duct sizes. The position of the robot inside the duct is identified using a UWB communication module and a location estimation algorithm. Although UWB ranging has a relatively large distance error inside metal ducts, the positional error was reduced by introducing appropriate filters so that the robot position could be estimated accurately. TCP/IP communication allows commands to be exchanged between the PC and the robot and live images from the camera attached to the robot to be received. Using Haar-like features and classifiers, the robot can recognize duct sections that are difficult to traverse, such as L-shaped and T-shaped ducts, and it moves successfully through them according to the corresponding motion algorithms.
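The abstract does not specify which filters were introduced to reduce the UWB distance error. As a minimal sketch only (hypothetical filter choice and parameters, not the authors' method), a sliding median followed by exponential smoothing is one common way to suppress the multipath outliers typical of UWB ranging inside metal ducts:

```python
from statistics import median

def smooth_uwb_ranges(raw_ranges, window=5, alpha=0.3):
    """Suppress outliers and jitter in a stream of UWB distance readings.

    window and alpha are illustrative values, not tuned constants from the paper.
    """
    smoothed = []
    ema = None
    for i in range(len(raw_ranges)):
        # Sliding median rejects isolated multipath spikes.
        lo = max(0, i - window + 1)
        med = median(raw_ranges[lo:i + 1])
        # Exponential moving average damps the residual jitter.
        ema = med if ema is None else alpha * med + (1 - alpha) * ema
        smoothed.append(ema)
    return smoothed
```

A filtered range stream like this would then feed the location estimation algorithm; the paper's actual filter design may differ.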


2021 ◽  
Author(s):  
Vladislava Segen

The current study investigated a systematic bias in spatial memory in which people, following a perspective shift from encoding to recall, indicate the location of an object farther in the direction of the shift. In Experiment 1, we documented this bias by asking participants to encode the position of an object in a virtual room and then indicate it from memory following a perspective shift induced by camera translation and rotation. In Experiment 2, we decoupled the influence of camera translations and camera rotations and also examined whether adding more information to the scene would reduce the bias. We also investigated the presence of age-related differences in the precision of object location estimates and in the tendency to display the bias related to the perspective shift. Overall, our results showed that camera translations led to greater systematic bias than camera rotations. Furthermore, the use of additional spatial information improved the precision with which object locations were estimated and reduced the bias associated with camera translation. Finally, we found that although older adults were as precise as younger participants when estimating object locations, they benefited less from additional spatial information and their responses were more biased in the direction of camera translations. We propose that accurate representation of camera translations requires more demanding mental computations than camera rotations, leading to greater uncertainty about the position of an object in memory. This uncertainty causes people to rely on an egocentric anchor, thereby giving rise to the systematic bias in the direction of camera translation.


Sensors ◽  
2019 ◽  
Vol 19 (14) ◽  
pp. 3193
Author(s):  
Adrian Fazekas ◽  
Markus Oeser

The next generation of Intelligent Transportation Systems (ITS) will strongly rely on a high level of detail and coverage in traffic data acquisition. Beyond aggregated traffic parameters like the flux, mean speed, and density used in macroscopic traffic analysis, continuous location estimation of individual vehicles on a microscopic scale will be required. On the infrastructure side, several sensor techniques exist today that are able to record the data of individual vehicles at a cross-section, such as static radar detectors, laser scanners, or computer vision systems. In order to record the position data of individual vehicles over longer sections, multiple sensors along the road with suitable synchronization and data fusion methods could be adopted. This paper presents appropriate methods considering realistic scale and accuracy conditions of the original data acquisition. Datasets consisting of a timestamp and a speed for each individual vehicle are used as input data. As a first step, a closed formulation for a sensor offset estimation algorithm with simultaneous vehicle registration is presented. Based on this initial step, the datasets are fused to reconstruct microscopic traffic data using quintic Bézier curves. With the derived trajectories, the dependency of the results on the accuracy of the individual sensors is thoroughly investigated. This method enhances the usability of common cross-section-based sensors by enabling the derivation of non-linear vehicle trajectories without the necessity of precise prior synchronization.
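The paper's closed-form offset estimation is not reproduced here, but the trajectory-reconstruction step rests on evaluating quintic Bézier curves. A minimal Bernstein-form evaluator is sketched below; the control-point layout (position samples at six sensor cross-sections) is a hypothetical example, not the paper's fitting procedure:

```python
from math import comb

def quintic_bezier(control_points, t):
    """Evaluate a degree-5 Bezier curve in Bernstein form at t in [0, 1].

    control_points: six (x, y) tuples. As an illustration these could be
    (time, position) pairs derived from fused cross-section measurements.
    """
    n = 5
    x = y = 0.0
    for i, (px, py) in enumerate(control_points):
        # Bernstein basis polynomial B_{i,5}(t).
        b = comb(n, i) * t**i * (1 - t)**(n - i)
        x += b * px
        y += b * py
    return x, y
```

A quintic curve has enough degrees of freedom to match position, speed, and acceleration at both ends of a section, which is why this degree is a natural fit for reconstructing smooth vehicle trajectories between cross-sections.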

