Automatic Estimation of Camera Position in Robot Soccer

Author(s):  
Donald Bailey ◽  
Gourab Sen Gupta
2014 ◽  
Vol 73 (6) ◽  
pp. 511-527 ◽  
Author(s):  
V.V. Abramova ◽  
S. K. Abramov ◽  
V. V. Lukin ◽  
A. A. Roenko ◽  
Benoit Vozel

Author(s):  
Sunita Nadella ◽  
Lloyd A. Herman

Video traffic data were collected in 24 combinations of four different camera position parameters. A machine vision processor was used to detect vehicle speeds and volumes from the videotapes. The machine vision results were then compared with the actual vehicle volumes and speeds to give the percentage errors in each case. The results of the study provide a procedure for establishing camera position parameters with specific reference points, to help machine vision users select suitable camera positions and develop appropriate measurement error expectations. The camera position parameters most likely to produce the least overall volume and speed errors, for the specific site and field setup and the parameter ranges used in this study, were a low mounting height of approximately 7.6 m (25 ft), an upstream orientation (traffic moving toward the camera), a 50-mm (midangle) focal length, and a 15° vertical angle.
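The abstract reports machine-vision measurements as percentage errors against ground truth. The exact formula is not stated, so the sketch below assumes the standard signed relative-error definition; the function name and example numbers are illustrative, not from the study.

```python
def percentage_error(measured, actual):
    """Signed percentage error of a machine-vision measurement
    relative to the manually verified ground truth."""
    return (measured - actual) / actual * 100.0

# Hypothetical example: the processor counts 470 vehicles
# when 500 actually passed the detection zone.
volume_error = percentage_error(470, 500)  # -6.0, i.e. a 6% undercount
```

A signed error distinguishes undercounting from overcounting; taking the absolute value would give the overall error magnitude the study compares across camera positions.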


Author(s):  
Ivan Mendoza ◽  
Gustavo Alvarez ◽  
Mateo Coello ◽  
Joaquin Lopez ◽  
Pablo Carvallo

2020 ◽  
Vol 6 (1) ◽  
Author(s):  
Matthias Ivantsits ◽  
Lennart Tautz ◽  
Simon Sündermann ◽  
Isaac Wamala ◽  
Jörg Kempfert ◽  
...  

Abstract: Minimally invasive surgery is increasingly utilized for mitral valve repair and replacement. The intervention is performed with an endoscopic field of view on the arrested heart. Extracting the necessary information from the live endoscopic video stream is challenging due to the moving camera position, the high variability of defects, and occlusion of structures by instruments. During such minimally invasive interventions there is no time to segment regions of interest manually. We propose a real-time-capable deep-learning-based approach to detect and segment the relevant anatomical structures and instruments. For the universal deployment of the proposed solution, we evaluate it using pixel accuracy as well as distance measurements of the detected contours. The U-Net, Google's DeepLab v3, and the Obelisk-Net models are cross-validated, with DeepLab showing superior results in pixel accuracy and distance measurements.
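The abstract scores segmentation models by pixel accuracy (alongside contour-distance measures). As a minimal sketch of the pixel-accuracy part only, assuming per-pixel class-ID label maps, the metric can be computed like this; the function and toy data are illustrative and not taken from the paper:

```python
import numpy as np

def pixel_accuracy(pred, target):
    """Fraction of pixels whose predicted class ID matches the
    ground-truth class ID (one integer label per pixel)."""
    pred = np.asarray(pred)
    target = np.asarray(target)
    return float((pred == target).mean())

# Toy 2x2 label maps: three of the four pixels agree.
pred   = [[0, 1], [1, 2]]
target = [[0, 1], [2, 2]]
acc = pixel_accuracy(pred, target)  # 0.75
```

Pixel accuracy alone can look high when large background regions dominate the frame, which is presumably why the paper pairs it with distance measurements of the detected contours.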
