Vision system model for mobile robotic systems

Author(s):  
Kateryna Matviichuk ◽  
Vasyl Teslyuk ◽  
Taras Teslyuk


Author(s):  
Vassilios E. Theodoracatos ◽  
Dale E. Calkins

Abstract The development of a “light striping” (structured light) based three-dimensional vision system for automatic surface sensing is presented. The three-dimensional world-point reconstruction process and system modeling methodology involve homogeneous coordinate transformations applied in two independent stages: the video imaging stage, using three-dimensional perspective transformations, and the mechanical scanning stage, using three-dimensional affine transformations. Concatenation of the two independent matrix models leads to a robust four-by-four matrix system model. The independent treatment of the two-dimensional imaging process from the three-dimensional modeling process reduces the number of unknown internal and external geometric parameters. The reconstructed sectional contours (light stripes) are registered automatically and in real time with respect to a common world coordinate system in a format compatible with B-spline surface approximation. The reconstruction process is demonstrated by measuring the surface of a 19.5-ft-long by 2-ft-beam rowing shell. A detailed statistical accuracy and precision analysis shows an average error of 0.2 percent (0.002) of an object’s largest dimension within the camera’s field of view. System sensitivity analysis reveals a nonlinear increase in sensitivity for angles between the normals of the image and laser planes greater than 45 degrees.
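The two-stage model described above can be sketched in a few lines of NumPy: a 4×4 homogeneous perspective matrix for the imaging stage is concatenated with a 4×4 affine matrix for the scanning stage into a single 4×4 system model. This is a minimal illustrative sketch, not the paper's calibrated model; the focal length, translation, and test point below are assumptions.

```python
import numpy as np

def perspective_matrix(f):
    """4x4 homogeneous perspective projection with focal length f."""
    return np.array([
        [f, 0, 0, 0],
        [0, f, 0, 0],
        [0, 0, 1, 0],
        [0, 0, 1, 0],   # w = z, so after the divide (u, v) = (f*x/z, f*y/z)
    ], dtype=float)

def affine_matrix(R, t):
    """4x4 homogeneous affine transform from a 3x3 matrix R and 3-vector t."""
    M = np.eye(4)
    M[:3, :3] = R
    M[:3, 3] = t
    return M

# Concatenate the two independent stage models into one 4x4 system model.
scan = affine_matrix(np.eye(3), np.array([0.0, 0.0, 2.0]))  # illustrative scan offset
system = perspective_matrix(f=1.0) @ scan

# Project a world point through the combined model.
X = np.array([0.5, 0.25, 0.0, 1.0])   # homogeneous world point (assumed)
x = system @ X
u, v = x[0] / x[3], x[1] / x[3]       # perspective divide
```

Because each stage is a 4×4 homogeneous matrix, the full mapping from world point to image point stays a single matrix product, which is what makes the concatenated model compact.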


Author(s):  
I. Sgibnev ◽  
A. Sorokin ◽  
B. Vishnyakov ◽  
Y. Vizilter

Abstract. This paper addresses image semantic segmentation for the machine vision system of an off-road autonomous robotic vehicle. Most modern convolutional neural networks require computing resources beyond the capabilities of many robotic platforms; the main drawback of such models is the extremely high complexity of the convolutional neural network, whereas tasks in real applications must be performed in real time on devices with limited resources. This paper focuses on the practical application of modern lightweight architectures to semantic segmentation on mobile robotic systems. The article discusses backbones based on ResNet18, ResNet34, MobileNetV2, ShuffleNetV2, and EfficientNet-B0, decoders based on U-Net and DeepLabV3, and additional components that can increase segmentation accuracy and reduce inference time. We propose a model using a ResNet34 backbone and a DeepLabV3 decoder with Squeeze-and-Excitation blocks that is optimal in terms of inference time and accuracy. We also present our off-road dataset and a simulated dataset for semantic segmentation. Furthermore, we show that pre-training on the simulated dataset increases mIoU on our off-road dataset by 2.7% compared with pre-training on Cityscapes. Moreover, we achieve 75.6% mIoU on the Cityscapes validation set and 85.2% mIoU on our off-road validation set at a speed of 37 FPS for a 1,024×1,024 input on one NVIDIA GeForce RTX 2080 card using NVIDIA TensorRT.
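The mIoU figures quoted above are computed per class as intersection over union of predicted and ground-truth masks, then averaged. A minimal sketch of that metric follows; the class count and the toy label maps are illustrative assumptions, not the paper's data.

```python
import numpy as np

def mean_iou(pred, target, num_classes):
    """Mean intersection-over-union over classes present in pred or target."""
    ious = []
    for c in range(num_classes):
        inter = np.logical_and(pred == c, target == c).sum()
        union = np.logical_or(pred == c, target == c).sum()
        if union > 0:                      # skip classes absent from both maps
            ious.append(inter / union)
    return float(np.mean(ious))

# Toy 2x3 label maps with three classes (assumed values for illustration).
pred   = np.array([[0, 0, 1],
                   [1, 1, 2]])
target = np.array([[0, 1, 1],
                   [1, 1, 2]])
score = mean_iou(pred, target, num_classes=3)
```

Averaging per-class IoU (rather than per-pixel accuracy) is what makes the metric sensitive to small but safety-critical classes such as obstacles.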


1993 ◽  
Vol 11 (1) ◽  
pp. 75-99 ◽  
Author(s):  
Vassilios E. Theodoracatos ◽  
Dale E. Calkins

2019 ◽  
Vol 31 (1) ◽  
pp. 45-56 ◽  
Author(s):  
Taku Senoo ◽  
Yuji Yamakawa ◽  
Shouren Huang ◽  
Keisuke Koyama ◽  
Makoto Shimojo ◽  
...  

This paper presents an overview of the high-speed vision system that the authors have been developing, and its applications. First, examples of high-speed vision are presented, and image-related technologies are described. Next, we describe the use of vision systems to track flying objects at sonic speed. Finally, we present high-speed robotic systems that use high-speed vision for robotic control. Descriptions of the tasks that employ high-speed robots center on manipulation, bipedal running, and human-robot cooperation.


2021 ◽  
Vol 54 (2) ◽  
pp. 113-125 ◽  
Author(s):  
Aleksey A. Kabanov ◽  
Vadim A. Kramar ◽  
Oleg A. Kramar ◽  
...  

The article deals with the development of an integrated high-resolution 3D vision system for remotely controlled and autonomous underwater robotic systems. The problem of developing a 3D vision system as a sensor subsystem of an underwater robot is formulated. Typical tasks solved with such robotic systems include environmental monitoring; detection and localization of objects and obstacles; convergence of the CRS with an object; and performing operations on objects. The structure of the complex 3D vision system is presented, the design of its main structural elements is described, and the main functionality of the developed test software is outlined. Technical characteristics are given for the subsystems: the near-vision module of the 3D vision system, the additional-lighting module, and the far-vision module. The developed software for real-time processing of stereo-camera (stereo-module) video data is described. The paper presents test results for software that implements a set of algorithms for 3D reconstruction of an underwater robot's workspace from information received from the stereo vision module. Testing was carried out on an operating model of the underwater stereo vision module connected to the developed underwater robot, and the characteristics of the complex 3D vision system were studied in an aquarium. Experiments with the underwater robot demonstrated the effectiveness of the developed system.
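At the core of any stereo-module 3D reconstruction of the kind described above is the disparity-to-depth relation for a rectified stereo pair, Z = f·B/d. A minimal sketch follows; the focal length, baseline, and disparity values are illustrative assumptions, not the system's calibration.

```python
def depth_from_disparity(f_px, baseline_m, disparity_px):
    """Depth (m) of a point seen by a rectified stereo pair: Z = f * B / d.

    f_px         -- focal length in pixels
    baseline_m   -- distance between the two camera centers, in meters
    disparity_px -- horizontal pixel shift of the point between images
    """
    if disparity_px <= 0:
        raise ValueError("disparity must be positive for a finite depth")
    return f_px * baseline_m / disparity_px

# Assumed example values: 800 px focal length, 10 cm baseline, 40 px disparity.
Z = depth_from_disparity(f_px=800.0, baseline_m=0.10, disparity_px=40.0)
```

The inverse relation between disparity and depth is why such systems pair a near-vision stereo module (large disparities, fine depth resolution) with a separate far-vision module.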

