Direct Interpretation of Dynamic Images and Camera Motion for Visual Servoing Without Image Feature Correspondence

1997 ◽  
Vol 9 (2) ◽  
pp. 104-110 ◽  
Author(s):  
Koichiro Deguchi ◽  

A general scheme to represent the relation between dynamic images and camera motion is presented, together with its application to visual servoing. For a specific object, every possible combination of the camera pose and the obtained image is constrained to a lower-dimensional hypersurface in the product space of all image data and camera positions. Visual servoing, for example, is interpreted as finding a path on this surface leading to a given image. Our approach is to analyze the properties of this surface and to use its differential, or tangential, properties for visual servoing. The coefficient matrix of the tangent plane of this surface is related to the so-called interaction matrix. For this approach, reducing the dimension of the image information becomes a key problem. We propose to use principal component analysis and to represent images as a composition of a small number of "eigenimages" obtained from the Karhunen-Loève (K-L) expansion. We confirm the feasibility of our basic idea for visual servoing with experiments using a real robot arm.
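As an illustrative sketch (not the authors' implementation), the K-L expansion used above can be computed with an SVD of the centered image stack: the leading right-singular vectors are the "eigenimages", and an image is represented by its few projection coefficients. The function names and the choice of numpy are assumptions for illustration.

```python
import numpy as np

def eigenimages(images, k):
    """Top-k Karhunen-Loeve basis ("eigenimages") of an image stack.

    images: array of shape (n_images, height, width).
    Returns (mean_image, basis) with basis of shape (k, height, width).
    """
    n, h, w = images.shape
    X = images.reshape(n, h * w).astype(float)
    mean = X.mean(axis=0)
    # Rows of Vt are the principal directions of the centered data.
    _, _, Vt = np.linalg.svd(X - mean, full_matrices=False)
    return mean.reshape(h, w), Vt[:k].reshape(k, h, w)

def project(image, mean, basis):
    """Low-dimensional representation: k K-L coefficients of one image."""
    k = basis.shape[0]
    return basis.reshape(k, -1) @ (image.ravel() - mean.ravel())
```

Visual servoing then operates on these k coefficients instead of raw pixels, which is what makes the tangent-plane (interaction matrix) computation tractable.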

Author(s):  
PATRICE WIRA ◽  
JEAN-PHILIPPE URBAN

Prediction in real-time image sequences is a key feature of visual servoing applications. It is used to compensate for the time delay introduced by the image feature extraction process in the visual feedback loop. In order to track targets in three-dimensional space in real time with a robot arm, the target's movement and the robot end-effector's next position are predicted from the previous movements. A modular prediction architecture based on the Kalman filtering principle is presented. The Kalman filter is an optimal stochastic estimation technique that requires an accurate system model and is particularly sensitive to noise; its performance degrades with nonlinear systems and time-varying environments. We therefore propose an adaptive Kalman filter using the modular framework of a mixture of experts regulated by a gating network. The proposed filter has an adaptive state model that represents the system around its current state as closely as possible. Different realizations of these state-model-adaptive Kalman filters are organized according to the divide-and-conquer principle: they all participate in the global estimation, and a neural network mediates their outputs in an unsupervised manner and tunes their parameters. The performance of the proposed approach is evaluated in terms of precision, the capability to estimate and compensate for abrupt changes in target trajectories, and the ability to adapt to time-varying parameters. The experiments show that, without the use of models (e.g. the camera model, the robot's kinematic model, and system parameters) and without any prior knowledge of the targets' movements, the predictions compensate for the time delay and reduce the tracking error.
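A minimal sketch of the non-adaptive baseline the paper builds on: a constant-velocity Kalman filter that predicts a 2-D target position one frame ahead to cover the feature-extraction delay. The state model, noise levels, and class name here are illustrative assumptions, not the paper's adaptive mixture-of-experts scheme.

```python
import numpy as np

class CVKalman:
    """Constant-velocity Kalman filter predicting a 2-D target
    position one frame ahead (baseline, single expert)."""
    def __init__(self, dt=1.0, q=1e-3, r=1e-2):
        self.F = np.array([[1, 0, dt, 0],
                           [0, 1, 0, dt],
                           [0, 0, 1, 0],
                           [0, 0, 0, 1]], float)  # state transition
        self.H = np.array([[1, 0, 0, 0],
                           [0, 1, 0, 0]], float)  # position is measured
        self.Q = q * np.eye(4)                    # process noise
        self.R = r * np.eye(2)                    # measurement noise
        self.x = np.zeros(4)                      # [u, v, du, dv]
        self.P = np.eye(4)

    def step(self, z):
        # Predict.
        self.x = self.F @ self.x
        self.P = self.F @ self.P @ self.F.T + self.Q
        # Update with measurement z = (u, v).
        S = self.H @ self.P @ self.H.T + self.R
        K = self.P @ self.H.T @ np.linalg.inv(S)
        self.x = self.x + K @ (np.asarray(z, float) - self.H @ self.x)
        self.P = (np.eye(4) - K @ self.H) @ self.P
        # One-step-ahead position prediction.
        return (self.F @ self.x)[:2]
```

The paper's contribution is to replace the single fixed state model above with several adaptively tuned models whose outputs a gating network combines.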


2018 ◽  
Vol 15 (1) ◽  
pp. 172988141775385 ◽  
Author(s):  
Che-Liang Li ◽  
Ming-Yang Cheng ◽  
Wei-Che Chang

Image-based visual servoing (IBVS) has increasingly gained popularity and has been adopted in applications such as industrial robots, quadrotors, and unmanned aerial vehicles. When exploiting IBVS, the image feature velocity command obtained from the visual loop controller is converted into a velocity command in the workspace through the interaction matrix, so as to drive the image feature error to zero. However, issues such as the noise and disturbances arising from image processing and the smoothness of the image feature command are often overlooked in the design of the visual loop controller, especially in a contour following task. In particular, noise in the image features contaminates the image feedback signal, so the visual loop performance can be substantially affected. To cope with this problem, this article employs a sliding mode controller to suppress the adverse effects caused by image feature noise. Moreover, by exploiting the idea of motion planning, a parametric curve interpolator is developed to generate smooth image feature commands. In addition, a depth observer is designed to provide the depth information essential to the implementation of the interaction matrix. To assess the feasibility of the proposed approach, a two-degrees-of-freedom planar robot that employs an IBVS structure and an eye-to-hand camera configuration is used to conduct a contour following task. The contour following results verify the effectiveness of the proposed approach.
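The interaction matrix conversion described above can be sketched with the classical point-feature form: for a normalized image point (x, y) at depth Z, a 2x6 matrix maps the camera's spatial velocity to the feature velocity, and the standard control law v = -λ L⁺ (s - s*) inverts it. This is the textbook IBVS law, not this article's sliding mode controller; the function names are illustrative.

```python
import numpy as np

def interaction_matrix(x, y, Z):
    """Interaction matrix of a normalized image point (x, y) at depth Z,
    relating camera velocity (vx, vy, vz, wx, wy, wz) to (xdot, ydot)."""
    return np.array([
        [-1.0 / Z, 0.0, x / Z, x * y, -(1.0 + x * x), y],
        [0.0, -1.0 / Z, y / Z, 1.0 + y * y, -x * y, -x],
    ])

def ibvs_velocity(features, desired, depths, lam=0.5):
    """Classical IBVS law v = -lambda * L^+ * (s - s*), stacking one
    interaction matrix per point feature."""
    L = np.vstack([interaction_matrix(x, y, Z)
                   for (x, y), Z in zip(features, depths)])
    e = (np.asarray(features, float) - np.asarray(desired, float)).ravel()
    return -lam * np.linalg.pinv(L) @ e
```

The depth observer in the article supplies the Z values that this computation needs, since depth is not directly observable from a single image.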


2012 ◽  
Vol 162 ◽  
pp. 487-496 ◽  
Author(s):  
Aurelien Yeremou Tamtsia ◽  
Youcef Mezouar ◽  
Philippe Martinet ◽  
Haman Djalo ◽  
Emmanuel Tonye

Among region-based descriptors, geometric moments have been widely exploited to design visual servoing schemes. However, they present several disadvantages, such as high sensitivity to measurement noise, high dynamic range, and information redundancy (since they are not computed on an orthogonal basis). In this paper, we propose to use a class of orthogonal moments (namely, Legendre moments) instead of geometric moments to improve the behavior of moment-based control schemes. The descriptive form of the interaction matrix related to the Legendre moments computed from a set of points is first derived. Six visual features are then selected to design a partially decoupled control scheme. Finally, simulated and experimental results are presented to illustrate the validity of our proposal.
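As a rough sketch of the descriptor itself (not the paper's interaction matrix derivation), the Legendre moment λ_pq of an image f on [-1, 1]² is ((2p+1)(2q+1)/4) ∫∫ P_p(x) P_q(y) f(x, y) dx dy, approximated below by a Riemann sum; the orthogonality of the Legendre polynomials is what removes the redundancy of geometric moments. The function name and sampling scheme are assumptions.

```python
import numpy as np
from numpy.polynomial.legendre import Legendre

def legendre_moments(img, order):
    """Legendre moments lambda_pq for p, q = 0..order of an image
    sampled on [-1, 1] x [-1, 1] (img must be at least 2x2)."""
    h, w = img.shape
    x = np.linspace(-1, 1, w)
    y = np.linspace(-1, 1, h)
    dx, dy = x[1] - x[0], y[1] - y[0]
    Px = np.array([Legendre.basis(p)(x) for p in range(order + 1)])
    Py = np.array([Legendre.basis(q)(y) for q in range(order + 1)])
    # lam[p, q] = norm_pq * sum_x sum_y Pp(x) Pq(y) f(y, x) dx dy
    norm = np.outer(2 * np.arange(order + 1) + 1,
                    2 * np.arange(order + 1) + 1) / 4.0
    return norm * (Px @ img.T @ Py.T) * dx * dy
```

For a uniform image only λ_00 is (approximately) nonzero, which illustrates the decorrelation that motivates using these moments as visual features.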


Author(s):  
J. Li-Chee-Ming ◽  
C. Armenakis

This paper presents a novel application of the Visual Servoing Platform (ViSP) to pose estimation in indoor and GPS-denied outdoor environments. Our proposed solution integrates the trajectory solution from RGB-D SLAM into ViSP's pose estimation process. Li-Chee-Ming and Armenakis (2015) explored the application of ViSP to mapping large outdoor environments and tracking larger objects (i.e., building models). Their experiments revealed that tracking was often lost due to a lack of model features in the camera's field of view and because of rapid camera motion; further, the pose estimate was often biased by incorrect feature matches. This work proposes a solution that improves ViSP's pose estimation performance by integrating ViSP with RGB-D SLAM, aiming specifically to reduce the frequency of tracking losses and the biases present in the pose estimate. We discuss the performance of the combined tracker in mapping indoor environments and tracking 3D wireframe indoor building models, and present preliminary results from our experiments.


2015 ◽  
Vol 2015 ◽  
pp. 1-14 ◽  
Author(s):  
Ting Yun ◽  
Weizheng Li ◽  
Yuan Sun ◽  
Lianfeng Xue

In order to retrieve the gap fraction, leaf inclination angle, and leaf area index (LAI) of a subtropical forest canopy, we acquired detailed forest information by means of hemispherical photography, terrestrial laser scanning, and the LAI-2200 plant canopy analyzer. We present a series of image processing and computer graphics algorithms, including image and point cloud data (PCD) segmentation methods for branch and leaf classification, PCD feature extraction (normal vectors and tangent planes), and a hemispherical projection method for PCD coordinate transformation. In addition, several mathematical models were proposed to deduce canopy indexes based on the radiation transfer model of the Beer-Lambert law. Comparison of the experimental results on many sample plots shows that the terrestrial laser scanner (TLS) based index estimation method obtains results similar to those from digital hemispherical photographs (HP) and the LAI-2200 plant canopy analyzer of the same stands, which were used for validation. This indicates that the TLS-based algorithm is able to capture the variability in LAI of forest stands across a range of densities, and that TLS shows strong potential as a calibration tool for other devices.
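Two of the building blocks mentioned above can be sketched concisely (as assumptions about the standard techniques, not the authors' exact code): estimating a point's normal vector as the smallest-eigenvalue direction of the local covariance (PCA), and inverting the Beer-Lambert gap-fraction model P(θ) = exp(-G · LAI / cos θ) to recover LAI. The extinction coefficient G = 0.5 is the common spherical-leaf-angle default, used here purely for illustration.

```python
import numpy as np

def plane_normal(neighborhood):
    """Unit normal of the local tangent plane of a point-cloud patch,
    via PCA: the covariance eigenvector with the smallest eigenvalue."""
    pts = np.asarray(neighborhood, float)
    cov = np.cov(pts.T)
    w, v = np.linalg.eigh(cov)   # eigenvalues in ascending order
    return v[:, 0]

def lai_from_gap_fraction(gap, theta, G=0.5):
    """Invert the Beer-Lambert law P(theta) = exp(-G * LAI / cos theta)
    to estimate leaf area index from a measured gap fraction."""
    return -np.cos(theta) * np.log(gap) / G
```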


Sensors ◽  
2019 ◽  
Vol 19 (23) ◽  
pp. 5310
Author(s):  
Lai Kang ◽  
Yingmei Wei ◽  
Jie Jiang ◽  
Yuxiang Xie

Cylindrical panorama stitching is able to generate high resolution images of a scene with a wide field-of-view (FOV), making it a useful scene representation for applications like environmental sensing and robot localization. Traditional image stitching methods based on hand-crafted features are effective for constructing a cylindrical panorama from a sequence of images in the case when there are sufficient reliable features in the scene. However, these methods are unable to handle low-texture environments where no reliable feature correspondence can be established. This paper proposes a novel two-step image alignment method based on deep learning and iterative optimization to address the above issue. In particular, a light-weight end-to-end trainable convolutional neural network (CNN) architecture called ShiftNet is proposed to estimate the initial shifts between images, which is further optimized in a sub-pixel refinement procedure based on a specified camera motion model. Extensive experiments on a synthetic dataset, rendered photo-realistic images, and real images were carried out to evaluate the performance of our proposed method. Both qualitative and quantitative experimental results demonstrate that cylindrical panorama stitching based on our proposed image alignment method leads to significant improvements over traditional feature based methods and recent deep learning based methods for challenging low-texture environments.
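Before any alignment (learned or hand-crafted) is possible, each frame must be warped onto the cylinder; after that warp, pure camera rotation reduces to a 2-D shift, which is what a shift-estimation network can then recover. A sketch of the standard backward-warp maps for that projection is below; the function name and pixel-centered convention are assumptions, and this is generic geometry rather than the paper's ShiftNet.

```python
import numpy as np

def cylindrical_maps(h, w, f):
    """Backward-warp maps for projecting an (h, w) pinhole image with
    focal length f (pixels) onto a cylinder: for each output pixel,
    the source coordinates are x = f*tan(theta), y = h_cyl / cos(theta)."""
    yc, xc = (h - 1) / 2.0, (w - 1) / 2.0
    xs = np.arange(w) - xc          # cylinder angle per output column
    ys = np.arange(h) - yc          # cylinder height per output row
    theta = xs / f
    map_x = f * np.tan(theta) + xc              # source x per column
    map_y = ys[:, None] / np.cos(theta)[None, :] + yc  # source y per pixel
    return map_x, map_y
```

These maps would typically be fed to a remapping routine (e.g. bilinear sampling) to produce the warped frames that are then aligned by estimated shifts.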


Algorithms ◽  
2020 ◽  
Vol 13 (10) ◽  
pp. 263
Author(s):  
Xin Chen ◽  
Hong Zhao ◽  
Ping Zhou

In anatomy, the lung is divided by the lung fissures into several pulmonary lobes with specific functions. Identifying the lung lobes and the distribution of various diseases among them in CT images is important for disease diagnosis and for tracking after recovery. To address the low segmentation accuracy of tubular structures and the long running times of lobe segmentation algorithms based on lung anatomical structure information, we propose a segmentation algorithm based on lung fissure surface classification using a point cloud region growing approach. We cluster the pulmonary fissures, transformed into point cloud data, according to differences in the fissure surface normal vectors and curvatures estimated by principal component analysis. A multistage spline surface fitting method is then used to fill and extend the lung fissure surfaces to realize the lobe segmentation. The proposed approach was qualitatively and quantitatively evaluated on a public dataset from Lobe and Lung Analysis 2011 (LOLA11), and obtained an overall score of 0.84. Although this is slightly lower than the overall scores of the deep learning based methods (LobeNet_V2 and V-net), the inter-lobe boundaries from our approach were more accurate for CT images with visible lung fissures.
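A rough sketch of generic point-cloud region growing driven by normal agreement and curvature, the kind of clustering step the abstract describes (function name, thresholds, and the brute-force neighbor search are illustrative assumptions, not the authors' implementation):

```python
import numpy as np
from collections import deque

def region_grow(points, normals, curvature, radius=1.5,
                angle_thresh=np.deg2rad(10.0), curv_thresh=0.05):
    """Greedy region growing over a point cloud: a point joins the
    current region if it is within `radius` of a region point and its
    normal deviates less than `angle_thresh`; low-curvature points
    seed regions and continue the growth."""
    points = np.asarray(points, float)
    n = len(points)
    labels = -np.ones(n, int)
    current = 0
    for seed in np.argsort(curvature):      # seed from flattest points
        if labels[seed] != -1:
            continue
        labels[seed] = current
        queue = deque([seed])
        while queue:
            i = queue.popleft()
            d = np.linalg.norm(points - points[i], axis=1)
            for j in np.nonzero((d < radius) & (labels == -1))[0]:
                cosang = abs(np.clip(normals[i] @ normals[j], -1.0, 1.0))
                if np.arccos(cosang) < angle_thresh:
                    labels[j] = current
                    if curvature[j] < curv_thresh:
                        queue.append(j)     # smooth points keep growing
        current += 1
    return labels
```

In practice the O(n²) distance computation would be replaced by a k-d tree query, and the resulting fissure-surface clusters feed the spline fitting stage.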

