Image feature command generation of contour following tasks for SCARA robots employing Image-Based Visual Servoing—A PH-spline approach

2017 ◽  
Vol 44 ◽  
pp. 57-66 ◽  
Author(s):  
Wei-Che Chang ◽  
Ming-Yang Cheng ◽  
Hong-Jin Tsai


2018 ◽  
Vol 15 (1) ◽  
pp. 172988141775385 ◽  
Author(s):  
Che-Liang Li ◽  
Ming-Yang Cheng ◽  
Wei-Che Chang

Image-based visual servoing (IBVS) has gained increasing popularity and has been adopted in applications such as industrial robots, quadrotors, and unmanned aerial vehicles. In IBVS, the image feature velocity command produced by the visual loop controller is converted into a workspace velocity command through the interaction matrix so that the image feature error converges. However, issues such as the noise/disturbance arising from image processing and the smoothness of the image feature command are often overlooked in the design of the visual loop controller, especially in contour following tasks. In particular, noise in the image features contaminates the image feedback signal, which can substantially degrade visual loop performance. To cope with this problem, this article employs a sliding mode controller to suppress the adverse effects caused by image feature noise. Moreover, by exploiting ideas from motion planning, a parametric curve interpolator is developed to generate smooth image feature commands. In addition, a depth observer is designed to provide the depth information essential to the implementation of the interaction matrix. To assess the feasibility of the proposed approach, a two-degrees-of-freedom planar robot with an IBVS structure and an eye-to-hand camera configuration is used to conduct a contour following task. Contour following results verify the effectiveness of the proposed approach.
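The interaction-matrix conversion at the heart of this abstract can be sketched as follows: the interaction matrix maps camera velocity to image feature velocity, and its pseudo-inverse turns a feature error into a velocity command. The sketch below uses the standard point-feature interaction matrix; the gains `lam` and `k_sm` and the boundary-layer width `phi` are illustrative placeholders, and the saturated sliding-mode term is a generic formulation rather than the paper's actual controller.

```python
import numpy as np

def interaction_matrix(u, v, Z, f=1.0):
    """Interaction (image Jacobian) matrix for one point feature (u, v)
    at depth Z, for a camera with focal length f in normalized units."""
    return np.array([
        [-f / Z, 0.0, u / Z, u * v / f, -(f**2 + u**2) / f, v],
        [0.0, -f / Z, v / Z, (f**2 + v**2) / f, -u * v / f, -u],
    ])

def ibvs_velocity(features, desired, depths, lam=0.5, k_sm=0.1, phi=0.05):
    """Camera velocity command from stacked point-feature errors:
    a proportional term plus a saturated (boundary-layer) sliding-mode
    term intended to suppress image-noise effects."""
    e = features - desired
    L = np.vstack([interaction_matrix(u, v, Z)
                   for (u, v), Z in zip(features.reshape(-1, 2), depths)])
    s = np.clip(e / phi, -1.0, 1.0)      # smoothed sign(e)
    return -np.linalg.pinv(L) @ (lam * e + k_sm * s)
```

The depths passed in are exactly the quantity the paper's depth observer is designed to estimate.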


1995 ◽  
Vol 12 (1) ◽  
pp. 1-21 ◽  
Author(s):  
Ève Coste-Manière ◽  
Philippe Couvignou ◽  
Pradeep K. Khosla

Author(s):  
PATRICE WIRA ◽  
JEAN-PHILIPPE URBAN

Prediction in real-time image sequences is a key feature for visual servoing applications. It is used to compensate for the time-delay introduced by the image feature extraction process in the visual feedback loop. In order to track targets in three-dimensional space in real time with a robot arm, the target's movement and the robot end-effector's next position are predicted from the previous movements. A modular prediction architecture is presented, based on the Kalman filtering principle. The Kalman filter is an optimal stochastic estimation technique that needs an accurate system model and is particularly sensitive to noise. The performance of this filter diminishes with nonlinear systems and time-varying environments. Therefore, we propose an adaptive Kalman filter using the modular framework of a mixture of experts regulated by a gating network. The proposed filter has an adaptive state model that represents the system around its current state as closely as possible. Different realizations of these state-model-adaptive Kalman filters are organized according to the divide-and-conquer principle: they all participate in the global estimation, and a neural network mediates their different outputs in an unsupervised manner and tunes their parameters. The performance of the proposed approach is evaluated in terms of precision, the capability to estimate and compensate for abrupt changes in target trajectories, and the ability to adapt to time-varying parameters. The experiments prove that, without the use of models (e.g. the camera model, kinematic robot model, and system parameters) and without any prior knowledge about the targets' movements, the predictions make it possible to compensate for the time-delay and to reduce the tracking error.
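A minimal version of the Kalman prediction this architecture builds on can be sketched for a single image coordinate with a constant-velocity state model. The noise covariances `q` and `r` are illustrative placeholders, and the sketch omits the mixture-of-experts gating described above.

```python
import numpy as np

class KalmanPredictor:
    """Constant-velocity Kalman filter for one image coordinate, used to
    predict the target ahead of the image-processing delay."""
    def __init__(self, dt, q=1e-3, r=1e-2):
        self.F = np.array([[1.0, dt], [0.0, 1.0]])  # state transition
        self.H = np.array([[1.0, 0.0]])             # only position measured
        self.Q = q * np.eye(2)                      # process noise cov.
        self.R = np.array([[r]])                    # measurement noise cov.
        self.x = np.zeros(2)                        # [position, velocity]
        self.P = np.eye(2)

    def step(self, z):
        # predict one frame ahead
        self.x = self.F @ self.x
        self.P = self.F @ self.P @ self.F.T + self.Q
        # correct with the (delayed) measurement z
        S = self.H @ self.P @ self.H.T + self.R
        K = self.P @ self.H.T @ np.linalg.inv(S)
        self.x = self.x + K @ (z - self.H @ self.x)
        self.P = (np.eye(2) - K @ self.H) @ self.P
        return self.x
```

In the paper's scheme, several such filters with different state models run in parallel and a gating network blends their estimates.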


2014 ◽  
Vol 625 ◽  
pp. 627-632
Author(s):  
Chi Ying Lin ◽  
Yu Sheng Zeng

Over the past few decades, vision-based alignment has been accepted as an important technique for achieving higher economic benefits in precision manufacturing and measurement applications. Also referred to as visual servoing, this technique applies vision feedback information and drives the moving parts to the desired target location using appropriate control laws. Although the recent rapid development of advanced image processing algorithms and hardware has made this alignment process an easier task, some fundamental issues, including inevitable system constraints and singularities, still remain a challenging research topic for further investigation. This paper aims to develop a visual servoing method for an automatic alignment system using model predictive control (MPC). The reason for using this optimal control approach for visual servoing design is its capability of handling constraints, such as motor and image constraints, in precision alignment systems. In particular, a microassembly system for a peg-and-hole alignment application is adopted to illustrate the design process. The goal is to perform visual tracking of two image feature points based on an XYθ motor-stage system. From the viewpoint of MPC, this is an optimization problem that minimizes feature errors under given constraints. Therefore, a dynamic model consisting of camera parameters and motion stage dynamics is first derived to build the prediction model and set up the cost function. At each sampling step, the control command is obtained by solving a quadratic programming optimization problem. Finally, simulation results with comparison to a conventional image-based visual servoing method demonstrate the effectiveness and potential use of this method.
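In its simplest one-step form, the MPC formulation above reduces to a small box-constrained quadratic program. The sketch below solves such a QP by projected gradient descent; the feature Jacobian `J`, regularization weight `lam`, and velocity bound `u_max` are illustrative assumptions, not the paper's actual prediction model or solver.

```python
import numpy as np

def mpc_step(J, e, u_max, lam=1e-2, iters=200):
    """One MPC step as a box-constrained QP:
        minimize ||J u + e||^2 + lam ||u||^2   s.t.  |u_i| <= u_max,
    solved by projected gradient descent (J: feature Jacobian,
    e: current feature error, u: stage velocity command)."""
    H = J.T @ J + lam * np.eye(J.shape[1])  # quadratic term
    g = J.T @ e                             # linear term
    step = 1.0 / np.linalg.norm(H, 2)       # step below 1/L ensures descent
    u = np.zeros(J.shape[1])
    for _ in range(iters):
        u = u - step * (H @ u + g)          # gradient of the quadratic
        u = np.clip(u, -u_max, u_max)       # project onto the velocity box
    return u
```

A production controller would hand the same QP, over a multi-step horizon with image (field-of-view) constraints added, to a dedicated QP solver.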


Electronics ◽  
2019 ◽  
Vol 8 (8) ◽  
pp. 903 ◽  
Author(s):  
Ahmad Ghasemi ◽  
Pengcheng Li ◽  
Wen-Fang Xie ◽  
Wei Tian

In this paper, an enhanced switched image-based visual servoing controller for a six-degree-of-freedom (DOF) robot with a monocular eye-in-hand camera configuration is presented. The switch control algorithm separates the rotational and translational camera motions and divides the image-based visual servoing (IBVS) control into three distinct stages with different gains. In addition, an image feature reconstruction algorithm based on the Kalman filter is proposed to handle the situation where the image features go outside the camera’s field of view (FOV). The combination of the switch controller and the feature reconstruction algorithm improves the system response speed and tracking performance of IBVS, while ensuring the success of servoing in the case of feature loss. Extensive simulation and experimental tests are carried out on a 6-DOF robot to verify the effectiveness of the proposed method.
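The staged switching and the feature-reconstruction fallback can be sketched as follows. The thresholds, gains, and the simple constant-velocity extrapolation are illustrative placeholders; the paper's Kalman-filter reconstruction also maintains an error covariance.

```python
import numpy as np

def switched_gains(rot_err, trans_err, rot_tol=0.05, trans_tol=5.0):
    """Pick the servoing stage: rotate first, then translate, then a
    combined fine stage, each with its own (illustrative) gain pair."""
    if np.linalg.norm(rot_err) > rot_tol:
        return "rotate", 0.8, 0.0        # (stage, rot. gain, trans. gain)
    if np.linalg.norm(trans_err) > trans_tol:
        return "translate", 0.0, 0.5
    return "fine", 0.3, 0.3

def reconstruct_feature(last_feat, feat_vel, dt, fov):
    """When a feature leaves the camera's field of view, predict its image
    position from the last estimate and the feature velocity (the predict
    step of a Kalman filter, shown without covariance bookkeeping).
    Returns the prediction and whether it lies inside the FOV."""
    pred = last_feat + dt * feat_vel
    return pred, bool(np.all(np.abs(pred) <= fov))
```

The reconstructed feature stands in for the lost measurement until the feature re-enters the FOV, which is what lets the servoing continue through feature loss.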


2021 ◽  
Vol 104 (1) ◽  
Author(s):  
Jing Xin ◽  
Caixia Dong ◽  
Youmin Zhang ◽  
Yumeng Yao ◽  
Ailing Gong

Aiming to satisfy the increasing demand for family service robots for housework, this paper proposes a robot visual servoing scheme based on randomized trees to complete visual servoing tasks on unknown objects in natural scenes. Here, “unknown” means that there is no prior information on object models, such as a template or database of the object. Firstly, the object to be manipulated is selected by the user prior to the visual servoing task execution. Then, raw image information about the object is obtained and used to train a randomized tree classifier online. Secondly, the current image features are computed using the well-trained classifier. Finally, the visual controller is designed according to the image feature error, defined as the difference between the desired and current image features. Five visual positioning experiments on unknown objects, including a 2D rigid object and a 3D non-rigid object, are conducted on a MOTOMAN-SV3X six-degree-of-freedom (DOF) manipulator. Experimental results show that the proposed scheme can effectively position an unknown object in complex natural scenes, under conditions such as occlusion and illumination changes. Furthermore, the developed robot visual servoing scheme achieves excellent positioning accuracy, with positioning error within 0.05 mm.
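The online-trained randomized-tree classifier can be sketched in the spirit of randomized ferns, which hash an image patch by random pixel-pair intensity comparisons and accumulate per-class histograms. This toy implementation, with its patch size and fern parameters, is an illustrative stand-in for the paper's classifier, not its actual code.

```python
import numpy as np

rng = np.random.default_rng(0)

class RandomizedFerns:
    """Tiny patch classifier in the spirit of randomized trees/ferns:
    each fern hashes a flattened patch by random pixel-pair comparisons
    and votes with class histograms that can be updated online."""
    def __init__(self, n_classes, n_ferns=10, depth=8, patch=8):
        self.pairs = rng.integers(0, patch * patch,
                                  size=(n_ferns, depth, 2))
        # Laplace-smoothed class counts per fern per hash code
        self.counts = np.ones((n_ferns, 2 ** depth, n_classes))

    def _codes(self, flat_patch):
        bits = flat_patch[self.pairs[..., 0]] > flat_patch[self.pairs[..., 1]]
        return bits.dot(1 << np.arange(bits.shape[1]))  # one code per fern

    def train(self, flat_patch, label):
        # online update: bump this label's count in each fern's histogram
        self.counts[np.arange(len(self.pairs)),
                    self._codes(flat_patch), label] += 1.0

    def predict(self, flat_patch):
        probs = self.counts[np.arange(len(self.pairs)),
                            self._codes(flat_patch)]
        probs = probs / probs.sum(axis=1, keepdims=True)
        return np.log(probs).sum(axis=0).argmax()  # naive-Bayes vote
```

Because training is a histogram increment, the classifier can be built online from the user-selected object right before servoing starts, which matches the "no prior object model" setting described above.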

